
Mind Reading

This is a historical page from the old MaxMax.com website. Please use the current site at www.MaxMax.com.


Parts of this article first appeared in Personal Computer World magazine, November 2000, written by Toby Howard.

 

Have you ever longed for a computer that could read your mind and, for once, do exactly what you wanted? With new research into a direct brain-computer interface, that wish might soon be coming true.

Research into a hands-free brain-computer interface (BCI) has traditionally followed two approaches: biofeedback and stimulus-and-response. With biofeedback, a subject is connected to an electroencephalograph (EEG), and particular groups of brain signals are monitored. One widely-used signal is the "mu" rhythm, an 8-12 Hz brain rhythm centred on the sensorimotor cortex. The varying amplitude of the signal is used to control a cursor on a computer display. After a period of training, 4 out of 5 subjects can learn, to some extent, how to make the cursor move -- even if they're not consciously aware of exactly how they're doing it. The problem with biofeedback is that the training period can stretch to months, and the results vary widely between subjects and between the tasks they try to perform.
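
As a rough illustration of the biofeedback idea (a sketch, not a description of any particular system mentioned here), the Python fragment below band-pass filters a window of EEG samples to the 8-12 Hz mu band and maps the deviation of its power from a resting baseline to a cursor velocity. The sampling rate, filter order, baseline and gain are all arbitrary assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed EEG sampling rate in Hz

def mu_band_power(eeg_window, fs=FS, low=8.0, high=12.0):
    # Band-pass the window to the 8-12 Hz mu band and return its mean power.
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    mu = filtfilt(b, a, eeg_window)
    return float(np.mean(mu ** 2))

def cursor_velocity(power, baseline, gain=1.0):
    # Map the deviation from a resting baseline power to a signed cursor speed.
    return gain * (power - baseline)

# Synthetic one-second example: a 10 Hz component a few microvolts in size, plus noise.
t = np.arange(FS) / FS
window = 5e-6 * np.sin(2 * np.pi * 10 * t) + 1e-6 * np.random.randn(FS)
print(cursor_velocity(mu_band_power(window), baseline=1e-12))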

# PLANCK4  DOS ver 1.1     Planck's radiation intensity formula          3/5/98

 # Planck's blackbody radiation formula

 planck:= (lambda,T) -> 2*Pi*c^2*h/lambda^5*(exp(h*c/(lambda*k*T))-1)^(-1);

 #constants

 c:=3*10^8:          # speed of light in m/s
 h:=6.626*10^(-34):  # Planck's constant in J*s
 k:=1.381*10^(-23):  # Boltzmann constant in J/K

 # Plot the intensity curves for 6000, 4000, and 2000 K

 plot({planck(lambda,6000), planck(lambda,4000), planck(lambda,2000)},
      lambda=0..6*10^(-6), title=`Radiation intensity`);
 # [Figure maple101.gif: radiation intensity curves for 6000, 4000, and 2000 K]
 # Note the figure was obtained using Release 4. The wavelength lambda is in m and
 # the intensity is in W/m^2 per unit wavelength.

# Derive Wien's displacement law

 c:='c':h:='h':k:='k':
 eq:=diff(planck(lambda,T),lambda):

 # Set the derivative of the intensity zero to find lambda maximum

 lambdamax:=solve(eq=0, lambda):
 lambdamax:=subs(c=3*10^8,k=1.381*10^(-23),h=6.626*10^(-34),lambdamax):
 lambdamax:=evalf(lambdamax);
                          lambdamax := .002899010331/T

 # Notice that the result shows lambdamax inversely proportional to T.

 # Derive the Stefan-Boltzmann equation

 planck:=2*Pi*c^2*h/lambda^5*(exp(h*c/(lambda*k*T))-1)^(-1):

 # Rewrite the formula in terms of x=h*c/(lambda*k*T)

 with(student):
 R:=changevar(h*c/(lambda*k*T)=x, Int(planck, lambda=0..infinity),x);

 # Note the result shows the integrand is proportional to T^4. Evaluating the
 # integral (the change of variable reverses the limits, so x runs from infinity
 # to zero) gives the total intensity, which is still proportional to T^4:
 # the Stefan-Boltzmann law.

 # The integration above becomes

 eq:=x^3/(exp(x)-1):
 defint:=evalf(Int(eq,x=0..infinity));
 # Thus the constant sigma in R=sigma*T^4 becomes

 c:=3*10^8:          # speed of light in m/s
 h:=6.626*10^(-34):  # Planck's constant in J*s
 k:=1.381*10^(-23):  # Boltzmann constant in J/K

 sigma:=evalf(defint*2*Pi*k^4/(h^3*c^2));
                          sigma := .5668472060*10^(-7)

# Note that the constant sigma is measured in W/m^2/K^4.
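
As a quick cross-check of the two Maple results above (this short Python fragment is an addition, not part of the original worksheet), Wien's displacement constant can be computed directly as b = h*c/(x*k), where x is approximately 4.9651, the root of x = 5*(1 - exp(-x)), and the Stefan-Boltzmann constant as sigma = 2*pi^5*k^4/(15*h^3*c^2):

import math

c = 3e8          # speed of light, m/s
h = 6.626e-34    # Planck's constant, J*s
k = 1.381e-23    # Boltzmann constant, J/K

x = 4.965114                                         # root of x = 5*(1 - exp(-x))
b = h * c / (x * k)                                  # Wien's displacement constant, m*K
sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)   # Stefan-Boltzmann constant

print(b)       # ~0.002899 m*K, matching lambdamax = .002899010331/T above
print(sigma)   # ~5.668e-8 W/m^2/K^4, matching the Maple value above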

 

 

The stimulus-and-response technique differs from biofeedback in that when a subject is given a certain stimulus, the brain will automatically produce a measurable response -- so there's no need to train the subject to manipulate specific brain waves. One signal, the "P300 evoked potential", is ideal for this approach. First discovered in 1965, the P300 signal is the signature of the rush of neural activity that occurs about 300 milliseconds after a subject notices an external stimulus which they've been asked to watch out for.
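
A minimal sketch of the standard way an evoked potential like the P300 is made visible (the sampling rate and window lengths below are assumptions, not details from any study mentioned here): cut the EEG into epochs time-locked to the stimulus onsets and average them, so the stimulus-locked response around 300 ms stands out from the background noise.

import numpy as np

FS = 250  # assumed sampling rate in Hz

def average_epochs(eeg, onsets, fs=FS, window_s=0.6):
    # Cut a fixed-length window after each stimulus onset (onsets in samples) and average.
    n = int(window_s * fs)
    epochs = [eeg[o:o + n] for o in onsets if o + n <= len(eeg)]
    return np.mean(epochs, axis=0)

def p300_amplitude(avg_epoch, fs=FS, t_lo=0.25, t_hi=0.40):
    # Peak of the averaged response in the 250-400 ms post-stimulus window.
    return float(np.max(avg_epoch[int(t_lo * fs):int(t_hi * fs)]))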

At the University of Rochester in New York, researcher Jessica Bayliss is using the P300 signal to let people control objects in a virtual 3D world. Subjects in her experiments wear a skull-cap instrumented with 27 EEG sensors. On top of the cap the subject dons a pair of standard VR goggles, which provide a view of a computer-generated 3D world. In this simple world there's a table-lamp, a stereo system, and a TV. Above each object there's a flashing light. Each of the lights flashes at its own rate, and out of sync with the others. If the subject wants to switch on any of the objects, they simply think about which object they're interested in, and the P300 signal does the rest.

Suppose the subject wishes to switch the TV on. Whenever the light above it flashes on, the brain recognises the correspondence between "the light is on" and "I want to switch the TV on", and 300ms later generates a P300 signal. Bayliss's system automatically compares any P300 traces recorded on the EEG with the states of the flashing lights. If any are found to be in sync -- such that one of the lights flashes on followed 300ms later by a P300 signal -- then the system knows which object the subject was thinking of, and can switch the object on in the virtual world. In effect, the subject has communicated a "yes/no" decision entirely by thinking, and it works about 85% of the time.
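
The matching step might look something like the sketch below. The 300 ms latency is from the article, but the timing tolerance, the vote counting and the function names are illustrative assumptions rather than details of Bayliss's system: each detected P300 is attributed to whichever object's light flashed on about 300 ms earlier, and the object collecting the most votes is switched on.

def select_object(p300_times, light_onsets, latency=0.300, tol=0.050):
    # p300_times: detection times in seconds.
    # light_onsets: dict mapping object name -> list of flash-on times in seconds.
    # Each P300 votes for any object whose light flashed on about `latency` earlier.
    votes = {obj: 0 for obj in light_onsets}
    for t in p300_times:
        for obj, onsets in light_onsets.items():
            if any(abs((t - on) - latency) <= tol for on in onsets):
                votes[obj] += 1
    return max(votes, key=votes.get)

# Example: three P300 detections, two of them 0.3 s after the TV's light flashed on.
lights = {"lamp": [0.5, 2.0], "stereo": [1.0, 3.0], "tv": [1.5, 2.5]}
print(select_object([1.8, 2.8, 3.1], lights))   # -> 'tv'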

What makes these experiments significant is that picking up the brain's electrical activity is fraught with problems. The signals are tiny, no more than a few millionths of a volt, so they're easily swamped by any stray electromagnetic noise in the environment. Most BCI experiments are therefore done in carefully shielded laboratories. Bayliss's achievement is to have found a way to clearly measure the signals in a very electrically noisy place -- a virtual reality laboratory stuffed with computers, displays and tracking devices.

The work of Bayliss and others is paving the way for BCI-enabled consumer products. You'll be able to control your ear-implanted phone/MP3 player just by thinking about it. Minds will soon be boggling, and presumably so will any machines they happen to be controlling.

 

It is important to distinguish these absolutist questions from operational questions about what computers can do. A question such as `Can a computer beat the world champion at chess?' is perfectly coherent and subject to test (in fact, such a test is beginning just as these words are being written). The coherence of this question does not depend on any properties of chess, although those properties may well be critical to the empirical answer. Questions such as `Can a computer compose a symphony that people find moving and beautiful?' or `Can a computer come up with a cure for cancer?' are equally understandable and testable. But `Can a computer understand?' or `Can a computer have intentions?' are not, since they depend on the assumption that a predicate such as `X understands' is objectively meaningful.

Many people have misinterpreted Searle's term `strong AI', distorting it in order to apply it to the argument that raged in the AI community during the 70s and 80s. Some AI researchers maintained faithfulness to a long-term goal of producing full human-like intelligence, while others argued that more pragmatic, `weak' approaches (under labels such as `knowledge engineering' and `expert systems') were the appropriate focus for research. Searle's argument was not about the question of whether AI should strive for human-like behavior. In his discussion he was willing to simply cede the question of whether full human-like behavior could be achieved. He argued that even if it were, the result would not be `strong AI'. The machine might act in every way as though it were a fully intelligent human (given a robotic body, a developmental history, and other such accoutrements), but it would never have real intentionality.

This is the incoherence that I referred to earlier: the assumption that `has real intentionality' is a meaningful predicate. Consider a somewhat flippant analogy. I ask my teenage daughter which world-wide-web sites are `cool'. She (and her friends and others) can all make judgments about particular instances. I now ask `What if that site were really put up by the CIA as an imitation of the web site you think it is - would it still be cool?' The answer is not defined. Coolness isn't the kind of concept that has a sharp delineation, or which accounts for the dimension I have raised. Nevertheless, the term is used successfully for communicating within a community of people who have a (relatively) shared understanding of it.

We would like to believe that something as respectable as `intentionality' doesn't have the open-ended interpreter-dependent quality of `cool'. With so many intelligent philosophers spilling much ink about it over the years, it must have a real definition, like `triangle' in mathematics, or at least to the degree of `molecule' in physics. Unfortunately this is an illusion. Every person has a strong subjective sense of what it means to have intentions, to be conscious, or to understand Chinese. The clarity of that subjective impression fools us into thinking that there is a corresponding clarity of the concept. In the end, the question as to whether Searle's Chinese room really understands Chinese is no more objectively answerable than the question as to whether his home page is a cool web site.

This does not imply, of course, that there are no meaningful questions to be argued. We can identify operational qualities that we care about, which fall within the community of usage of terms such as `intention', `understand', and `intelligent'. We can ask whether a specific technological approach is capable of achieving those qualities. These were the kinds of questions addressed in the writings by the Dreyfus brothers, myself, Flores, and others, which initiated the journal issue that preceded this book. The arguments are not about how we would label a machine's behaviors if it COULD achieve human-like capacities, but about HOW such capacities could potentially be achieved. The thrust of our argument is that the traditional symbolic/rational approach to AI cannot achieve capacities such as normal human language understanding. This is not a quibble about whether the computer's behavior deserves to be called `understanding', but a claim that computers will not be able to duplicate the full range of human language capabilities without some fundamentally new approach to the way in which they are built and programmed.

This kind of question is ultimately open to empirical test. Perhaps some unknown genius in a secluded garage has already followed the traditional AI approach and produced a machine which we would all agree meets the full range of operational tests of human intelligence. Then the Dreyfus/Winograd claim would be falsified. By this measure of course, it can never be validated, since no sequence of failures precludes the possibility that the next attempt will succeed. But arguments have great practical value, even when we are inherently unable to come up with objective proof.

Imagine that some visionary were to propose that world peace could be achieved by having enough people around the world sit down in crystal pyramids, hold hands, and chant. It would be impossible to prove him wrong a priori, and he might even be able to cite evidence, such as the peace-bringing effect that this activity had on his commune and those of his friends. But it would certainly be worth arguing about the plausibility of extending the approach to world-wide peace, before committing resources to carry out the experiment. Similarly, the philosophical arguments about the basis of AI can shed significant light on the plausibility of different approaches, and can therefore be of great consequence in the development of computer technologies.

My own interests in the debate lie along pragmatic lines. As active participants in an intellectual field, we are always faced with the question of `What is worth doing?' Many of my colleagues in computer science and engineering are skeptical about the value of philosophical debate in answering such questions. They do not see such discourse producing the kinds of hard-edged answers that would give them definite direction and specific guidance. I believe that they are wrong, both in rejecting the value of such discussions, and in expecting answers that meet the criteria of mathematical and scientific argumentation.

The kind of debate represented by this volume is indeed relevant and practical. The wisdom of philosophically grounded knowledge complements the power of technologically grounded analysis. We may not be able to give precise answers to the questions we ask, but in asking them and pursuing them with serious rigor, we continually open up new possibilities for thought.

 
