AIList Digest            Tuesday, 2 Dec 1986      Volume 4 : Issue 275 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 28 Nov 86 06:27:20 GMT
From: rutgers!princeton!mind!harnad@titan.arc.nasa.gov (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


Lambert Meertens (lambert@boring.uucp) of CWI, Amsterdam, writes:

> for me it is not the case that I perceive/experience/
> am-directly-aware-of my performance being caused by anything.
> It just happens.

Disagreements about phenomenology are of course not easy to settle,
but I think I can say with some confidence that
most people experience their (voluntary) behavior as caused by THEM.
My point about free will's being an illusion is a subtler one. I am
not doubting that we all experience our voluntary actions as freely
willed by ourselves. That EXPERIENCE is certainly real, and no
illusion. What I am doubting is that our will is actually the cause of our
actions, as it seems to be. I think our actions are caused by our
brain activity (and its causes) BEFORE we are aware of having willed
them, and that our experience of willing and causing them involves a
temporal illusion (see S. Harnad [1982] "Consciousness: An afterthought,"
Cognition and Brain Theory 5: 29-47, and B. Libet [1986] "Unconscious
cerebral initiative and the role of conscious will in voluntary action,"
Behavioral and Brain Sciences 8: 529-566.)

Of course, my task of supporting this position would be much easier if
the phenomenology you describe were more prevalent...

> How do I know I have a mind?... The problem is that if you
> look up "mind" in an English-Dutch dictionary, some eight
> translations are suggested.

The mind/body problem is not just a lexical one; nor can it be settled by
definitions. The question "How do I know I have a mind?" is synonymous
with the question "How do I know I am experiencing anything at all
[now, rather than just going through the motions AS IF I were having
experience, but in fact being only an insentient automaton]?"

And the answer is: By direct, first-hand experience.

> "Consciousness" is more like "appetite"... How can we know for
> sure that other people have appetites as well?... "Can machines
> have an appetite?"


I quite agree that consciousness is like appetite. Or, to put it more
specifically: If consciousness is the ability to have (or the actual
having of) experience in general, appetite is a particular experience
most conscious subjects have. And, yes, the same questions that apply to
consciousness in general apply to appetite in particular. But I'm
afraid that this conclusion was not your objective here...

> Now why is consciousness "real", if free will is an illusion?
> Or, rather, why should the thesis that consciousness is "real"
> be more compelling than the analogous thesis for free will?
> In either case, the essential argument is: "Because I [the
> proponent of that thesis] have direct, immediate, evidence of it."


The difference is that in the case of the (Cartesian) thesis of the
reality of consciousness (or mind) the question is whether there is
any qualitative, subjective experience going on AT ALL, whereas in the
case of the thesis of the reality of free will the question is whether
the dictates of a particular CONTENT of experience (namely, the causal
impression it gives us) are true of the world. The latter, like the
existence of the outside world itself, is amenable to doubt. But the former,
namely, THAT we are experiencing anything at all, is not open to doubt,
and is settled by the very act of experiencing something. That is the
celebrated Cartesian Cogito.

> Sometimes we are conscious of certain sensations. Do these
> sensations disappear if we are not conscious of them? Or do they go
> on on a subconscious level? That is like the question "If a falling
> tree..."


The following point is crucial to a coherent discussion of the
mind/body problem: The notion of an unconscious sensation (or, more
generally, an unconscious experience) is a contradiction in terms!

[Test it in the form: "unexperienced experience." Whatever might that
mean? Don't answer. The Viennese delegation (as Nabokov used to call
it) has already made almost a century's worth of hermeneutic hay with the
myth of the "subconscious" -- a manifest nonsolution to the mind/body
problem that simply consisted of multiplying the mystery by two. The problem
isn't the unconscious causation of behavior: If we were all
unconscious automata there would be no mind/body problem. The problem
is conscious experience. And anthropomorphizing the sizeable portion
of our behavior that we DON'T have the illusion of being the cause of
is not only no solution to the mind/body problem but not even a
contribution to the problem of finding the unconscious causes of
behavior -- which calls for cognitive theory, not hermeneutics.]

It would be best to stay away from the usually misunderstood and
misused problem of the "unheard sound of the falling tree." The example
is typically used to deride philosophers, but the unheard last laugh is
usually on the derider.

> Let us agree that the sensations continue at least if it can be
> shown that the person involved keeps behaving as if the concomitant
> sensations continued, even though professing in retrospection not
> to have been aware of them. So people can be afraid without
> realizing it, say, or drive a car without being conscious of the
> traffic lights (and still halt for a red light).

I'm afraid I can't agree with any of this. A sensation may be experienced and
then forgotten, and then perhaps again remembered. That's unproblematic,
but that's not the issue here, is it? The issue is either (1)
unexperienced sensations (which I suggest is a completely incoherent
notion) or (2) unconsciously caused or guided behavior. The latter is
of course the category most behavior falls into. So unconscious
stopping for a red light is okay; so is unconscious avoidance or even
unconscious escape. But unconscious fear is another matter, because
fear is an experience, not a behavior (and, as I've argued, the
concept of an unconscious experience is self-contradictory).

If I may anticipate what I will be saying below: You seem to have
altogether too much intuitive confidence in the explanatory
power of the concept and phenomenology of memory in your views on the
mind/body problem. But the problem is that of immediate, ongoing
qualitative experience. Anything else -- including the specifics of the
immediate content of the experience (apart from the fact THAT it is an
experience) and its relation to the future, the past or the outside
world -- is open to doubt and is merely a matter of inference, rather
than one of direct, immediate certainty in the way experiential matters
are. Hence whereas veridical memories and continuities may indeed happen
to be present in our immediate experiences, there is no direct way that
we can know that they are in fact veridical. Directly, we know only
that they APPEAR to be veridical. But that's how all phenomenological
experience is: An experience of how things appear. Sorting out what's
what is an indirect, inferential matter, and that includes sorting out
the experiences that I experience correctly as remembered from those
that are really only "deja vu." (This is what much of the writing on
the problem of the continuity of personal identity is concerned with.)

> Maybe everything is conscious. Maybe stones are conscious...
> Their problem is, they can hardly tell us. The other problem is,
> they have no memory... They are like us with that traffic light...
> Even if we experience something consciously, if we lose all
> remembrance of it, there is no way in which we can tell for sure
> that there was a conscious experience. Maybe we can infer
> consciousness by an indirect argument, but that doesn't count.
> Indirect evidence can be pretty strong, but it can never give
> certainty. Barring false memories, we can only be sure if we
> remember the experience itself.

Stones have worse problems than not being able to tell us they're
conscious and not being able to remember. And the mind/body problem is
not solved by animism (attributing conscious experience to everything); it
is merely compounded by it. The question is: Do stones have
experiences? I rather doubt it, and feel that a good part of the M/B
problem is sorting out the kinds of things that do have experiences from
the kinds of things, like stones, that do not (and how, and why,
functionally speaking).

If we experience something, we experience it consciously. That's what
"experience" means. Otherwise it just "happens" to us (e.g., when we're
distracted, asleep, comatose or dead), and then we may indeed be like the
stone (rather than vice versa). And if we forget an experience, we
forget it. So what? Being conscious of it does not consist in or
depend on remembering it, but on actually experiencing it at the time.
The same is true of remembering a previously forgotten experience:
Maybe it was so, maybe it wasn't. The only thing we are directly
conscious of is that we experience it AS something remembered.

Inference may be involved in trying to determine whether or not a
memory is veridical, but it is certainly not involved in determining
THAT I am having any particular conscious experience. That fact is
ascertained directly. Indeed it is the ONLY fact of consciousness, and
it is immediate and incorrigible. The particulars of its content, on
the other hand -- what an experience indicates about the outside world, the
past, the future, etc. -- are indirect, inferential matters. (To put
it another way, there is no way to "bar false memories." Experiences
wear their experientiality on their sleeves, so to speak, but all of the
rest of their apparel could be false, and requires inference for
indirect confirmation.)

> If some things we experience do not leave a recallable trace, then
> why should we say that they were experienced consciously? Or, why
> shouldn't we maintain the position that stones are conscious
> as well?... More useful, then, to use "consciousness" only for
> experiences that are, somehow, recallable.

These stipulations would be arbitrary (and probably false). Moreover,
they would simply fail to be faithful to our direct experience -- to
"what it's like" to have an experience. The "recallability" criterion
is a (weak) external one we apply to others, and to ourselves when
we're wondering whether or not something really happened. But when
we're judging whether we're consciously experiencing a tooth-ache NOW,
recallability has nothing to do with it. And if we forget the
experience (say, because of subsequent anesthesia) and never recall it
again, that would not make the original experience any less conscious.

> the things that go on in our heads are stored away: to be used for
> determining patterns, for better evaluation of the expected outcome of
> alternatives, for collecting material that is useful for the
> construction or refinement of the model we have of the outside world,
> and so on.

All these conjectures about the functions of memory and other
cognitive processes are fine, but they do not provide (nor can they
provide) the slightest hint as to why all these functional and
behavioral objectives are not simply accomplished UNconsciously. This
shows as graphically as anything how the mind/body problem is
completely bypassed by such functional considerations. (This is also
why I have been repeatedly recommending "methodological epiphenomenalism"
as a research strategy in cognitive modeling.)

> Imagine now a machine programmed to "eat" and also to keep up
> some dinner conversation... IF hunger THEN eat... equipped with
> a conflict-resolution module... dinner-conversation module...
> Speaking anthropomorphically, we would say that the machine is
> feeling uneasy... apology submodule... PROBABLE CAUSE OF eat
> IS appetite... "<<SELF, having, appetite>... <goodness, 0.6785>>"
> How different are we from that machine?

On the information you give here, the difference is likely to be like
night and day. What you have described is a standard anthropomorphic
interpretation of simple symbol-manipulations. Overzealous AI workers
do it all the time. What I believe is needed is not more
over-interpretation of the pathetically simple toy tricks that current
programs can perform, but an effort to model life-size performance
capacity: The Total Turing Test. That will diminish the degrees of
freedom of the model to the size of the normal underdetermination of
scientific theories by their data, and it will augment the problem of
machine minds to the size of the other-minds problem, with which we
are already dealing daily by means of the TTT.
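
To see just how simple the symbol manipulation in the quoted scenario
really is, consider a minimal sketch (here in Python, purely for
illustration; the rule, the "apology submodule" and the "goodness"
figure are lifted from the quoted description, and the function names
are invented). Nothing in it feels anything; the mentalistic reading is
supplied entirely by the interpreter:

    # A toy production system of the kind described in the quote.
    # Nothing here feels anything; the mentalistic reading ("SELF is
    # having an appetite") is supplied entirely by the human interpreter.

    rules = [("hunger", "eat")]            # IF hunger THEN eat

    def resolve_conflict(applicable):
        # "Conflict-resolution module": just pick the first applicable act.
        return applicable[0] if applicable else None

    def apologize(act):
        # "Apology submodule": emit a symbolic self-attribution.
        if act == "eat":
            return ("PROBABLE CAUSE OF eat IS appetite",
                    (("SELF", "having", "appetite"), ("goodness", 0.6785)))

    state = {"hunger"}
    applicable = [act for cond, act in rules if cond in state]
    act = resolve_conflict(applicable)
    print(act)              # -> eat
    print(apologize(act))   # -> the "appetite" self-attribution

That a dozen lines suffice to print "<<SELF, having, appetite>>" is
exactly why such output should not be over-interpreted; only life-size,
TTT-scale performance would shrink the interpretative degrees of freedom.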

In the process of pursuing that distant scientific goal, we may come to
know certain constraints on the enterprise, such as: (1) Symbol-manipulation
alone is not sufficient to pass the TTT. (2) The capacity to pass the TTT
does not arise from a mere accretion of toy modules. (3) There is no autonomous
symbolic macromodule or level: Symbolic representations must be grounded in
nonsymbolic processes. And if methodological epiphenomenalism is
faithfully adhered to, the only interpretative question we will ever need
to ask about the mind of the candidate system will be precisely the
same one we ask about one another's minds; and it will be answered on
precisely the same basis as the one we use daily in dealing with the
other-minds problem: the TTT.

> if we ponder a question consciously... I think the outcome is not
> the result of the conscious process, but, rather, that the
> consciousness is a side-effect of the conflict-resolution
> process going on. I think the same can be said about all "conscious"
> processes. The process is there, anyway; it could (in principle) take
> place without leaving a trace in memory, but for functional reasons
> it does leave such a trace. And the word we use for these cognitive
> processes that we can recall as having taken place is "conscious".

Again, your account seems to be influenced by certain notions, such as
memory and "conflict-resolution," that appear to be carrying more intuitive
weight than they can bear. Not only is the issue not that of "leaving
a trace" (as mentioned earlier), but there is no real functional
argument here for why all this shouldn't or couldn't be accomplished
unconsciously. [However, if you substitute for "side-effect" the word
"epiphenomenon," you may be calling things by their proper name, and
providing (inadvertently) a perfectly good rationale for ignoring them
in trying to devise a model to pass the TTT.]

> it is functional that I can raise my arm by "willing" it to raise,
> although I can use that ability to raise it gratuitously. If the
> free will here is an illusion (which I think is primarily a matter
> of how you choose to define something as elusive as "free will"),
> then so is the free will to direct your attention now to this,
> then to that. Rather than to say that free will is an "illusion",
> we might say that it is something that features in the model
> people have about "themselves". Similarly, I think it is better to say
> that consciousness is not so much an illusion, but rather something to
> be found in that model. A relatively recent acquisition of that model is
> known as the "subconscious". A quite recent addition are "programs",
> "sub-programs", "wrong wiring", etc.

My arm seems able to rise in two important ways: voluntarily and
involuntarily (I don't know what "gratuitously" means). It is not a
matter of definition that we feel as if we are causing the motion in
the voluntary case; it is a matter of immediate experience. Whether
or not that experience is veridical depends on various other factors,
such as the true order of the events in question (brain activity,
conscious experience, movement) in real time, and the relation of the
experiential to the physical (i.e., whether or not it can be causal). The
same question does indeed apply to willed changes in the focus of
attention. If free will "is something that features in the model
people have of 'themselves'," then the question to ask is whether that
model is illusory. Consciousness itself cannot be something found in
a model (although the concept of consciousness might be) because
consciousness is simply the capacity to have (or the having of)
experience. (My responses to the concept of the "subconscious" and the
over-interpretation of programs and symbols are described earlier in
this module.)

> A sufficiently "intelligent" machine, able to pass not only the
> dinner-conversation test but also a sophisticated Turing test,
> must have a model of itself. Using that model, and observing its
> own behaviour (including "internal" behaviour!), it will be led to
> conclude not only that it has an appetite, but also volition and
> awareness...Is it mistaken then? Is the machine taken in by an illusion?
> "Can machines have illusions?"

What a successful candidate for the TTT will have to have is not
something we can decide by introspection. Doing hermeneutics on its
putative inner life before we build it would seem to be putting the
cart before the horse. The question whether machines can have
illusions (or appetites, or fears, etc.) is simply a variant on the
basic question of whether any organism or device other than oneself
can have experiences.
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************
