AIList Digest           Saturday, 14 Feb 1987      Volume 5 : Issue 42 

Today's Topics:
Philosophy - Emotions & Consciousness & Methodology

----------------------------------------------------------------------

Date: Mon, 9 Feb 1987 18:52 EST
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: Glands and Psychic Function

In asking about my qualifications for endorsing Eric Drexler's book about
nanotechnology, Tim Maroney says
> The psychology {in Eric Drexler's "Engines of Creation"} is so
> amazingly shallow; e.g., reducing identity to a matter of memory,
> ignoring effects of the glands and digestion on personality.
> ...in my opinion his approach is very anti-humanistic.

It is not a matter of reducing identity to memory alone, but, if he
will read what Drexler said, a matter of replacing each minute section
of the brain by some machinery that is functionally the same.
Naturally, many of those functions will be affected by chemicals that,
in turn, are partially controlled by other brain activities. A
functional duplicate of the brain will have to be embedded in a system
that duplicates enough of those non-neurological functions.

However, in the view of many thinkers concerned with what is sometimes
called the "downloading" enterprise, the functions of glands,
digestion, and the rest are much simpler than those embodied in the
brain; furthermore, they are common to all of us - and to all mammals
as well, with presumably minor variations; in this sense they are not
particularly involved in what we think of as individual identity.

I should add that it is precisely in order to avoid falling prey to such
conventional superstitions as this one - that emotions are much
harder to comprehend and duplicate than are intellectual functions -
that it is the requisite, if sometimes unpleasant, obligation of the
good psychologist to try to be as anti-humanistic as possible; that
is, in the sense of not assuming that our oldest beliefs must be
preserved, no matter what the scientific cost.

------------------------------

Date: 10 Feb 87 06:19:58 GMT
From: well!wcalvin@lll-lcc.arpa (William Calvin)
Subject: Re: More on Minsky on Mind(s)

Reply-To: wcalvin@well.UUCP (William Calvin)
Organization: Whole Earth 'Lectronic Link, Sausalito, CA
Keywords: Consciousness, throwing, command buffer, evolution, foresight


Reply to Peter O. Mikes's <lll-lcc!mordor!pom> email remarks:
> The ability to form 'the model of reality' and to exercise that model is
> (I believe) a necessary attribute of a 'sentient' being, and the richness
> of such a model may one day point a way to 'something better' than
> word-logic. Certainly, the machines which exist so far indeed do not
> have any model of the universe 'to speak of' and are not conscious.

A model of reality is not uniquely human; I'd ascribe it to a spider
as well as my pet cat. Similarly, rehearsing with peripherals switched off
is probably not very different from the "get set" behavior of said cat when
about to pounce. Choosing between behaviors isn't unique either, as when
the cat chooses between taking an interest in my shoe-laces vs. washing a
little more. What is, I suspect, different about humans is the wide range
of simulations and scenario-spinning. To use the railroad analogy again,
it isn't having two short candidate trains to choose between, but having
many strings of a half-dozen each, being shaped up into more realistic
scenarios all the time by testing against memory -- and being able to
select the best of that lot as one's next act.

I'd agree that present machines aren't conscious, but that's because
they aren't Darwin machines with this random element, followed by
successive selection steps. Granted, they don't have even a spider's model
of the (spider's limited) universe; improve that all you like, and you
still won't have human-like forecasting-the-future worry-fretting-joy. It
takes that touch of the random, as W. Ross Ashby noted back in 1956 in his
cybernetics book, to create anything really new -- and I'd bet on a Darwin-
machine-like process such as multitrack stochastic sequencing as the source
of both our continuing production of novelty and our uniquely-human aspects
of consciousness.
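
Purely as an illustration of the shape of such a loop -- many random candidate
sequences, repeated testing against memory, successive selection of survivors --
here is a minimal sketch in Python. The action names, the memory-based scoring,
and the mutation step are invented placeholders, not Calvin's actual proposal:

    import random

    ACTIONS = ["reach", "grasp", "turn", "throw", "wait", "step"]

    def random_sequence(length=6):
        # one candidate "string" of about half a dozen movement commands
        return [random.choice(ACTIONS) for _ in range(length)]

    def score_against_memory(seq):
        # placeholder for "testing against memory": prefer sequences that
        # end in a throw and waste little time waiting
        return seq.count("throw") - seq.count("wait") + (seq[-1] == "throw")

    def mutate(seq):
        # the "touch of the random": change one element of a surviving sequence
        new = list(seq)
        new[random.randrange(len(new))] = random.choice(ACTIONS)
        return new

    def darwin_machine(tracks=20, rounds=10):
        candidates = [random_sequence() for _ in range(tracks)]
        for _ in range(rounds):
            candidates.sort(key=score_against_memory, reverse=True)
            survivors = candidates[: tracks // 2]          # successive selection
            offspring = [mutate(random.choice(survivors))
                         for _ in range(tracks - len(survivors))]
            candidates = survivors + offspring
        return max(candidates, key=score_against_memory)   # the next act

    if __name__ == "__main__":
        print(darwin_machine())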

William H. Calvin
University of Washington 206/328-1192 or 206/543-1648
Biology Program NJ-15 BITNET: wcalvin@uwalocke
Seattle WA 98195 USA USENET: wcalvin@well.uucp

------------------------------

Date: Tue, 10 Feb 87 13:32:05 n
From: DAVIS@EMBL.BITNET
Subject: oh no, not more philosophy!


From: "CUGINI, JOHN" <cugini@icst-ecf>

> I (and Reed and Taylor?) have been pushing the "brain-as-criterion" based
> on a very simple line of reasoning:

> 1. my brain causes my consciousness.

> .......

> Now, when I say simple things like this, Harnad says complicated things like:
> re 1: how do you KNOW your brain causes your consciousness? How can you have
> causal knowledge without a good theory of mind-brain interaction?
> Re 2: How do you KNOW your brain is similar to others'? Similar wrt
> what features? How do you know these are the relevant features?

> .....

> We are dealing with the mind-body problem. That's enough of a philosophical
> problem to keep us busy. I have noticed (although I can't explain why),
> that when you start discussing the mind-body problem, people (even me, once
> in a while) start to use it as a hook on which to hang every other
> known philosophical problem:

> 1. well how do we know anything at all, much less our neighbors' mental states?
> (skepticism and epistemology).

> ........

> All of these are perfectly legitimate philosophical questions, but
> they are general problems, NOT peculiar to the mind-body problem.
> When addressing the mind-body problem, we should deal with its
> peculiar features (of which there are enough), and not get mired in
> more general problems * unless they are truly in doubt and thus their
> solution truly necessary for M-B purposes. *

> I do not believe that this is so of the issues Harnad raises.

Sorry John, but you can't get away with this sort of 'simple' stuff. Dressing
up complex issues in straightforward clothing is not an answer.

Firstly, as Ken Laws recently indicated with considerable flair (though
to my mind, insufficient force), we have to deal with your assertion that
'my brain causes my consciousness'. Harnad's question may or may not be
relevant, but *IF* we are going to get bogged down in subjective consciousness
(which is of little relevance to AI for the next 30 years AT LEAST), then
we must begin by questioning even this most basic assumption. I don't think
it's necessary to take you through the argument, only to note that we end
up with Nagel in asserting that "it is like something to be me/us". It's
not difficult to assert, and to cogently argue, that consciousness is an
illusion, but what is not so easily got around is that *something* must
be having that illusion. The mere fact that we are aware (yes, I know, that's
what consciousness *used* to mean!) immediately propels us to question how
"anything can know anything at all".

This question is absolutely central to the M-B problem, and there is no
getting around it by arguing for ways in which we might organise conscious
experience. The simple fact that we either *are* or even just *seem to be*
conscious immediately forces us to deal with this issue. Of course, you can
avoid it if you want to return to pre-computational philosophy and
put the M-B problem simply as the issue of the localisation of conscious
activity, but that seems to me to be as enormous a bypass of the *real*
issue as you can get.

Speaking personally, I must say that it seems initially easier to suppose
that we only suffer an illusion of consciousness - by which I mean we only
suffer the illusion of being aware of possessing motivation, desire, intention
(maybe even intension!) and emotion. In a superficial sense this clears
everything up quite nicely, since these tend to be the sorts of things that
have been referred to (implicitly or not) during the Minsky Meanderings. However,
it DOES NOT get around the fact that there still seems to be a 'we' being
the subject of these (magnificent) illusions.

And that, my friends, must surely be the central issue. It makes not an
iota of difference what our 'conscious experiences' actually consist of;
it makes no difference how our neural networks are linked to allow us to
access previous events, to formulate reasons, to plan, to rehearse (re:
Calvin). The problem at the heart of all this is simply that as individuals
we are aware of *something*, and that is the biggest problem of all.

But it's irrelevant for AI. We will never be the computers we have designed,
and hence they will always be 'other minds'. Hence, the issue for practical
AI is simply one of nomenclature, and can never (?) be one of design. C'est
ça.

I don't think I explained this too well - maybe a prod will help me rearrange
my thoughts.....

so, robot cow-bolts or electronic battering rams to:

paul ("the answers come easy - you have any questions ?") davis

netmail: davis@embl.bitnet

wetmail: embl, postfach 10.2209, 6900 Heidelberg, FRG.

"conciousness is as a butterfly,
which, chased after with great fervour,
will never be yours.
but if you will only sit down quietly,
to admire the view,
may alight gently upon your arm."


with apologies to Nathaniel Hawthorne (I think)

------------------------------

Date: 9 Feb 87 14:48:28 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s)


wcalvin@well.UUCP (William Calvin), Whole Earth 'Lectronic Link, Sausalito, CA
writes:

> Rehearsing movements may be the key to appreciating the brain
> mechanisms [of consciousness and free will]

But WHY do the functional mechanisms of planning have to be conscious?
What does experience, awareness, etc., have to do with the causal
processes involved in the fanciest plan you may care to describe? This
is not a teleological why-question I'm asking (as other contributors
have mistakenly suggested); it is a purely causal and functional one:
Every one of the internal functions described for a planning,
past/future-oriented device of the kind Minsky describes (and we too
could conceivably be) would be physically, causally and functionally EXACTLY
THE SAME -- i.e., would accomplish the EXACT same things, by EXACTLY the same
means -- WITHOUT being interpreted as being conscious. So what functional
work is the consciousness doing? And if none, what is the justification
for the conscious interpretation of any such processes (except in
my own private case -- and of course that can't be claimed to the credit of
Minsky's hypothetical processes)? [As to "free will" -- apart from the aspect
that is redundant with the consciousness-problem [namely, the experience,
surely illusory, of free will], I sure wouldn't want to have to defend a
functional blueprint for that...]


--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 9 Feb 87 19:28:40 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s) (Reply to Davis)

Keywords: Causality
Summary: On the "how" vs. the "why" of consciousness
References: <460@mind.UUCP> <1032@cuuxb.UUCP> <465@mind.UUCP>
<2556@well.UUCP> <491@mind.UUCP>




Paul Davis (davis@embl.bitnet), EMBL, Postfach 10.22.09, 6900 Heidelberg, FRG,
wrote on mod.ai:


> we see Harnad struggling with why's and not how's...
> consciousness is a *biological* phenomenon... because
> this is so, the question of *why* consciousness is used
> is quite irrelevant in this context...[Davis cites Armstrong,
> etc., on "consciousness as a means for social interaction"]...
> consciousness would certainly seem to be here -- leave it to
> the evolutionary biologists to sort out why, while we get on
> with the how...

I'm concerned ONLY with "how," not "why." That's what the TTT and
methodological epiphenomenalism are about. When I ask pointedly about
"why," I am not asking a teleological question or even an evolutionary one.
[In prior iterations I explained why evolutionary accounts of the origins
and "survival value" of consciousness are doomed: because they're
Turing-indistinguishable from the IDENTICAL selective-advantage scenario,
minus consciousness.] My "why" is a logical and methodological challenge
to inadequate, overinterpreted "how" stories (including evolutionary
"just-so" stories, e.g., "social" ones): Why couldn't the objectively
identical "how" features stand alone, without being conscious? What
functional work is the consciousness itself doing, as opposed to
piggy-backing on the real functional work? If there's no answer to that,
then there is no justification for the conscious interpretation of the "how."
[If we're not causal dualists, it's not even clear whether we would
WANT consciousness to be doing any independent work. But if we
wouldn't, then why does it figure in our functional accounts? -- Just
give me the objective "how," without the frills.]

> the mystery of the C-1: How can ANYTHING *know* ANYTHING at all?

The problem of consciousness is not really the same as the problem of
knowledge (although they're linked, since, until shown otherwise, only
conscious devices have knowledge). To know X is not the same as to
experience X. In fact, I don't think knowledge is a C-1-level
phenomenon. [I know (C-2) THAT I experience pain, but does the cow know
THAT she experiences pain? Yet she presumably does experience pain (C-1).]
Moreover, "knowledge" is mired in epistemological and even
ontological issues that cog-sci would do well to steer clear of (such
as the difference between knowing X and merely believing X, with
justification, when X is true).
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

Date: 9 Feb 87 18:33:49 GMT
From: princeton!mind!harnad@rutgers.rutgers.edu (Stevan Harnad)
Subject: Re: More on Minsky on Mind(s) (Reply to Laws)


Ken Laws <Laws@SRI-STRIPE.ARPA> wrote on mod.ai:

> I'm not so sure that I'm conscious... I'm not sure I do experience
> the pain because I'm not sure what "I" is doing the experiencing

This is a tough condition to remedy. How about this for a start: The
inferential story, involving "I" and objects, etc. (i.e., C-2) may
have the details wrong. Never mind who or what seems to be doing the
experiencing of what. The question of C-1 is whether there is any
experience going on at all. That's not a linguistic matter. And it's
something we presumably share with speechless, unreflective cows.

> on the other hand, I'm not sure that silicon systems
> can't experience pain in essentially the same way.

Neither am I. But there's been a critical inversion of the null hypothesis
here. From the certainty that there's experience going on in one privileged
case (the first-person case), one cannot be too triumphant about the ordinary
inductive uncertainty attending all other cases. That's called the other-minds
problem, and the validity of that inference is what's at issue here.
The substantive problem is characterizing the functional capacities of
artificial and natural systems that warrant inferring they're conscious.

> Instead of claiming that robots can be conscious, I am just as
> willing to claim that consciousness is an illusion and that I am
> just as unconscious as any robot.

If what you're saying is that you feel nothing (or, if you prefer, "no
feeling is going on") when I pinch you, then I must of course defer to
your higher authority on whether or not you are really an unconscious robot.
If you're simply saying that some features of the experience of pain and
how we describe it are inferential (or "linguistic," if you prefer)
and may be wrong, I agree, but that's beside the point (and a C-2
matter, not a C-1 matter). If you're saying that the contents of
experience, even its form of presentation, may be illusory -- i.e.,
the way things seem may not be the way things are -- I again agree,
and again remind you that that's not the issue. But if you're saying
that the fact THAT there's an experience going on is an illusion, then
it would seem that you're either saying something (1) incoherent or (in
MY case, in any event) (2) false. It's incoherent to say that it's
illusory that there is experience because the experience is illusory.
If it's an experience, it's an experience (rather than something else,
say, an inert event), irrespective of its relation to reality or to any
interpretations and inferences we may wrap it in. And it's false (of me,
at any rate) that there's no experience going on at all when I say (and
feel) I have a toothache. As for the case of the robot, well, that's
what's at issue here.

[Cartesian exercise: Try to apply Descartes' method of doubt -- which
so easily undermines "I have a toothache" -- to "It feels as if I have
a toothache."
This, by the way, is to extend the "cogito" (validly) even
further than its author saw it as leading. You can doubt that things
ARE as they seem, but you can't doubt that things SEEM as they seem.
And that's the problem of experience (of appearances, if you will).
Calling them "illusions" just doesn't help.]

> One way out is to assume that neurons themselves are aware of pain

Out of what? The other-minds problem? This sounds more like an
instance of it than a way out. (And assumption hardly seems to amount
to solution.)

> How do we know that we experience pain?

I'm not sure about the "I," and the specifics of the pain and its
characterization are negotiable, but THAT there is SOME experience
going on when "I" feel "pain" is something that anyone but an
unconscious robot can experience for himself. And that's how one
"knows" it.

> I propose that... our "experience" or "awareness" of pain is
> an illusion, replicable in all relevant respects by inorganic systems.

Replicate that "illusion" -- design devices that can experience the
illusion of pain -- and you've won the battle. [One little question:
How are you going to know whether the device really experiences that
illusion, rather than your merely being under the illusion that it
does?]

As to inorganic systems: As ever, I think I have no more (or less)
reason to deny that an inorganic system that can pass the TTT has a
mind than I do to deny that anyone else other than myself has a mind.
That really is a "way out" of the other-minds problem. But inorganic
systems that can't pass the TTT...
--

Stevan Harnad (609) - 921 7771
{allegra, bellcore, seismo, rutgers, packard} !princeton!mind!harnad
harnad%mind@princeton.csnet

------------------------------

End of AIList Digest
********************
