AIList Digest           Thursday, 16 Oct 1986     Volume 4 : Issue 216 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 9 Oct 86 15:23:35 GMT
From: cbatt!ukma!drew@ucbvax.Berkeley.EDU (Andrew Lawson)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)

In article <160@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>On my argument the distinction between the two versions is critical,
>because the linguistic version can (in principle) be accomplished by
>nothing but symbols-in/symbols-out (and symbols in between) whereas
>the robotic version necessarily calls for non-symbolic processes
>(transducer, effector, analog and A/D).

This is not clear. When I look at my surroundings, you are no
more than a symbol (just as is anything outside of my being).
Remember that "symbol" is not rigidly defined most of the time.
When I recognize the symbol of a car heading toward me, I respond
by moving out of the way. This is not essentially different from
a linguistic system recognizing a symbol and responding with another
symbol.

--
Drew Lawson cbosgd!ukma!drew
"Parts is parts." drew@uky.csnet
drew@UKMA.BITNET

------------------------------

Date: 6 Oct 86 18:15:42 GMT
From: mnetor!utzoo!utcsri!utai!me@seismo.css.gov (Daniel Simon)
Subject: Re: Searle, Turing, Symbols, Categories (Question not comment)

In article <160@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
>
>In reply to (1): The linguistic version of the turing test (turing's
>original version) is restricted to linguistic interactions:
>Language-in/Language-out. The robotic version requires the candidate
>system to operate on objects in the world. In both cases the (turing)
>criterion is whether the system can PERFORM indistinguishably from a human
>being. (The original version was proposed largely so that your
>judgment would not be prejudiced by the system's nonhuman appearance.)
>
I have no idea if this is a relevant issue or a relevant place to bring it up,
but this whole business of the Turing test makes me profoundly suspicious. For
example, we all know about Weizenbaum's ELIZA, which, he claimed, convinced
many clever, relatively computer-literate (for their day) people that it was
intelligent. This fact leads me to some questions which, in my view, ought to
be seriously addressed before the phrase "Turing test" is bandied about (they
have probably already been addressed, but I didn't notice, and I will thank
everybody in advance for telling me where to find a treatment of them and for
asking me to kindly buzz off):

1) To what extent is our discernment of intelligent behaviour context-
dependent? ELIZA was able to appear intelligent because of the
clever choice of context (in a Rogerian therapy session, the kind
of dull, repetitive comments made by ELIZA seem perfectly
appropriate, and hence, intelligent). Mr. Harnad has brought up
the problem of physical appearance as a prejudicing factor in the
assessment of "human" qualities like intelligence. Might not the
robot version lead to the opposite problem of testers being
insufficiently skeptical of a machine with human appearance (or
even of a machine so unlike a human being in appearance that mildly
human-like behaviour takes on an exaggerated significance in the
tester's mind)? Is it ever possible to trust the results of any
instance of the test as being a true indicator of the properties of
the tested entity itself, rather than those of the environment in
which it was tested?

2) Assuming that some "neutral" context can be found which would not
"distort" the results of the test (and I'm not at all convinced
that such a context exists, or even that the idea of such a context
has any meaning), what would be so magic about the level of
perceptiveness of the shrewdest, most perspicacious tester
available, that would make his inability to distinguish man from
machine in some instance the official criterion by which to judge
intelligence? In short, what does passing (or failing) the Turing
test really mean?

3) If the Turing test is in fact an unacceptable standard, and
building a machine that can pass it an inappropriate goal (and, as
questions 1 and 2 have probably already suggested, this is what I
strongly suspect), are there more appropriate means by which we
could evaluate the human-like or intelligent properties of an AI
system? In effect, is it possible to formulate the qualities that
constitute intelligence in a manner which is more intuitively
satisfying than the standard AI stuff about reasoning, but still
more rigorous than the Turing test?

As I said, I don't know if my questions are legitimate, or if they have already
been satisfactorily resolved, or if they belong elsewhere; I merely bring them
up here because this is the first place I have seen the Turing test brought up
in a long time. I am eager to see what others have to say on the subject.


>Stevan Harnad
>princeton!mind!harnad


Daniel R. Simon

"Look at them yo-yo's, that's the way to do it
Ya go to grad school, get your PhD"

------------------------------

Date: 10 Oct 86 13:47:46 GMT
From: rutgers!princeton!mind!harnad@think.com (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories

In response to what I wrote in article <160@mind.UUCP>, namely:

>On my argument the distinction between the two versions
>[of the turing test] is critical,
>because the linguistic version can (in principle) be accomplished by
>nothing but symbols-in/symbols-out (and symbols in between) whereas
>the robotic version necessarily calls for non-symbolic processes
>(transducer, effector, analog and A/D).

Drew Lawson replies:

> This is not clear. When I look at my surroundings, you are no
> more than a symbol (just as is anything outside of my being).
> Remember that "symbol" is not rigidly defined most of the time.
> When I recognize the symbol of a car heading toward me, I respond
> by moving out of the way. This is not essentially different from
> a linguistic system recognizing a symbol and responding with another
> symbol.

It's important, when talking about what is and is not a symbol, to
speak literally and not symbolically. What I mean by a symbol is an
arbitrary formal token, physically instantiated in some way (e.g., as
a mark on a piece of paper or the state of a 0/1 circuit in a
machine) and manipulated according to certain formal rules. The
critical thing is that the rules are syntactic, that is, the symbol is
manipulated on the basis of its shape only -- which is arbitrary,
apart from the role it plays in the formal conventions of the syntax
in question. The symbol is not manipulated in virtue of its "meaning."
Its meaning is simply an interpretation we attach to the formal
goings-on. Nor is it manipulated in virtue of a relation of
resemblance to whatever "objects" it may stand for in the outside
world, or in virtue of any causal connection with them. Those
relations are likewise mediated only by our interpretations.
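
For concreteness, here is a toy sketch, in Python, of what "manipulation on
the basis of shape only" can look like. Nothing in it comes from the papers
under discussion; the token names and rules are invented. The point is that
the INTERPRETATION table, which carries the "meaning," is never consulted by
the program:

    # A hypothetical rewrite system: tokens are arbitrary strings, and the
    # rules fire purely on matching the tokens' shapes.
    RULES = {
        ("blip", "arrow", "blop"): ("neg", "blip", "vel", "blop"),
        ("neg", "neg", "blip"): ("blip",),
    }

    # Our gloss on the tokens; causally inert, since the code never reads it.
    INTERPRETATION = {
        "blip": "a car is approaching",
        "blop": "I should get out of the way",
        "arrow": "implies", "neg": "not", "vel": "or",
    }

    def rewrite(tokens):
        """Rewrite a leading run of tokens whose shape matches some rule."""
        for lhs, rhs in RULES.items():
            if tuple(tokens[:len(lhs)]) == lhs:
                return list(rhs) + tokens[len(lhs):]
        return list(tokens)

    print(rewrite(["blip", "arrow", "blop"]))
    # -> ['neg', 'blip', 'vel', 'blop'], whatever anyone takes the tokens to mean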

This is why the distinction between symbolic and nonsymbolic processes
in cognition (and robotics) is so important. It will not do to simply
wax figurative on what counts as a symbol. If I'm allowed to use the
word metaphorically, of course everything's a "symbol." But if I stick
to a specific, physically realizable sense of the word, then it
becomes a profound theoretical problem just exactly how I (or any
device) can recognize you, or a car, or anything else, and how I (or it)
can interact with such external objects robotically. And the burden of
my paper is to show that this capacity depends crucially on nonsymbolic
processes.
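
A correspondingly toy sketch of the robotic case (again, nothing here is from
the papers; the sensor, quantization levels and threshold are all invented for
illustration) shows where the nonsymbolic stages would sit: the continuous
signal and its A/D conversion come before anything that could be called a
symbol at all.

    import math

    def sensor_voltage(t):
        """Analog stage: a continuous signal that covaries with the world."""
        return 2.5 + 2.0 * math.sin(t)          # volts

    def a_to_d(volts, levels=16, v_max=5.0):
        """A/D transduction: quantize the continuous value into a discrete code."""
        return min(levels - 1, max(0, int(volts / v_max * levels)))

    def classify(code):
        """Only from here on is the process symbolic: the code is an arbitrary
        token handled by rule, not by resemblance to the light that caused it."""
        return "OBSTACLE" if code >= 8 else "CLEAR"

    for t in (0.0, 2.0, 4.0):
        code = a_to_d(sensor_voltage(t))
        print(t, round(sensor_voltage(t), 2), code, classify(code))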

Finally, apart from the temptation to lapse into metaphor about
"symbols," there is also the ever-present lure of phenomenology in
contemplating such matters. For, apart from my robotic capacity to
interact with objects in the world -- to recognize them, manipulate
them, name them, describe them -- there is also my consciousness: My
subjective sense, accompanying all these capacities, of what it's
like (qualitatively) to recognize, manipulate, etc. That, as I argue
in another paper (and only hint at in the two under discussion), is a
problem that we'd do best to steer clear of in AI, robotics and
cognitive modeling, at least for the time being. We already have our hands
full coming up with a model that can successfully pass the (robotic
and/or linguistic) turing test -- i.e., perform exactly AS IF it had
subjective experiences, the way we do, while it successfully accomplishes
all those clever things. Until we manage that, let's not worry too much
about whether the outcome will indeed be merely "as if." Overinterpreting
our tools phenomenologically is just as unproductive as overinterpreting them
metaphorically.

Stevan Harnad
princeton!mind!harnad

------------------------------

End of AIList Digest
********************
