AIList Digest           Thursday, 16 Oct 1986     Volume 4 : Issue 217 

Today's Topics:
Philosophy - Searle, Turing, Symbols, Categories

----------------------------------------------------------------------

Date: 10 Oct 86 15:50:33 GMT
From: rutgers!princeton!mind!harnad@lll-crg.arpa (Stevan Harnad)
Subject: Re: Searle, Turing, Symbols, Categories


In response to my article <160@mind.UUCP>, Daniel R. Simon asks:

> 1) To what extent is our discernment of intelligent behaviour
> context-dependent?...Might not the robot version [of the
> turing test] lead to the...problem of testers being
> insufficiently skeptical of a machine with human appearance?
> ...Is it ever possible to trust the results of any
> instance of the test...?

My reply to these questions is quite explicit in the papers in
question: The turing test has two components, (i) a formal, empirical one,
and (ii) an informal, intuitive one. The formal empirical component (i)
is the requirement that the system being tested be able to generate human
performance (be it robotic or linguistic). That's the nontrivial
burden that will occupy theorists for at least decades to come, as we
converge on (what I've called) the "total" turing test -- a model that
exhibits all of our robotic and linguistic capacities. The informal,
intuitive component (ii) is that the system in question must perform in a
way that is indistinguishable from the performance of a person, as
judged by a person.
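
To make (ii) concrete, here is a minimal sketch of the indistinguishability
judgment as a scoring procedure (the function names, the chance-level
threshold, and the overall structure are my own illustration, not part of
the argument above):

    import random

    def indistinguishability_trial(judge, candidate, human, probe):
        # One trial of component (ii): the judge sees two responses to the
        # same probe in random order and must guess which came from the human.
        responses = [("candidate", candidate(probe)), ("human", human(probe))]
        random.shuffle(responses)
        guess = judge(probe, [text for _, text in responses])  # judge returns 0 or 1
        return responses[guess][0] == "human"

    def passes_component_ii(judge, candidate, human, probes):
        # Informally, the candidate "passes" if the judge does no better than
        # chance at picking out the human over many probes (the 5% margin is
        # arbitrary).
        correct = sum(indistinguishability_trial(judge, candidate, human, p)
                      for p in probes)
        return abs(correct / len(probes) - 0.5) < 0.05

Component (i), generating the full human performance in the first place, is
the hard part; the sketch only formalizes the judging.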

It is not always clear which of the two components a sceptic is
worrying about. It's usually (ii), because who can quarrel with the
principle that a veridical model should have all of our performance
capacities? Now the only reply I have for the sceptic about (ii) is
that he should remember that he has nothing MORE than that to go on in
the case of any other mind than his own. In other words, there is no
rational reason for being more sceptical about robots' minds (if we
can't tell their performance apart from that of people) than about
(other) people's minds. The turing test is ALREADY the informal way we
contend with the "other-minds" problem [i.e., how can you be sure
anyone else but you has a mind, rather than merely acting AS IF it had
a mind?], so why should we demand more in the case of robots? It's
surely not because of any intuitive or a priori knowledge we have
about the FUNCTIONAL basis of our own minds, otherwise we could have put
those intuitive ideas to work in designing successful candidates for the
turing test long ago.

So, since we have absolutely no intuitive idea about the functional
(symbolic, nonsymbolic, physical, causal) basis of the mind, our only
nonarbitrary basis for discriminating robots from people remains their
performance.

As to "context," as I argue in the paper, the only one that is
ultimately defensible is the "total" turing test, since there is no
evidence at all that either capacities or contexts are modular. The
degrees of freedom of a successful total-turing model are then reduced
to the usual underdetermination of scientific theory by data. (It's always
possible to carp at a physicist that his theoretic model of the
universe "is turing-indistinguishable from the real one, but how can
you be sure it's `really true' of the world?")

> 2) Assuming that some "neutral" context can be found...
> what does passing (or failing) the Turing test really mean?

It means you've successfully modelled the objective observables under
investigation. No empirical science can offer more. And the only
"neutral" context is the total turing test (which, like all inductive
contexts, always has an open end, namely, the everpresent possibility
that things could turn out differently tomorrow -- philosophers call
this "inductive risk," and all empirical inquiry is vulnerable to it).

> 3) ...are there more appropriate means by which we
> could evaluate the human-like or intelligent properties of an AI
> system? ...is it possible to formulate the qualities that
> constitute intelligence in a manner which is more intuitively
> satisfying than the standard AI stuff about reasoning, but still
> more rigorous than the Turing test?

I don't think there's anything more rigorous than the total turing
test since, when formulated in the suitably generalized way I
describe, it can be seen to be identical to the empirical criterion for
all of the objective sciences. Residual doubts about it come from
four sources, as far as I can make out, and only one of these is
legitimate. The legitimate one (a) is doubts about autonomous
symbolic processes (that's what my papers are about). The three
illegitimate ones (in my view) are (b) misplaced doubts about
underdetermination and inductive risk, (c) misplaced hold-outs for
the nervous system, and (d) misplaced hold-outs for consciousness.

For (a), read my papers. I've sketched an answer to (b) above.

The quick answer to (c) [brain bias] -- apart from the usual
structure/function and multiple-realizability arguments in engineering,
computer science and biology -- is that as one approaches the
asymptotic Total Turing Test, any objective aspect of brain
"performance" that anyone believes is relevant -- reaction time,
effects of damage, effects of chemicals -- is legitimate performance
data too, including microperformance (like pupillary dilation,
heart-rate and perhaps even synaptic transmission). I believe that
sorting out how much of that is really relevant will only amount to the
fine-tuning -- the final leg of our trek to theoretic Utopia,
with most of the substantive theoretical work already behind us.

Finally, my reply to (d) [mind bias] is that holding out for
consciousness is a red herring. Either our functional attempts to
model performance will indeed "capture" consciousness at some point, or
they won't. If we do capture it, the only ones that will ever know for
sure that we've succeeded are our robots. If we don't capture it,
then we're stuck with a second level of underdetermination -- call it
"subjective" underdetermination -- to add to our familiar objective
underdetermination (b): Objective underdetermination is the usual
underdetermination of objective theories by objective data; i.e., there
may be more than one way to skin a cat; we may not happen to have
converged on nature's way in any of our theories, and we'll never be
able to know for sure. The subjective twist on this is that, apart
from this unresolvable uncertainty about whether or not the objective models
that fit all of our objective (i.e., intersubjective) observations capture
the unobservable basis of everything that is objectively observable,
there may be a further unresolvable uncertainty about whether or not
they capture the unobservable basis of everything (or anything) that is
subjectively observable.

AI, robotics and cognitive modeling would do better to learn to live
with this uncertainty and put it in context, rather than holding out
for the un-do-able, while there's plenty of the do-able to be done.

Stevan Harnad
princeton!mind!harnad

------------------------------

Date: 12 Oct 86 19:26:35 GMT
From: well!jjacobs@lll-lcc.arpa (Jeffrey Jacobs)
Subject: Searle, AI, NLP, understanding, ducks


I. What is "understanding", or "ducking" the issue...

If it looks like a duck, swims like a duck, and
quacks like a duck, then it is *called* a duck. If you cut it open and
find that the organs are something other than a duck's, *then*
maybe it shouldn't be called a duck. What it should be called becomes
open to discussion (maybe dinner).

The same principle applies to "understanding".

If the "box" performs all of what we accept to be the defining requirements
of "understanding", such as reading and responding to the same level as
that of a "native Chinese", then it certainly has a fair claim to be
called "understanding".

Most so-called "understanding" is the result of training and
education. We are taught "procedures" to follow to
arrive at a desired result/conclusion. The primary difference between
human education and Searle's "formal procedures" is a matter
of how *well* the procedures are specified. Education is primarily a
matter of teaching "procedures", whether it be mathematics, chemistry
or creative writing. The *better* understood the field, the more "formal"
the procedures. Mathematics is very well understood, and
consists almost entirely of "formal procedures". (Mathematics
was also once considered the highest form of philosophy and intellectual
attainment).
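
As an illustration of what a fully "formal procedure" looks like (the
example is mine, not from the posting above), consider Euclid's algorithm
for the greatest common divisor: it can be followed mechanically, step by
step, with no grasp of why it works.

    def gcd(a, b):
        # Euclid's algorithm: a purely formal procedure. Each step applies a
        # fixed rule to symbols; following it requires no insight into numbers.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # prints 21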

This leads to the obvious conclusion that humans do not
*understand* natural language very well. Natural language processing
via purely formal procedures has been a dismal failure.

The lack of understanding of natural languages is also empirically
demonstrable. Confusion about the meaning
of a person's words, intentions, etc., can be seen in every
interaction with your boss/students/teachers/spouse/parents/kids
etc etc.

"You only think you understand what I said..."

Jeffrey M. Jacobs
CONSART Systems Inc.
Technical and Managerial Consultants
P.O. Box 3016, Manhattan Beach, CA 90266
(213)376-3802
CIS:75076,2603
BIX:jeffjacobs
USENET: well!jjacobs

"It used to be considered a hoax if there *was* a man in the box..."

------------------------------

Date: 13 Oct 86 22:07:54 GMT
From: ladkin@kestrel.arpa
Subject: Re: Searle, AI, NLP, understanding, ducks

In article <1919@well.UUCP>, jjacobs@well.UUCP (Jeffrey Jacobs) writes:
> Mathematics is very well understood, and
> consists almost entirely of "formal procedures".

I infer from your comment that you're not a mathematician.
As a practicing mathematician (amongst other things), I'd
like to ask precisely what you mean by *well understood*?

And I would like to strongly disagree with your comment that
doing mathematics consists almost entirely of formal procedures.
Are you aware that one of the biggest problems in formalising
mathematics is trying to figure out what it is that
mathematicians do to prove new theorems?

Peter Ladkin
ladkin@kestrel.arpa

------------------------------

Date: 13 Oct 86 17:13:35 GMT
From: jade!entropy!cda@ucbvax.Berkeley.EDU
Subject: Re: Searle, Turing, Symbols, Categories

In article <167@mind.UUCP> harnad@mind.UUCP (Stevan Harnad) writes:
<as one approaches the
<asymptotic Total Turing Test, any objective aspect of brain
<"performance" that anyone believes is relevant -- reaction time,
<effects of damage, effects of chemicals -- is legitimate performance
<data too, including microperformance (like pupillary dilation,
<heart-rate and perhaps even synaptic transmission).

Does this mean that in order to successfully pass the Total Turing Test,
a robot will have to be able to get high on drugs? Does this imply that the
ability of the brain to respond to drugs is an integral component of
intelligence? What will Ron, Nancy, and the DOD think of this idea?

Turing said that the way to give a robot free will was to incorporate
sufficient randomness into its actions, which I'm sure the DOD won't like
either.
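
A toy sketch of what "incorporating randomness into its actions" could mean
(Turing's remark specified no mechanism; the softmax-style rule, the names,
and the temperature parameter below are my own illustration):

    import math
    import random

    def choose_action(scores, temperature=1.0):
        # Pick an action stochastically instead of always taking the best one.
        # "scores" maps action names to preference values; a higher temperature
        # means more randomness in the choice.
        weights = {a: math.exp(s / temperature) for a, s in scores.items()}
        r = random.uniform(0, sum(weights.values()))
        cumulative = 0.0
        for action, w in weights.items():
            cumulative += w
            if r <= cumulative:
                return action

    print(choose_action({"wait": 1.0, "move": 1.2, "speak": 0.8}))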

It seems that intelligence is not exactly the quality our government is
trying to achieve in its AI hardware and software.

------------------------------

Date: Sat, 11 Oct 86 12:03:27 -0200
From: Eyal mozes <eyal%wisdom.bitnet@WISCVM.WISC.EDU>
Subject: your paper about category induction and representation

First of all, I'd like a preprint of the full paper.

Judging by the abstract, I have two main criticisms.

The first one is that I don't see your point at all about "categorical
perception". You say that "differences between reds and differences
between yellows look much smaller than equal-sized differences that
cross the red/yellow boundary". But if they look much smaller, this
means they're NOT "equal-sized"; the differences in wave-length may be
the same, but the differences in COLOR are much smaller.

Your whole theory is based on the assumption that perceptual qualities
are something physical in the outside world (e.g., that colors ARE
wave-lengths). But this is wrong. Perceptual qualities represent the
form in which we perceive external objects, and they're determined both
by external physical conditions and by the physical structure of our
sensory apparatus; thus, colors are determined both by wave-lengths and
by the physical structure of our visual system. So there's no a priori
reason to expect that equal-sized differences in wave-length will lead
to equal-sized differences in color, or to assume that deviations from
this rule must be caused by internal representations of categories. And
this seems to completely cut the grounds from under your theory.
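
To put the point numerically (the mapping, its parameters, and the numbers
below are invented for illustration and come from neither paper): if the
mapping from wave-length to perceived hue is nonlinear -- for whatever
reason, including the structure of the visual system -- then equal steps in
wave-length will not produce equal steps in color, with no need to invoke
internal representations of categories.

    import math

    def perceived_hue(wavelength_nm, boundary=580.0, steepness=0.1):
        # Hypothetical nonlinear mapping from wave-length to a perceived-hue
        # coordinate: a sigmoid centred on an arbitrary "boundary" wave-length.
        return 1.0 / (1.0 + math.exp(-steepness * (wavelength_nm - boundary)))

    # Two equal 10 nm steps, unequal perceived differences:
    within = perceived_hue(600) - perceived_hue(590)  # both on one side: ~0.15
    across = perceived_hue(585) - perceived_hue(575)  # straddles boundary: ~0.25
    print(within, across)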

My second criticism is that, even if "categorical perception" really
provided a base for a theory of categorization, it would be very
limited; it would apply only to categories of perceptual qualities. I
can't see how you'd apply your approach to a category such as "table",
let alone "justice".

Actually, there already exists a theory of categorization that is along
similar lines to your approach, but integrated with a detailed theory
of perception and not subject to the two criticisms above; that is the
Objectivist theory of concepts. It was presented by Ayn Rand in her
book "Introduction to Objectivist Epistemology", and by David Kelley in
his paper "A Theory of Abstraction" in Cognition and Brain Theory vol.
7 pp. 329-57 (1984); this theory was integrated with a theory of
perception, and applied to categories of perceptual qualities, and in
particular to perception of colors and of phonemes, in the second part
of David Kelley's book "The Evidence of the Senses".

Eyal Mozes

BITNET: eyal@wisdom
CSNET and ARPA: eyal%wisdom.bitnet@wiscvm.ARPA
UUCP: ...!ihnp4!talcott!WISDOM!eyal

Physical address: Department of Applied Math.
Weizmann Institute of Science
Rehovot 76100
Israel

------------------------------

End of AIList Digest
********************
