AIList Digest             Friday, 7 Mar 1986       Volume 4 : Issue 46 

Today's Topics:
Theory - Knowledge & Dreyfus & Turing Test

----------------------------------------------------------------------

Date: Sat 1 Mar 86 20:04:39-PST
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: Alan Watts on AI

I thought AIList readers might be interested in the following
excerpt from "Oriental Omnipotence" in THE ESSENTIAL ALAN WATTS:

We must begin by showing the difference between Western and
Eastern ideas of omniscience and omnipotence. A Chinese Buddhist poem
says:
You may wish to ask where the flowers come from,
But even the God of Spring doesn't know.

A Westerner would expect that, of all people, the God of Spring would
know exactly how flowers are made. But if he doesn't know, how can he
possibly make them? A Buddhist would answer that the question itself is
misleading, since flowers are grown, not made. Things which are made are
either assemblages of formerly separate parts (like houses) or
constructed by cutting and shaping from without inwards (like pots of
clay or images). But things which are grown formulate their own
structure and differentiate their own parts from within outwards. ...
If, then, the God of Spring does not make the flowers, how does
he produce them? The answer is that he does so in the same way that you
and I grow our hair, beat our hearts, structure our bones and nerves,
and move our limbs. To us, this seems a very odd statement because we
do not ordinarily think of ourselves as actively growing our hair in the
same way that we move our limbs. But the difference vanishes when we
ask ourselves just HOW we raise a hand, or just how we make a mental
decision to raise a hand. For we do not know-- or, more correctly, we do
know but we cannot describe how it is done in words.
To be more exact: the process is so innate and so SIMPLE that
it cannot be conveyed by anything so complicated and cumbersome as human
language, which has to describe everything in terms of a linear series
of fixed signs. This cumbersome way of making communicable
representations of the world makes the description of certain events as
complicated as trying to drink water with a fork. It is not that these
actions or events are complicated in themselves: the complexity lies in
trying to fit them into the clumsy instrumentality of language, which
can deal only with one thing (or "think") at a time.
Now the Western mind identifies what it knows with what it can
describe and communicate in some system of symbols, whether linguistic
or mathematical-- that is, with what it can think about. Knowledge is
thus primarily the content of thought, of a system of symbols which make
up a very approximate model or representation of reality. In somewhat
the same way, a newspaper photograph is a representation of a natural
scene in terms of a fine screen of dots. But as the actual scene is not
a lot of dots, so the real world is not in fact a lot of things or
"thinks".
The Oriental mind uses the term KNOWLEDGE in another sense
besides this-- in the sense of knowing how to do actions which cannot be
explained. In this sense, we know how to breathe and how to walk, and
even how to grow hair, because that is just what we do!

------------------------------

Date: Sat 1 Mar 86 20:10:32-PST
From: Stuart Russell <RUSSELL@SUMEX-AIM.ARPA>
Subject: Addressing some of Dreyfus' specific points

To address some of the actual content of Dreyfus' recent talk at Stanford,
delivered to an audience consisting mostly of AI researchers:

1) The discussion after the talk was remarkably free of strong dissent, for
the simple reason that Dreyfus is now making a sloppy attempt at a
cognitive model for AI, rather than making any substantive criticism of AI.
Had his talk been submitted to AAAI as a paper, it would probably have been
rejected as containing no new ideas and weak empirical backing.

2) The backbone of his argument is that human *experts* solve problems by
accessing a store of cached, generalized solutions rather than by extensive
reasoning. He admits that before becoming expert, humans operate just like
AI reasoning systems, otherwise they couldn't solve any problems and thus
couldn't cache solutions. He also admits that even experts use reasoning
to solve problems insufficiently similar to those they have seen before.
He doesn't say how solutions are to be abstracted before caching, and
doesn't seem to be aware of much of the work on chunking, rule compilation,
explanation-based generalization and macro-operator formation which has
been going on for several years. Thus he seems to be advocating a
performance mechanism that was proposed long ago in AI, acts as if he
(or his brother) invented it, and assumes, therefore, that AI can't
have made any progress towards understanding it.
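
In outline, the mechanism amounts to something like the following
minimal sketch (Python; all names and data are invented for
illustration, not taken from any system under discussion): solve novel
problems by explicit reasoning, abstract and cache the solution, and
answer sufficiently similar problems from the cache thereafter. The
question he leaves open is precisely what generalize() should do --
which is what the work on chunking and macro-operators addresses.

    # Illustrative sketch only; not any particular system's mechanism.
    cache = {}  # generalized problem description -> cached solution

    def generalize(problem):
        """Abstract a problem to its cache key. Here we keep only the
        problem's structure (its attribute names) and discard the
        specific values -- a deliberately crude abstraction."""
        return tuple(sorted(problem))

    def solve_by_reasoning(problem):
        """Stand-in for a slow, search-based 'novice' problem solver."""
        return ["achieve " + k + "=" + v
                for k, v in sorted(problem.items())]

    def expert_solve(problem):
        key = generalize(problem)
        if key in cache:                        # expert: cached answer
            return cache[key]
        solution = solve_by_reasoning(problem)  # novice: reason it out
        cache[key] = solution                   # cache for next time
        return solution

    # The first call reasons and caches; the second, structurally
    # similar call is answered from the cache (and its staleness shows
    # how much hangs on the choice of abstraction).
    print(expert_solve({"goal": "win", "opening": "sicilian"}))
    print(expert_solve({"goal": "draw", "opening": "french"}))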

3) He proposes that humans access previous situations and their solutions
by an "intuitive, holistic matching process" based on "total similarity"
rather than on "breaking down situations into features and matching on
relevant features"
. When I asked him what he meant by this, he said
he couldn't be any more specific and didn't know any more than he'd said.
(He taped our conversation, so he can no doubt correct the wording.)
In the talk, he mentioned Roger Shepard's work on similarity (stimulus
generalization) as support for this view, but when I asked him how the
work supported his ideas, it became clear that he knew very little about it.
Shepard's results can be explained equally well if situations are
described in terms of features, but more importantly they only apply when
the subject has no idea of which parts of the situation are relevant to the
solution, which is hardly the case when an expert is solving problems. In
fact, the fallacy of analogical reasoning by total similarity (which is the
only mechanism he is proposing to support his expert phase of skilled
performance) has long been recognized in philosophy, and also more recently
in AI. Moreover, the concept of similarity without any goal context (i.e.
without any purpose for which the similarity will be used) seems to be
incoherent. Perhaps this is why he doesn't attempt to define what it means.
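
To make the contrast concrete, here is a minimal sketch (Python; the
cases, features and weights are invented for illustration) of
feature-based retrieval in which similarity is weighted by
goal-dependent relevance. The same situation retrieves different
precedents under different goals -- which is exactly the sense in
which "total similarity" without a goal context is ill-defined.

    # Illustrative sketch only: feature-based case retrieval with
    # goal-dependent relevance weights.

    def similarity(case, situation, relevance):
        """Weighted overlap of matching features; the weights encode
        which features are relevant to the current goal."""
        return sum(w for f, w in relevance.items()
                   if case.get(f) == situation.get(f))

    def retrieve(cases, situation, relevance):
        """Return the stored case most similar to the situation,
        relative to the given relevance weighting."""
        return max(cases,
                   key=lambda c: similarity(c, situation, relevance))

    cases = [
        {"opening": "sicilian", "material": "even", "advice": "attack"},
        {"opening": "french",   "material": "down", "advice": "simplify"},
    ]
    situation = {"opening": "sicilian", "material": "down"}

    # Under an attacking goal the opening dominates; under a survival
    # goal, material does. The "most similar" case differs accordingly.
    print(retrieve(cases, situation, {"opening": 2.0, "material": 0.5}))
    print(retrieve(cases, situation, {"opening": 0.5, "material": 2.0}))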

4) His final point is that such a mechanism cannot be implemented in a
system which uses symbolic descriptions. Quite apart from the fact that
the mechanism doesn't work, and cannot produce any kind of useful
performance, there is no reason to believe this, nor does he give one.

In short, to use the terminology of review forms, he is now doing AI but
the work doesn't contain any novel ideas or techniques, does not report
on substantial research, does not properly cite related work and does
not contribute substantially to knowledge in the field. If it weren't
for the bee in his bonnet about proving AI (except the part he's now doing)
to be fruitless and dishonest, he might be able to make a useful
contribution, especially given his training in philosophy.

Stuart Russell
Stanford Knowledge Systems Lab

------------------------------

Date: Sat, 1 Mar 86 14:23:45 est
From: Jeffrey Greenberg <green@ohio-state.ARPA>
Reply-to: green@osu-eddie.UUCP (Jeffrey Greenberg)
Subject: Re: Technology Review article


> re:
> Dreyfus' distinction between learning symbolically how to do a task
> and 'doing' the task...i.e. body's knowledge.
>
I agree with the Dreyfus brothers - the difficulty many AI people have
(in my opinion) is a fundamental confusion of
"knowledge of" versus "knowledge that."

------------------------------

Date: 28 Feb 86 02:37:13 GMT
From: hplabs!ames!eugene@ucbvax.berkeley.edu (Eugene Miya)
Subject: Re: Technology Review article (Dreyfus actually)

<1814@bbncc5.UUCP>
>
> About 14 years ago Hubert Dreyfus wrote a paper titled "Why Computers Can't
> Play Chess"
- immediately thereafter, someone at the MIT AI lab challenged
> Dreyfus to play one of the chess programs - which trounced him royally -
> the output of this was an MIT AI Lab Memo titled "The Artificial Intelligence
> of Hubert Dreyfus, or Why Dreyfus Can't Play Chess"
.
>
> The document was hilarious. If anyone still has a copy, I'd like to arrange
> a xerox of it.
>
> Miles Fidelman (mfidelman@bbncc5.arpa)

Excuse the fact that I reproduced all of the above rather than digesting it.
I just attended a talk given by Dreyfus (for the first time). I think
the AI community is FORTUNATE to have a loyal opposition in
Dr. Dreyfus. In his defense, Dreyfus is somewhat kind to the AI
community (in contrast to some AI critics I know); for instance, he does
believe in the benefit of expert systems and expert assistants.
Dreyfus feels that the AI community harped on the above:
Men play chess.
Computers play chess.
Dreyfus is a man.
Computer beat Dreyfus.
Therefore, computers can beat man playing chess.
He pointed out he sent his brother (supposedly captain of the Harvard
chess team at one time), who beat the computer (we should write to
his brother at UCB CS to verify this, I suppose).
While I do not fully agree with Dreyfus's philosophy or his
"methodology," he is a bright thinker and critic. [One point we
do not agree on: he believes in the validity of the Turing test;
I do not (in the way it currently stands).]

--eugene miya
NASA Ames Research Center
{hplabs,ihnp4,dual,hao,decwrl,allegra}!ames!aurora!eugene
eugene@ames-nas.ARPA
p.s. I would not mind seeing a copy of the paper myself. :-)

------------------------------

Date: 3 Mar 86 02:17:00 GMT
From: pur-ee!uiucdcs!uiucdcsp!bsmith@ucbvax.berkeley.edu
Subject: Re: "self-styled philosophers"

William James once wrote that all great theories go through three
distinct stages: first, everyone claims the theory is simply wrong,
and not worth taking seriously. Second, people start saying that
maybe it's true, but it's trivial. And third, people are heard to
say that not only is it true and important, but they thought of it
first.
Here at the University of Illinois, it seems to be de rigueur
to laugh at and deride Dreyfus whenever his name comes up. I am
convinced the majority of these people have never read any of
Dreyfus' work--however, this is unimportant to them (clearly I don't
mean everyone here). There are also those who spend a great deal of
time and effort rejecting everything Dreyfus says. For example,
recently Dr. Buchanan (of Stanford) gave a lecture here. He purported
to be answering Dreyfus, but in the great majority of cases agreed
with him (always saying something like, "Well, maybe it's true, but
who cares?"). It seems to me that, if Dreyfus is so unimportant, it
is very strange indeed that so many people get so offended by
everything he says and does. Perhaps AI researchers ought to be less
sensitive and start encouraging this sort of interdisciplinary
activity. Perhaps then AI will move forward and finally live up to
its promise.
Barry Smith

------------------------------

Date: Wed, 5 Mar 86 15:38:08 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Tale for Marvin the Paranoid Android.

> From AIList Vol 4 # 33:-
> His main thesis is that there are certain human qualities and
> attributes, for example certain emotions, that are just not the
> kinds of things that are amenable to mechanical mimicry.
> ...
> Peter Ladkin

> From AIList Vol 4 # 41:-
> As I pointed out, but you deleted, his major argument is that
> there are some areas of human experience related to intelligence
> which do not appear amenable to machine mimicry.
> ...
> Peter Ladkin

Could these areas be named exactly? Agreed that there are emotional
aspects that cannot be programmed into a machine, what parts of the
``human experience related to intelligence'' will also remain
outside of the machine's grip?

Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj

------------------------------

Date: Mon, 3 Mar 86 12:54:02 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: The Turing Test - A Third Quantisation?

The original basis for the Turing test was to see if it was possible
to distinguish, purely from text, whether you were talking to a man
or a woman. The extension of this, the Turing test itself, seeks to give
a criterion for deciding whether or not an intelligent system is
"truly intelligent". A human asks questions and receives answers in
textual form. (S)he then has to decide whether it is a machine behind
the screen or not.
Now, supposing a system has been built which "passes" the test. Why
not take the process one stage further? Why not try to design an
intelligent system which can decide whether *it* is talking to a machine
or not?
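
As a sketch of what this would look like (Python; every interface here
is an invented stub, since no such system exists), both stages share
the same question/answer loop -- only the judge changes:

    # Illustrative sketch only; judges and respondents are stubs.

    def run_test(judge, respondent, questions):
        """Conduct a textual question/answer session and return the
        judge's verdict: True if it decides a machine is answering."""
        transcript = [(q, respondent(q)) for q in questions]
        return judge(transcript)

    def canned_respondent(question):
        return "I would rather not say."

    def human_judge(transcript):
        return False  # a human's judgement would be collected here

    # The proposed further stage: the judge is itself a program that
    # must decide whether *it* is talking to a machine.
    def machine_judge(transcript):
        answers = [a for _, a in transcript]
        return len(set(answers)) == 1  # crude cue: identical answers

    questions = ["Are you a machine?", "Describe your last dream."]
    print(run_test(human_judge, canned_respondent, questions))    # False
    print(run_test(machine_judge, canned_respondent, questions))  # True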

Gordon Joly
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj

------------------------------

End of AIList Digest
********************
