AIList Digest            Friday, 18 Jul 1986      Volume 4 : Issue 169 

Today's Topics:
Natural Language - Interactive Architectures,
Philosophy - Common Sense & Intelligence Testing & Searle's Chinese Room

----------------------------------------------------------------------

Date: Mon, 14 Jul 86 16:15:01 BST
From: ZNAC450 <mcvax!kcl-cs!fgtbell@seismo.CSS.GOV>
Subject: Interactive Architectures & Common Sense


Subject: Re: Architectures for interactive systems?

In article <8607032203.AA12866@linc.cis.upenn.edu>
brant%linc.cis.upenn.edu@CIS.UPENN.EDU.UUCP writes:
>There seems to have been a great deal of work done in
>natural language processing, yet so far I am unaware of
>any attempt to build a practical yet theoretically well-
>founded interactive system or an architecture for one.
>
>When I use the phrase "practical yet theoretically well-
>founded interactive system," I mean a system that a user
>can interact with in natural language, that is capable of
>some useful subset of intelligent interactive (question-
>answering) behaviors, and that is not merely a clever hack.
>
>Many of the sub-problems have been studied at least once.
>Work has been done on various types of necessary response
>behavior, such as clarification and misconception correction.
>Work has been done on parsing, semantic interpretation, and
>text generation, and other problems as well. But has any
>work been done on putting all these ideas together in a
>"real" system?

I would like to try to build such a system, but it's not going to
be easy and will probably take several years. I'm going to have to
build it in small pieces, starting off small and gradually extending
the areas that the system can cope with.

>I see a lot of research that concludes with
>an implementation that solves only the stated problem, and
>nothing else.

That's because the time needed to construct a sufficiently general system is
greater than most people are prepared to put in (measure it in decades), and
such a system is so demanding on resources that with present machines it will
run so slowly that the user gets bored waiting for a response (like UN*X :-)).

>Presumably, a "real user" will not want to
>have to run system A to correct invalid plans, system B to
>answer direct questions, system C to handle questions with
>misconceptions, and so forth.
>
No, what we ideally want is a system which can hold a conversation in real
time, with user models, an idea of `context', and a great deal of information
about the world in general. The last, by the way, is the real stumbling block.
Current models of knowledge representation just aren't up to coping with
large amounts of information. This is why expert systems, for example, tend
to have 3,000 rules or fewer. It is true that dealing with large amounts of
information will become easier as hardware improves and the LIPS (Logical
Inferences Per Second) rate increases. However, that won't solve the real
problem, which is that we just don't know how to organise information in
a sufficiently efficient manner at present.
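
To make the scaling point concrete, here is a rough sketch (my own
illustration, in modern Python; the rules and facts are invented) of a naive
forward-chaining rule matcher. Every cycle rescans the entire rule base, so
the cost of each inference grows with the number of rules no matter how fast
the raw LIPS rate gets:

# Naive forward chaining over invented facts -- illustrative only.
def forward_chain(rules, facts):
    # Fire every rule whose premises are all known, until nothing new appears.
    # Each pass rescans the whole rule base, so work per inference grows with
    # the number of rules, independent of raw hardware speed.
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

rules = [
    (["has_feathers", "lays_eggs"], "is_bird"),
    (["is_bird", "can_fly"], "can_migrate"),
]
print(forward_chain(rules, ["has_feathers", "lays_eggs", "can_fly"]))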

>I would be interested to get any references to work on such
>integrated systems.

If you want to solve the problem of building integrated NLP systems,
you are aiming to produce truly intelligent behaviour -- if you accept
the definition that AI is about getting machines to perform tasks which
require intelligence in humans. The problems of building integrated NLP
systems are the problems of AI, period -- i.e. knowledge representation,
reasoning by analogy, reasoning by inference, dealing with large search
spaces, forming user models, etc.

I believe that in order to perform these tasks efficiently, we are going to
have to look at how people perform these tasks. What I mean by this is that
we are going to have to take a long hard look at the way the brain works --
down at the `hardware' level, i.e. neurons. The problem may well be that our
approach to AI so far has been too `high-level'. We have attempted to
simulate high-level activities of the human brain (reasoning by analogy,
symbol perception etc.) by high-level algorithms.

These simulations have not been unsuccessful, but they have not exactly
been very efficient either. It is about time we stopped trying to simulate,
and performed some real analysis of what the brain does, at the bottom
level. If this means constructing computer models of the brain, then so
be it.
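
To give a flavour of what a bottom-level model might look like, here is a
minimal sketch (my own illustration, not a serious brain model; the weights,
inputs and threshold are invented) of a single artificial neuron that sums
weighted inputs and fires when a threshold is crossed:

# A single threshold neuron, McCulloch-Pitts style -- illustrative only.
def neuron(inputs, weights, threshold):
    # Fire (return 1) when the weighted sum of the inputs reaches the threshold.
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1: both inputs active
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0: only one input active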

Two books which argue this point of view much better than I can are:
Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas R. Hofstadter,
and Metamagical Themas, also by Douglas R. Hofstadter.


>Also, what are people's opinions on this
>subject: are practical NLP systems too hard to build now?

No, but they are *very* hard to build. An integrated system would take
more resources than anyone is prepared to spend.


>Should we
>leave the construction of practical systems to private enterprise
>and restrict ourselves to the basic research problems?

Not at all. If we can't build something useful at the end of the day
then we haven't justified the cost of all this effort. But a lot
more basic research has to be done before we can even think about
building a practical system.

----francis

mcvax!ukc!kcl-cs!fgtbell


Subject: Re: common sense
References: <8607031718.AA14552@ucbjade.Berkeley.Edu>

In article <8607031718.AA14552@ucbjade.Berkeley.Edu>
KVQJ@CORNELLA.BITNET.UUCP writes:
>My point is this: I think it is intrinsically impossible to program
>common sense because a computer is not a man. A computer cannot
>experience what man can; it cannot see or make ubiquitous judgements
>that man can.

What if you allow a computer to gather data from its environment?
Wouldn't it then be possible for it to make predictive decisions, based on
what had happened before? Isn't this what humans do?

I thought common sense was what allowed one to say what was *likely*
to happen, based on one's previous experiences. Is there some reason
why computers couldn't do this?
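
To illustrate what I mean (a rough sketch of my own, with invented
observations), saying what is *likely* based on previous experience can be as
simple as keeping counts of what followed each situation before:

# Predict the most likely outcome of a situation from past observations.
# The (situation, outcome) pairs below are invented for illustration.
from collections import Counter, defaultdict

def learn(observations):
    history = defaultdict(Counter)   # situation -> counts of outcomes seen
    for situation, outcome in observations:
        history[situation][outcome] += 1
    return history

def predict(history, situation):
    # Return the most frequently observed outcome, or None if never seen.
    if situation not in history:
        return None
    return history[situation].most_common(1)[0][0]

past = [("dark_clouds", "rain"), ("dark_clouds", "rain"),
        ("dark_clouds", "dry"), ("clear_sky", "dry")]
print(predict(learn(past), "dark_clouds"))  # 'rain'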

-----francis

mcvax!ukc!kcl-cs!fgtbell

------------------------------

Date: Mon, 14 Jul 86 17:02:52 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Blade Runner and Intelligence Testing (Vol 4 # 165).

The test used in the film is to look for an emotional response to the
questions. They are fired off in quick succession, without giving the
candidate time to think. He might then get angry...

> By the way, the fastest way to identify human
> intelligence may be to look for questions that a human will recognize
> as nonsense or outside his expected sphere of knowledge ("How long
> would you broil a 1-pound docket?" "Is the Des Moines courthouse taller
> or shorter than the Wichita city hall?") but that an imitator might try
> to bluff through. -- KIL

``Bluff''? What's the payoff?

Gordon Joly
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%UK.AC.QMC.MATHS%UK.AC.QMC.CS@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj

------------------------------

Date: Tue, 15 Jul 86 11:34:27 bst
From: Gordon Joly <gcj%qmc-ori.uucp@Cs.Ucl.AC.UK>
Subject: Blade Runner and Intelligence Testing (Vol 4 # 165) -- Coda

Interesting point about the imitator not being able to discover
what is a valid question and what is a piece of nonsense. Reminds
me of the theory of automatic integration in computer algebra.
The analogy is a bit thin, but basically the algebra system decides
first whether or not it has the power (i.e. there exists an algorithm)
before trying to proceed with the integration.
In fact, the machine never integrates; it just differentiates in a
clever way to get near to the answer. It then alters the result to
get the correct answer, exploiting the inverse nature of differentiation
and integration. I said it was a bit thin; the integrator is working
backwards from the answer to find the correct question :-)
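
For anyone unfamiliar with the trick, here is a rough sketch of the idea (my
own illustration in modern Python, using the sympy library; the integrand and
the guessed form are chosen just for the example): propose an antiderivative
with unknown coefficients, differentiate the guess, and solve for the
coefficients so that the derivative matches the integrand.

# Integrate x*exp(x) by differentiating a guessed antiderivative --
# illustrative only; assumes sympy is installed.
import sympy as sp

x, a, b = sp.symbols('x a b')
integrand = x * sp.exp(x)

# Guess (ansatz): an antiderivative of the same general shape.
guess = (a * x + b) * sp.exp(x)

# Differentiate the guess; the residual must vanish for every x.
residual = sp.expand(sp.diff(guess, x) - integrand)
poly = residual.coeff(sp.exp(x))          # a*x + a + b - x
solution = sp.solve([poly.coeff(x, 1), poly.coeff(x, 0)], [a, b])

antiderivative = guess.subs(solution)
print(antiderivative)                                       # (x - 1)*exp(x)
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # 0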

Gordon Joly
INET: gcj%maths.qmc.ac.uk%cs.qmc.ac.uk@cs.ucl.ac.uk
EARN: gcj%UK.AC.QMC.MATHS%UK.AC.QMC.CS@AC.UK
UUCP: ...!seismo!ukc!qmc-ori!gcj

------------------------------

Date: Mon, 14 Jul 86 21:17:10 est
From: Perry Wagle <wagle%iuvax.indiana.edu@CSNET-RELAY.ARPA>
Subject: common sense

[This is a response to ucbjade!KVQJ's note on common sense.]

The flaw in Searle's Chinese Room Experiment is that he gets bogged down
in considering the demon to be doing the "understanding" rather than the
formal rule system itself. And of course it is absurd to claim that the
demon is understanding anything -- just as it is absurd to claim that the
individual neurons in your brain are understanding anything.

Perry Wagle, Indiana University, Bloomington Indiana.
...!ihnp4!inuxc!iuvax!wagle (USENET)
wagle@indiana (CSNET)
wagle%indiana@csnet-relay (ARPA)

------------------------------

Date: Tue, 15 Jul 86 10:57:50 EDT
From: "Col. G. L. Sicherman" <colonel%buffalo.csnet@CSNET-RELAY.ARPA>
Subject: Re: common sense

In article <860714-094227-1917@Xerox>, Newman.pasa@XEROX.COM asks:
>
> However, I think that my point still stands. Searle's argument seems to
> assume some "magical" property ... of biology that allows neurons ...
> to produce a phenomenon ... that is not producible by other
> deterministic systems.
>
> What is this strange feature of neurobiology?

I believe that the mysterious factor is not literally "magic" (in your
broad sense), but merely "invisible" to the classical scientific method.
A man's brain is very much an _interactive_ system. It interacts
continually with all of the world that it can sense.

On the other hand, laboratory experiments are designed to be closed
systems. They are designed to be controllable; they rely on artificial
input, at least in the experimental stage. (When such systems are used in
the field, they may be regarded as intelligent; even a door controlled by
an electric eye meets our intuitive criterion for intelligence.)

Just what do we demand of "artificial intelligence?" Opening doors
for us? Writing music and poems for us? Discoursing on philosophy
for us? --Or doing things for _itself,_ and to Hell with humans?
I don't think that A.I. people agree about this.

------------------------------

Date: 15 Jul 86 08:16:00 EDT
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Searle and Understanding


This is in response to recent discussion about whether AI systems
can/will understand things as humans do. Searle's Chinese room
example suggests the extent to which the implementation of a formal
system may or may not understand something. Here's another,
perhaps simpler, example that's been discussed on the philosophy
list.

Imagine we are visited by ETS - an extra-terrestrial scientist.
He knows all the science we do plus a lot more - quarks,
quantum mechanics, neurobiology, you-name-it. Being smart,
he quickly learns our language and studies our (pitifully
primitive) biology, so he knows about how we perceive as well.
But, like all of his species, he's totally color-blind.

Now, making the common assumption that color-knowledge cannot
be conveyed verbally or symbolically, does ETS "understand"
the concept of yellow?

I think the example shows that there are two related meanings
of "understanding". Certainly, in a formal, scientific sense,
ETS knows (understands-1) as much about yellow as anyone - all
the associated wavelengths, retinal reactions, brain-states,
etc. He can use this concept in formal systems, manipulate it,
etc. But *something* is missing - ETS doesn't know
(understand-2) "what it's like to see yellow", to borrow/bend
Nagel's phrase.

It's this "what it's like to be a subject experiencing X" that
eludes capture (I suppose) by AI systems. And I think the
point of the Chinese room example is the same - the system as
a whole *does* understand-1 Chinese, but doesn't understand-2
Chinese.

To get a bit more poignant, what systems understand-2 pain?
Would you really feel as guilty kicking a very sophisticated
robot as kicking a cat? I think it's the ambiguity between
these senses of understanding that underlies a lot of the debate.
They correspond somewhat to Dennett's "program-receptive" and
"program-resistant" properties of consciousness.

As far as I can see, the lack of understanding-2 in artificial
systems poses no particular barrier to their performance.
E.g., no doubt we could build a machine which in fact would
correctly label colors - but that is not a reason to suppose
that it's *conscious* of colors, as we and some animals are.
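
To make that contrast concrete, a colour-labelling machine need be nothing
more than a lookup from measured wavelength to a name. A rough sketch of my
own (the band boundaries are approximate round numbers), with nothing in it
that looks like a locus of experience:

# Label a colour from a measured wavelength in nanometres -- illustrative only.
def label_color(wavelength_nm):
    bands = [(380, "violet"), (450, "blue"), (495, "green"),
             (570, "yellow"), (590, "orange"), (620, "red"), (750, None)]
    for (low, name), (high, _) in zip(bands, bands[1:]):
        if low <= wavelength_nm < high:
            return name
    return "not visible"

print(label_color(580))  # 'yellow'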

Nonetheless, *even if there are no performance implications*,
there is a real something-or-other we have going on inside us
that does not go on inside Chinese rooms, robots, etc., and no
one knows how even to begin to address the replication of this
understanding-2 (if indeed anyone wants to bother).

John Cugini <Cugini@NBS-VMS>

------------------------------

Date: Tue 15 Jul 86 12:31:07-PDT
From: Pat Hayes <PHayes@SRI-KL>
Subject: Re: AIList Digest V4 #166

re: Searle's Chinese room
There has been by now an ENORMOUS amount of discussion of this argument, far
more than it deserves. For a start, check out the BBS treatment surrounding
the original paper, with all the commentaries and replies.
Searle's position is quite coherent and rational, and ultimately
whether or not he is right will have to be decided empirically, I
believe. This is not to say that all his arguments are good, but
that's a different question. He thinks that whatever it is about the
brain ( or perhaps the whole organism ) which gives it the power of
intentional thought will be something biological. No mechanical
electronic device will therefore really be able to *think about* the
world in the way we can. An artificial brain might be able to -- it's
not a matter of natural vs. artificial, notice -- and it's just possible
that some other kind of hardware might support intentional thinking,
although he believes not; but certainly, it can't be done by a
representationalist machine, whose behavior is at best a simulation of
thought ( and which, he believes, will never in fact be a successful
simulation ). Part of this position is that the behavior of a system
is no guide to whether or not it is *really* thinking. If his closest
friend died, and an autopsy revealed, to Searle's great surprise, that
he had been a computational robot all his life, then Searle would say
that the man hadn't been aware of anything all along. The 'Turing test'
is quite unconvincing to Searle.
This intellectual position is quite consistent and impregnable to argument.
It turns ultimately on an almost legal point: if a robot behaves
'intelligently', is that enough reason to attribute 'intelligence'
to it? ( Substitute your favorite psychological predicate. ) Turing and his
successors say yes, Searle says no. I think all we can do is agree to
disagree for the time being. When the robots get to be more convincing, let's
come back and ask him again ( or send one of them to do it ).
Pat Hayes

------------------------------

End of AIList Digest
********************
