AIList Digest           Wednesday, 13 Apr 1988     Volume 6 : Issue 67 

Today's Topics:
Opinion - The Future Of AI

----------------------------------------------------------------------

Date: 31 Mar 88 09:18:18 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net (Simon Brooke)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

In article <5789@swan.ulowell.edu> sbrunnoc@eagle.UUCP (Sean Brunnock) writes:
>In article <962@daisy.UUCP> klee@daisy.UUCP (Ken Lee) writes:
>>What do people think of the PRACTICAL future of artificial intelligence?
>>
>>Is AI just too expensive and too complicated for practical use? I
>>
>>Does AI have any advantage over conventional programming?
>
> Bear with me while I put this into a sociological perspective. The first
>great "age" in mankind's history was the agricultural age, followed by the
>industrial age, and now we are heading into the information age. The author

Oh God! I suppose the advantage of the net is that it allows us to betray
our ignorance in public, now and again. This is 'sociology'? Dear God!

> For example, give a machine access to knowledge of aerodynamics,
>engines, materials, etc. Now tell this machine that you want it to
>design a car that can go this fast, use this much fuel per mile, cost
>this much to make, etc. The machine thinks about it and out pops a
>design for a car that meets these specifications.

And here we really do have God - the General Omnicompetent Device - which
can search an infinite space in finite time. (Remember that Deep Thought
took 7 1/2 million years to calculate the answer to the ultimate question
of life, the universe, and everything - and at the end of that time could
not say what the question was).
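
To see just how omnicompetent the Device would have to be, consider a
toy sketch (in Python; every parameter, range and "engineering formula"
below is invented purely for illustration) of naive generate-and-test
over a crude car-design space:

from itertools import product

# All parameters, ranges, and the "engineering model" below are
# invented for illustration only.
engines   = range(50, 301, 10)                  # horsepower
weights   = range(800, 2001, 50)                # kilograms
drags     = [c / 100.0 for c in range(20, 41)]  # drag coefficient
materials = ["steel", "aluminium", "composite"]

def meets_spec(hp, kg, cd, mat):
    # Stand-ins for real aerodynamics/engine/materials knowledge.
    top_speed = 2000.0 * hp / (kg * cd)   # fictitious formula
    cost = kg * {"steel": 1, "aluminium": 3, "composite": 9}[mat]
    return top_speed > 180 and cost < 5000

space = list(product(engines, weights, drags, materials))
feasible = [d for d in space if meets_spec(*d)]
print(len(feasible), "of", len(space), "crude designs pass")
# Four coarse parameters already give ~41,000 candidates; a real car
# has hundreds of continuous ones, so exhaustive search is hopeless.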

Seriously, if this is why you are studying AI, throw it in and study some
philosophy. There *are* good reasons for studying AI: some people do it in
order to 'find out how people work' - I have no idea whether this project
is well directed, but it is certain to raise a lot of interesting
problems. Another is to use it as a tool for exploring our understanding
of such concepts as 'understanding', 'knowledge', 'intelligence' - or, in
my case, 'explanation'. Obviously I believe this project is well directed,
and I know it raises lots of interesting problems...

And occasionally these interesting problems will spin off technologies
which can be applied to real world tasks. But to see AI research as driven
by the need to produce spin-offs seems to me to be turning the whole
enterprise on its head.


** Simon Brooke *********************************************************
* e-mail : simon@uk.ac.lancs.comp *
* surface: Dept of Computing, University of Lancaster, LA1 4YW, UK.   *
*************************************************************************

------------------------------

Date: 7 Apr 88 18:35:41 GMT
From: trwrb!aero!srt@ucbvax.Berkeley.EDU (Scott R. Turner)
Subject: Re: The future of AI - my opinion

I think the important point is that as soon as AI figures something out,
it is not only no longer considered to be AI, it is also no longer considered
to be intelligence.

Expert systems are a good example. The early theory was: let's try to
build programs like experts, and that will give us some idea of why
those experts are intelligent. Nowadays, people say "expert
systems - oh, that's just rule application." There's some truth to
that viewpoint - I don't think expert systems have a lot to say about
intelligence - but it's a bad trap to fall into.
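
To be fair to that viewpoint, "just rule application" is easy to make
concrete. Here is a minimal forward-chaining sketch in Python; the
facts and rules are invented toy examples, not taken from any real
system:

# Minimal forward chaining: fire any rule whose conditions are all
# present in the fact base, until nothing new can be added.
RULES = [
    ({"fever", "rash"},   "suspect-measles"),
    ({"suspect-measles"}, "refer-to-specialist"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "rash"}, RULES)))
# ['fever', 'rash', 'refer-to-specialist', 'suspect-measles']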

Eventually we'll build a computer that can pass the Turing Test and
people will still be saying "That's not intelligence, that's just a
machine."

-- Scott Turner

------------------------------

Date: 7 Apr 88 18:13:10 GMT
From: bloom-beacon.mit.edu!boris@bloom-beacon.mit.edu (Boris N
Goldowsky)
Subject: Re: The future of AI - my opinion


In article <28619@aero.ARPA> srt@aero.ARPA (Scott R. Turner) writes:

>Eventually we'll build a computer that can pass the Turing Test and
>people will still be saying "That's not intelligence, that's just a
>machine."
>
> -- Scott Turner

This may be true, but at the same time the notion that a machine could
never think is slowly being eroded away. Perhaps by the time such a
"Turing Machine"* could be built, "just a machine" will no longer
imply non-intelligence, because there'll be too many semi-intelligent
machines around.

But I think it is a good point that every time we do begin to understand
some subdomain of intelligence, it becomes clear that there is much
more left to be understood...

->Boris G.

(*sorry.)
--
Boris Goldowsky boris@athena.mit.edu or @adam.pika.mit.edu
%athena@eddie.UUCP
@69 Chestnut St.Cambridge.MA.02139
@6983.492.(617)

------------------------------

Date: 6 Apr 88 18:27:25 GMT
From: ssc-vax!bcsaic!rwojcik@beaver.cs.washington.edu (Rick Wojcik)
Subject: Re: The future of AI

In article <1134@its63b.ed.ac.uk> gvw@its63b.ed.ac.uk (G Wilson) writes:
>[re: my reference to natural language programs]
>Errmmm...show me *any* program which can do these things? To date,
>AI has been successful in these areas only when used in toy domains.
>
NLI's Datatalker, translation programs marketed by Logos, ALPs, WCC, &
other companies, LUNAR, the LIFER programs, CLOUT, Q&A, ASK, INTELLECT,
etc. There are plenty. All have flaws. Some are more "toys" than
others. Some are more commercially successful than others. (The goal
of machine translation, at present, is to increase the efficiency of
translators--not to produce polished translations.)

>... Does anyone think AI would be as prominent
>as it is today without (a) the unrealistic expectations of Star Wars,
>and (b) America's initial nervousness about the Japanese Fifth Generation
>project?
>
I do. The Japanese are overly optimistic. But they have shown greater
persistence of vision than Americans in many commercial areas. Maybe
they are attracted by the enormous potential of AI. While it is true
that Star Wars needs AI, AI doesn't need Star Wars. It is difficult to
think of a scientific project that wouldn't benefit from computers that
behave more intelligently.

>Manifest destiny?? A century ago, one could have justified
>continued research in phrenology by its popularity. Judge science
>by its results, not its fashionability.
>
Right. And in the early 1960s a lot of people believed that we
couldn't land people on the moon. When Sputnik I was launched, my 5th
grade teacher told the class that they would never orbit a man around
the earth. I don't know if phrenology ever had a respectable following
in the scientific community. AI does, and we ought to pursue it whether
it is popular or not.

>I think AI can be summed up by Terry Winograd's defection. His
>SHRDLU program is still quoted in *every* AI textbook (at least all
>the ones I've seen), but he is no longer a believer in the AI
>research programme (see "Understanding Computers and Cognition",
>by Winograd and Flores).

Weizenbaum's defection is even better known, and his Eliza program is
cited (but not quoted :-) in every AI textbook too. Winograd took us a
quantum leap beyond Weizenbaum. Let's hope that there will be people to take
us a quantum leap beyond Winograd. But if our generation lacks the will
to tackle the problems, you can be sure that the problems will wait
around for some other generation. They won't get solved by pessimists.
Henry Ford had a good way of putting it: "If you believe you can, or if
you believe you can't, you're right."

--
Rick Wojcik csnet: rwojcik@boeing.com
uucp: {uw-june uw-beaver!ssc-vax}!bcsaic!rwojcik
address: P.O. Box 24346, MS 7L-64, Seattle, WA 98124-0346
phone: 206-865-3844

------------------------------

Date: 8 Apr 88 12:24:51 GMT
From: otter!cdfk@hplabs.hp.com (Caroline Knight)
Subject: Re: Re: The future of AI - my opinion

The Turing Test is hardly adequate - I'm surprised that people
still bring it up - indeed, it is exactly because people's
expectations change with what they have already seen on a computer
that this is a test with continuously changing criteria.

For instance, take someone who has never heard of computers
and show them any competent game: the technically
unsophisticated may well believe the machine is playing
intelligently (I have trouble with my computer beating
me at Scrabble), but those who have become familiar with
such phenomena "know better" - it's "just programmed".

The day when we have won is the inverse of the Turing Test - someone
will say this has to be a human not a computer - a computer
couldn't have made such a crass mistake - but then maybe
the computer just wanted to win and looked like a human...

I realise that this sounds a little flippant but I think that
there is a serious point in it - I rely on your abilities
as intelligent readers to read past my own crassness and
understand my point.

Caroline Knight

------------------------------

Date: 7 Apr 88 18:47:28 GMT
From: hpcea!hpnmd!hpsrla!hpmwtla!garyb@hplabs.hp.com (Gary
Bringhurst)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

> Some people wondered what
> was the use of opening up a trans-continental railroad when the pony
> express could send the same letter or package to where you wanted in just
> seven days....
>
> Sean Brunnock
> University of Lowell
> sbrunnoc@eagle.cs.ulowell.edu

I have to agree with Sean here. So let's analyze his analogy more closely.
AI is to the railroad as conventional CS wisdom is to the pony express.
Railroads can move mail close to three times faster than ponies; therefore
AI programs perform proportionately better than the alternatives, and are not
sluggish or resource gluttons. Trains are MUCH larger than ponies, so AI
programs must be larger as well. Trains travel only on well-defined tracks,
while ponies have no such limitations...

Hey, don't trains blow a lot of smoke?

Gary L. Bringhurst

------------------------------

Date: 3 Apr 88 18:11:49 GMT
From: pur-phy!mrstve!mdbs!kbc@ee.ecn.purdue.edu (Kevin Castleberry)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

> It should increase the skill of the
>person doing the job by doing those things which are boring
>or impractical for humans but possible for computers.
>...
> When sharing a job
>with a computer which tasks are best automated and which best
>given to the human - not just which it is possible to automate!

For the most part, this is what I see happening in the truly successful
ES applications I see implemented. Occasionally there is one that provides
a solution to a problem so complex that humans never attempted it. Most of
the time it is just providing the human a quicker and more reliable way
to get the job done so s/he can move on to more interesting tasks.

>Perhaps computers will free people up so that they can go back
>to doing some of the tasks that we currently have machines do
>- has anyone thought of it that way?

I certainly have observed this. Often the human starts out doing interesting
design work, problem solving, etc., but then gets bogged down in the necessities
of keeping the *system* running. I have observed such automation giving
humans back the job they enjoy.

>And if we are going to do people out of jobs then we'd better
>start understanding that a person is still valuable even if
>they do not do "regular work".

My own belief is that if systems aren't developed to help us work smarter,
then the jobs will disappear anyway - to the company that does develop
such systems.


support@mdbs.uucp
or
{rutgers,ihnp4,decvax,ucbvax}!pur-ee!mdbs!support

The mdbs BBS can be reached at: (317) 447-6685
300/1200/2400 baud, 8 bits, 1 stop bit, no parity

Kevin Castleberry (kbc)
Director of Customer Services

Micro Data Base Systems Inc.
P.O. Box 248
Lafayette, IN 47902
(317) 448-6187

------------------------------

Date: 11 Apr 88 01:56:48 GMT
From: hubcap!mrspock@gatech.edu (Steve Benz)
Subject: Re: The future of AI - my opinion

From article <2070012@otter.hple.hp.com>, by cdfk@otter.hple.hp.com (Caroline Knight):
> The Turing Test is hardly adequate - I'm surprised that people
> still bring it up...
>
> The day when we have won is the inverse of the Turing Test - someone
> will say this has to be a human not a computer - a computer
> couldn't have made such a crass mistake...
>
> ...Caroline Knight

Isn't this exactly the Turing test (rather than the inverse)?
A computer being just as human as a human? Well, either way,
the point is taken.

In fact, I agree with it. I think that in order for a machine to be
convincing as a human, it would need to have the bad qualities of a human
as well as the good ones, i.e. it would have to be occasionally stupid,
arrogant, ignorant, and so forth.

So, who needs that? Who is going to sit down and (intentionally)
write a program that has the capacity to be stupid, arrogant, or ignorant?

I think the goal of AI is somewhat askew of the Turing test.
If a rational human develops an intelligent computer, it will
almost certainly have a personality quite distinct from any human.

- Steve
mrspock@hubcap.clemson.edu
...!gatech!hubcap!mrspock

------------------------------

Date: 11 Apr 88 07:46:11 GMT
From: cca.ucsf.edu!daedalus!brianc@cgl.ucsf.edu (Brian Colfer)
Subject: Re: The future of AI [was Re: Time Magazine -- Computers of
the Future]

Douglas Hofstadter suggests in Godel, Escher, Bach that we are probably
too dumb to understand ourselves at the level needed to build an
intelligence comparable to our own. He uses the analogy of giraffes,
which just don't have the bio-hardware to contemplate their own existence.

We too may just not have the bio-hardware to organize a true
intelligence. Now there are many significant things to be done short
of this goal. The real question for AI is, "Can there really be an
alternative paradigm to the Turing test which will guide and inspire
the field in significant areas?"


Well... that's my $0.02


===============================================================================
Brian Colfer : UC San Francisco       : brianc@daedalus.ucsf.edu
             : Dept. of Lab. Medicine : ...!ucbvax!daedalus.ucsf.edu!brianc
             : PH. 415-476-2325       : brianc@ucsfcca.bitnet
===============================================================================

------------------------------

Date: 12 Apr 88 04:33:54 GMT
From: phoenix!pucc!RLWALD@princeton.edu (Robert Wald)
Subject: Re: The future of AI - my opinion

In article <1348@hubcap.UUCP>, mrspock@hubcap.UUCP (Steve Benz) writes:

> Isn't this exactly the Turing test (rather than the inverse?)
>A computer being just as human as a human? Well, either way,
>the point is taken.
>
> In fact, I agree with it. I think that in order for a machine to be
>convincing as a human, it would need to have the bad qualities of a human
>as well as the good ones, i.e. it would have to be occasionally stupid,
>arrogant, ignorant, and so forth.
>
> So, who needs that? Who is going to sit down and (intentionally)
>write a program that has the capacity to be stupid, arrogant, or ignorant?


I think that you are missing the point. It's because you're using charged
words to describe humans.

Ignorant: Well, I would certainly expect an AI to be ignorant of things
or combinations of things it hasn't been told about.

Stupid: People are stupid either because they don't have proper procedures
to deal with information, or because they are ignorant of the real meaning
of the information they do possess and thus use it wrongly. I don't see
any practical computer having some method of always using the right procedure,
and I've already said that I think it would be ignorant of certain things.
People think and operate by using a lot of heuristics on an incredible
amount of information - so much that it is probably hopeless to develop
perfect algorithms, even with a very fast computer. So I think that computers
will have to use these heuristics also.
Eventually, we may develop methods that are more powerful and reliable
than humans'. Computers are not subject to the hardware limitations of the
brain. But meanwhile I don't think that the qualities you have mentioned
are 'bad' qualities of the brain, nor inapplicable to computers.
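
To make the trade-off concrete, here is a toy sketch in Python (random
cities, with nearest-neighbour standing in for heuristics generally)
that contrasts an exact algorithm with a heuristic on the
travelling-salesman problem:

import math, random
from itertools import permutations

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(9)]

def length(tour):
    # Cyclic tour length; tour[i - 1] wraps around to close the loop.
    return sum(math.dist(cities[tour[i]], cities[tour[i - 1]])
               for i in range(len(tour)))

# Exact: try all (n-1)! tours. Only 40,320 here, but hopeless at
# the sizes real problems come in.
exact = min(((0,) + p for p in permutations(range(1, 9))), key=length)

# Heuristic: always go to the nearest unvisited city. Fast, but
# occasionally "stupid" - exactly the trade-off described above.
tour, left = [0], set(range(1, 9))
while left:
    nxt = min(left, key=lambda c: math.dist(cities[tour[-1]], cities[c]))
    tour.append(nxt)
    left.remove(nxt)

print("exact: %.3f  heuristic: %.3f" % (length(exact), length(tuple(tour))))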

Arrogance: It is unlikely that people will attempt to give computers
emotions for some time. On the other hand, I try not (perhaps
failing at times) to be arrogant or nasty. But as far as the Turing
test is concerned, a computer which can parse real language could
conceivably parse for emotional content and be programmed to
respond. There may even be some application for this, so it may
be done. The only application for simulating arrogance
might be if you are really trying to fool workers into thinking
their boss is a human, or at least trying to make them forget it
is a computer.
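
As the crudest possible illustration of what "parse for emotional
content" might mean, here is a keyword-spotting sketch in Python; the
word lists are invented, and nothing this shallow would survive contact
with real language:

# Score a sentence against tiny hand-made lexicons. The lists below
# are invented toy examples.
ARROGANT = ["obviously", "trivial", "any fool", "of course"]
HOSTILE  = ["idiot", "nonsense", "rubbish", "drivel"]

def tone(sentence):
    s = sentence.lower()
    scores = {"arrogant": sum(w in s for w in ARROGANT),
              "hostile":  sum(w in s for w in HOSTILE)}
    best = max(scores, key=scores.get)
    return best if scores[best] else "neutral"

print(tone("Obviously any fool can see the proof is trivial."))  # arrogant
print(tone("Please pass the salt."))                             # neutral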

I'm not really that concerned with arrogance, but I think that
AIs could be very 'stupid' and 'ignorant' - not ones that deal with limited
domains, but ones that are going to operate in the real world.
-Rob Wald Bitnet: RLWALD@PUCC.BITNET
Uucp: {ihnp4|allegra}!psuvax1!PUCC.BITNET!RLWALD
Arpa: RLWALD@PUCC.Princeton.Edu
"Why are they all trying to kill me?"
"They don't realize that you're already dead." -The Prisoner

------------------------------

End of AIList Digest
********************
