AIList Digest            Tuesday, 23 Aug 1983      Volume 1 : Issue 46 

Today's Topics:
Artificial Intelligence - Prejudice & Frames & Turing Test & Evolution,
Fifth Generation - Top-Down Research Approach
----------------------------------------------------------------------

Date: Thu 18 Aug 83 14:49:13-PDT
From: Richard Treitel <TREITEL@SUMEX-AIM.ARPA>
Subject: Prejudice

The message from (I think .. apologies if wrong) Stan the Leprechaun,
which sets up "rational thought" as the opposite of "right-wingism"
and of "irascibility", disproves the contention in another message
that "bigotry and intelligence are mutually exclusive". Indeed this
latter message is its own disproof, at least by my definition of
bigotry. All of which leads me to believe that one or other of them
*was* sent by an AI project Flamer-type program. Good work.
- Richard

------------------------------

Date: 22 Aug 83 19:45:38-EDT (Mon)
From: The soapbox of Gene Spafford <spaf%gatech@UDel-Relay>
Subject: AI and Human Intelligence

[The following are excerpts from several interchanges with the author.
-- KIL]

Words do not necessarily mean what I want them to mean, nor what you
want them to mean, but what we all agree they mean. My point is that
we may very well have to consider emotions and ethics in any model we
care to construct of a "human" intelligence. The ability to handle a
conversation, as implied by the Turing test, is not sufficient in my
eyes to classify something as "intelligent." So what *exactly* is
intelligence? Is it something measured by an IQ test? I'm sure you
realize that that particular point is a subject of much conjecture.

If these discussion groups are for the discussion of artificial
"intelligence," then I would like to see some thought given to the
definition of "intelligence." Is emotion part of intelligence? Is
superstition part of intelligence?

FYI, I do not believe what I suggested -- that bigots are less than
human. I made that suggestion to start some comments. I have gotten
some interesting mail from people who have thought some about the
idea, and from a great many people who decided I should be locked away
for even coming up with the idea.

[...]

That brought to mind a second point -- what is human? What is
intelligence? Are they the same thing? (My belief -- no, they aren't.)
I proposed that we might classify "human" as being someone who *at
least tries* to overcome irrational prejudices and bigotry. More than
ever we need such qualities as open-mindedness and compassion, as
individuals and as a society. Can those qualities be programmed into
an AI system? [...]

My original submission to Usenet was intended to be a somewhat
sarcastic remark about the nonsense that was going on in a few of the
newsgroups. Responses to me via mail indicate that at least a few
people saw through to some deeper, more interesting questions. For
those people who immediately jumped on my case for making the
suggestion, not only did you miss the point -- you *are* the point.

--
The soapbox of Gene Spafford
CSNet: Spaf @ GATech ARPA: Spaf.GATech @ UDel-Relay
uucp: ...!{sb1,allegra,ut-ngp}!gatech!spaf
...!duke!mcnc!msdc!gatech!spaf

------------------------------

Date: 18 Aug 83 13:40:03-PDT (Thu)
From: decvax!linus!vaxine!wjh12!brh @ Ucb-Vax
Subject: Re: AI Projects on the Net
Article-I.D.: wjh12.299

I realize this article was posted a while ago, but I'm just catching
up with my news reading after vacation. Bear with me.

I wonder why folks think it would be so easy for an AI program
to "change its thought processes" in ways we humans can't. I submit
that (whether it's an expert system, an experiment in KR, or what
have you) the suggestion to 'not think about zebras' might well have
a similar effect on an AI project as on a human. After all, it IS
going to have to decipher exactly what you meant by the suggestion.
On the other hand, might it not be easier for one of you humans ....
we, I mean ... to consciously think of something else, and 'put it
out of your mind'??

Still an open question in my mind... (Now, let's hope this
point isn't already in an article I haven't read...)

Brian Holt
wjh!brh

------------------------------

Date: Friday, 19 Aug 1983 09:39-PDT
From: turner@rand-unix
Subject: Prejudice and Frames, Turing Test


I don't think prejudice is a by-product of Minsky-like frames.
Prejudice is simply one way to be misinformed about the world. In
people, we also associate prejudice with an inability to correct
misinformation in light of experiences that prove it wrong.

Nothing about Minsky frames, as opposed to any other theory, makes
them a necessary condition for this. In any understanding situation,
the thinker must call on background information, however that
information is best represented. If this background information is
incorrect and is not corrected in light of new information, then we
may have prejudice.

Of course, this is a subtle line. A scientist doesn't change his
theories just because a fact wanders by that seems to contradict
them. If he is wise, he waits until a body of irrefutable evidence
builds up. Is he prejudiced towards his current theories? Yes, I'd
say so, but in this case it is a useful prejudice.

So prejudice is really a property of the algorithm for modifying
known information in light of new information. An algorithm that
resists change too strongly results in prejudice. The opposite
extreme -- an algorithm that changes too easily -- results in
faddism, blowing whichever way the wind blows.
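
To make the two extremes concrete, here is a minimal sketch of
revision as a weighted update. It is only an illustration: the
"resistance" parameter and the numbers are invented here, not drawn
from any actual system.

    # Belief revision as a weighted update. With resistance near 1.0
    # the belief barely moves (prejudice); near 0.0 it chases every
    # datum (faddism).
    def revise(belief, evidence, resistance):
        return resistance * belief + (1.0 - resistance) * evidence

    belief = 0.9                      # prior confidence in a proposition
    for datum in (0.1, 0.2, 0.1):     # contradicting observations
        belief = revise(belief, datum, resistance=0.95)
    print(belief)                     # ~0.79: change resisted too strongly

    belief = 0.9
    for datum in (0.1, 0.2, 0.1):
        belief = revise(belief, datum, resistance=0.05)
    print(belief)                     # ~0.10: the last datum dominates

A workable revision algorithm presumably sits somewhere between the
two settings, and moves only as the weight of evidence accumulates.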

-----------

Stan's point in I:42 about Zeno's paradox is interesting. Perhaps
the mindset forced upon the AI community by Alan Turing is wrong.
Is Turing's Test a valid test for Artificial Intelligence?

Clearly not. It is a test of Human Mimicry Ability. It rests on
the assumption that the ability to mimic a human requires
intelligence. This has been shown in the past not to be entirely
true; ELIZA is an example of a program that clearly has no
intelligence and yet mimics a human in a limited domain fairly well.
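
The trick is mechanical enough to fit in a few lines. Here is a toy
sketch of the keyword-and-template idea -- the flavor of ELIZA, not
Weizenbaum's actual script (which, among other things, also reflected
pronouns); the rules and replies are invented:

    import re

    # Keyword patterns with canned reply templates: mimicry with no
    # model of meaning behind it.
    RULES = [
        (re.compile(r"\bI am (.*)", re.I),   "How long have you been {0}?"),
        (re.compile(r"\bI feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bmother\b", re.I),    "Tell me more about your family."),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(*match.groups())
        return "Please go on."        # default when nothing matches

    print(respond("I am sure this program is not intelligent"))
    # -> How long have you been sure this program is not intelligent?

A handful of such rules already carries a conversation for a while,
which is exactly why conversation alone is a weak test.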

A common theme in science fiction is "Alien Intelligence". That
is, the sf writer bases his story on the idea: "What if alien
intelligence wasn't like human intelligence?" Many interesting
stories have resulted from this premise. We face a similar situation
here. We assume that Artificial Intelligence will be detectable by
its resemblance to human intelligence. We really have little ground
for this belief.

What we need is a better definition of intelligence, and a test
based on that definition. In the Turing mindset, the definition of
intelligence is "acts like a human being," and that is clearly
insufficient. The Turing test also leads one to think, erroneously,
that intelligence is a property with two states (intelligent and
non-intelligent), when even amongst humans there is a wide variance
in the level of intelligence.

My initial feeling is to relate intelligence to the ability to
achieve goals in a given environment. The more intelligent man today
is the one who gets what he wants; in short, the more you achieve your
goals, the more intelligent you are. This means that a person may be
more intelligent in one area of life than in another. He is, for
instance, a great businessman but a poor father. This is no surprise.
We all recognize that people have different levels of competence in
different areas.
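
That definition can at least be rendered as a crude measurement. In
the sketch below, a "domain" is assumed to be nothing more than a set
of goals with pass/fail outcomes; the domains and scores are invented
for illustration.

    # Intelligence in a domain as the fraction of that domain's goals
    # an agent achieves; domains and numbers are invented.
    def intelligence(achieved, attempted):
        return achieved / attempted if attempted else 0.0

    record = {"business": (18, 20), "fatherhood": (3, 10)}
    for domain, (achieved, attempted) in record.items():
        print(domain, round(intelligence(achieved, attempted), 2))
    # business 0.9, fatherhood 0.3 -- strong in one domain, weak in
    # another, exactly as argued above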

Of course, this definition has problems. If your goal is to lift
great weights, then your intelligence may depend on your physical
build. That doesn't seem right. Is a chess program more intelligent
when it runs on a faster machine?

In the sense of this definition we already have many "intelligent"
programs in limited domains. For instance, in the domain of
electronic mail handling, there are many very intelligent entities.
In the domain of human life, there are no intelligent electronic
entities. In the domain of human politics, no intelligent human
entities (*ha*ha*).

I'm sure it is nothing new to say that we should not worry about the
Turing test and instead worry about more practical and functional
problems in the field of AI. It does seem, however, that the Turing
Test is a limited and perhaps blinding outlook onto the AI field.


Scott Turner
turner@randvax

------------------------------

Date: 21 Aug 83 13:01:46-PDT (Sun)
From: harpo!eagle!mhuxt!mhuxi!mhuxa!ulysses!smb @ Ucb-Vax
Subject: Hofstadter
Article-I.D.: ulysses.560

Douglas Hofstadter is the subject of today's N.Y. Times Magazine
cover story. The article is worth reading, though not, of course,
particularly deep technically. Among the points made: Hofstadter is
not held in high regard by many AI workers, because they see him as
a popularizer without any results to back up his theories.

------------------------------

Date: Tue, 23 Aug 83 10:35 PDT
From: "Glasser Alan"@LLL-MFE.ARPA
Subject: Program Genesis

After reading in the New York Times Sunday Magazine of August 21 about
Douglas Hofstadter's latest idea on artificial intelligence arising
from the interplay of lower levels, I was inspired to carry his
suggestion to the logical limit. I wrote the following item partly in
jest, but the idea may have some merit, at least to stimulate
discussion. It was also inspired by Stanislaw Lem's story "Non
Serviam".

------------------------------------------------------------------------


PROGRAM GENESIS

A COMPUTER MODEL OF THE PRIMORDIAL SOUP


The purpose of this program is to model the primordial soup that
existed in the earth's oceans during the period when life first
formed. The program sets up a workspace (the ocean) in which storage
space in memory and CPU time (resources) are available to
self-replicating modules of memory organization (organisms).
Organisms are sections of code and data which, when run, cause copies
of themselves to be written into other regions of the workspace and
then run. Overproduction of species, competition for scarce
resources, and occasional copying errors, either accidental or
deliberately introduced, create all the conditions necessary for the
onset of evolutionary processes. A diagnostic package provides an
ongoing picture of the evolving state of the system. The goal of the
project is to monitor the evolutionary process and see what this might
teach us about the nature of evolution. A possible long-range
application is a novel method for producing artificial intelligence.
The novelty is, of course, not complete, since it has been done at
least once before.
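
The description above is close enough to a specification to
skeletonize. One possible rendering follows; every name and
parameter in it is invented, and the "organisms" here are inert
strings rather than the runnable code the proposal calls for.

    import random

    # A fixed-size workspace (the "ocean"), organisms that copy
    # themselves into random cells, scarce space, and occasional
    # copying errors.
    OCEAN_SIZE = 1000
    MUTATION_RATE = 0.01
    ALPHABET = "ACGT"

    class Organism:
        def __init__(self, genome):
            self.genome = genome              # stands in for "code and data"

        def replicate(self):
            child = list(self.genome)
            if random.random() < MUTATION_RATE:     # accidental copying error
                spot = random.randrange(len(child))
                child[spot] = random.choice(ALPHABET)
            return Organism(child)

    ocean = [None] * OCEAN_SIZE               # storage space as a resource
    ocean[0] = Organism(list("GATTACA"))      # seed the soup

    for step in range(10000):                 # CPU time as a resource
        i = random.randrange(OCEAN_SIZE)
        if ocean[i] is None:
            continue
        target = random.randrange(OCEAN_SIZE) # competition for space:
        ocean[target] = ocean[i].replicate()  # overwrite whatever lives there

    # A crude "diagnostic package": how many distinct genomes survive?
    genomes = {"".join(o.genome) for o in ocean if o is not None}
    print(len(genomes), "distinct genomes in the ocean")

The interesting (and hard) part of the proposal is precisely what
this sketch omits: making the genome executable, so that an
organism's own code earns or loses the CPU time it needs to copy
itself.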

------------------------------

Date: 18 Aug 83 11:16:24-PDT (Thu)
From: decvax!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Japanese 5th Generation Effort
Article-I.D.: dciem.293

There seems to be an analogy between the 5th generation project and
the ARPA-SUR project on automatic speech understanding of a decade
ago. Both are top-down, initiated with a great deal of hope, and
dependent on solving some "nitty-gritty problems" at the bottom. The
result of the ARPA-SUR project was at first to slow down research in
ASR (automatic speech recognition) because a lot of people got scared
off by finding how hard the problem really is. But it did, as Robert
Amsler suggests the 5th generation project will, show just what
"nitty-gritty problems" are important. It provided a great step
forward in speech recognition, not only for those who continued to
work on projects initiated by ARPA-SUR, but also for those who have
come afterward. I doubt we would now be where we are in ASR if it had
not been for that apparently failed project ten years ago.
(Parenthetically, notice that a lot of the subsequent advances in ASR
have been due to the Japanese, and that European/American researchers
freely use those advances.)

Martin Taylor

------------------------------

End of AIList Digest
********************
