AIList Digest           Wednesday, 23 Nov 1988    Volume 8 : Issue 131 

Philosophy:

Limits of AI
Epistemology of Common Sense (2 messages)
Capabilities of "logic machines" (2 messages)
Lightbulbs and Related Thoughts (2 messages)
Computer science as a subset of artificial intelligence

----------------------------------------------------------------------

Date: 7 Nov 88 17:23:43 GMT
From: mcvax!ukc!etive!hwcs!nick@uunet.uu.net (Nick Taylor)
Subject: Re: Limits of AI

In Article 2254 of comp.ai, Gilbert Cockton writes:
"... intelligence is a social construct ... it is not a measure ..."

Hear, hear. I entirely agree. I suppose it was inevitable that this discussion
would boil down to the problem of defining "intelligence". Still, it was fun
watching it happen anyway.

I offer the following in an effort to clarify the framework within which we
must discuss this topic. No doubt to some people this will seem to obfuscate
the issue rather than clarify it but, either way, I am sure it will
generate some discussion.

Like Gilbert, most people treat the idea of intelligence as an intra-species
comparator. This is all well and good so long as we remember that it is
just a social construct which we find convenient when comparing the
apparent intellectual abilities of two people or two dogs or two nematodes,
etc.

However, when we move outside a single species and attempt to say things
such as "humans are more intelligent than nematodes" we are in a very
different ball game. We are now using the concept of intelligence as an
inter-species comparator. Whilst it might seem natural to use the same
concept we really have no right to. One of the most important axioms of
any scientific method is that you cannot generalise across hierarchies.
What we know to be true of humans cannot be applied to other species
willy-nilly.

Until we generate a concept ('label') of inter-species intelligence which
cannot be confused with intra-species intelligence we will forever be
running around in circles discussing two different ideas as if they were
one and the same. Clearly, machine intelligence is also concerned with
a different 'species' to ourselves and as such could be a very useful
concept, but neither 'machine intelligence' nor 'human intelligence' is
useful in a discussion of which is, or might become, the more intelligent
(in the inter-species meaning of the word).

For more information on bogus reasoning about brains and behaviour
see Steven Rose's "The Conscious Brain" (published by Penguin I think).

------------------------------

Date: Wed, 16 Nov 88 19:47 EDT
From: Dourson <"DPK100::DOURSON_SE%gmr.com"@RELAY.CS.NET>
Subject: Epistemology of Common Sense

Bruce Nevin (#125, Mon, 7 Nov 88 11:21:08 EST), responding to
McCarthy (#121, 31 Oct 88 2154 PST), compares the common sense of
"social facts" with the common sense of physical facts. He
states,

"Suppose we had an AI equipped with common sense defined
solely in terms of physical facts. This is somewhat like the
proverbial person who knows the price of everything but the
value of nothing."


Knowledge of physical facts is an essential condition for knowing
the value of things. The value of any thing is determined by what
it contributes to a person's survival and happiness. The price of
a thing is determined by the resources and effort required to
create it. Money is the means by which we measure both a thing's
value and its price.

A person whose knowledge is defined solely in terms of physical
facts, i.e., in terms of reality, would know (or could determine)
both the value of anything (in terms of what it contributes to his
survival and happiness), and the price (in terms of his own
personal effort) he would have to pay to create it or to purchase
it from others. Such a person knows that there is no such thing
as a "social _fact_"; and that survival and happiness are facts of
reality rooted in the natures of existence and man, not matters of
"social convention".

A person who knows prices of things without knowing their value,
has never had to earn his money, his survival, or his happiness.

A person whose values are based on social conventions, does not
think or act for himself, i.e., is not independent, and could not
survive and be happy on his own.

People talk a lot about equipping an AI with common sense and
goals, but seldom about equipping an AI with values and the
ability to make value judgements. When they do mention values, it
is usually in terms such as "social 'facts'", "social conventions",
and so-called "higher values", all of which are a lot of floating
fuzzy abstractions that signify nothing.

A value judgement is a precondition for setting a goal, which in
turn is a precondition for thought and action carried out to
achieve the goal. Knowledge, common sense, and effort are means
to achieve the goal. A successful AI will have the ability to
identify and explain the value to itself of a thing, and to
measure the thing's price in terms of its own time and effort. The
values an AI holds will be based on its nature and the conditions
required for it to exist, function, and survive. These values
will not be a matter of "social convention".

McCarthy stated that sociology is peripheral to the study of
intelligence. I submit that it is irrelevant.


Stephen Dourson

------------------------------

Date: 19 Nov 88 02:45:39 GMT
From: quintus!ok@sun.com (Richard A. O'Keefe)
Subject: Re: The epistemology of the common sense world

In a previous article, Gilbert Cockton writes:
>we require enlightenment on how AI workers are trained to study tasks
>in the common sense world. Task analysis and description is an
>established component of Ergonomics/Human Factors. I am curious as to
>how AI workers have absorbed the experience here into their work, so
>that unlike old-fashioned systems analysts, they automate the real
>task rather than an imaginary one which fits into the machine.

>If I were to choose 2 dozen AI workers at random and ask them for an
>essay on methodological hygiene in task analysis and description, what
>are my chances of getting anything ...

I'm far from sure that I could _write_ such an essay, but I'd very much
like to _read_ it. (I was hoping to find topics like that discussed in
comp.cog-eng, but no such luck.) Could you give us a reading list, please?

I may have misunderstood him, but Donald Michie talks about "The Human
Window"
, and seems to be saying that his view of AI is that it uses
computers to move a task in complexity/volume space into the human window
so that humans can finish the job. This would suggest that MMI and that
sort of AI should have a lot in common, and that a good understanding of
task analysis would be very helpful to people trying to build "smart tools".

------------------------------

Date: 15 Nov 88 18:26:20 GMT
From: fluke!ssc-vax!bcsaic!ray@beaver.cs.washington.edu (Ray Allis)
Subject: Re: Capabilities of "logic machines"

In article <393@uceng.UC.EDU> dmocsny@uceng.UC.EDU (daniel mocsny) writes:
>In article <42136@yale-celray.yale.UUCP>, Krulwich-Bruce@cs.yale.edu
>(Bruce Krulwich) writes:
>
>[ in reply to my doubts about ``logic-machine'' approaches to learning ]
>
>> If you're claiming that it's possible to do something with connectionist
>> models that its not possible to do with "logical machines," you have to
>> define "logical machines" in such a way that they aren't capable of
>> simulating connectionist models.
>
>Good point, and since simulating a connectionist model can be easily
>expressed as a sequence of logical operations, I would have to be
>pretty creative to design a logical machine that could not do that.

Whoa! Wrong! (Well, sort of.) I think you conceded much too quickly.
'Simulate' and 'model' are trick words here. The problem is that most
'connectionist' approaches are indeed models, and logical ones, of some
hypothesized 'reality'. There is no fundamental difference between such
models and more traditional logical or mathematical models; of course
they can be interchanged.
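
As a minimal sketch of that interchangeability (in modern Python; the
inputs, weights and threshold are invented purely for illustration), a
single connectionist unit reduces to a short sequence of elementary
arithmetic and comparison operations:

    # Hypothetical single "connectionist" unit, simulated as a plain
    # sequence of elementary operations on a digital machine.
    inputs  = [0.2, -0.7, 0.5]   # made-up input activations
    weights = [0.9,  0.3, -0.4]  # made-up connection weights

    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w           # one multiply-accumulate at a time

    output = 1 if total > 0.0 else 0   # threshold "firing" decision
    print(total, output)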

A distinction must be made between digital and analog; between form and
content; between symbol and referent; between model and that which is
modelled.

Suppose you want to calculate the state of a toy rubber balloon full of
air at ambient temperature and pressure as it is moved from your office
to direct sunlight outside. To do a completely accurate job, you're
going to need to know the vector of every molecule of the balloon and
its contents, every external molecule which affects the balloon, or
affects molecules which affect the balloon, the photon flux, the effects
of haze and clouds drifting by, and whether passing birds and aircraft
cast shadows on the balloon. And of course even that's not nearly enough,
or at fine enough detail. To diminishing degrees, everything from
sunspots to lunar reflectivity will have some effect. Did you account for
the lawn sprinkler's effect on temperature and humidity? "Son of a gun!"
you say, "I didn't even notice the lousy sprinkler!"

Well, it's impossible. In any case most of these are physical quantities
which we cannot know absolutely but can only measure to the limits of our
instruments. Even if we could manage to include all the factors affecting
some real object or event, the values used in the arithmetic calculations
are approximations anyway. So, we approximate, we abstract and model.
And arithmetic is symbolic logic, which deals, not directly with quantities,
but with symbols for quantities.
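
To make the "we approximate" point concrete (a rough sketch in modern
Python; the temperatures, the volume and the ideal-gas simplification are
my own illustrative assumptions, not anything measured): even the simplest
usable model of the balloon throws away nearly everything above and works
with a handful of approximate symbols for quantities.

    # Crude model: ideal gas at roughly constant pressure, so V2/V1 = T2/T1.
    # Every figure here is an approximate measurement, not molecular truth.
    T_office = 293.0    # about 20 C indoors, in kelvin (assumed)
    T_sun    = 308.0    # about 35 C for the balloon in direct sun (a guess)
    V_office = 0.004    # about 4 litres, in cubic metres (assumed)

    V_sun = V_office * (T_sun / T_office)
    print("estimated volume in sunlight: %.5f cubic metres" % V_sun)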

Now with powerful digital computers, calculation might be fast enough to
produce a pretty good fake, one which is hard for a person to distinguish
from "the real thing", something like a movie. But I don't think this is
likely to be really satisfactory. Consider another example I like, the
modelling of Victoria Falls. Water, air, impurities, debris and rock all
interacting in real time on ninety-seven Cray Hyper-para-multi-3000s. Will
you be inspired to poetry by the ground shaking under your feet? No?

You see, all the ai work being done on digital computers is modelling using
formal logic. There is no reason to argue over whether one type of logical
model can simulate another. The so-called "neurologically plausible"
approach, when it uses real, physical devices is an actual alternative to
logical systems. In my estimation, it's the most promising game in town.

>much like a logical machine -- pushing symbols around, performing
>elementary operations on them one at a time, until the input vector
>becomes the output vector. I have trouble imagining that is what is
>going on when I recognize a friend's face, predict a driver's
>unsignaled turn by the sound of his motor, realize that a particular
>computer command applies to a novel problem, etc.

Me, too!

>Can a system that only does logical inferences on symbols with direct
>semantic significance achieve a similar information gain through
>experience?

Key here is "What constitutes experience?" How is this system in touch
with its environment?

>I will appreciate pointers to significant results. Is anyone making
>serious progress with the classical approach in non-toy-problem
>domains?
[...]
> Can a
>purely logical machine demonstrate a convincing ability to spot
>analogies that don't follow directly from explicit coding or
>hand-holding? Is any logical machine demonstrating information gain
>ratios exceeding (or even approaching) unity? Are any of these
>machines _really_ surprising their creators?
>
>Dan Mocsny

Excellent questions. I'd also like to hear of any significant results.

Ray Allis, Boeing Computer Services, Seattle, Wa. ray@boeing.com

------------------------------

Date: 18 Nov 88 02:22:18 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Capabilities of "logic machines"

In article <8673@bcsaic.UUCP> ray@bcsaic.UUCP (Ray Allis) writes:
>Whoa! Wrong! (Well, sort of.) I think you conceded much too quickly.
>'Simulate' and 'model' are trick words here.

Correct. A better word would be _emulate_.
For any given electronic realisation of a neural net,
there is a digital emulation of that net which cannot be
behaviourally distinguished from the net.
The net is indeed an analogue device, but such devices are
subject to the effects of thermal noise, and provided the
digital emulation carries enough digits to get the
differences down below the noise level, you're set.

In order for a digital system to emulate a neural net adequately,
it is not necessary to model the entire physical universe, as Ray
Allis seems to suggest. It only has to emulate the net.
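
A minimal numerical sketch of the "below the noise level" argument (modern
Python; the response function, noise figure and quantisation step are all
assumed for illustration): the worst rounding error introduced by the
digital copy can be made orders of magnitude smaller than the analogue
device's own thermal noise.

    import random

    SIGMA = 1e-3    # assumed r.m.s. thermal noise of the analogue unit
    STEP  = 1e-6    # resolution carried by the digital emulation

    def unit(x):
        # idealised analogue response of one element (a simple rectifier)
        return max(0.0, x)

    def quantise(y):
        # the value the digital emulation actually stores
        return round(y / STEP) * STEP

    xs = [random.uniform(-1.0, 1.0) for _ in range(10000)]
    worst = max(abs(unit(x) - quantise(unit(x))) for x in xs)
    print("worst rounding error:", worst, "  thermal noise sigma:", SIGMA)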

>You see, all the ai work being done on digital computers is modelling using
>formal logic.

Depending on what you mean by "formal logic", this is either false or
vacuous. All the work on neural nets uses formal logic too (whether the
_nets_ do is another matter).

>>much like a logical machine -- pushing symbols around, performing
>>elementary operations on them one at a time, until the input vector
>>becomes the output vector. I have trouble imagining that is what is
>>going on when I recognize a friend's face, predict a driver's
>>unsignaled turn by the sound of his motor, realize that a particular
>>computer command applies to a novel problem, etc.

>Me, too!

Where does this "one at a time" come from? Most computers these days
do at least three things at a time, and the Connection Machine, for all
that it pushes bits around, does thousands and thousands of things at
a time. Heck, most machines have some sort of cache which does
thousands of lookups at once. Once and for all, free yourself of the
idea that "logical machines" must do "elementary operations one at a
time"
.

------------------------------

Date: Tue, 15 Nov 88 16:02:18 PST
From: norman%ics@ucsd.edu (Donald A Norman-UCSD Cog Sci Dept)
Reply-to: danorman@ucsd.edu
Subject: Lightbulbs and Related Thoughts


Iconic memory is the brief, reasonably veridical image of a sensory
event. In the visual system, it has a time constant of somewhere
around 100 msec. Visual iconic memory is what makes TV and motion
pictures possible: 30 to 60 images a second fuse into a coherent,
apparently continuous percept. I demonstrate this in class by waving
a flashlight in a circle in a dark auditorium: I have to rotate about
3 to 5 times/second for the class to see a continuous image of a
circle (the tail almost dying away).
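
The arithmetic behind that demonstration (a small sketch, treating the
roughly 100 msec figure as an exponential decay constant, which is my own
simplification): at 3 to 5 revolutions per second each point on the circle
is redrawn every 333 to 200 msec, i.e. two to three time constants later,
by which time the oldest part of the trace has faded to a few percent.

    import math

    TAU = 0.100                     # assumed iconic decay constant, seconds
    for revs_per_sec in (3, 4, 5):
        gap = 1.0 / revs_per_sec    # time before a point is redrawn
        left = math.exp(-gap / TAU) # fraction of the trace remaining
        print("%d rev/s: redrawn every %3.0f ms, trace faded to %4.1f%%"
              % (revs_per_sec, gap * 1000, left * 100))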

The illustration of seeing complementary colors after staring at, say,
an image of a flag, is called a visual after effect, and is caused by
entirely different mechanisms.

don norman

Donald A. Norman [ danorman@ucsd.edu BITNET: danorman@ucsd ]
Department of Cognitive Science C-015
University of California, San Diego
La Jolla, California 92093 USA

UNIX: {gatech,rutgers,ucbvax,uunet}!ucsd!danorman
[e-mail paths often fail: please give postal address and all e-mail addresses.]

------------------------------

Date: 21 Nov 88 23:56:31 GMT
From: hpda!hpcuhb!hp-sde!hpcea!hpcehfe!paul@bloom-beacon.mit.edu
(Paul Sorenson)
Subject: Re: Lightbulbs and Related Thoughts


In article <778@wsccs.UUCP> dharvey@wsccs.UUCP (David Harvey) writes:
>Don't forget to include the iconic memory. This is the buffers
>so to speak of our sensory processes. I am sure that you have
>saw many aspects of this phenomenon by now. Examples are staring
>at a flag of the United States for 30 seconds, then observing the
>complementary colors of the flag if you then look at a blank wall
>(usually works best if the wall is dark). [...]

Perhaps this question is a witness to my ignorance, but isn't the phenomenon
you describe a result of the way the retina processes images, and if
so, do you mean to say that iconic memory is located in the retina?

------------------------------------------------------------------------------
Jo Lammens Internet: lammens@cs.Buffalo.EDU
uucp : ..!{ames,boulder,decvax,rutgers}!sunybcs!lammens
BITNET : lammens@sunybcs.BITNET
----------

No, you are correct and the example is wrong. Color afterimages like
those described are NOT instances of iconic memory. Iconic memory is a
theoretical stage of memory, patterned after short-term memory, that
functions as a limited-capacity storage buffer for sensory information
(just as STM serves as a limited [7 + or - 2] capacity store for
information prior to its being encoded into "Long Term Memory").
Presumably, iconic memory precedes STM, which precedes LTM, which
precedes ... (forgetting? making it up?).
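
One way to picture the stages being described (a toy sketch in modern
Python; the decay figure and the encoding rule are my own assumptions,
only the 7-plus-or-minus-2 capacity comes from the text): iconic memory as
a fast-decaying sensory buffer feeding a small short-term store, which in
turn feeds long-term memory.

    from collections import deque

    ICONIC_DECAY = 0.25   # seconds an unattended trace survives (assumed)
    STM_CAPACITY = 7      # the "7 plus or minus 2" item limit

    iconic = []                          # (timestamp, raw trace) pairs
    stm = deque(maxlen=STM_CAPACITY)     # oldest item displaced when full
    ltm = set()                          # durable, encoded knowledge

    def sense(trace, now):
        iconic.append((now, trace))

    def attend(now):
        # traces still alive in the icon can be read out into STM
        for t, trace in iconic:
            if now - t <= ICONIC_DECAY:
                stm.append(trace)
        iconic.clear()

    def rehearse():
        # rehearsed STM contents get encoded into LTM
        ltm.update(stm)

    sense("red triangle", now=0.0)
    attend(now=0.1)        # within the decay window, so it reaches STM
    rehearse()
    print(stm, ltm)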

------------------------------

Date: Thu, 17 Nov 88 10:49:42 pst
From: Ray Allis <ray@ATC.BOEING.COM>
Subject: Re: Computer science as a subset of artificial intelligence

In <639@quintus.UUCP> ok@quintus.UUCP (Richard A. O'Keefe) writes:

>In a previous article, Ray Allis writes:
>>I was disagreeing with that too-limited definition of AI. *Computer
>>science* is about applications of computers, *AI* is about the creation
>>of intelligent artifacts. I don't believe digital computers, or rather
>>physical symbol systems, can be intelligent. It's more than difficult,
>>it's not possible.
>
>There being no other game in town, this implies that AI is impossible.
>Let's face it, connectionist nets are rule-governed systems; anything a
>connectionist net can do a collection of binary gates can do and vice
>versa. (Real neurons &c may be another story, or may not.)

But there ARE other games. I don't believe AI is impossible. I'm convinced,
on my interpretation of the evidence, that AI IS possible (i.e. artifacts that
think like people). It's just that I don't think it can be done if methods
are arbitrarily limited to only formal logic. If by "connectionist net" you
are referring to networks of symbols, such as semantic nets, implemented on
digital computers, then, in that tiny domain, they may well all be
rule-governed systems, interchangeable with "a collection of binary gates".
Those are not the same as "neural nets" which are modelled after real
organisms' central nervous systems. Real neurons do indeed appear to be
another story. In their domain, rules should not be thought of as governing,
but rather as *describing* operations which are physical analogs and not
symbols. To be repeatedly redundant, an organism's central nervous system
runs just fine without reference to explicit rules; rules DESCRIBE, to beings
who think with symbols (guess who) what happens anyway. AI methodology must
deal with real objects and real events in addition to symbols and form.

------------------------------

End of AIList Digest
********************
