AIList Digest             Friday, 2 Dec 1988      Volume 8 : Issue 135 

Philosophy:

Defining Machine Intelligence (6 messages)

----------------------------------------------------------------------

Date: 17 Nov 88 21:16:04 GMT
From: uwslh!lishka@speedy.wisc.edu (Fish-Guts)
Subject: The difference between machine and human intelligence (was:
AI and Intelligence)

In article <4216@homxc.UUCP> marty@homxc.UUCP (M.B.BRILLIANT) writes:
>
>Any definition of ``artificial intelligence'' must allow intelligence
>to be characteristically human, but not exclusively so.

A very good point (IMHO). I believe that artificial intelligence
is possible, but that machine intelligence will probably *NOT*
resemble human intelligence all that closely. My main reason for this
is that unless you duplicate much of what a human is (e.g. the neural
structure, all of the senses, etc.), you will not get the same result.
I propose that a machine without human-like senses cannot "understand"
many ideas and imagery the way a human does, simply because it will
not be able to perceive its surroundings in the same way as a human.
Any comments?

.oO Chris Oo.--
Christopher Lishka ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene lishka%uwslh.uucp@cs.wisc.edu
Immunology Section (608)262-1617 lishka@uwslh.uucp

"I'm not aware of too many things...
I know what I know if you know what I mean"

-- Edie Brickell & the New Bohemians

------------------------------

Date: 19 Nov 88 17:06:03 GMT
From: uwslh!lishka@speedy.wisc.edu (Fish-Guts)
Subject: Re: Defining Machine Intelligence.

In article <1111@dukeac.UUCP> sbigham@dukeac.UUCP (Scott Bigham) writes:
>In article <401@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
>>I believe that artificial intelligence
>>is possible, but that machine intelligence will probably *NOT*
>>resemble human intelligence...
>
>So how shall we define machine intelligence? More importantly, how will we
>recognize it when (if?) we see it?
>
> sbigham

A good question, to which I do not have a good answer. I *have*
thought about it quite a bit, though ... however, I haven't come up
with much that I am satisfied with. Here are my current lines of
thought on this subject:

Many (if not most) attempts at definitions of "machine
intelligence" relate it to "human intelligence." However, I have yet
to find a good definition of "human intelligence" that is less vague
than a dictionary's definition. It would seem (to me at least) that
AI scientists (as well as scientists in many other fields) have yet to
come up with a good, working definition of "human intelligence" that
most will accept. Rather, most AI people I have spoken with
(including myself ;-) have a vague notion of what "human intelligence"
is, or else have definitions of "human intelligence" that rely on
many personal assumptions. I still do not think that the AI community
has developed a definition of "human intelligence" that can be
universally presented in an introductory course on AI. It is no
wonder, then, that there is no commonly accepted definition of machine
intelligence (which would seem to be a crucial definition in AI, IMHO).

So how do we define machine intelligence? I propose that we
define it apart from human intelligence at first, and try to relate it
to human intelligence afterwards. In my opinion, machine intelligence
does not have to be the same as human intelligence (and probably will
not), for reasons I have mentioned in other articles. From what I
have read here, I believe that at least a few other people in this
group also feel this way.

First, the necessary "features" of machine intelligence should be
discussed and decided upon. It is important that this be done
*without* considering current architectures and AI knowledge; the
"features" should be for an ideal "machine intelligence," and not
geared towards something that can be achieved in fifty years. Also,
human intelligence should be *considered* at this point, but not used
as a *basis* for defining machine intelligence; intelligence in other
beings (mammals, birds, insects, rocks (;-), whatever) should also be
considered.

Second, after having figured out what we want machine
intelligence to be, we should then try and come up with some good
"indicators" that could be used to tell whether an AI system exhibits
machine intelligence. These indicators can include specific tests,
but I have a feeling that tests for any form of intelligence have
never been very good indicators (note that I do not put that much
value on IQ tests as measures of intelligence). Indicators of
intelligence in humans and other beings should be considered here as
well (i.e. what do we feel is a good sign that someone is intelligent?).

After all that is done (and it may never get done ;-), then we
can try and compare it to human intelligence. Chances are the two
definitions of intelligence (for machines and humans) will be
different. Of course, if, in looking at human intelligence, some
important points of machine intelligence have been missed, then
revisions are in order ... there is always time to revise the
definition.

I am sorry that I could not provide a concrete definition of
what machine intelligence is. However, I hoped I have provided a
small framework for discussions on how to go about defining machine
intelligence. And of course all the above is only my view on the
subject, and is subject to change; do with it what you will ... if you
want to print it up and use it as bird-cage liner, well that is fine
by me ;-)

.oO Chris Oo.--
Christopher Lishka ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene lishka%uwslh.uucp@cs.wisc.edu
Immunology Section (608)262-1617 lishka@uwslh.uucp

"I'm not aware of too many things...
I know what I know if you know what I mean"

-- Edie Brickell & the New Bohemians

------------------------------

Date: 20 Nov 88 06:53:44 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Defining Machine Intelligence.

In article <404@uwslh.UUCP> lishka@uwslh.UUCP (Fish-Guts) writes:
> Many (if not most) attempts at definitions of "machine
>intelligence" relate it to "human intelligence." However, I have yet
>to find a good definition of "human intelligence" that is less vague
>than a dictionary's definition. It would seem (to me at least) that
>AI scientists (as well as scientists in many other fields) have yet to
>come up with a good, working definition of "human intelligence" that
>most will accept. Rather, most AI people I have spoken with
>(including myself ;-) have a vague notion of what "human intelligence"
>is, or else have definitions of "human intelligence" that rely on
>many personal assumptions. I still do not think that the AI community
>has developed a definition of "human intelligence" that can be
>universally presented in an introductory course on AI. It is no
>wonder, then, that there is no commonly accepted definition of machine
>intelligence (which would seem to be a crucial definition in AI, IMHO).

I think it is useful to bear in mind that "intelligence" is a _social_
construct. We can identify particular characters which are associated
with it, and we may be able to measure those. (For example, one of the
old intelligence tests identified knowing that Crisco (sp?) is a cooking
oil as a component of intelligence.) It is _NOT_ the responsibility of
AI people to define "human intelligence". It is the job of sociologists
to determine how the notion of "intelligence" is deployed in various
cultures, and of psychologists to study whatever aspects turn out to be
based on mental characteristics of the individual.

The field called "Machine Intelligence" or "Artificial Intelligence" is
something which originated in a particular related group of cultures and
took the "folk" notion of "intelligence" as its starting point. We wave
our hands a bit, and say "you know how smart people are, and how dumb
machines are, well, we want to make machines smarter."
At some point we
will declare victory, and whatever we have at that point, _that_ will be
the definition of "machine intelligence". ("Intelligent" is already used
to mean "able to perform the operations of a computer", as is "smart" in
the phrase "smart card".)

Let's face it, 13th-century philosophers didn't have a definition of "mass",
"potential field", "tensor", or even "hadron" when they started out trying
to make sense of motion. They used the ordinary language they had. The
definitions came _last_.

There are at least two approaches to AI, which may be caricatured as
(1) "Let's build a god"
(2) "Let's build amplifiers for the mind"
I belong to the second camp: I don't give a Continental whether we end
up with "machine intelligences" or not, just so long as we end up with
cognitive tools which are far more intelligible to humans than what we
have now. For the first camp, the possibility of "inhuman" machine
intelligences is of interest. It would definitely be a kind of success.
For the second camp, something which is not close enough to the human
style to be readily comprehended by an ordinary human would be an utter
failure.

We are still close enough to the beginnings of AI (whatever that is) that
both camps can pursue their goals by similar means, and have useful things
to say to each other, but don't confuse them!

------------------------------

Date: 23 Nov 88 11:49 EST
From: SDEIBEL%ZEUS.decnet@ge-crd.arpa
Subject: What the heck is intelligence and should we care?


In Vol8 Issue 131 of the BITNET distribution of AILIST, Nick Taylor
mentioned the problem of defining intelligence. This is indeed a problem:
What really are we talking about when we set ourselves off from the
"animals", etc? I'm not foolish enough to pretend I have any answers but
did find some interesting ideas in Ray Jackendoff's book "Consciousness
and the Computational Mind".

Jackendoff suggests (in Chapter 2, I believe) that one fundamental
characteristic of intelligence that separates the actions of humans (and
possibly, other animals) from non-intelligent systems/animals/etc is the
way in which components of intelligent entities interact. The matter
of interest in intelligent entities is the way in which independently
acting sub-parts (e.g. neurons) interact and the way in which the states
of these sub-parts combine combinatorially. On the other hand, the matter
of interest in non-intelligent entities (e.g. a stomach) is the way in
which the actions of sub-parts (e.g. secreting cells) SUM into a coherent
whole.
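
To make the contrast concrete, here is a minimal sketch (in Python,
with idealized binary units; the example is mine, not Jackendoff's):
n sub-parts whose states combine combinatorially admit 2^n distinct
joint states, while n sub-parts whose outputs merely SUM admit only
n + 1 distinguishable totals.

    def combinatorial_states(n):
        """Joint states of n independent binary sub-parts (e.g. idealized
        neurons): each extra unit doubles the number of configurations."""
        return 2 ** n

    def summed_states(n):
        """Distinguishable outputs when n binary sub-parts only sum into
        a whole (e.g. secreting cells): only the total 0..n matters."""
        return n + 1

    for n in (10, 100):
        print(n, combinatorial_states(n), summed_states(n))

Already at n = 100 the combinatorial space (2^100 states) dwarfs the
summed one (101 states), which is one concrete reading of the
"complexity gap" mentioned below.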

While vague, this idea of intelligence as arising from complexity and
the interaction of independent units seemed interesting to me in that it
offers a nice and simple general description of intelligence. Oh, yes,
it could start to imply that computers are intelligent, etc., etc., but one
must not forget the complexity gap between the brain and the most complex
computers in existence today! Rather than wrestle with the subtleties and
complexities of words like "intelligence" (among others), it might be better
to accept the fact that we may never be able to decide what intelligence is.
How about "The sum total of human cognitive abilities" and forget about it
to concentrate on deciding how humans might achieve some of their cognitive
feats? Try deciding what we really mean when we say "bicycle" and you'll run
into monumental problems. Why should we expect to be able to characterise
"intelligence" any more easily?

Stephan Deibel (sdeibel%zeus.decnet@ge-crd.arpa)

------------------------------

Date: 25 Nov 88 08:55:39 GMT
From: tramp!hassell@boulder.colorado.edu (Christopher Hassell)
Subject: Re: Intelligent Displacement of Dirt (was: Re: Artificial
Intelligence and Intelligence)

In article <4561@phoenix.Princeton.EDU> eliot@phoenix.Princeton.EDU
(Eliot Handelman) writes:

>What I've come to admire as intelligence is the capacity to understand the
>nature of one's limitations, and through that understanding to construct
>alternative approaches to whichever goal one has undertaken to achieve.
>Where real intelligence begins, I think, is the capacity to apply this idea
>to itself, that is, the capacity to assess the machinery of discrimination
>and criticism itself. I surmise that a finite level of recursion is sufficient
>to justify intelligent behaviour.
>
>As an example, supposing that my goal is to displace an enormous pile of dirt
>in the course of an afternoon. I may know that it takes me an afternoon to
>displace a fraction of the total amount. The questions are, how would I know
>this if I haven't tried, and how do I arrive at the idea of a shovel. I invite
>discussion of this matter.

On the whole subject, this does appear to be one of the better definitions
of intelligence because it is self-propagating (it'll get smarter over time).

I still believe that this analysis, though requiring agile thought, isn't
even attempted by most of us 'intelligent' beings. We have our general
inference mech's to say .. "well that possibility is still not tried" or
"The outside world opened that option etc.." .. not too terribly difficult.
Myself, I am a pragmatist and find sufficient evidence for `getting'
intelligence from the outside world given a critical mass of inherently
important initial syllogisms (i.e. the original `how to learn' questions).

I throw a verbose attempted `solution' to the world in this:
One realizes the fact of
a homogeneous material (dirt) needing `transport' from one place
to another, the motor recognition of gravity and its effect
on the dirt (needing a bowl-like container to 'move' it),
and the inability of anything DIRECTLY equalling the task
(no BIG auto-digging-and-moving-and-dumping-bowls to control).

From this comes the reduction: given the ability to 'integrate' over
time the more human-sized act of moving 'some' dirt (a homogeneous material),
one requires the ability to break down this inhuman goal to a normal one.
(This state can be better than the original state .. so try it.)
(This does come from some recognition of being able to manipulate dirt at all.)

Hands are the first suggestion, but upon experimentation (remembrance too)
one gets "bored" <that beautifully intelligent perception>.
<hope and need for something that works "better">
Upon this the extrapolation of 'what holds dirt' goes on towards
other objects; this, mixed with handiness, would lead to a shovel and
maybe even a wheelbarrow (a larger 'bowl', but one that can't
be used directly to get dirt with).

YES, this is only break-the-problem-down-into-manageable-sub-parts, but
it is a general version: with X "resources", find a way Y fits into
them upon a thing called an attempt (Y being the problem).
(Yes, "resources" are nice and static too. Just change the problem
to a set of responses that must propagate into themselves.)

I hope this gets some opinions (not all of them unfavorable?? /:-)
--------------------------------------------------------------------------
In any situation the complete redefinition of the problem *IS* the answer
itself, ... so let's get redefining. :-)
{sunybcs, ncar, nbires}!boulder!tramp!hassell ## and oh so much of it ##
#### C. H. ####

------------------------------

Date: 30 Nov 88 18:04:02 GMT
From: uwslh!lishka@speedy.wisc.edu (Fish-Guts)
Subject: Re: The difference between machine and human intelligence
(was: AI and Intelligence)

In article <960@dgbt.uucp> thom@dgbt.uucp (Thom Whalen) writes:
>From article <401@uwslh.UUCP>, by lishka@uwslh.UUCP (Fish-Guts):
>> I propose that a machine without human-like senses cannot "understand"
>> many ideas and imagery the way a human does, simply because it will
>> not be able to perceive its surroundings in the same way as a human.
>> Any comments?
>
>Do you believe that Helen Keller "understood many ideas and imagery the
>way a human does"? She certainly lacked much of the sensory input that
>we normally associate with intelligence.
>
>Thom Whalen

I do not believe she *perceived* the world as most people with
full senses do. I do believe she "understood many ideas and imagery"
the way humans do because she had (1) touch, (2) taste, and (3)
olfactory senses (she was not able to hear or see, if I remember
correctly), as well as other internal sensations (i.e. sickness, pain,
etc.). The way I remember it, she was taught to speak by having her
"feel" the vibrations of her teacher's throat as words were said while
associating the words with some sensation (i.e. the "feeling" of
water as it ran over her hands). Also (and this is a highly personal
judgement) I think the fact that she was a human, with a human nervous
system and human reactions to other sensations (i.e. a sore stomach,
human sicknesses, etc.), also added to her "human understanding."

.oO Chris Oo.--
Christopher Lishka ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
Wisconsin State Lab of Hygiene lishka%uwslh.uucp@cs.wisc.edu
Immunology Section (608)262-1617 lishka@uwslh.uucp

"
I'm not aware of too many things...
I know what I know if you know what I mean"
-- Edie Brickell & the New Bohemians

------------------------------

End of AIList Digest
********************
