AIList Digest           Wednesday, 29 Jun 1988     Volume 7 : Issue 46 

Today's Topics:

Philosophy:

replicating the brain with a Turing machine
Deep Thought.
possible value of AI
H. G. Wells
Who else isn't a science?
metaepistemology
questions and answers about meta-epistemology

----------------------------------------------------------------------

Date: Sat, 25 Jun 88 21:16 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: replicating the brain with a Turing machine

Distribution-File:
AILIST@AI.AI.MIT.EDU

In AIList Digest V7 #29, agate!garnet!weemba@presto.ig.com (Obnoxious
Math Grad Student) writes:

>In article <517@dcl-csvax.comp.lancs.ac.uk>, simon@comp (Simon Brooke) writes:
>>[...]
>>If all this is so, then it is possible to exactly reproduce the workings
>>of a human brain in a [Turing machine].
>
>Your argument was pretty slipshod. I for one do not believe the above
>is even possible in principle.

Why? You must, or at least should, have a basis for that opinion.

One possibility I can think of is the dualist position: we have a
spirit but don't know how to make a machine with one.

Any other Dualists out there?

Andy Ylikoski

------------------------------

Date: Sun, 26 Jun 88 13:10:51 +0100
From: "Gordon Joly, Statistics, UCL"
<gordon%stats.ucl.ac.uk@ESS.Cs.Ucl.AC.UK>
Subject: Deep Thought.

Kurt Thearling quotes a quote in AIList Digest V7 #45.

> An interesting quote from the article is "Fredkin believes that
> the universe is very literally a computer and that it is being
> used by someone, or something, to solve a problem."


Excuse my ignorance, but I am not able to judge who is plagiarising
whom. Douglas Adams invented a computer which does just this in "The
Hitch Hiker's Guide to the Galaxy". Most people regard this novel as a
work of fiction.

Gordon Joly.

------------------------------

Date: 26 Jun 88 15:41:38 GMT
From: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Reply-to: uwvax!uwslh!lishka@rutgers.edu (Fish-Guts)
Subject: Re: possible value of AI


In a previous article, DJS%UTRC@utrcgw.utc.COM writes:
>Gilbert Cockton writes:
>
>"... Once again, what the hell can a computer program tell us about
>ourselves? Secondly, what can it tell us that we couldn't find out by
>studying people instead?"
>
>What do people use mirrors for? What the hell can a MIRROR tell us about
>ourselves? Secondly, what can it tell us that we couldn't find out by
>studying people instead?

Personally, I think everything can tell us something about
ourselves, be it mirror, computer, or rock. Maybe it depends on what
one expects to find?

> Isn't it possible that a computer program could have properties
>which might facilitate detailed self analysis? I believe some people
>have already seen the dim reflection of true intelligence in the primitive
>attempts of AI research. Hopefully all that is needed is extensive
>polishing and the development of new tools.

Whether or not there has been a "dim reflection of true
intelligence in the primitive attempts of AI research" is an opinion,
varying greatly with whom you talk to. I would think that there are
"some people" who have already seen "true intelligence" (however
bright or dim) in anything and everything. Again, their opinions.

> David Sirag
> UTRC

The above are my opinions, although they might be those of my
cockatiels as well.

-Chris
--
Christopher Lishka | lishka@uwslh.uucp
Wisconsin State Lab of Hygiene | lishka%uwslh.uucp@cs.wisc.edu
Immunology Section (608)262-1617 | ...!{rutgers|ucbvax|...}!uwvax!uwslh!lishka
"...Just because someone is shy and gets straight A's does not mean they won't
put wads of gum in your arm pits."

- Lynda Barry, "Ernie Pook's Commeek: Gum of Mystery"

------------------------------

Date: Sun, 26 Jun 88 13:59:24 -0400 (EDT)
From: David Greene <dg1v+@andrew.cmu.edu>
Subject: Re: H. G. Wells

>I was disappointed to see no reaction to John Cugini's quotation from
>H. G. Wells. Is no one willing to admit that things haven't changed
>since 1906?

Of course things have changed...

there are now far more than 4000 "scientists" in Washington, and
Herbert George Wells is dead.

------------------------------

Date: 26 Jun 88 18:17:37 GMT
From: pasteur!agate!garnet!weemba@ames.arpa (Obnoxious Math Grad
Student)
Subject: Re: Who else isn't a science?

In article <????>, now expired here, ???? asked me for references. I
find this request strange, since at least one of my references was in
the very article being replied to, although not spelled out as such.

Anyway, let me recommend the following works by neurophysiologists:

G M Edelman _Neural Darwinism: The Theory of Neuronal Group Selection_
(Basic Books, 1987)

C A Skarda and W J Freeman, "How brains make chaos in order to make
sense of the world", _Behavioral and Brain Sciences_ (1987) 10:2, pp
161-195.

These researchers start by looking at *real* brains, *real* EEGs, they
work with what is known about *real* biological systems, and derive very
intriguing connectionist-like models. To me, *this* is science.

GME rejects all the standard categories about the real world as the
starting point for anything. He views brains as--yes, a Society of
Mind--but
in this case a *biological* society whose basic unit is the neuronal group,
and that the brain develops by these neuronal groups evolving in classical
Darwinian competition with each other, as stimulated by their environment.
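
To get a feel for the selectionist idea, here is a toy Python sketch
(my own construction, NOT Edelman's actual model): a population of
"neuronal groups" with random stimulus preferences, in which groups
that respond to the current stimulus are strengthened, the rest slowly
decay, and the weakest group is replaced by a weakened copy of the
strongest.

    import random

    random.seed(0)
    N_GROUPS, N_STIMULI, ROUNDS = 20, 4, 200

    # Each group: [preferred stimulus, connection strength].
    groups = [[random.randrange(N_STIMULI), 1.0] for _ in range(N_GROUPS)]

    for _ in range(ROUNDS):
        stimulus = random.randrange(N_STIMULI)   # the environment speaks
        for g in groups:
            if g[0] == stimulus:
                g[1] *= 1.05                     # amplify the responders
            else:
                g[1] *= 0.99                     # let the others decay
        # "Darwinian" replacement: the weakest group is overwritten by
        # a weakened copy of the strongest.
        groups.sort(key=lambda g: g[1])
        groups[0] = [groups[-1][0], groups[-1][1] * 0.5]

    print("preferences of the strongest groups:",
          sorted(g[0] for g in groups[-5:]))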

CAS & WJF have developed a rudimentary chaotic model based on the study
of olfactory bulb EEGs in rabbits. They hooked together actual ODEs with
actual parameters that describe actual rabbit brains, and get chaotic,
EEG-like results.
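
To see in miniature what "hooking together ODEs and getting chaotic,
EEG-like output" means, here is a Python sketch. It integrates the
well-known Lorenz system, which is only a stand-in: it is NOT the
Skarda/Freeman olfactory-bulb model, whose actual equations and
parameters are given in their paper.

    def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def rk4_step(f, s, dt):
        # One fourth-order Runge-Kutta step.
        k1 = f(s)
        k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
        k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
        k4 = f(tuple(a + dt * b for a, b in zip(s, k3)))
        return tuple(a + dt / 6.0 * (p + 2 * q + 2 * r + w)
                     for a, p, q, r, w in zip(s, k1, k2, k3, k4))

    state, dt, trace = (1.0, 1.0, 1.0), 0.01, []
    for _ in range(5000):
        state = rk4_step(lorenz, state, dt)
        trace.append(state[0])        # record one "channel"

    # The recorded channel is deterministic yet aperiodic -- it never
    # settles into a fixed point or a clean cycle.
    print(trace[-5:])
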
------------------------------------------------------------------------
In article <34227@linus.UUCP>, marsh@mitre-bedford (Ralph J. Marshall) writes:
> "The ability to learn or understand or to deal with new or
> trying situations."
>
>I'm not at all sure that this is really the focus of current AI work,
>but I am reasonably convinced that it is a long-term goal that is worth
>pursuing.

Well, sure. So what? Everyone's in favor of apple pie.
------------------------------------------------------------------------
In article <2618@mit-amt.MEDIA.MIT.EDU>, bc@mit-amt (bill coderre) writes:

>Oh boy. Just wonderful. We have people who have never done AI arguing
>about whether or not it is a science [...]

We've also got, I think, a lot of people here who've never studied the
philosophy of science. Join the crowd.

>May I also inform the above participants that a MAJORITY of AI
>research is centered around some of the following:

>[a list of topics]

Which sure sounded like programming/engineering to me.

> As it happens, I am doing simulations of animal
>behavior using Society of Mind theories. So I do lots of learning and
>knowledge acquisition.

Well good for you! But are you doing SCIENCE? As in:

If your simulations have only the slightest relevance to ethology, is your
advisor going to tell you to chuck everything and try again? I doubt it.

ucbvax!garnet!weemba Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: Mon, 27 Jun 88 09:18:11 EDT
From: csrobe@icase.arpa (Charles S. Roberson)
Subject: Re: metaepistemology

Assume the "basic structure of the world is unknowable"
[JMC@SAIL.Stanford.edu] and that we can only PERCEIVE our
world, NOT KNOW that what we perceive is ACTUALLY how the
world is.

Now imagine that I have created an agent that interacts
with *our* world and which builds models of the world
as it PERCEIVES it (via sensors, nerves, or whatever).

My question is this: Where does this agent stand, in
relation to me, in its perception of reality? Does it
share the same level of perception that I 'enjoy' or is
it 'doomed' to be one level removed from my world (i.e.
is its perception inextricably linked to my perception
of the world, since I built it)?

Assume now that the agent is so doomed. Therefore, it
may perceive things that are inconsistent with the world
(though we may never know it) but are consistent with
*my* perception of the world.

Does this imply that "true intelligence" is possible
if and only if an agent's perception is not nested
in the perception of its creator? I don't think so.
If it is true that we cannot know the "basic structure of
the world", then our actions are based solely on our
perceptions and are independent of the reality of the
world.

I believe we all accept perception as a vital part of an
intelligent entity. (Please correct me if I am wrong.)
However, a flawed perception does not make the entity any
less intelligent (does it?). What does this say about
the role of perception in intelligence? It has to be
there, but it doesn't have to function free of original
bias?

Perhaps we have just created an agent that perceives
freely but it can only perceive a sub-world that I
defined based on my perceptions. Could it ever be
possible to create an agent that perceives freely and
that does not live in a sub-world?

-chip
+-------------------------------------------------------------------------+
|Charles S. Roberson ARPANET: csrobe@icase.arpa |
|ICASE, MS 132C BITNET: $csrobe@wmmvs.bitnet |
|NASA Langley Rsch. Ctr. UUCP: ...!uunet!pyrdc!gmu90x!wmcs!csrobe|
|Hampton, VA 23665-5225 Phone: (804) 865-4090 |
+-------------------------------------------------------------------------+

------------------------------

Date: Mon, 27 Jun 88 20:57 O
From: <YLIKOSKI%FINFUN.BITNET@MITVMA.MIT.EDU>
Subject: questions and answers about meta-epistemology

Distribution-File:
AILIST@AI.AI.MIT.EDU

Here are questions by csrobe@icase.arpa (Charles S. Roberson) and my
answers to them:

>Assume the "basic structure of the world is unknowable"
>[JMC@SAIL.Stanford.edu] and that we can only PERCEIVE our
>world, NOT KNOW that what we perceive is ACTUALLY how the
>world is.
>
>Now imagine that I have created an agent that interacts
>with *our* world and which builds models of the world
>as it PERCEIVES it (via sensors, nerves, or whatever).
>
>My question is this: Where does this agent stand, in
>relation to me, in its perception of reality? Does it
>share the same level of perception that I 'enjoy' or is
>it 'doomed' to be one level removed from my world (i.e.
>is its perception inextricably linked to my perception
>of the world, since I built it)?

It has the perceptual and inferencing capabilities you designed and
implemented, unless you gave it some kind of self-rebuilding or
self-improving capability. Thus its perception is linked to your
world.
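
A small Python illustration of that linkage (my own construction, not
from the discussion): the agent can only reach the world through the
sensor its designer built, so its model inherits the sensor's limits,
and no amount of evidence-gathering recovers what the sensor cannot
pass.

    import random

    random.seed(1)
    TRUE_VALUE = 42.0                  # the "unknowable" world

    def designed_sensor(x):
        # The designer's choices, frozen into the agent: additive
        # noise plus a hard ceiling at 10.
        noisy = x + random.gauss(0.0, 1.0)
        return min(noisy, 10.0)

    readings = [designed_sensor(TRUE_VALUE) for _ in range(1000)]
    agent_model = sum(readings) / len(readings)

    print("the agent's model of the world:", round(agent_model, 2))
    # Prints roughly 10.0 -- the sensor's ceiling -- not 42.0.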

>Does this imply that "true intelligence" is possible
>if and only if an agent's perception is not nested
>in the perception of its creator? I don't think so.

I also don't think so. The limitation that the robot's perception is
linked to its designer's is, I think, inessential.

>I believe we all accept perception as a vital part of an
>intelligent entity. (Please correct me if I am wrong.)

Perception is essential: all observation of reality takes
place by means of perception.

>However, a flawed perception does not make the entity any
>less intelligent (does it?). What does this say about
>the role of perception to intelligence? It has to be
>there but it doesn't have to function free of original
>bias?

A flawed perception can be lethal, for example to an animal.
Perception is a necessary requirement.

It can be argued, though, that all human perception is biased (our
education influences how we interpret that which we perceive).

>Perhaps, we have just created an agent that perceives
>freely but it can only perceive a sub-world that I
>defined based on my perceptions. Could it ever be
>possible to create an agent that perceives freely and
>that does not live in a sub-world?

Yes, at least if the agent has the capability to extend itself, for
example by being able to redesign and rebuild itself. How much
computational power in the Turing machine sense this capability
requires is an interesting theoretical question which may already have
been studied by the theoreticians out there.

Andy Ylikoski

------------------------------

End of AIList Digest
********************
