AIList Digest            Tuesday, 14 Jun 1988      Volume 7 : Issue 29 

Today's Topics:

Philosophy and AI

----------------------------------------------------------------------

Date: 9 Jun 88 04:05:36 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: Me and Karl Kluge (no flames, no insults, no abuse)

I see that Gilbert Cockton is still judging the quality of AI by his
statistical survey of bibliographies in AAAI and IJCAI proceedings.
In the hope that the rest of us can agree on the speciousness of such arguments,
I shall try to take a more productive approach.
In article <1312@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>
>The point I have been making repeatedly is that you cannot study human
>intelligence without studying humans. John Anderson and his paradigm
>partners and Vision apart, there is a lot of AI research which has
>never been near a human being. Once again, what the hell can a computer
>program tell us about ourselves? Secondly, what can it tell us that we
>couldn't find out by studying people instead?

Let us consider a specific situation. When we study a subject like physics,
there is general agreement that a good textbook must include not only an
exposition of fundamental principles but also a few examples of solved
problems. Why are these examples of benefit to the student? It would
appear that he uses them as some sort of a model (perhaps the basis for
analogical reasoning) when he starts doing assigned problems; but how
does he know when an example is the right one to draw upon? The underlying
question is this: HOW DOES KNOWLEDGE OF SUCCESSFULLY SOLVED PROBLEMS
ENHANCE OUR ABILITY TO SOLVE NEW PROBLEMS?

Now, the question to Mr. Cockton is: What do all those researchers who
don't spend so much time with computer programs have to tell us? From what
I have been able to discern, the answer is: NOT VERY MUCH. Meanwhile, there
are a variety of AI projects which have begun to address the questions
concerned with what constitutes experiential memory and how it might be
modeled. I am not claiming they have come up with any answers yet, but
I see no more reason to rail against their attempts than to attack attempts
by those who would not sully their investigative efforts with such ugly
artifacts as computer programs.
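By way of illustration, here is a minimal sketch (not drawn from any of the
projects alluded to above; the feature-overlap measure and the toy case
library are invented for the example) of how a store of solved problems
might be consulted when a new problem arrives:

# Each solved problem is stored with a set of descriptive features.
# A new problem retrieves the most similar past case as a candidate
# template for analogical reasoning.

def similarity(features_a, features_b):
    """Crude overlap measure between two feature sets."""
    return len(features_a & features_b)

def retrieve(case_library, new_features):
    """Return the solved case whose features best match the new problem."""
    return max(case_library,
               key=lambda case: similarity(case["features"], new_features))

cases = [
    {"name": "block on an incline",
     "features": {"gravity", "friction", "statics"},
     "solution": "resolve forces along and normal to the plane"},
    {"name": "projectile",
     "features": {"gravity", "kinematics"},
     "solution": "treat horizontal and vertical motion independently"},
]

print(retrieve(cases, {"gravity", "statics", "pulley"})["name"])
# -> block on an incline

Retrieval by crude feature overlap is, of course, only a first step; the
hard question posed above, how the retrieved example actually guides the
new solution, is exactly what remains open.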

------------------------------

Date: 9 Jun 88 09:06:42 GMT
From: mcvax!ukc!dcl-cs!simon@uunet.uu.net (Simon Brooke)
Subject: AI seen as an experiment to determine the existence of
reality

Following the recent debate in this newsgroup about the value of AI, a
thought struck me. It's a bit tenuous....

As I understand it, Turing's work shows that the behaviour of any
computing device can be reproduced by any other. Modern cosmology holds
that:
1] there is a material world.
2] if there is a spiritual world, it's irrelevant, as the
spiritual cannot affect the material.
3] the brain is a material object, and is the organ which largely
determines the behaviour of human beings.

If all this is so, then it is possible to exactly reproduce the workings
of a human brain in a machine (I think Turing actually claimed this, but I
can't remember where).
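
By way of illustration (a toy sketch invented for this posting, not Turing's
own construction), the universality idea is just that one program can
reproduce the behaviour of another machine by treating that machine's
program as data:

def run(program, r0=0, r1=0):
    """Interpret a program for a toy two-register machine."""
    pc, regs = 0, [r0, r1]
    while pc < len(program):
        op, arg = program[pc]
        if op == "inc":                      # increment register arg
            regs[arg] += 1
            pc += 1
        elif op == "dec":                    # decrement register arg (floor 0)
            regs[arg] = max(0, regs[arg] - 1)
            pc += 1
        elif op == "jz":                     # jump to arg if register 0 is zero
            pc = arg if regs[0] == 0 else pc + 1
        elif op == "jmp":                    # unconditional jump
            pc = arg
        else:
            raise ValueError("unknown op: %r" % (op,))
    return regs

# A program for the toy machine: add register 0 into register 1.
add = [("jz", 4), ("dec", 0), ("inc", 1), ("jmp", 0)]
print(run(add, r0=3, r1=2))                  # -> [0, 5]

Nothing in the interpreter depends on what the interpreted program computes;
that indifference is the whole point of the universality claim.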

So AI could be seen as an experiment to determine whether a material world
actually exists. While the generation of a completely successful
computational model of a human brain would not prove the existence of the
material, the continued failure to do so over a long period would surely
prove its non-existence... wouldn't it?


** Simon Brooke **********************************************************
*  e-mail : simon@uk.ac.lancs.comp                                       *
*  surface: Dept of Computing, University of Lancaster, LA1 4YW, UK.     *
*                                                                        *
*  Neural Nets: "It doesn't matter if you don't know how your program    *
*                works, so long as it's parallel"       - R. O'Keefe     *
**************************************************************************

------------------------------

Date: 10 Jun 88 19:13:26 GMT
From: ncar!noao!amethyst!kww@gatech.edu (K Watkins)
Subject: Re: Bad AI: A Clarification

In article <1336@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>
>I do not think there is a field of AI. There is a strange combination
>of topic areas covered at IJCAI etc. It's a historical accident, not
>an epistemic imperative.
>
Of what field(s) is such a statement false? An inventive imagination can
regroup the topics of study and knowledge in a great many ways. Indeed, it
might be very useful to do so more often. (Then again, the cross-tabulating
chore of making sure we lost a minimum of understanding in the transition
would be enormous.)

------------------------------

Date: 11 Jun 88 01:50:33 GMT
From: pasteur!agate!garnet!weemba@ames.arpa (Obnoxious Math Grad
Student)
Subject: Re: Who else isn't a science?

In article <13100@shemp.CS.UCLA.EDU>, bjpt@maui (Benjamin Thompson) writes:
>In article <10510@agate.BERKELEY.EDU> weemba@garnet.berkeley.edu writes:
>>Gerald Edelman, for example, has compared AI with Aristotelian
>>dentistry: lots of theorizing, but no attempt to actually compare
>>models with the real world. AI grabs onto the neural net paradigm,
>>say, and then never bothers to check if what is done with neural
>>nets has anything to do with actual brains.
>
>This is symptomatic of a common fallacy.

No, it is not. You did not catch the point of my posting, embedded in
the subject line.

> Why should the way our brains
>work be the only way "brains" can work? Why shouldn't *A*I workers look
>at weird and wonderful models?

AI researchers can do whatever they want. But they should stop trying
to gain scientific legitimacy from wild unproven conjectures.

> We (basically) don't know anything about
>how the brain really works anyway, so who can really tell if what they're
>doing corresponds to (some part of) the brain?

Right. Or if they're all just hacking for the hell of it.

But if they are in fact interested in the brain, then they could periodically
check back at what is known about real brains now and then. Since they
don't, I think Edelman's "Aristotelian dentistry" criticism is perfectly
valid.

In article <3c84f2a9.224b@apollo.uucp>, nelson_p@apollo (Peter Nelson) writes,
replying to the same article:

> I don't see why everyone gets hung up on mimicking natural
> intelligence. The point is to solve real-world problems.

This makes for an engineering discipline, not a science. I'm all for
AI research in methods of solving difficult ill-defined problems. But
calling the resulting behavior "intelligent" is completely unjustified.

Indeed, many modern dictionaries now give an extra meaning to the word
"intelligent", thanks in part to AI's decades of abuse of the term:
it means "able to perform some of the functions of a computer".

Ain't it wonderful? AI succeeded by changing the meaning of the word.

ucbvax!garnet!weemba Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: 11 Jun 88 07:00:13 GMT
From: oodis01!uplherc!sp7040!obie!wsccs!dharvey@tis.llnl.gov (David
Harvey)
Subject: Re: AI and Sociology

In article <1301@crete.cs.glasgow.ac.uk>, Gilbert Cockton writes:
> It is quite improper to cut out a territory which deliberately ignores
> others. In this sense, psychology and sociology are guilty like AI,
> but not nearly so much, as they have territories rather than a
> territory. Still, the separation of sociology from psychology is
> regrettable, but areas like social psychology and cognitive sociology
> do bridge the two, as do applied areas such as education and management.
> Where are the bridges to "pure" AI? Answer that if you can.
>
You are correct in asserting that these are the bridges between
Psychology and Sociology, but my limited observation of people in both
groups is that people in Social Psychology rarely poke their heads into
the Sociology department, and people in Cognitive Sociology rarely
interact with the people in Cognitive Psychology. The reason I know is
that I have observed them first-hand while getting degrees in Math and
Psychology. In other words, the bridges are quite superficial, since
the interaction between the two groups is minimal. Regarding this
situation, I am referring to the status quo as it existed at the
University of Utah where I got my degrees and at Brigham Young
University which I visited fairly often. And in answer to your demands
of AI, perhaps you had better take a very good look at how good social
scientists are at answering questions about thinking. They are making
progress, but it is not in the form of a universal theory, a la Freud.
In other words, they are snipping away at this little idea and that
little paradigm, just like AI researchers are doing.

> Again, I challenge AI's rejection of social criticisms of its
> paradigm. We become what we are through socialisation, not programming
> (although some teaching IS close to programming, especially in
> mathematics). Thus a machine can never become what we are, because it
> cannot experience socialisation in the same way as a human being. Thus
> a machine can never reason like us, as it can never absorb its model of
> reality in a proper social context. Again, there are well documented
> examples of the effect of social neglect on children. Machines will not
> suffer in the same way, as they only benefit from programming, and not
> all forms of human company. Anyone who thinks that programming is
> social interaction is really missing out on something (probably social
> interaction :-))

You obviously have not installed a new operating system on a VAX only to
discover that it has serious bugs. Down comes the machine to the >>>
prompt, and the process of starting the machine up with the old OS that
worked begins. Since the machine does not have feelings (AHA!) it
doesn't care, but it certainly was not beneficial to its performance.
Nor does a student's program with severe bugs that cause core dumps
help either. Then there is the case of our electronic news feed being
down for several weeks. When it finally resumed operation it completely
filled the process table, making it impossible to even sign on as
super-user and do an 'ls'! The kind of programming that allowed it to
spawn that many child processes is not my idea of something beneficial!
In other words, bad programming is to a certain extent an analog to
social neglect. Running a machine in bad physical conditions and
physically abusing a person are also similar. Yes, you can create
enough havoc with Death Valley heat to totally kill a computer!
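
For what it is worth, the defensive pattern whose absence caused that
incident can be sketched in a few lines (an invented illustration for a
Unix-like system, not the actual news software): cap the number of
concurrent children and reap them as they finish, so a backlog can never
exhaust the process table.

import os
import time

MAX_CHILDREN = 8                 # cap chosen for the example
pending = list(range(40))        # stand-in for the backlog of queued batches
active = 0                       # children forked but not yet reaped

def process_batch(batch_id):
    time.sleep(0.1)              # placeholder for real work on one batch

while pending or active:
    # Reap any children that have finished (non-blocking).
    while active:
        pid, _status = os.waitpid(-1, os.WNOHANG)
        if pid == 0:             # no child has exited yet
            break
        active -= 1
    # Fork only while under the cap; otherwise wait for a child to exit.
    if pending and active < MAX_CHILDREN:
        batch = pending.pop()
        if os.fork() == 0:       # child process: do the work and exit
            process_batch(batch)
            os._exit(0)
        active += 1              # parent process: count the new child
    else:
        time.sleep(0.05)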
>
> RECOMMENDED READING
>
> Jerome Bruner on MACOS (Man: A Course of Study), for the reasoning
> behind interdisciplinary education.
>
^^^ No qualms with the ideas presented in this book
>
> Skinner's "Beyond Freedom and Dignity" and the collected essays in
> response to it, for an understanding of where behaviourism takes you
> ("pure" AI is neo-behaviourist, it's about little s-r modelling).
>
^^^ And I still think his model has lots of holes in it!

dharvey @ WSCCS (David A Harvey)

------------------------------

Date: 11 Jun 88 13:49:09 GMT
From: trwrb!aero!venera.isi.edu!smoliar@bloom-beacon.mit.edu
(Stephen Smoliar)
Subject: Re: Bad AI: A Clarification

In article <1336@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert
Cockton) writes:
>But when I read misanthropic views of Humanity in AI, I will reply.

Do you mean that all your wholesale railing against AI over the last
several weeks (and it HAS been pretty wholesale) is just a response
to "misanthropic views of Humanity?" Perhaps we may have finally
penetrated to the root of the problem. I wish to go on record as
observing that I have yet to read a paper on AI which has passed
through peer review which embodies any sense of misanthropy whatsoever,
and that includes all those conference proceedings which Mr. Cockton
wishes to take as his primary source of knowledge about the field.
There is certainly a lot of OBJECTIVITY, but I have never felt that
such objectivity could be confused with misanthropy. As I said before,
stop biting the fingers long enough to look where they are pointing!

------------------------------

Date: 11 Jun 88 19:15:20 GMT
From: well!sierch@lll-lcc.llnl.gov (Michael Sierchio)
Subject: Re: Who else isn't a science?


I agree: I think anyone should study whatever s/he likes -- after all, what
matters but what you decide matters? I also agree that, simply because
you are interested in something, you shouldn't expect me to regard your
study as important or valid.

AI suffers from the same syndrome as many academic fields -- dissertations
are the little monographs that are part of the ticket to respectability in
academe. The big, seminal questions (seedy business, I know) remain
unanswered, while the rush to produce results and get grants and make $$
(or pounds, the symbol for which...) is overwhelming. Perhaps we would not
be complaining if the study of intelligence and automata, and all the
theoretical foundations for AI work, received their due. It HAS become an
engineering discipline, if not for the nefarious reasons I mentioned, then
simply because the gratification that comes from RESULTS is easier to get
than answers to the nagging questions about what we are, and what intelligence
is, etc.

Engineering has its pleasures, and I wouldn't deny them to anyone. But to
those who hold fast to the "?" and abjure the "!", I salute you.
--
Michael Sierchio @ SMALL SYSTEMS SOLUTIONS
2733 Fulton St / Berkeley / CA / 94705 (415) 845-1755

sierch@well.UUCP {..ucbvax, etc...}!lll-crg!well!sierch

------------------------------

Date: 11 Jun 88 20:20:59 GMT
From: agate!garnet!weemba@presto.ig.com (Obnoxious Math Grad Student)
Subject: Re: AI seen as an experiment to determine the existence of
reality

In article <517@dcl-csvax.comp.lancs.ac.uk>, simon@comp (Simon Brooke) writes:
>[...]
>If all this is so, then it is possible to exactly reproduce the workings
>of a human brain in a [Turing machine].

Your argument was pretty slipshod. I for one do not believe the above
is even possible in principle.

ucbvax!garnet!weemba Matthew P Wiener/Brahms Gang/Berkeley CA 94720

------------------------------

Date: 13 Jun 88 03:46:25 GMT
From: quintus!ok@unix.sri.com (Richard A. O'Keefe)
Subject: Re: Constructive Question (Texts and social context)

In article <1335@crete.cs.glasgow.ac.uk> Gilbert Cockton writes:
>IKBS programs are essentially private readings which freeze, despite
>the animation of knowledge via their inference mechanisms (just a
>fancy index really :-)). They are only sensitive to manual reprogramming,
>a controlled intervention. They are unable to reshape their knowledge
>to fit the current interaction AS HUMANS DO. They are insensitive,
>intolerant, arrogant, overbearing, single-minded and ruthless. Oh,
>and they usually don't work either :-) :-)

This is rather desperately anthropomorphic. I am surprised to see Gilbert
Cockton, of all people, ascribing such human qualities to programs.

There is no reason why a program cannot learn from its input; as a
trivial example, Rob Milne's parser for PRESS could acquire new words
from the person typing to it. What does it mean "to reshape one's
knowledge to fit"? Writing programs which adapt to the particular
client has been an active research area in AI for several years now. As
for insensitivity &c, if we could be given some examples of what kinds
of IKBS behaviour Gilbert Cockton interprets as having these qualities,
and of otherwise similar behaviours not so interpreted, perhaps we could
get some constructive criticism out of this.
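
As a minimal sketch of that kind of behaviour (an invented toy, not Milne's
actual PRESS parser), a program can extend its own lexicon from the
dialogue, so that later input using a new word needs no reprogramming:

# Words the program does not recognise are queried once and remembered.
lexicon = {"solve": "verb", "factorise": "verb", "the": "det", "this": "det",
           "equation": "noun"}

def categorise(word):
    """Look the word up; if unknown, ask the user once and remember it."""
    if word not in lexicon:
        reply = input("I don't know '%s'. What part of speech is it? " % word)
        lexicon[word] = reply.strip().lower()
    return lexicon[word]

def parse(sentence):
    """Tag each word with its (possibly newly acquired) category."""
    return [(word, categorise(word)) for word in sentence.lower().split()]

print(parse("Solve this equation"))
print(parse("Factorise the polynomial"))   # 'polynomial' is acquired here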

The fact that "knowledge", once put into an IKBS, is fossilized, bothers
me. I am so far in sympathy with Cockton as to think that any particular
set of facts & rules is most valuable when it is part of a tradition/
practice/social-context for interpreting, acquiring, and revising such
facts & rules, and I am worried that chunks of "knowledge", once handed
over to computers, may be effectively lost to human society. But this
is no different from the human practice of abdicating responsibility to
human experts, who are also insensitive, &c. Expert systems which are
designed to explain (in ICAI style) the knowledge in them as well as to
deploy it may in fact be a positive social factor.

Instead of waffling on in high-level generalisations, how would it be if
one particular case were to be examined? I propose that the social effect
of Nolo Press's "WillWriter" should be examined (or a similar product).
It is worth noting that the ideology of Nolo Press is quite explicitly to
empower the masses and reduce the power of lawyers. What _might_ such a
program do to society? What _is_ it doing? Do people who use it experience
it as more or less intolerant than a lawyer? And so on. This seems like a
worthy topic for a Masters in Sociology, whatever attitude you take to AI.
(Not that WillWriter is a notable AI program, but it serves the function of
an IKBS.) Has a study like this already been done?

------------------------------

End of AIList Digest
********************
