AIList Digest            Sunday, 18 Dec 1983      Volume 1 : Issue 114 

Today's Topics:
Intelligence - Confounding with Culture,
Jargon - Mental States,
Scientific Method - Research Methodology
----------------------------------------------------------------------

Date: 13 Dec 83 10:34:03-PST (Tue)
From: hplabs!hpda!fortune!amd70!dual!onyx!bob @ Ucb-Vax
Subject: Re: Intelligence = culture
Article-I.D.: onyx.112

I'm surprised that there have been no references to culture in
all of these "what is intelligence?" debates...

The simple fact of the matter is that "intelligence" means very
little outside of a specific cultural reference point. I am not
referring just to culturally-biased vs. non-culturally-biased IQ
tests, although that's a starting point.

Consider someone raised from infancy in the jungle (by monkeys,
for the sake of the argument). What signs of intelligence will
this person show? Don't expect them to invent fire or stone
axes; look how long it took us the first time around. The most
intelligent thing that person could do would be on par with what
we see chimpanzees doing in the wild today (e.g. using sticks to
get ants, etc).

What I'm driving at is that there are two kinds of
"intelligence": there is "common sense and ingenuity" (monkeys,
dolphins, and a few people), and there is "cultural methodology"
(people only).

Cultural methodologies include all of those things that are
passed on to us as a "world-view": for instance, the notion of
wearing clothes, making fire, using arithmetic to figure out how
many people X bags of grain will feed, what spices to use when
cooking, how to talk (!). All of these things were at one time a
brilliant conception in someone's mind. And it didn't catch on
the first time around. Probably not the second or third time
either. But eventually someone convinced other people to try his
idea, and it became part of that culture. And using that as a
context gives other people an opportunity to bootstrap even
further. One small step for a man, a giant leap for his culture.

When we think about intelligence and get impressed by how
wonderful it is, we are looking at its application in a world
stuffed to the gills with prior context that is indispensable to
everything we think about.

What this leaves us with is people trying to define and measure
a hybrid of common sense and culture without noticing that what
they are interested in is actually two different things, plus the
interrelations between those things; no wonder the issue seems so
murky.

For those who may be interested, general systems theory, general
semantics, and epistemology are some fascinating related
subjects.

Now let's see some letters about what "common sense" is in this
context, and about applying that common sense to (cultural) con-
texts. (How recursive!)

------------------------------

Date: Tue, 13 Dec 83 11:24 EST
From: Steven Gutfreund <gutfreund%umass-cs@CSNet-Relay>
Subject: re: mental states

I am very intrigued by Fernando Pereira's last comment:

    Sorry, you missed the point that JMC and then I were making.
    Prigogine's work (which I know relatively well) has nothing to
    say about systems which have to model in their internal states
    equivalence classes of states of OTHER systems. It seems to me
    impossible to describe such systems unless certain sets of
    states are labeled with things like
    "believe(John,have(I,book))". That is, we start associating
    classes of internal states to terms that include mentalistic
    predicates.

I may be missing the point, since I am not sure what "model in
their internal states equivalence classes of states of OTHER
systems" means. But I think you are saying that `reasoning
systems' that encode in their state information about the states
of other systems (or their own) are not covered by Ilya
Prigogine's work.

I think you are engaging in a leap of faith here. What is the
basis for believing that any sort of encoding of the state of
other systems is going on here? I don't think even the
philosophical guard phrase `equivalence class' protects you in
this case.

To continue in my role as sceptic: if you claim that you are
constructing systems that model their internal state (or other
systems' internal states) [or even an equivalence class of those
states], then I will claim that my linear programming model of a
computer parts inventory is also exhibiting `mental reasoning',
since it is modeling the internal states of that computer parts
inventory.

This means that Prigogine's work is operative in the case of
FSA-based `reasoning systems', since they can do no more modeling
of the internal state of another system than a colloidal
suspension, or an inventory control system built by an operations
research person.


- Steven Gutfreund
Gutfreund.umass@csnet-relay

------------------------------

Date: Wed 14 Dec 83 17:46:06-PST
From: PEREIRA@SRI-AI.ARPA
Subject: Mental states of machines

The only reason I have to believe that a system encodes in its states
classifications of the states of other systems is that the systems we
are talking about are ARTIFICIAL, and therefore this is part of our
design. Of course, you are free to say that down at the bottom our
system is just a finite-state machine, but that's about as helpful as
making the same statement about the computer on which I am typing this
message when discussing how to change its time-sharing resource
allocation algorithm.

Besides this issue of convenience, it may well be the case that
certain predicates on the states of another system (or the same
system) are simply not representable within the system. One does
not even need to
go as far as incompleteness results in logic: in a system which has
means to represent a single transitive relation (say, the immediate
accessibility relation for a maze), no logical combination can
represent the transitive closure (accessibility relation) [example due
to Bob Moore]. Yet the transitive closure is causally connected to the
initial relation in the sense that any change in the latter will lead
to a change in the former. It may well be the case (SPECULATION
WARNING!) that some of the "mental state" predicates have this
character, that is, they cannot be represented as predicates over
lower-level notions such as states.
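
As a minimal sketch of the maze example (in Python; the toy graph
and its names are invented purely for illustration), the
transitive closure has to be computed by iterating to a fixpoint,
and any change to the immediate-accessibility relation propagates
to the closure:

    # Sketch: immediate accessibility as a set of edges; the closure
    # is computed by iterating until no new pairs appear.
    def transitive_closure(edges):
        closure = set(edges)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    maze = {("A", "B"), ("B", "C")}
    print(transitive_closure(maze))                 # adds ('A', 'C')
    print(transitive_closure(maze | {("C", "D")}))  # now also ('B', 'D'), ('A', 'D')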

-- Fernando Pereira

------------------------------

Date: 12 Dec 83 7:20:10-PST (Mon)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Mental states of machines
Article-I.D.: dciem.548

Any discussion of the nature and value of mental states in either
humans or machines should include consideration of the ideas of
J.G. Taylor (no relation). In his "Behavioral Basis of Perception"
(Yale University Press, 1962), he sets out mathematically a basis
for changes in perception/behaviour dependent on transitions into
different members of "sets" of states. These "sets" look very like
the mental states referenced in the earlier discussion, and may
be tractable in studies of machine behaviour. They also tie in
quite closely with the recent loose talk about "catastrophes" in
psychology, although they are much better specified than the analogists'
models. The book is not easy reading, but it is very worthwhile, and
I think the ideas still have a lot to offer, even after 20 years.

Incidentally, in view of the mathematical nature of the book, it
is interesting that Taylor was a clinical psychologist interested
initially in behaviour modification.

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 14 Dec 1983 1042-PST
From: HALL.UCI-20B@Rand-Relay
Subject: AI Methods

After listening in on the communications concerning definitions
of intelligence, AI methods, AI results, AI jargon, etc., I'd
like to suggest an alternate perspective on these issues. Rather
than quibbling over how AI "should be done," why not take a close
look at how things have been and are being done? This is more of
a social-historical viewpoint, admitting the possibility that
adherents of differing methodological orientations might well
"talk past each other" - hence the energetic argumentation over
issues of definition. In this spirit, I'd like to submit the
following for interested AILIST readers:

Toward a Taxonomy of Methodological
Perspectives in Artificial Intelligence Research

Rogers P. Hall
Dennis F. Kibler

TR 108
September 1983

Department of Information and Computer Science
University of California, Irvine
Irvine, CA 92717

Abstract

This paper is an attempt to explain the apparent confusion of
efforts in the field of artificial intelligence (AI) research in
terms of differences between underlying methodological perspectives
held by practicing researchers. A review of such perspectives
discussed in the existing literature will be presented, followed by
consideration of what a relatively specific and usable taxonomy of
differing research perspectives in AI might include. An argument
will be developed that researchers should make their methodological
orientations explicit when communicating research results, both as
an aid to comprehensibility for other practicing researchers and as
a step toward providing a coherent intellectual structure which can
be more easily assimilated by newcomers to the field.

The full report is available from UCI for a postage fee of $1.30.
Electronic communications are welcome:

HALL@UCI-20B
KIBLER@UCI-20B

------------------------------

Date: 15 Dec 1983 9:02-PST
From: fc%usc-cse%USC-ECL@MARYLAND
Subject: Re: AIList Digest V1 #112 - science

In my mind, science has always been the practice of using the
'scientific method' to learn. In any discipline, this is used to some
extent, but in a pure science it is used in its purest form. This
method seems to be founded in the following principles:

1 The observation of the world through experiments.

2 Attempted explanations in terms of testable hypotheses - they
must explain all known data, predict as yet unobserved results,
and be falsifiable.

3 The design and use of experiments to test predictions made by these
hypotheses in an attempt to falsify them.

4 The abandonment of falsified hypotheses and their replacement
with more accurate ones - GOTO 2.

Experimental psychology is indeed a science if viewed from this
perspective, so long as hypotheses are made and predictions are
tested with some sort of experiment; the crudity of its statistics
is comparable to that of the statistical models physics used
before it advanced to its current state. Computer science (or
whatever you call it) is also a science in the sense that our
understanding of computers is based on prediction and
experimentation. Anyone who says you don't experiment with a
computer hasn't tried it.

The big question is whether mathematics is a science. I guess
it is, but somehow any system in which you falsify or verify only
on the basis of the assumptions you made leaves me a bit
concerned. Of course we are context-bound in any other science
too, and often can't see the forest for the trees, but on the
other hand, accidental discovery, based on experiments whose
results are unpredictable under the current theory, is not really
possible in a purely mathematical system.

History is probably not a science in the above sense because,
although there are hypotheses with possible falsification, there is
little chance of performing an experiment in the past. Archeological
findings may be thought of as an experiment of the past, but I think
this sort of experiment is of quite a different nature than those that
are performed in other areas I call science. Archeology, by the
way, is probably a science in the sense of my definition not
because of the ability to test hypotheses about the past through
experimental diggings, but because of its constant development
and experimental testing of theory regarding the way nature
changes things over time.
The ability to determine the type of wood burned in an ancient fire and
the year in which it was burned is based on the scientific process that
archeologists use.

Fred

------------------------------

Date: 13 Dec 83 15:13:26-PST (Tue)
From: hplabs!hao!seismo!philabs!linus!utzoo!dciem!mmt @ Ucb-Vax
Subject: Re: Information sciences vs. physical sciences
Article-I.D.: dciem.553

*** This response is routed to net.philosophy as well as the net.ai
where it came from. Responders might prefer to edit net.ai out of
the Newsgroups: line before posting.


    I am responding to an article claiming that psychology and
    computer science aren't sciences. I think that the author is
    seriously confused by his preferred usage of the term
    ``science''.


I'm not sure, but I think the article referenced was mine. In any case,
it seems reasonable to clarify what I mean by "science", since I think
it is a reasonably common meaning. By the way, I do agree with most of
the article that started with this comment, that it is futile to
define words like "science" in a hard and fast fashion. All I want
here is to show where my original comment comes from.

"Science" has obviously a wide variety of meanings if you get too
careful about it, just as does almost any word in a natural language.
But most meanings of science carry some flavour of a method for
discovering something that was not known by a method that others can
repeat. It doesn't really matter whether that method is empirical,
theoretical, experimental, hypothetico-deductive, or whatever, provided
that the result was previously uncertain or not obvious, and that at
least some other people can reproduce it.

I argued that psychology wasn't a science mainly on the grounds that
it is very difficult, if not impossible, to reproduce the conditions
of an experiment on most topics that qualify as the central core of
what most people think of as psychology. Only the grossest aspects
can be reproduced, and only the grossest characterization of the
results can be stated in a way that others can verify. Neither do
theoretical approaches to psychology provide good prediction of
observable behaviour, except on a gross scale. For this reason, I
claimed that psychology was not a science.

Please note that in saying this, I intend in no way to downgrade the
work of practicing psychologists who are scientists. Peripheral
aspects and gross descriptions are susceptible to attack by our
present methods, and I have been using those methods for 25 years
professionally. In a way it is science, but in another way it isn't
psychology. The professional use of the word "psychology" is not that
of general English. If you like to think what you do is science,
that's fine, but remember that the definition IS fuzzy. What matters
more is that you contribute to the world's well-being, rather than
what you call the way you do it.
--

Martin Taylor
{allegra,linus,ihnp4,uw-beaver,floyd,ubc-vision}!utzoo!dciem!mmt

------------------------------

Date: 14 Dec 83 20:01:52-PST (Wed)
From: hplabs!hpda!fortune!rpw3 @ Ucb-Vax
Subject: Re: Information sciences vs. physical sc - (nf)
Article-I.D.: fortune.1978

I have to throw my two bits in:

The essence of science is "prediction". The missing step in the
classic hypothesis-experiment-analysis paradigm presented above
is that "hypothesis" should be read "theory-prediction", giving
theory-prediction-experiment-analysis.

That is, no matter how well the hypothesis explains the current data, it
can only be tested on data that has NOT YET BEEN TAKEN.

Any model with enough free parameters can account for any given
set of data by tweaking the parameters. The trick is: once
calculated, do those parameters then predict as yet unmeasured
data, WITHOUT CHANGING the parameters? ("Predict" means "within a
reasonable/acceptable confidence interval when tested with the
appropriate statistical methods".)
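
A small numerical sketch of this point in Python (the
"measurements" below are synthetic and purely illustrative): a
model with as many parameters as data points matches the data in
hand exactly, yet its fixed parameters typically predict held-out
points far worse than a model of the right order does.

    # Sketch with synthetic data: fit on part of the data, then test
    # the FIXED parameters on points that were not used in the fit.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 12)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)  # true law: y = 2x + 1

    x_fit, y_fit = x[:8], y[:8]     # data already taken
    x_new, y_new = x[8:], y[8:]     # data "not yet taken"

    for degree in (1, 7):
        coeffs = np.polyfit(x_fit, y_fit, degree)  # tweak the parameters...
        pred = np.polyval(coeffs, x_new)           # ...then hold them fixed
        rms = float(np.sqrt(np.mean((pred - y_new) ** 2)))
        print("degree", degree, "prediction error", round(rms, 3))
    # The degree-7 fit passes through every fitted point, yet its
    # predictions for the new points are usually far off; the
    # degree-1 fit predicts them to within the noise.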

Why am I throwing this back into "ai"? Because (for me) the true
test of whether "ai" has become or will become a "science" is
whether its theories/hypotheses can successfully predict (cf.
above) the behaviour of existing "natural" intelligences (whatever
you mean by that: man/horse/porpoise/ant/...).

------------------------------

End of AIList Digest
********************
