AIList Digest            Tuesday, 1 Nov 1988      Volume 8 : Issue 117 

Philosophy:

Oscillating consciousness
What does the brain do between thoughts?
When is an entity conscious?
Bringing AI back home (Gilbert Cockton)
Huberman's 'the ecology of computation' book
Limits of AI

----------------------------------------------------------------------

Date: 21 Oct 88 23:10:45 GMT
From: vdx!roberta!dez@uunet.uu.net (Dez in New York City)
Subject: Re: oscillating consciousness

> Take it as you want. We know that people can not attend to the entire
> environment at once (or, at least that's what the cog. psychologists
> have found).

No, that is not what cognitive psychologists have found. What we have found is:
a) people gain as much information from the environment as their
sensory systems are able to pick up. This is a very large amount
of information; it may well be the entire environment, at least as
far as the environment appears at the sense receptors.

b) people have a limited capacity to reflect or introspect upon the
wide range of information coming in from sensory systems. Various
mechanisms, some sense-specific, some not, operate to draw people's
immediate awareness to information that is important. It is this
immediate awareness that is limited, not perception of, or
attention to, the environment.

Dez - Cognitive Psychologist uunet!vdx!roberta!dez

------------------------------

Date: Mon, 24 Oct 88 11:28:36 PDT
From: lambert@cod.nosc.mil (David R. Lambert)
Subject: What does the brain do between thoughts?

Discussion history (abbreviated; see AIList for detail & sources):
>>> 1) What does the brain do between thoughts?
>>> 2) ... there is no "between thoughts" except for sleep....
>>> 3) [Subjects] reported seeing "randomly" blinking lights blink
IN RHYTHM to a song. Possible concl: consciousness
oscillates.
>>> 4) Other possible concl: we unconsciously attach meaning to
apparently random patterns (e.g., notice those lit on the
beat and disregard others. ... use of tapping, or rubbing
motions to influence pace of communications.... P.S. I'd
like to know what 'oscillating consciousness' is supposed
to mean.

As I recall, there are some nice psycholinguistic "click"
experiments (I don't know the references--about 1973) which show
that the perceived location of a click which actually occurs at a
random time during a spoken sentence migrates to a semantic (or,
perhaps, syntactic) boundary. Perhaps the brain is actually
thinking (processing information) all/most/much of the time. But we
PERCEIVE (or experimentally observe) the brain as thinking
intermittently 1) because we notice only the RESULTS of this
thinking, and 2) do so only when these results become available at
natural (irregularly spaced) breakpoints in the processing.

David R. Lambert
lambert@nosc.mil

------------------------------

Date: 24 October 1988, 20:50:31
From: Stig Hemmer <HEMMER at NORUNIT>
Subject: When is an entity conscious?

First, a short quote from David Harvey:

> But then why in the world am I writing this
>article in response? After all, I have no guarantee that you are a
>conscious entity or not.

>dharvey@wsccs

I think Mr. Harvey touched on an important point here. As I see it, the
question is a matter of definition. We don't know other people to be
conscious, we DEFINE them to be. It is a very useful definition because
other people behave more or less as I do, and I am conscious.

Here it is possible to transfer to programs in two ways:

1) Programs are conscious if they behave as people, i.e. the Turing test.

2) Find the most useful definition. For many people this will mean
defining programs not to be conscious beings, to avoid ethical and
legal problems.

This discussion is therefore fruitless because it concerns basic axioms,
which people can't argue for or against.
-Tortoise

------------------------------

Date: 25 Oct 88 09:24:07 GMT
From: Gilbert Cockton <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Reply-to: Gilbert Cockton <mcvax!cs.glasgow.ac.uk!gilbert@uunet.UU.NET>
Subject: Bringing AI back home (Gilbert Cockton)


In a previous article, Ray Allis writes:
>If AI is to make progress toward machines with common sense, we
>should first rectify the preposterous inverted notion that AI is
>somehow a subset of computer science,
Nothing preposterous at all about this. AI is about applications of
computers, and you can't sensibly apply computers without using computer
science. You can hack together a mess of LISP or PROLOG (and have I
seen some messes), but this contributes as much to our knowledge of
computer applications as a 14-year-old's first 10,000-line BASIC program.

> or call the research something other than "artificial intelligence".
Is this the real thrust of your argument? Most people would agree;
even Herb Simon doesn't like the term and says so in "The Sciences of
the Artificial". Many people would be happy if AI boy scouts came down
from their technological utopian fantasies and addressed the sensible
problem of optimising human-computer task allocation in a humble,
disciplined and well-focussed manner.

There are tasks in the world. Computers can assist some of these
tasks, but not others. Understanding why this is the case lies at the
heart of proper human-machine system design. The problem with hard AI is
that it doesn't want to know that a real division between automatable
and unautomatable tasks does exist in practice. Because of this, AI
can make no practical contribution to real world systems design.
Practical applications of AI tools are usually done by people on the
fringes of hard AI. Indeed, many AI types do not regard Expert Systems
types as AI workers.

> Computer science has nothing whatever to say about much of what we call
> intelligent behavior, particularly common sense.
Only sociology has anything to do with either of these, so to
place AI within CS is to lose nothing. To place AI within sociology
would result in a massacre :-)

Intelligence is a value judgement, not a definable entity. Why are so
many AI workers so damned ignorant of the problems with
operationalising definitions of intelligence, as borne out by nearly a
century of psychometrics here? Common sense is a labelling activity
for beliefs which are assumed to be common within a (sub)culture.
Hence the distinction between academic knowledge and common sense.
Academic knowledge is institutionalised within highly marginal
sub-cultures, and thus as sense goes, is far less common than the
really common stuff.

Such social constructs cannot have a machine embodiment, nor can any
academic discipline except sociology sensibly address such woolly
epiphenomena. I do include cognitive psychology within this exclusion,
as no sensible cognitive psychologist would use terms like common sense
or intelligence. The mental phenomena which are explored
computationally by cognitive psychologists tend to be more basic and
better defined aspects of individual behaviour. The minute words like
common sense and intelligence are used, the relevant discipline becomes
the sociology of knowledge.
--
Gilbert Cockton, Department of Computing Science, The University, Glasgow
gilbert@uk.ac.glasgow.cs <europe>!ukc!glasgow!gilbert

------------------------------

Date: 29 Oct 88 01:38:57 GMT
From: mailrus!sharkey!emv@rutgers.edu (Ed Vielmetti)
Subject: Huberman's 'the ecology of computation' book

(why is there no sci.chaos or sci.ecology?)

Has anyone else read this book? I'm looking for discussion of
what might be labelled as 'computational ecology' or 'computational
ecosystems'. Just looking at the relevant references in the
two papers I have, the seminal works appear to be Davis and Smith
(1983), 'Negotiation as a Metaphor for Distributed Problem Solving'
in 'Artificial Intelligence 20', and Kornfeld and Hewitt's
"The Scientific Community Metaphor" in IEEE Trans Systems Man
& Cybernetics 1981.

Followups go wherever - I really don't know which, if any, of these
newsgroups has any interest. My approach to this is based on
a background in economics and in watching congestion appear in
distributed electronic mail systems.

------------------------------

Date: 31 Oct 88 15:17:14 GMT
From: orion.cf.uci.edu!paris.ics.uci.edu!venera.isi.edu!smoliar@ucsd.edu (Stephen Smoliar)
Subject: Re: Limits of AI

In article <5221@watdcsu.waterloo.edu> smann@watdcsu.waterloo.edu (Shannon
Mann - I.S.er) writes:
>
>Now consider the argument posed by Dr. Carl Sagan in ch. 2, Genes and
>Brains, of the book _The Dragons of Eden_. He argues that, at about the
>level of a reptile, the amount of information held within the brain
>equals that of the amount of information held within the genes. After
>reptiles, the amount of information held within the brain exceeds that
>of the genes.
>
>Now, of the second argument, we can draw a parallel to the question asked.
>Let's rephrase the question:
>
>Can a system containing X amount of information, create a system containing
>Y amount of information, where Y exceeds X?
>
>As Dr. Sagan has presented in his book, the answer is a definitive _YES_.
>
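To get a feeling for the magnitudes behind Sagan's claim, here is a rough
back-of-the-envelope sketch in Python. The round figures are commonly
cited textbook estimates which I have assumed for illustration; they are
not taken from THE DRAGONS OF EDEN.

    # Rough comparison of genetic vs. neural information capacity.
    # All figures are assumed order-of-magnitude estimates, not
    # numbers from Sagan's book.
    genome_base_pairs = 3e9    # approximate size of the human genome
    bits_per_base = 2          # four possible bases -> 2 bits each
    genome_bits = genome_base_pairs * bits_per_base

    neurons = 1e11             # rough human neuron count
    synapses_per_neuron = 1e3  # deliberately conservative estimate
    bits_per_synapse = 1       # treat each synapse as a single bit
    brain_bits = neurons * synapses_per_neuron * bits_per_synapse

    print(f"genome: ~{genome_bits:.0e} bits")          # ~6e+09
    print(f"brain:  ~{brain_bits:.0e} bits")           # ~1e+14
    print(f"ratio:  ~{brain_bits / genome_bits:.0e}")  # ~2e+04

Even with conservative numbers, the brain's storage capacity exceeds the
genome's by several orders of magnitude, which is the quantitative heart
of the Y > X argument above.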
Readers interested in a more technical substantiation of Sagan's arguments
should probably refer to the recent work of Gerald Edelman, published most
extensively in his book NEURAL DARWINISM. The title refers to the idea that
"mind" is essentially a result of a selective process among a vast (I am
tempted to put on a Sagan accent, but it doesn't come across in print)
population of connections between neurons. However, before even considering
the selective process, one has to worry about how that population came to be
in the first place. I quote from a review of NEURAL DARWINISM which I
recently submitted to ARTIFICIAL INTELLIGENCE:

This population is an EPIGENETIC result of prenatal development.
In other words, the neural structure (and, for that matter, the
entire morphology) of an organism is not exclusively determined
by its genetic repertoire. Instead, events EXTERNAL to strictly
genetic activity contribute to the development of a diverse
population of neural structures. Specific molecular agents,
known as ADHESION MOLECULES, are responsible for determining
the course of a morphology and, consequently, the resulting
pattern of neural cells which are formed in the course of that
morphology; and these molecules are responsible for the formation,
during embryonic development, of the population from which selection
will take place.

Those who wish to pursue this matter further and are not inclined to wade
through the almost 400 pages of NEURAL DARWINISM will find an excellent
introduction to the approach in the final chapter of Israel Rosenfield's
THE INVENTION OF MEMORY. (This remark is also directed to Dave Peru, who
requested further information about Edelman.)

------------------------------

End of AIList Digest
********************
