AIList Digest            Tuesday, 29 Apr 1986     Volume 4 : Issue 106 

Today's Topics:
AI Tools - Common LISP Coding Standards & String Reduction &
PARLOG for Unix,
Representation - Shape,
Philosophy - Computer Consciousness

----------------------------------------------------------------------

Date: Thu 24 Apr 86 07:50:46-PST
From: George Cole <GCOLE@su-sushi.arpa>
Subject: Hooks, Rings, Shapes & Background Processes

The knowledge about the individual items and their interactions must contain
the knowledge about their common environment, either as an unstated assumption
or perhaps as common knowledge. A hook and ring will not hold together (even
if they start together) unless the ring is "hanging" from the hook, because
of gravity or magnetism or a strong wind blowing past in the correct direction.
Nor will it stay hanging if the balance of forces (gravity down, wind blowing
past the plastic hook up) is upset beyond the stable limit. (If gravity is
increased 100-fold, will the tensile strength of the hook suffice to support
the ring?) And for a last concern, is there any motion of the hook or ring
that will cause the degradation of either, such as friction wearing away at
the material and thus lowering the tensile capacity?
These environmental and process contextual aspects do not seem to
yield easily to expression in a stable or fixed-point language.
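
As a small illustration of how much environment the "hanging" relation
drags in, consider a sketch along these lines (in Common Lisp; the
structures, numbers and accessor names are hypothetical, invented for
the example):

    (defstruct ring mass)                  ; mass in kilograms
    (defstruct hook tensile-limit)         ; load capacity in newtons

    (defun still-hanging-p (hook ring &key (gravity 9.8) (wind-up 0.0))
      "True when the net downward force still seats RING on HOOK and
    stays within the hook's tensile capacity."
      (let ((net-down (- (* (ring-mass ring) gravity) wind-up)))
        (and (plusp net-down)                           ; some force must seat the ring
             (<= net-down (hook-tensile-limit hook))))) ; and the hook must bear it

    ;; Increase gravity 100-fold and the same pair fails:
    ;; (still-hanging-p (make-hook :tensile-limit 50.0)
    ;;                  (make-ring :mass 0.1)
    ;;                  :gravity 980.0)   => NIL

Note that GRAVITY and WIND-UP are parameters of the environment, not of
either part; leave them implicit and the predicate is simply wrong.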

George S. Cole
GCole@SU-SUSHI.ARPA

------------------------------

Date: Fri, 25 Apr 86 12:37:52 EST
From: mcguire@harvard.HARVARD.EDU (Hugh McGuire)
Subject: Re: Common LISP coding standards

Perhaps Marty Hall was seeking some guide to LISP style, similar to
Ledgard's (et al.'s) *Pascal with Style*; I certainly would find such
useful, and perhaps others would also. Steele's (et al.'s) *Common
LISP*, while it completely specifies the language, mentions style only
occasionally. For example, consider the following simple questions:
Under Lexical Scoping, how much should a programmer use variables with
identical names? Should one use "#'" (the abbreviation for the special
form FUNCTION) whenever possible? When is a short COND-construct more
appropriate than an IF-construct? How should one decide between
iteration and recursion? Will asterisked global variables or constants
(e.g. "*visible-windows*") be confused with the system's asterisked
symbols?
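
To make one of those questions concrete, here is a minimal
illustration (not drawn from Steele or any published guide) of the
COND-versus-IF choice: with two branches an IF is direct, while three
or more parallel tests usually read better as a COND:

    ;; Two branches: IF states the alternative directly.
    (defun sign-name (x)
      (if (minusp x) 'negative 'non-negative))

    ;; Three parallel cases: COND keeps them aligned and extensible.
    (defun sign-name* (x)
      (cond ((minusp x) 'negative)
            ((zerop x)  'zero)
            (t          'positive)))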
--Hugh
(mcguire@harvard.HARVARD.EDU)

------------------------------

Date: 24 Apr 86 16:03:06 GMT
From: hplabs!hao!noao!terak!doug@ucbvax.berkeley.edu (Doug Pardee)
Subject: Re: String reduction

> TRAC is pretty easy to implement; I have an incomplete version written in
> C that I did some years back. I also have a paper on TRAC which is probably
> long out of print by now.

If anyone cares, TRAC stands for Text Reckoner And Compiler, and is
trademarked.

It is discussed at some length in Peter Wegner's book, "Data Structures,
Information Processing and Machine Organization"
(the title may be off
a bit, the book is at home and it's hard to remember such a lengthy
title :-)

Stanford used to have a version they called WYMPI. The main differences
were the use of "*" instead of "#" and -- more significantly -- they
permitted string (macro) names to be specified as the operator, rather
than requiring as TRAC does that strings be specifically called with
the "cl" operator. In other words, you could say *(macro,...) instead
of #(cl,macro,...). Wegner leaves it as an exercise to the reader to
show why the "cl" was an important architectural feature of TRAC which
shouldn't have been tampered with. Something about trying to make
#(cl,macro,...) == #(macro,...) and at the same time making
##(cl,macro,...) == ##(macro,...).
--
Doug Pardee -- CalComp -- {elrond,seismo,decvax,ihnp4}!terak!doug

------------------------------

Date: 24 Apr 86 12:32:50 GMT
From: ucdavis!lll-lcc!lll-crg!seismo!mcvax!ukc!ptb@ucbvax.berkeley.edu
(P.T.Breuer)
Subject: Re: String reduction

In article <1031@eagle.ukc.ac.uk> sjl@ukc.ac.uk (S.J.Leviseur) writes:
>Does anybody have any references to articles on string reduction
>as a reduction technique for applicative languages (or anything
>else)? They seem to be almost impossible to find! Anything welcome.

John Horton Conway (the Prince of Games, memorably Life) of Cambridge
University (UK) Pure Maths. Dept. some years ago invented a computing
language that seems to me to proceed by Markovian string reduction.
It is extremely sneaky at recognising substrings for substitution -
obviously the major cost in any such approach - and does this task
efficiently. The trick is to make up your strings as the product of
integer primes instead of by alphanumeric concatenation. The production
rules of a program script consist of single fractions. To apply the
rules to an incoming 'string' you choose the first fraction in the script
that gives an integer result on multiplication with the integer 'string'
and take the result as the outgoing string, then go to the top of the
script with the new string and start again. The indices of prime powers
in the string serve as memory cells 'x'. The denominators of the fractions
serve as 'if x > ..' tests, with the numerators as 'then x=x+-..'
components. J.H.C.'s (the middle initial is to help him remain incognito)
interest was in the fact that the Godel numbers of programs written in this
language are easily calculable. Conway has written out on a single sheet of
paper the Godel number of the program that simulates any given program from its
Godel number. The G-No. of the prime number program is relatively short.
I will intervene with J.C. to obtain more info, if requested.
U.No.Hoo advises generic statement here.
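
In outline the whole interpreter is a single loop. A minimal sketch in
Common Lisp (a reconstruction of the scheme as described above, not
Conway's own formulation) shows how built-in rational arithmetic makes
the "gives an integer result" test trivial:

    (defun run-fractions (fractions n &optional (max-steps 100))
      "Repeatedly replace the integer N by its product with the first
    fraction in FRACTIONS that yields an integer; stop when no fraction
    applies or MAX-STEPS is exhausted.  Returns the trace of integers."
      (let ((trace (list n)))
        (dotimes (step max-steps (nreverse trace))
          (let ((next (find-if #'integerp
                               (mapcar (lambda (f) (* f n)) fractions))))
            (unless next (return (nreverse trace)))
            (setq n next)
            (push n trace)))))

    ;; Prime-power exponents as memory cells: the one-rule program (3/2)
    ;; adds the exponents of 2 and 3.  With input 2^3 * 3^2 = 72:
    ;; (run-fractions '(3/2) 72)  =>  (72 108 162 243)   ; 243 = 3^5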

------------------------------

Date: 23 Apr 86 18:08:55 GMT
From: ucdavis!lll-lcc!lll-crg!caip!seismo!mcvax!ukc!icdoc!sg@ucbvax.
berkeley.edu (Steve Gregory)
Subject: PARLOG for Unix


SEQUENTIAL PARLOG MACHINE

We are now distributing the first release of our sequential PARLOG
system, to run on Unix machines. This system is based on an abstract
instruction set -- the SPM (Sequential PARLOG Machine) -- designed for
the sequential implementation of PARLOG. The system comprises an SPM
emulator, written in C; a PARLOG-SPM compiler, written in PARLOG; and a
query interpreter, also written in PARLOG. An environment allows users to
create, compile, edit and run programs.

The system is a fairly complete implementation of the PARLOG language.
Unlike previous implementations of PARLOG, and of other parallel logic
programming languages, there is no "flat" requirement for guards; guards
may contain any "safe" PARLOG conjunction. A powerful metacall facility is
provided.

The SPM instruction set was designed by Steve Gregory. The system has
been implemented by Alastair Burt, Ian Foster, Graem Ringwood and Ken
Satoh, with contributions by Tony Kusalik. The work has been supported
by the SERC, ICL and Fujitsu.

The SPM system is currently available, in object form, for the Sun
and Vax under Unix 4.2; it is distributed on a tar format tape, which
includes all documentation. Anyone interested in obtaining a copy should
first contact me at the following address, to request a copy of the licence
agreement. The software will then be shipped on receipt of the completed
licence and prepayment of the handling fee.

Steve Gregory          Telephone: +44 1 589 5111
Dept. of Computing     Telex:     261503 IMPCOL G
Imperial College       JANET:     sg@uk.ac.ic.doc
London SW7 2BZ         ARPANET:   sg%icdoc@ucl-cs.arpa
England                uucp:      ...!mcvax!ukc!icdoc!sg

------------------------------

Date: Thu, 24 Apr 86 12:20:50 gmt
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: Re: Lucas on AI & Computer Consciousness

Tom Schutz says in Vol 4 # 80 :-
> But I hope that these researchers and their fans do not delude themselves
> into thinking that the only aspect of the universe which exists is the
> aspect that science can deal with.

One aspect of human behavior is "politics". Can there really ever be
political *science*? How would you *model* many minds acting as one?

Tom also says :-
> 2) There is a dualism of the mental and the physical with
> mysterious interactions between the two realms, and

Mysterious indeed! Consider "I *feel* ill", and the interactions
between mind and body, such as "butterflies in the stomach".

He adds :-
> 3) Other possibilities which no one has thought of yet.
of which there are an infinity? Is there a *real* example of the
result that "sigma 2**(-n)" is 2? We bootstrap our consciousness
from the cradle, 0, to awareness, 1. Do we "multiply by infinity"
to get there?
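
(For the record, the sum alluded to is the geometric series; in modern
notation,

    \sum_{n=0}^{\infty} 2^{-n} = \frac{1}{1 - 1/2} = 2,

with each finite partial sum falling short of 2 by exactly the last
term taken.)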

Gordon Joly
ARPA: gcj%uk.ac.qmc.maths%uk.ac.qmc.cs@ucl-cs.arpa
UUCP: ...!seismo!mcvax!ukc!qmc-cs!qmc-ori!gcj

"I have a pain in the diodes, all the way down my left side."
-- Marvin the Paranoid Android.

------------------------------

Date: Thu 24 Apr 86 01:08:46-PST
From: Lee Altenberg <ALTENBERG@SUMEX-AIM.ARPA>
Subject: Machine emotion, cat emotion?

My perspective on the possibilities of machines having emotion
stems from my experience with animals and other life. I am concerned
about whether animals suffer when we slaughter them, because eating meat
is morally unacceptable if they do. But there is no a priori reason to
exclude plants from the question, either. How does one know if another is
suffering? I think we "know" in the case of people because they act in
ways that we would only act if WE were suffering. When I've
accidentally stepped on a cat's foot, it has made a noise that sounds
horrible to me, and run out of the way. This is close enough to what I
would do, were I in pain, so that I feel "oh no, I hurt the cat." But
sometimes I have accidentally stepped on a dog's foot and it didn't make
such sounds, and I don't know what it felt. I can't imagine that it
didn't hurt (based on what I would feel if that had been my foot) but
who knows?
Now, when we get to plants, there is nothing they could do that
would resemble what I would do were I in pain. Cartoons with
anthropomorphised plants show them doing such things as wilting or
erecting in response to events. Here there is a fortuitous parallel
between human body language and the health of the plant's water balance.
But in general?
My point is that the question of emotion separates into two
issues, the question of one's own emotions, and the question of others'.
I am claiming that, operationally, the questions of emotion in other
people, animals, plants, and machines are equal in this latter category.
Consider the dynamics of how people perceive each other's
emotions from an evolutionary standpoint. The display of emotion to
others, and the recognition of emotion in others, plays a central role in
human relations, which strongly impact human Darwinian fitness. Now,
machines can be designed to mimic human expression of emotions, through
icons and the use of emotion expressing language or sounds. So the
question regarding machine emotions I would emphasize is, what sort of
emotional relationships do we WANT between the human user and the machine?
I would guess that there is something of practical relevance in this
question, to the extent that a computer user's performance is
affected by their emotional reactions to what happens during
their sessions. Suppose after five consecutive run-time errors, the
machine posted the message,
"I'm sorry to say, but we've hit a run-time error AGAIN! Keep
working on trying to figure out the problem, though. There's got to be
a solution!"

Well, it's a bit contrived, but you get my point. It could be
an area to develop.
-Lee Altenberg

------------------------------

Date: 25 Apr 86 09:47:00 EST
From: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Reply-to: "CUGINI, JOHN" <cugini@nbs-vms.ARPA>
Subject: Some replies on computer consciousness


> ...consciousness is an emergent phenomenon. The more
> complex the nervous system of an organism, the more likely one is to
> ascribe consciousness to it. Computers, at present, are too simple,
> regardless of performance. I would have no problem believing a
> massively parallel system with size and connectivity of biological
> proportions to be conscious, provided it did interesting things.

1. Note that we've gone from a purely external criterion to a combined
one which asks about both performance and internal structure.
I quite agree that both these are relevant.

2. The assumption is that it's the connectivity per se (ie structure)
that consciousness emerges from. This may be true, but it's not
a given. Eg suppose we had a "massively parallel system with
size [I assume "logical size" is meant here] and connectivity
of biological proportions" which was implemented in wooden parts
of normal macroscopic physical size, with a switching speed of about
1 second. It's just not obvious to me that such a (large, slow,
wooden) thing, though structurally identical to a brain, would
generate consciousness (nor that it wouldn't).

> From: Mark Ellison
>
> Mechanism M [brain] causes C [consciousness] ? You know many
> people who (may) have brains, and you have no DIRECT evidence
> that they are conscious.

Right, but I have strong circumstantial evidence - eg they have a
brain (like me), and they can do long division (like me).

> You only have direct evidence of one
> case of C (barring ESP, etc.), and no DIRECT evidence of that
> person's brain. Except for the performances in each case.

Huh? Surely I have other grounds for believing that I, and other
people, have brains besides their performance. Like analogy,
biology, etc.

> We only know of their ability to feel pain, experience shapes, colors,
> sounds, etc., by their reactions to those stimuli. In other words,
> by their performance. But on the other hand their performance might
> not involve abstract statements. ....I would argue that "raw
> feelings" in others are known only by their performance.

Well, I think this simply isn't so - do you mean to claim that
the fact that they have brains in no way supports the hypothesis
of their ability to feel pain, etc??? Especially given the
neurological evidence we have that brain activity seems to
directly cause experiences (like the neurosurgeon pokes your
cortex and you say "I see a red flash")? It seems just obvious
to me that we rationally attribute consciousness to others
because of both criteria, ie performance and brains.

> One criterion that I have not seen yet proposed is the following.
> It is more useful to pretend that people are conscious than not.
> They tend to cause you less pain, and are more likely to do what you want.
> So I'll believe someone's 8600 or Cray is conscious if it works better,
> according to whatever criteria I have for that at the moment, when I so
> believe.

Well, I was speaking of Truth, not pragmatics. It may be that I
play a better game of chess against a metallic opponent if I
attribute to it motives of greed, revenge, etc. That hardly
seems to settle the question of whether it really has these
features.

BTW, I think most of these claims about computer consciousness
are mis-spoken - I think what people mean to say (or should) is
that the Wonderful Futuristic computer would be really
*intelligent.* Since the concept of intelligence is essentially
one of performance, I agree with such claims. A computer that
could hold a general, intelligent, English conversation is, ipso
facto, intelligent. It does *not* follow, either conceptually
or practically, that such a machine would be conscious (nor that
it wouldn't, of course), in the normal "seeing-yellow,
feeling-pain" sense of the word, although everyone seems just to
assume this. To put it another way, just because intelligent
behavior in a human is decisive evidence for consciousness (where
we have the underlying fact of brain-hood), it does not follow
that it is decisive evidence in the case of a computer.

John Cugini <Cugini@NBS-VMS>

------------------------------

Date: Fri, 25 Apr 86 10:51:56 mst
From: crs%f@LANL.ARPA (Charlie Sorsby)
Subject: Re: performance considered insufficient


References: <VAX-MM(186)+TOPSLIB(117)+PONY(0).18-Apr-86.09:53:30.SRI-IU.ARPA>

> Are viruses conscious? How about protozoa, mollusks, insects, fish,
> reptiles, and birds? Certainly some mammals are conscious. How about
> cats, dogs, and chimpanzees? Does anyone maintain that homo sapiens
> is the only species with consciousness?
>
> My point is that consciousness is an emergent phenomenon. The more
> complex the nervous system of an organism, the more likely one is to
> ascribe consciousness to it. Computers, at present, are too simple,
> regardless of performance. I would have no problem believing a
> massively parallel system with size and connectivity of biological
> proportions to be conscious, provided it did interesting things.

I've been following, with interest, the debate about the possibility of
machine consciousness. I have a question:

Do you consider (each of you) consciousness a binary phenomenon? Does one
(or something) either have, or not have, consciousness?

Or, is there a continuum of consciousness, with some entities in possession
of just a *little* consciousness while others have more?

I suspect, based on what I have read here, that there is no consensus
opinion, that some believe it is binary while others subscribe to the
continuum idea (with, perhaps, others believing some intermediate theory).
Is there a prevailing view among AI researchers?

Use your own judgment as to whether to post or mail your reply. If I
receive many mail replies, I'll try to summarize and post.

Charlie Sorsby
...{cmcl2, ihnp4, ..}!lanl!crs
crs@lanl.arpa

------------------------------

End of AIList Digest
********************
