AIList Digest           Wednesday, 9 Apr 1986      Volume 4 : Issue 72 

Today's Topics:
Psychology - Computer Emotions

----------------------------------------------------------------------

Date: 29 Mar 86 19:27:23 GMT
From: hplabs!hao!seismo!harvard!bu-cs!bzs@ucbvax.berkeley.edu (Barry Shein)
Subject: Re: Computer Dialogue


Re: should computers display emotions

I guess a question I would be more comfortable with is "would people
be happier if computers mimicked emotions". Ok, from experience we
see that people don't love seeing messages like "Segmentation Violation --
Core Dumped" (although some of us do, for different reasons.)

Would they be 'happier' if it said 'ouch'? Well, probably not, but the
question probably comes down to more of a human-engineering machine
interface issue.

We certainly got somewhat ridiculous at one extreme (we being systems
people not unlike myself, maybe not you) with things like:

IEF007001 PSW=001049FC 0E100302

pretending to be error messages, let's face it, that's no less artificial
(and barely more useful unless you have a manual in hand and know how
to use that manual and know how to understand that manual, often the
manual was written by the same sort of brain that thought IEF007001
was helpful) than 'ouch'. We (again, we system types) have just come
to accept that sort of cruft as being socially correct (at least not
embarrassing as we might feel if we put 'ouch' into our O/S err routines).

The Macintosh displays a frowning face when it's real unhappy, most
people I know chuckled once and then remarked "that's really stupid,
how about some useful info jerks?"
(like IEF007001?) I wouldn't be the
least bit surprised to hear that those smiley/frowney macs lost them
heaps of sales (we can't have CUTE on the CEO's desk...give me IEF007001!)

I think we keep straddling some line of appearing real professional
(IEF007001) vs terminal cutesiness (ouch.) I suppose there is a huge
middle ground with some dialogue (like computer dialogues).

-Barry Shein, Boston University

------------------------------

Date: 31 Mar 86 23:36:03 GMT
From: decvax!hplabsb!marvit@ucbvax.berkeley.edu (Peter Marvit)
Subject: Re: Computer Dialogue

> Mark Davis asks if computers have anything akin to human feelings.
>
> Barry Kort responds with a wonderful description of a gigantic telephone
> switching system and draws a powerful parallel with its sensors and
> resulting information about physical problems and the very human sense of
> pain.

A friend of mine and I were discussing a similar point. If a computer
were able to tell us "what it is like to be a computer," would it be considered
conscious? That is, what would be our nomenclature for a system which could
describe its innards and current state (and possibly modify some of itself -
perhaps by taking "home remedies")?

My friend is a philosopher and I am a computer scientist/humanist (admittedly
an oxymoron at times). I contend consciousness is a slippery term which I
find uncomfortable. Further, existing computer systems exhibit such behavior,
albeit in a somewhat crude and unsophisticated fashion (see "df" or "fsck").
Barry gave another excellent example, cited above.

However, the question is still a valid one - if one looks beyond the operational
issues and poses the more subtle philosophical query: What is it like to "be"
anything, and what would/could a computer say about itself? At one point,
I argued that the question may be completely outside the computer's world view.
That is, it would be like asking a five year old what sex feels like (please,
no flames about sophisticated tykes). The computer wouldn't have the
vocabulary or internal model to be able to answer that. Yet, if we programmed
that capability in ...

I look forward to your thoughts on the net or to me.

Peter Marvit ...!hplabs!marvit
Hewlett-Packard Laboratories

------------------------------

Date: 31 Mar 86 14:53:58 GMT
From: nike!riacs!seismo!cit-vax!trent@ucbvax.berkeley.edu (Ray Trent)
Subject: Re: re: Computer Dialogue #1

In article <2345@jhunix.UUCP> ins_akaa@jhunix.UUCP (Ken Arromdee) writes:
>>toasters do"... doesn't mean that a combination of many toasters cannot, and
>You are actually quite correct. There's one problem here. Toasters can store
>perhaps two or three bytes of information. Consider how many toasters

Correct me if I'm wrong, but my understanding of the currently
dominant theory about the way human beings remember things says
that brains store NO "bytes" of information at all, but that
memory is an aggregate effect generated by the _interconnections_
of the brain cells.

The only papers I have read on this subject are by John Hopfield
here at Caltech. Does anyone out there have any pointers to good
research (people or papers) being done in this field? (have
email, will summarize)

I am particularly interested by this subject because I have seen
simple programs that simulate the connection matrix of a simple
neural network. This program can "remember" things in a
connection matrix, and then "recall" them at a later time given
only pieces of the original data. Sample session:

% learn "Ross" "Richard" "Sandy"...
% ask "Ro"
Ross
% ask "Ri"
Richard
% ask "R"
Rqchird

Note the program's reaction to an ambiguous request; it
extrapolated from what it "knew" to a reasonable guess at a "real
memory" (note that 'i' + 8 = 'q' and 'a' + 8 = 'i', so the memory
was correct up to 1 bit in each of two places.)

The interesting thing about this sort of scheme is its reaction
to failed active elements. If you destroy (delete) several
locations in the connection matrix, the program doesn't lose any
specific knowledge, but it becomes harder for it to extrapolate
to the "real memory" and distinguish these from "spurious
memories." Of course, after a certain point...things break down
completely, but it's still interesting.
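[The learn/ask behavior described above matches a Hopfield-style associative
memory. What follows is a reconstruction for illustration only, not the
Caltech program; the names `encode`, `decode`, and `Hopfield` are invented:]

```python
import numpy as np

def encode(s, chars=8):
    """Pack a string into a +/-1 vector, 8 bits per character."""
    s = s.ljust(chars, "\0")[:chars]
    bits = np.unpackbits(np.frombuffer(s.encode("latin-1"), dtype=np.uint8))
    return bits.astype(np.int8) * 2 - 1

def decode(v):
    """Inverse of encode: turn a +/-1 vector back into a string."""
    bits = (np.asarray(v) > 0).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("latin-1").rstrip("\0")

class Hopfield:
    def __init__(self, n):
        self.W = np.zeros((n, n))        # the "connection matrix"

    def learn(self, patterns):
        """Hebbian rule: add the outer product of each stored pattern."""
        for p in patterns:
            self.W += np.outer(p, p)
        np.fill_diagonal(self.W, 0)      # no self-connections

    def ask(self, cue, steps=20):
        """Recall: repeatedly threshold W @ state until it stops changing."""
        v = np.array(cue, dtype=np.int8)
        for _ in range(steps):
            nxt = np.where(self.W @ v >= 0, 1, -1).astype(np.int8)
            if np.array_equal(nxt, v):
                break
            v = nxt
        return v

net = Hopfield(64)
net.learn([encode(name) for name in ["Ross", "Richard", "Sandy"]])
print(decode(net.ask(encode("Ross"))))   # a stored name recalls itself
```

[A partial cue like encode("Ro") pulls the state toward nearby stored
patterns, but overlapping cues can settle into a blend - a "Rqchird"-style
spurious memory. Zeroing rows and columns of W mimics the destroyed-elements
experiment: recall degrades gradually rather than losing one specific name.]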

"In a valiant attempt to save the universe, his large intestine
leapt out of his body and throttled him..."

(if you don't understand that, ignore it.)


--
../ray\..
(trent@csvax.caltech.edu)
"The above is someone else's opinion only at great coincidence"

------------------------------

Date: Wed 2 Apr 86 17:41:51-PST
From: GARVEY@SRI-AI.ARPA
Subject: Re: Computer Dialogue

Why don't you try to define what you mean by "feel?" If you get
beyond a definition based on fairly mechanistic principles, then you
have a discussion; if you don't, then your computer will probably be
shown (uninterestingly) to feel by definition. I think it's koans
like this (assuming it isn't an April Fool joke) that keep the Dreyfi
in business and that suggest that the field needs serious tightening.

If the computer should "feel" anything, why should you assume that it
feels bad when it doesn't seem to be working correctly? Perhaps it's
taking a vacation; probably it hates people and loves to make them
mad.

Cheers,
Tom

------------------------------

Date: 1 Apr 86 12:53:34 GMT
From: ulysses!mhuxr!mhuxt!houxm!hounx!kort@ucbvax.berkeley.edu (B.KORT)
Subject: Re: Computer Dialogue

Peter Marvit asks if computers can have anything akin to consciousness
or self-awareness similar to humans. Excellent question.

One thing that computers *can* have is simulation models of other
systems. The National Weather Bureau's computers have a model
of atmospheric dynamics that tracks the evolution of weather patterns
with sufficient accuracy that their forecasts are at least useful,
if not perfect.

NASA and JPL (Jet Propulsion Laboratory) have elaborate computer
models of spacecraft behavior and interplanetary ballistics, which
accurately track the behavior and trajectory of the real mission
hardware.

Computers can also have models of other computers, which emulate
in software the functioning of another piece of hardware.

What would happen if you gave a computer a software model of *its
own* hardware configuration and functioning? The computer could
run the model with various perturbations (e.g. faults or design
changes) and see what happened. Now suppose that the computer
was empowered to use this model in conjunction with its own
fault-detection network. The computer could diagnose many of
its own ills, and choose remedial action. It could also explore
the wisdom of possible reconfigurations or redesigns. Digital
Equipment Corporation (DEC) has an Expert System that works out
optimal configurations for their VAX line of computers. The
Expert System runs on....(you guessed it)... a VAX.
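[The loop described above - a model of one's own hardware checked against a
live fault-detection network - can be caricatured in a few lines. This is a
toy sketch, not DEC's configurator; every component name and "sensor
reading" below is invented:]

```python
class SelfModel:
    """A computer's software model of its own hardware: maps each
    component to the sensor reading it should produce when healthy.
    (Hypothetical sketch; all names here are invented.)"""

    def __init__(self, expected):
        self.expected = expected

    def diagnose(self, readings):
        """Compare live fault-detector readings against the model and
        return the components that disagree with it."""
        return [c for c, v in self.expected.items() if readings.get(c) != v]

    def perturb(self, change):
        """Run the model with a perturbation (a fault or a redesign),
        returning the resulting model and leaving the original intact."""
        new = dict(self.expected)
        new.update(change)
        return SelfModel(new)

model = SelfModel({"fan": "ok", "disk0": "ok", "net": "ok"})
suspects = model.diagnose({"fan": "ok", "disk0": "read-error", "net": "ok"})
print(suspects)  # ['disk0'] -- the machine can now choose a remedial action
```

[The perturb step is what lets the machine "explore the wisdom of possible
reconfigurations": it runs the model with a change and compares the predicted
readings before committing to anything.]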

If a computer can have a reliable model of itself, and can use
that model to maintain and enhance its own well-being, are we
very far away from rudimentary consciousness?

For some delightful and delicious reading on computer self-awareness,
the meaning of the word "soul", and related philosophical musings,
I recommend _The Mind's I_, composed and arranged by Douglas Hofstadter
and Daniel Dennett.

--Barry Kort ...ihnp4!houxm!hounx!kort

------------------------------

Date: Sat, 5 Apr 86 14:46:35 GMT
From: gcj%qmc-ori.uucp@cs.ucl.ac.uk
Subject: A Byte of Toast.

Quoted in Vol 4 # 62 :-
``Our brains are enormously complex computers''.
If so, then do we all run the same operating system?
And what are the operating systems of toasters?
Gordon Joly,
ARPA: gcj%qmc-ori@ucl-cs.arpa
UUCP: ...!ukc!qmc-cs!qmc-ori!gcj

------------------------------

Date: 7 Apr 86 03:09:16 GMT
From: ulysses!mhuxr!mhuxt!houxm!whuxl!whuxlm!akgua!gatech!seismo!rochester
!rocksanne!sunybcs!ellie!colonel@ucbvax.berkeley.edu
Subject: Re: what's it like (TV dialogue #1)

Reporter: "Mr. Computer, what's it like to be a computer?"
Computer: "Well, it's hard to explain, Frank, ..."
Reporter: "For example, what's it like to be able to read a magtape
at 6250 bpi?"
Computer: "It feels just great, Frank. Really great."

Col. G. L. Sicherman
UU: ...{rocksvax|decvax}!sunybcs!colonel
CS: colonel@buffalo-cs
BI: csdsicher@sunyabva

------------------------------

End of AIList Digest
********************
