AIList Digest            Monday, 15 Jun 1987      Volume 5 : Issue 145 

Today's Topics:
Query - Why Did The $6,000,000 Man Run So Slowly?,
Science - Applying AI Models to Biology

----------------------------------------------------------------------

Date: Fri, 12 Jun 87 00:51:41 EDT
From: tim@linc.cis.upenn.edu (Tim Finin)
Subject: why did the $6,000,000 man run so slowly?

Why did the six million dollar man run so slowly?

Some time ago, Pat Hayes posted a message in which he asked people for
explanations for the fact that Dr. Who's TARDIS is bigger on the
inside than it appears to be from the outside. He was trying, of
course, to discover something about our common sense model of the
physical world.

I have a similar question which might shed some light on our common
sense notions of time and actions: why did the six million dollar man
run so slowly? As you recall, the six million dollar man (from the
popular TV show in the early '70's) had bionic legs which enabled him
to run at super-human speeds. However, when the producers wanted to
show him doing this, they slowed down the image of him running. That
is, to depict him running at incredibly fast speeds, they showed an
image of him moving in "slow motion".

I'd like to collect explanations for this fact.

Tim.

------------------------------

Date: 12 Jun 87 20:51:51 GMT
From: ihnp4!homxb!houxm!hou2d!avr@ucbvax.Berkeley.EDU (Adam V. Reed)
Subject: Re: Why did the six-million dollar man run so slowly?

Slow motion is commonly used in TV (and before that, newsreel) reports
to represent very fast motion (e.g. in horse races and other sports
events). My guess is that this originated through use of free "photo
finish" footage, originally filmed for the use of sport-event judges,
in early movie newsreels. If my guess is right, the representation of
fast movement with slow-motion footage uses a learned but highly
familiar mental association.
Adam Reed (hou2d!adam)

------------------------------

Date: 13 Jun 87 03:16:03 GMT
From: code@sphinx.uchicago.edu (paul robinson wilson)
Subject: Re: Why did the six-million dollar man run so slowly?

In article <1431@hou2d.UUCP> avr@hou2d.UUCP (Adam V. Reed) writes:
>Slow motion is commonly used in TV (and before that, newsreel) reports
>to represent very fast motion (e.g. in horse races and other sports
>events). My guess is that this originated through use of free "photo
>finish"
footage, originally filmed for the use of sport-event judges,
>in early movie newsreels. If my guess is right, the representation of
>fast movement with slow-motion footage uses a learned but highly
>familiar mental association.

I think it may be more subtle than that. There is a general tendency for
effective, competent motion to be smooth and for large motions to be
relatively slow. A long-legged runner runs more "slowly" than a short-legged
one, but covers more ground. A jaguar moves fluidly and less hurriedly than
its usual prey, making large bounds seemingly effortlessly. By contrast,
the little kid trying to keep up with the big kids moves its legs very fast.

Naturally, if we saw speeded-up film of the $6 Meg man, we'd think he looked
comical, with his legs moving very rapidly, like a small (impotent) creature's.

Slow-motion, however, looks smooth and graceful, revealing the grace with
which we all move, but seldom notice. Our ability to appreciate this
(intended) effect without the accompanying (unintended) impression of his
moving quite slowly, however, may in fact depend on our "being used to it"
from television sports, etc. We appreciate the obvious grace while suspending
our judgement about speed.

The _right_ way to show it, I guess, would have been to have Lee Majors
bound 20 ft. (or thereabouts) at a time, and quickly. Besides being a bit
difficult to accomplish, it's also a little hard on the skeletal structure.
They would have gone through stuntmen at quite a clip :-).

(By the way, I believe Lee Majors is a rather short guy, and would have looked
especially comical in sped-up film, covering significant ground, with normal
stuff to gauge him against.)

| Paul R. Wilson ph.: (312) 947-0740 uucp: ...!ihnp4!uicbert!wilson |
| Electronic Mind Control Lab if no answer: ...ihnp4!gargoyle!sphinx!code |
| UIC EECS Dept. (M/C 154) arpa: uicbert!wilson@uxc.cso.uiuc.edu |
| P.O.Box 4348 Chicago,IL 60680 |

------------------------------

Date: 13 Jun 87 06:18:52 GMT
From: pattis@june.cs.washington.edu (Richard Pattis)
Subject: Re: Why did the six-million dollar man run so slowly?

I've thought that the slowdown was not from the perspective of the viewer,
but from the perspective of the $6M man. The viewer, viewing from the
frame of the $6M man, is moving so fast that everything else seems slowed
down.

------------------------------

Date: 13 Jun 87 17:49:31 GMT
From: super.upenn.edu!linc.cis.upenn.edu!mayerk@RUTGERS.EDU (Kenneth Mayer)
Subject: Re: Why did the six-million dollar man run so slowly?

Occasionally, the producers _did_ show Lee Majors in a speeded-up shot. The
effect was comical. (As I recall, there was this old farmer watching from the
porch of his house as Mr. $6million sprinted across his field.) I like the
cougar metaphor. Wildlife films of such an animal at normal speed are choppy,
incredibly brief, and usually end with the felling of the prey. In slow-mo
we get a chance to see the beautiful detail of the predator flying by.

From a cinematic viewpoint, the camera director/special effects director had
to do something to show that Steve Austin wasn't simply jogging across a field
like the rest of us. Slowing the film speed (and speeding up apparent time)
looks comical, like an old Keystone Cops film. Stretching out the time line
increases tension. The viewer gets a chance to examine more detail per second
of real time. Exactly the way a novel will be incredibly brief during
transitions, and excruciatingly detailed during climaxes. (I just finished
reading Misery, by Stephen King. For a good reflective look at a writer's art,
packaged in a really good thriller, borrow this book from the library for a
summer weekend read.)
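
For the curious, the frame-rate arithmetic behind "slowing the film speed"
is simple to state: action shot at one rate and projected at another is
compressed or stretched by the ratio of the two. The Python sketch below is
purely illustrative (the function name and frame rates are invented, not
taken from the show):

def apparent_speed(capture_fps, playback_fps):
    """Factor by which on-screen action appears faster than real life.

    Film exposed at capture_fps and projected at playback_fps compresses
    or stretches time by playback_fps / capture_fps.
    """
    return playback_fps / capture_fps

# Undercranking (Keystone Cops): shoot 12 fps, project at 24 fps.
print(apparent_speed(12, 24))   # 2.0 -- twice as fast, comical

# Overcranking (slow motion): shoot 72 fps, project at 24 fps.
print(apparent_speed(72, 24))   # 0.333... -- one-third speed on screen
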
Kenneth Mayer mayerk@eniac.seas.upenn.edu

------------------------------

Date: 10 Jun 87 09:33:34 GMT
From: nosc!humu!uhccux!todd@sdcsvax.ucsd.edu (The Perplexed Wiz)
Subject: Re: Taking AI models and applying them to biology...

In article <836@pixar.UUCP> davel@pixar.UUCP (David Longerbeam) writes:
>In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:
>> This description of the human memory system, though cloaked in vaguer terms,
>> corresponds more or less one-to-one with the traditional computer
>> architecture we all know and love. To wit:
> [description deleted]
>> At least this far, this theory appears to owe a lot to computer science.
>> Granted, there is lots of empirical evidence in favour, but we all know
>> how a little evidence can go far too far towards developing an analogy.

>One of my philosophy professors in college offered the observation that
>models for the human mind have always seemed to correspond to the most
>advanced form of technology at that given point in history. He could

It's true that theories of cognition often reflect the current popular
technology. But before we start arguing about current theories as
reflections of computer science and physiology, I suggest we at least have
some common starting point for our discussion.

I don't want to suggest that you need a Ph.D. in Cognitive Psychology
to discuss the subject, but you might want to consider reading one
of the many intro texts on the subject before leaping to any speculations
(wild or otherwise :-).

An intro text I often recommend to people with a more than casual
interest in cognition is:

Anderson, John (1985).
Cognitive Psychology and Its Implications. (2nd edition)
New York: W.H. Freeman and Co.

[The 1st edition also has much to recommend it. It was written from
a psychological viewpoint, and introduces vocabulary and concepts that
may be unfamiliar to computer scientists. The 2nd edition was rewritten
with an AI (or cognitive psychology!) vocabulary, hence risks echoing the
preconceptions of the field instead of contributing fresh insights. -- KIL]


If you are interested in a historical perspective of psychological
research, I suggest you take a peek at:

Hearst, Eliot (Ed.) (1979).
The First Century of Experimental Psychology.
Hillsdale, New Jersey: Lawrence Erlbaum Associates, Pub.

And finally, though I don't always agree with what Richard Gregory has
to say, I always enjoy hearing or reading his ideas and theories. His
"Mind in Science" is an interesting speculative book.

Gregory, Richard (1981).
Mind in Science: A History of Explanations in
Psychology and Physics.
Cambridge: Cambridge University Press

Well, I hope we at least have some common reference point now...

Todd Ogasawara
"With a good wind behind me and and a lot of luck...
Ph.D. in Psychology later this year :-)"


--
Todd Ogasawara, U. of Hawaii Computing Center
UUCP: {ihnp4,seismo,ucbvax,dcdwest}!sdcsvax!nosc!uhccux!todd
ARPA: uhccux!todd@nosc.MIL
INTERNET: todd@uhccux.UHCC.HAWAII.EDU

------------------------------

Date: Wed, 10 Jun 87 09:51 EDT
From: Seth Steinberg <sas@bfly-vax.bbn.com>
Subject: Borrowing from Biology [Half in Jest]

Actually, the biologists have been borrowing from the history of the
Roman Empire. Cincinnatus comes down from his farm, codifies the
laws for the Republic, and creates a nearly perfect mechanism which
starts taking over the Mediterranean basin. By providing for a means
of succession (read "DNA replication"), the Empire is able to achieve
higher levels of organization. Unfortunately, the military (read "the
immune system") slowly grows in strength as the Empire expands, finally
reaches a limit to its expansion, and spends the next millennium
rotting away in Byzantium.

Theories about entropy are about complex systems in general, not just
the behavior of energy in steam engines. Biologists have latched onto
them to account for aging in organisms and to explain the epochs of
evolution. (Why aren't there any new phyla being created?) If you've
ever tried to make a major change in a decade-old program, think of what
the biologists are up against with their billion-year-old kludges.
Last month, an article in Scientific American described an aging mechanism
based on glucose complexes, arguing that many aging effects could be
caused by very slow chemical reactions induced by the operating
environment. Next month we may discover an actual internal counter
within each cell. It is quite probable that there are dozens of
mechanisms at work. With 90% of the genome encoding for garbage,
elegant design is more of a serendipity than the norm.

Seth Steinberg
sas@bbn.com

P.S. Did you notice the latest kludge? They've found a gene whose DNA
complement also encodes a gene! Kind of like a 68000 program you can
execute if you put a logical complement on each instruction fetch.
Neat, huh?
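
To see the strand trick concretely: reading the complementary strand means
complementing each base and reading in the opposite direction, so one
stretch of DNA can carry two messages. A minimal Python sketch (the sequence
is invented for illustration, not an actual gene):

# Each base pairs with its complement: A<->T, C<->G.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(strand):
    """Return the sequence as read off the complementary strand."""
    return strand.translate(COMPLEMENT)[::-1]

coding = "ATGGCGTTTTAA"             # hypothetical gene on one strand
print(reverse_complement(coding))   # TTAAAACGCCAT -- a second readable text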

------------------------------

Date: 12 Jun 87 16:08:04 GMT
From: hao!boulder!eddy@ames.arpa (Sean Eddy)
Subject: Re: Taking AI models and applying them to biology...

In article <1331@sigi.Colorado.EDU> pell@boulder.Colorado.EDU writes:
>It would seem to me that the step that is likely to give the cell trouble
>is not mitosis but DNA replication. If a whole chromosome is lost or
>non-disjoined, that cell is in some serious trouble. Progressive
>accumulation of mistakes through replication and general maintenance seems
>a more likely culprit.

"General maintenance" is a very important thing to bring up. It seems
to me that replication/mitosis can't be the whole story in aging. One
must also propose other models because there are cells that do not
divide after a certain point, yet still age and die. Neurons are the
classic example; not only do they not divide, they cannot even
be replaced (in humans) if damaged.

- Sean Eddy
- MCD Biology; U. of Colorado at Boulder; Boulder CO 80309
- eddy@boulder.colorado.EDU !{hao,nbires}!boulder!eddy
-
- "So what. Big deal."
- - Emilio Lazardo

------------------------------

Date: 13 Jun 87 23:03:16 GMT
From: mcvax!lambert@seismo.css.gov (Lambert Meertens)
Subject: Re: Taking AI models and applying them to biology...

In article <836@pixar.UUCP> davel@pixar.UUCP (David Longerbeam) writes:

| In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:
|
| > This description of the human memory system, though cloaked in vaguer terms,
| > corresponds more or less one-to-one with the traditional computer
| > architecture we all know and love. To wit:
|
| [description deleted]
|
| > At least this far, this theory appears to owe a lot to computer science.
| > Granted, there is lots of empirical evidence in favour, but we all know
| > how a little evidence can go far too far towards developing an analogy.
|
| One of my philosophy professors in college offered the observation that
| models for the human mind have always seemed to correspond to the most
| advanced form of technology at that given point in history.

I find the connection between models of human memory as developed in
cognitive psychology and existing computer architectures rather tenuous.
The main similarity appears to be that several levels of memory can be
discerned, but the suggested analogy in function is a bit far-fetched.

It is perhaps worth pointing out that much of what is in current models in
cognitive psychology can already be found in the pioneering work of Otto
Selz (Muenchen, 1881 - Auschwitz, 1943), antedating the computer era.

--

Lambert Meertens, CWI, Amsterdam; lambert@cwi.nl

------------------------------

Date: Thu, 11 Jun 87 13:48:05 BST
From: Graham Higgins <gray%hplb.csnet@RELAY.CS.NET>
Subject: Re: Taking AI models and applying them to biology...

In article <622@unicus.UUCP>, craig@unicus.UUCP (Craig D. Hubley) writes:

> I was semi-surprised in recent months to discover that cognitive psychology,
> far from developing a bold new metaphor for human thinking, has (to a degree)
> copied at least one metaphor from third-generation computer science.

Psychology freely borrows *any* models that will help it get a grip on
characterising and explaining the phenomena of cognition. Over the years,
analogies of the workings of the mind have been constructed from: windmills,
hydraulic systems, telephone switching exchanges and latterly, the computer (or
more properly, information-processing devices). The one thing that all these
analogies have in common is that they draw on the technological state-of-the-art
of the time. (The "internal combustion engine" analogy is a new one to me).

David Longerbeam's comment about the requirement for empiricism is valid in this
instance. Donald Hebb assumed a separation of STM and LTM in a 1949 paper (and
that's going back quite some time, only a year after Shockley's invention of the
transistor). It is unlikely that the computer-architecture construct of
"archived storage" played any part in Hebb's dichotomising of human memory. It
appears that this is one example of a model developed within cognitive
psychology, independently of developments in computer architecture. (I'm not
well-versed in comp.sci. history - but it seems reasonable to conjecture that
Hebb was unaware of the notions of "archived storage" when he was developing his
dichotomisation).


> This description of the human memory system, though cloaked in vaguer terms,
> corresponds more or less one-to-one with the traditional computer
> architecture we all know and love ...
>
> - senses have "iconic" and "echo" memories analogous to buffers.
> - short term memory holds information that is organized for quick
> processing, much like main storage in a computing system.
> - long term memory holds information in a sort of semantic
> association network where large related pieces of information
> reside, similar to backing or "archived" computing storage.
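
For concreteness, the quoted three-store mapping might be rendered as a toy
data structure. The Python sketch below is purely illustrative (all names
and capacities are invented) and claims no psychological fidelity:

from collections import deque

class ToyMemoryModel:
    """Toy three-store memory, mirroring only the quoted computer analogy."""

    def __init__(self, buffer_size=4, stm_size=7):
        self.sensory_buffer = deque(maxlen=buffer_size)  # "iconic"/"echoic"
        self.short_term = deque(maxlen=stm_size)         # ~ main storage
        self.long_term = {}                              # ~ "archived" storage

    def sense(self, stimulus):
        # New input lands in a small buffer and is quickly overwritten.
        self.sensory_buffer.append(stimulus)

    def attend(self):
        # Attended items move from the sensory buffer into short-term memory.
        if self.sensory_buffer:
            self.short_term.append(self.sensory_buffer.popleft())

    def rehearse(self, association):
        # Rehearsed items are filed in long-term memory under an association.
        if self.short_term:
            self.long_term.setdefault(association, []).append(self.short_term.pop())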

I think that this is somewhat of an over-simplification. There are quite a few
phenomena arising from studies of "iconic", "echoic", "short-term" and
"long-term" areas of human memory which do not fit so tamely into a
computer-architecture model. Thus, there has *not* been uncritical acceptance
either of the idea that the "iconic" and "echoic" aspects of memory are passive
or of the idea that memory can be simply dichotomised into STM and LTM sections.
In the absence of anything better, the analogies will do for now, but there are
too many phenomena which don't fit these analogies for them to be anything but
conveniences for the moment.

One of the disciplinary traits actively promoted in psychology (be it cognitive,
social, experimental, etc.) is a high degree of circumspection. (There is a
tradition that one never sees a one-armed psychologist - "on one hand ... and
on the other ..."). Thus models and analogies *can* be freely borrowed from
other areas and exploited for what they offer, for as long as they exhibit some
level of descriptive utility. It is instructive to note that contemporary
cognitive psychologists no longer use windmills or telephone exchanges (or even
the internal combustion engine) as analogies of the workings of the mind. These
particular analogies have outlived their usefulness and have been discarded (I
hope!).

Graham Higgins || The opinions expressed above
Hewlett-Packard Labs || are not to be construed as the
Bristol, U.K. || opinions, stated or otherwise,
gjh@hplb.csnet +44 272 799910 xt 4060 || of Hewlett-Packard

------------------------------

End of AIList Digest
********************
