AIList Digest           Wednesday, 15 Jul 1987    Volume 5 : Issue 182 

Today's Topics:
Logic Programming - ICOT Prolog Progress,
Humor - AI Justification of Star Wars,
Speculation - Moravec on Immortality,
Philosophy of Science - AI as a Science

----------------------------------------------------------------------

Date: Wed, 15 Jul 87 10:32:20 JST
From: Chikayama Takashi <chik%icot.jp@RELAY.CS.NET>
Reply-to: chik@icot.icot.JUNET (Chikayama Takashi)
Subject: Re: Say, what ever happened to ... ICOT Prolog?????

In article <8706111231.AA18169@mitre.arpa> elsaesser%mwcamis@MITRE.ARPA writes:
>It seems ages ago that the 5th generation project was going to
>reinvent AI in a Prolog "engine" that was to do 10 gazillion "
>LIPS". Anyone know what happened? I mean, if you can make so many
>"quality" cars (sans auto transmission, useful A/C, paint that can take
>rain and sun, etc.), why can't you make a computer that runs an NP-complete
>applications language in real time??? Semi-seriously, what is the status
>of the 5th generation project, anyone got an update?

Well, we are sorry for not distributing enough information to the AI
community. Most papers related to ICOT's research are distributed to
the logic programming community but not to the AI world (I guess you
know what poor propagandists we Japanese are :-). Many are reported in:
International Conference on Logic Programming
IEEE Symposium on Logic Programming
Please look into proceedings of these conferences.

As for the 10-gazillion-LIPS computers: what our research over these five
years has revealed is that highly parallel hardware can never be practical
without a great deal of software effort, including new concepts in programming
languages. More stress is now put on software than in the original
project plan. Indeed, VLSI technology has been dropped from the
project. Our experience shows that VLSI technology is NOT the most
difficult obstacle on the way to realistic highly parallel computer
systems. An efficient system with 256 processors may be built without
changing the software at all; but for systems with 4096 processors,
we need a drastic change, and that is what we need to achieve 10
gazillion LIPS. It is NOT that VLSI technology has become easier, but that
we have found MORE difficult problems, unfortunately.

Where are we? Well, one of our recent hardware achievements is the
development of the PSI-II machine, which executes 400 KLIPS (much less
than 10 gazillion, I guess :-). It is a sequential machine and will
be used as the element processor of our prototype parallel processor,
Multi-PSI V2 (with 64 PEs), whose hardware is scheduled to come up at
the end of this year.
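
[A back-of-envelope illustration in Python, not from the original message:
taking the figures above at face value and assuming perfect linear scaling,
which the message itself says does not extend to thousands of processors.]

# Figures quoted above: 400 KLIPS per PSI-II processor, 64 processing
# elements in Multi-PSI V2. Perfect linear scaling is assumed here purely
# for illustration; the message stresses that software for thousands of
# processors is exactly the hard part.
klips_per_pe = 400            # PSI-II: 400,000 logical inferences per second
pes = 64                      # Multi-PSI V2 processing elements

print(f"Ideal 64-PE aggregate:   {klips_per_pe * pes / 1000:.1f} MLIPS")        # 25.6 MLIPS
print(f"Ideal 4096-PE aggregate: {klips_per_pe * 4096 / 1_000_000:.2f} GLIPS")  # ~1.64 GLIPS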

If you are interested in our research, a survey by myself titled:
"Parallel Inference System Researches in the FGCS Project"
will be presented at the IEEE Symposium on Logic Programming, held in
San Francisco, Aug 31-Sep 4, 1987. If you are more interested
in our project, please join the FGCS'88 conference. It will be held
in Tokyo during Nov 28-Dec 2, 1988.

Takashi Chikayama

------------------------------

Date: 14-Jul-1987 2028
From: minow%thundr.DEC@decwrl.dec.com (Martin Minow THUNDR::MINOW
ML3-5/U26 223-9922)
Subject: Book Report

From "Dirk Gently's Holistic Detective Agency," by Douglas Adams.
(New York: Simon and Schuster, 1987):

"Well," he said, "it's to do with the project which first made
the software incarnation of the company profitable. It was
called _Reason_, and in its own way it was sensational."

"What was it?"

"Well, it was a kind of back-to-front program. It's funny how
many of the best ideas are just an old idea back-to-front. You
see, there have already been several programs written that help
you make decisions by properly ordering and analysing all the
relevant facts.... The drawback with these is that the decision
which all the properly ordered and analyzed facts point to is not
necessarily the one you want.

"... Gordon's great insight was to design a program which allowed
you to specify in advance what decision you wished it to reach,
and only then to give it all the facts. The program's task, ...
was simply to construct a plausible series of logical-sounding
steps to connect the premises with the conclusion." ....

"Heavens. and did the program sell very well?"

"No, we never sold a single copy.... The entire project was bought
up, lock, stock, and barrel, by the Pentagon. The deal put WayForward
on a very sound financial foundation. Its moral foundation, on the
other hand, is not something I would want to trust my weight to.
I've recently been analyzing a lot of the arguments put forward in
favor of the Star Wars project, and if you know what you're looking
for, the pattern of the algorithms is very clear.

"So much so, in fact, that looking at Pentagon policies over the
last couple of years I think I can be fairly sure that the US
Navy is using version 2.00 of the program, while the Air Force for
some reason only has the beta-test version of 1.5. Odd, that."
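
[A playful Python sketch of the "back-to-front" idea described above, not
taken from the book: fix the conclusion first, then search a pool of canned,
plausible-sounding moves for a chain that ends at it. The rules and wording
are invented for illustration.]

# Hypothetical "rules": each desired claim maps to the canned premises
# that can be offered in its support.
RULES = {
    "we must fund the project": ["the threat is unprecedented"],
    "the threat is unprecedented": ["the adversary is investing heavily"],
    "the adversary is investing heavily": [],   # offered as self-evident
}

def justify(conclusion, chain=None):
    """Work backward from the desired conclusion to 'supporting' premises."""
    chain = (chain or []) + [conclusion]
    premises = RULES.get(conclusion)
    if premises is None:
        return None                       # no canned move available
    for p in premises:
        sub = justify(p, chain)
        if sub is not None:
            return sub
    return chain                          # nothing left to justify

print(" because ".join(justify("we must fund the project")))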

------------------------------

Date: Wed 8 Jul 87 16:19:25-PDT
From: Ken Laws <LAWS@IU.AI.SRI.COM>
Subject: Moravec on Immortality

[Forwarded with permission of Hans.Moravec@ROVER.RI.CMU.EDU.]


From AP Newsfeatures, June 14, 1987
By MICHAEL HIRSH
Associated Press Writer
PITTSBURGH (AP) - If you can survive beyond the next 50 years or so,
you may not have to die at all - at least, not entirely. [...]
Hans Moravec, director of the Mobile Robot Laboratory of the Robotics
Institute at Carnegie Mellon University, believes that computer
technology is advancing so swiftly there is little we can do to avoid
a future world run by superintelligent robots.
Unless, he says, we become them ourselves.
In an astonishingly short amount of time, scientists will be able to
transfer the contents of a person's mind into a powerful computer,
and in the process, make him - or at least his living essence -
virtually immortal, Moravec claims.
''The things we are building are our children, the next
generations,'' the burly, 39-year-old scientist says. ''They're
carrying on all our abilities, only they're doing it better. If you
look at it that way, it's not so devastating.'' [...]
''I have found in traveling throughout all of the major robotics and
artificial intelligence centers in the U.S. and Japan that the ideas
of Hans Moravec are taken seriously,'' says Grant Fjermedal, author
of ''The Tomorrow Makers,'' a recent book about the future of
computers and robotics. [He] devotes the first five chapters of
his book to the work of Moravec and his proteges at CMU.
MIT's Gerald J. Sussman, who wrote the authoritative textbook on
artificial intelligence, agreed that computerized immortality for
people ''isn't very long from now.''
''A machine can last forever, and even if it doesn't you can always
make backups,'' Sussman told Fjermedal. ''I'm afraid, unfortunately,
that I'm the last generation to die. Some of my students may manage
to survive a little longer.'' [...]
CMU's Allen Newell, one of the so-called founding fathers of
artificial intelligence, cautions that while little stands in the way
of intelligent machines, the transfer of a human mind into one is
''going down a whole other path.''
''The ability to create intelligent systems is not at all the same
as saying I can take an existing mind and capture what's in that
mind. You might be able to create intelligence but not (capture) the
set of biological circumstances that went into making a particular
mind,'' he says.
In Moravec's forthcoming book, ''Mind Children,'' he argues that
economic competition for faster and better information-processing
systems is forcing the human race to engineer its own technological
Armageddon, one that a nuclear catastrophe can only delay.
Natural evolution is finished, he says. The human race is no longer
procreating, but designing, its successors.
''We owe our existence to organic evolution. But we owe it little
loyalty,'' Moravec writes. ''We are on a threshold of a change in the
universe comparable to the transition from non-life to life.''
Moravec's projections are based on his research showing that, on the
average, the cost of computation has halved every two years from the
time of the primitive adding machines of the late 19th century to the
supercomputers of the 1980s. [...]
Moreover, the rate is speeding up, and the technological pipeline is
full of new developments, like molecule-sized computer circuits and
recent advances in superconductors, that can ''sustain the pace for
the foreseeable future,'' he says.
The implications of a continued steady decrease in computing costs
are even more mind-boggling.
It is no surprise that studies in artificial intelligence have shown
sparse results in the last 20 years, Moravec says. Scientists are
severely limited by the calculating speed and capacity of laboratory
computers. Today's supercomputers, running at full tilt, can match in
power only the 1-gram brain of a mouse, he says.
But by the year 2010, assuming the growth rate of the last 80 years
continues, the best machines will be a thousand times faster than
they are today and equivalent in speed and capacity to the human
mind, Moravec argues. [...]
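
[A quick check of the arithmetic in Python, not part of the article, using
the article's own assumption of cost halving every two years; the 1987
baseline year is an assumption, since the article only says "today". The
result is of the same order as the article's "thousand times faster".]

# Extrapolation under the article's growth assumption: compute per dollar
# doubles every two years.
baseline_year = 1987          # assumed baseline (the digest's date)
target_year = 2010
doubling_period_years = 2.0

factor = 2 ** ((target_year - baseline_year) / doubling_period_years)
print(f"Improvement by {target_year}: roughly {factor:,.0f}x")   # ~2,900x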
''All of our culture can be taken over by robots. It'll be boring to
be human. If you can get human equivalence by 2030, what will you
have by 2040?'' Moravec asks, laughing.
''Suppose you're sitting next to your best friend and you're 10
times smarter than he is. Are you going to ask his advice? In an
economic competition, if you make worse decisions, you don't do as
well,'' he says.
''We can't beat the computers. So that opens up another possibility.
We can survive by moving over into their form.''
There are a number of different scenarios of ''digitizing'' the
contents of the human mind into a computer, all of which will be made
plausible in the next 50 to 100 years by the pace of current
technology, Moravec says.
One is to hook up a superpowerful computer to the corpus callosum,
the bundle of nerve fibers that connects the two hemispheres of the
brain. The computer can be programmed to monitor the traffic between
the two and, eventually, to teach itself to think like the brain.
After a while, the machine begins to insert its own messages into
the thought stream. ''The computer's coming up with brilliant
solutions and they're just popping into your head,'' Moravec says [...]
As you lose your natural brain capacity through aging, the computer
takes over function by function. And with advances in brain scanning,
you might not need any ''messy surgery,'' Moravec says. ''Perhaps you
just wear some kind of helmet or headband.'' At the same time, the
person's aging, decrepit body is replaced with robot parts.
''In the long run, there won't be anything left of the original. The
person never noticed - his train of thought was never interrupted,''
he says.
This scenario is probably more than 50 years away, Moravec says, but
because breakthroughs in medicine and biotechnology are likely to
extend people's life spans, ''anybody now living has a ticket.''
Like many leading artificial intelligence researchers, Moravec
discounts the mind-body problem that has dogged philosophers for
centuries: whether a person's identity - in religious terms, his soul
- can exist independently of the physical brain.
''If you can make a machine that contains the contents of your mind,
then that machine is you,'' says MIT's Sussman.
Moravec believes a machine-run world is inevitable ''because we
exist in a competing economy, because each increment in technology
provides an advantage for the possessor . . . Even if you can keep
them (the machines) slaves for a long time, more and more
decision-making will be passed over to them because of the
competitiveness.
''We may still be left around, like the birds. It may well be
that we can arrange things so the machines leave us alone. But sooner
or later they'll accidentally step on us. They'll need the material of
the earth.''
Such talk is dismissed as sheer speculation by Moravec's detractors,
among them his former teacher, Stanford's John McCarthy, who is also
one of the founding fathers of artificial intelligence research.
McCarthy says that while he respects Moravec's pioneering work on
robots, his former Ph.D. student is considered a ''radical.''
''I'm more uncertain as to how long it (human equivalence) will
take. Maybe it's five years. Maybe it's 500. He has a slight tendency
to believe it will happen as soon as computers are powerful enough.
They may be powerful enough already. Maybe we're not smart enough to
program them.''
Even with superintelligent machines, McCarthy says, it's hardly
inevitable that computers will take over the world.
''I think we ought to work it out to suit ourselves. In particular
it is not going to be to our advantage to give things with
human-level intelligence human-like emotions (like ambition). You
might want something to sit there and maybe read an encyclopedia
until you're ready to use it again,'' he says.
George Williams, an emeritus professor of divinity at Harvard
University, called Moravec's scenario ''entirely repugnant.'' [...]
McCarthy, however, insists there's no need to panic.
''Because the nature of the path that artificial intelligence will
take is so unknown, it's silly to attempt to plan any kind of social
policy at this early time,'' he says.

------------------------------

Date: Sun 12 Jul 87 19:45:34-PDT
From: Lee Altenberg <CCOCKERHAM.ALTENBERG@BIONET-20.ARPA>
Subject: AI is not a science

This discussion has brought to my mind the question of undecidability
in cellular automata, as discussed by S. Wolfram. For some rules and initial
sequences, the most efficient way of finding out how the automaton will behave
is simply to run it. Now, what is the status of knowledge about the
behavior of automata and the process of obtaining this knowledge? Is it a
science or not?
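
[A minimal illustration in Python, not from the original message, of the point
that for such rules you simply have to run the automaton: an elementary
one-dimensional cellular automaton of the kind Wolfram studies. Rule 30 and
the initial condition are chosen arbitrarily as an example of hard-to-predict
behaviour.]

def step(cells, rule=30):
    """Apply one synchronous update of an elementary CA (wrap-around ends)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right   # value 0..7
        out.append((rule >> neighbourhood) & 1)               # look up rule bit
    return out

cells = [0] * 31
cells[15] = 1                  # single live cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)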
Invoking some of the previous arguments regarding AI, it could be
said that it is not a science because knowing something about an
automaton tells one nothing about the actual world. That is why mathematics
has been said not to be a science.
Yet to find out how undecidable automata behave, one needs to
carry out the experiment of running them. In this way they are just like
worldly phenomena, where knowledge about them comes from observing them.
One must take an empirical approach to undecidable systems.
But there is another angle of evaluation. Naturalists have been
belittled as "not doing science" because their work is largely descriptive.
Does science consist, then, in making general statements? Or, to be more precise,
does science consist of redescribing reality in terms of some general
statements plus smaller sets of statements about the world, which, when combined,
can generate the full (the naturalists') description of reality? If this
is the case, then all examples of undecidable (and, I would guess, chaotic)
processes fall outside the domain of science, which seems to me
overly restrictive.

------------------------------

Date: Mon, 13 Jul 87 15:08:07 bst
From: Stefek Zaba <sjmz%hplb.csnet@RELAY.CS.NET>
Subject: AI as science: establishing generality of algorithms

In response to the points of Jim Hendler and John Nagle, about whether
you can show that your favourite planning system is more general than
the Standard Reference:

At the risk of drawing the slings and arrows of people who sincerely believe
Formalism to be the kiss of death to AI, I'd argue that there *are* better
characterisations of the power of algorithms than a battery of test cases -
or, in the case of the typical AI program reported in necessarily
space-limited journals, a tiny number thereof. Such characterisations are in
the form of more formal specs of the algorithm - descriptions of it which
strip away implementation efficiency tricks, and typically use quantification
and set operations to get at the gist of the algorithm. You can then *prove*
the correctness of your algorithm *under given assumptions*, or "equivalently"
derive conditions under which your algorithm produces correct results.

Such proofs are usually (and, I believe, more usefully) "rigorous - but -
informal"; that is, a series of arguments with which your colleagues cannot
find fault, rather than an immensely long and tortuous series of syntactic
micro-steps which end up with a symbol string representing the desired
condition. Often it's easier to give sufficient (i.e. stronger than
necessary) conditions under which the algorithm works than a precise set of
necessary-and-sufficient ones. *Always* it's harder (for mere mortals like
me, anyway) than just producing code which works on some examples.

An example of just such a judicious use of formalism which I personally found
inspiring is Tom Mitchell's PhD thesis covering the version space algorithm
(Stanford 1978, allegedly available as STAN-CS-78-711). After presenting a
discursive description of the technique in chapter 2, chapter 3 gives a formal
treatment which introduces a minimum of new terminology, and gives a simple
and *testable* condition under which the algorithm works: "A set of patterns P
with associated matching predicate M is said to be an admissible pattern
language if and only if every chain [totally ordered subset] of P has a
maximum and a minimum element".
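
[For readers who want something concrete, here is a compact Python sketch, my
own rather than Mitchell's, of the version-space (candidate-elimination) idea
that the admissibility condition underwrites, for conjunctive hypotheses over
discrete attributes. Boundary-set pruning is simplified, and the attribute
domains and training examples are invented for illustration.]

# A hypothesis is a tuple whose entries are either a specific value or "?"
# (matches anything); None stands for the most specific hypothesis.
DOMAINS = [("sunny", "rainy"), ("warm", "cold"), ("high", "low")]

def matches(h, x):
    return all(hv == "?" or hv == xv for hv, xv in zip(h, x))

def more_general_or_equal(h1, h2):
    return all(a == "?" or a == b for a, b in zip(h1, h2))

def specialisations(h, x):
    """Minimal specialisations of h that exclude the negative example x."""
    out = []
    for i, values in enumerate(DOMAINS):
        if h[i] == "?":
            for v in values:
                if v != x[i]:
                    out.append(h[:i] + (v,) + h[i + 1:])
    return out

def candidate_elimination(examples):
    S = [None]                              # most specific boundary
    G = [("?",) * len(DOMAINS)]             # most general boundary
    for x, positive in examples:
        if positive:
            G = [g for g in G if matches(g, x)]
            S = [x if s is None
                 else tuple(sv if sv == xv else "?" for sv, xv in zip(s, x))
                 for s in S]
            S = [s for s in S if any(more_general_or_equal(g, s) for g in G)]
        else:
            S = [s for s in S if s is None or not matches(s, x)]
            G = ([sp for g in G if matches(g, x) for sp in specialisations(g, x)
                  if any(s is None or more_general_or_equal(sp, s) for s in S)]
                 + [g for g in G if not matches(g, x)])
    return S, G

S, G = candidate_elimination([
    (("sunny", "warm", "high"), True),
    (("rainy", "cold", "high"), False),
    (("sunny", "warm", "low"), True),
])
print("S boundary:", S)   # [('sunny', 'warm', '?')]
print("G boundary:", G)   # [('sunny', '?', '?'), ('?', 'warm', '?')]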

Stefek Zaba, Hewlett-Packard Labs, Bristol, England.
[Standard disclaimer concerning personal nature of views applies]

------------------------------

Date: 13 Jul 87 05:23:56 GMT
From: ihnp4!lll-lcc!esl.ESL.COM!ssh@ucbvax.Berkeley.EDU (Sam)
Reply-to: ssh@esl.UUCP (Sam)
Subject: is AI a science?


[There are several components of AI, as there are of CS, but...]

Let's take a step back. Is "Computer Science" a science? -- Sam

------------------------------

End of AIList Digest
********************
