AIList Digest           Wednesday, 20 Jun 1984     Volume 2 : Issue 75 

Today's Topics:
Expert Systems - Regression Analysis,
AI Tools - Q'NIAL & Pandora Project,
Conference - AAAI-84 Program Now Available,
AI News - Army A.I. Grant to Texas,
Standards - Maintaining High Quality in AI Products,
Social Implications - Artificial People,
Seminar - Precondition Analysis
----------------------------------------------------------------------

Date: Wed, 20 Jun 84 12:27:07 EDT
From: "Ferd Brundick (VLD/LTTB)" <fsbrn@Brl-Voc.ARPA>
Subject: request for information

Hi,

Does anyone know of any expert systems to aid regression analysis?
I've been told that Bell Labs is working in the area of AI data
analysis; William Gayle is reportedly developing a program called REX.
I would appreciate any information in this area (net addresses, phone
numbers, references, etc). Thanks.

dsw, fferd
Fred S. Brundick
USABRL, APG, MD.
<fsbrn@brl-voc>

[Bill Gayle has been developing an expert system interface to the
Bell Labs S statistical package. I believe it is based on the
Stanford Centaur production/reasoning system and that it uses
"pipes" to invoke S for analysis and display services. Gayle's
system currently has little expertise in analyzing residuals,
but it does know what types of transformations might be applied
to different data types. It is basically a helpful user interface
rather than an automated analysis system.
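The "pipes" arrangement is worth a sketch: the reasoning program writes
S commands down a pipe to a subordinate S process and reads the results
back, treating the statistics package as a service. A minimal
illustration in Python, where "s_engine" is a hypothetical stand-in for
an S interpreter that reads one command per line on stdin and prints
one result line:

    import subprocess

    # Start the statistics engine as a child process joined by pipes.
    # "s_engine" is a hypothetical stand-in for the real S interpreter.
    engine = subprocess.Popen(
        ["s_engine"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

    def analyze(command):
        # Send one command down the pipe and read back one result line.
        engine.stdin.write(command + "\n")
        engine.stdin.flush()
        return engine.stdout.readline().strip()

    print(analyze("regress y on x"))  # e.g. a line of fitted coefficients

This is only the plumbing, of course; the expertise lives in the
production rules that decide which commands to send.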

Rich Becker, one of the developers of S, has informed me that
source code for S is available. Call 800-828-UNIX for information,
or write to

AT&T Technologies Software Sales
PO Box 25000
Greensboro, NC 27420

For a description of the S package philosophy see Communications of
the ACM, May 1984, Vol. 27, No. 5, pp. 486-495.

Another automated data analysis system is the RADIX (formerly RX)
system being developed at Stanford by Dr. Robert Blum and his students.
It has knowledge about drug interactions, symptom onset times, and
other special considerations for medical database analysis. It is
designed to romp through a database looking for interesting correlations,
then to design and run more (statistically) controlled analyses to
attempt confirmation of the discovered effects.
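The screen-then-confirm loop is easy to caricature. Here is a toy
Python version, which substitutes a simple held-out re-test for RADIX's
statistically controlled follow-up studies (and omits all of its
medical knowledge):

    import numpy as np
    from scipy import stats

    def screen_and_confirm(table, names, screen_p=0.01, confirm_p=0.001):
        # Phase 1: romp through all variable pairs looking for
        # interesting correlations in half the data.
        # Phase 2: re-test the survivors on the held-out half.
        n = len(table)
        explore, confirm = table[: n // 2], table[n // 2 :]
        findings = []
        for i in range(table.shape[1]):
            for j in range(i + 1, table.shape[1]):
                r, p = stats.pearsonr(explore[:, i], explore[:, j])
                if p < screen_p:
                    r2, p2 = stats.pearsonr(confirm[:, i], confirm[:, j])
                    if p2 < confirm_p:
                        findings.append((names[i], names[j], round(r2, 3)))
        return findings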

-- Ken Laws ]

------------------------------

Date: Tue 19 Jun 84 12:44:50-EDT
From: Michael Rubin <RUBIN@COLUMBIA-20.ARPA>
Subject: Re: Q'NIAL

According to an advertisement I got, NIAL is "nested interactive array
language" and Q'NIAL is a Unix implementation from Queen's University at
Kingston, Ontario. It claims to be a sort of cross between LISP and APL with
"nested arrays" instead of APL flat arrays or LISP nested lists, "structured
control constructs... and a substantial functional programming subset." The
address is Nial Systems Ltd., 20 Hatter St., Kingston, Ontario K7M 2L5 (no
phone # or net address listed). I don't know anything about it other than what
the ad says.
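
For flavor, the "nested arrays" idea can be illustrated outside NIAL
itself. A rough Python sketch (my own construction, not NIAL code) of
how a scalar operation might pervade arbitrarily nested arrays -
APL-style extension over LISP-style nested structure:

    import operator

    def pervade(op, a, b):
        # Apply a binary operation through arbitrarily nested lists,
        # the way scalar operations extend through nested arrays.
        if isinstance(a, list) and isinstance(b, list):
            return [pervade(op, x, y) for x, y in zip(a, b)]
        if isinstance(a, list):
            return [pervade(op, x, b) for x in a]
        if isinstance(b, list):
            return [pervade(op, a, y) for y in b]
        return op(a, b)

    print(pervade(operator.add, [1, [2, 3]], 10))   # -> [11, [12, 13]]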

------------------------------

Date: Sun 17 Jun 84 16:28:44-EDT
From: MDC.WAYNE%MIT-OZ@MIT-MC.ARPA
Subject: Pandora Project

In the July 1984 issue of Esquire appears an article by Frank Rose
entitled "The Pandora Project." Rose provides some glimpses into work
at Berkeley by Robert Wilensky and Joe Faletti on the commonsense
reasoning programs PAMELA and PANDORA.

--Wayne McGuire

------------------------------

Date: 17 June 1984 0019-EDT
From: Dave Touretzky at CMU-CS-A
Subject: AAAI-84 Program Now Available

[Forwarded from the CMU bboard by Laws@SRI-AI.]

The program for AAAI-84, which lists papers, tutorials, panel discussions,
etc., is now available on-line, in the following files:

TEMP:AAAI84.SCH[C410DT50] on CMUA
<TOURETZKY>AAAI84.SCH on CMUC
[g]/usr/dst/aaai84.sch on the GP-Vax

The program is 36 pages long if you print it on the dover in Sail10 font.

------------------------------

Date: Tue 19 Jun 84 18:26:11-CDT
From: Gordon Novak Jr. <CS.NOVAK@UTEXAS-20.ARPA>
Subject: Army A.I. Grant to Texas

[Forwarded from the UTexas-20 bboard by Laws@SRI-AI.]

The U.S. Army Research Office, headquartered in Research Triangle Park,
North Carolina, has announced the award of a contract to the University
of Texas at Austin for research and education in Artificial Intelligence.
The award is for approximately $6.5 million over a period of five years.

The University of Texas has established an Artificial Intelligence
Laboratory as an organized research unit. Dr. Gordon S. Novak Jr. is
principal investigator of the project and has been named Director of
the Laboratory. Dr. Robert L. Causey is Associate Director.
Other faculty whose research is funded by the contract and who will be
members of the Laboratory include professors Robert F. Simmons, Vipin
Kumar, and Elaine Rich. All are members of the Department of Computer
Sciences except Dr. Causey, who is Chairman of the Philosophy Department.

The contract is from the Electronics Division of the Army Research Office,
under the direction of Dr. Jimmie Suttle. The contract will provide
fellowships and research assistantships for graduate students, faculty
research funding, research computer equipment, and staff support.

The research areas covered by the Army Research Office contract include
automatic programming and solving of physics problems by computer (Novak),
computer understanding of mechanical devices described by English text
and diagrams (Simmons), parallel programs and computer architectures for
solving problems involving searching (Kumar), reasoning under conditions
of uncertainty, and intelligent interfaces to computer programs (Rich).

------------------------------

Date: Tuesday, 19-Jun-84 12:19:22-BST
From: BUNDY HPS (on ERCC DEC-10) <Bundy%edxa@ucl-cs.arpa>
Subject: Maintaining High Quality in AI Products

Credibility has always been a precious asset for AI, but never
more so than now. We are being given the chance to prove ourselves. If
the range of AI products now coming onto the market is shown to
provide genuine solutions to hard problems then we have a rosy future.
A few such products have been produced, but our future could still be
jeopardized by a few well-publicized failures.

Genuine failures - where there was a determined, but ultimately
unsuccessful, effort to solve a problem - are regrettable, but not fatal.
Every technology has its limitations. What we have to worry about are
charlatans and incompetents taking advantage of the current fashion
and selling products which are overrated or useless. AI might then be
stigmatized as a giant con-trick, and the current tide of enthusiasm
would ebb as fast as it flowed. (Remember Machine Translation - it
could still happen.)

The academic field guards itself against charlatans and
incompetents by the peer review of research papers, grants, PhDs, etc.
There is no equivalent in the commercial AI field. Faced with this
problem other fields set up professional associations and codes of
practice. We need a similar set-up and we needed it yesterday. The
'blue chip' AI companies should get together now to found such an
association. Membership should depend on a continuing high standard of
AI product and in-house expertise. Members would be able to advertise
their membership and customers would have some assurance of quality.
Charlatans and incompetents would be excluded or ejected, so that the
failure of their products would not be seen to reflect on the field as
a whole.

A mechanism needs to be devised to prevent a few companies
annexing the association to themselves and excluding worthy
competition. But this is not a big worry. Firstly, in the current state
of the field AI companies have a lot to gain by encouraging quality in
other companies. Every success increases the market for everyone,
whereas failure decreases it. Until the size of the market has been
established and the capacity of the companies has risen to meet it, they
have more to gain than to lose by mutual support. Secondly, excluded
companies can always set up a rival association.

This association needs a code of practice, which members would
agree to adhere to and which would serve as a basis for refusing
membership. What form should such a code take, i.e. what counts as
malpractice in AI? I suspect malpractice may be a lot harder to define
in AI than in insurance, or medicine, or travel agency. Due to the
state of the art, AI products cannot be perfect. No-one expects 100%
accurate diagnosis of all known diseases. On the other hand a program
which only works for slight variations of the standard demo is clearly
a con. Where is the threshold to be drawn and how can it be defined?
What constitutes an extravagant claim? Any product which claims to:
understand any natural language input, or to make programming
redundant, or to allow the user to volunteer any information, sounds
decidedly smelly to me. Where do we draw the line? I would welcome
suggestions and comments.

Alan Bundy

------------------------------

Date: 22 Jun 84 6:44:56-EDT (Fri)
From: hplabs!tektronix!uw-beaver!cornell!vax135!ukc!west44!greenw @
Ucb-Vax.arpa
Subject: Human models
Article-I.D.: west44.243


[The time has come, the Walrus said, to talk of many things...]

Consider...
With present computer technology, it is possible to build
(simple) molecular models, and get the machine to emulate exactly
what the atoms in the `real` molecule will do in any situation.
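
(A molecular model in this sense is just numerical integration of the
forces between atoms. A toy sketch in Python - my own illustration, in
reduced units, not anyone's production code - of two atoms under a
Lennard-Jones force, stepped with the velocity Verlet scheme:

    import numpy as np

    eps, sigma, dt = 1.0, 1.0, 0.002   # reduced units, unit masses

    def force(r):
        # Lennard-Jones force magnitude along a separation r.
        s6 = (sigma / r) ** 6
        return 24 * eps * (2 * s6 ** 2 - s6) / r

    x = np.array([0.0, 1.5])           # positions of the two atoms
    v = np.array([0.0, 0.0])           # velocities
    for _ in range(1000):
        f = force(x[1] - x[0]) * np.array([-1.0, 1.0])
        v += 0.5 * dt * f              # half kick
        x += dt * v                    # drift
        f = force(x[1] - x[0]) * np.array([-1.0, 1.0])
        v += 0.5 * dt * f              # second half kick
    print("separation after 1000 steps:", x[1] - x[0])

Real simulations are this same loop at vastly larger scale - and, it
should be said, approximate rather than exact.)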

Consider also...
Software and hardware are getting more powerful; larger models
can be built all the time.

[...Of shoes and Ships...]

One day someone may be able to build a model that will be an exact
duplicate of a human brain.
Since it will be perfect down to the last atom, it will also be
able to act just like a human brain.
i.e. It will be capable of thought.

[...And Sealing Wax...]

Would such an entity be considered `human`? For, though it would
not be `alive` in the biological sense, someone talking on the telephone
to its very sophisticated speech synthesiser, or reading a letter typed
by it, would consider it a perfectly normal, if rather intelligent,
person.
Hmmmmmm.

One last thought...
Even if it could be given all the correct education, might it still
suffer from the HAL 9000 syndrome [2001]: fear of being turned off if it
did something wrong?

[...of Cabbages and Kings.]

Jules Greenwall,
Westfield College, London, England.

from...

vax135              greenw (UNIX)
       \           /
mcvax - !ukc!west44!
       /           \
hou3b               westf!greenw (PR1ME)


The MCP is watching you...
End of Line.

------------------------------

Date: 18 Jun 84 13:27:47-PDT (Mon)
From: hplabs!hpda!fortune!crane @ Ucb-Vax.arpa
Subject: Re: A Quick Question - Mind and Brain
Article-I.D.: fortune.3615

Up to this point the ongoing discussion has neglected to take two
things into account:

(1) Subconscious memory - a person can be enabled (through
hypnosis or by asking him the right way) to remember
infinite details of any experience of this or prior lifetimes.
Does the mind selectively block out trivia in order to focus on
what's important currently?

(2) Intuition - by this I mean huge leaps into discovery
that have nothing to do with the application of logical
association or sensory observation. This kind of stuff
happens to all of us and cannot easily be explained by
the physical/mechanical model of the human mind.

I agree that if you could build a computer big enough and fast
enough and teach it all the "right stuff", you could duplicate
the human brain, but not the human mind.

I don't intend to start a metaphysical discussion, but the above
needs to be pointed out once in a while.

John Crane

------------------------------

Date: Wed 20 Jun 84 10:01:39-PDT
From: WYLAND@SRI-KL.ARPA
Subject: The Turing Test - machines vs people

Tony Robison's comments about machine "soul" (AIList V2 #74)
bring up an unsettling point - what happens when we make a machine
that passes the Turing test? For:

o One of the goals of AI (or at least some workers in the
field - hedge, hedge) is to make a machine that will pass
the Turing test.

o Passing the Turing test means that you cannot distinguish
between man and machine by their written responses to
written questions (i.e., over a teletype). Today, we could
extend the definition to include oral questions (i.e., over
the telephone) by adding speech synthesis and recognition.

o If you cannot tell the difference between person and machine
by the formal social interaction of conversation, *how will
the legal and social systems differentiate between them?!*

Our culture(s) is set up to judge people using conversation,
written or oral: the legal arguments of courts, all of the
testing through schools, psychological examination, etc. We have
chosen the capability for rational conversation (including the
potential capability for it in infants, etc.) as the test for
membership in human society, rejecting membership based on
physical characteristics such as body shape (men/women,
"foreigners") and skin color, and the content of the
conversations such as that provided by cultural/religious/political
beliefs, etc. If we really do make machines that are
*conversationally indistinguishable* from humans, we are going to
have some interesting social problems, whether or not machines
have "souls". Will we have to reject rational conversation as
the test of membership in society? If so, what do we fall back
on? (The term "meathead" may become socially significant!) And
what sort of interesting things are going to happen to the
social/legal/religious systems in the meantime?

Dave Wyland
WYLAND@SRI

P.S. Asimov addressed these problems nicely in his renowned "I,
Robot" series of stories.

------------------------------

Date: 18 Jun 1984 14:21 EDT (Mon)
From: Peter Andreae <PONDY%MIT-OZ@MIT-MC.ARPA>
Subject: Seminar - Precondition Analysis

[Forwarded from the MIT bboard by SASW@MIT-MC.]

PRECONDITION ANALYSIS - LEARNING CONTROL INFORMATION


Bernard Silver

Dept. of AI, University of Edinburgh

2pm Wednesday, June 20.
8th Floor Playroom


I will describe LP, a program that learns equation solving strategies from
worked examples. LP uses a new learning technique called Precondition
Analysis. Precondition Analysis learns the control information that is
needed for efficient problem solving in domains with large search spaces.

Precondition Analysis is similar in spirit to the recent work of Winston,
Mitchell and DeJong. It is an analytic learning technique, and is capable
of learning from a single example.

LP has successfully learned many new equation solving strategies.

------------------------------

End of AIList Digest
********************
