AIList Digest            Sunday, 12 Jul 1987      Volume 5 : Issue 175 

Today's Topics:
Queries - ANIMAL in BASIC & XLisp & Monkey and Bananas Benchmark &
Conference on Production Planning and Control & Neural Networks &
GLISP,
Tools - Real Time Expert Systems,
Programming - Software Reuse,
Law - Liability in Expert Systems,
Expert Systems - Plausible Reasoning

----------------------------------------------------------------------

Date: 9 Jul 87 03:04:14 GMT
From: David L. Brauer <nosc!humu!dbrauer@sdcsvax.ucsd.edu>
Subject: ANIMAL in BASIC ???

Somewhere in the darkest reaches of my memory I recall seeing a listing
of the game ANIMAL in BASIC. It's that old standby introduction to rule-based
reasoning that tries to deduce what animal you have in mind by asking
questions like "Does it have feathers?", "Does it have hooves?" etc.
The problem is that I described this program to my wife and she now wants
to program it on an Apple IIc for her elementary school students. I believe
I saw the listing in an "Intro to AI" article in some magazine but I'm not
sure. I would prefer not to have to help her program the thing from
scratch, so any pointers would be greatly appreciated.
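
The logic is simple enough that I can sketch it here (in Python rather than
BASIC, with made-up prompts, just to show the shape of the program; the
actual listing is still what I'm after): a binary tree of yes/no questions
that grows a new branch whenever the program guesses wrong.

    # Minimal sketch of the classic ANIMAL guessing game (illustrative, not
    # the original BASIC listing).  The knowledge base is a binary tree:
    # internal nodes hold yes/no questions, leaves hold animal guesses.
    # When the program guesses wrong it asks the player for a distinguishing
    # question and grows the tree, which is the usual "learning" twist.

    def ask(prompt):
        return input(prompt + " (y/n) ").strip().lower().startswith("y")

    # A node is either {"animal": name} or
    # {"question": text, "yes": node, "no": node}.
    tree = {"animal": "a bird"}

    def play(node):
        if "animal" in node:
            if ask("Is it " + node["animal"] + "?"):
                print("I got it!")
            else:
                new_animal = input("I give up. What was it? ")
                question = input(
                    "Give me a yes/no question that distinguishes "
                    + new_animal + " from " + node["animal"] + ": ")
                answer = ask("For " + new_animal + ", what is the answer?")
                old = dict(node)            # copy the old leaf
                node.clear()                # turn the leaf into a question node
                node["question"] = question
                node["yes"] = {"animal": new_animal} if answer else old
                node["no"] = old if answer else {"animal": new_animal}
        else:
            play(node["yes"] if ask(node["question"]) else node["no"])

    while True:
        print("Think of an animal.")
        play(tree)
        if not ask("Play again?"):
            break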

Thanks,

David C. Brauer
MilNet: dbrauer@NOSC.Mil

------------------------------

Date: Thu 9 Jul 87 08:51:29-PDT
From: BEASLEY@EDWARDS-2060.ARPA
Subject: clarification

I would like to clarify my request for information about XLISP. The particular
version I have is XLISP Experimental Object-oriented Language Version 1.6
by David M. Betz, for use on the IBM PC and others. Any information would be
greatly appreciated. By the way, I have the article from BYTE magazine;
the examples didn't work!
Please send the info to beasley@edwards-2060.arpa.

joe

------------------------------

Date: Fri, 10 Jul 87 10:20:10 SET
From: "Adlassnig, Peter" <ADLASSNI%AWIIMC11.BITNET@wiscvm.wisc.edu>
Subject: Monkey and Bananas Benchmark

RE: Inquiry for Production Systems

Since we have finished our PAMELA (PAttern Matching Expertsystem Language),
we are interested in the Monkey and Bananas benchmark (NASA MEMO
FM7(86-51)). How can we obtain the source code?

In addition, we would be interested in YAPS (Yet Another Production
System) running under VAX/UNIX. Is there any information available?

I have no direct access to the ARPANET. Please send replies to my
friend's email address:
adlassni at awiimc11.bitnet

my postal address is: Franz Barachini
ALCATEL-ELIN Research Center
Floridusgasse 50
A-1210 Vienna
Austria

------------------------------

Date: 10 Jul 87 13:46:58 GMT
From: dhj@aegir.dmt.oz (Dennis Jarvis)
Subject: conference on production planning and control

In a (not so) recent posting to comp.ai.digest, it was announced that a
conference entitled "Expert Systems and the Leading Edge in Production
Planning and Control" would be held from May 10-13 in Charleston, South
Carolina. I would like to obtain a copy of the proceedings of that
conference - any assistance in this regard would be greatly appreciated.

________________________________________________________________________
Dennis Jarvis, CSIRO, PO Box 4, Woodville, S.A. 5011, Australia.

UUCP: {decvax,pesnta,vax135}!mulga!aegir.dmt.oz!dhj
PHONE: +61 8 268 0156 ARPA: dhj%aegir.dmt.oz!dhj@seismo.arpa
CSNET: dhj@aegir.dmt.oz

------------------------------

Date: Fri, 10 Jul 87 11:04:59 +0200
From: mcvax!idefix.laas.fr!helder@seismo.CSS.GOV (Helder Araujo)
Subject: Neural Networks


I am just starting work on a vision system, for which I am
considering several different architectures. I am interested in studying the
use of a neural network in such a system. My problem is that I lack
information on neural networks. I would be grateful if anyone could
suggest a bibliography and references on neural networks. As I am not
a regular reader of AIList, I would prefer to receive this information
directly. My address:

mcvax!inria!lasso!magnon!helder

I will select the information and put it on AIlist.

Helder Araujo
LAAS
mcvax!inria!lasso!magnon!helder
7, ave. du Colonel-Roche
31077 Toulouse
FRANCE


[I have forwarded this to the neuron%ti-csl.csnet@relay.cs.net
neural-network list. -- KIL]

------------------------------

Date: 10 Jul 87 14:45:41 GMT
From: uwmcsd1!leah!itsgw!nysernic!b.nyser.net!weltyc@unix.macc.wisc.edu (Christopher A. Welty)
Subject: Looking for GLISP


I am looking for some references to GLISP, written
by Gordon Novak at Stanford. I don't actually need GLISP,
but I would like to see the papers or any other references. Any help
would be much appreciated. If there is enough interest, I'll post to the
list.

Christopher Welty - Asst. Director, RPI CS Labs
weltyc@cs.rpi.edu ...!seismo!rpics!weltyc

------------------------------

Date: Fri, 10 Jul 87 01:22:56 gmt
From: Aaron Sloman <aarons%cvaxa.sussex.ac.uk@Cs.Ucl.AC.UK>
Subject: Real Time expert systems

Hi,

I saw your enquiry about real-time expert systems. A UK firm called
Systems Designers have used our Poplog system to implement a prototype
system called RESCU, which can control production of detergent at ICI.

This was one of the UK Alvey Programme's "community club" projects,
i.e. a number of industrial firms potentially able to benefit from
the development helped to fund the prototype demonstration system.

They were so pleased with the result that the development work
is continuing.

They used Poplog on a VAX-730 connected to a variety of monitoring
devices, displays, etc.

The system was written in POP-11, extended by a task-specific rule
language for which they implemented an incremental compiler using
the POP-11 compiler-building tools.
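
To give a flavour of the general approach (this is NOT the RESCU code, and
it is sketched in Python rather than POP-11, with invented rules): a
task-specific rule language embedded in a host language amounts to
registering condition/action pairs and forward-chaining over a set of
plant readings until nothing more fires.

    # Toy sketch of a rule language embedded in a host language.  Not the
    # RESCU implementation; just the general shape of a forward-chaining
    # monitor over a dictionary of plant readings and derived facts.

    rules = []

    def rule(condition):
        """Register a rule: when condition(facts) is true, run the action,
        which may return a new (key, value) fact to assert."""
        def register(action):
            rules.append((condition, action))
            return action
        return register

    @rule(lambda f: f.get("temperature", 0) > 90)
    def high_temp_alarm(facts):
        return ("alarm", "temperature high")

    @rule(lambda f: f.get("alarm") and not f.get("operator_notified"))
    def notify_operator(facts):
        print("NOTIFY OPERATOR:", facts["alarm"])
        return ("operator_notified", True)

    def run(facts):
        """Forward-chain until no rule adds a new fact."""
        changed = True
        while changed:
            changed = False
            for condition, action in rules:
                if condition(facts):
                    new_fact = action(facts)
                    if new_fact and facts.get(new_fact[0]) != new_fact[1]:
                        facts[new_fact[0]] = new_fact[1]
                        changed = True
        return facts

    run({"temperature": 95})    # fires both rules in turn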

There have been various relatively short reports on RESCU in newspapers, etc.,
as well as conference presentations, but I have not seen a full write-up.

If you want to know more about RESCU write to:
Mike Dulieu,
Systems Designers Plc,
Pembroke House,
Pembroke Broadway
Camberley, Surrey, GU15 3XD
England
Phone +44 276 686200

I hope this information is of some use.

Best wishes
Aaron Sloman,
U of Sussex, School of Cognitive Sciences, Brighton, BN1 9QN, England
UUCP: ...mcvax!ukc!cvaxa!aarons
ARPANET : aarons%uk.ac.sussex.cvaxa@cs.ucl.ac.uk
JANET aarons@cvaxa.sussex.ac.uk

PS
Robin Popplestone at the University of Massachusetts at Amherst
(pop@edu.umass.cs) is taking over academic distribution of Poplog in the
USA. He may have some information about RESCU. He'll be at the Amherst
and SUN stands at the AAAI conference.

------------------------------

Date: 9 Jul 87 03:10:00 GMT
From: johnson@p.cs.uiuc.edu
Subject: Re: Software Reuse (short title)


Object-oriented programming languages like Smalltalk provide a great
deal of software reuse. There seem to be several reasons for this.
One is that late-bound procedure calls (i.e., message sending)
provide polymorphism, so it is easier to write generic algorithms.
Late binding encourages the use of abstract interfaces, since the
interface to an object is the set of messages it accepts. Another
reason is that class inheritance lets the programmer take some code
that is almost right and adapt it without destroying the original,
i.e. it permits "programming by difference". These two features
combine to encourage the creation of "application frameworks" or
"application toolkits", which are sets of objects and, more importantly,
interfaces that let the application developer quickly build an application
by mixing and matching objects from existing classes.
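
Roughly, and sketching in Python rather than Smalltalk with made-up classes:
a generic routine works on anything that answers the right messages, and a
subclass changes only what differs while reusing the rest.

    # Illustrative sketch of late binding plus programming by difference.

    class Shape:
        """Abstract interface: anything that answers area() works below."""
        def area(self):
            raise NotImplementedError

    class Rectangle(Shape):
        def __init__(self, w, h):
            self.w, self.h = w, h
        def area(self):
            return self.w * self.h

    # "Programming by difference": a Square is a Rectangle with one
    # constraint, so we override only the constructor and reuse area().
    class Square(Rectangle):
        def __init__(self, side):
            super().__init__(side, side)

    def total_area(shapes):
        """Generic algorithm: works for any mix of objects that answer area()."""
        return sum(s.area() for s in shapes)

    print(total_area([Rectangle(2, 3), Square(4)]))   # 22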

There are a number of ways that an abstract algorithm can be expressed
in these languages. An abstract sort or summation algorithm can be
built just using a polymorphic procedure. Abstract "process all" and
reduction algorithms are provided by inheritance in the Collection
class hierarchy of Smalltalk, and a toolkit can be used to describe
the abstract design of a browser or editor from a set of abstract
data types, a display manager, and a dialog control component
(i.e. the Model/View/Controller system).
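
The Collection idea, again sketched in Python with invented classes: the
reduction is written once against the enumeration interface, and concrete
collections only say how to enumerate themselves.

    # Sketch of an abstract algorithm provided by inheritance.  The
    # reduction lives in the superclass; subclasses supply enumeration.

    class Collection:
        def each(self):
            raise NotImplementedError       # subclasses supply enumeration
        def inject(self, seed, op):
            """Generic reduction, Smalltalk's inject:into: in spirit."""
            acc = seed
            for x in self.each():
                acc = op(acc, x)
            return acc

    class Interval(Collection):
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def each(self):
            return range(self.lo, self.hi + 1)

    print(Interval(1, 5).inject(0, lambda a, b: a + b))   # 15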

The Smalltalk programming environment also provides tools to help
the user find code and figure out what it does. While these tools
(and the language) could stand some improvement, they already provide
a lot of what is needed for code reuse. And they don't use A.I.!

------------------------------

Date: Fri, 10 Jul 87 07:53:43 PDT
From: George Cross <cross%cs1.wsu.edu@RELAY.CS.NET>
Subject: Re: Liability in Expert Systems


Hi,
I don't know about any pending cases, but readers interested in this subject
should check the article by Christopher J. Gill, "Medical Expert Systems:
Grappling with Issues of Liability," High Technology Law Journal, Vol. 1,
No. 2, pp. 483-520, Fall 1986.
An important legal issue is whether the use of a medical expert system
constitutes a product or a service. If an expert system is a product, strict
liability applies, whereas if it is a service then a negligence standard
applies. Perhaps some lawyer reading Risks or AIList could read this article
and summarize it for us. It is not easy going.

---- George

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
George R. Cross cross@cs1.wsu.edu
Computer Science Department ...!ucbvax!ucdavis!egg-id!ui3!wsucshp!cs1!cross
Washington State University faccross@wsuvm1.BITNET
Pullman, WA 99164-1210 Phone: 509-335-6319 or 509-335-6636

------------------------------

Date: Wed, 8 Jul 87 09:56 EDT
From: DON%atc.bendix.com@RELAY.CS.NET
Subject: Plausibility reasoning

>From: Jenny <ISCLIMEL%NUSVM.BITNET@wiscvm.wisc.edu>
>Subject: so what about plausible reasoning ?

>As I read articles on plausible reasoning in expert systems, I come to the
>conclusion that experts themselves do not exactly work with numbers as they
>solve problems.

You are correct in several senses. One, the psychology literature has
shown time and time again that human belief revision does not conform to
Bayesian evidence accumulation (e.g., Edwards, 1968; Fischhoff &
Beyth-Marom, 1983; Robinson & Hastie, 1985; Schum, Du Charme, & DePitts,
1973; Slovic & Lichtenstein, 1971). Two, it does not appear that
humans literally use any of these methods.
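
For concreteness, Bayesian evidence accumulation (the normative standard
these studies compare people against) amounts to something like the
following sketch; the hypotheses and numbers are made up.

    # Sketch of sequential Bayesian belief revision (illustrative numbers).
    # Start with a prior over hypotheses and fold in each piece of evidence
    # by multiplying in its likelihood and renormalizing.

    def update(prior, likelihoods):
        """prior, likelihoods: dicts hypothesis -> probability / P(e | h)."""
        posterior = {h: prior[h] * likelihoods[h] for h in prior}
        z = sum(posterior.values())
        return {h: p / z for h, p in posterior.items()}

    belief = {"disease_A": 0.5, "disease_B": 0.5}           # prior
    for evidence in [{"disease_A": 0.8, "disease_B": 0.3},  # P(symptom_1 | h)
                     {"disease_A": 0.6, "disease_B": 0.9}]: # P(symptom_2 | h)
        belief = update(belief, evidence)

    print(belief)   # evidence shifts belief toward disease_A, then partly back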

However, the humans do appear to be weighing alternatives. Although,
for a period, it may seem that the humans are performing sequential
hypothesis testing, for stochastic domains with non-trivial uncertainty,
humans gather support for a large set of hypotheses at the same time.
They may appear to only gather support for their "favorite"; however, if
asked for an ordering over the alternatives or if asked how much they
believe the alternatives, it is obvious that they have allowed the
evidence to change their beliefs about the non-favorite hypotheses
(e.g., Robinson & Hastie, 1985).

The question becomes, "what are they doing?" For the sake of argument,
let's take your assertion and say they are not explicitly manipulating
numbers -- it does seem absurd that the automobile mechanic who can't
add simple integers without a calculator could possibly perform the
complex aggregations necessary to use numbers.

Another possibility is that they are performing a type of non-monotonic
logic with the choice of assumptions and generation and testing of
possible worlds. This possibility suggests that, if the human is not
using numbers at any level, the human's choice of one assumption over
another uses a simple set of context-sensitive rules. The only time the
human should change assumptions (generate an alternative path or
possible world) is if the current assumptions are defeated or if some
magical attentional process causes the human to arbitrarily try another
path. When choosing another path, there should be a fixed set of
rules guiding the choice of alternative -- there can be no idea of
"this looks a little stronger than that" because such comparisons
require a comparison metric which is not built into non-monotonic
logics.
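
A toy illustration of the defeat-driven behavior I mean (not a full
possible-worlds machinery; the rule and facts are invented):

    # Toy sketch of non-monotonic assumption handling: a default conclusion
    # is kept until evidence defeats it, then it is withdrawn.  There are no
    # "a little stronger than that" comparisons anywhere, only defeat.

    def conclusions(facts):
        concl = set(facts)
        # Default rule: assume a bird flies unless something defeats it.
        if "bird" in concl and "penguin" not in concl:
            concl.add("flies")
        return concl

    print(conclusions({"bird"}))              # {'bird', 'flies'}
    print(conclusions({"bird", "penguin"}))   # adding evidence retracts 'flies'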

The psychological research on human search strategies (especially for
games such as chess) suggests that humans often abandon one search path
to test another which looks like it might be as strong or stronger and
then return to the original path. This return to the original path
leads to a rejection of the hypothesis that humans maintain a set of
assumptions until evidence refutes those assumptions. By my previous
argument, then, if non-monotonic logics model human decision making, the
humans must be choosing to change path generation based on an
attentional mechanism. If numbers are not involved, then the
attentional mechanism is probably rule-driven.

Of course, I've laid out a straw man. I've said it's either numbers
or rules; however, there are probably many other possibilities.
The most likely possibility is an analog process something akin to
comparisons of weights. If we were to model this process in a computer,
we would use numbers; so, we're back to numbers. The trouble with
just using numbers, of course, is determining how to combine them
under different circumstances and how to interpret them. Plausibility
reasoning has been used because it, at least, suggests methods for
both of these processes. Something, even an approximation, which
has validity at some level, is better than nothing.
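
For concreteness, this is the kind of aggregation rule such a scheme
supplies; the following sketch shows the MYCIN-style combination of two
certainty factors for the same hypothesis, with made-up values.

    # Sketch of the MYCIN-style certainty-factor combination rule, the kind
    # of aggregation formula a plausibility scheme provides.

    def combine_cf(cf1, cf2):
        """Combine two certainty factors in [-1, 1] for the same hypothesis."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 <= 0 and cf2 <= 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two independent rules each lend moderate support to the same conclusion:
    print(combine_cf(0.6, 0.4))    # 0.76: support accumulates short of certainty
    print(combine_cf(0.6, -0.4))   # 0.33...: conflicting evidence partly cancels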

Rather than turn this into a thesis, let's go on to your next point.

>And many of them are not willing to commit themselves into
>specifying a figure to signify their belief in a rule.

Hmm, this sounds like something from Buchanan and Shortliffe. Let's
think about the implications of this argument. You're saying that if
humans find it difficult to generate numbers to represent their degrees
of belief, then numbers must be ineffective. Perhaps, even at a
higher level, if humans find some piece of knowledge or knowledge
artifact difficult to specify, then it probably is ineffective.
What evidence do we have for these claims? What are the implications
of these claims? From a personal standpoint, I find any knowledge,
beyond the trivial, is difficult to specify in some external formalism
(including writing, rules, and probabilities). It seems unlikely
that we will ever generate external formalisms which allow painless
knowledge transfer. Does that imply that knowledge transfer is
hopeless? Let's hope not, because that is the modus operandi of the
human species. Granted, it will not be perfect, it will be painful,
and it will take time, but does that imply that it is worthless?

We "know" that human experts have knowledge which is effective.
There is growing evidence that purely logical formalisms for
representing this knowledge will not work for all problem domains
due to the stochastic nature of the domains or the incomplete
understanding of the domain. Does this mean that automated problem
solving must be limited to non-stochastic domains in which there
is a full and complete understanding of the causal relations and
elements?

I fear that I have left the primary argument which I wanted to use in
response to your statement. I looked at statements such as these and
asked myself whether "comfort" was a legitimate metric for determining
the effectiveness of knowledge. This question suggested an experiment
in which different sets of experts were asked to generate the
comfortable MYCIN confidence factors, the uncomfortable but definable
conditional and a priori probabilities needed for Bayes' theorem, and
the interesting, but perhaps not well-defined, probability bounds for
the typical Dempster-Shafer formulation.

I ran this experiment in which the experts were matched for knowledge in
the domain. Each expert was asked to provide the parameters needed for
only one of the plausibility-reasoning formalisms. The results were
that, at a superficial level, humans can provide better MYCIN and
Dempster-Shafer parameters than Bayesian numbers. However, when
considering how these numbers are used and how errors in the numbers
propagate through repeated applications of the aggregation formulae, the
Bayesian parameters led to more effective automated decision making than
the MYCIN parameters. The performance of the Dempster-Shafer parameters
was not significantly better or worse than either system in this test.
(This research is documented in two papers -- ask me for references.)
The conclusion: the domain expert's comfort is not a legitimate
determinant of knowledge effectiveness.

>If one obtains two conclusions with numbers indicating some significance,
>say 75 % and 80 %, can one say that the conclusion with 80% significance is
>the correct conclusion and ignore the other one ?

There is a fundamental problem here. If you are referring to
percentages over mutually exclusive conclusions, then the numbers cannot
add up to more than 100. You are
correct in that a decision theory for plausibility reasoning must
take into account the accuracy of the parameters, and I believe that
some researchers have not considered this problem; however, most
plausibility reasoning researchers consider the decision theory to
be an important component which must be given strict attention.

>These numbers do not seem to mean much since they are just beliefs or
>probabilities.

I alluded to this problem earlier. Actually, if they are probabilities,
they mean a lot. Probabilities have clear operational and theoretical
definitions. Some, for example Shafer (1981), have suggested that
the definition of probabilities can be extended to better account
for the subjective nature of the probabilities used in most decision
support systems. The real problem is with the MYCIN-style confidence
factors. Although Heckerman (1986) has developed a formal interpretation
of confidence factors, the interpretation is ad hoc and it seems
difficult to imagine that domain experts use this interpretation.
The meaningfulness of the numbers is an important criterion for
determining the successful application of the numbers and is one
of the strongest arguments for using probabilities and perhaps for
using Bayes' theorem.

Donald H. Mitchell Don@atc.bendix.com
Bendix Aero. Tech. Ctr. Don%atc.bendix.com@relay.cs.net
9140 Old Annapolis Rd. (301)964-4156
Columbia, MD 21045

------------------------------

End of AIList Digest
********************
