AIList Digest Volume 5 Issue 063

AIList Digest           Wednesday, 4 Mar 1987      Volume 5 : Issue 63 

Today's Topics:
Administrivia - Moderating Best Lispm/WorkStation Discussion &
Timing of the INtelliGenT Discussion,
Expert systems - Definition

----------------------------------------------------------------------

Date: Tue, 3 Mar 87 09:29:45 PST
From: TAYLOR%PLU@ames-io.ARPA
Subject: Moderating Best Lispm/WorkStation(again)

I am posting this again(?) because I am not sure it was successful the
first time (it came back undelivered).

It has been suggested that I moderate, summarize, and then post a summary
of this discussion, instead of dumping it on Ken Laws (AIList).

I agree.

Therefore, please e-mail all responses, questions, flames, etc. to me.


Thanks - Will


Will Taylor - Sterling Software, MS 244-17,
NASA-Ames Research Center, Moffett Field, CA 94035
arpanet: taylor@ames-pluto.ARPA
usenet: ..!ames!plu.decnet!taylor
phone : (415)694-6525


[It wasn't my suggestion, but sounds good to me. Thanks. -- KIL]

------------------------------

Date: 3 Mar 87 14:56:47 GMT
From: allegra!ether@ucbvax.Berkeley.EDU (David Etherington)
Subject: Re: What is this "INtelliGenT"?


Please, can we skip recycling the discussion of what AI *is*?
If people really must post on the subject, perhaps they could
read the last few months' postings first.

Deja vu!

------------------------------

Date: 2 Mar 1987 1415-EST
From: Bruce Krulwich <KRULWICH@C.CS.CMU.EDU>
Subject: Expert systems


There seems to be a trend nowadays to use the phrase "expert systems" to
mean rule-based systems, not any system that mimics expert behavior.
While I'm not sure I like the terminology, I think that it's beneficial
to have a separate category for rule-based-systems work, since that's
often very different from other A.I. work (especially in describing
research work). This opinion may, however, be biased by my opinions of
current work in AI and expert systems. What do others think??


Bruce Krulwich If you're right 95% of the time,
why worry about the other 3% ??
arpa: krulwich@c.cs.cmu.edu
bitnet: krulwich@c.cs.cmu.edu Any former B-CC'ers out there??
uucp: ... !seismo!krulwich@c.cs.cmu.edu

------------------------------

Date: 2 Mar 87 03:06:55 GMT
From: rpics!yerazuws@seismo.css.gov (Crah)
Subject: Re: dear abby....

In article <178@arcsun.UUCP>, roy@arcsun.UUCP (Roy Masrani) writes:
>
> Dear Abby. My friends are shunning me because i think that to call
> a program an "expert system" it must be able to explain its decisions.
> "The system must be able to show its line of reasoning", I cry. They
> say "Forget it, Roy... an expert system need only make decisions that
> equal human experts. An explanation facility is optional". Who's
> right?

While you're developing an expert system, you have to know not just
that it inferred something incorrectly, but WHY it inferred it incorrectly.
Looking through 4,000 rules trying to find the one with a typo is
no fun, no fun at all.
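
One way to make that hunt tractable is to have the engine record, for
every derived fact, which rule produced it and from what. The sketch
below is purely illustrative (modern Python; the rules and facts are
invented, not from any real system), but it shows the idea of tracing a
wrong conclusion straight back to the offending rule:

```python
# Minimal forward-chaining sketch: every derived fact records the rule
# that produced it and the antecedents that rule used, so a bad
# conclusion can be traced back instead of eyeballing thousands of rules.
# The rules and facts here are invented for illustration.

RULES = [
    ("R1", {"fever", "cough"}, "flu"),
    ("R2", {"flu"}, "rest"),
    ("R3", {"cough"}, "allergy"),   # pretend this is the rule with the typo
]

def forward_chain(facts):
    support = {}                    # derived fact -> (rule name, antecedents)
    changed = True
    while changed:
        changed = False
        for name, conds, concl in RULES:
            if conds <= facts and concl not in facts:
                facts.add(concl)
                support[concl] = (name, conds)
                changed = True
    return facts, support

def why(fact, support, depth=0):
    """Return the derivation chain for a conclusion, one line per step."""
    if fact not in support:
        return ["  " * depth + f"{fact} (given)"]
    name, conds = support[fact]
    lines = ["  " * depth + f"{fact} by {name}"]
    for c in sorted(conds):
        lines.extend(why(c, support, depth + 1))
    return lines

facts, support = forward_chain({"fever", "cough"})
print("\n".join(why("rest", support)))
# rest by R2
#   flu by R1
#     cough (given)
#     fever (given)
```

If "rest" turns out to be wrong, the trace points directly at R2 or R1
rather than at the whole rule base.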

Secondly, once you and your expert have convinced yourselves that the
system is right, you must now convince your first set of users that the
system is right, too. These users may not be as expert as *your* expert,
but they have some knowledge of the subject. Perhaps a few of them are even
more expert than your expert in some narrow subfield.

It behooves you to gain acceptance and knowledge from this group, and
if they perceive that the expert system is a "black box", they will have
no encouragement to assist in the final tweaking and debugging. To be
useful, your system must not only be correct. It must be accepted and
used!

Personal experience- People, including the expert whose knowledge has been
captured, don't like (maybe don't trust?) a black-box expert system, if
they can't ask it why it gave the answer it did.

-Bill Yerazunis
"...these guys had "Thugs 'R' Us" stencilled all over them"

------------------------------

Date: 2 Mar 87 15:58:29 GMT
From: cbatt!osu-eddie!tanner@ucbvax.Berkeley.EDU (Mike Tanner)
Subject: Re: dear abby....


Leaving aside the utility of explanations in developing a system and
in convincing users it is behaving properly there is this:

Experts are capable of explaining their reasoning, justifying
conclusions, etc. Hypothesis: they are able to do this partly
because of the way their knowledge is organized and used in
problem-solving.

Therefore, if your expert system is incapable of explaining itself you
probably haven't got the knowledge organization and problem solving
strategy right. (Granted, it's only a hypothesis. It seems right to
me. I'm in the process of working on a PhD dissertation on how
knowledge organization and problem-solving strategy can help produce
good explanations. Doesn't exactly support the hypothesis, but it
should clarify it a bit.)

This assumes you're interested in how knowledge-based problem-solving
works. If all you want is an expert system, ie, a system which gets
right answers, then you're back to utility arguments for explanation.
(Though, I don't think you'll be successful at getting good
performance without this understanding.)

-- mike

ARPA: tanner@ohio-state.arpa
UUCP: ...cbosgd!osu-eddie!tanner

------------------------------

Date: 2 Mar 87 15:36:17 GMT
From: ulysses!sfmag!sfsup!saal@ucbvax.Berkeley.EDU
Subject: Re: dear abby....

In article <178@arcsun.UUCP> roy@arcsun.UUCP (Roy Masrani) writes:
>
>Dear Abby. My friends are shunning me because i think that to call
>a program an "expert system" it must be able to explain its decisions.
>"The system must be able to show its line of reasoning", I cry. They
>say "Forget it, Roy... an expert system need only make decisions that
>equal human experts. An explanation facility is optional". Who's
>right?
>Signed,
>Un*justifiably* Compromised
>Roy Masrani, Alberta Research Council

It all depends. During development it is absolutely necessary
for the system to give its reasoning, if only as a useful
debugging tool. (Is the system using the correct logic to get to
the decision?) Once it is "in production" (the field) it may not
be as important to give an explanation every time. This is
particularly the case when the expert system is used to help do
some of the more mundane tasks on a very frequent basis. There
are two reasons for this: (1) the user may be able to agree
intuitively after deriving the answer - the machine has just
helped speed the process; or (2) if a production ES has been
converted to a compiled language, the code to express the
rationale may be removed to speed up run time.
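
Point (2) amounts to treating explanation as an optional layer. A toy
Python sketch of the idea (the readings, thresholds, and mode flag are
all invented for illustration): in development mode every step records a
rationale, while production mode skips the bookkeeping entirely.

```python
# Toy illustration of explanation as an optional layer: the same decision
# procedure runs with or without rationale recording. Everything here
# (readings, thresholds, category names) is invented for the sketch.

def classify(reading, explain=False):
    trace = []
    def note(msg):
        if explain:                 # production mode pays no trace cost
            trace.append(msg)

    if reading > 100:
        note(f"reading {reading} > 100, so: overload")
        verdict = "overload"
    elif reading > 50:
        note(f"50 < reading {reading} <= 100, so: warning")
        verdict = "warning"
    else:
        note(f"reading {reading} <= 50, so: normal")
        verdict = "normal"
    return (verdict, trace) if explain else (verdict, None)

# Development: same answer, plus the line of reasoning.
print(classify(72, explain=True))
# Production: answer only, no per-step overhead.
print(classify(72))
```

Both calls reach the same verdict; only the development call can say why.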

Sam Saal

------------------------------

Date: 2 Mar 87 20:24:38 GMT
From: tektronix!sequent!mntgfx!franka@ucbvax.Berkeley.EDU (Frank A.
Adrian)
Subject: Re: dear abby....

In article <178@arcsun.UUCP> roy@arcsun.UUCP (Roy Masrani) writes:
>"expert system" ... must be able to explain its decisions.
VS.
>... expert system need only make decisions that equal human experts.
> An explanation facility is optional".


Well, given the level of explanation most human experts give (e.g., "Well,
I did it this way because it felt right," or "Gosh, I don't know, it
seemed like a good idea at the time."), I tend to agree with number two.
In fact, has anyone done an expert system which automatically spits out
one of the above phrases (or any number of similar phrases) as an
"explanation"? Could bring the damn things closer to Turing capability
as perceived by the user... "What the hell are YOU asking for," might
get the proper amount of arrogance I've seen in most experts (:-).

Frank Adrian
Mentor Graphics, Inc.

------------------------------

Date: 2 Mar 87 20:37:59 GMT
From: ihnp4!alberta!calgary!arcsun!rob@ucbvax.Berkeley.EDU (Rob
Aitken)
Subject: Re: dear abby....

In article <178@arcsun.UUCP>, roy@arcsun.UUCP (Roy Masrani) writes:
>
> Dear Abby. My friends are shunning me because i think that to call
> a program an "expert system" it must be able to explain its decisions.
> "The system must be able to show its line of reasoning", I cry. They
> say "Forget it, Roy... an expert system need only make decisions that
> equal human experts. An explanation facility is optional". Who's
> right?
>
> Signed,
>
> Un*justifiably* Compromised
>
Dear Mr. Compromised:

You should ask yourself whether you want a complete, intelligible
explanation facility, or just the basics (i.e. "The answer is X because
Rule Y says so"). If it is the latter, your friends are wrong and you
should tell them so. If the former, your friends are probably programmers
and lazy ones at that. You should find new friends.

Abby.
> Roy Masrani, Alberta Research Council
> Roy Masrani, Alberta Research Council
P.S. You don't need to specifically include a .signature

------------------------------

Date: Tue, 3 Mar 87 13:29:38 EST
From: Bruce Nevin <bnevin@cch.bbn.com>
Subject: RE: dear Abby


We humans do not usually backtrack over a line of reasoning that led to
a conclusion. Instead, we reconstruct what such a line of reasoning
might plausibly be. It's called rationalization.


How wonderful it is to be rational beings, for we can make
plausible whatever conclusions we cherish.
--Ben Franklin (paraphrase from memory)

As the ordinary usage of the term suggests, rationalization can and
often does lead us astray, but that is a critique of the quality of the
particular line of reasoning that an individual might reconstruct to
rationalize or `make rational' a given conclusion. We reach conclusions
by means that are not guaranteed. We need valid rationalization to
check them out.

Peirce made the point that mathematical reasoning is a tidy pyramidal
structure erected after the fact, and that it is better both for
presentation and for pedagogy to show the path actually followed, even
though it appears less elegant. Few have done this.

Does this mean Peirce would advocate expert systems explaining by
retracing? I think not, because he explicitly recognized the importance
of intuitive hunches in mathematical and logical work. The proof is
merely to validate conclusions reached by a less respectable path--to
rationalize them.

Since our expert systems cannot emulate hunches, a useful approach is to
check out conclusions human users have a hunch about. Can they validly
be rationalized? Isn't this in fact the use to which many users prefer
to put expert systems like Palladian's financial consultant?
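
Checking a user's hunch is essentially backward chaining: start from the
proposed conclusion and search for a chain of rules that grounds it in
known facts. A small illustrative sketch in Python (the rules and facts
are invented, and in no way Palladian's):

```python
# Sketch of "rationalizing a hunch": instead of deriving conclusions
# forward, start from the user's proposed conclusion and search backward
# for a rule chain that validly supports it from the given facts.
# The rules and facts are invented for illustration.

RULES = {                  # conclusion -> alternative sets of antecedents
    "invest": [{"undervalued", "stable"}],
    "undervalued": [{"low-pe"}],
}

def rationalize(goal, givens, rules=RULES):
    """Return a justification tree for goal, or None if it can't be supported."""
    if goal in givens:
        return goal                       # grounded in a known fact
    for conds in rules.get(goal, []):
        subproofs = [rationalize(c, givens, rules) for c in sorted(conds)]
        if all(p is not None for p in subproofs):
            return {goal: subproofs}      # every antecedent was justified
    return None                           # the hunch can't be rationalized

# A supported hunch yields its justification chain...
print(rationalize("invest", {"low-pe", "stable"}))
# ...an unsupported one yields None.
print(rationalize("invest", {"low-pe"}))
```

The answer to the hunch is not just yes or no: when it is yes, the
returned tree is exactly the rationalization the user was after.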

What is an expert?
Some say: an expert is someone who knows a great deal about his
subject.
I prefer: an expert is someone who knows some of the worst
mistakes that can be made in his subject, and how to avoid them.
--Werner Heisenberg

Bruce Nevin
bn@cch.bbn.com

(This is my own personal communication, and in no way expresses or
implies anything about the opinions of my employer, its clients, etc.)

------------------------------

Date: 3 Mar 87 17:54:03 GMT
From: trwrb!aero!coffee@ucbvax.Berkeley.EDU (Peter C. Coffee)
Subject: Re: dear abby....

In article <3269@osu-eddie.UUCP> tanner@osu-eddie.UUCP (Mike Tanner) writes:

>If all you want is an expert system, ie, a system which gets
>right answers, then you're back to utility arguments for explanation.

I agree with everything else Mike said about this issue, but it seems to
me that the label "expert system" _should_ mean something _more_ than "a
system that gets right answers." We've had useful programs, implicitly
applying "expert" knowledge, for a long time: the new label should reflect
new capabilities. Hayes-Roth et alia, in _Building_Expert_Systems_, say the
following:

"...[E]xpert systems differ from the broad class of AI tasks in several
respects...they employ self-knowledge to reason about their own inference
processes and provide explanations or justifications for conclusions
reached."

This is one of the milestone texts in the field, and definitions are useful
things: it seems to me that disputes over whether explanation is "needed"
before you can call it an expert system are missing the point. We _have_ what
seems to me to be a mainstream definition for the term; if we want
to talk about a system that _doesn't_ do explanation, can't we just call
it a computer program (or a parser, or a pattern recognizer, or whatever)
instead of trying to stretch the popular label to fit it?

Constructively, I hope, Peter C.

------------------------------

End of AIList Digest
********************
