AIList Digest             Monday, 2 Mar 1987       Volume 5 : Issue 62 

Today's Topics:
Query - Expert System Definition,
Discussion - Expert System Definition & Intelligence &
Logic in AI & A Defence of Vulgar Tongue,
News - AI Software Revenues,
Review - Spang Robinson Report 3.2

----------------------------------------------------------------------

Date: 28 Feb 87 00:15:36 GMT
From: ihnp4!alberta!calgary!arcsun!roy@ucbvax.Berkeley.EDU (Roy Masrani)
Subject: dear abby....


Dear Abby: My friends are shunning me because I think that to call a
program an "expert system" it must be able to explain its decisions.
"The system must be able to show its line of reasoning," I cry. They
say, "Forget it, Roy... an expert system need only make decisions that
equal those of human experts. An explanation facility is optional."
Who's right?

Signed,

Un*justifiably* Compromised

Roy Masrani, Alberta Research Council
3rd Floor, 6815 8 Street N.E.
Calgary, Alberta CANADA T2E 7H7
(403) 297-2676

UUCP: ...!{ubc-vision, alberta}!calgary!arcsun!roy
CSNET: masrani%noah.arc.cdn@ubc.csnet


------------------------------

Date: Sun 1 Mar 87 10:48:43-PST
From: Ken Laws <Laws@SRI-STRIPE.ARPA>
Subject: Definition of Expert System (Re: Dear Abby)

Why must an expert system explain its reasoning? 1) To aid system
building and debugging; 2) to convince users that the reasoning is
correct; and 3) to force conformance to a particular model of
human reasoning.

Reason 1 is hardly a sine qua non. It is necessary that the line
of reasoning be debuggable, of course, but that can be done with
checkpoints, execution traces, and other debugging tools. Forcing
the system to "explain" its own reasoning adds to the complexity of
the system without directly improving performance. An explanation
capability may reduce the time, effort, and expertise required to
build and maintain or modify the system -- particularly if domain
experts instead of programmers are doing the work -- but the real
issue is what knowledge is encoded and how it is used. We have been
guilty of defining the field by the things that happened to be easy
to implement in a few early programs, just as we sometimes define AI
as that which is easy to do in LISP.
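
  [To make the point about debugging tools concrete, here is a minimal
  editorial sketch, in Python rather than LISP, of a toy forward-chaining
  rule engine whose line of reasoning can be reconstructed from an
  ordinary execution trace, with no user-facing explanation facility at
  all.  The rule and fact names are purely hypothetical.

    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
    log = logging.getLogger("trace")

    # (name, premises, conclusion) -- illustrative rules only
    RULES = [
        ("r1", {"fever", "cough"}, "flu-suspected"),
        ("r2", {"flu-suspected", "short-of-breath"}, "refer-to-doctor"),
    ]

    def forward_chain(facts):
        """Fire rules until quiescence, logging each firing as a trace."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    log.debug("TRACE: %s fired on %s -> %s",
                              name, sorted(premises), conclusion)
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "short-of-breath"}))

  The trace lines serve the system builder; nothing here attempts to
  justify a conclusion to an end user.]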

Reason 2, convincing the user, is a worthy goal and perhaps necessary
in consulting applications, but contains some traps. The real test of
a system is its performance. If adequate (or exceptional) performance
can be documented, many customers will have no interest in what
goes on in the black box. If performance is documentably poor, adding
an explanatory mechanism is just a marketing gimmick: an expert con.
The explanations are really only needed if some of the decisions are
faulty and it is possible to recognize which ones from the explanation.

Further, there are different types of explanation that should be
considered. The traditional form is basically a trace of how a
particular diagnosis was reached. This is only appropriate when
the reasoning is sequential and depends strongly on a few key facts,
the kind of reasoning that humans are able to follow and "desk check".
Reasoning that is strongly parallel, non-deterministic, or dependent
on subtle data distinctions (without linguistic names) is not amenable
to such explanations. This sort of problem often arises in pattern
recognition. In image segmentation, for instance, it is typically
unreasonable (for anyone but a programmer) to ask the system "By what
sequence of operations did you extract this region?" It is reasonable,
however, to ask how the target region differs from each of its neighbors,
and how it might now be extracted easily given that one knows its
distinguishing characteristics. In other words, the system should
answer questions in light of its current knowledge instead of trying
to backtrack to its knowledge at the time it was making decisions.
The system's job is to extract structure from chaos, and explanations
in terms of half-analyzed chaos are not helpful.
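
  [As an editorial illustration of the distinction being drawn -- not
  part of the original message -- the following Python sketch answers
  "how does this region differ from its neighbors?" from the system's
  finished analysis, rather than by replaying the sequence of
  segmentation operations.  Region names, features, and numbers are all
  hypothetical.

    # Explain a region in terms of current knowledge: which measured
    # features distinguish it from each neighboring region.
    REGION_FEATURES = {
        "r1": {"mean_gray": 210, "texture": 0.15, "area": 1200},
        "r2": {"mean_gray": 90,  "texture": 0.60, "area": 800},
        "r3": {"mean_gray": 200, "texture": 0.55, "area": 300},
    }
    NEIGHBORS = {"r1": ["r2", "r3"]}

    def explain_region(region, threshold=0.25):
        """List features on which `region` differs markedly from each neighbor."""
        lines = []
        for other in NEIGHBORS.get(region, []):
            for feat, value in REGION_FEATURES[region].items():
                other_value = REGION_FEATURES[other][feat]
                denom = max(abs(value), abs(other_value), 1e-9)
                if abs(value - other_value) / denom > threshold:
                    lines.append(f"{region} differs from {other} in {feat}: "
                                 f"{value} vs {other_value}")
        return "\n".join(lines)

    print(explain_region("r1"))

  No record of the segmentation history is consulted; the explanation is
  phrased entirely in terms of the system's current description of the
  image.]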

Reason 3, adherence to a particular knowledge engineering methodology,
is really the sticking point. Some would claim that rule-based
reasoning and its attendant explanatory capability is fundamentally
different from other paradigms and perhaps even fundamental to human
reasoning; it therefore deserves a special name ("expert system").
Others would claim that rule-based systems are only one model of
expert reasoning and that the name should apply to any attempt at
a psychologically based or knowledge-based program. A third group,
mostly those selling software, claim performance alone as the criterion.

I believe that explanatory capability, as currently feasible, is a
correlate of the rule-based approach and is not central in theory; it
may, however, be the key ingredient to making a particular application
feasible or marketable. I don't believe that every optimal algorithm
is AI, so I reject the pure performance criterion for expert systems.
As to whether expert systems include only rule-based systems or all
knowledge-based systems, I can't say -- that is a matter of convention
and has to be settled by those in the expert system field.

-- Ken Laws

------------------------------

Date: 24 Feb 87 12:57:37 GMT
From: mcvax!ukc!warwick!gordon@seismo.css.gov (Gordon Joly)
Subject: Re: What is this "INtelliGenT"?

For a working definition of A.I., how about "that which is yet to be
done", or perhaps "that which is yet to be understood"?

Gordon Joly -- {seismo,ucbvax,decvax}!mcvax!ukc!warwick!gordon

------------------------------

Date: Fri, 27 Feb 87 12:46:54 GMT
From: Jerry Harper <mcvax!euroies!jharper@seismo.CSS.GOV>
Reply-to: jharper@euroies.UUCP (Jerry Harper)
Subject: Re: logic in ai


I think some useful distinction can be made between the use of _formalisms_
in AI and the use of logic(s). The function of the latter with respect to a
series of inference rules and a particular domain of discourse is the
characterization of truth and logical consequence. The function of the
former, on my own reading of the AI literature concerned with NLP systems,
seems merely to crystallize certain _intuitions_ a researcher may have
about the description and solution of a particular problem. In some cases
these may conform to a logical calculus, in other cases they merely
appear to do so. This is quite reasonable in a research context such as
AI, provided one accepts that computational tractability and formal
rigour are different objectives served by different methodological demands.
For instance, it would be impossible to build the model theory of
many logics used for semantic investigations of natural language into
a computational system. Yet _doing_ semantics entails the use of
infinitary methodology once the model theory is based on possible
worlds. Reinterpreting a semantic theory computationally is not
equivalent. More fundamentally, it is the usage of the word _logic_
which is at issue. With the plethora of logical calculi it makes
little sense to claim one uses _a lot of logic_ in one's work. Indeed,
if anyone has an uncontentious definition of modern logic, please
forward it.

------------------------------

Date: Thu, 26 Feb 87 14:53 N
From: MFMISTAL%HMARL5.BITNET@wiscvm.wisc.edu
Subject: RE: A defence of vulgar tongue.

Seth Steinberg proposes to use less formal notations in computer science
presentations. I disagree completely!

His argument about clarity is wrong.
Although architects do not use mathematical notations, they do use
a symbolic language (DRAWINGS, or better, the lines that constitute
a drawing) to express their ideas. These drawings, together with a
description in specific "jargon", are necessary for the contractor to
make a proper cost estimate and to make the necessary calculations of
the strength of the construction. So even for architects it is
necessary to use a formal language. I believe a formal language is
useful for communicating ideas in a given domain in CS as well. Since
the basic operations of computers are indeed logical/mathematical ones,
there is no objection to using their symbolic notations.

Computer programs are implementations of the stuff computer science is
made of. Unfortunately, we have to inspect program code to check what the
program is doing. For just that reason, debugging and software maintenance
are expensive. When we can better formalize the "art of programming" we
might come up with programs that are better understood and easier to
maintain. Discussions about program performance might then just as well be
conducted in the formal language of that formalization. I just remembered
that a language like APL is closely related to mathematics, specifically
to matrix algebra. It is probably possible to formally prove (at least to
some extent) the correctness of such a program.
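
  [An editorial sketch of that last remark, in Python rather than APL:
  a whole-array computation can be written next to the algebraic
  identity it is meant to satisfy, which is exactly the kind of
  statement a formal proof would establish.  The matrices and the
  spot-check below are illustrative only.

    def matmul(a, b):
        """Plain matrix product of two lists-of-lists."""
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))]
                for i in range(len(a))]

    def trace(m):
        return sum(m[i][i] for i in range(len(m)))

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]

    # Identity to be proved: trace(A.B) == sum over i,j of A[i][j]*B[j][i]
    lhs = trace(matmul(A, B))
    rhs = sum(A[i][j] * B[j][i]
              for i in range(len(A)) for j in range(len(A[0])))
    assert lhs == rhs          # a test, not a proof, of the identity
    print(lhs, rhs)            # 69 69
  ]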

I look forward to more CS presentations using formal (mathematical and
logical) notations in order to increase the understanding of what is
really meant.

Jan L. Talmon (not a computer scientist)
MFMISTAL@HMARL5.BITNET

------------------------------

Date: Sat, 28 Feb 1987 13:18 CST
From: Leff (Southern Methodist University)
<E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: AI Software Revenues


Artificial intelligence software generated $200 million in revenue in
1986. Expert system tools generated $18.6 million. To put these numbers
in perspective, the total software market is $12.3 billion and CAD/CAE
software is $665 million. Also sold in 1986 were $464 million worth of
robot systems and $100 million worth of vision equipment.

------------------------------

Date: Sat, 28 Feb 1987 13:19 CST
From: Leff (Southern Methodist University)
<E1AR0002%SMUVM1.BITNET@wiscvm.wisc.edu>
Subject: Review - Spang Robinson Report 3.2

Summary of the Spang Robinson Report
February 1987, Vol. 3, No. 2

Development Tools Migrate From Micro to Mainframe

Discussion of the trend for companies selling expert system software for
IBM PCs to port their systems to higher-level machines such as minis and
mainframes, and vice versa.

Some companies are porting major applications to microcomputer tools that
ostensibly offer "less functionality." Examples of this are a paint
manufacturing application ported from KEE to Insight and another port
from Inference's ART to ACORN.
__________________________________________________

Connecting to the Corporate Database

KEE connection provides an interface to the SQL query language. A database
relation maps into a class, with a database attribute mapping into a slot.
Data is downloaded from the database as needed to solve the problem.
Projects to directly integrate the expert system into the database include
Postgres at Berkeley, Probe at Computer Corporation of America and
Starburst at IBM. The prices for development versions of the system
range from $18,000 to $45,000, with delivery versions ranging from $3,000
to $18,750 depending upon the size of the VAX.
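
  [As an editorial sketch of the mapping described above -- not KEE
  connection code -- a relation can be wrapped as a class whose slots
  are the columns, with rows downloaded only when first needed.  The
  table and column names are hypothetical; Python's sqlite3 stands in
  for the corporate database.

    import sqlite3

    class RelationClass:
        """One relation as a class: rows become instances, columns become slots."""

        def __init__(self, conn, table):
            self.conn, self.table = conn, table
            self._rows = None                    # nothing downloaded yet

        def _fetch(self):
            if self._rows is None:               # lazy download on first use
                cur = self.conn.execute(f"SELECT * FROM {self.table}")
                cols = [d[0] for d in cur.description]
                self._rows = [dict(zip(cols, r)) for r in cur.fetchall()]
            return self._rows

        def instances(self):
            return self._fetch()                 # each dict maps slot -> value

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE paints (name TEXT, viscosity REAL)")
    conn.execute("INSERT INTO paints VALUES ('gloss-white', 1.2)")
    print(RelationClass(conn, "paints").instances())
  ]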

__________________________________________________

Shorts

Hitachi, IBM Japan and Carnegie Mellon are developing a multilingual machine
translation system. They have already developed a system for analyzing
natural-language utility specifications.

Fuji Electric has developed an expert system to control turbines for a thermal
power generation system.

Also, thermostats are selected and configured for Tohoku Oil.

Fanac plans to build an intelligent robot integrating three-dimensional
vision and touch sensors.

Matsushita is developing a LISP machine with over 50 times the power of
a VAX 8600.

Expertelligence is selling an application builder for the Macintosh for
non-programming users.

Applied Expert Systems (APEX) has laid off a number of employees.
They are selling a system to help financial institutions expand client
relationships.

Digitalk has announced a new release of Smalltalk/V. Extensions provide
EGA capabilities, multiprocessing, DOS call features and music.

Teknowledge reports revenues of $10,867,7000 for the latter half of
1986.

Symbolics will be financing Whitney/Demos, a Los Angeles-based developer
of computer graphics and animation technology. Symbolics will be getting
marketing rights to various in-house programs of Whitney/Demos and will
be providing them with various graphics workstations.

Europeans spent $200 million on expert system development. Ovum sells
a complete report on European expert system development for $495.00.

Halbrecht Associates predicts a great deal of senior and mid-level turnover
of AI professionals.

___________________________________________________________________
Review of the Sixth International Workshop on Expert Systems and
Their Applications (Proceedings).

------------------------------

End of AIList Digest
********************
