AIList Digest           Wednesday, 6 May 1987     Volume 5 : Issue 110 

Today's Topics:
Queries - Common Lisp Books & CAD Document Scanning &
CSG --> Octree Spatial Representation Translation & DataFlow &
Expert Systems for Networking & Extracting Knowledge From Databases,
AI Tools - OPS5 Addresses & Kyoto Common Lisp Addendum,
Application - Grammar Checkers

----------------------------------------------------------------------

Date: 4 May 87 21:23:19 GMT
From: bill@hao.ucar.edu (Bill Roberts)
Subject: Good Common Lisp books


Can anyone compare Wilensky's "Common LISPcraft" and Tatar's
"A Programmer's Guide to Common Lisp"? What are the strengths and weaknesses
of each book? I know about Steele's book but it is in a different class.
Thanks in advance for any input.

Bill Roberts
NCAR/HAO
Boulder, CO
!hao!bill

------------------------------

Date: 4 May 87 18:37:06 GMT
From: nsc!amdahl!ptsfa!jeg@decwrl.dec.com (John Girard)
Subject: RFI - CAD Systems that can scan existing documents


RFI - CAD

This is a request for information from the *academic* sector.
Several other people are already working on commercial-sector
offerings.

Our company has a significant number of hand-drawn diagrams
depicting sites and equipment. We would like to get these
documents into electronic media *and* simultaneously develop an
automated inventory. The documents are fairly consistent and
contain a limited number of symbols, although sometimes the
symbols may be touching. The lengths of the lines connecting the
symbols and the types of those lines are important.

The known alternatives are:

1. Manual data base entry and manual CAD composition

2. Video scan of documents followed by "touch up" and manual
   data base entry

3. Highly automated scan of documents with automatic touch up,
   automatic object identification, and automatic data base
   entry - manual monitoring and minor manual adjustments.

Obviously the 3rd alternative is the best. Is anyone working on
this type of problem?

Please contact LALEH FARR
PACIFIC BELL
2600 CAMINO RAMON
ROOM 2S500T
SAN RAMON, CALIF.
U.S.A. 94583
415-823-7277

------------------------------

Date: Mon, 4 May 87 22:56:41 PDT
From: dmittman@Jpl-VLSI.ARPA
Subject: CSG --> Octree Spatial Representation Translation

Does anyone know of a Common Lisp (Symbolics, perhaps) implementation of
a conversion between Constructive Solid Geometry and Octree representations
of spatial configurations? Whatever you have would be appreciated. I hate to
reinvent tools which already exist. - David Mittman
DMITTMAN@JPL-VLSI.ARPA
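
In case it helps anyone who ends up writing one from scratch: the usual
approach is recursive subdivision, classifying each cubical cell against
the CSG tree as full, empty, or partial and splitting the partial cells
into octants down to a depth limit. The Common Lisp below is only an
illustrative sketch of that idea; the names are made up, and the cell
classifier is a crude point-sampling test rather than an exact one.

;;; Sketch only: convert a CSG tree to an octree by recursive subdivision.
;;; A CSG node is either a membership predicate (a function of a 3-vector)
;;; or a list (:union|:intersection|:difference child-a child-b).

(defstruct cell lo hi)          ; axis-aligned box; LO and HI are 3-vectors

(defun inside-p (csg point)
  "True if POINT lies inside the CSG solid."
  (if (functionp csg)
      (funcall csg point)
      (destructuring-bind (op a b) csg
        (ecase op
          (:union        (or  (inside-p a point) (inside-p b point)))
          (:intersection (and (inside-p a point) (inside-p b point)))
          (:difference   (and (inside-p a point) (not (inside-p b point))))))))

(defun classify (csg cell &optional (n 50))
  "Crude point-sampling classification of CELL: :FULL, :EMPTY, or :PARTIAL."
  (let ((hits 0))
    (dotimes (i n)
      (let ((p (map 'vector (lambda (lo hi) (+ lo (random (float (- hi lo)))))
                    (cell-lo cell) (cell-hi cell))))
        (when (inside-p csg p) (incf hits))))
    (cond ((= hits n) :full)
          ((zerop hits) :empty)
          (t :partial))))

(defun octants (cell)
  "The eight sub-cells of CELL."
  (let ((mid (map 'vector (lambda (lo hi) (/ (+ lo hi) 2.0))
                  (cell-lo cell) (cell-hi cell))))
    (loop for i below 8
          for bits = (vector (ldb (byte 1 0) i) (ldb (byte 1 1) i) (ldb (byte 1 2) i))
          collect (make-cell
                   :lo (map 'vector (lambda (b lo m) (if (zerop b) lo m))
                            bits (cell-lo cell) mid)
                   :hi (map 'vector (lambda (b m hi) (if (zerop b) m hi))
                            bits mid (cell-hi cell))))))

(defun csg->octree (csg cell depth)
  "Octree as nested lists: :FULL, :EMPTY, or a list of eight subtrees."
  (let ((class (classify csg cell)))
    (if (or (not (eq class :partial)) (zerop depth))
        class
        (mapcar (lambda (sub) (csg->octree csg sub (1- depth))) (octants cell)))))

;; Example: unit sphere minus the positive-octant unit box, to depth 3.
;; (csg->octree (list :difference
;;                    (lambda (p) (<= (reduce #'+ (map 'vector #'* p p)) 1.0))
;;                    (lambda (p) (every (lambda (x) (<= 0.0 x 1.0)) p)))
;;              (make-cell :lo #(-1.0 -1.0 -1.0) :hi #(1.0 1.0 1.0))
;;              3)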

------------------------------

Date: 4 May 87 23:55:35 GMT
From: jade!lemon!c60a-3ed@ucbvax.Berkeley.EDU (Sugih Jamin)
Subject: DataFlow

I don't know if this is the right newsgroup for this question,
but can anyone tell me the best introductory/reference
book on dataflow systems, languages, and architectures?

Sugih Jamin
(c60b-jk@buddy.Berkeley.Edu)

------------------------------

Date: 5 May 87 13:13:51 GMT
From: super.upenn.edu!operations.dccs.upenn.edu!shaffer@RUTGERS.EDU
(Earl Shaffer)
Subject: Expert Systems for networking


Hello:

We are looking for a PC or VAX based expert system that
is capable of handling rules that describe network fault
diagnosis.

Therefore, its rule base capability must be significant,
and its inference capabilities must be rich (backward,
forward, mixed). The PC version would be a "portable
expert", whereas the VAX version would be smarter,
bigger, and unmovable.

Cost is a factor! Forget ART or KEE. Any help would
be appreciated.


thanx,

------------------------------

Date: 5 May 87 20:51:19 GMT
From: decvax!necntc!ci-dandelion!bunny!gps0@ucbvax.Berkeley.EDU
(Gregory Piatetsky-Shapiro)
Subject: Extracting Knowledge From Databases


******* This is not a line-eater line ******

I am interested in extracting Knowledge from Databases.
For example, by analyzing a medical database, a system can discover
new effects of known drugs (such a project was done by Blum & Wiederhold
at Stanford, 1982); by analyzing planetary motions, one may
discover Kepler's third law (this project was done by Kepler).
A more prosaic application is analyzing
a telephone company customer database to find what types of customers
order what types of services. In general, the discovered knowledge
may have the form of rules, functional dependencies,
causal dependencies, or statistical correlations.
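
As a deliberately tiny illustration of the customer-database example,
the Common Lisp sketch below counts co-occurrences of a customer
attribute and an ordered service, keeping the pairs whose conditional
frequency clears a threshold. It is only a caricature of the kind of
rule discovery meant here, not a description of any of the projects
mentioned; the record format and thresholds are invented.

;;; Sketch only: "customers of type T tend to order service S" rules,
;;; found by counting co-occurrences in a list of records.

(defun discover-rules (records &key (min-confidence 0.8) (min-support 5))
  "RECORDS is a list of alists, each with a :TYPE and a :SERVICE entry.
Returns rules of the form (type service confidence count)."
  (let ((pair-counts (make-hash-table :test #'equal))
        (type-counts (make-hash-table :test #'equal))
        (rules '()))
    (dolist (r records)
      (let ((type    (cdr (assoc :type r)))
            (service (cdr (assoc :service r))))
        (incf (gethash type type-counts 0))
        (incf (gethash (cons type service) pair-counts 0))))
    (maphash (lambda (pair count)
               (let ((confidence (/ count (gethash (car pair) type-counts))))
                 (when (and (>= count min-support)
                            (>= confidence min-confidence))
                   (push (list (car pair) (cdr pair) (float confidence) count)
                         rules))))
             pair-counts)
    rules))

;; Example record:  '((:type . :residential) (:service . :call-waiting))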

A closely related topic is Statistical Expert Systems,
which intelligently use statistical methods and packages to
find statistical correlations in data.

If you know of work in these areas, please email the appropriate
references to me. I would be very grateful and will
summarize the responses to the net.

Gregory Piatetsky-Shapiro at GTE Laboratories.
gps0@gte-labs.relay.cs.net

======== A standard disclaimer =======

------------------------------

Date: 17 Apr 87 00:32:18 GMT
From: decvax!wanginst!masscomp!dlcdev!eric@ucbvax.Berkeley.EDU (eric
van tassell)
Subject: OPS5

To all those of you who asked me to mail OPS5 to them and only gave
ARPA addresses of the form foo@bar.baz, please help a net neophyte
at a uucp-only site figure out how to translate this into a path
from my machine that won't upset the mailer at mit-eddie.


Eric Van Tassell
Data Language Corp.
617-663-5000
clyde!bonnie!masscomp!dlcdev!eric
harvard!mit-eddie!dlcdev!eric
dlcdev!eric@eddie.mit.edu

------------------------------

Date: Tue, 5 May 1987 21:24 EDT
From: "Scott E. Fahlman" <Fahlman@C.CS.CMU.EDU>
Subject: Kyoto Common Lisp addendum


A clarification from Mr. Yuasa:

To whom it may concern,

It seems that our previous note, announcing that we are looking
for a free channel for KCL distribution, may have caused confusion and
misunderstanding among many people. This is probably because only the
facts were stated, without the background necessary to
understand our intention.

Our intention is to make it clear that KCL is free academic software.
By "free", we mean that anyone can get it free of charge if he agrees
to the conditions we put in the License Agreement. It does NOT mean that
anyone has unlimited *free*dom with KCL. In particular, we have no intention
of putting KCL into the public domain. We would rather preserve the
identity of KCL, so that we can maintain its high quality.

Some commercial organizations are now distributing KCL and charge
fees to their customers. These fees cover only
the distribution costs and the services they offer to their
customers. We require no royalties from them. We are happy if more
people get the chance to use this software. Any technical help
we give them on request is voluntary.

Unfortunately, some people believe that we are receiving royalties for
KCL. In order to correct this mistaken belief, we decided to look for a
free channel for KCL distribution. Apparently, some KCL users
(including potential users) do not need any maintenance service at all.
We are glad to make KCL available to such users free of charge. Note that we
do not intend to restrict the KCL distribution activities of commercial
organizations. We intend to give users a choice and to make
clear what commercial organizations are charging them for. Note
also that some KCL versions require additional programs developed by
commercial organizations, and we cannot force them to make their code
public, though we expect them to do so.

We are now seriously looking for a free channel. We have already found some
candidates, but it will take some time before we decide on the most
appropriate channel for our purpose. If we cannot find an
appropriate channel, we will distribute KCL directly ourselves.
However, this would require a great deal of work and time,
so it should be the last resort.

Thanks.

Taiichi Yuasa, Dr.
Research Institute for Mathematical Sciences
Kyoto University

------------------------------

Date: Mon, 4 May 87 12:18 EST
From: "Linda G. Means" <MEANS%gmr.com@RELAY.CS.NET>
Subject: re: grammar checkers

Todd Ogasawara writes in AIList Digest v.5 #108:

>I think that if these style checking tools are used in conjunction
>with the efforts of a good teacher of writing, then these style
>checkers are of great benefit. It is better that children learn a
>few rules of writing to start with than no rules at all. Of course,
>reading lots of good examples of writing and a good teacher are still
>necessary.

Sure, but the problem is the bogus rules that the child is likely
to infer from the output of the style-checking program, like never
write a sentence longer than x words, or don't use passive voice,
or try not to write sentences with multiple clauses.


>On another level... I happened to discuss my response above with one
>of my dissertation committee members. His reaction? He pulled out
>a recent thesis proposal filled with red pencil marks (mostly
>grammatical remarks) and said, "So what if the style checkers are
>superficial? Most mistakes are superficial. Better that the style
>checker should find these things than me."

Sounds like a rather irresponsible attitude to me, given the state
of the art of automatic style checkers. Your prof needs a graduate
student slave if he dislikes having to correct student grammar
errors. Let's consider separately the issues of grammar correction and
stylistic advice (the two worlds partially overlap, but remain distinct
in some areas).

1. Grammar. As your prof points out, lots of grammar errors are
superficial, but your commercial grammar checker will fail to find all of
them, will "correct" perceived mistakes which really aren't mistakes, and
will give plenty of bad advice. Those programs "know" less about grammar
than the students who use them. Any bona fide grammatical errors which can
be found by the
commercially available software could also be found by the writer if he
were to proof his paper carefully. It grieves me to think of students
failing to proof their own papers because the computer can do it for them.

2. Style. The analysis of writing style is not a superficial task; it is,
in fact, a kind of expertise not found in many "literate" individuals.
In my experience, the best way to learn to write well is to scrutinize
your work in the company of a good writer who will think aloud with you
while helping you to rewrite sentences. I've successfully taught various
people to write that way. The second best method is a patient teacher's
red pen. In both cases, your prose is being evaluated by someone who is
trying to understand what you are trying to communicate in your writing.

You must understand that this is not the case with the computer. It
probably has no way of representing the discourse as a whole; all analysis
is performed at the sentence level with a heavy emphasis on syntax and
with no semantic theory of style. The result? Stylistic advice which
is so superficial as to be useless. Many years of research in the area of
computational stylistics have provided evidence that although some (few)
stylistic discriminators can be found through syntactic analysis, the
features which contribute to textual cohesion and to a given writer's
"stylistic fingerprint" cannot. Researchers are still stymied by the
problem of identifying stylistically significant features of a text.
Yet the program advocated by Carl Kadie feigns an understanding of the
effect that the prose will have on its reader; it generalizes from
syntactic structure to stylistic impact. Look at the summary generated
at the end of the text. The program equates active voice and short
sentences with "directness". I won't take the time here to argue
against the use of fuzzy adjectives like 'direct', 'expressive', 'fresh',
and so on to describe prose, since the use of such imprecise language
is a longstanding tradition in the arena of literary criticism. I can't
tell you exactly how to make your writing "direct", but I know that
directness cannot always be computed empirically, which is how your
machine computes it. A paragraph of non sequiturs probably shouldn't
be characterized as direct, even if all sentences are short and contain
only active verbs.
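
To make the point concrete: a checker built on such surface measures
takes only a few lines. The Common Lisp below is a caricature of the
genre, not a description of any particular product; it flags long
sentences and anything that smells of passive voice, and it knows
nothing else about the text.

;;; Caricature of a sentence-level "style checker" built purely on
;;; surface measures.  Deliberately crude, to show how little it sees.

(defun word-count (sentence)
  "Crude word count: one more than the number of spaces."
  (1+ (count #\Space (string-trim " " sentence))))

(defun passive-marker-p (sentence)
  "Crude passive test: a form of BE plus some word ending in -ed."
  (let ((s (string-downcase sentence)))
    (and (or (search " was " s) (search " were " s) (search " been " s))
         (search "ed " (concatenate 'string s " ")))))

(defun check-style (sentences &key (max-words 25))
  "Return (sentence . complaints) for each flagged sentence in SENTENCES."
  (loop for s in sentences
        for complaints = (append
                          (when (> (word-count s) max-words)
                            (list (format nil "long sentence (~D words)"
                                          (word-count s))))
                          (when (passive-marker-p s)
                            (list "possible passive voice")))
        when complaints collect (cons s complaints)))

;; (check-style '("The proposal was rejected by the committee."))
;; => (("The proposal was rejected by the committee."
;;      "possible passive voice"))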

An aside to Ken Laws:

You questioned whether the topic of automatic style checkers is appropriate
to AILIST: is it AI? I believe it is. The study of computational stylistics
is a difficult natural language problem with a long history. Topics range
from authorship studies of anonymous works to trying to identify stylistic
idiosyncrasies to automatic style advisors. In general, many theoretical issues
carry over from other areas of natural language processing, like discourse
analysis and understanding human reasoning processes. Think of a favorite
author. You may sometimes recognize a sample of his writing without
even knowing who wrote it, or you may say of another writer, "Gee, his
style reminds me of X". You may put down a book which you started reading
because the style is too "obtuse". How specifically does a writer use the
language to produce that effect? What characteristics of a text must we
identify to enable a computer to make judgments about style? Of course,
any advances made in tackling these issues may also be of use in the area
of text generation.

- Linda Means
GM Research Laboratories
means%gmr.com@relay.cs.net

------------------------------

Date: Sun, 3 May 1987 23:35 EDT
From: MINSKY%OZ.AI.MIT.EDU@XX.LCS.MIT.EDU
Subject: AIList Digest V5 #108

I agree with Todd Ogasawara: one should not criticise to extremes. I
found RightWriter useful and suggestive. It was helpful in detecting
obnoxious passive constructions and excessively long sentences. In
final editing of "The Society of Mind" I used spelling checkers to
notify me of unfamiliar words, and I often replaced them by more
familiar ones. I also used it to establish a "gradient". The early
chapters are written at a "grade level" of about 8.6 and the book ends
up with grade levels more like 13.2 - using RightWriter's quaint
scale.
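
For reference, "grade level" scores of this kind are generally simple
functions of average sentence length and average word length. Whether
RightWriter computes its numbers this way is not stated here; the
widely used Flesch-Kincaid grade, sketched below in Common Lisp, is
shown only as a representative example.

;;; Representative readability formula (Flesch-Kincaid grade level);
;;; whether RightWriter computes its numbers this way is an assumption.

(defun flesch-kincaid-grade (total-words total-sentences total-syllables)
  "Grade level = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."
  (- (+ (* 0.39 (/ total-words total-sentences))
        (* 11.8 (/ total-syllables total-words)))
     15.59))

;; (flesch-kincaid-grade 1000 70 1400)  =>  roughly 6.5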

Naturally the program makes lots of errors, but they are instantly
obvious and easily ignored.

I imposed a "style gradient" upon "The Society of Mind" because I
wanted its beginning to be accessible to non-specialists. I
cheerfully assumed that any reader who gets to the end will by then
have become a specialist.

------------------------------

End of AIList Digest
********************
