AIList Digest            Friday, 28 Feb 1986       Volume 4 : Issue 40 

Today's Topics:
Queries - Lisp Books & Common Lisps & International Logo Exchange,
Knowledge Representation - Translation & Associative Memory,
Methodology - The Community Authoring Project & AI Taxonomy,
Literature - Scientific DataLink Index To AI Research 1954-1984

----------------------------------------------------------------------

Date: Wed, 26 Feb 86 12:35:22 CST
From: "Glenn O. Veach" <veach%ukans.csnet@CSNET-RELAY.ARPA>
Subject: Lisp in the classroom.

This past year at the University of Kansas we used Scheme in two
classes. In an undergraduate "Programming Languages" class we
went through Abelson and Sussman's book while using Scheme for
homework and class projects. In a graduate level "Artificial
Intelligence" class we went through Kowalski's book and assigned
a project to develop a Horne clause theorem prover which some
implemented using Scheme. We are now trying to a curriculum
for our "Introductory Programming" course in which we would use
MacScheme (we now use Pascal) and would use Abelson and Sussman
as a text (probably not the entire book). We would hope to use
the remaining chapters of the text for our second semester
programming course.
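
[As an aside, the flavor of the theorem-prover project mentioned above
can be sketched very compactly in Scheme.  What follows is a minimal
propositional backward chainer -- no variables, no unification, no
cycle checking -- with invented example clauses; it is offered only as
an illustration, not as the actual course assignment.]

; Clauses are (head . body): HEAD is provable if every goal in BODY is.
; A clause with an empty body is a fact.
(define rules
  '((mortal human)          ; mortal <- human
    (human greek)           ; human  <- greek
    (greek)))               ; greek is a fact

; (prove goal): is GOAL derivable from RULES by backward chaining?
(define (prove goal)
  (define (prove-all goals)
    (or (null? goals)
        (and (prove (car goals)) (prove-all (cdr goals)))))
  (let loop ((rs rules))
    (cond ((null? rs) #f)
          ((and (eq? (caar rs) goal) (prove-all (cdar rs))) #t)
          (else (loop (cdr rs))))))

(prove 'mortal)    ; => #t
(prove 'fish)      ; => #f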

We are of course encountering some resistance as we try to forge
ahead with Lisp as a basic instructional language. I understand
that MIT uses Abelson and Sussman as the text for their first
course in programming languages. Do they cover the entire text?
What do they use for more advanced programming language courses?
Do any other schools have a similar curriculum? Has anyone
been involved with the review process of ACM or IEEE for CS or
ECE programs and suggested the use of Lisp as a basic language?
What are some of the more compelling arguments for and against
such an effort? If anyone could direct me to any B-Boards on
ARPA net which would be interested in such a discussion I would
appreciate it.

Glenn O. Veach
Artificial Intelligence Laboratory
Department of Computer Science
University of Kansas
Lawrence, KS 66045-2192
(913) 864-4482
veach%ukans.csnet@csnet-relay

------------------------------

Date: Thu, 27 Feb 86 16:44:28 est
From: nikhil@NEWTOWNE-VARIETY.LCS.MIT.EDU (Rishiyur S. Nikhil)
Subject: Public domain Common Lisps?


Prof. Rajeev Sangal of the Indian Institute of Technology, Kanpur, is looking
for implementations of Common Lisp in the public domain, running on any of
these machines:

DEC-10 running TOPS-10
UNIX System III (with Berkeley enhancements)
IBM PCs running MS-DOS

Are there any such implementations? If you have any information/opinions,
please reply to

nikhil@xx.lcs.mit.edu

Thanks in advance,

Rishiyur Nikhil

------------------------------

Date: 27 February 1986 13:44:31 EST THURSDAY
From: FRIENDLY%YORKVM1.BITNET@WISCVM.WISC.EDU (Michael Friendly)
Subject: International Logo eXchange

I am the North American field editor for a new Logo newsletter,
ILX, edited by Dennis Harper at UCSB and published by Tom Lough
of the National Logo Exchange, PO Box 5341, Charlottesville, VA
22905.

I write a bi-monthly column on Logo-like educational computing
and am interested in hearing from people who are doing work that
might be of interest to the international Logo community. Please
reply directly to FRIENDLY@YORKVM1.BITNET. Applications of Logo
to particular subject areas, advanced ideas, list processing,
metaphors for teaching Logo, etc., are of particular interest.

I am also interested in developing a network forum for Logo
workers, perhaps going through AI-ED or perhaps separate from it,
and would appreciate hearing from anyone about other nets, BBoards,
or conferences in this area.

My background:
I am a cognitive psychologist doing work on knowledge structure
and memory organization, with interests toward the applied side,
and am developing empirical techniques for cognitive mapping --
graphic portrayal of an individual's knowledge of some domain.

I have written a book on Advanced Logo with applications in AI,
computational linguistics, mathematics, physics, etc., oriented
toward courses in Computer Applications in Psychology and toward
use as an advanced Logo text in a Faculty of Education. It is due
to appear sometime in 1986.

------------------------------

Date: 27 Feb 86 09:38:30 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: Knowledge Representation and Translation

Could anyone send me or post to the net references on converting
knowledge from one representational structure to another? For
example, translating between frames and semantic nets would be of
interest.

Well, here are a couple of obvious ones that you probably already know about:

* Brachman, R.J. On the Epistemological Status of Semantic Networks.
* Etherington, D.W. and R. Reiter. On Inheritance Hierarchies with Exceptions.
* Hayes, P.J. The Logic of Frames.

These can be found in Brachman & Levesque's `Readings in Knowledge
Representation', Morgan Kaufmann, 1985. Actually, as I look through
the TOC, I realize that you should probably just get the book if you
don't have it. Lots of good stuff. It has an extensive, partially
annotated bibliography too.
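
[For what it's worth, a toy version of the frame/semantic-net
translation being asked about can be written in a few lines of Scheme.
The sketch below uses invented slot names and deliberately naive
encodings -- a frame as an association list, a semantic net as a list
of (node relation value) triples -- and is not any of the formalisms
cited above.]

; A frame here is (name (slot . value) ...); a semantic net is a
; list of (node relation value) triples.  Both are toy encodings.
(define clyde
  '(clyde (isa . elephant) (color . grey) (legs . 4)))

; frame->triples: emit one triple per slot of FRAME.
(define (frame->triples frame)
  (map (lambda (slot) (list (car frame) (car slot) (cdr slot)))
       (cdr frame)))

; triples->frame: collect the triples whose node is NAME back into
; a frame, inverting frame->triples.
(define (triples->frame name triples)
  (cons name
        (let loop ((ts triples))
          (cond ((null? ts) '())
                ((eq? (car (car ts)) name)
                 (cons (cons (cadr (car ts)) (caddr (car ts)))
                       (loop (cdr ts))))
                (else (loop (cdr ts)))))))

(frame->triples clyde)
; => ((clyde isa elephant) (clyde color grey) (clyde legs 4))
(triples->frame 'clyde (frame->triples clyde))
; => (clyde (isa . elephant) (color . grey) (legs . 4))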

------------------------------

Date: 27 Feb 86 10:11:11 est
From: Walter Hamscher <hamscher@MIT-HTVAX.ARPA>
Subject: Associative Memory

Date: 24 Feb 86 22:59:07 GMT
One of the biggest problems AI'ers seem to be having with their machines is
one of data access. Now, a human [or other sentient life-form :-)] has a
large pool of experience (commonly referred to as a swamp) that he/she/it
has access to.
It is linked together in many obscure ways (as shown by word-association
games) so that for any given thought (or problem) there are a vast number
(usually) of (not necessarily) connected replies.
Thinking of that swamp as a form of database, does the problem then boil
down to one of finding a path-key that would let you access all of the
cross-references quickly?

It's not invalid but unfortunately it isn't new either. See any paper
on Frames. The power of a frame-organized database isn't that there
happen to be these defstructs called frames, it's in the fact that the
frames are all connected together -- it's indexing by relatedness (how
dense the connections have to be before you start to win is an open
question, but see Lenat's work on CYC in a recent issue of
AI Magazine). For background see Minsky (A Framework For Representing
Knowledge, 1975). See NETL (e.g. Fahlman, Representing Real-world
Knowledge, circa 1979, MIT Press). See Connection Machine literature
(e.g. The Connection Machine, Hillis, 1985, MIT press). If you want
to see the connection between AI KB's and traditional DBMS's covered
extensively, see `Proceedings of the Islamorada Workshop on Large
Scale Knowledge Base and Reasoning Systems' (Feb 85) chaired by
Michael Brodie, available (I think) from Computer Corporation of
America, Cambridge MA (617) 492-8860.
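
[To make the "indexing by relatedness" point concrete, here is a toy
Scheme sketch of a network of concepts where retrieval simply follows
links outward from a starting node.  The concepts and the fixed search
depth are invented for illustration; this is not NETL, CYC, or a real
spreading-activation model.]

; Toy "swamp": each concept is linked to a few related concepts.
(define links
  '((swamp water mud alligator)
    (water rain lake)
    (alligator reptile teeth)
    (lake water boat)))

; neighbors: the concepts directly linked to C (possibly none).
(define (neighbors c)
  (cond ((assq c links) => cdr)
        (else '())))

; related: every concept reachable from START within DEPTH links.
; This is the "path through the cross-references" idea in its
; crudest form; duplicates are not removed.
(define (related start depth)
  (if (= depth 0)
      (list start)
      (cons start
            (apply append
                   (map (lambda (n) (related n (- depth 1)))
                        (neighbors start))))))

(related 'swamp 2)
; => (swamp water rain lake mud alligator reptile teeth)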

------------------------------

Date: Thu 27 Feb 86 13:33:56-PST
From: Tom Garvey <Garvey@SRI-AI.ARPA>
Subject: Re: The Community Authoring Project


While I would certainly not want to be viewed as a stifler of creative
urges, sometimes it seems that a little common sense, reality,
engineering knowledge, ..., injected into our blue-skying would go a
long way toward setting feasible goals. What makes CAP (to which any
yahoo could presumably add his personal view of the world) anything
more than, say, a multimedia extension of this BBOARD?

Cheers,
Tom

------------------------------

Date: Fri, 21 Feb 86 11:51 EST
From: Tim Finin <Tim%upenn.csnet@CSNET-RELAY.ARPA>
Subject: AI Taxonomy

When Dave Waltz was overseeing the AI section of CACM, he developed
a rather extensive taxonomy of AI. I recall seeing it published
in AAAI magazine or SIGART or a similar source about 2 or 3 years ago.

[I believe that he developed it for Scientific Datalink and then
published it in AI Magazine. See the following message. -- KIL]

------------------------------

Date: Fri 21 Feb 86 09:42:28-PST
From: C.S./Math Library <LIBRARY@SU-SCORE.ARPA>
Subject: Scientific DataLink Index To AI Research 1954-1984

[Forwarded from the Stanford bboard by Laws@SRI-AI.]


We have just added the four-volume set of the Scientific DataLink Index
To Artificial Intelligence Research 1954-1984. The four volumes,
comprising two abstract volumes, a subject volume, and an author index,
are shelved with the serial indexes. These volumes index the Scientific
DataLink microfiche collections for the following AI research
institutions: Bolt Beranek and Newman, CMU, University of Illinois, ISI,
University of Massachusetts, MIT, University of Pennsylvania, University
of Rochester, Rutgers, SRI, Stanford AI and HPP, University of Texas at
Austin, Xerox PARC, and Yale. The subject volume is based on the AI
classification published in AI Magazine, Spring 1985. I have included a
photocopy of that article in the back of the subject volume.

ACM is almost up to date with its ACM Guide To Computing Literature, an
annual index to the computer science literature. We have received
volumes through 1984, and the 1985 volume is expected to be out this
summer. ACM expects future annual volumes to appear by the summer of
the year following the one they cover. This annual index includes not
only all entries from the Computing Reviews Index but also additional
computer science articles not covered in the monthly Computing Reviews.
Monographs, proceedings, and journal articles are included in the index.

Harry Llull

------------------------------

Date: 19 Feb 86 17:09:00 GMT
From: hplabs!hp-pcd!orstcs!tgd@ucbvax.berkeley.edu (tgd)
Subject: Re: taxonomizing in AI: useless, harmful

Taxonomic reasoning is a weak but important form of plausible reasoning.
It makes no difference whether it is applied to man-made or naturally
occurring phenomena. The debate on the status of artificial intelligence
programs (and methods) as objects for empirical study has been going on
since the field began. I assume you are familiar with the arguments put
forth by Simon in his book Sciences of the Artificial. Consider the case of
the steam engine and the rise of thermodynamics. After many failed attempts
to improve the efficiency of the steam engine, people began to look for
an explanation, and the result is one of the deepest theories of modern
science.
I hope that a similar process is occurring in artificial intelligence. By
analyzing our failures and successes, we can attempt to find a deeper theory
that explains them. The effort by Michalski and others (including myself)
to develop a taxonomy of machine learning programs is viewed by me, at
least, not as an end in itself, but as a first step toward understanding
the machine learning problem at a deeper level.

Tom Dietterich
Department of Computer Science
Oregon State University
Corvallis, OR 97331
dietterich@oregon-state.csnet

------------------------------

End of AIList Digest
********************
