
AIList Digest Volume 8 Issue 132

AIList Digest           Thursday, 24 Nov 1988     Volume 8 : Issue 132 

Queries:

BOOTSTRAP
translating LISP to/in other languages
Prolog on a Macintosh II
Input refutations
OPS and Prolog comparison

Responses:

ES for Student Advising
Genetic Learning Algorithms
Learning arbitrary transfer functions (2 responses)
Iterative Deepening (2 responses)
AI & the DSM-III

----------------------------------------------------------------------

Date: Wed, 16 Nov 88 20:21:35 EST
From: "Thomas W. Stuart" <C078D6S6@UBVM>
Subject: BOOTSTRAP

I'm passing along a query from Dr. William McGrath, here at the School
of Information and Library Studies, SUNY - Buffalo. He is looking for
references or information about available programs and packages for
Efron's Bootstrap statistical procedures -- packages which might run
on micros or VAX systems.
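[For readers unfamiliar with the procedure: the nonparametric bootstrap itself is simple to sketch in Python. The function name and sample data below are illustrative, not one of the packages being sought.]

```python
import random

def bootstrap_se(data, statistic, n_resamples=1000, seed=0):
    """Estimate the standard error of `statistic` by Efron's bootstrap:
    resample the data with replacement many times and measure the
    spread of the statistic across the resamples."""
    rng = random.Random(seed)
    n = len(data)
    stats = []
    for _ in range(n_resamples):
        resample = [data[rng.randrange(n)] for _ in range(n)]
        stats.append(statistic(resample))
    mean = sum(stats) / n_resamples
    var = sum((s - mean) ** 2 for s in stats) / (n_resamples - 1)
    return var ** 0.5

# bootstrap standard error of the sample mean
data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9]
se = bootstrap_se(data, lambda xs: sum(xs) / len(xs))
```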

------------------------------

Date: 21 Nov 1988 07:55:54 CDT
From: Walter.Daugherity@LSR.TAMU.EDU
Subject: translating LISP to/in other languages


I am looking for information about converting LISP to other languages
(C, PASCAL, ADA, etc.) or about LISP interpreters written in such
languages.

Thanks in advance, Walter Daugherity

ARPA INTERNET: daugher@cssun.tamu.edu
Walter.Daugherity@lsr.tamu.edu
CSNET: WCD7007%LSR%TAMU@RELAY.CS.NET
WCD7007%SIGMA%TAMU@RELAY.CS.NET
BITNET: WCD7007@TAMLSR
WCD7007@TAMSIGMA

------------------------------

Date: Tue, 22 Nov 88 14:33:50 +1000
From: "ERIC Y.H. TSUI" <munnari!aragorn.oz.au!eric@uunet.UU.NET>
Subject: Prolog on a Macintosh II

I would like to communicate with users of the following PROLOGs on a MAC:

M-1.15 (from Advanced AI Systems Prolog)
IF/Prolog (from Interface Computer Gmbh.)
Prolog-1 (from Expert Systems International)
ZYX Macintosh Prolog 1.5 (from ZYX Sweden AB)

(Any other suggestions for a Prolog on a MAC II are also welcome.)

I am porting a large (approx. 1MB source) MU-Prolog (almost exactly DEC-10
Edinburgh Prolog syntax) system to run on a MAC II.

Desirable features include: save states, no need to pre-declare dynamic
predicates (flexible assert and retract), reconsult, large stack space
and efficient execution.

Eric Tsui eric@aragorn.oz
Research Associate
Department of Computing and Mathematics
Deakin University
Geelong, Victoria 3217
AUSTRALIA

------------------------------

Date: 22 Nov 88 07:46:51 GMT
From: geoff@wacsvax.OZ (Geoff Sutcliffe)
Subject: Input refutations

I have been searching (in the wrong places obviously) for a proof that
resolution & paramodulation, or resolution & paramodulation & factoring,
form a complete input refutation system for sets of Horn clauses, and
that the single negative clause in a minimally unsatisfiable set of
Horn clauses may be used as the top clause in such refutations.

Refutation completeness, without specification of the top clause, is
proved in "Unit Refutations and Horn Sets" [Henschen 1974]. If
set-of-support is compatible with input resolution, paramodulation, and
factoring, then it is possible to choose the negative clause as the
support set, and the problem is solved. Is this compatibility known?
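[For the propositional, paramodulation-free case, the input restriction described above is easy to demonstrate: the sketch below starts from the single negative clause and resolves only against input clauses. The clause encoding and depth bound are illustrative.]

```python
def input_refutation(horn_clauses, goal_clause):
    """Propositional sketch of an input refutation: the centre clause is
    always the current goal list (initially the single negative clause),
    and every resolution step uses an input clause (head, body).
    Returns True when the empty clause is derived."""
    def refute(goals, depth):
        if not goals:                 # empty clause derived
            return True
        if depth == 0:
            return False
        first, rest = goals[0], goals[1:]
        for head, body in horn_clauses:
            if head == first and refute(list(body) + rest, depth - 1):
                return True
        return False
    return refute(list(goal_clause), depth=20)

# Input clauses:  p :- q, r.   q.   r :- s.   s.
clauses = [('p', ('q', 'r')), ('q', ()), ('r', ('s',)), ('s', ())]
# Negative (top) clause:  :- p.
result = input_refutation(clauses, ('p',))
```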

Any help, with this seemingly obvious result, would be appreciated.

Geoff Sutcliffe

Department of Computer Science, CSNet: geoff@wacsvax.oz
University of Western Australia, ARPA: geoff%wacsvax.oz@uunet.uu.net
Mounts Bay Road, UUCP: ..!uunet!munnari!wacsvax!geoff
Crawley, Western Australia, 6009.
PHONE: (09) 380 2305 OVERSEAS: +61 9 380 2305

------------------------------

Date: 22 Nov 88 16:02:20 GMT
From: att!whuts!homxb!hou2d!shun@bloom-beacon.mit.edu (S.CHEUNG)
Subject: OPS and Prolog comparison

I am looking for some information comparing OPS83
(including OPS5 and C5) and Prolog, such as speed,
the types of applications they are good for, availability,
ease of software maintenance, how easy to learn, etc.
I am also interested in statistics concerning
the number of existing applications using each language.

There might be articles on these topics already;
can someone let me know where to find them?

Thanks in advance.

-- Shun Cheung

--
-- Shun Cheung, AT&T Bell Laboratories, Middletown, New Jersey
electronic: shun@hou2d.att.com or ... rutgers!mtune!hou2d!shun
voice: (201) 615-5135

------------------------------

Date: Thu, 17 Nov 1988 20:22:35 EST
From: "Thomas W. Stuart" <C078D6S6@UBVM>
Subject: ES for Student Advising

William McGrath (School of Information and Library Studies, 309 Baldy,
SUNY at Buffalo, 14120) has created an ES knowledgebase (KB) for
advising students on what courses to take for a projected plan of study
in library and information science, particularly in reference to the
student's specific career objective. The KB, created with 1stCLASS ES
shell, considers the type of job environment (academic, public,
corporate, sci-tech) and type of work (collection development,
cataloging, information retrieval, management, etc.), prerequisites,
hours needed to complete the program, need for faculty permission, and
other factors. Planned modules include advice for resolving schedule
conflicts, a list of job prospects given the student's program, and
feedback and evaluation.

------------------------------

Date: 7 Nov 88 12:03:04 GMT
From: mcvax!ukc!strath-cs!pat@uunet.uu.net (Pat Prosser)
Subject: Re: GENETIC LEARNING ALGORITHMS


Genetic Algorithms (GAs) traditionally represent the genetic string
(chromosome) using a binary alphabet; Holland has shown this to be
optimal. It is not the only alphabet, however: a purely symbolic
alphabet is possible if appropriate genetic operators are defined.
For example:

[1] P. Prosser, "A Hybrid Genetic Algorithm for Pallet Loading"
European Conference on Artificial Intelligence, 1988
[2] Derek Smith, "Bin Packing with Adaptive Search"
Proceedings ICGAA 1985
[3] David Goldberg, "Alleles, Loci and the Travelling Salesman
Problem"


The only problem with a non-binary alphabet is the limits of our
imagination.
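[To illustrate the point, here is a minimal GA over a purely symbolic alphabet in Python. The string-matching fitness, the operators, and the parameters are illustrative inventions, not drawn from the cited papers.]

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "   # symbolic, not binary
TARGET = "genetic algorithm"

def fitness(chrom):
    """Number of positions matching the target string."""
    return sum(c == t for c, t in zip(chrom, TARGET))

def crossover(a, b, rng):
    """One-point crossover on symbolic strings."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rng, rate=0.05):
    """Point mutation: replace a symbol with a random alphabet symbol."""
    return ''.join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in chrom)

def evolve(pop_size=200, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [''.join(rng.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[:pop_size // 2]          # truncation selection
        pop = parents + [mutate(crossover(rng.choice(parents),
                                          rng.choice(parents), rng), rng)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = evolve()
```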

------------------------------

Date: Fri, 18 Nov 88 10:18:37 EST
From: alexis%yummy@gateway.mitre.org
Reply-to: alexis%yummy@gateway.mitre.org
Subject: Flaming on Neural Nets and Transfer Functions

I have to admit some surprise that so many people got this "wrong."
Our experience is that neural nets of the PDP/backprop variety are at
their *BEST* with continuous mappings. If you just want classification
you might as well go with nearest-neighbor algorithms (or if you want
the same thing in a net, try Nestor's Coulombic stuff). If you can't
learn x => sin(x) in a couple of minutes, you've done something wrong
and should check your code (I'm assuming you thought to scale sin(x)
to [0,1]). Actually, requiring a PDP net to output 1's and 0's means
your weights must be quite large, which takes a lot of time and puts
you way out on the tails of the sigmoids, where learning is slow and
painful. What I do for fun (?) these days is try to make nets output
sin(t) {where t is time} and other waveforms with static or "seed"
wave inputs.
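[The claim above is easy to check: a minimal pure-Python backprop sketch that learns x => sin(x) scaled into [0,1]. The network size, learning rate, and sample grid are illustrative choices, not the poster's setup.]

```python
import math, random

def train_sin_net(hidden=8, epochs=4000, lr=0.3, seed=0):
    """Train a 1-input, one-hidden-layer sigmoid net by online backprop
    to map x -> (sin(x) + 1) / 2, i.e. sin scaled into [0,1]."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]   # input -> hidden
    b1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]   # hidden -> output
    b2 = rng.uniform(-1, 1)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    xs = [i / 20.0 * 2 * math.pi for i in range(21)]   # grid on [0, 2*pi]
    for _ in range(epochs):
        for x in xs:
            t = (math.sin(x) + 1) / 2                  # target in [0,1]
            h = [sig(w1[j] * x + b1[j]) for j in range(hidden)]
            y = sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
            d_out = (y - t) * y * (1 - y)              # output delta
            for j in range(hidden):
                d_h = d_out * w2[j] * h[j] * (1 - h[j])  # hidden delta
                w2[j] -= lr * d_out * h[j]
                w1[j] -= lr * d_h * x
                b1[j] -= lr * d_h
            b2 -= lr * d_out
    def predict(x):
        h = [sig(w1[j] * x + b1[j]) for j in range(hidden)]
        return sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
    return predict

net = train_sin_net()
```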

For those who like math, G. Cybenko (currently of U. Illinois and
starting 12/10/88 at Tufts) has a very good paper, "Approximation by
Superpositions of a Sigmoidal Function," where he gives an existence
proof that you can uniformly approximate any continuous function with
support in the unit hypercube. This means a NN with one hidden layer
(1 up from a perceptron) suffices. Certainly more layers generally give
more compact and robust codings ... but the theory is *finally* coming
together.
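[In modern notation, the density result described can be stated as follows; the symbols here are a paraphrase, not a quotation from the paper. For any continuous f on the unit hypercube and any eps > 0, there exist N and parameters alpha_j, w_j, theta_j such that]

```latex
\left| \, f(x) \;-\; \sum_{j=1}^{N} \alpha_j \,
  \sigma\!\left( w_j^{\top} x + \theta_j \right) \right| \;<\; \varepsilon
\qquad \text{for all } x \in [0,1]^n ,
```

[where sigma is any continuous sigmoidal function; the inner sum is exactly a single hidden layer of sigmoid units with a linear output.]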

Alexis Wieland .... alexis%yummy@gateway.mitre.org

------------------------------

Date: 17 Nov 88 20:48:52 GMT
From: amos!joe@sdcsvax.ucsd.edu (Shadow)
Subject: Re: Learning arbitrary transfer functions

In a previous article, 399.uvaee.ee.virginia.EDU writes:

>>I am looking for any references that might deal with the following
>>problem:
>>
>>y = f(x); f(x) is nonlinear in x
>>
>>Training Data = {(x1, y1), (x2, y2), ...... , (xn, yn)}
>>
>>Can the network now produce ym given xm, even if it has never seen the
>>pair before?
>>
>>That is, given a set of input/output pairs for a nonlinear function, can a
>>multi-layer neural network be trained to induce the transfer function

my response:

1. Neural nets are an attempt to model brain-like learning
(at least in theory).

So, how do humans learn nonlinear functions?

: you learn that x^2, for instance, is x times x.

And how about x times y? How do humans learn that?

: you memorize it, for single digits, and
: for more than a single digit, you multiply streams
: of digits together in a carry routine.

2. So the problem is a little more complicated. You might imagine
a network which can perfectly learn non-linear functions if
it has at its disposal various useful sub-networks (e.g., a
network can learn x^n if it has at its disposal some mechanism
and architecture suitable for multiplying x & x.)

(imagine a sub-network behaving as a single unit, receiving
input and producing output in a predictable mathematical manner)

(promoting thought)


What is food without the hunger ?
What is light without the darkness ?
And what is pleasure without pain ?

joe@amos.ling.ucsd.edu

------------------------------

Date: Fri, 18 Nov 88 19:50:27 pst
From: purcell%loki.edsg@hac2arpa.hac.com (ed purcell)
Subject: iterative deepening for game trees, state-space graphs

Some observations on the request of quintus!ok@unix.sri.com (16 Nov 88)
for references on the term ``iterative deepening'':

In his IJCAI85 paper on the IDA* (Iterative Deepening A*) search
algorithm for state-space problem graphs, Rich Korf of UCLA
acknowledges early chess-playing programs as the first implementations
of the idea of progressively deeper searches. (The history of
progressively deeper look-ahead searches for game trees is somewhat
reminiscent of the history of alpha-beta pruning -- these clever
algorithms were both implemented early but not immediately published
nor analyzed until many years later.)

The closely-related term ``progressive deepening'' also has been around
awhile; for example, this term is used in the 2nd edition (1984) of Pat
Winston's textbook ``An Introduction to AI.''

The contributions of Korf's IJCAI85 paper on IDA* are in the
re-formulation and analysis of progressively deeper depth-first search
for state-space graphs, using a heuristic evaluation function instead
of a fixed depth bound to limit node expansions.
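[For readers who have not seen Korf's formulation, a minimal sketch of IDA* in Python follows. The toy graph and the zero heuristic are illustrative; any admissible h works.]

```python
def ida_star(start, is_goal, successors, h):
    """IDA*: repeated depth-first searches, each bounded by a threshold
    on f = g + h; the next threshold is the smallest f value that
    exceeded the old one."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None            # report the overflowing f value
        if is_goal(node):
            return f, path
        nxt = float('inf')
        for child, cost in successors(node):
            if child in path:         # avoid cycles on the current path
                continue
            t, found = dfs(child, g + cost, bound, path + [child])
            if found is not None:
                return t, found
            nxt = min(nxt, t)
        return nxt, None
    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float('inf'):
            return None               # no solution

# toy graph with edge costs and a trivial (admissible) zero heuristic
graph = {'A': [('B', 1), ('C', 1)], 'B': [('D', 1)],
         'C': [('D', 2)], 'D': []}
path = ida_star('A', lambda n: n == 'D',
                lambda n: graph[n], lambda n: 0)
```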

It is interesting that Korf is now investigating the re-formulation of
minimax/alpha-beta pruning for state-space graphs.

Ed Purcell
purcell%loki.edsg@hac2arpa.hac.com
213-607-0793

------------------------------

Date: 19 Nov 88 02:16:34 GMT
From: korf@locus.ucla.edu
Subject: "Iterative-Deepening" Reference wanted

Another reference on this subject is: "An analysis of consecutively
bounded depth-first search with applications in automated deduction,"
by Mark E. Stickel and W. Mabry Tyson, in IJCAI-85, pp. 1073-1075.

------------------------------

Date: 23 Nov 88 18:07:05 GMT
From: sire@ptsfa.PacBell.COM (Sheldon Rothenberg)
Subject: Re: AI & the DSM-III


In a previous article, ANDERSJ%ccm.UManitoba.CA@MITVMA.MIT.EDU writes:
> Hi Again. I have a colleague who is attempting to write a paper on
> the use of AI techniques in psychiatric diagnosis in general, and
> more specifically using the DSM-III.


Todd Ogasawara, at U. of Hawaii, posted a 10-article bibliography on
related topics. The article that appears most relevant is:

Hardt, SL & MacFadden, DH
Computer Assisted Psychiatric Diagnosis: Experiments in
Software Design
from "Computers in Biology and Medicine", 17, 229-237

A book by DJ Hand entitled "Artificial Intelligence and Psychiatry,"
a 1985 publication of Cambridge University Press, also looks promising.

Todd's e-mail address on INTERNET is: todd@uhccux.UHCC.HAWAII.EDU

Shelley Rothenberg
(415) 867-5708

------------------------------

End of AIList Digest
********************
