AIList Digest            Monday, 22 Aug 1983       Volume 1 : Issue 44 

Today's Topics:
AI Architecture - Parallel Processor Request,
Computational Complexity - Maximum Speed,
Functional Programming,
Concurrency - Production Systems & Hardware,
Programming Languages - NETL
----------------------------------------------------------------------

Date: 18 Aug 83 17:30:43-PDT (Thu)
From: decvax!linus!philabs!sdcsvax!noscvax!revc @ Ucb-Vax
Subject: Looking for parallel processor systems
Article-I.D.: noscvax.182

We have been looking into systems to replace our current ANALOG
computers. They are the central component in a real time simulation
system. To date, the only system we've seen that looks like it might
do the job is the Zmob system being built at the Univ. of Md (Mark
Weiser).

I would appreciate it if you could supply me with pointers to other
systems that might support high speed, high quality, parallel
processing.

Note: most High Speed networks are just too slow and we can't justify
a Cray-1.

Bob Van Cleef

uucp : {decvax!ucbvax || philabs}!sdcsvax!nosc!revc
arpa : revc@nosc
CompuServe : 71565,533

------------------------------

Date: 19 Aug 83 20:29:13-PDT (Fri)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: maximum speed
Article-I.D.: ssc-vax.445

Hmmm, I didn't know that addition of n numbers could be performed
simultaneously - ok then, constant time matrix multiplication, given
enough processors. I still haven't seen any hard data on limits to
speed because of communications problems. If it seems like there are
limits but you can't prove it, then maybe you haven't discovered the
cleverest way to do it yet...
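
For what it's worth, the standard argument is that n numbers can be
summed in O(log n) parallel steps by a balanced reduction tree, since
the two halves of each sum are independent of each other. A purely
illustrative serial sketch of that tree shape, in Lisp:

    ;; Sum v[lo..hi-1] by pairwise (tree) reduction.  The two recursive
    ;; calls are independent, so each level of the tree could in
    ;; principle run on separate processors; the depth, and hence the
    ;; parallel time, is O(log n).
    (defun tree-sum (v lo hi)
      (cond ((= (- hi lo) 0) 0)
            ((= (- hi lo) 1) (aref v lo))
            (t (let ((mid (floor (+ lo hi) 2)))
                 (+ (tree-sum v lo mid)
                    (tree-sum v mid hi))))))

    ;; (tree-sum #(1 2 3 4 5 6 7 8) 0 8)  =>  36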

stan the lep hack
ssc-vax!sts (soon utah-cs)

ps The space cost of constant or log time matrix mults is of course
ridiculous

pps Perhaps this should move to net.applic?

------------------------------

Date: Fri, 19 Aug 83 15:08:15 EDT
From: Paul Broome (CTAB) <broome@brl-bmd>
Subject: Re: Functional programming and AI

Stan,

Let me climb into my pulpit and respond to your FP/AI prod. I don't
think FP and AI are diametrically opposed. To refresh everyone's
memory here are some of your comments.


    ... Having worked with both AI and FP languages,
    it seems to me that the two are diametrically
    opposed to one another. The ultimate goal of functional
    programming language research is to produce a language that
    is as clean and free of side effects as possible; one whose
    semantic definition fits on a single side of an 8 1/2 x 11
    sheet of paper ...

Looking at Backus' Turing award lecture, I'd have to say that
cleanliness and freedom of side effects are two of Backus' goals but
certainly not succinctness of definition. In fact Backus says (CACM,
Aug. 78, p. 620), "Our intention is to provide FP systems with widely
useful and powerful primitive functions rather than weak ones that
could then be used to define useful ones."


Although FP has no side effects, Backus also talked about applicative
state transition (AST) systems, with one top-level change of state per
computation, i.e. one side effect. The world of expressions is a
nice, orderly one; the world of statements has all the mush. He's
trying to move the statement part out of the way.

I'd have to say one important part of the research in FP systems is to
define and examine functional forms (program forming operations) with
nice mathematical properties. A good way to incorporate (read
implement) a mathematical concept in a computer program is without
side effects. This freedom from side effects is nice because it means
that a program is 'referentially transparent', i.e. it can be used
without concern about collisions with internal names or memory
locations, AND the program is dependable; it always does the same thing.
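
As a minimal illustration (the function names are invented): a
side-effect-free function always returns the same value for the same
argument, while one that touches shared state does not.

    ;; Referentially transparent: depends only on its argument.
    (defun square (x) (* x x))           ; (square 3) is 9, every time

    ;; Not referentially transparent: consults and updates a global.
    (defvar *calls* 0)
    (defun square-and-count (x)
      (incf *calls*)                     ; side effect on shared state
      (+ (* x x) *calls*))

    ;; (square-and-count 3) => 10, then 11, then 12, ...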

A second nice thing about applicative languages is that they are
appropriate for parallel execution. In a shared memory model of
computation (e.g. Ada) it's very difficult (NP-complete, see CACM, a
couple of months ago) to tell if there is collision between
processors, i.e. whether one processor is overwriting data that
another processor needs.


    On the other hand, the goal of AI research (at least in the
    AI language area) is to produce languages that can effectively
    work with as tangled and complicated representations of
    knowledge as possible. Languages for semantic nets, frames,
    production systems, etc, all have this character.

I don't think that's the goal of AI research but I can't offer a
better one at the moment. (Sometimes it looks as if the goal is to
make money.)

Large, tangled structures can be handled in applicative systems but
not efficiently, at least I don't see how. If you view a database
update as a function mapping the pair (NewData, OldDatabase) into
NewDatabase you have to expect a new database as the returned value.
Conceptually that's not a problem. Operationally, however, the update
should amount to just a minor modification of the original database
when there is no sharing, and a suspended modification when the
database is being shared. There are limited transformations that can
help, but
there is much room for improvement.
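
One way to picture the "minor modification" case: a purely functional
update can return a new database that shares all of its unchanged
structure with the old one. A toy sketch (the names DB-ADD and
DB-LOOKUP are invented):

    ;; The "new" database is one fresh cons cell whose tail IS the old
    ;; database: the old version is untouched and nothing is copied.
    (defun db-add (key value db)
      (cons (cons key value) db))

    (defun db-lookup (key db)
      (cdr (assoc key db)))

    ;; (defparameter db0 (db-add 'color 'gray '()))
    ;; (defparameter db1 (db-add 'owner 'fred db0))
    ;; db0 is still ((COLOR . GRAY)); db1 shares db0 as its tail.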

An important point in all this is program transformation. As we build
bigger and smarter systems we widen the gap between the way we think
and the hardware. We need to write clear, easy to understand, and
large-chunked programs but transform them (within the same source
language) into possibly less clear, but more efficient programs.
Program transformation is much easier when there are no side effects.
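
A tiny example of the kind of source-to-source transformation meant
here; the rewrite is only guaranteed to preserve meaning because
neither function involved has side effects:

    ;; Two passes over the list ...
    (defun two-pass (xs)
      (mapcar #'1+ (mapcar (lambda (x) (* x x)) xs)))

    ;; ... mechanically fused into one pass.  With side effects the two
    ;; versions would interleave their effects differently.
    (defun one-pass (xs)
      (mapcar (lambda (x) (1+ (* x x))) xs))

    ;; (two-pass '(1 2 3)) => (2 5 10)    (one-pass '(1 2 3)) => (2 5 10)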

    Now between the Japanese 5th generation project (and the US
    response) and the various projects to build non-von Neumann
    machines using FP, it looks to me like the seeds of a
    controversy over the best way to do programming. Should we be
    using FP languages or AI languages? We can't have it both ways,
    right? Or can we?

A central issue is efficiency. The first FORTRAN compiler was viewed
with the same distrust that the public had about computers in general.
Early programmers didn't want to relinquish explicit management of
registers or whatever because they didn't think the compiler could do
as well as they could. Later there was skepticism about garbage collection
and memory management. A multitude of sins is committed in the name
of (machine) efficiency at the expense of people efficiency. We
should concern ourselves more with WHAT objects are stored than with
HOW they are stored.

There's no doubt that applicative languages are applicable. The
Japanese (fortunately for them) are less affected by, as Dijkstra puts
it, "our antimathematical age." And they, unlike us, are willing to
sacrifice some short term goals for long term goals.


- Paul Broome
(broome@brl)

------------------------------

Date: 17 Aug 83 17:06:13-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: FP and AI - (nf)
Article-I.D.: ssc-vax.427

There *is* a powerful functional language underlying most AI programs
- Lisp! But it's never pure Lisp. The realization that got me to
thinking about this was the apparent necessity for list surgery,
sooner or later. rplaca and allied functions show up in the strangest
places, and seem to be crucial to the proper functioning of many AI
systems (consider inheritance in frames or the construction of a
semantic network; perhaps method combination in flavors qualifies).
I'm not arguing that an FP language could *not* be used to build an AI
language on top; I'm thinking more about fundamental philosophical
differences between different schools of research.
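
For readers who haven't met it: RPLACA destructively replaces the car
of an existing cons cell, so the change shows up in every structure
that shares the cell; that is what makes list surgery convenient for
networks and frames, and exactly what a pure FP language rules out. A
trivial illustration:

    (defparameter *node-a* (list 'color 'gray))
    (defparameter *node-b* (cons 'isa *node-a*))   ; shares *node-a*'s cells

    (rplaca (cdr *node-a*) 'white)                 ; surgery on a shared cell

    ;; *node-a* is now (COLOR WHITE), and *node-b*, never touched
    ;; directly, is now (ISA COLOR WHITE): the change is visible
    ;; through every pointer to the shared structure.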

stan the lep hacker
ssc-vax!sts (soon utah-cs)

------------------------------

Date: Sat 20 Aug 83 12:28:17-PDT
From: PEREIRA@SRI-AI.ARPA
Subject: So the language analysis problem has been solved?!?

I will also refrain from flaming, but not from taking to task
excessive claims.

    I'll refrain from flaming about traditional (including
    logic) grammars. I'm tired of people insisting on a
    restricted view of language that claims that grammar rules
    are the ultimate description of syntax (semantics being
    irrelevant) and that idioms are irritating special cases. I
    might note that we have basically solved the language
    analysis problem (using a version of Berkeley's Phrase
    Analysis that handles ambiguity) ...

I would love to test that "solution of the language analysis
problem" ... As for the author being "tired of people insisting on a
restricted ...", he is just tired of his own straw people, because
there doesn't seem to be anybody around anymore claiming that
"semantics is irrelevant". Formal grammars (logic or otherwise) are
just a convenient mathematical technique for representing SOME
regularities in language in a modular and testable form. OF COURSE, a
formal grammar seen from the PROCEDURAL point of view can be replaced
by any arbitrary "ball of string" with the same operational semantics.
What this replacement does to modularity, testability and
reproducibility of results is sadly clear in the large amount of
published "research" in natural language analysis which is untestable
and irreproducible. The methodological failure of this approach
becomes obvious if one considers the analogous proposal of replacing
the principles and equations of some modern physical theory (general
relativity, say) by a computer program which computes "solutions" to
the equations for some unspecified subset of their domain, some of
these solutions being approximate or plain wrong for some (again
unspecified) set of cases. Even if such a program were "right" all the
time (in contradiction with all our experience so far), its sheer
opacity would make it useless as scientific explanation.

Furthermore, when mentioning "semantics", one had better say which KIND of
semantics one means. For example, grammar rules fit very well with
various kinds of truth-theoretic and model-theoretic semantics, so the
comment above cannot be about that kind of semantics. Again, a theory
of semantics needs to be testable and reproducible, and, I would
claim, it only qualifies if it allows the representation of a
potential infinity of situation patterns in a finite way.

    I don't recall a von Neumann bottleneck in AI programs, at
    least not of the kind Backus was talking about. The main
    bottleneck seems to be of a conceptual rather than a
    hardware nature. After all, production systems are not
    inherently bottlenecked, but nobody really knows how to make
    them run concurrently, or exactly what to do with the
    results (I have some ideas though).

The reason why nobody knows how to make production systems run
concurrently is simply that they use a global state and side
effects. This IS precisely the von Neumann bottleneck, as made clear
in Backus' article, and is a conceptual limitation with hardware
consequences rather than a purely hardware limitation. Otherwise, why
would Backus address the problem by proposing a new LANGUAGE (fp),
rather than a new computer architecture? If your AI program was
written in a language without side effects (such as PURE Prolog), the
opportunities for parallelism would be there. This would be
particularly welcome in natural language analysis with logic (or other
formal) grammars, because dealing with more and more complex subsets
of language needs an increasing number of grammar rules and rules of
inference, if the results are to be accurate and predictable.
Analysis times, even if they are polynomial in the size of the input,
may grow EXPONENTIALLY with the size of the grammar.

Fernando Pereira
AI Center
SRI International
pereira@sri-ai

------------------------------

Date: 15 Aug 83 22:44:05-PDT (Mon)
From: pur-ee!uiucdcs!uicsl!pollack @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: uiucdcs.2573


The nodes in a data-flow machine, in order to compute efficiently,
must be able to do a local computation. This is why arithmetic or
logical operations are O.K. to distribute. Your scheme, however,
seems to require that the database of propositions be available to
each node, so that the known facts can be deduced "instantaneously".
This would cause severe problems with the whole idea of concurrency,
because either the database would have to be replicated and passed
through the network, or an elaborate system of memory locks would need
to be established.
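
To make the locality point concrete: a dataflow node is in essence a
function that fires once all of its input tokens have arrived and
computes from those tokens alone; a rule that must also consult a
global proposition database no longer fits that mold. A toy sketch
(all names invented):

    ;; A toy dataflow node: it may fire only when every input token is
    ;; present, and its output depends on nothing but those tokens.
    (defstruct node op inputs)           ; NIL in INPUTS means "not yet arrived"

    (defun try-fire (node)
      (when (notany #'null (node-inputs node))       ; all tokens present?
        (apply (node-op node) (node-inputs node))))  ; purely local computation

    ;; (try-fire (make-node :op #'+ :inputs '(2 3)))   => 5
    ;; (try-fire (make-node :op #'+ :inputs '(2 nil))) => NIL (not ready)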

The Hearsay system from CMU was one of the early PS's with claims to a
concurrent implementation. There is a paper I remember in IEEE ToC (75
or 76) which discussed the problems of speedup and locks.

Also, I think John Holland (of Michigan?) is currently working on a
parallel PS machine (but doesn't call it that!)


Jordan Pollack
University of Illinois
...!pur-ee!uiucdcs!uicsl!pollack

------------------------------

Date: 17 Aug 83 16:56:55-PDT (Wed)
From: decvax!tektronix!uw-beaver!ssc-vax!sts @ Ucb-Vax
Subject: Re: data flow computers and PS's - (nf)
Article-I.D.: ssc-vax.426

A concurrent PS is not too impossible, 'cause I've got one
(specialized for NL processing and not actually implemented
concurrently, but certainly capable). It is true that the working
memory would have to be carefully organized, but that's a matter of
sufficiently clever design; there are no fundamental theoretical
problems. Traditional approaches won't work, because two concurrently
operating rules may come to contradictory conclusions, both of which
may be valid. You need a way to store both of these and use them.
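
One simple-minded way to keep both conclusions around (a sketch only,
not Stan's actual design): tag every working-memory element with the
rule that produced it and let retrieval return all of them,
contradictions included, instead of forcing an early choice.

    ;; Working memory as a list of (assertion . source-rule) pairs.
    ;; Nothing is overwritten, so rules firing concurrently can all
    ;; record their conclusions, even contradictory ones.
    (defparameter *wm* '())

    (defun assert-fact (fact rule)
      (push (cons fact rule) *wm*))

    (defun conclusions-about (subject)
      (remove-if-not (lambda (entry) (eq (caar entry) subject)) *wm*))

    ;; (assert-fact '(clyde gray)  'rule-7)
    ;; (assert-fact '(clyde white) 'rule-12)
    ;; (conclusions-about 'clyde)
    ;;   => (((CLYDE WHITE) . RULE-12) ((CLYDE GRAY) . RULE-7))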

stan the leprechaun hacker
ssc-vax!sts (soon utah-cs)

------------------------------

Date: 18 Aug 83 0516 EDT
From: Dave.Touretzky@CMU-CS-A
Subject: NETL

I am a graduate student of Scott Fahlman's, and I've been working on
NETL for the last five years. There are some interesting lessons to
be learned from the history of the NETL project. NETL was a
combination of a parallel computer architecture, called a parallel
marker propagation machine, and a representation language that
appeared to fit well on this architecture. There will probably never
be a hardware implementation of the NETL Machine, although it is
certainly feasible. Here's why...

The first problem with NETL is its radical semantics: no one
completely understands their implications. We (Scott Fahlman, Walter
van Roggen, and I) wrote a paper in IJCAI-81 describing the problems
we had figuring out how exceptions should interact with multiple
inheritance in the IS-A hierarchy and why the original NETL system
handled exceptions incorrectly. We offered a solution in our paper,
but the solution turned out to be wrong. When you consider that NETL
contains many features besides exceptions and inheritance, e.g.
contexts, roles, propositional statements, quantifiers, and so on, and
all of these features can interact (!!), so that a role (a "slot" in
frame lingo) may only exist within certain contexts, and have
exceptions to its existence (not its value, which is another matter)
in certain sub-contexts, and may be mapped multiple times because of
the multiple inheritance feature, it becomes clear just how
complicated the semantics of NETL really is. KLONE is in a similar
position, although its semantics are less radical than NETL's.
Fahlman's book contains many simple examples of network notation
coupled with appeals to the reader's intuition; what it doesn't
contain is a precise mathematical definition of the meaning of a NETL
network because no such definition existed at that time. It wasn't
even clear that a formal definition was necessary, until we began to
appreciate the complexity of the semantic problems. NETL's operators
are *very* nonstandard; NETL is the best evidence I know of that
semantic networks need not be simply notational variants of logic,
even modal or nonmonotonic logics.
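
To give the flavor of the exception problem for readers who haven't
seen it (a toy sketch, not NETL's machinery): a property lookup that
searches upward through IS-A links lets a more specific class
override, or "except", a value inherited from above. With a single
parent per class this is easy; the interactions described above begin
when several parents disagree.

    (defparameter *isa*
      '((clyde . royal-elephant) (royal-elephant . elephant)))
    (defparameter *props*
      '((elephant color gray) (royal-elephant color white)))

    ;; Take the first value found on the way up the IS-A chain, so the
    ;; most specific class wins.
    (defun lookup (class property)
      (and class
           (or (third (find-if (lambda (p) (and (eq (first p) class)
                                                (eq (second p) property)))
                               *props*))
               (lookup (cdr (assoc class *isa*)) property))))

    ;; (lookup 'clyde 'color) => WHITE   (the exception beats GRAY)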

In my thesis (forthcoming) I develop a formal semantics for multiple
inheritance with exceptions in semantic network languages such as
NETL. This brings us to the second problem. If we choose a
reasonable formal semantics for inheritance, then inheritance cannot
be computed on a marker propagation machine, because we need to pass
around more information than is possible on such a limited
architecture. The algorithms that were supposed to implement NETL on
a marker propagation machine were wrong: they suffered from race
conditions and other nasty behavior when run on nontrivial networks.
There is a solution called "conditioning" in which the network is
pre-processed on a serial machine by adding enough extra links to
ensure that the marker propagation algorithms always produce correct
results. But the need for serial preprocessing removes much of the
attractiveness of the parallel architecture.
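
For concreteness, marker propagation amounts to flooding a single bit
outward along links; a serial simulation of marking everything above a
node in the IS-A hierarchy looks like this (a sketch, not the NETL
algorithms). Each node learns only that the marker is on or off, which
is why inheritance with exceptions, where it matters where a value
came from, strains the scheme.

    (defparameter *isa-links*
      '((clyde royal-elephant) (royal-elephant elephant) (elephant mammal)))

    ;; Set a marker on START and on everything reachable up IS-A links.
    (defun propagate (start)
      (let ((marked '()))
        (labels ((walk (node)
                   (unless (member node marked)
                     (push node marked)
                     (dolist (parent (cdr (assoc node *isa-links*)))
                       (walk parent)))))
          (walk start))
        marked))

    ;; (propagate 'clyde) => (MAMMAL ELEPHANT ROYAL-ELEPHANT CLYDE)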

I think the NETL language design stands on its own as a major
contribution to knowledge representation. It raises fascinating
semantic problems, most of which remain to be solved. The marker
propagation part doesn't look too promising, though. Systems with
NETL-like semantics will almost certainly be built in the future, but
I predict they will be built on top of different parallel
architectures.

-- Dave Touretzky

------------------------------

Date: Thu 18 Aug 83 13:46:13-PDT
From: David Rogers <DRogers@SUMEX-AIM.ARPA>
Subject: NETL and hardware

In Issue 40 of the AIList Alan Glasser asked about hardware
implementations using marker passing a la NETL. The closest hardware I
am aware of is called the Connection Machine, and is being developed
at MIT by Alan Bawden, Dave Christman, and Danny Hillis (apologies if
I left someone out). The project involves building a model with about
2^10 processors. I'm not sure of its current status, though I have
heard that a company is forming to build and market prototype CM's.

I have heard rumors of the SPICE project at CMU; though I am
not aware of any results pertaining to hardware, the project seems to
have some measure of priority there. Hopefully members of each of
these projects will also send notes to AIList...

David Rogers, DRogers@SUMEX-AIM

------------------------------

Date: Thu, 18 Aug 1983 22:01 EDT
From: Scott E. Fahlman <Fahlman@CMU-CS-C.ARPA>
Subject: NETL


I've only got time for a very quick response to Alan Glasser's query
about NETL. Since the book was published we have done the following:

1. Our group at CMU has developed several design sketches for
practical NETL machine implementations of about a million processing
elements. We haven't built one yet, for reasons described below.

2. David B. McDonald has done a Ph.D. thesis on noun group
understanding (things like "glass wine glass") using a NETL-type
network to hold the necessary world knowledge. (This is available as
a CMU Tech Report.)

3. David Touretzky has done a thorough logical analysis of NETL-style
inheritance with exceptions, and is currently writing up his thesis on
this topic.

4. I have been studying the fundamental strengths and limitations of
NETL-like marker-passing compared to other kinds of massively parallel
computation. This has gradually led me to prefer an architecture that
passes numbers or continuous values over the single-bit marker-passing
of NETL.

For the past couple of years, I've been putting most of my time into
the Common Lisp effort -- a brief foray into tool building that got
out of hand -- and this has delayed any plans to begin work on a NETL
machine. Now that our Common Lisp is nearly finished, I can think
again about starting a hardware project, but something more exciting
than NETL has come along: the Boltzmann Machine architecture that I am
working on with Geoff Hinton of CMU and Terry Sejnowski of
Johns Hopkins. We will be presenting a paper on this at AAAI.

Very briefly, the Boltzmann machine is a massively parallel
architecture in which each piece of knowledge is distributed over many
units, unlike NETL in which concepts are associated with particular
pieces of hardware. If we can make it work, this has interesting
implications for reliable large-scale implementation, and it is also a
much more plausible model for neural processing than is something like
NETL.
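
For the curious, the textbook flavor of such a unit (a sketch of the
general idea, not the actual CMU design): each element is a stochastic
binary unit that turns on with a probability given by a sigmoid of its
weighted input, scaled by a "temperature" parameter.

    ;; One stochastic binary unit: the probability of turning on follows
    ;; a sigmoid of the energy gap (weighted sum of neighbour states)
    ;; divided by the temperature.
    (defun update-unit (weights states temperature)
      (let* ((gap (reduce #'+ (mapcar #'* weights states)))
             (p-on (/ 1.0 (+ 1.0 (exp (- (/ gap temperature)))))))
        (if (< (random 1.0) p-on) 1 0)))

    ;; (update-unit '(0.5 -1.0 2.0) '(1 0 1) 1.0)
    ;;   => 1 about 92% of the time, since the gap here is +2.5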

So that's what has happened to NETL.

-- Scott Fahlman (FAHLMAN@CMU-CS-C)

------------------------------

End of AIList Digest
********************
