Neuron Digest	Tuesday, 13 Dec 1988		Volume 4 : Issue 33 

Today's Topics:
ANNs vs Symbolic Systems - Retention of past learning?
Back-propagation question
Sources "Boltzmann Machine simulator" in Xlisp.
Learning Image Representation by Gabor Basis Functions
Back-Propagation with other non-linear functions?
Re: Learning arbitrary transfer function
Some biological questions
DARPA Announcement
DARPA Neural Network Study
Neural Net Small Business Solicitations


Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"

------------------------------------------------------------

Subject: ANNs vs Symbolic Systems - Retention of past learning?
From: ucsd!cs.UCSD.EDU!schraudo@ucbvax.Berkeley.EDU (Nici Schraudolph)
Organization: what, organized - me??
Date: Fri, 02 Dec 88 17:47:56 -0800

>Subject: RE: advantages of NNs over symbolic systems
>From: kortge@psych.Stanford.EDU (Chris Kortge)
>
>>From: bradb@ai.toronto.edu (Brad Brown)
>> [...]
>> (2) Neural nets can adapt to changes in their environment.
>> [...]
>
>I'm a Connectionist, but I don't think this advantage typically holds. The
>powerful existing learning procedures, those which can learn distributed
>representations (e.g. back-prop), actually require that the environment
>(i.e., the input distribution) remain _fixed_. If, after learning, you
>change the environment a little bit, you can't just train on the new
>inputs; rather, you must retrain on the entire distribution. Otherwise,
>the NN happily wipes out old knowledge in order to learn the new.
>
>[...]

There is a way to implement a kind of attention mechanism in
back-prop nets by maintaining several sets of weights with different
learning rates: weights with a high learning rate but fast exponential
decay handle novel inputs, whereas more inert weights without decay
retain previous knowledge.
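
A minimal sketch of that two-timescale idea (the class name, learning
rates, and decay constant below are my own assumptions for
illustration, not taken from any published scheme): each connection
carries a slow, non-decaying weight plus a fast, decaying weight, and
the effective connection strength is their sum.

    import numpy as np

    class FastSlowWeights:
        """One layer whose effective weight matrix is the sum of a
        slow (inert, non-decaying) and a fast (high learning rate,
        exponentially decaying) component."""

        def __init__(self, n_in, n_out,
                     slow_lr=0.01, fast_lr=0.5, decay=0.9):
            self.W_slow = np.zeros((n_out, n_in))  # retains old knowledge
            self.W_fast = np.zeros((n_out, n_in))  # tracks novel inputs
            self.slow_lr, self.fast_lr, self.decay = slow_lr, fast_lr, decay

        def forward(self, x):
            return (self.W_slow + self.W_fast) @ x

        def update(self, x, delta):
            # delta-rule style update; delta is the back-propagated
            # error signal for this layer's outputs.
            grad = np.outer(delta, x)
            self.W_slow += self.slow_lr * grad  # slow, permanent learning
            # fast weights learn quickly but decay back toward zero,
            # so adjustments to novel inputs fade instead of
            # overwriting the slow weights.
            self.W_fast = self.decay * self.W_fast + self.fast_lr * grad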

I think Geoff Hinton in Toronto is/was doing something along these
lines though I don't know any details. References, anyone?

#####################################################################
#  Nici Schraudolph                        nschraudolph@ucsd.edu    #
#  University of California, San Diego     ...!ucsd!nschraudolph    #
#####################################################################
Disclaimer: U.C. Regents and I share no common opinions whatsoever.


------------------------------

Subject: Back-propagation question
From: reiter@endor.harvard.edu (Ehud Reiter)
Organization: Aiken Computation Lab Harvard, Cambridge, MA
Date: Mon, 05 Dec 88 17:23:18 +0000

Is anyone aware of any empirical comparisons of back-propagation to
other algorithms for learning classifications from examples (e.g.
decision trees, exemplar learning)? The only such article I've seen
is Stanfill & Waltz's article in the Dec 86 CACM, which claims that
"memory-based reasoning" (a.k.a. exemplar learning) does better than
back-prop at learning word pronunciations. I'd be very interested in
finding articles which look at other learning tasks, or articles which
compare back-prop to decision-tree learners.

The question I'm interested in is whether there is any evidence that
back-prop has better performance than other algorithms for learning
classifications from examples. This is a pure engineering question -
I'm interested in what works best on a computer, not in what people
do.

Thanks.
Ehud Reiter
reiter@harvard (ARPA,BITNET,UUCP)
reiter@harvard.harvard.EDU (new ARPA)

------------------------------

Subject: Sources "Boltzmann Machine simulator" in Xlisp.
From: mcvax!vmucnam!occam@uunet.UU.NET
Date: Tue, 06 Dec 88 19:49:09 +0000

Could someone please send me e-mail and tell me how to get these sources?

Thankx....Rodrigo Laurens
C.N.A.M Paris
FRANCE.
e-mail
occam@vmucnam.UUCP

[[Editor's Note: I assume he's talking about Betz's public domain
Xlisp. If there are other LISP versions around, perhaps M. Laurens
wouldn't mind porting the code. If you reply directly, please cc:
neuron-request@hplabs.hp.com! -PM ]]

------------------------------

Subject: Learning Image Representation by Gabor Basis Functions
From: Dario Ringach <dario%TECHUNIX.BITNET@CUNYVM.CUNY.EDU>
Date: Wed, 07 Dec 88 08:05:18 +0200

Following the generalization by Niranjan et al. of the nodes of the
back-propagation network: if we multiply the Gaussian nodes by sines
and cosines centered at coordinates in frequency space, we get
multidimensional Gabor basis functions. It might be interesting to
look for image representations in the non-uniform frequency-position
space, using back-prop with these biologically based basis functions
to minimize the error of the reconstructed image, and expecting the
network to find a good tradeoff between spatial sampling and effective
bandwidth. Has anyone tried this approach? Any comments?
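
For concreteness, here is a minimal sketch of one such node: a 2-D
Gabor basis function, i.e. a Gaussian envelope centered at (x0, y0) in
image space, multiplied by a sinusoid whose frequency-space
coordinates are (u0, v0). The function and parameter names are my own
assumptions for illustration, not taken from Niranjan et al.

    import numpy as np

    def gabor_2d(x, y, x0, y0, sigma, u0, v0, phase=0.0):
        """Gaussian envelope at (x0, y0) modulating a sinusoid at
        spatial frequency (u0, v0) cycles per pixel."""
        envelope = np.exp(-((x - x0)**2 + (y - y0)**2)
                          / (2.0 * sigma**2))
        carrier = np.cos(2.0 * np.pi * (u0 * (x - x0) + v0 * (y - y0))
                         + phase)
        return envelope * carrier

    # Example: one basis function sampled on a 32x32 image grid.
    xs, ys = np.meshgrid(np.arange(32), np.arange(32))
    g = gabor_2d(xs, ys, x0=16, y0=16, sigma=4.0, u0=0.1, v0=0.0)

Varying sigma against (u0, v0) is exactly the spatial-sampling versus
bandwidth tradeoff mentioned above.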

Dario

------------------------------

Subject: Back-Propagation with other non-linear functions?
From: Ho Chung LUI <ISSLHC%NUSVM.BITNET@CUNYVM.CUNY.EDU>
Date: Sat, 10 Dec 88 15:18:43 +0700

It seems to me that everyone is using the sigmoidal function

f(y) = 1 / (1 + exp(-y + thr)),   where thr = threshold

to do back-propagation. However, in theory any nonlinear function
that is bounded between 0 and 1 and continuously differentiable would
do.

Has anyone used other nonlinear functions (preferably ones that are
easier to compute) to do back-prop successfully?
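
One candidate that avoids the exp() call entirely is y/(1 + |y|),
rescaled into (0, 1). A minimal sketch comparing the two units, each
returning its output and its derivative (the function names and the
threshold handling are my own; whether the cheaper unit actually
trains as well is exactly the open question here):

    import numpy as np

    def sigmoid(y, thr=0.0):
        # Standard logistic unit from above; derivative is f*(1-f).
        f = 1.0 / (1.0 + np.exp(-(y - thr)))
        return f, f * (1.0 - f)

    def fast_sigmoid(y, thr=0.0):
        # Cheaper alternative: no exp(), still smooth and bounded in
        # (0, 1); derivative is 0.5 / (1 + |y - thr|)**2.
        z = y - thr
        f = 0.5 * (z / (1.0 + np.abs(z)) + 1.0)
        return f, 0.5 / (1.0 + np.abs(z)) ** 2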

Ho Lui
Institute of Systems Science
Singapore
Acknowledge-To: <ISSLHC@NUSVM>

------------------------------

Subject: Re: Learning arbitrary transfer function
From: Michael Bass <MBASS@uoneuro.uoregon.edu>
Date: Sun, 11 Dec 88 10:58:00 -0800

<joe@amos.ling.ucsd.edu> writes:
>>Although my knowledge of neural nets is limited, I won't buy what is
>>written above. Most persons can, for example, throw a baseball more
>>or less at the target in spite of gravity. This requires a non-linear
>>calculation. This is not done via multiplication tables. Sure it is
>>done by "experience", but so are neural network calculations.
>
>Hmm. I'm no expert on human learning, but I don't buy what's written above.
>
>When I throw a baseball off the top of a ten-story building, I am very
>bad at hitting that at which I aimed (e.g., students). This would lead
>me to theorize that I have not learned a non-linear relationship.

Maybe you haven't learned that non-linear relationship yet, but the
relationship you eventually learn will be "non-linear." That doesn't
mean your brain gathers quantitative estimates of gravity and wind
speed, runs through a series of multiplications, and then commands
your arm to impart x newtons of force in a certain direction. Rather,
the brain is more adaptable than that. You throw the ball a couple of
times, each time hitting left of the target, so you try throwing a
little more to the right (error correction -- modifying synaptic
connections). Pretty soon you forget about error correction and your
network has been trained to do the task -- you're bopping students
left and right. Then, as a storm rises (in the administration
building), you learn to compensate for the wind.
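
A toy version of that trial-and-error loop (every number below is made
up for illustration): the thrower never models gravity or wind, it
just nudges its aim by the observed miss.

    # The "wind" is an unknown constant bias the thrower never sees
    # directly; only the landing point is observed.
    target = 0.0
    wind = 1.3    # the storm from the administration building
    aim = -2.0    # initial (bad) aim
    lr = 0.5      # how strongly each miss corrects the next throw

    for throw in range(10):
        landing = aim + wind          # where the ball actually lands
        error = landing - target      # signed miss distance
        aim -= lr * error             # "throw a little more to the right"
        print(f"throw {throw}: landed at {landing:+.2f}")

The aim converges to -wind without the wind ever being estimated
explicitly, which is the point being made about adaptation.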

Non-linearity is an explanation of adaptability. You can't say that
because the network didn't succeed at the beginning of the learning
paradigm, it didn't or couldn't learn a non-linear relationship.
After learning, the relationship can be described as non-linear. I
don't think the brain cares whether a relationship is linear or
non-linear; it has adapted its synapses to accomplish a task (while
not even being aware of its own mechanism!).

Michael Bass
biochemist & neurobiologist
Institute of Neuroscience
University of Oregon
mbass@uoneuro.uoregon.edu

------------------------------

Subject: Some biological questions
From: csrobe@cs.wm.edu (Chip Roberson)
Date: Sun, 11 Dec 88 19:41:49 -0500

I have some questions about the biological side of neurons and neural
networks. What I am looking for are a few succinct answers, hopefully
accompanied by references. One caveat: I am a computer science
student, so please bear that in mind when reading and/or replying.

How diverse are the neurons in a small system of neurons (or in
selected regions of the brain)?

Can somebody give me a general idea of the complexity of the chemical
reactions that occur in the cell body? (A vague question, I know, but
I'm just trying to get an idea of how much is going on in there.)

Approximately, how many chemicals/ions have been found in and around a
neuron?

Is it true that the basic structure of the brain is determined when
you are born?

How does the shape of the neuron affect its "computation"?

Finally, has anyone determined what role, if any, DNA might play in
the processing performed by a neuron?

If anyone is interested, these questions were raised during a reading
of the first two chapters of James S. Albus' "Brains, Behavior, and
Robotics".

Thanks,
-c
-------------------------------------------------------------------------
Chip Roberson            ARPANET: csrobe@cs.wm.edu
1328-F Mt. Vernon Ave.   BITNET:  #csrobe@wmmvs.bitnet
Williamsburg, VA 23185   UUCP:    ...!uunet!pyrdc!gmu90x!wmcs!csrobe
-------------------------------------------------------------------------

[[Editor's note: Some excellent questions, many without good answers.
Next week, I'll try to respond (after my Neurobiology final!) and will
accumulate any other answers you readers send in. -PM ]]

------------------------------

Subject: DARPA Announcement
From: will@ida.org (Craig Will)
Date: Sun, 11 Dec 88 12:05:38 -0500


DARPA Announces New
Neural Network Program

(Based on an Office of the Secretary of Defense news release).


The Defense Advanced Research Projects Agency (DARPA) has announced a
major new program in artificial neural networks. The program was
described as having the goals of determining the potential advantages
of artificial neural networks, advancing neural network theory, and
developing advanced hardware technology.


The program will be "a 28-month, $33 million effort with three
components: comparative performance measurements to identify,
investigate and measure potential advantages of artificial neural
networks involving complex information processing and autonomous
control systems; theory and modeling efforts to advance the
state-of-the-art; and hardware technology base development efforts to
develop advanced hardware implementation technologies as the basis for
future construction of artificial neural network computing machines.
The accomplishments of this initial effort will determine the future
direction of a DARPA program."


Competitive solicitations for participation in the program will be
published in the Commerce Business Daily.

[According to sources in the Office of the Secretary of
Defense, the CBD announcement will be sometime in December,
probably before Christmas. For more details on the program,
see the upcoming issue (volume 2, no. 3) of Neural Network
Review.]


Craig A. Will
Institute for Defense Analyses
will@ida.org



------------------------------

Subject: DARPA Neural Network Study
From: will@ida.org (Craig Will)
Date: Sun, 11 Dec 88 12:08:12 -0500


The DARPA Neural Network Study
AFCEA Press Version
Summary and Analysis in Neural Network Review


The AFCEA International Press version of the DARPA Neural Network
Study is expected to be released to the public about Monday, December
12. This is the roughly 600-page document containing the individual
reports of each of the technical panels, which AFCEA Press is
publishing as a hardbound book. AFCEA Press will begin shipping copies
at that time.


The book costs $49.95 plus $5.00 for shipping in the
US, $10.00 foreign. Orders go to AFCEA International Press,
4400 Fair Lakes Court, Fairfax, VA 22033. (703) 631-6190.


A 25,000-word, 30-page summary and critical analysis of
the 600-page DARPA Study will be published in Neural Network
Review, a quarterly journal published by the Washington
Neural Network Society. Individual copies of the DARPA
issue are available for $6.00; a one-year subscription to
Neural Network Review is $24.00 for 4 issues. Orders go to
the Washington Neural Network Society, P. O. Box 427, Dunn
Loring, VA 22027. Copies of the DARPA issue will be mailed
out beginning about a week after the public release of the
DARPA Study document (roughly December 16).


Craig Will
Institute for Defense Analyses
Alexandria, Virginia
will@ida.org

------------------------------

Subject: Neural Net Small Business Solicitations
From: will@ida.org (Craig Will)
Date: Sun, 11 Dec 88 12:06:19 -0500


Small Business
Innovation Research
Program

Department of Defense


(The following was prepared for publication in Neural
Network Review. It is being distributed via the net because
it is time sensitive and the next issue of Neural Network
Review has been held up pending public release of the DARPA
Neural Network Study. -- Craig Will, Institute for Defense
Analyses. will@ida.org)


The U.S. Department of Defense has announced its 1989 solicitation
for the Small Business Innovation Research (SBIR) Program. This
program provides for research contracts for small businesses in
various program areas designated by DoD component agencies, including
the Army, Navy, Air Force, Defense Advanced Research Projects Agency
(DARPA), and Strategic Defense Initiative Organization (SDIO).

The program is in three phases. Phase I awards are essentially
feasibility studies of 6 months in length and $50,000. Phase I
contractors compete for Phase II awards of 2 years in length and up to
$500,000. Phase III of the program is for commercial application of
the research.

Proposals must be no longer than 25 pages in length,
including the cover sheet, summary, cost proposal, resumes
and any attachments. Deadline for proposals is January 6,
1989.

A number of topics in the solicitation call for neural network
research, or describe problems for which neural networks might be
used. The following are the topics most directly related to neural
networks:


N89-003. Acoustic Classification with Parallel-Processing Networks.
Office of Naval Research, Arlington, Virginia. A research project
whose objective is to develop a prototype system that can, in concert
with a human, "determine the source of a non-speech acoustic signal
from its transient characteristics." "The exploitation of artificial
neural network or neuro-computer systems is encouraged."


N89-098. Neural Net Software Applications. Naval Sea Systems
Command, Arlington, Virginia. An "exploratory development" project
with a purpose "to evaluate the level of maturity of currently
available neural network software and demonstrate potential
applications within the Navy where the best payback can be expected."
The proposer "must be thoroughly familiar with both expert systems and
neural nets."



N89-160. Artificial Intelligence Based Target Recognition. Naval
Surface Weapons Center, White Oak, Maryland. An "exploratory
development" project. One suggested approach to "the development of a
hybrid image understanding system" is the use of "one or more neural
networks for feature extraction and recognition."


AF89-036. Neural Computing Architectures for Natural Language and/or
Vision. Rome Air Development Center, Griffiss AFB, NY. The goal of
this topic is to develop "natural language and vision interfaces for
computer systems." They suggest experimenting with different
approaches for "knowledge representation and retrieval using neural
computing techniques." Another RADC program, AF89-053, involves AI
techniques for natural-language message routing, but might be done
with neural network techniques. RADC had a 1988 solicitation for
automatic target recognition using neural networks.


AF89-077. Crew Performance Predictions and Enhancements. Human
Systems Division, Brooks AFB, Texas. Of four projects of interest,
one involves applying neural networks to measuring and analyzing human
performance in piloting combat aircraft.


AF89-097. Artificial Intelligence and Parallel Processing
Technologies for Electronic Combat Applications. Aeronautical Systems
Division, Wright-Patterson AFB, Ohio. This project suggests "a blend
of advanced AI technologies", including knowledge-based systems and
neural networks together with multiprocessors and distributed
processing systems, to solve "a current electronic combat problem".


AF89-163. Artificial Intelligence Applied to Aeronautical Systems.
Aeronautical Systems Division, Wright-Patterson AFB, Ohio. This
program involves applying AI to "all aspects of the Air Force
Mission", including office automation, logistics, and maintenance, as
well as aircraft. This program has funded neural network projects in
1987 and 1988.


AF89-241. Neurocomputers, New Architectures, and Models of
Computation. Air Force Office of Scientific Research, Bolling Air
Force Base, Washington, DC. The objective of this program is "to
stimulate development of new computer architectures that implement
neural network / connectionist models of computation." They are
interested both in "general purpose" neurocomputer architectures that
can implement "as many neural network models as possible", and in
"special purpose machines" designed for a specific type of neural
network architecture or application problem. They suggest the
"integration of new technologies, such as optics and organic polymers"
as well as integrating neural net machines with traditional AI and
database computers. This agency has traditionally funded relatively
fundamental research with a broad interdisciplinary flavor.


AF89-243. Life Sciences Basic Research. Air Force Office of
Scientific Research, Bolling AFB, Washington, DC. A broad general
solicitation covering five areas: toxicology, neuroscience, vision,
audition, and cognition. The neuroscience area particularly suggests
integrating neurobiology and AI, and studying the relationship between
"neural architectures and formal computation".



DARPA89-004. Investigation of Potential Applications of Neural
Network Architecture to Seismic Processing Problems. Defense Advanced
Research Projects Agency, Arlington, Virginia. The goal is to
investigate "neural network architectures and methods to evaluate
seismic waveforms for extraction of parameters for seismic event
identification." Their interest is in distinguishing signals
representing naturally occurring events from those representing
explosions.


SDIO89-010. Computer Architecture, Algorithms, and Language.
Strategic Defense Initiative Organization, The Pentagon, Washington,
DC. A general solicitation for computing methods capable of
"order-of-magnitude advances". It includes architectures that are
robust and fault-tolerant, including innovative techniques such as
neural networks, and also combined rule-based AI and neural networks
for man-machine interfaces and optical computing.


For more details, obtain a copy of the SBIR Program Solicitation book
(358 pages in length) from the Defense Technical Information Center:
Attn: DTIC/SBIR, Building 5, Cameron Station, Alexandria, Virginia
22304-6145. Telephone: toll-free, (800) 368-5211. For Virginia,
Alaska, Hawaii: (202) 274-6902.

------------------------------

End of Neuron Digest
*********************
