Neuron Digest Volume 05 Number 13
Neuron Digest	Monday, 13 Mar 1989		Volume 5 : Issue 13 

Today's Topics:
Call for Minitrack in Business Applications of NN
Connection between Hidden Markov Models and Connectionist Networks
EURASIP Workshop on Neural Nets
NIPS POST-MEETING WORKSHOPS
Preprint - Performance of a Stochastic Learning Microchip
Rules and Variables in a Connectionist Reasoning System
Talk at ICSI - DIFICIL
Talk at ICSI - "Perceptual Organization for Computer Vision"
Talk at ICSI - Spreading Activation Meets Back Propagation:
Talk at ICSI - The Sphinx Speech Recognition System
TR - ANNs and Sequential Paraphrasing of Script-Based Stories
TR - ANNs in Robot Motion Planning
TR - Dynamic Node Creation in Back-Prop Nets
TR - Learning State Space Trajectories in Recurrent ANNs
TR - Speeding up ANNs in the "Real World"

Send submissions, questions, address maintenance and requests for old issues to
"neuron-request@hplabs.hp.com" or "{any backbone,uunet}!hplabs!neuron-request"
ARPANET users can get old issues via ftp from hplpm.hpl.hp.com (15.255.16.205).

------------------------------------------------------------

Subject: Call for Minitrack in Business Applications of NN
From: T034360%UHCCMVS.BITNET@CUNYVM.CUNY.EDU
Date: Sat, 04 Mar 89 01:22:00 -1000

CALL FOR PAPERS
HAWAII INTERNATIONAL CONFERENCE ON SYSTEM SCIENCES - 23


NEURAL NET APPLICATIONS IN BUSINESS


KAILUA-KONA, HAWAII - JANUARY 2-5, 1990


The Emerging Technologies and Applications Track of HICSS-23 will contain a
special set of sessions focusing on a broad selection of topics in the area
of Neural Net Applications in Business. The presentations will provide a
forum to discuss new advances in these applications.

Papers are invited that may be theoretical, conceptual, tutorial, or
descriptive in nature. Of special interest, however, are papers detailing
solutions to practical problems. Those papers selected for presentation
will appear in the Conference Proceedings, which are published by the
Computer Society of the IEEE. HICSS-23 is sponsored by the University of
Hawaii in cooperation with the ACM, the IEEE Computer Society, and the
Pacific Research Institute for Information Systems and Management (PRIISM).
Submissions are solicited in the areas:

(1) The application of neural nets to model business tasks performed by
people (e.g., the Dutta and Shekhar paper on Applying Neural Nets to Rating
Bonds, ICNN, 1988, Vol. II, pp. 443-450)

(2) The development of neural nets to model human decision tasks (e.g.,
Gluck and Bower, Journal of Experimental Psychology: General, 117(3),
227-247)

(3) The application of neural nets to improving modeling tools commonly
used in business (e.g., neural networks to perform regression-like modeling)

(4) The embedding of neural nets in commercial products (e.g., OCR
scanners)

Our order of preference is from (1) to (4) above. Papers which detail
actual usage of neural networks are preferred to those which only propose
uses.

INSTRUCTIONS FOR SUBMITTING PAPERS: Manuscripts should be 12-26 typewritten,
double-spaced pages in length. Do not send submissions that are
significantly shorter or longer than this. Each manuscript will be
subjected to refereeing. Manuscripts should have a title page that
includes the title of the paper, full name(s) of its author(s),
affiliation(s), complete mailing and electronic address(es), telephone
number(s), and a 300-word abstract of the paper.


DEADLINES

A 300-word optional abstract may be submitted by April 30, 1989 by
Email or mail. (If no reply to email in 7 days, send by U.S. mail also.)

Feedback to author concerning abstract by May 31, 1989.

Six paper copies of the manuscript are due by June 26, 1989.

Notification of accepted papers by September 1, 1989.

Accepted manuscripts, camera-ready, are due by October 1, 1989.

SEND SUBMISSIONS AND QUESTIONS TO:

Prof. William Remus
College of Business Administration
University of Hawaii
2404 Maile Way
Honolulu, HI 96822 USA
Tel.: (808)948-7608
EMAIL: CBADWRE@UHCCVM.BITNET
FAX: (808)942-1591

------------------------------

Subject: Connection between Hidden Markov Models and Connectionist Networks
From: thanasis kehagias <ST401843%BROWNVM.BITNET@VMA.CC.CMU.EDU>
Date: Mon, 13 Feb 89 00:47:00 -0500


The following paper explores the connection between Hidden Markov Models
and connectionist networks. Anybody interested in a copy, email me.
If you have a TeX setup, I will send you the DVI file; otherwise, give me
your physical mail address.

OPTIMAL CONTROL FOR TRAINING
THE MISSING LINK BETWEEN
HIDDEN MARKOV MODELS
AND CONNECTIONIST NETWORKS

by Athanasios Kehagias
Division of Applied Mathematics
Brown University
Providence, RI 02912



ABSTRACT

For every Hidden Markov Model there is a set of forward probabilities that
need to be computed for both the recognition and training problems. These
probabilities are computed recursively, and hence the computation can be
performed by a multistage, feedforward network that we will call the Hidden
Markov Model Network (HMMN). This network has exactly the same architecture
as the standard Connectionist Network (CN). Furthermore, training a Hidden
Markov Model is equivalent to optimizing a function of the HMMN; training a
CN is equivalent to minimizing a function of the CN. Due to the multistage
feedforward architecture, both problems can be seen as Optimal Control
problems. By applying standard Optimal Control techniques, we discover in
both problems that certain back-propagating quantities (backward
probabilities for the HMMN, backward-propagated errors for the CN) are of
crucial importance for the solution. So HMMNs and CNs are similar both in
architecture and training.
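The recursion described in the abstract is the standard HMM forward pass; a
minimal NumPy sketch (the two-state model and all numbers are invented for
illustration) shows the per-stage computation that each "layer" of such a
network would perform:

```python
import numpy as np

def forward(A, B, pi, obs):
    """Forward probabilities alpha[t, i] = P(o_1..o_t, state_t = i) for an
    HMM with transition matrix A, emission matrix B, and initial
    distribution pi.  Each recursion stage is one feedforward layer."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]              # stage 0: initialization
    for t in range(1, T):                     # stage t: one feedforward step
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

# Toy two-state model: likelihood of an observation sequence.
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
alpha = forward(A, B, pi, [0, 1, 0])
likelihood = alpha[-1].sum()
```

Summing the last stage gives the sequence likelihood used in both the
recognition and training problems.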

**************

I was influenced in this research by the work of H. Bourlard and C. C.
Wellekens (the HMM-CN connection) and Y. LeCun (Optimal Control
applications in CNs). As I was finishing my paper, I received a message
from J. N. Hwang saying that he and S. Y. Kung have written a paper that
includes similar results.

Thanasis Kehagias

------------------------------

Subject: EURASIP Workshop on Neural Nets
From: uunet!mcvax!inesc!alf!lba (Luis Borges de Almeida)
Date: Mon, 06 Mar 89 16:37:30 +0000


EURASIP WORKSHOP ON NEURAL NETWORKS

Sesimbra, Portugal
February 15-17, 1990


ANNOUNCEMENT AND CALL FOR PAPERS

The workshop will be held at the Hotel do Mar in Sesimbra, Portugal. It
will take place in 1990, from February 15 morning to 17 noon, and will be
sponsored by EURASIP, the European Association for Signal Processing. It
will be open to participants from all countries, both from inside and
outside of Europe.

Contributions from all fields related to the neural network area are
welcome. A (non-exclusive) list of topics is given below. Care is being
taken to ensure that the workshop will have a high level of quality.
Proposed contributions will be evaluated by an international technical
committee. A proceedings volume will be published, and will be handed to
participants at the beginning of the workshop. The number of participants
will be limited to 50. Full contributions will take the form of oral
presentations, and will correspond to papers in the proceedings. Some short
contributions will also be accepted, for presentation of ongoing work,
projects (ESPRIT, BRAIN, DARPA,...), etc. They will be presented in poster
format, and will not originate any written publication. A small number of
non-contributing participants may also be accepted. The official language of
the workshop will be English.

TOPICS:

- signal processing (speech, image,...)
- pattern recognition
- algorithms (training procedures, new structures, speedups,...)
- generalization
- implementation
- specific applications where NNs have proved better than other
approaches
- industrial projects and realizations


Submissions, both for long and for short contributions, will consist of
(strictly) 2-page summaries. Three copies should be sent directly to the
Technical Chairman, at the address given below. The calendar for
contributions is as follows:


                         Full contributions   Short contributions

Deadline for submission  June 1, 1989         Oct 1, 1989
Notif. of acceptance     Sept 1, 1989         Nov 15, 1989
Camera-ready paper       Nov 1, 1989



SESIMBRA...

... is a fishermen's village, located in a nice region about 30 km south of
Lisbon. Special transportation from/to Lisbon will be arranged. The
workshop will end on a Saturday at lunch time; therefore, the participants
will have the option of either flying back home in the afternoon, or staying
for sightseeing for the remainder of the weekend in Sesimbra and/or Lisbon.
An optional program for accompanying persons is being organized.

For further information, send the coupon below to the general chairman, or
contact him directly.


ORGANIZING COMMITTEE:

GENERAL CHAIRMAN

Luis B. Almeida
INESC
Apartado 10105
P-1017 LISBOA CODEX
PORTUGAL

Phone: +351-1-544607.
Fax: +351-1-525843.
E-mail: {any backbone, uunet}!mcvax!inesc!lba


TECHNICAL CHAIRMAN

Christian Wellekens
Philips Research Laboratory
Av. Van Becelaere 2
Box 8
B-1170 BRUSSELS
BELGIUM

Phone: +32-2-6742275


TECHNICAL COMMITTEE

John Bridle (Royal Signals and Radar Establishment, Malvern, UK)
Herve Bourlard (Intern. Computer Science Institute, Berkeley, USA)
Frank Fallside (University of Cambridge, Cambridge, UK)
Francoise Fogelman (Ecole de H. Etudes en Informatique, Paris, France)
Jeanny Herault (Institut Nat. Polytech. de Grenoble, Grenoble, France)
Larry Jackel (AT&T Bell Labs, Holmdel, NJ, USA)
Renato de Mori (McGill University, Montreal, Canada)


REGISTRATION, FINANCE, LOCAL ARRANGEMENTS

Joao Bilhim
INESC
Apartado 10105
P-1017 LISBOA CODEX
PORTUGAL

Phone: +351-1-545150.
Fax: +351-1-525843.



WORKSHOP SPONSOR:

EURASIP - European Association for Signal Processing


CO-SPONSORS:

INESC - Instituto de Engenharia de Sistemas e Computadores, Lisbon,
Portugal

IEEE, Portugal Section


*-------------------------------- cut here ---------------------------------*

Please keep me informed about the EURASIP Workshop on Neural Networks

Name:

University/Company:

Address:

Phone: E-mail:

[ ] I plan to attend the workshop

I plan to submit a contribution [ ] full [ ] short

Preliminary title:

(send to Luis B. Almeida, at address given above)

------------------------------

Subject: NIPS POST-MEETING WORKSHOPS
From: Stephen J Hanson <jose@tractatus.bellcore.com>
Date: Tue, 14 Feb 89 17:22:22 -0500


NIPS-89 POST-CONFERENCE WORKSHOPS
DECEMBER 1-2, 1989

REQUEST FOR PROPOSALS

Following the regular NIPS program, workshops on current topics in
Neural Information Processing will be held on December 1 and 2, 1989,
at a ski resort near Denver. Proposals by qualified individuals
interested in chairing one of these workshops are solicited.

Past topics have included: Rules and Connectionist Models; Speech,
Neural Networks and Hidden Markov Models; Imaging Techniques in
Neurobiology; Computational Complexity Issues; Fault Tolerance in
Neural Networks; Benchmarking and Comparing Neural Network
Applications; Architectural Issues; Fast Training Techniques.

The format of the workshops is informal. Beyond reporting on past
research, their goal is to provide a forum for scientists actively
working in the field to freely discuss current issues of concern and
interest. Sessions will meet in the morning and in the afternoon of
both days, with free time in between for ongoing individual exchange
or outdoor activities. Specific open and/or controversial issues are
encouraged and preferred as workshop topics. Individuals interested
in chairing a workshop must propose a topic of current interest and
must be willing to accept responsibility for their group's discussion.
Discussion leaders' responsibilities include: arranging brief informal
presentations by experts working on the topic, moderating or leading the
discussion, and reporting its high points, findings, and conclusions to
the group during evening plenary sessions.

Submission Procedure: Interested parties should submit a short
proposal for a workshop of interest by May 30, 1989. Proposals should
include a title and a short description of what the workshop is to
address and accomplish. Each proposal should state why the topic is of interest
or controversial, why it should be discussed and what the targeted
group of participants is. In addition, please send a brief resume of
the prospective workshop chair, list of publications and evidence of
scholarship in the field of interest.

Mail submissions to:
Kathie Hibbard
NIPS89 Local Committee
Engineering Center
Campus Box 425
Boulder, CO, 80309-0425
Name, mailing address, phone number, and e-mail net address (if
applicable) should be on all submissions.

Workshop Organizing Committee:
Alex Waibel, Carnegie-Mellon, Workshop Chairman;
Howard Wachtel, University of Colorado, Workshop Local Arrangements;
Kathie Hibbard, University of Colorado, NIPS General Local
Arrangements;

PROPOSALS MUST BE RECEIVED BY MAY 30, 1989.

------------------------------

Subject: Preprint - Performance of a Stochastic Learning Microchip
From: Selma M Kaufman <smk@flash.bellcore.com>
Date: Fri, 17 Feb 89 09:45:33 -0500


Performance of a Stochastic Learning Microchip
Joshua Alspector, Bhusan Gupta, and Robert B. Allen


We have fabricated a test chip in 2 micron CMOS that can perform
supervised learning in a manner similar to the Boltzmann machine.
Patterns can be presented to it at 100,000 per second. The chip
learns to solve the XOR problem in a few milliseconds. We also
have demonstrated the capability to do unsupervised competitive
learning with it. The functions of the chip components are examined
and the performance is assessed.

For copies contact: Selma Kaufman, smk@flash.bellcore.com


------------------------------

Subject: Rules and Variables in a Connectionist Reasoning System
From: Lokendra Shastri <Shastri@cis.upenn.edu>
Date: Sun, 26 Feb 89 20:58:00 -0500



Technical report announcement, please send requests to glenda@cis.upenn.edu


A Connectionist System for Rule Based Reasoning with Multi-Place
Predicates and Variables

Lokendra Shastri and Venkat Ajjanagadde
Computer and Information Science Department
University of Pennsylvania
Philadelphia, PA 19104

MS-CIS-8906
LINC LAB 141

Abstract

McCarthy has observed that the representational power of most connectionist
systems is restricted to unary predicates applied to a fixed object. More
recently, Fodor and Pylyshyn have made a sweeping claim that connectionist
systems cannot incorporate systematicity and compositionality. These
comments suggest that representing structured knowledge in a connectionist
network and using this knowledge in a systematic way is considered difficult
if not impossible. The work reported in this paper demonstrates that a
connectionist system can not only represent structured knowledge and display
systematic behavior, but it can also do so with extreme efficiency. The
paper describes a connectionist system that can represent knowledge
expressed as rules and facts involving multi-place predicates (i.e., n-ary
relations), and draw limited, but sound, inferences based on this knowledge.
The system is extremely efficient - in fact, optimal, as it draws
conclusions in time proportional to the length of the proof.

It is observed that representing and reasoning with structured knowledge
requires a solution to the variable binding problem. A solution to this
problem using a multi-phase clock is proposed. The solution allows the
system to maintain and propagate an arbitrary number of variable bindings
during the reasoning process. The work also identifies constraints on the
structure of inferential dependencies and the nature of quantification in
individual rules that are required for efficient reasoning. These
constraints may eventually help in modeling the remarkable human ability of
performing certain inferences with extreme efficiency.
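Purely as an illustrative sketch of what binding propagation accomplishes
(the rule, slot names, and entities below are invented, and a Python
dictionary stands in for the report's multi-phase-clock mechanism): applying
a rule copies each entity, which in the network fires in its own clock
phase, from the antecedent's argument slots to the consequent's.

```python
def propagate(rule, fact):
    """rule: (pred_in, pred_out, slot_map), where slot_map sends each
    consequent argument slot to an antecedent slot.  fact: (pred, bindings).
    Each bound entity (standing in for a clock phase) is copied from
    antecedent slots to consequent slots."""
    pred_in, pred_out, slot_map = rule
    pred, bindings = fact
    assert pred == pred_in, "rule does not apply to this fact"
    return (pred_out, {out: bindings[inp] for out, inp in slot_map.items()})

# Invented example rule: give(giver, recipient, object) => own(owner, object)
rule = ("give", "own", {"owner": "recipient", "object": "object"})
fact = ("give", {"giver": "John", "recipient": "Mary", "object": "Book1"})
inferred = propagate(rule, fact)
# inferred == ("own", {"owner": "Mary", "object": "Book1"})
```

In the connectionist system this copying happens in parallel across all
argument nodes, which is what allows inference in time proportional to the
length of the proof.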

------------------------------

Subject: Talk at ICSI - DIFICIL
From: collison%icsi.Berkeley.EDU@berkeley.edu (Alexandra Collison)
Date: Wed, 15 Feb 89 13:58:02 -0800




The International Computer Science Institute
is pleased to announce a talk:

Dr. Susan Hollbach Weber
University of Rochester

Monday, February 27, 1989
at 2:30 p.m.

"DIFICIL: Direct Inferences and Figurative Interpretation
in a Connectionist Implementation of Language understanding."



Given that conceptual categories possess properties (or slots)
and values (or fillers), the structural relationships between
these attributes can account for many complex behaviours,
ranging from direct inferences to the interpretation of novel
figures of speech. This talk presents a connectionist implementation
of a functional model of category structure in which
categories are multi-faceted and each facet is functionally
motivated.

The resulting system, known as DIFICIL, captures a wide variety
of cognitive effects. Direct inferences arise from literal
adjective-noun combinations, where inferences are drawn about
property values based on the named property value; for example,
green apples are unripe and sour, and green grass is soft and
cool. Property dominance effects indicate that the adjective
`green' actually primes the property values `unripe' and `sour'
for the category `apple'. Prototype effects arise within a
given aspect of a category, as the tightly coupled property
values interact with each other. Finally, the model provides
a mechanism to interpret novel figurative adjective-noun combi-
nations, such as `green idea': property abstraction hierarchies
supply all possible interpretations suggested by the conceptual
aspects normally associated with the adjective.


This talk will be held in ICSI's Main Lecture Hall.
1947 Center Street, Suite 600, Berkeley, CA 94704
(On Center between Milvia and Martin Luther King Jr. Way)


------------------------------

Subject: Talk at ICSI - "Perceptual Organization for Computer Vision"
From: collison%icsi.Berkeley.EDU@berkeley.edu (Alexandra Collison)
Date: Wed, 22 Feb 89 12:35:43 -0800



The International Computer Science Institute
is pleased to present a talk:

Thursday, March 9, 1989 2:30 p.m.

Rakesh Mohan
Institute for Robotics and Intelligent Systems,
University of Southern California

"Perceptual Organization for Computer Vision"


Our ability to detect structural relationships among similar image
tokens is termed "perceptual organization". In this presentation, we will
discuss the grouping of intensity edges into "collations" on the basis of
the geometric relationships among them. These collations encode structural
information which can aid various visual tasks such as object segmentation,
correspondence processes (stereo, motion and model matching) and shape
inference. We will present two vision systems based on perceptual
organization, one to detect and describe buildings in aerial images and the
other to segment 2D scenes.


This talk will be held in the Main Lecture Hall at ICSI.
1947 Center Street, Suite 600, Berkeley, CA 94704
(On Center between Milvia and Martin Luther King Jr. Way)


------------------------------

Subject: Talk at ICSI - Spreading Activation Meets Back Propagation:
From: collison%icsi.Berkeley.EDU@berkeley.edu (Alexandra Collison)
Date: Fri, 17 Feb 89 15:44:46 -0800



The International Computer Science Institute
is pleased to present a talk:


Dr. James Hendler
ICSI and University of Maryland, College Park

Wednesday, February 22, 1989 12 noon


Spreading Activation Meets Back Propagation:
Towards higher level inferencing with distributed networks


Connectionism has recently seen a major resurgence of interest among both
artificial intelligence and cognitive science researchers. The spectrum of
these neural network approaches is quite large, ranging from structured
models, in which individual network units carry meaning, through distributed
models of weighted networks with learning algorithms. Very encouraging
results, particularly in ``low-level'' perceptual and signal processing
tasks, are being reported across the entire spectrum of these models. These
models have had more limited success, however, in those ``higher cognitive''
areas where symbolic models have traditionally shown promise: expert
reasoning, planning, and natural language processing.

However, although connectionist techniques have had only limited success in
such cognitive tasks, for a system to provide both ``low-level'' perceptual
functionality as well as demonstrating high-level cognitive abilities, it
must be able to capture the best features of each of the competing
paradigms. In this talk we discuss several steps towards providing such a
system by examining various models of spreading activation inferencing on
networks created by parallel distributed processing learning techniques.


This talk will be held in the ICSI Main Lecture Hall.
1947 Center Street, Suite 600, Berkeley, CA 94704
(On Center between Milvia and Martin Luther King Jr. Way)


------------------------------

Subject: Talk at ICSI - The Sphinx Speech Recognition System
From: collison%icsi.Berkeley.EDU@berkeley.edu (Alexandra Collison)
Date: Fri, 24 Feb 89 12:12:00 -0800


The International Computer Science Institute
is pleased to present a talk:

Dr. Kai-Fu Lee
Computer Science Department
Carnegie Mellon University
Pittsburgh, Pennsylvania

"The Sphinx Speech Recognition System"

In this talk, I will describe SPHINX, the first large-vocabulary
speaker-independent continuous speech recognition system. First, an
overview of the system will be presented. Next, I will describe some of our
recent enhancements, including:

- generalized triphone models
- word duration modeling
- function-phrase modeling
- between-word coarticulation modeling
- corrective training

Our most recent results with the 997-word resource management task are: 96%
word accuracy with a grammar (perplexity 60), and 82% without grammar
(perplexity 997).

I will also describe our recent results with:

- Speech recognition without vocabulary-specific training.
- Using neural networks for continuous speech recognition.


This talk will be held in the Main Lecture Hall at ICSI.
1947 Center Street, Suite 600, Berkeley, CA 94704
(On Center between Milvia and Martin Luther King Jr. Way)

------------------------------

Subject: TR - ANNs and Sequential Paraphrasing of Script-Based Stories
From: Risto Miikkulainen <risto@CS.UCLA.EDU>
Organization: UCLA Computer Science Department
Date: Thu, 23 Feb 89 14:15:22 -0800


[ Please send requests to valerie@cs.ucla.edu ]

A Modular Neural Network Architecture for
Sequential Paraphrasing of Script-Based Stories

Risto Miikkulainen and Michael G. Dyer
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024

Abstract

We have applied sequential recurrent neural networks to a fairly high-level
cognitive task, i.e. paraphrasing script-based stories. Using hierarchically
organized modular subnetworks, which are trained separately and in parallel,
the complexity of the task is reduced by effectively dividing it into
subgoals. The system uses sequential natural language input and output, and
develops its own I/O representations for the words. The representations are
stored in an external global lexicon, and they are adjusted in the course of
training by all four subnetworks simultaneously, according to the
FGREP-method. By concatenating a unique identification with the resulting
representation, an arbitrary number of instances of the same word type can
be created and used in the stories. The system is able to produce a fully
expanded paraphrase of the story from only a few sentences, i.e. the
unmentioned events are inferred. The word instances are correctly bound to
their roles, and simple plausible inferences of the variable content of the
story are made in the process.
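The instance-creation trick mentioned in the abstract can be sketched as
follows (the content vector and ID width are invented for illustration; the
real content representations are developed during training by the FGREP
method):

```python
import numpy as np

# Hypothetical learned content vector for one word type.
lexicon = {"john": np.array([0.2, 0.7, 0.1])}

def make_instance(word, instance_id, n_ids=4):
    """Concatenate a unique one-hot identification with the word type's
    content representation to form one distinct instance of the word."""
    one_hot = np.zeros(n_ids)
    one_hot[instance_id] = 1.0
    return np.concatenate([one_hot, lexicon[word]])

john1 = make_instance("john", 0)   # first instance of the type "john"
john2 = make_instance("john", 1)   # a distinct second instance
```

The two instances differ only in the ID part, so an arbitrary number of them
share the same learned content and can be bound to different story roles.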

------------------------------

Subject: TR - ANNs in Robot Motion Planning
From: mel@cougar.ccsr.uiuc.edu (Bartlett Mel)
Date: Thu, 09 Feb 89 12:26:34 -0600

The following thesis/TR is now available--about 50% of it is dedicated to
relations to traditional methods in robotics, and to psychological and
biological issues...


MURPHY: A Neurally-Inspired Connectionist Approach to
Learning and Performance in Vision-Based
Robot Motion Planning


Bartlett W. Mel
Center for Complex Systems Research
Beckman Institute, University of Illinois

Many aspects of intelligent animal behavior require an understanding of the
complex spatial relationships between the body and its parts and the
coordinate systems of the external world. This thesis deals specifically
with the problem of guiding a multi-link arm to a visual target in the
presence of obstacles. A simple vision-based kinematic controller and
motion planner based on a connectionist network architecture has been
developed, called MURPHY. The physical setup consists of a video camera and
a Rhino XR-3 robot arm with three joints that move in the image plane of the
camera. We assume no a priori model of arm kinematics or of the imaging
characteristics of the camera/visual system, and no sophisticated built-in
algorithms for obstacle avoidance. Instead, MURPHY builds a model of his
arm through a combination of physical and ``mental'' practice, and then uses
simple heuristic search with mental images of his arm to solve
visually-guided reaching problems in the presence of obstacles whose
traditional algorithmic solutions are extremely complex.

MURPHY differs from previous approaches to robot motion-planning primarily
in his use of an explicit full-visual-field representation of the workspace.
Several other aspects of MURPHY's design are unusual, including the sigma-pi
synaptic learning rule, the teacherless training paradigm, and the
integration of sequential control within an otherwise connectionist
architecture. In concluding sections we outline a series of strong
correspondences between the representations and algorithms used by MURPHY,
and the psychology, physiology, and neural bases for the programming and
control of directed, voluntary arm movements in humans and animals.


You can write to me (mel@complex.ccsr.uiuc.edu) or to Judi
(jr@complex.ccsr.uiuc.edu). Our computers go down on Feb. 13
for 2 days, so if you want one then, call (217) 244-4250 instead.

-Bartlett Mel


------------------------------

Subject: TR - Dynamic Node Creation in Back-Prop Nets
From: biafore@beowulf.ucsd.edu (Louis Steven Biafore)
Organization: Computer Science & Engineering Dept. U.C. San Diego
Date: Tue, 07 Mar 89 20:43:19 +0000


The following technical report is now available:


DYNAMIC NODE CREATION
IN
BACKPROPAGATION NETWORKS

Timur Ash
ash@ucsd.edu


Abstract


Large backpropagation (BP) networks are very difficult to train. This
fact complicates the process of iteratively testing different sized
networks (i.e., networks with different numbers of hidden layer units)
to find one that provides a good mapping approximation. This paper
introduces a new method called Dynamic Node Creation (DNC) that attacks
both of these issues (training large networks and testing networks with
different numbers of hidden layer units). DNC sequentially adds nodes
one at a time to the hidden layer(s) of the network until the desired
approximation accuracy is achieved. Simulation results for parity,
symmetry, binary addition, and the encoder problem are presented. The
procedure was capable of finding known minimal topologies in many
cases, and was always within three nodes of the minimum. Computational
expense for finding the solutions was comparable to training normal BP
networks with the same final topologies. Starting out with fewer nodes
than needed to solve the problem actually seems to help find a
solution. The method yielded a solution for every problem tried. BP
applied to the same large networks with randomized initial weights was
unable, after repeated attempts, to replicate some minimum solutions
found by DNC.
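A hedged sketch of the growth step described above (the plateau criterion,
initialization scale, and network sizes are assumptions for illustration,
not the report's exact rules): train as usual, and whenever the error curve
flattens, append one hidden unit without disturbing the trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_hidden_node(W1, W2):
    """Grow the hidden layer by one randomly initialized unit, keeping
    all previously trained weights intact."""
    new_in  = rng.normal(scale=0.1, size=(W1.shape[0], 1))
    new_out = rng.normal(scale=0.1, size=(1, W2.shape[1]))
    return np.hstack([W1, new_in]), np.vstack([W2, new_out])

def plateaued(errors, window=50, threshold=1e-3):
    """Assumed trigger criterion: relative error drop over the last
    `window` epochs falls below `threshold`."""
    if len(errors) < window:
        return False
    return (errors[-window] - errors[-1]) / errors[-window] < threshold

# Start a 3-input network with one hidden unit, then grow on demand.
W1 = rng.normal(size=(3, 1))   # input -> hidden weights
W2 = rng.normal(size=(1, 1))   # hidden -> output weights
if plateaued([1.0] * 60):      # error curve has flattened
    W1, W2 = add_hidden_node(W1, W2)
```

Growing from a small initial network is what lets the procedure stop near a
minimal topology instead of guessing the hidden-layer size in advance.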

Requests for reprints should be sent to the Institute for Cognitive
Science, C-015; University of California, San Diego; La Jolla, CA 92093.

(ICS Report 8901)

------------------------------

Subject: TR - Learning State Space Trajectories in Recurrent ANNs
From: Barak.Pearlmutter@F.GP.CS.CMU.EDU
Date: 16 Feb 89 19:33:00 -0500

The following tech report is available. It is a substantially expanded
version of a paper of the same title that appeared in the proceedings of the
1988 CMU Connectionist Models Summer School.


Learning State Space Trajectories
in Recurrent Neural Networks

Barak A. Pearlmutter

ABSTRACT

We describe a number of procedures for finding $\partial E/\partial
w_{ij}$ where $E$ is an error functional of the temporal trajectory
of the states of a continuous recurrent network and $w_{ij}$ are the
weights of that network. Computing these quantities allows one to
perform gradient descent in the weights to minimize $E$, so these
procedures form the kernels of connectionist learning algorithms.
Simulations in which networks are taught to move through limit
cycles are shown. We also describe a number of elaborations of the
basic idea, such as mutable time delays and teacher forcing, and
conclude with a complexity analysis. This type of network seems
particularly suited for temporally continuous domains, such as
signal processing, control, and speech.


Overseas copies are sent first class so there is no need to make special
arrangements for rapid delivery. Requests for copies should be sent to

Catherine Copetas
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

or Copetas@CS.CMU.EDU by computer mail. Ask for CMU-CS-88-191.

------------------------------

Subject: TR - Speeding up ANNs in the "Real World"
From: Josiah Hoskins <joho%sw.MCC.COM@MCC.COM>
Date: Thu, 02 Mar 89 12:18:40 -0600

The following tech report is available.

Speeding Up Artificial Neural Networks
in the "Real" World

Josiah C. Hoskins

A new heuristic, called focused-attention backpropagation (FAB)
learning, is introduced. FAB enhances the backpropagation procedure
by focusing attention on the exemplar patterns that are most
difficult to learn. Results are reported using FAB learning to train
multilayer feed-forward artificial neural networks to represent
real-valued elementary functions. The rate of learning observed
using FAB is 1.5 to 10 times faster than backpropagation.
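The report gives the actual FAB schedule; one way the focusing idea can be
sketched (the error-to-probability mapping, floor, and numbers below are
assumptions for illustration) is to present exemplars with probability
proportional to their current error, so hard patterns are seen more often:

```python
import numpy as np

def attention_weights(errors, floor=0.05):
    """Map per-exemplar errors to presentation probabilities, with a small
    floor so easy patterns are never dropped entirely."""
    w = np.asarray(errors, dtype=float) + floor
    return w / w.sum()

rng = np.random.default_rng(0)
errors = [0.9, 0.1, 0.05, 0.4]                 # current per-pattern error
p = attention_weights(errors)
batch = rng.choice(len(errors), size=8, p=p)   # next presentations, biased
                                               # toward hard exemplars
```

Recomputing the weights as training progresses shifts attention away from
patterns the network has already mastered.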



Requests for copies should refer to MCC Technical Report Number STP-049-89
and should be sent to

Kintner@mcc.com

or to

Josiah C. Hoskins
MCC - Software Technology Program     AT&T: (512) 338-3684
9390 Research Blvd, Kaleido II Bldg.  UUCP/USENET: milano!joho
Austin, Texas 78759                   ARPA/INTERNET: joho@mcc.com

------------------------------

End of Neuron Digest
*********************
