NL-KR Digest      (Tue Mar 10 12:00:28 1992)      Volume 9 No. 10 

Today's Topics:

News: NSF HPCC Workshop on Vision, NLP, AI Preliminary Report

Submissions: nl-kr@cs.rpi.edu
Requests, policy: nl-kr-request@cs.rpi.edu
Back issues are available from host archive.cs.rpi.edu [128.113.53.18] in
the files nl-kr/Vxx/Nyy (i.e., nl-kr/V01/N01 for V1#1); mail requests will
not be promptly satisfied. If you can't reach `cs.rpi.edu' you may want
to use `turing.cs.rpi.edu' instead.
BITNET subscribers: we now have a LISTSERVer for nl-kr.
You may send submissions to NL-KR@RPIECS
and any listserv-style administrative requests to LISTSERV@RPIECS.

-----------------------------------------------------------------

To: nl-kr@cs.rpi.edu
Date: Fri, 6 Mar 92 13:22:04 CST
From: hpcc@aquinas.csl.uiuc.edu (Benjamin W. Wah)
Subject: NSF HPCC Workshop on Vision, NLP, AI Preliminary Report

Preliminary Report

Workshop on High Performance Computing
and Communications for Grand Challenge Applications:
Computer Vision, Natural Language and Speech Processing,
and Artificial Intelligence

1. INTRODUCTION

This article reports preliminary findings of the Workshop on High Performance Computing and Communications (HPCC) for Grand Challenge Applications: Computer Vision, Natural Language and Speech Processing, and Artificial Intelligence.

With the support of the National Science Foundation, this workshop brought together 23 invited experts from academia and industry. The goal of the workshop was to identify near-term (within five years) and long-term (beyond five years) problems and potential approaches/research directions for supporting grand challenge applications in computer vision, natural language and speech processing, and artificial intelligence (AI) with HPCC systems. Attendees focused on answering the following questions.

a) What grand challenge applications in computer vision, natural language and speech processing, and AI can benefit from the availability of HPCC systems?

b) How should HPCC systems be designed so that they can support grand
challenge applications in these areas?

Preparation for the workshop started in late January 1992. Over 40 experts in the three areas and 19 program directors from the National Science Foundation were invited. The workshop was held on February 21 and 22, 1992, in Arlington, Virginia, with 23 experts from academia and industry attending and 12 program directors from the National Science Foundation serving as observers.

Participants in the workshop were divided into three areas, with a vice-chair identified for each. Before the workshop, each vice-chair solicited position statements from the members of his area and coordinated the discussion of issues. Separate discussions in the three areas took place on the morning of February 21. In each area, the vice-chair first presented an overview of issues, followed by a short presentation by each member of the area, including the vice-chair. Based on comments received during these presentations and further extensive discussions on the afternoon of February 21, the vice-chair, in consultation with members of the area, prepared a summary report. These reports were presented by the vice-chairs on the morning of February 22 and led to considerable discussion. The next section contains a summary of the ideas discussed on February 22. The final report, to be released in late April, will be prepared on the basis of this preliminary report and further discussions among the participants through electronic mail.

____________________

This workshop was supported by the National Science Foundation under grant IRI-9212592. Ideas reported here do not reflect the official position of the sponsoring agency.

Preparation of this report was coordinated by Benjamin W. Wah, Thomas Huang, Aravind K. Joshi, and Dan Moldovan. Questions regarding this article can be directed to them or to any of the attendees listed in Section 3.

This report contains a collection of ideas expressed by individuals at
the workshop; it does not necessarily represent a consensus among all the
participants. Further, ideas expressed in this report do not reflect the
official position of the sponsoring agency.

2. SUMMARY OF IDEAS

2.1. Computer Vision Area

Computer vision has two goals. From the engineering viewpoint, the
goal is to build autonomous systems that can perform some tasks that the
human visual system can do, and even go beyond the capabilities of the
human visual system in multimodality, speed, and reliability. From the
scientific viewpoint, the goal is to develop computational theories of
vision, and by so doing, gain insights into human visual perception.

Grand challenge applications in computer vision fall into two classes. a) Autonomous vision systems have many important applications. Examples include i) flexible manufacturing, ii) intelligent vehicle highway systems, iii) environment monitoring, and iv) visual man-machine interfaces and model-based compression for telecommunication, multimedia, and education. Note that most of these applications involve interaction of the vision system with the environment and with humans. b) Computer vision techniques can also be invaluable tools for studying many basic scientific problems in other areas. A prominent example is the visual understanding of turbulence in fluid flow.

The basic scientific issues underlying the applications are i) machine
learning, ii) surface reconstruction, inverse optics, and integration, iii)
model acquisition, and iv) perception and action.

HPCC support for computer vision can be divided into three classes.

1) Vision Systems. There are two cases: i) designing vision systems, and ii) running vision systems. Both require huge amounts of computation power and memory. In addition, vision systems often require real-time operation, low cost, low power, small volume, and low weight. For instance, a vision system may receive as its input 1-100 gigabits/second of image data that must be processed in real time.

2) Vision Tasks. Tasks in a vision system fall into roughly three categories: low-level (e.g., noise reduction, data interpolation, feature extraction, and matching), intermediate-level (e.g., grouping), and high-level (e.g., object recognition). To perform these tasks efficiently, each level may require a different type of computer architecture. Therefore, for many vision systems, a heterogeneous parallel architecture may be the best answer. Of particular interest is the scalability of such architectures, especially the question of how the different components can be easily ``glued'' together, and the communication and control pathways between the different homogeneous parallel processors. Another challenge is to develop easy-to-use software for such architectures.

3) Distributed Processing. In many vision systems, computations need to be carried out at several different locations. Thus, distributed computing is of great importance. One aspect of this problem is the transmission and management of huge amounts of image data.
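
To make the multi-level organization above concrete, the following minimal sketch (not part of the workshop report) wires three placeholder stages, low-level filtering, intermediate-level grouping, and high-level recognition, into a pipeline whose stages run on separate workers and communicate only through explicit queues. All stage contents, names, and parameters are invented for illustration; a real heterogeneous system would map each level onto hardware suited to it.

    # Illustrative three-level vision pipeline; stage bodies are placeholders.
    import queue
    import random
    import threading

    SENTINEL = None  # marks the end of the image stream

    def low_level(frames_in, features_out):
        # Low level: noise reduction / feature extraction (here: thresholding).
        while (frame := frames_in.get()) is not SENTINEL:
            features_out.put([x for x in frame if x > 0.5])
        features_out.put(SENTINEL)

    def intermediate_level(features_in, groups_out):
        # Intermediate level: grouping (here: bucketing by rounded value).
        while (features := features_in.get()) is not SENTINEL:
            groups_out.put({round(f, 1) for f in features})
        groups_out.put(SENTINEL)

    def high_level(groups_in, results):
        # High level: object recognition (here: a trivial size-based label).
        while (groups := groups_in.get()) is not SENTINEL:
            results.append("object" if len(groups) > 3 else "background")

    if __name__ == "__main__":
        frames, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
        results = []
        stages = [threading.Thread(target=low_level, args=(frames, q1)),
                  threading.Thread(target=intermediate_level, args=(q1, q2)),
                  threading.Thread(target=high_level, args=(q2, results))]
        for stage in stages:
            stage.start()
        for _ in range(5):                              # five synthetic "frames"
            frames.put([random.random() for _ in range(1000)])
        frames.put(SENTINEL)
        for stage in stages:
            stage.join()
        print(results)                                  # one label per frame

The queues stand in for the ``glue'' and communication pathways discussed above; replacing them with network channels gives a crude picture of the distributed-processing case as well.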

Computer vision is related to other grand challenge areas because a) many applications, such as video compression and man-machine interface, involve both vision and speech; and b) AI techniques, such as knowledge-based reasoning, are needed in vision systems.

Infrastructure support for computer vision includes a) sharing image databases and software over high-bandwidth networks, and b) providing facilities and incentives for architects and computer-vision researchers to work together.

2.2. Natural Language and Speech Processing Area

Grand challenge applications in this area include a) electronic libraries and librarians, which involve the use of spoken language interfaces, machine translation, and full text retrieval, and b) spoken language translation.

The fundamental scientific and enabling technologies include a) corpus-based natural language processing (NLP) that involves the acquisition of linguistic structure, b) statistical approaches to NLP (see the sketch below), c) language analysis and search strategies, d) auditory and vocal-tract modeling, e) integration of multiple levels of speech and language analyses, f) connectionist speech and language processing, g) full text retrieval techniques, and h) special-purpose architectures.
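
To make items a) and b) concrete, the following minimal sketch (not part of the workshop report) estimates a bigram language model from a tiny toy corpus with add-one smoothing; the corpus, the smoothing choice, and all names are invented for illustration.

    # Corpus-based statistical NLP sketch: a bigram model with add-one smoothing.
    from collections import Counter

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "a cat saw the dog",
    ]

    # Count unigrams and bigrams over boundary-padded token streams.
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))

    vocab_size = len(unigrams)

    def bigram_prob(prev, word):
        # P(word | prev) with add-one (Laplace) smoothing.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    def sentence_prob(sentence):
        # Probability of a sentence under the bigram model.
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        p = 1.0
        for prev, word in zip(tokens, tokens[1:]):
            p *= bigram_prob(prev, word)
        return p

    print(sentence_prob("the cat sat on the rug"))   # corpus-like word order
    print(sentence_prob("rug the on sat cat the"))   # scrambled order scores lower

The point of the sketch is only that the model's parameters come from counts over a corpus rather than from hand-written rules.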

Bridges to other grand challenge areas include a) optical character reader (OCR), b) handwriting analysis, c) document image analysis, d) multi-media interfaces, and e) integration of multiple knowledge sources.

Architectural needs for supporting natural language and speech processing include a) faster processors with larger memory, b) general-purpose supercomputing, c) heterogeneous architectures, such as systems combining signal processing and symbolic processing capabilities, d) homogeneous architectures not requiring wide floating-point arithmetic, such as those for modeling connectionist architectures (sketched below), and e) high-bandwidth real-time inputs and outputs.
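
As a loose illustration of item d), the following minimal sketch (not from the report) runs one layer of a toy connectionist network entirely in narrow integer arithmetic, 8-bit weights and activations with 32-bit accumulation, which is one reason such workloads do not demand wide floating-point hardware. All numbers, scales, and names are invented for illustration.

    # Connectionist forward pass in narrow integer arithmetic (illustrative only).

    def quantize(values, scale):
        # Map real values to signed 8-bit integers with a fixed scale.
        return [max(-128, min(127, round(v / scale))) for v in values]

    def int8_layer(x_q, w_q, x_scale, w_scale):
        # One fully connected layer: int8 multiplies, int32 accumulation,
        # then a rescale back to real-valued activations and a ReLU.
        out = []
        for row in w_q:
            acc = sum(xi * wi for xi, wi in zip(x_q, row))   # fits in 32 bits
            out.append(max(0.0, acc * x_scale * w_scale))    # dequantize + ReLU
        return out

    # Toy input vector and 3x4 weight matrix, then quantized versions.
    x = [0.5, -1.2, 0.3, 0.9]
    w = [[0.1, -0.4, 0.2, 0.7],
         [-0.3, 0.6, 0.0, 0.1],
         [0.8, 0.2, -0.5, -0.9]]
    x_scale, w_scale = 0.02, 0.01
    y = int8_layer(quantize(x, x_scale),
                   [quantize(row, w_scale) for row in w],
                   x_scale, w_scale)
    print(y)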

Infrastructure support includes a) shareable text and speech databases, b) smart compilers and open parallel systems, c) technical staff for developing shareable tools, and d) access to high-performance computing through high-performance wide-area networks.



2.3. Artificial Intelligence/Computer Architecture

This area covers the broad field of AI and the computer architectural
support for HPCC AI systems.

Some of the grand challenge applications are a) nation-wide job banks, b) electronic libraries, c) electronic marketplaces, d) large-scale real-time planning and scheduling, e) automation in constructing very large knowledge bases, and f) automation of decision making. For example, an electronic library may involve a diverse collection of text, images, databases, and other information scattered around the net in an assortment of formats. Users will need an intelligent librarian program to help guide them through all this information. The librarian will need to communicate with users in natural language and understand something about the text stored in the network.
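
One ingredient of such a librarian, pointing users at relevant text, can be sketched minimally as a full-text inverted index with a simple term-overlap ranking. This sketch is not from the report; the documents, scoring rule, and names are invented for illustration.

    # Toy full-text retrieval: inverted index plus term-overlap ranking.
    from collections import defaultdict

    documents = {
        "doc1": "parallel architectures for computer vision systems",
        "doc2": "statistical approaches to natural language processing",
        "doc3": "knowledge bases and heuristic search in artificial intelligence",
    }

    # Build the inverted index: term -> set of documents containing it.
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def search(query):
        # Rank documents by how many distinct query terms they contain.
        scores = defaultdict(int)
        for term in query.lower().split():
            for doc_id in index.get(term, ()):
                scores[doc_id] += 1
        return sorted(scores.items(), key=lambda item: -item[1])

    print(search("parallel computer architectures"))
    print(search("natural language knowledge"))

A real electronic librarian would of course combine such retrieval with the natural language and knowledge-base techniques listed below.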

The basic research issues and enabling technologies underlying the applications include a) study and design of scalable and verifiable ``traditional'' symbolic AI/expert systems, b) construction and utilization of very large knowledge bases, c) development of highly parallel machine learning techniques, d) research on active memories as a means of increasing the contribution of knowledge sources in reasoning, e) development and evaluation of marker/value passing techniques, f) application of neural networks to AI, and g) further studies of heuristic search techniques applied to problem solving.
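
As one concrete point of reference for item g), the following minimal sketch (not from the report) applies a standard heuristic search, best-first A* with a Manhattan-distance heuristic, to a small grid path-finding problem; the grid and all names are invented for illustration.

    # Heuristic search sketch: A* on a small grid with obstacles.
    import heapq

    GRID = [
        "....#",
        ".##.#",
        "....#",
        ".#...",
    ]
    START, GOAL = (0, 0), (3, 4)

    def neighbors(cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == ".":
                yield (nr, nc)

    def manhattan(cell):
        # Admissible heuristic: Manhattan distance to the goal.
        return abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])

    def astar(start, goal):
        # Return the number of steps on a shortest path, or None if unreachable.
        frontier = [(manhattan(start), 0, start)]       # (f = g + h, g, cell)
        best_g = {start: 0}
        while frontier:
            f, g, cell = heapq.heappop(frontier)
            if cell == goal:
                return g
            for nxt in neighbors(cell):
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))
        return None

    print(astar(START, GOAL))   # length of a shortest obstacle-avoiding path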

Some computer architecture implications are a) increased use of massively parallel processing techniques with the goal of achieving real-time AI processing, b) understanding the computational requirements of various AI paradigms and how they translate into system requirements, in order either to build specialized systems or to improve the mapping of AI problems onto existing high-performance computers, c) understanding the architecture of systems supporting both numeric and symbolic AI problems, d) development of knowledge-base management techniques for implementing efficient multi-level knowledge-based systems, e) deciding when it is best to use general-purpose versus specialized accelerators, and f) development of compilers for AI languages on today's supercomputers.

Required infrastructure support includes a) access to large, fast computers by the AI community, b) access to large on-line knowledge bases and corpora, c) sharing of systems and research results achieved in large projects by the community, and d) development of computational benchmarks for important AI paradigms.

3. WORKSHOP ATTENDEES

Workshop Chair Benjamin W. Wah University of Illinois, Urbana-Champaign
wah@aquinas.csl.uiuc.edu

Vision Area Thomas Huang University of Illinois, Urbana-Champaign
(Area Vice Chair) huang@uicsl.csl.uiuc.edu



John Aloimonos University of Maryland, College Park
yiannis@alv.umd.edu
Ruzena K. Bajcsy University of Pennsylvania
bajcsy@central.cis.upenn.edu
Dana Ballard University of Rochester
dana@cs.rochester.edu
Charles R. Dyer University of Wisconsin, Madison
dyer@cs.wisc.edu
Tomaso Poggio Massachusetts Institute of Technology
poggio@ai.mit.edu
Edward M. Riseman University of Massachusetts, Amherst
riseman@cs.umass.edu
Steven L. Tanimoto University of Washington
tanimoto@cs.washington.edu

Natural Language and Speech Processing Area
Aravind K. Joshi University of Pennsylvania
(Area Vice Chair) joshi@central.cis.upenn.edu
Ralph Grishman New York University
grishman@nyu.edu
Lynette Hirschman Massachusetts Institute of Technology
hirschman@goldilocks.lcs.mit.edu
Stephen E. Levinson AT&T Bell Laboratories
sel@research.att.com
Nelson H. Morgan University of California, Berkeley
morgan@icsi.berkeley.edu
Sergei Nirenburg Carnegie-Mellon University
sergei@nl.cs.cmu.edu
Craig Stanfill Thinking Machines Corporation
craig@think.com

Artificial Intelligence and Computer Architecture Area
Dan Moldovan University of Southern California
(Area Vice Chair) moldovan@gringo.usc.edu
Doug DeGroot Texas Instruments
degroot@dog.dseg.ti.com
Kenneth DeJong George Mason University
kdejong@aic.gmu.edu
Scott E. Fahlman Carnegie-Mellon University
scott.fahlman@cs.cmu.edu
Richard E. Korf University of California, Los Angeles
korf@cs.ucla.edu
Daniel P. Miranker University of Texas, Austin
miranker@cs.utexas.edu
Salvatore J. Stolfo Columbia University
sal@cs.columbia.edu
Benjamin W. Wah University of Illinois, Urbana-Champaign
wah@aquinas.csl.uiuc.edu

National Science Foundation Observers
Syed Kamal Abdali Numeric, Symbolic, & Geom. Computations
kabdali@nsf.gov
Paul G. Chapin Linguistics
pchapin@nsf.gov



Su-Shing Chen Knowledge Models and Cognitive Systems
schen@nsf.gov
Bernard Chern Microelectronic Info. Proc. Systems
bchern@nsf.gov
Y. T. Chien Info., Robotics, & Intelligent Systems
ytchien@nsf.gov
John H. Cozzens Circuits and Signal Processing
jcozzens@nsf.gov
John D. Hestenes Interactive Systems
jhestene@nsf.gov
Richard Hirsch Supercomputer Center
rhirsch@nsf.gov
Howard Moraff Robotics and Machine Intelligence
hmoraff@nsf.gov
John Lehmann Microelectronic Info. Proc. Systems
jlehmann@nsf.gov
Pen-Chung Yew Microelectronic Systems Architecture
pyew@nsf.gov
Zeke Zalcstein Computer Systems Architecture
zzalcste@nsf.gov

------------------------------
End of NL-KR Digest
*******************

