VISION-LIST Digest 1990 01 15


Vision-List Digest	Mon Jan 15 09:38:15 PDT 90 

- Send submissions to Vision-List@ADS.COM
- Send requests for list membership to Vision-List-Request@ADS.COM

Today's Topics:

Request for Public Domain Image Processing Packages
3D-glasses
Connected Component Algorithm
Posting Call-for-papers of ICCV'90
Call for Papers Wang Conference

------------------------------

Date: Fri, 12 Jan 90 13:02:20 PST
From: Scott E. Johnston <johnston@odin.ads.com>
Subject: Request for Public Domain Image Processing Packages

I am collecting any and all public-domain image processing packages.
I plan to make them available to all via the Vision-List FTP site
(disk space permitting). Send pointers, or the packages themselves, to
johnston@ads.com (not vision-list@ads.com). Thanks.

------------------------------

Subject: 3D-glasses
Date: Thu, 11 Jan 90 01:17:23 EST
From: Edward Vielmetti <emv@math.lsa.umich.edu>

Here's a pointer to info on the Sega 3D glasses. --Ed


------- Forwarded Message

From: jmunkki@kampi.hut.fi (Juri Munkki)
Subject: [comp.sys.mac.hardware...] Sega 3D glasses document fix 1.2
Date: 8 Jan 90 20:16:53 GMT
Followup-To: comp.sys.mac.hardware
Approved: emv@math.lsa.umich.edu (Edward Vielmetti)

This is patch 1.2 of the Sega 3D glasses interface document. It
supersedes versions 0.9, 1.0, and 1.1 of the document. Version 1.2 is
available via anonymous FTP from vega.hut.fi [130.233.200.42] in
pub/mac/finnish/sega3d/.

Versions 0.9 and 1.0 of the document have the TxD+ and TxD- pins
reversed. This causes problems only with my demo software and is easy
to notice, because both lenses show the same image. Fix the problem by
pulling the TxD+ and TxD- pins out of the miniDIN connector, swapping
them, and pushing them back in.

Version 1.1 (which is what you have after you make the previous
change) has the tip and center of the glasses connector switched.
Again, this doesn't cause any problems unless you use the demo
software. The spiro and Macintosh demos will clearly appear inside
the screen and their perspectives will look wrong. To fix the
problem, resolder the connector or change the software to swap
the meanings of left and right. If you intend to write software for
the glasses, it might be a good idea to include an option to switch
left and right.
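
A minimal sketch of what such an option might look like is given below.
The drive_lenses routine is a hypothetical stand-in for whatever actually
toggles the shutter lines described above; it, and the other names, are
illustrative assumptions rather than part of the interface document.

#include <stdbool.h>
#include <stdio.h>

static bool swap_eyes = false;   /* the suggested user option */

/* Hypothetical low-level driver: in a real program this would toggle the
 * serial-port lines described above; here it only reports what it would do. */
static void drive_lenses(bool left_open, bool right_open)
{
    printf("left lens %s, right lens %s\n",
           left_open ? "open" : "shut", right_open ? "open" : "shut");
}

/* Call once per video field, saying which eye's image is on screen. */
static void show_field(bool left_eye_image)
{
    bool open_left = swap_eyes ? !left_eye_image : left_eye_image;
    drive_lenses(open_left, !open_left);
}

int main(void)
{
    show_field(true);     /* left-eye frame  */
    show_field(false);    /* right-eye frame */
    swap_eyes = true;     /* flip if the scene looks inside out */
    show_field(true);
    return 0;
}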

Juri Munkki    jmunkki@hut.fi    jmunkki@fingate.bitnet     | I Want Ne |
Helsinki University of Technology Computing Centre          | My Own XT |

------- End of Forwarded Message


------------------------------

Date: Thu, 11 Jan 90 14:16:52 EST
From: palumbo@cs.Buffalo.EDU (Paul Palumbo)
Subject: Connected Component Algorithm

I was wondering if anybody out there in net-land knows of an image analysis
technique for locating connected components in digital images. In particular,
I am looking for an algorithm that can be implemented in hardware, makes only
one pass through the image in scan-line order, and reports several simple
component features such as the component extent (minimum and maximum X and Y
coordinates) and the number of foreground pixels in each component.
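
For concreteness, here is a hedged sketch (in C) of one classical way to
meet these constraints: assign labels in a single scan-line pass and merge
them with union-find, folding the bounding box and pixel count into the
surviving label at each merge. The names, the fixed image size, and the toy
image are illustrative assumptions only, not the algorithm mentioned later
in this message.

/* One-pass, scan-line connected components with union-find (4-connectivity).
 * Each foreground pixel takes the label of its left neighbour if present,
 * otherwise of the pixel above, otherwise a fresh label; left and above
 * labels are united and their statistics merged whenever both exist. */
#include <stdio.h>

#define W 16
#define H 8
#define MAXLAB (W * H)

static int parent[MAXLAB];
static int minx[MAXLAB], maxx[MAXLAB], miny[MAXLAB], maxy[MAXLAB], area[MAXLAB];
static int nlab = 0;

static int find(int a)                       /* union-find with path halving */
{
    while (parent[a] != a)
        a = parent[a] = parent[parent[a]];
    return a;
}

static void unite(int a, int b)              /* merge components and stats */
{
    a = find(a); b = find(b);
    if (a == b) return;
    parent[b] = a;
    if (minx[b] < minx[a]) minx[a] = minx[b];
    if (maxx[b] > maxx[a]) maxx[a] = maxx[b];
    if (miny[b] < miny[a]) miny[a] = miny[b];
    if (maxy[b] > maxy[a]) maxy[a] = maxy[b];
    area[a] += area[b];
}

static int newlabel(int x, int y)
{
    int l = nlab++;
    parent[l] = l;
    minx[l] = maxx[l] = x;
    miny[l] = maxy[l] = y;
    area[l] = 0;
    return l;
}

int main(void)
{
    static const char *img[H] = {            /* toy binary image */
        "0110000000000110",
        "0110000000000110",
        "0000011110000000",
        "0000011110000000",
        "0000000000000000",
        "0011000000000000",
        "0011100000000000",
        "0000000000000000",
    };
    int prev[W], cur[W];                     /* labels on previous/current row */
    int x, y, l;

    for (x = 0; x < W; x++) prev[x] = -1;
    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++) {
            if (img[y][x] == '0') { cur[x] = -1; continue; }
            if (x > 0 && cur[x - 1] >= 0) l = cur[x - 1];      /* extend run    */
            else if (prev[x] >= 0)        l = prev[x];         /* from above    */
            else                          l = newlabel(x, y);  /* new component */
            if (prev[x] >= 0) unite(l, prev[x]);    /* merge with run above */
            cur[x] = l;
            l = find(l);                            /* update root statistics */
            if (x < minx[l]) minx[l] = x;
            if (x > maxx[l]) maxx[l] = x;
            if (y > maxy[l]) maxy[l] = y;
            area[l]++;
        }
        for (x = 0; x < W; x++) prev[x] = cur[x];
    }

    for (l = 0; l < nlab; l++)
        if (find(l) == l)
            printf("component: x [%d..%d]  y [%d..%d]  pixels %d\n",
                   minx[l], maxx[l], miny[l], maxy[l], area[l]);
    return 0;
}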

The project I am on is planning to design and develop custom image analysis
hardware to do this. We have developed an algorithm locally, and I was
wondering whether somebody else has an easier method.


I know about the LSI Logic "Object Contour Tracer Chip", but this chip appears
to be too powerful (and slow) for this application. I found some papers
by Gleason and Agin from about ten years ago, but could not find the exact
details of their algorithm.

Does anybody else have a need for such hardware?

Any help or pointers on locating such an algorithm would be appreciated.

Paul Palumbo               internet: palumbo@cs.buffalo.edu
Research Associate         bitnet:   palumbo@sunybcs.BITNET
226 Bell Hall              csnet:    palumbo@buffalo.csnet
SUNY at Buffalo CS Dept.
Buffalo, New York 14260
(716) 636-3407             uucp:     ..!{boulder,decvax,rutgers}!sunybcs!palumbo

------------------------------

Date: Fri, 12 Jan 90 11:11:31 JST
From: tsuji%tsuji.ce.osaka-u.JUNET@relay.cc.u-tokyo.ac.jp (Saburo Tsuji)
Subject: Posting Call-for-papers of ICCV'90


Call for Papers
THIRD INTERNATIONAL CONFERENCE ON COMPUTER VISION
International House Osaka, Osaka, Japan
December 4-7, 1990

CHAIRS
General Chair:
Makoto Nagao, Kyoto University, Japan
E-mail: nagao@kuee.kyoto-u.ac.jp
Program Co-chairs:
Avi Kak, Purdue University, USA
E-mail: kak@ee.ecn.purdue.edu
Jan-Olof Eklundh, Royal Institute of Technology, Sweden
E-mail: joe@bion.kth.se
Saburo Tsuji, Osaka University, Japan
E-mail: tsuji@tsuji.ce.osaka-u.ac.jp
Local Arrangement Chair:
Yoshiaki Shirai, Osaka University, Japan
E-mail: shirai@ccmip.ccm.osaka-u.ac.jp

THE CONFERENCE
ICCV'90 is the third International Conference devoted solely to
computer vision. It is sponsored by the IEEE Computer Society.

THE PROGRAM
The program will consist of high-quality contributed papers on
all aspects of computer vision. All papers will be refereed by
the members of the Program Committee. Accepted papers will be
presented as long papers in a single track or as short papers
in two parallel tracks.

PROGRAM COMMITTEE
The Program Committee consists of thirty prominent members
representing all major facets of computer vision.


PAPER SUBMISSION
Authors should submit four copies of their papers to Saburo Tsuji
at the address shown below by April 30, 1990. Papers must contain
major new research contributions. All papers will be reviewed
using a double-blind procedure, meaning that the identities of
the authors will not be known to the reviewers. To make this
possible, two title pages should be included, only one of which
contains the names and addresses of the authors; the title page
with the names and addresses will be removed prior to the review
process. Both title pages should contain the title of the paper
and a short (less than 200 words) abstract. Authors must restrict
their papers to 30 pages; this limit includes everything: the
title pages, text (double-spaced), figures, bibliography, etc.
Authors will be notified of acceptance by mid-July. Final
camera-ready papers, typed on special forms, will be due mid-August.

Send To: Saburo Tsuji,
Osaka University, Department of Control Engineering, Toyonaka,
Osaka 560, Japan.
E-mail: tsuji@tsuji.ce.osaka-u.ac.jp





------------------------------

Date: Fri, 12 Jan 90 02:06:41 EST
From: mike@bucasb.bu.edu (Michael Cohen)
Subject: Call for Papers Wang Conference

CALL FOR PAPERS

NEURAL NETWORKS FOR AUTOMATIC TARGET RECOGNITION
MAY 11--13, 1990

Sponsored by the Center for Adaptive Systems,
the Graduate Program in Cognitive and Neural Systems,
and the Wang Institute of Boston University
with partial support from
The Air Force Office of Scientific Research


This research conference at the cutting edge of neural network science and
technology will bring together leading experts in academe, government, and
industry to present their latest results on automatic target recognition
in invited lectures and contributed posters. Invited lecturers include:

JOE BROWN, Martin Marietta, "Multi-Sensor ATR using Neural Nets"

GAIL CARPENTER, Boston University, "Target Recognition by Adaptive
Resonance: ART for ATR"

NABIL FARHAT, University of Pennsylvania, "Bifurcating Networks for
Target Recognition"

STEPHEN GROSSBERG, Boston University, "Recent Results on Self-Organizing
ATR Networks"

ROBERT HECHT-NIELSEN, HNC, "Spatiotemporal Attention Focusing by
Expectation Feedback"

KEN JOHNSON, Hughes Aircraft, "The Application of Neural Networks to the
Acquisition and Tracking of Maneuvering Tactical Targets in High Clutter
IR Imagery"

PAUL KOLODZY, MIT Lincoln Laboratory, "A Multi-Dimensional ATR System"

MICHAEL KUPERSTEIN, Neurogen, "Adaptive Sensory-Motor Coordination
using the INFANT Controller"

YANN LECUN, AT&T Bell Labs, "Structured Back Propagation Networks for
Handwriting Recognition"

CHRISTOPHER SCOFIELD, Nestor, "Neural Network Automatic Target Recognition
by Active and Passive Sonar Signals"

STEVEN SIMMES, Science Applications International Co., "Massively Parallel
Approaches to Automatic Target Recognition"

ALEX WAIBEL, Carnegie Mellon University, "Patterns, Sequences and Variability:
Advances in Connectionist Speech Recognition"

ALLEN WAXMAN, MIT Lincoln Laboratory, "Invariant Learning and
Recognition of 3D Objects from Temporal View Sequences"

FRED WEINGARD, Booz-Allen and Hamilton, "Current Status and Results of Two
Major Government Programs in Neural Network-Based ATR"

BARBARA YOON, DARPA, "DARPA Artificial Neural Networks Technology
Program: Automatic Target Recognition"


CALL FOR PAPERS---ATR POSTER SESSION: A featured poster session on ATR
neural network research will be held on May 12, 1990. Attendees who wish to
present a poster should submit 3 copies of an extended abstract
(1 single-spaced page), postmarked by March 1, 1990, for refereeing. Include
with the abstract the name, address, and telephone number of the corresponding
author. Mail to: ATR Poster Session, Neural Networks Conference, Wang
Institute of Boston University, 72 Tyng Road, Tyngsboro, MA 01879. Authors
will be informed of abstract acceptance by March 31, 1990.

SITE: The Wang Institute possesses excellent conference facilities on a
beautiful 220-acre campus. It is easily reached from Boston's Logan
Airport and Route 128.

REGISTRATION FEE: Regular attendee--$90; full-time student--$70.
The registration fee includes admission to all lectures and the poster
session, the abstract book, one reception, two continental breakfasts,
one lunch, one dinner, and daily morning and afternoon coffee service.
STUDENT FELLOWSHIPS are available. For information, call (508) 649-9731.

TO REGISTER: By phone, call (508) 649-9731; by mail, write for further
information to: Neural Networks, Wang Institute of Boston University, 72 Tyng
Road, Tyngsboro, MA 01879.


------------------------------

End of VISION-LIST
********************
