
VISION-LIST Digest Volume 14 Issue 29


VISION-LIST Digest    Mon Jul 31 11:11:34 PDT 95     Volume 14 : Issue 29 

- ***** The Vision List host is TELEOS.COM *****
- Send submissions to Vision-List@TELEOS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@TELEOS.COM
- Access Vision List Archives via anonymous ftp to TELEOS.COM

Today's Topics:

SUMMARY: automatic recognition of number plates
Web vision resources
Re: New Book on Computer Vision and Image Processing
Dissertation available (in French)
Ph.D. computer vision opening at Sandia Labs
Special Issue of CVIU
CFP Visual'96
Road Following Vehicles

----------------------------------------------------------------------

Date: Fri, 28 Jul 1995 15:11:55 +0200
From: arnoudv@ped.kun.nl (Arnoud Verdwaald)
Organization: Special Education, Nijmegen University
Subject: SUMMARY: automatic recognition of number plates

Hi,

I would like to thank all who responded to my call for information on
systems that can do automatic recognition of number plates. I hope this
summary of the addresses I received will be useful to everybody.
Well, here it is (the list is unordered):

***
F+O Electronic Systems GmbH; You can reach them by mailing to

auer@uranus.fundo.net

or per snail mail to

* F+O Electronic Systems GmbH
* Badstrasse 30
* D-01454 Radeberg, Germany

I also recall a similar thread in the sci.image.processing group; you
can check the gopher-server at
gopher://skyking.OCE.ORST.EDU:71/11/pub/sci.image.processing
to access the WAIS-archive of articles.

***
ITMI APTOR has been selling a character recognition system for a couple of
years now. If you want more information, please contact:

Patrick Stelmaszyk
ITMI-APTOR
tel : +33 76-41-40-00
fax : +33 76-41-28-05

***
Contact (from a press release dated April 18, 1994):
Jack Hillhouse
IVHS America Exposition
Booth 122

or

Edward W. Cheatham
The Racal Corporation
(212) 268-0918

***
CSIRO in Australia was mentioned several times

Ian Macintyre
Manager Vision Technology Development
CSIRO Division of Manufacturing Technology
Locked Bag No. 9
Preston
VIC 3072
Australia

Phone: +61 3 662 7700
Fax: +61 3 662 7852

The product is the Safe-T-Cam traffic monitoring system.

They sent me a reprint of a conference paper:

Auty, G., Corke, P., Dunn, P., Jensen, M., Macintyre, I., Mills, D.,
Nguyen, H., and Simons, B.,
An Image Acquisition System for Traffic Monitoring Applications,
SPIE Volume 2416, "Cameras and Systems for Electronic Photography and
Scientific Imaging", 8-9 February 1995, San Jose, California, pp. 119-133.

It says 0-8194-1763-7/95, which appears to be the ISBN of the proceedings volume.

***
AITEK was mentioned several times as well (someone also mentioned an
associated company, TELEROBOT); this small company has developed a system
called TarGet.

Fabrizio Ferrari,
Aitek
Via Pisa 12/1
16146 Genova - Italy
Phone: +39 10 315180 / 3620102
Fax: +39 10 314873
e-mail: stress@dist.dist.unige.it

The product can be viewed on the Web at:
http://ecvnet.lira.dist.unige.it/HOSTS/AITEK/home.html

***
An extensive answer came from

VECON: Vehicle and Container Number Recognition System

By Dr. John Chung-Mong LEE
Department of Computer Science
Hong Kong University of Science & Technology
Clear Water Bay, Kowloon, Hong Kong
fax: (852)-2358-1477
e-mail: cmlee@cs.ust.hk

We have developed a generalized alpha-numeric character recognition system
that can locate and identify printed characters in complex gray-level or
color images. A scene image may be complex for a variety of reasons.
Rust, mud, peeling paint, or fading color may distort the images of the
characters; uneven lighting may make them difficult to discern. Our system
has succeeded in recognizing printed characters on cargo containers and
vehicle licence plates. At present, the system has been licensed to a
local software company which will install the system in a container depot
in Shanghai, China. In addition, the system is currently undergoing beta
testing at one of Hong Kong's busy container shipping terminals.

The method that the system uses is better than present technology because it
is much faster and more accurate in recognizing characters. Its novel and
unusual features lie in both character extraction and character recognition.
In character extraction, it uses the relative size, shape, thickness, and
ratio of characters as well as the relative distance between characters for
location and segmentation. In character recognition, it uses different
expansion and reduction strategies for different extracted characters in
order to reduce mistakes in recognition. These strategies are embedded in two
separately trained neural network architectures, where the second network is
used to complement the first.
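The extraction strategy described above, filtering candidates by relative size, shape, and spacing, can be sketched in a few lines. This is a hypothetical illustration, not VECON code; the bounding-box format, thresholds, and function name are all invented for the example.

```python
# Hypothetical sketch of geometric candidate filtering for plate characters.
# Each candidate is a bounding box (x, y, w, h) from connected components.

def filter_character_boxes(boxes, aspect=(0.3, 1.0), height_tol=0.25):
    """Keep boxes whose shape and relative size look like plate characters."""
    if not boxes:
        return []
    # plausible width/height ratio for a printed character
    shaped = [b for b in boxes if aspect[0] <= b[2] / b[3] <= aspect[1]]
    if not shaped:
        return []
    # relative size: character heights should cluster around the median
    heights = sorted(b[3] for b in shaped)
    median_h = heights[len(heights) // 2]
    sized = [b for b in shaped if abs(b[3] - median_h) <= height_tol * median_h]
    # relative distance: neighbours should be roughly equally spaced
    sized.sort(key=lambda b: b[0])
    gaps = [b2[0] - (b1[0] + b1[2]) for b1, b2 in zip(sized, sized[1:])]
    if gaps and max(gaps) > 3 * max(1, min(gaps)):
        # a huge gap suggests unrelated clutter; keep the larger run
        cut = gaps.index(max(gaps)) + 1
        left, right = sized[:cut], sized[cut:]
        sized = left if len(left) >= len(right) else right
    return sized

# six plate-like boxes plus two outliers (a bolt head and a long smear)
plate = [(10 + 14 * i, 5, 10, 20) for i in range(6)]
clutter = [(2, 2, 5, 5), (200, 6, 40, 8)]
kept = filter_character_boxes(plate + clutter)
print(len(kept))
```

The recognition stage (the two complementary neural networks) would then run only on the surviving boxes, which is what keeps such systems fast.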

This technology could be enhanced to achieve a higher recognition rate. It
could then be used wherever printed characters in more complex images
have to be read, in either indoor or outdoor situations. For instance, the
system could be used for toll collection at tunnels, bridges, and highways.
In indoor situations, it could be used for inventory control in a warehouse
coupled with a robot to select items to fill orders, or cataloging library
books, videos, laser disks, etc.

***
Elettronica Santerno S.p.A. developed a system called "SIRIO". Our system
is tailored for number plate recognition and has been installed in the
town of Bologna as an enforcement system for the Limited Traffic Zone
(ZTL). (The first one, and currently the only one, in Italy.)

The address of my company is:

Elettronica Santerno S.p.A.
via G. di Vittorio,3
40020 Casalfiumanese (Bologna) Italy.

Tel. +39 542 666165
Fax. +39 542 666632

For commercial information about our system, please contact our commercial agent:
Busi Consorzio
Via del Tappezziere, 4
40138 Bologna

Tel. +39 51 6010231
Fax. +49 51 534403

ask for Mr. Luigi Merloni or for Mr. Gianni Frezzati.

***
I have written such a program, running under Windows in real time (more or
less). It reads Norwegian number plates, and now also Swedish plates at a
test site in Gothenburg, but it can be trained to read several different fonts.

Please email me directly, that is:

bjoernk@oslonett.no

or contact the company I wrote it for via fax
or mail to Aage Vik Saaghus at:

Digitalteknikk AS
Sagmyra 25,
Postbox 123,
4622 Kristiansand.

Phone: (+47) 380 86866
Fax : (+47) 380 86715

***
For Number Plate Recognition (and other image processing systems) Contact:

Dr Andrew Rogoyski
Logica UK Ltd
Cobham Park
Downside Road
Cobham
Surrey KT11 3LG

Tel (44)171 637 9111 extension 4114
fax (44)1932 869102
e-mail: rogoyskia@logica.com

*** END ***

Thanks again!

Regards, Arnoud Verdwaald.

------------------------------

Date: Wed, 26 Jul 1995 17:28:14 +0100
Organization: University of Wales College of Medicine
Subject: Web vision resources
Keywords: vision, eye movements, visual science

Hello!


The Applied Vision Section of the Department of Diagnostic Radiology
in the University of Wales College of Medicine at Cardiff is now on the
web, and is pleased to offer the following resources:


Internet Information for the Vision Community

A hypertext document listing mailing lists, file archives,
relevant newsgroups, etc.

Eye Movement equipment

A plain text document (hopefully in hypertext soon) listing the
currently available eye movement monitoring equipment.

Eye Movement and Visual Science software

A plain text document containing a far from complete list of eye
movement and visual science related software.


The URL is http://www.cf.ac.uk/uwcm/dr/groups/vision/


Thanks,

David.


P.S.

Plain-text versions of these documents are also available by
email from the eye-movement mailing list archive. For a list of
filenames, send an email with the words:
INDEX EYE-MOVEMENT
to mailbase@mailbase.ac.uk, then send an email containing:
GET EYE-MOVEMENT filename
to the same address.


Dr.David Wooding (wooding@cardiff.ac.uk) UWCM, Cardiff, Wales, UK

The Mornington Crescent List: send email with the words subscribe
mornington-crescent (in message body) to majordomo@mono.city.ac.uk

------------------------------

Date: Thu, 27 Jul 1995 10:26:01 +0800 (SST)
From: "S.Z. Li" <szli@szli.eee.ntu.ac.sg>
Subject: Re: New Book on Computer Vision and Image Processing


> AUTHOR: S.Z. Li
> TITLE: Markov Random Field Modeling in Computer Vision
> PUBLISHER: Springer-Verlag, 1995
> SUBJECTS: Computer Vision, Image Processing,
> Pattern Recognition, Markov Random Fields
> DESCRIPTION: 260 pp., 71 figs.
>
> A hypertext version can be browsed at www address
> http://ntuix/~szli/book_1/book.html

The correct www address is
http://www.ntu.ac.sg/~szli/book_1/book.html

------------------------------

Date: Thu, 27 Jul 1995 12:26:13 +0200
From: ronse@dpt-info.u-strasbg.fr
Subject: Dissertation available (in French)

The following dissertation (IN FRENCH) is available in its printed
version. If you want a copy of it, send me a request by e-mail, with your
complete postal address (including zip code and country).

Maybe one day we will succeed in making a PostScript file from it, in which
case it will be put on our anonymous ftp server.

Segmentation d'images :
approche région par analyse multi-échelle et multi-résolution
(Image segmentation: a region-based approach using multi-scale and
multi-resolution analysis)

Luc Decker

Mémoire de D.E.A. (a French postgraduate research thesis)

It is an experimental work dealing with the regularization of the watershed
algorithm on the gradient of a grey-level image. Three approaches were
tried for the regularization of the gradient:
(1) morphological gradient after alternating sequential filtering of the
image: poor results, the shape of the structuring element is imposed on the
image;
(2) morphological gradient after Gaussian smoothing of the image: poor
results, bizarre things happen (not illustrated);
(3) convolution by the gradient of a Gaussian (sampled according to
Hummel-Lowe): good results for sigma=4; for large sigma (16 or more) the
contours are badly localized.
Other factors can be controlled in order to reduce the over-segmentation:
(a) choice of 4- or 8-connectivity, (b) minimal depth of basins, and (c)
number of quantization levels in the gradient image.
The approach was tested on the Lenna image, without using markers for
the basin generation.
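Approach (3), convolving with the gradient of a Gaussian, can be sketched with standard tools: a larger sigma smooths the gradient image, leaving fewer regional minima, hence fewer watershed basins and less over-segmentation. This is an illustrative reconstruction using NumPy/SciPy, not the dissertation's code; the noise image and parameters are invented.

```python
import numpy as np
from scipy import ndimage as ndi

def regularized_gradient(image, sigma):
    # approach (3): convolve the image with the gradient of a Gaussian
    return ndi.gaussian_gradient_magnitude(image.astype(float), sigma)

def count_basins(grad, connectivity=4):
    # each regional minimum of the gradient seeds one watershed basin,
    # so counting minima measures the degree of over-segmentation
    structure = (ndi.generate_binary_structure(2, 1) if connectivity == 4
                 else ndi.generate_binary_structure(2, 2))
    minima = grad == ndi.minimum_filter(grad, footprint=structure)
    _, n = ndi.label(minima, structure=structure)
    return n

rng = np.random.default_rng(0)
noisy = rng.random((128, 128)) * 255.0
for sigma in (1, 4, 16):
    print(sigma, count_basins(regularized_gradient(noisy, sigma)))
```

Factor (a) from the list above, the choice of 4- or 8-connectivity, corresponds to the `connectivity` parameter here.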

Christian Ronse ronse@dpt-info.u-strasbg.fr
LSIIT - URA 1871
Universite Louis Pasteur
UFR de Mathematique et Informatique
7 rue Rene Descartes Tel. (33) 88.41.66.38
F-67000 Strasbourg Fax. (33) 88.61.90.69

------------------------------

Date: Wed, 26 Jul 1995 15:33:56 -0600
From: jckrumm@sirius.isrc.sandia.gov (John C. Krumm)
Subject: Ph.D. computer vision opening at Sandia Labs

Ph.D Staff Opening for Computer Vision/Image Processing
The Intelligent Systems and Robotics Center
Sandia National Laboratories
Albuquerque, NM

Sandia National Laboratories in Albuquerque, NM has a Ph.D. staff
position open for a qualified candidate interested in research in
computer vision and/or image processing.

All applicants must be U.S. citizens and have earned a doctorate in
Electrical Engineering, Computer Science, Robotics, or a related
field. Candidates must have expertise in computer vision or image
processing. Experience in numerical algorithms, the "C" programming
language, and image hardware is highly desirable. We are seeking a
productive, motivated, flexible, friendly individual. We place a
premium on the ability to communicate and get along with others.

This staff opening is in the Intelligent Systems Sensors & Controls
Department, which is part of Sandia's Intelligent Systems & Robotics
Center. We are looking for someone to augment a small but growing
computer vision group within the Department. This person will be
expected to support one or two ongoing Center projects as well as
develop his or her own original research thrust that would eventually
support our Center's goals.

Sandia National Laboratories is a prime contractor to the U.S. DOE and
is managed by Lockheed Martin. The Intelligent Systems & Robotics
Center at Sandia is recognized as a national leader in robotics,
currently with a staff of about 160 people working on over 25 active
projects including:

Robotic edge finishing Automated assembly
Hazardous waste cleanup Automated dismantlement
Automated path planning Adaptive machining
Dexterous manipulation Sensor systems
Nondestructive evaluation Seam tracking
Industrial inspection Medical imaging

Opportunities for ongoing project support and research in computer
vision include industrial part inspection, object recognition and
location, 3D sensing, and 3D ultrasound reconstruction.

This job offers

* ample support in terms of equipment and technicians
* internal funding opportunities for self-directed research
* thousands of other science and engineering Ph.D.'s for collaboration
* encouragement for publication and interaction with research community

Please send C.V. and short description of research interests to:

John Krumm
Sandia National Laboratories
MS 0949
P.O. Box 5800
Albuquerque, NM 87185-0949

jckrumm@sandia.gov

------------------------------

Date: 27 Jul 1995 19:09:21 GMT
From: dnm@graphics.upenn.edu (Dimitris Metaxas)
Organization: Center for Human Modeling and Simulation
Subject: Special Issue of CVIU

Please make sure you submit the papers according to CVIU guidelines
mentioned below.


CALL FOR PAPERS

Special Issue of COMPUTER VISION AND IMAGE UNDERSTANDING on
PHYSICS-BASED MODELING AND REASONING IN COMPUTER VISION

Physics-based techniques have emerged as a major trend in computer
vision and related fields because of their effectiveness in object
representation, image segmentation, shape estimation, motion analysis,
and object recognition, as well as in application areas such as
medical imaging.

Papers are solicited for a special issue of Computer Vision and Image
Understanding on the subject of Physics-Based Modeling and Reasoning
in Computer Vision. The Guest Editors for the special issue are
Dimitri Metaxas of the University of Pennsylvania and Demetri
Terzopoulos of the University of Toronto.

The purpose of the special issue will be to showcase new physics-based
methodologies for the analysis of complex scenes. Of particular
interest is research on the integration of physics-based modeling and
reasoning techniques to improve the results of shape and motion
estimation and object recognition in scenes with multiple objects and
significant occlusion.

All submissions will be refereed in accordance with the CVIU
guidelines. Manuscripts will not be accepted if they have been
previously published, or if they present algorithms that have not been
evaluated on complex scenes. Possible topics for submitted papers
include, but are not limited to

1) physics-based segmentation and shape representation of multiple
objects in complex scenes with occlusion;

2) physics-based motion estimation and object tracking in scenes with
multiple rigid and/or nonrigid objects;

3) integration of physics-based modeling, reasoning and recognition techniques;

4) applications: medical image analysis; analysis and recognition of
faces, gestures, etc.

Please submit four copies of your paper ACCORDING TO CVIU GUIDELINES
stated in the INFORMATION FOR AUTHORS section of the journal
(double-spaced and on one side) to:

Professor Dimitri Metaxas
Dept. of Computer and Information Science
University of Pennsylvania
200 South 33rd St.
Philadelphia, PA 19104-6389
U.S.A.

Schedule:

Deadline for submission of manuscripts: October 30, 1995
First set of reviews to authors: January 30, 1996
Final manuscripts due: April 30, 1996
Publication of special issue: December, 1996

For further information, please contact

Dimitri N. Metaxas (dnm@central.cis.upenn.edu)
or
Demetri Terzopoulos (dt@vis.toronto.edu)

------------------------------

Date: Thu, 27 Jul 1995 00:17:21 GMT
From: zheng@matilda.vut.edu.au (Zheng Zhi Jie)
Organization: Victoria University of Technology
Subject: CFP Visual'96
Summary: First International Conference on Visual Information Systems
Keywords: Visual Information Systems


Call for Papers

First International Conference on Visual Information Systems

VISUAL '96
5-6 February 1996
Melbourne, Victoria
AUSTRALIA


Aims and Scope

With the widespread use of multimedia information, there is a pressing
requirement to efficiently manage, store, manipulate and retrieve images
and pictorial data in a wide spectrum of applications. As many organisations
currently maintain large collections of images, the need for flexible visual
information management is already critical. Future information systems in
commercial and scientific applications will have a high visual content, and
it is necessary to integrate the visual and image components into the
architecture of organisational information systems. Such visual components
will tend to permeate all information systems and in time will not be regarded
as a distinct element, but will form an essential part of any information
system, working alongside and in harmony with structured information
processing components. The conference will focus attention on the management
of visual information and will include, but is not restricted to, the
following topics:

* Architecture of visual information systems
* Data modelling for visual information systems
* Memory organisation and management
* Feature recognition and extraction
* Feature and content indexing
* Picture description and representation languages
* Query model and paradigms for visual information
* Query language for visual information retrieval
* Content-based search and retrieval
* Integration of visual and non-visual information
* Compression and delivery of visual information
* Image processing and manipulation
* Parallel processing in visual information systems
* Specific application areas of visual information systems

Both work in progress and fully developed systems will be of interest
to the conference.


Keynote Addresses:

Shi-Kuo Chang, University of Pittsburgh, USA
Tosiyasu Kunii, University of Aizu, Japan


Paper Submission:

Authors should submit three copies of an extended abstract consisting of
between two and four pages to the Program Chair. The abstract should
include the authors' names, affiliation, telephone and fax numbers, postal
and email addresses, and provide sufficient details to allow the merits of
the paper to be assessed. Authors are encouraged to submit the abstracts
electronically in Postscript form to visual96@matilda.vut.edu.au.
Abstracts will be reviewed internationally. Detailed instructions for
manuscript preparation will be sent at the time of acceptance
notification, and are also available from the Web page (see Further
Information).

Accepted papers must be presented at the Conference with the presenting author
registering as a delegate in order for the paper to be included in the
proceedings. Conference proceedings will be published and distributed to
participants at the conference. It is also planned to publish the papers in
book form after the conference.


Important Dates:

Expression of interest: 2 September 1995
Extended abstract due: 2 October 1995
Notification of acceptance: 3 November 1995
Camera ready paper due: 11 December 1995
Conference: 5-6 February 1996


Organising Chair:

Audrey Tam
Department of Computer & Mathematical Sciences
Victoria University of Technology
PO Box 14428, MMC
Melbourne, Victoria 3000, AUSTRALIA
Email: amt@matilda.vut.edu.au


Program Chair:

Clement Leung
Department of Computer & Mathematical Sciences
Victoria University of Technology
PO Box 14428, MMC
Melbourne, Victoria 3000, AUSTRALIA
Email: amt@matilda.vut.edu.au


Program Committee:

David Bell, University of Ulster
Terry Caelli, Curtin University of Technology
Alfonso Cardenas, University of California, Los Angeles
Shi-Kuo Chang, University of Pittsburgh
Francis Chin, University of Hong Kong
Roland Chin, Hong Kong University of Science & Technology
Bill Cody, IBM Almaden Research Center
John Debenham, University of Technology Sydney
Tharam Dillon, LaTrobe University
Borko Furht, Florida Atlantic University
Ricki Goldman-Segall, University of British Columbia
Bill Grosky, Wayne State University
Ramesh Jain, University of California, San Diego
Kingsley Nwosu, AT&T Bell Laboratories
Tosiyasu Kunii, University of Aizu
Zhi-Qiang Liu, University of Melbourne
Wo-shun Luk, Simon Fraser University
Song De Ma, Chinese Academy of Science
Erich Neuhold, T.H. Darmstadt
P. Venkat Rangan, University of California, San Diego
Nalin Sharda, Victoria University of Technology
Bala Srinivasan, Monash University
Imants Svalbe, Monash University
Paul Swatman, Swinburne University of Technology
Rodney Topor, Griffith University
Zhi Jie Zheng, Victoria University of Technology


Expression of Interest:

Please respond by 2 September 1995
Name:....................................................................
Title First Initial(s) Last
Organisation:..........................................................
Address: ..............................................................
Email:.................................................................
Tel: (..........).......................................................
Fax: (..........)..........................................................

[] I plan to attend Visual '96. Please send me registration information.
[] I plan to attend Visual '96 and present my work as a poster.
[] I plan to attend Visual '96 and submit a paper on the following topic:
.................................................................
[] I may not be able to attend Visual '96, but would like to order the
Proceedings.

Further Information:

Further information and updates are available on the World Wide Web at:
http://dingo.vut.edu.au/~visual96
or by contacting:

Visual '96 Conference Secretariat
Department of Computer & Mathematical Sciences
Victoria University of Technology
PO Box 14428, MMC
Melbourne, Victoria 3000, AUSTRALIA
Email: visual96@matilda.vut.edu.au
Tel: +61 3 688 4249 Fax: +61 3 688 4050

------------------------------

Date: 31 Jul 1995 16:49:32 GMT
From: Ulrich Nehmzow <ulrich@cs.cmu.edu>
Organization: School of Computer Science, Carnegie Mellon
Subject: Road Following Vehicles

Sun Annual Lecture 1995
***********************

Teaching Vehicles to See
========================
by Ernst D. Dickmanns
======================

31st August and 1st September 1995
++++++++++++++++++++++++++++++++++

o About the Sun Annual Lecture
o Synopsis
o About the lecturer

About the Sun Annual Lecture
============================

The Sun Annual Lecture in Computer Science at the University of
Manchester provides an opportunity for eminent computer scientists to
give a series of lectures introducing a wider audience to current
research work in their area. The lectures occupy about eight hours
over two days, giving ample time for questions and discussion.

Registration for the lecture series is 90 pounds (45 pounds for
registered students), which includes lunch each day, and supporting
material for the lectures. To register, print the registration form,
fill it in, and return it to the address shown, together with your
payment. Registration is free for members of the University of
Manchester, who can register by e-mail or telephone (see below) or by
printing and returning the alternative registration form.

The Annual Lecture dinner will be held on the night of the 31st at
Yang Sing, one of the best Chinese restaurants in the country. If you
would like to attend this, please tick the box in the registration
form and include the cost with the registration payment. Please
indicate if you are vegetarian or have other dietary preferences.

We can arrange bed and breakfast accommodation at St. Anselm Hall (a
student hall). Please indicate on the registration form if you want us
to make bookings for you. Note that payment for accommodation is made
separately to the Hall before departure.

For further information, please contact:

Jenny Fleet,
Annual Lecture,
Department of Computer Science,
The University,
Oxford Road,
Manchester M13 9PL,
ENGLAND.
0161-275 6130
annual-lecture@cs.man.ac.uk
http://www.cs.man.ac.uk/events/sun-lecture.html


Synopsis
========

Each lecture will last approximately one hour.

Introduction

Emphasis is put on the task context for machine vision since this
determines the side constraints dominating visual interpretation and
hypothesis generation for ecological dynamic scene understanding. In
addition to 3-D space, the dimension of time is thoroughly considered
for making image sequence analysis efficient by using known invariants
of objects in all four basic dimensions and for preventing
combinatorial explosion of feature grouping. Time derivatives and
their integrals are exploited in a systematic way, both for perception
and for control; for the general case of observing moving objects with
cameras on board a moving vehicle, inertial sensors like
accelerometers and rate sensors are very important for high
performance dynamic vision. In this context, perception means, more
precisely, the combined use of inertial, odometric and image sensors.

This lecture will include videos of road and air vehicle guidance.


Task analysis
Typical sets of tasks. The basic four dimensions (4-D): 3-D space and time.
Coordinate systems, resolution scales, multiple scale modelling for
different aspects of the vision task. Environmental conditions and objects:
representation according to the visual task. The body and its control:
mission decomposition (mission elements, types and sequencing); perceptual
and behavioural capabilities required; functional realizations through
feedback and feedforward control; symbolic representations of behavioural
capabilities. Examples: road vehicle guidance, aircraft landing approach.

The 4-D approach to dynamic machine perception and control
Generic object and process models for feature based machine vision.
Differential and integral representations in space and time. The idea of
Gestalt (shape and aspect conditions). The instantiation problem and the
initial orientation phase. Fast recursive updates through prediction error
feedback (Kalman filters and derivatives). Model based integration of image
and inertial measurement data. Modular overall system design. The dynamic
data base (DDB) for object-oriented exchange of actual best estimates.
Active viewing direction control and attention focussing. A survey of
control computation.
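The "fast recursive updates through prediction error feedback (Kalman filters)" mentioned above can be illustrated with a minimal one-dimensional constant-velocity filter. The matrices and noise values below are invented for the example and are not Dickmanns' actual formulation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter: the state estimate is corrected
# by feeding back the prediction error (innovation) on each new image frame.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # only position is measured
Q = 1e-4 * np.eye(2)                     # process noise (illustrative)
R = np.array([[0.25]])                   # measurement noise, std 0.5

x = np.zeros((2, 1))                     # initial state estimate
P = np.eye(2)                            # initial covariance
rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 0.5
for _ in range(50):
    true_pos += true_vel
    z = true_pos + rng.normal(0.0, 0.5)  # noisy position measurement
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    innovation = z - (H @ x)[0, 0]       # prediction error
    S = (H @ P @ H.T + R)[0, 0]          # innovation covariance
    K = P @ H.T / S                      # Kalman gain
    x = x + K * innovation               # feed the prediction error back
    P = (np.eye(2) - K @ H) @ P

print(float(x[1, 0]))                    # velocity estimate, near the true 0.5
```

The same recursion, applied to object pose rather than a scalar position, is what makes the 4-D approach cheap enough to run at frame rate: each image only refines a prediction instead of being interpreted from scratch.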

Sensory inputs and data interpretation
Active bifocal vision for both a large field of view and good resolution
in the region of special interest. Parameter selection. Data versus
information. Features as carriers of information. The KRONOS software
package for edge-feature extraction. Efficiently controlled feature
extraction through temporal predictions. Bottom-up versus top-down
interpretation schemes. Object processor groups for confining communication
requirements. Inertial and other conventional measurements. Object- and
expectation-based data fusion. Perceptual organizations and resulting
capabilities. Inertial stabilization, saccades and smooth pursuit as active
vision modes. Application examples in road vehicle guidance.

Vision system integration and task performance
Vehicle control by a 3-layer hierarchy for decoupling fast reflex-like
reactions from event-triggered manoeuvre elements and knowledge based
behavioural decisions. Examples from road vehicle guidance: lane and
distance keeping by feedback control; lane changes by feedforward control
time histories; superposition of visual feedback components for handling
perturbations; decisions for lane change by situation analysis and
assessment in the task context; mode transitions and monitoring of actual
behaviours.
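The reflex layer's "lane keeping by feedback control" can be sketched as a simple proportional-derivative law on lateral offset and heading error. The toy kinematics and gains below are assumptions for illustration, not the control law of the actual vehicles.

```python
import math

def lane_keeping_steer(offset, heading_err, kp=0.4, kd=1.2):
    """Reflex-layer feedback: steer against lateral offset and heading error.
    Gains are illustrative only."""
    return -(kp * offset + kd * heading_err)

# toy kinematics: the vehicle starts 1 m off-centre and is steered back
offset, heading, speed, dt = 1.0, 0.0, 1.0, 0.1
for _ in range(400):
    heading += lane_keeping_steer(offset, heading) * dt
    offset += speed * math.sin(heading) * dt

print(round(offset, 3))   # lateral offset after 40 s of simulated driving
```

A lane change, by contrast, would be commanded by the manoeuvre layer as a precomputed feedforward steering time history, with this feedback law resuming afterwards, which is exactly the decoupling the 3-layer hierarchy provides.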

Example: road vehicle guidance
Coherent discussion of visual road vehicle guidance tasks. The test
vehicles VaMoRs (a five ton van) and VaMoRs-P (VaMP for short, a passenger
car), their equipment and performance levels. Motorway driving at high
speeds with multiple objects to react to; the approach to multiple object
detection and tracking. Driving on state roads and negotiating turn-offs
onto cross roads including active vision strategies; region-based feature
extraction with the `triangle' algorithm. Driving on unsurfaced minor roads
and recognizing forks in the road. Visual recognition of vertical
curvatures with low and high spatial frequency components.

There will be video of these vehicles in action.

Full 3-D motion: landing approaches by dynamic machine vision
Modelling of perspective projection with constrained motion in all six
degrees of freedom (3 translations and 3 rotations) for visual recognition
of the state relative to the landing strip. Fixing the view relative to a
point on the horizon for easier data interpretation. Initialization through
systematic search. Selection of image evaluation areas (windows) for high
frequency recursive data interpretation. Handling perturbations from
turbulence using inertial measurement data. Delay-time compensation for
visual data interpretation. Transputer hardware for dynamic machine vision.
Hardware-in-the-loop simulation results, and flight experiments with a twin
turbo-prop Do-128.

Visual grasping of a free-floating object in space, and conclusions
In May 1993, the Space Shuttle/Spacelab-D2 Robot Technology Experiment
(ROTEX) was set up in low Earth orbit with a camera in the robot arm and the
computing facilities on the ground in Oberpfaffenhofen near Munich. This
was the first machine tele-operation experiment, with a large time delay of
about six seconds. The lecture will discuss man/machine interaction for
initialization, automatic delay compensation and tele-grasping.

Lines of development for dynamic machine vision. In the near future,
computing power will be sufficient for solving practical tasks with
affordable vision systems, but robustness has yet to be improved.



About the lecturer
==================

Professor Dr.-Ing. Ernst D. Dickmanns is currently Professor for
Control Engineering in the Department of Aerospace Engineering at the
Universitaet der Bundeswehr in Munich. He began his studies in
aerospace engineering at RWTH Aachen from 1956 to 1961. He then joined
the German National Aerospace Research Establishment (DFVLR) in
Muelheim-Ruhr and Oberpfaffenhofen as a researcher in the field of
optimal control and trajectory shaping. From 1964 to 1965 he undertook
graduate study at Princeton University on a NASA fellowship, and on
his return to DFVLR became head of the trajectory dynamics section of
the Institut fuer Dynamik der Flugsysteme. He received his doctorate
from RWTH Aachen in 1969.

At NASA, he investigated Space Shuttle Orbiter design. From 1972 to
1974, he was in charge of designing the transfer and positioning
procedures for the first European geostationary satellite,
Symphonie. In 1974, he became acting head of the DFVLR Research Centre
in Oberpfaffenhofen.

He took up his current appointment at the Universitaet der
Bundeswehr in 1975, where he founded the Institute for System Dynamics
and began the research programme on real-time machine vision for
autonomous vehicle navigation. This has undertaken pioneering work in
visual dynamic scene understanding and real-time autonomous visual
guidance for road, air and space vehicles. Under his direction, seven
autonomous road vehicles have been equipped with vision systems, of
which four are currently in operation in normal traffic on the German
autobahns. They have accumulated over 6000 km of fully autonomous
driving with both longitudinal and lateral degrees of freedom. Similar
techniques have been applied to aircraft landing approaches, grasping
of a free-floating object in satellite orbit, and helicopter control.

Last updated 9/5/95 by Tim Clement (timc@cs.man.ac.uk)

------------------------------

End of VISION-LIST digest 14.29
************************
