VISION-LIST Digest    Thu Apr 27 17:10:39 PDT 95     Volume 14 : Issue 15 

- ***** The Vision List host is TELEOS.COM *****
- Send submissions to Vision-List@TELEOS.COM
- Vision List Digest available via COMP.AI.VISION newsgroup
- If you don't have access to COMP.AI.VISION, request list
membership to Vision-List-Request@TELEOS.COM
- Access Vision List Archives via anonymous ftp to TELEOS.COM

Today's Topics:

Want CCD video camera with frame-based shuttering
Synchronization of Digital Cameras
Job Announcement
Post-Doc Opening
Lectureship in Information Engineering
Offering POST-DOC POSITION (new deadline)
Papers Available: Object Indexing/Face Recognition
Urgent notice: ACCV'95 call-4-papers
Preliminary Program for ICCV 95
CFP: MTAP special issue on visual media retrieval
Survey on model-based vision (long)

----------------------------------------------------------------------

Date: Sun, 23 Apr 95 18:01:34 -0500
From: Chiun-Hong Chien <chien@superman.jsc.nasa.gov>
Subject: Want CCD video camera with frame-based shuttering

Extra-Vehicular Activity Helper and Retriever is a space robotics
project that requires vision-guided grasping of free-floating/flying
objects. We are looking for a CCD video camera capable of imaging
moving objects. Most inexpensive commercial CCD cameras are not
suitable for imaging moving objects because their shuttering mechanism
is field-based rather than frame-based. As a result, the image of an
observed object is split into two fields corresponding to the object's
positions at times t and (t + 1/60) sec, respectively.
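
As a rough illustration (the array shapes, field rate, and helper names
below are assumptions for NTSC-style interlaced video, not anything
specific to a particular camera), the split can be seen by separating an
interlaced frame into its two fields:

import numpy as np

def split_fields(frame):
    # An interlaced frame (H x W) is really two fields exposed ~1/60 s apart.
    even_field = frame[0::2, :]   # scan lines exposed at time t
    odd_field = frame[1::2, :]    # scan lines exposed at time t + 1/60 s
    return even_field, odd_field

def field_tear_pixels(object_speed_px_per_s, field_dt=1.0 / 60.0):
    # Apparent offset between the two fields for an object moving in the image.
    return object_speed_px_per_s * field_dt

# Example: an object moving 600 pixels/s across the image appears offset
# by about 10 pixels between the two fields of a single "frame".
print(field_tear_pixels(600.0))   # -> 10.0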

We have looked into Elmo's ME411E, which can be operated in three
modes: field accumulation interlace, frame accumulation interlace, and
non-interlace. However, their technical people in the US are not very
familiar with the ME411E and could not give us instructions on how to
configure the camera to obtain clear images of moving objects. We are
also looking into the Pulnix 9700 (whose price is a bit higher), and
we are waiting for information from Cohu. We would appreciate it if
any vision groups could share their experience with frame-based
shuttering CCD cameras or provide related information.

Thanks in advance,
C. H. Chien
chien@superman.jsc.nasa.gov

------------------------------

Date: 27 Apr 1995 00:24:45 GMT
From: Sanjiv Singh <ssingh@FRC2.FRC.RI.CMU.EDU>
Organization: Field Robotics Center
Subject: Synchronization of Digital Cameras

We are trying to synchronize two digital cameras so that their
shutters click very close to each other in time, roughly 1 ms apart.
If you have done this, I'd like to hear about your solution. In
general, if you are imaging points 20 to 30 m away with stereo
cameras, it seems very important that the two shutters open at about
the same time, especially if the cameras are moving. However, if an
RS-170 signal is being digitized, the two images could in the worst
case be taken 33 ms apart.
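
As a rough feel for why this matters (the focal length, platform speed,
and range below are hypothetical numbers, not from any particular
system), a lateral platform motion of v m/s over a shutter offset dt
shifts one image by about f*v*dt/Z pixels for a point at range Z, which
shows up directly as a disparity (and hence range) error:

def shutter_offset_disparity_error(f_px, v_mps, dt_s, range_m):
    # Approximate disparity error (pixels) caused by unsynchronized shutters.
    return f_px * v_mps * dt_s / range_m

# Example: 800-pixel focal length, 2 m/s lateral motion, 25 m range.
for dt in (0.001, 0.033):   # 1 ms target vs. worst-case RS-170 (33 ms)
    err = shutter_offset_disparity_error(800.0, 2.0, dt, 25.0)
    print("dt = %g s -> ~%.2f px disparity error" % (dt, err))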

Sanjiv Singh ssingh@ri.cmu.edu
System Scientist
Field Robotics Center voice/fax: 412.268.6577
Carnegie Mellon University

------------------------------

Date: Thu, 27 Apr 95 12:44:10 EDT
From: msl@cns.NYU.EDU (Michael Landy)
Subject: Job Announcement

New York University Center for Neural Science and Courant Institute
of Mathematical Sciences

As part of its Sloan Theoretical Neuroscience Program, the
Center for Neural Science at New York University, together
with the Courant Institute of Mathematical Sciences, is planning
to hire an Assistant Professor (tenure-track) in the field of
Theoretical Visual Neuroscience. Applicants should have a background
in mathematics, physics, and/or computer science with a proven
record of research in visual science or neuroscience.
Applications (deadline June 30, 1995) should include a CV, the
names and addresses of at least three individuals willing to write
letters of reference, and a statement of research interests.
Send to: Sloan Search Committee,
Center for Neural Science, New York University,
4 Washington Place, New York NY 10003.
New York University is an affirmative action/equal opportunity employer.

------------------------------

Date: Fri, 21 Apr 1995 18:17:26 -0400
From: tboult@cortex.EECS.Lehigh.Edu (Dr. Terrance Boult)
Subject: Post-Doc Opening

Lehigh University Department of Electrical Engineering and Computer Science
has an opening, starting Summer 95, for a Post-doctoral fellow as part of
Dr. T. Boult's recently funded (DOD/DARPA MURI) project on autonomous sensor
systems for manufacturing. This position will involve a mixture of vision
science and software science. The ideal candidate will have hands-on
experience in 3 or more of the following:
1) object-oriented software (C++)
2) the ARPA image understanding environment (IUE)
3) physics-based vision
4) industrial vision/inspection
5) distributed computation
6) systems integration
7) sensor fusion

The post-doc will be based at Lehigh University and will interact primarily
with the project PI, T. Boult, as well as with Dr. Boult's students. However,
the project is a multi-disciplinary, multi-university project, and the post-doc
will be involved with the integration of components from all sites. Hence they
will also interact with other team members, including S. Nayar,
P. Allen and J. Kender of Columbia University's vision/robotics lab;
R. Wallace of NYU; R. Blum (Sensor Fusion/Signal Processing, Lehigh); and
R. Nagel (Manufacturing, Lehigh).

(Note we are looking for 1-3 good Ph.D. students too :-)



Here is some background information on the lab and on Lehigh. For more info
on Lehigh, visit "http://www.lehigh.edu"; for info on the EECS department,
visit "http://www.eecs.lehigh.edu".

Lehigh University is in Bethlehem, PA, in eastern Pennsylvania, about 1.5 hours
north of Philadelphia, 1.5 hours northeast of Princeton, and 2 hours west of
NYC. It sits on the edge of town with a beautiful 1600-acre campus. It has M.S.
and Ph.D. programs in CS and EE and an M.S. program in Comp. Eng. It has strong
undergraduate programs in Comp. Eng., CS and EE.


Terrance Boult heads the vision and software technology (VAST) laboratory in
the Department of Electrical Engineering and Computer Science at Lehigh
University. By fall the lab will include the following equipment: a Sun
multimedia multiprocessor workstation with real-time video and
frame-grabber, a DATACUBE MV250, an SGI Indigo-2 Extreme, 4-6 Sun
workstations, 2 SPARC laptops, x86-based Unix machines, and various BW, RGB
and infrared cameras. (We also have access to the department's NSF
CISE-funded Giga-Op computation/simulation facility.)



Interested parties should contact Terry Boult via email
(boult@eecs.lehigh.edu), providing a short (2-3 paragraph) description of their
vision/system experience, name/email/phone of at least 3 references, and a
full vita (.ps form okay for the vita).


Lehigh University is an Equal Opportunity/Affirmative Action Employer

------------------------------

Date: 24 Apr 1995 14:53:56 GMT
From: ajr@eng.cam.ac.uk (Tony Robinson)
Organization: Engineering Department, Cambridge University, England.
Subject: Lectureship in Information Engineering

UNIVERSITY OF CAMBRIDGE
DEPARTMENT OF ENGINEERING

University Lectureship/Assistant Lectureship
in Information Engineering

A vacancy exists for a University Lecturer or University Assistant
Lecturer in the Department of Engineering's Information Engineering
Division to take up appointment on 1 October 1995, or as soon as
possible thereafter. The person appointed will be expected to develop
a significant research activity and contribute to undergraduate
teaching, including lecturing, organising programming laboratories and
practical classes, and supervising 4th year projects. Preference will
be given to candidates who have general expertise in computing, and
specialist skills in one of the following areas: computer vision,
speech processing, robotics, neural networks, software engineering and
computer communications.

The pensionable scales of stipends for a University Lecturer are
£17,813 rising by eleven annual increments to £27,473, and for a
University Assistant Lecturer £14,756 rising by six annual increments
to £19,326.

Further information and application forms may be obtained from the
Secretary of the Appointments Committee for the Faculty of
Engineering, Faculty Board Office, Dept of Engineering, Trumpington
Street, Cambridge, CB2 1PZ, to whom completed application forms
together with a curriculum vitae should be sent so as to reach him not
later than 31 May 1995. Candidates wishing to discuss the post
further before applying should contact Professor S.J. Young (Tel:
01223 332654; Email: sjy@eng.cam.ac.uk).

The University follows an equal opportunities policy.

------------------------------

Date: 24 Apr 1995 07:46:24 GMT
From: greg@epidaure.inria.fr (Gregoire Malandain)
Organization: INRIA Sophia Antipolis, France
Subject: Offering POST-DOC POSITION (new deadline)

The Epidaure Group, at INRIA Sophia-Antipolis, is offering
a post-doctoral position in Medical Image Analysis, starting
in Autumn or Winter 1995, for one year.

The topics of research are 3D medical image understanding,
all aspects of image registration, augmented reality for
therapy planning, and virtual patients for surgery rehearsal.

The candidate must be a European citizen (from outside France),
or a non-European citizen who has been working
in an EEC country other than France for at least a year,
and must not have previously received funding for Human Mobility
from the EEC.

Applications must be sent to
Dr. Nicholas Ayache
INRIA
2004 Route des Lucioles
F-06902 Sophia-Antipolis
France
email: Nicholas.Ayache@sophia.inria.fr

The deadline has now been postponed by one month,
which means it is possible to send an application until

MAY 30, 1995

for a start in October 95. Early application is nevertheless
strongly encouraged.

------------------------------

Date: 21 Apr 95 22:01:51 GMT
From: rao@cs.rochester.edu (Rajesh Rao)
Organization: University of Rochester Computer Science Dept
Subject: Papers Available: Object Indexing/Face Recognition

The following papers on object indexing/face recognition using
spatial filters and a sparse distributed memory are now available via
ftp:

Rajesh P. N. Rao and Dana H. Ballard, "Object Indexing using an Iconic
Sparse Distributed Memory", ICCV'95 (to appear).

ftp://cs.rochester.edu/pub/u/rao/papers/iccv95.ps.Z


Rajesh P. N. Rao and Dana H. Ballard, "Natural Basis Functions and
Topographic Memory for Face Recognition", IJCAI'95 (to appear).

ftp://cs.rochester.edu/pub/u/rao/papers/ijcai95.ps.Z

Rajesh Rao Internet: rao@cs.rochester.edu
Dept. of Computer Science VOX: (716) 275-2527
University of Rochester FAX: (716) 461-2018
Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/

------------------------------

Date: 26 Apr 1995 09:24:34 +0800
From: hw@ntu.ac.sg (Wang Han)
Organization: Nanyang Technological University
Subject: Urgent notice: ACCV'95 call-4-papers

Dear colleagues,

If you are submitting papers for ACCV'95, please quote on the first
page the category of your paper:


* image processing * texture analysis
* motion analysis and tracking * sensor fusion
* active vision * medical imaging
* physics based vision * invariant features
* segmentation and grouping * feature extraction
* image understanding * pattern recognition
* learning in computer vision * robot and machine vision
* mobile robot and navigation * remote sensing
* medical imaging * real-time vision systems
* parallel algorithms * virtual reality
* applications of computer vision * image database and retrieval


/^\ Dr. Wang Han , School of EEE
--------- / \ Nanyang Technological University
| |--| / ` \ Singapore 2263
---|--- |--|/ ----/ \ voice: (+65) 799-1253
| |--| / (direct line&answer machine)
---------- |-----| email: hw@ntuix.ntu.ac.sg (internet)
|-----| hw@ntuvax.bitnet

------------------------------

Date: Fri, 21 Apr 95 22:59:27 EDT
From: welg@ai.mit.edu (W. Eric L. Grimson)
Subject: Preliminary Program for ICCV 95

If it is not too long, I would appreciate your posting the following item:

Preliminary Program for ICCV 95
Cambridge MA
June 20 -- 23, 1995

** Information about registration and accommodation for the conference
has been previously posted, and can be requested by contacting
gmfitz@mit.edu or welg@ai.mit.edu.

TUESDAY, JUNE 20

SESSION 1 -- Opening, IUE, Recognition -- 8:00 -- 9:40

Opening welcome, E. Grimson (MIT)

Towards a Unified IU Environment: Coordination of Existing IU Tools
with the IUE, Charles Kohl (Amerinex), Jeffrey Hunter (Amerinex),
Cynthia Loiselle (Amerinex).

Recognition using region correspondences, Ronen Basri (Weizmann), David
Jacobs (NEC).

Alignment by Maximization of Mutual Information, Paul Viola (MIT), William
Wells III (MIT, Harvard Med).

Object indexing using an iconic sparse distributed memory, Rajesh Rao
(Rochester), Dana Ballard (Rochester).

SESSION 2 -- Calibration, Navigation -- 10:10 -- 11:50

Robot Aerobics: Four easy steps to a more flexible calibration,
Daniel Stevenson (Iowa), Margaret Fleck (Iowa).


Head-eye calibration, Mengxiang Li (KTH), Demetrios Betsis (KTH).


Weakly-Calibrated Stereo Perception for Rover Navigation,
Luc Robert (INRIA), Michel Buffa (Lab. I3S), Martial Hebert (CMU).


An integrated stereo-based approach to automatic vehicle guidance,
Q.-T. Luong (Berkeley), J. Weber (Berkeley), D. Koller (ECRC), J.
Malik (Berkeley).


Active visual navigation using non-metric structure,
Paul Beardsley (Oxford), Ian Reid (Oxford), Andrew Zisserman (Oxford),
David Murray (Oxford).

SESSION 3 -- Shape Recovery -- 1:30 -- 2:30


Shape from shading with interreflections under proximal light source:
3D shape reconstruction of unfolded book surface from a scanner image,
Toshikazu Wada (Okayama), Hiroyuki Ukida (Okayama), Takashi Matsuyama (Okayama).

Shape and model from specular motion,
Jiang Yu Zheng (Kyushu), Yoshihiro Fukagawa (Kyushu), Norihiro Abe (Kyushu)

Reflectance function estimation and shape recovery from image
sequence of a rotating object,
Jiping Lu (UBC), Jim Little (UBC)

POSTER SESSION 1 -- Stereo, Texture, Low Level Vision, Color,
Calibration, Motion -- 2:30 -- 4:30

A multibaseline stereo system with active illumination and real-time image
acquisition,
Sing Bing Kang (DEC), Jon Webb (CMU), Lawrence Zitnick (CMU), Takeo Kanade (CMU).


Electronically directed "focal" stereo,
Peter Burt (Sarnoff), Lambert Wixson (Sarnoff), Garbis Salgian (Rochester).

Segmented shape description from 3-view stereo,
Parag Havaldar (USC), Gerard Medioni (USC).

3d surface reconstruction from stereoscopic image sequences,
Reinhard Koch (Hannover).

An analytical and experimental study of the performance of Markov Random Fields
applied to textured images using small samples,
Athanasios Speis (Irvine), Glenn Healey (Irvine).

Texture segmentation and shape in the same image,
John Krumm (Sandia), Steven Shafer (CMU).

Illumination-invariant recognition of texture in color images,
Glenn Healey (Irvine), Lizhi Wang (Irvine).

Direct estimation of affine deformations
using visual front-end operators with automatic scale selection,
Tony Lindeberg (KTH).

Indexing visual representations through the complexity map,
Benoit Dubuc (McGill), Steven Zucker (McGill).


Seeing behind the scene: analysis of Photometric properties of
occluding edges by the reversed projection blurring model,
Naoki Asada (Okayama), Hisanaga Fujiwara (Okayama), Takashi Matsuyama (Okayama).

Image Segmentation by Reaction-Diffusion Bubbles,
Huseyin Tek (Brown), Benjamin Kimia (Brown).

Scale-space from nonlinear filters,
Andrew Bangham (East Anglia), Paul Ling (East Anglia), Richard Harvey

Unsupervised parallel image classification using a hierarchical
Markovian model,
Zoltan Kato (INRIA), Josiane Zerubia (INRIA), Marc Berthod (INRIA).

Polymorphic grouping for image segmentation,
Claudia Fuchs (Bonn), Wolfgang Forstner (Bonn).

Class-based grouping in perspective images,
Andrew Zisserman (Oxford), Joe Mundy (GE), David Forsyth (Berkeley),
Jane Liu (GE), Nic Pillow (Oxford), Charlie Rothwell (INRIA),
Sven Utcke (Hamburg-Harburg).

Steerable wedge filters,
Eero Simoncelli (Penn), Hany Farid (Penn).

Saliency maps and attention selection in scale and spatial
coordinates: an information theoretic approach,
Martin Jagersand (Rochester).

Combining color and geometry for the active, visual recognition
of shadows,
Gareth Funka-Lea (Siemens), Ruzena Bajcsy (Penn).

Bayesian decision theory, the maximum local mass estimate, and color
constancy,
William Freeman (Mitsubishi), David Brainard (Santa Barbara).

Color constancy in diagonal chromaticity space,
Graham Finlayson (Simon Fraser).

The nonparametric approach for camera calibration,
MaoLin Qiu (Chinese Academy of Sciences), Song De Ma (Chinese Academy
of Sciences).

Accurate Internal Camera Calibration using Rotation, with Analysis of
Sources of Error,
Gideon Stein (MIT).

ASSET-2: Real-time motion segmentation and shape tracking,
Stephen Smith (Defence Research Agency).

Global rigidity constraints in image displacement fields,
Cornelia Fermuller (Maryland), Yiannis Aloimonos (Maryland).

Rigid body segmentation and shape description from dense optical flow
under weak perspective,
Joseph Weber (Berkeley), Jitendra Malik (Berkeley).

Estimating Motion and Structure from Correspondences of Line Segments
Between Two Perspective Images,
Zhengyou Zhang (INRIA).

Computation of coherent optical flow by using multiple constraints,
Massimo Tistarelli (Genoa).

Motion from the frontier of curved surfaces,
R. Cipolla (Cambridge), K.E. Astrom (Lund), P.J. Giblin (Liverpool).

Real-time obstacle avoidance using central flow divergence and
peripheral flow,
David Coombs (NIST), Martin Herman (NIST), Tsai Hong (NIST), Marilyn Nashman (NIST).

Recovering 3d motion and structure of multiple objects using adaptive
hough transform,
Tina Yu Tian (Central Florida), Mubarak Shah (Central Florida).

Structure and motion estimation from dynamic silhouettes under
perspective projection,
Tanuja Joshi (Illinois), Narendra Ahuja (Illinois), Jean Ponce (Illinois).

Robust real-time tracking and classification of facial expressions,
Yael Moses (Weizmann), David Reynard (Oxford), Andrew Blake (Oxford).

Region tracking through image sequences,
Benedicte Bascle (Oxford), Rachid Deriche (INRIA).

Closing the loop on multiple motions,
Charles Wiles (Oxford), Michael Brady (Oxford).

A unifying framework for structure and motion recovery from image
sequences,
Philip McLauchlan (Oxford), David Murray (Oxford).

Recursive estimation of motion from weak perspective,
Stefano Soatto (Caltech), Pietro Perona (Caltech, Padova).

SESSION 4 -- Shape Recovery -- 4:30 -- 5:30

Complete Scene Structure from Four Point Correspondences,
Steven Seitz (Wisconsin), Charles Dyer (Wisconsin).

Matching Constraints and the Joint Image
Bill Triggs (LIFIA, INRIA).

Surface reconstruction: GNCs and MFA,
Mads Nielsen (DIKU).

WEDNESDAY, JUNE 21

SESSION 5 -- Face and Gesture Recognition -- 8:00 -- 9:40

Automatic recognition of human facial expressions, Katsuhiro Matsuno
(Kansai Electric), Chil-Woo Lee (Lab of Image Information Science),
Satoshi Kimura (Lab of Image Information Science), Saburo
Tsuji (Wakayama).

Facial Expression Recognition using a Dynamic Model and Motion Energy,
Irfan Essa (MIT), Alex Pentland (MIT).

A unified approach for coding and interpreting face images,
A. Lanitis (Manchester), C.J. Taylor (Manchester), T.F. Cootes (Manchester).

Tracking and recognizing rigid and non-rigid facial motions
using local parametric models of image motion
Michael Black (Xerox), Yaser Yacoob (Maryland)

A State-based Technique for the Summarization and recognition of
gesture, Aaron Bobick (MIT), Andrew Wilson (MIT).


SESSION 6 -- Curve Matching, Shape Completion -- 10:10 -- 11:50

Matching of 3d curves using semi-differential invariants,
Tomas Pajdla (Leuven), Luc Van Gool (Leuven).

Shape extraction for curves using geometry-driven diffusion and
functional optimization,
Eric Pauwels (Leuven), Peter Fiddelaers (Leuven), Luc Van Gool (Leuven).

Optimal subpixel matching of contour chains and segments,
Bruno Serra (INRIA, AEROSPATIALE), Marc Berthod (INRIA).

Stochastic completion fields: A neural model of illusory contour shape
and salience,
Lance Williams (NEC), David Jacobs (NEC).

Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for
Multi-band Image Segmentation,
S. Zhu (Harvard), Alan Yuille (Harvard).

SESSION 7 -- Pose and Correspondence -- 1:30 -- 2:30

Object Pose: Links between paraperspective and perspective,
Radu Horaud (LIFIA, INRIA), Stephane Christy (LIFIA, INRIA), Fadi
Dornaika (LIFIA, INRIA), Bart Lamiroy (LIFIA, INRIA).

A geometric criterion for shape-based non-rigid correspondence,
Hemant Tagare (Yale), Don O'Shea (Mt. Holyoke), Anand Rangarajan (Yale).

Region correspondence by inexact attributed planar graph matching,
Caihua Wang (Shizuoka), Keiichi Abe (Shizuoka).


POSTER SESSION 2 -- Recognition, Applications -- 2:30 -- 4:30

Relational matching with dynamic graph structures,
Richard Wilson (York), Edwin Hancock (York).

Locating objects using the Hausdorff distance,
William Rucklidge (Xerox).

FORMS: a Flexible Object Recognition and Modelling System,
S. Zhu (Harvard), Alan Yuille (Harvard).

Elimination: An Approach to the Study of 3D-from-2D,
Michael Werman (Hebrew), Amnon Shashua (Hebrew).

Recognizing 3D objects using photometric invariant,
Kenji Nagao (Matsushita, MIT).

Probabilistic 3d object recognition,
Ilan Shimshoni (Illinois), Jean Ponce (Illinois).

Nonlinear manifold learning for visual speech recognition,
Christoph Bregler (Berkeley), Stephen Omohundro (NEC).

Face Recognition From One Example View
David Beymer (MIT), Tomaso Poggio (MIT).

Recognition of Human Body Motion using Phase Space Constraints,
Lee Campbell (MIT), Aaron Bobick (MIT).

Invariant-based recognition of complex curved 3d objects from image contours,
B. Vijaykumar (Yale), David Kriegman (Yale), Jean Ponce (Illinois).

Computing visual correspondence: Incorporating the
probability of a false match,
Daniel Huttenlocher (Cornell), Eric Jaquith (Xerox)

Fast object recognition in noisy images using simulated annealing,
Margrit Betke (MIT), Nicholas Makris (Naval Research Laboratory).

Model-based matching of line drawings by linear combinations of
prototypes,
Michael Jones (MIT), Tomaso Poggio (MIT).

Gabor wavelets for 3-d object recognition,
Xing Wu (Riverside), Bir Bhanu (Riverside).

Learning geometric hashing functions for model based object
recognition,
George Bebis (Central Florida), Michael Georgiopoulos (Central
Florida), Niels da Vitoria Lobo (Central Florida).

3d-2d projective registration of free-form curves and surfaces,
Jacques Feldmar (INRIA), Nicholas Ayache (INRIA), Fabienne Betting (INRIA).

Validation of 3d registration methods based on points and frames,
Xavier Pennec (INRIA), Jean-Philippe Thirion (INRIA).

Combining color and geometric information for the illumination
invariant recognition of 3d objects,
David Slater (Irvine), Glenn Healey (Irvine).

3d pose estimation by fitting image gradients directly to polyhedral
models,
Henner Kollnig (Karlsruhe), Hans-Hellmut Nagel (Karlsruhe).

Real-time X-ray inspection of 3D defects in circuit board patterns,
Hideaki Doi (Hitachi), Yoko Suzuki (Hitachi), Yasuhiko Hara (Hitachi), Tadashi Iida (Hitachi), Yasuhiro Fujishita
(Hitachi), Koichi Karasaki (Hitachi).

Model-based 2D&3D Dominant Motion Estimation for Mosaicing and Video
Representation,
Harpreet Sawhney (IBM), Serge Ayer (Swiss Federal Institute of
Technology), Monika Gorkani (IBM).

Face detection by fuzzy pattern matching,
Qian Chen (Osaka), Haiyuan Wu (Osaka), Masahiko Yachida (Osaka).

Perceptual organization in an interactive sketch editing application,
Eric Saund (Xerox), Thomas Moran (Xerox)

Mosaic based representations of video sequences and their
applications,
Michal Irani (Sarnoff), P. Anandan (Sarnoff), Steve Hsu (Sarnoff).

Model-based tracking of self-occluding articulated objects,
James Rehg (CMU), Takeo Kanade (CMU).

Motion-based 3d human part segmentation and shape estimation from
multiple views,
Ioannis Kakadiaris (Penn), Dimitri Metaxas (Penn)

Learning-Based Hand Sign Recognition Using SHOSLIF-M
Yuntao Cui (Michigan State), Daniel Swets (Michigan State), John Weng
(Michigan State).

Finding faces in cluttered scenes using labelled random graph matching,
Thomas Leung (Caltech, Berkeley), Mike Burl (Caltech), Pietro Perona
(Caltech, Padova).

A 6DOF odometer and gyroscope based on monocular visual motion analysis,
Jean-Yves Bouguet (Caltech), Pietro Perona (Caltech, Padova).

A recursive filter for phase velocity assisted shape-based tracking of
cardiac non-rigid motion,
John McEachen III (Yale), Francois Meyer (Yale), Todd Constable (Yale), Arye Nehorai (Yale), James Duncan (Yale).


Structure and semi-fluid motion analysis of stereoscopic satellite
images for cloud tracking,
K. Palaniappan (NASA), Chandra Kambhamettu (NASA), Frederick Hasler
(NASA), Dmitry Goldgof (South Florida).

Vision-based hand modeling and tracking for virtual teleconferencing
and telecollaboration,
James Kuch (TouchVision Systems), Thomas Huang (Illinois).

Closed-world tracking,
Stephen Intille (MIT), Aaron Bobick (MIT)

Towards an Active Visual Observer,
Tomas Uhlin (KTH), Peter Nordlund (KTH), Atsuto Maki (KTH), Jan-Olof
Eklundh (KTH).

A model-based integrated approach to track myocardial deformation
using displacement and velocity constraints,
Pengcheng Shi (Yale), Glynn Robinson (Yale), Todd Constable (Yale),
Albert Sinusas (Yale), James Duncan (Yale).

SESSION 8 -- Deformable models -- 4:30 -- 5:30

Geodesic Active Contours,
Vicent Caselles (Illes Balears), Ron Kimmel (Technion), Guillermo Sapiro (HP).

Volumetric deformable models with parameter functions: A new approach
to the 3d motion analysis of the LV from MRI-SPAMM,
Jinah Park (Penn), Dimitri Metaxas (Penn), Leon Axel (Penn).

Optical flow and deformable objects,
Andrea Giachetti (Genova), Vincent Torre (Genova).


THURSDAY, JUNE 22

SESSION 9 -- Color, Texture, Specularities -- 8:00 -- 9:40


Results using random field models for the segmentation of color images,
Dileep Panjwani (Irvine), Glenn Healey (Irvine).

Color Constancy Under Varying Illumination,
G. Finlayson (Simon Fraser), B. Funt (Simon Fraser), K. Barnard (Simon Fraser).

On representation and matching of multi-coloured objects,
J. Matas (Surrey), R. Marik (Surrey), J. Kittler (Surrey).

Surface orientation and curvature from differential texture
distortion,
Jonas Garding (KTH).

A theory of specular surface geometry,
Michael Oren (Columbia), Shree Nayar (Columbia).


SESSION 10 -- Motion -- 10:10 -- 11:50

Motion analysis with a camera with unknown, and possibly varying
intrinsic parameters,
Thierry Vieville (INRIA), Olivier Faugeras (INRIA).

Motion Estimation with Quadtree Splines,
Richard Szeliski (DEC), Heung-Yeung Shum (CMU)

Tracking the human arm in 3D using a single camera,
Luis Goncalves (Caltech), Enrico Di Bernardo (Caltech, Padova),
Enrico Ursella (Padova), Pietro Perona (Caltech, Padova).

Hypergeometric filters for optical flow and affine matching,
Yalin Xiong (CMU), Steven Shafer (CMU).

Layered representation of motion video using robust maximum-likelihood
estimation of mixture models and MDL encoding,
Serge Ayer (Swiss Federal Institute of Technology), Harpreet Sawhney (IBM).


SESSION 11 -- Learning, Modeling -- 1:30 -- 2:30

Probabilistic Visual Learning for Object Detection,
Baback Moghaddam (MIT), Alex Pentland (MIT).


Optimal RBF networks for visual learning,
Sayan Mukherjee (Columbia), Shree Nayar (Columbia).

Animat Vision: Active Vision in Artificial Animals
Demetri Terzopoulos (Toronto), Tamer Rabie (Toronto)

POSTER SESSION 3 -- Deformable Models, Surfaces, Learning, Geometry, Modeling,
Analysis, Sensors, Active Vision -- 2:30 -- 4:30

Gradient flows and geometric active contour models,
Satyanad Kichenassamy (Minnesota), Arun Kumar (Minnesota), Peter Olver (Minnesota), Allen Tannenbaum
(Minnesota), Anthony Yezzi (Minnesota)

A snake for model-based segmentation,
Petia Radeva (Barcelona), Joan Serrat (Barcelona), Enric Marti (Barcelona).

Algorithms for Implicit Deformable Models,
Ross Whitaker (ECRC).

Deformable velcro surfaces,
W. Neuenschwander (ETH), P. Fua (SRI), G. Szekely (ETH), O. Kubler (ETH).

Adaptive shape evolution using blending,
Douglas DeCarlo (Penn), Dimitri Metaxas (Penn).

Topologically adaptable snakes,
Tim McInerney (Toronto), Demetri Terzopoulos (Toronto).

On multi-feature integration for deformable boundary finding,
Amit Chakraborty (Yale), Marcel Worring (Amsterdam), James Duncan (Yale).

Curve and Surface Smoothing without Shrinkage,
Gabriel Taubin (IBM).

Surface geometry from cusps of apparent contours,
Roberto Cipolla (Cambridge), Gordon Fletcher (Liverpool), Peter
Giblin (Liverpool).

Multiscale detection of curvilinear structures in 2d and 3d
image data,
Thomas Koller (ETH), G. Gerig (ETH), Gabor Szekely (ETH), Daniel
Dettwiler (ETH).

An integral approach to free-formed object modeling,
Heung-Yeung Shum (CMU), Martial Hebert (CMU), Katsushi Ikeuchi (CMU), Raj Reddy (CMU).

Recovering object surfaces from viewed changes
in surface texture patterns,
Peter Belhumeur (Yale), Alan Yuille (Harvard).

A linear method for reconstruction from lines and points,
Richard Hartley (GE).

Site model acquisition and extension from aerial images,
Robert Collins (UMass), Yong-Qing Cheng (UMass), Chris Jaynes (UMass), Frank
Stolle (UMass), Xiaoquang Wang (UMass), Allen Hanson (UMass), Edward
Riseman (UMass).

Affine surface reconstruction by purposive viewpoint control,
Kiriakos Kutulakos (Rochester)

Estimating the tensor of curvature of a surface from a
polyhedral approximation,
Gabriel Taubin (IBM).

Hierarchical statistical models for the fusion of
multiresolution image data,
J.M. Laferte (IRISA-INRIA), F. Heitz (ENSPOS-LSIIT), P. Perez (IRISA-INRIA), E. Fabre (IRISA-INRIA).

Statistical learning, localization, and identification of objects,
J. Hornegger (Erlangen), H. Niemann (Erlangen).

Trilinearity of Three Perspective Views and its Associated Tensor,
Amnon Shashua (Hebrew), Michael Werman (Hebrew).

Invariant of a Pair of Non-coplanar Conics in Space: Definition,
Geometric interpretation and Computation,
Long Quan (LIFIA-CNRS-INRIA).

A Comparison of Projective Reconstruction Methods for Pairs of Views,
C. Rothwell (INRIA), G. Csurka (INRIA), O. Faugeras (INRIA).

A Quantitative Analysis of View Degeneracy and its use for Active
Focal Length control,
David Wilkes (Ontario Hydro), Sven Dickinson (Rutgers), John Tsotsos (Toronto).

Rigidity Checking of 3D Point Correspondences Under Perspective
Projection,
Daniel McReynolds (UBC), David Lowe (UBC).

Algebraic and geometric properties of point correspondences
between N images,
Olivier Faugeras (INRIA), Bernard Mourrain (INRIA).

Rendering real-world objects using view interpolation,
Tomas Werner (Czech Technical Univ.), Roger Hersch (Ecole
Polytechnique -- Lausanne), Vaclav Hlavac (Czech Technical Univ.).

Determining wet surfaces from dry,
Howard Mall, Jr. (Central Florida), Niels da Vitoria Lobo (Central Florida).

Expected performance of robust estimators near discontinuities,
Charles Stewart (RPI).

Auxiliary variables for deformable models in computer vision problems,
Laurent Cohen (CEREMADE).

Improving Laser Triangulation Sensors Using Polarization,
J. Clark (Heriot-Watt), E. Trucco (Heriot-Watt), H.-F. Cheung (Lancaster).

Better optical triangulation through spacetime analysis,
Brian Curless (Stanford), Marc Levoy (Stanford).

Real-time focus range sensor,
Shree Nayar (Columbia), Masahiro Watanabe (Hitachi), Minori Noguchi (Hitachi).

Active fixation using attentional shifts, affine resampling,
and multiresolution search,
A.L. Abbott (Virginia Tech), B. Zheng (Virginia Tech)

Calibration-free visual control using projective invariance,
Gregory Hager (Yale).

Annular Symmetry Operators: A method for locating and describing
objects, M. Kelly (McGill), M. Levine (McGill).

SESSION 12 -- Representations, Geometry -- 4:30 -- 5:30


COSMOS - A representation scheme for free-form surfaces,
Chitra Dorai (Michigan State), Anil Jain (Michigan State).

Epipole and fundamental matrix estimation using virtual parallax,
Boubakeur-Seddik Boufama (LIFIA-INRIA), Roger Mohr (LIFIA-INRIA).

Robust detection of degenerate configurations for the fundamental
matrix,
Philip Torr (Oxford), Andrew Zisserman (Oxford), Stephen Maybank (Oxford).


FRIDAY, JUNE 23

SESSION 13 -- Motion -- 8:00 -- 9:40

Detecting Kinetic Occlusion, Sourabh Niyogi (MIT).

A Model of Figure-Ground Segregation from Kinetic Occlusion,
George Chou (MIT)

Reconstruction from image sequences by means of relative depths,
Anders Heyden (Lund).

In defence of the 8-point algorithm,
Richard Hartley (GE).

A Multi-body Factorization Method for Motion analysis,
Joao Costeira (CMU), Takeo Kanade (CMU).

SESSION 14 -- Stereo, Robot Vision -- 10:10 -- 11:50

Reconstructing Complex Surfaces from Multiple Stereo Views,
Pascal Fua (SRI).

Stereo in the presence of specular reflection,
Dinkar Bhat (Columbia), Shree Nayar (Columbia).

A robot system that observes and replicates grasping tasks,
Sing Bing Kang (DEC), Katsushi Ikeuchi (CMU).

Transfer of fixation for an active stereo platform via affine
structure recovery,
Stuart Fairley (Oxford), Ian Reid (Oxford), David Murray (Oxford).

Task-oriented generation of visual sensing strategies,
Jun Miura (Osaka), Katsushi Ikeuchi (CMU).

------------------------------

Date: Tue, 25 Apr 95 13:48:56 +0800
From: zhj@iss.nus.sg
Subject: CFP: MTAP special issue on visual media retrieval

CALL FOR PAPERS CALL FOR PAPERS CALL FOR PAPERS

MULTIMEDIA TOOLS AND APPLICATIONS

Special issue on
REPRESENTATION AND RETRIEVAL OF VISUAL MEDIA IN MULTIMEDIA SYSTEMS


With rapid advances in communication and multimedia computing technologies,
accessing mass amounts of multimedia data is becoming a reality on the
Information Superhighway and in digital libraries. However, interacting
with multimedia data, images and video in particular, is not merely a matter
of connecting everyone with data banks and delivering data via networks to
everyone's home and office. It is definitely not enough simply to store and
display images and video as in commercial video-on-demand services. What we
need are new technologies for parsing and representing the content of visual
data to facilitate organization, storage, query and retrieval of mass
collections of image and video data in a user-friendly way. Such a need poses
many research challenges to scientists and engineers across all multimedia
computing disciplines.

Unpublished, original research papers and reports, with an emphasis on
implemented prototypes, tools and applications, are solicited for a special
issue of Multimedia Tools and Applications. The papers in this special issue
will address significant new techniques for image and video content parsing
and representation, and their applications in content-based retrieval and
browsing of large collections of visual data. Topics of this special issue
include, but are not limited to:

* Effective feature-based pattern analysis and content representation
for visual data classification, browsing, retrieval, filtering and
compression.

* Perceptual similarity measures based on object color, texture and
shape; effective and efficient indexing schemes based on these
features; and their evaluations based on large collections of data.

* Automatic parsing and classification of video data; spatio-temporal
features for video representation and retrieval, especially for
supporting "event-based" retrieval.

* Automatic/semiautomatic annotation of images and video based on
visual features, to support text based retrieval of visual media.

* Automatic/semiautomatic abstraction of video content in both visual
media (e.g. key frames) and non-visual media (e.g. text).

* Image and video analysis, classification, and retrieval based on
compressed data.

* Fusion of information derived from other media, especially sound/
speech/music and closed captions, for video content parsing.

* Integration of content based retrieval with traditional database
technologies in applications.

* Data modeling of visual media in multimedia databases and
information systems.

* Visual tools and interfaces for query formation, visual feedback,
presentation of retrieval results, and content-based browsing of
visual media.

* Application systems which embody state-of-the-art techniques in
visual media archival and retrieval.

All manuscripts are subject to review. To be considered for the Special Issue
of Multimedia Tools and Applications, prospective authors should submit six
copies of their complete manuscript, and specify this issue, by November 30,
1995 to: Ms. Judith A. Kemp, Multimedia Tools and Applications Editorial
Office, Kluwer Academic Publishers, 101 Philip Drive, Assinippi Park,
Norwell MA 02061, Tel: (617) 871-630, Fax: (617) 878-0449, E-mail:
jkkluwer@world.std.com. The guest editors are:

Dr. HongJiang Zhang

Institute of System Science, National University of Singapore
Kent Ridge, Singapore 0511, SINGAPORE
Tel: +65-772-6725 Fax: +65-774-4998 E-mail: zhj@iss.nus.sg

Dr. Philippe Aigrain

Institut de Recherche en Informatique de Toulouse, Universite Paul Sabatier
118, route de Narbonne, F-31062 Toulouse Cedex, FRANCE
Tel: +33 61 55 63 05 Fax: +33 61 55 62 58 E-mail: aigrain@irit.fr

Dr. Dragutin Petkovic

Manager, Advanced Algorithms, Architectures and Applications
IBM Almaden Research Center, San Jose, CA 95120-6099
Tel: (408) 927-1778 Fax: (408) 927-3030 E-mail: petkovic@almaden.ibm.com

Deadline of submission: November 30, 1995
Notification: March 1, 1996
Final manuscripts due : May 1, 1996
Publication: August, 1996

------------------------------

Date: 26 Apr 1995 03:18:29 GMT
From: stever@burgundy.cse.ogi.edu (Steve Rehfuss)
Organization: Oregon Graduate Institute - Computer Science & Engineering
Subject: Survey on model-based vision (long)

To computer vision researchers:

Hello. I'm currently looking at computer architecture support for
model-based recognition. I'm not a vision researcher myself, but I've
done a somewhat extensive, admittedly hasty, review of the model-based
vision literature. What seems to be missing from the literature, for my
purposes, is information about the computational complexity of the various
algorithms as actually applied. That is to say, such information as
statistics about model size, number of models in the library, size of the
portion of the image actually matched against, expected complexity of
matching algorithms, effectiveness of indexing in reducing number of
models examined, and so on.
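
As a rough illustration of the kind of accounting I am after (all
numbers below are hypothetical, chosen only to show the bookkeeping),
the per-image verification work can be estimated as the number of
models in the library, times the fraction surviving indexing, times the
number of feature groupings, times the cost of one match:

def expected_matching_cost(num_models, index_fraction, num_groupings, cost_per_match):
    # Rough count of model-to-grouping verification operations per image.
    candidates = num_models * index_fraction   # models left after indexing
    return candidates * num_groupings * cost_per_match

# Example: 500 models, indexing keeps 5%, 20 groupings, 1e4 ops per match.
print(expected_matching_cost(500, 0.05, 20, 1e4))   # -> 5,000,000 ops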

I have thus prepared the following questionnaire, and will tabulate,
summarize and make publicly available the responses, should there be
enough to make it worthwhile. All usable responses will be acknowledged
in any document I produce from the data.

It seems to me that this would be generally useful information; I hope you
will take time to provide it.

Thanks,
Steve Rehfuss
stever@cse.ogi.edu



SURVEY ON MODEL-BASED VISION.

This is a statistically informal survey: the questions are merely
``cues'' to elicit the desired information, which you are encouraged to
provide in the manner most convenient for you, including pointers to your
papers, and/or your database of models. If it is more convenient for you,
just point me to a paper; I'll read it and query you explicitly about any
survey info that isn't in it.

There are two sections to the survey:

A. the first section asks for various information about your
system -- the task, the algorithms used, and so on -- to help
categorize and evaluate your responses to the second part.
Each question is followed by a list of possible responses; these
are intended only to flesh out what the question means, and to
allow ease of summarizing.

B. the second part asks for information about computational issues;
the information gathered here is the main point of the
questionnaire.


Section A:

A.1 Representation of shape

pixel templates; orthogonal function expansions thereof
rigid parametric models (Generalized Hough Transform)
linear (2D) ("feature string" boundary representation)
edge based (3D) (space curve, aspect graph, edge-junction graph, ...)
surface-based (Gaussian sphere, moments, reflectance...)
volumetric (superquadric, CSG, octree, sweep representation, ...)
other (please specify)

A.2 Representation of models and objects

A.2.1 Representation of individual models

pixel template; orthogonal function expansions thereof
(global) feature vector
subpart tree
relational graph / constraint network
other (please specify)

A.2.2 Use of multiple models

Is a single object type represented with multiple models?
If so, how?
aspect graph
linear combination of individual viewpoint models
other (please specify)


A.3 Method for grouping of features
(hypothesis formation: form groupings of features possibly belonging
to the same model)

thresholding
region-growing
(Generalized) Hough Transform
direct boundary determination (e.g., laser range images)
other (please specify)


A.4 Indexing algorithm (hypothesis formation: limit set of models
considered)

A.4.1 Indexing algorithm

Generalized Hough Transform
geometric hashing
prototypes/`coarse' models/hierarchy of partial models
other (please specify)

A.4.2 Indexing features used

edge junctions
edge chains
invariants (please specify)
other (please specify)


A.5 Matching algorithms (hypothesis verification)

A.5.1 Image-model correspondence ("matching")

global feature-vector classification
neural networks,
statistical pattern recognition
correlation/template matching
DP or gradient-based linear feature determination
interpretation tree search
(sub)graph isomorphism
relaxation labeling / constraint propagation
(numeric) graph/relation similarity measure
other (please specify)

A.5.2 Pose estimation

"pose inherited from prototype"
alignment
other (please specify)


A.6 Model 'activity' / control issues
(any way that models influence control flow)

A.6.1 model encodes/orders matching strategy

A.6.2 (partial) model directs refined rematching of components
image re-segmentation, re-thresholding,
re-match to more specific template (backprojection)

A.6.3 model is locus for fusion of info from multiple sensors
(matching involves multiple sensor data)

A.6.4 failed model match/verification causes backtrack
through hierarchy of (partial) models

other (please specify)


A.7 Task and Environment

A.7.1 task
robotics
mobile
static
image analysis
natural
industrial

A.7.2 noisiness of image (lighting, image quality, occlusions, ....)

A.7.3 camera viewpoint(s)
single/multiple
fixed/arbitrary

A.7.4 `multiplicity'
single model (perhaps multiple viewpoints) matched to many
groupings
(looking for specific thing in complicated image)
many models matched to single/few groupings
(identifying single part of image, chosen by other means)
many-to-many

A.7.5 any other factors affecting computational complexity of task


A.8 Papers, etc
Please list some relevant papers or other sources (e.g. model
libraries) that I can examine for further information.



Section B

Computational complexity and constraints

B.1 image size

B.2 allowed processing time per image

B.3 model library
B.3.1 number of models in library
B.3.2 mean and variance of model size
(measured in bytes; arcs+nodes; pixels; ...)

B.4 effectiveness of indexing
(reduction in fraction of models in library actually
matched/verified)

B.5 predictability of set of models accessed at one time
(e.g. are some models always accessed together; from one image
to the next, does model set accessed change much)

B.6 complexity of algorithms (if not standard/well-known):
(mean,variance,max,min of time and space required, as functions
of whatever parameters are relevant; nominal values for these
parameters, if not given above)
B.6.1 grouping algorithm
B.6.2 indexing algorithm
B.6.3 matching algorithm

B.7 In general, where are the computation bottlenecks?

------------------------------

End of VISION-LIST digest 14.15
************************
