This file contains bibliographic citations (with abstracts) for selected
papers produced at the University of Rochester. Most citations end
with a link to a PDF or (formerly) compressed postscript file.
These files are also available via anonymous ftp from
ftp.cs.rochester.edu (user anonymous, password your_name), in the
directory pub/.
Copyright on many of these papers may be owned by organizations other
than the University of Rochester, as indicated in the citations below.
For more information or for help obtaining technical
reports not available online, please contact
tr@cs.rochester.edu.
Keywords: gait recognition; computer vision; walking direction algorithm; support vector machines.
Gait recognition is an important research problem in the field of computer vision. The goal is to identify people by analysis of gait patterns. Because the technique can be performed remotely, it has been applied to access control, surveillance, etc. Most research is based on the assumption that people's walking direction is perpendicular to the camera axis. In this case the silhouette can be extracted to identify individuals. This limits the application and development of gait recognition. Consequently, walking direction has recently become a popular and challenging research problem. An improved gait recognition approach is proposed. It can give high recognition rates in cases where people's walking direction is not perpendicular to the camera axis. We describe a novel approach to walking direction computation using information about camera position. The walking direction angle and camera affine projection model are used to define features that can be related to a kinematic model of a human being. Support Vector Machine is used for classification and to evaluate the power of the approach. We apply our method to real human walking image sequences, and achieve relatively high recognition rates. Our approach illustrates how changes in walking direction affect gait parameters in terms of recognition performance. We show that the use of the walking direction algorithm improves recognition rates under variation in viewing direction.
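For illustration, the classification stage described above could look like the minimal sketch below, assuming gait features (e.g., kinematic parameters corrected by the estimated walking-direction angle) have already been extracted into fixed-length vectors; the feature extraction and the paper's specific kinematic model are not reproduced here, and the data are random stand-ins.

    # Sketch: SVM classification of gait feature vectors (stand-in features).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    # Stand-in data: 20-dimensional gait feature vectors for 5 subjects,
    # 30 sequences each (in the paper these would come from the kinematic
    # model after correcting for the walking-direction angle).
    X = rng.normal(size=(150, 20))
    y = np.repeat(np.arange(5), 30)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X[::2], y[::2])                  # train on half the sequences
    accuracy = clf.score(X[1::2], y[1::2])   # test on the other half
    print("recognition rate:", accuracy)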
Keywords: art restoration; scratch detection; dust detection; filtering; daguerreotypes; high-resolution digitization; dataset; inpainting.
The Conservation Laboratory of the George Eastman House International Museum of Photography and Film (GEH) and the Department of Computer Science at the University of Rochester (URCS) are collaborating on the problems of preservation and access to daguerreotypes. Parallel (cluster) computation provides high speed image processing to find, classify, and ultimately to eliminate defects and artifacts of deterioration. This TR describes early low-level techniques and applies them to scanner lighting, dust, and scratch defects.
Keywords: decision theory; conditional random fields.
We propose a means of extending Conditional Random Field modeling to decision-theoretic planning where valuation is dependent upon fully observable factors. Representation is discussed, and a comparison with existing decision problem methodologies is presented. Included are exact and inexact message passing schemes for policy making, examples of decision making in practice, extensions to solving general decision problems, and suggestions for future use.
Keywords: multimodal data mining; image annotation; image retrieval; semi-supervised machine learning.
The web has the potential to serve as an excellent source of example imagery for visual concepts. Image search engines based on text keywords can fetch thousands of images for a given query; however, their results tend to be visually noisy. We present a technique that allows a user to refine noisy search results and characterize a more precise visual object class. With a small amount of user intervention we are able to re-rank search engine results to obtain many more examples of the desired concept. Our approach is based on semi-supervised machine learning in a novel probabilistic graphical model composed of both generative and discriminative elements. Learning is achieved via a hybrid expectation maximization / expected gradient procedure initialized with a small example set defined by the user. We demonstrate our approach on images of musical instruments collected from Google image search. The rankings given by our model show significant improvement with respect to the user-refined query. The results are suitable for improving user experience in image search applications and for collecting large labeled datasets for computer vision research.
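The re-ranking idea can be illustrated with a much simpler stand-in than the paper's hybrid generative/discriminative model: fit a density to the few user-labeled positives and sort the noisy search results by how well they match it. The features and indices below are hypothetical.

    # Sketch: re-ranking noisy image-search results from a few user-labeled
    # positives, using a single Gaussian density as a simplified stand-in for
    # the paper's hybrid EM / expected-gradient graphical model.
    import numpy as np

    def rerank(features, positive_idx, reg=1e-3):
        """Rank images by similarity to a Gaussian fit to the user positives."""
        pos = features[positive_idx]
        mu = pos.mean(axis=0)
        cov = np.cov(pos, rowvar=False) + reg * np.eye(features.shape[1])
        inv = np.linalg.inv(cov)
        diff = features - mu
        # Negative squared Mahalanobis distance: larger means more similar.
        scores = -np.einsum("ij,jk,ik->i", diff, inv, diff)
        return np.argsort(-scores)   # indices from best to worst

    feats = np.random.default_rng(1).normal(size=(1000, 16))  # stand-in features
    order = rerank(feats, positive_idx=[3, 17, 42])
    print(order[:10])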
Keywords: machine vision applications; cloudiness analysis.
We describe a self-bootstrapping and adaptive system designed to make observations of an outdoor environment and determine some simple environmental conditions, specifically whether it is day or night, and whether the conditions are clear, cloudy, or some mixture. Making the system self-adapt and operate reliably in a variety of locations and environments, robust against changes in seasons, weather, and typical human and non-human disturbances (e.g., streetlights, thunderstorms), is a more complex problem than might first be thought. We describe some of the practical issues, and techniques for dealing with them.
Keywords: machine vision; background subtraction; activity detection.
We describe implementation details of an activity detection system whose function is to detect small image regions (pixels) that are inconsistent with a learned and adaptive background model. The system is based on a classic multiple Gaussian model approach, but contains additional model elements to deal with phenomena characteristic of outdoor environment such as wind-blown foliage and moving cloud shadows.
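A minimal sketch of the classic multiple-Gaussian background subtraction the system builds on, using OpenCV's MOG2 implementation; the report's additional model elements for wind-blown foliage and moving cloud shadows are not reproduced, and the input video name is a placeholder.

    # Sketch: per-pixel mixture-of-Gaussians background subtraction.
    import cv2

    cap = cv2.VideoCapture("outdoor.avi")          # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, varThreshold=16, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)             # 255 = foreground, 127 = shadow
        active = cv2.countNonZero(mask)
        if active > 500:                           # crude activity trigger
            print("activity detected:", active, "changed pixels")
    cap.release()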
Keywords: predictive action; predictive coding; intelligence; reinforcement learning; curiosity; temporal invariance; exploration; neural networks; spike coding.
There has been much research in recent decades aimed at discovering what the underlying principles are, if any, that drive the brain. As the cortex appears to be basically uniform, it seems that if there is an underlying principle, it is ubiquitous. However, the principles which have been proposed to explain the brain have largely been specialized principles, which each explain a particular aspect of the brain. Principles such as efficient coding, predictive coding, and temporal invariance have been proposed to explain sensory coding, and have succeeded to some measure in reproducing the receptive field properties of neurons in the visual cortex. Bayesian surprise has been offered as an explanation of attention, and has enjoyed some success in modeling human saccades, while reinforcement learning and intelligent adaptive curiosity have been aimed at explaining how actions are chosen.
In this dissertation we propose a novel principle which we call predictive action. It is an information theoretic principle which unifies all of the above proposals. We show its relationship to each of the above proposals, and give several algorithms which approximate predictive action for specific environments. We hope that this principle will allow not only for a greater understanding of the brain, but also serve as a principled basis for the design of future algorithms to solve a broad range of problems in artificial intelligence.
Keywords: scene context; scene classification; image classification; camera metadata; semantic features; low-level features; hidden Markov model; factor graph; Bayesian network; temporal context; spatial configurations.
Semantic scene classification, automatically categorizing images into a discrete set of classes such as beach, sunset, or field, is a difficult problem. Current classifiers rely on low-level image features, such as color, texture, or edges, and achieve limited success on constrained image sets. However, the domain of unconstrained consumer photographs requires the use of new features and techniques. One source of information that can help classification is the context associated with the image. We have explored three types of context. First, spatial context enables the use of scene configurations (identities of regions and the spatial relationships between the regions) for classification purposes. Second, temporal context allows us to use information contained in neighboring images to classify an image. We exploit elapsed time between images to help determine which neighboring images are most closely related. Third, image capture condition context in the form of camera parameters (e.g., flash, exposure time, and subject distance) recorded at the time the photo was taken provides cues that are effective at distinguishing certain scene types.
We developed and used graphical models to incorporate image content with these three types of context. These systems are highly modular and allow for probabilistic input, output, and inference based on the statistics of large image collections. We demonstrate the effectiveness of each context model on several classification problems.
Keywords: image translation; abstraction; learning structural distortion.
We argue that due to engineering choices in the design of computational machinery, the fundamental difficulty of achieving translation invariance in vision systems is widely misunderstood in the image-processing community. Far from being a relatively trivial problem, translation invariance represents a complex abstraction that is of equivalent difficulty to (and can be considered complete for) an important class of structural distortion problems. We also argue that this class of abstractions is, in an important sense, efficiently learnable. These facts have significant implications for the abilities of any plastic system (e.g. the brain) that is able to acquire or "learn" some member of the class.
Keywords: shape from silhouette; 3-D reconstruction; image mosaicing; image registration; camera calibration; catadioptric sensors; visual perception; visual cue integration; Bayesian modeling; augmented reality; cast shadow.
Augmented Reality (AR) systems insert graphics objects into the images of real scenes. Geometric and photometric consistency has to be achieved to make AR systems effective and bring photorealism to the augmented graphics. Particularly, global illumination effects between the graphics objects and scene objects need to be simulated. This thesis investigates ways to improve AR rendering by creating cast shadows between the real and graphics objects. This requires knowledge about the scene lighting and the scene structure. We first give novel methods of recovering the light sources from the input images. For indoor scenes, we take advantage of scene regularities such as parallel and orthogonal walls. For outdoor scenes and indoor scenes where the lights can be approximated by a directional light source, we show a method of finding the light source from cast shadows present in the real scene. Besides the light source structure, we also need to know the 3D structure of the scene so that we can render the shadow cast on a real object by a graphics object. Using spheres as primitives, we develop an algorithm to approximate the shape of the scene objects from multiple silhouettes.
With all the above components, one can build an AR system that infers necessary information of the scene from shadows and inserts graphics object with convincing shadows. To justify our endeavor in terms of shadows being important in human spatial perception, we investigate shadow perceptions in the context of cue integration.
Keywords: color regions; color spaces; depth edges; morphology; segmentation; multi-flash camera; background detection.
Our goal is to isolate and ultimately identify objects in cluttered and possibly partially-obscuring background environments. Multiple images (usually six in our case) of a scene, including at least five flash images with different flash locations, locate depth edges in the scene. Morphological processing turns the collection of edges into a collection of regions. These regions are candidates for non-background status, and can be compared by color and shape analysis to background regions. Further, we think the edges, labeled with depth and color information, will provide a richer and more reliable set of inputs to an object recognizer like that of Nelson and Selinger, which so far has only had intensity edges to work with.
Keywords: silhouette; visual hull; pose; camera calibration; catadioptric sensor; mirror; 3-d reconstruction.
We propose a 3-D object reconstruction method using a stationary camera and a planar mirror. No calibration is required. The mirror provides the extra views needed for a multiple-view reconstruction. We examine the imaging geometry of the camera-mirror setup and prove a theorem that gives us the point correspondences to compute the orientation of the mirror. The correspondences are derived from the convex hull of the silhouettes of the images of the object and its mirror reflection. The distance between the mirror and the camera can then be obtained from a single object point and a pair of points on the mirror surface. After the pose of the mirror is determined, we have multiple calibrated views of the object. We show two reconstruction methods that utilize the special imaging geometry. The system setup is simple. The algorithm is fast and easy to implement.
Keywords: image understanding; Markov random fields; Bayesian networks; graphical models; semantic features; scene classification.
Scene classification, the automatic categorization of images into semantic classes such as beach, field, or party, is useful in applications such as content-based image organization and context-sensitive digital enhancement. Most current scene-classification systems use low-level features and pattern recognition techniques; they achieve some success on limited domains. Several contemporary classifiers, including some developed in Rochester, incorporate semantic material and object detectors. Classification performance improves because the gap between the features and the image semantics is narrowed. We propose that spatial relationships between the objects or materials can help by distinguishing between certain types of scenes and by mitigating the effects of detector failures. While past work on spatial modeling has used logic- or rule-based models, we propose a probabilistic framework to handle the loose spatial relationships that exist in many scene types.
To this end, we have developed MASSES, an experimental testbed that can generate virtual scenes. MASSES can be used to experiment with different spatial models, different detector characteristics, and different learning parameters. Using a tree-structured Bayesian network for inference on a series of simulated natural scenes, we have shown that the presence of key spatial relationships is needed to disambiguate certain types of scenes, achieving a gain of 7% in one case.
However, our simple Bayes net is not expressive enough to model the faulty detection at the level of individual regions. As future work, we propose first to evaluate full (DAG) Bayesian networks and Markov Random Fields as potential probabilistic frameworks. We then plan to extend the chosen framework for our problem. Finally, we will compare our results on real and simulated sets of images with those obtained by other systems using spatial features represented implicitly.
Keywords: temporal invariance; predictive coding; unsupervised learning.
Predictive coding and temporal invariance are two major unsupervised learning principles which have been used to explain the behavior of parts of the brain (most notably the striate cortex). Although both have been around for a number of years, no formal relationship between them has been established. We prove that temporal invariance is a form of predictive coding. To do this, we begin with the goal of predictive coding, make a set of assumptions about the class of problem we are dealing with, and derive temporal invariance from the predictive coding goal and our added assumptions.
Keywords: predictive coding; spiking neurons; motor control; reinforcement learning.
Predictive coding is an unsupervised learning principle which has been proposed to explain the brain's perceptual abilities. While it has enjoyed success in modeling the receptive field properties of cells in the visual and auditory cortex, its application has so far been limited only to perceptual areas of the cortex. Given the uniformity of the cortex, it seems likely that one principle can account for the operation of the entire cortex. We believe that predictive coding is such a principle, and that its utility extends well beyond perception. All that is necessary for a predictive coding network to do motor control is that it have some feedback. If its outputs affect its inputs, then it will use its outputs to affect the world in a way that it can predict. Therefore, motor control falls naturally out of the predictive coding framework.
One must be very careful in the development of a predictive coding network in order to avoid making assumptions that prevent the network from using its outputs in this fashion. In particular, the network must not assume that its input is independent of its output. It also must not assume that its outputs should be independent of each other. Finally, and most importantly, it must maximize Shannon information about its input, instead of Fisher information. We are not aware of any predictive coding network which satisfies these criteria, so we present a derivation of one here.
Keywords: autonomous agents; multi-agent coordination; multi-agent collaboration; robot simulation; Quake; implemented systems; teaching AI; open source.
We have modified the public-domain Quake II game to support research and teaching. Our research is in multi-agent control and supporting human-computer interfaces. Teaching applications have so far been in an undergraduate Artificial Intelligence class and include natural language understanding, machine learning, computer vision, and production system control. The motivation for this report is mainly to document our system development and interface. Only early results are in, but they appear promising. Our source code and user-level documentation are available on the web. This document is a somewhat motion-blurred snapshot of the situation in September 2004.
Keywords: machine vision; color description; language grounding.
This paper addresses the issue of how verbal communication arises from the complex and uncertain representations that seem necessary to robustly carry out perception in real-world domains. We propose that the generation of natural language in such domains should be addressed as the optimization problem of finding, under various constraints, the verbalization that has the greatest probability of achieving a specific change that the speaker wants to induce in the mental state or behavior of the listener. This most likely effective or MLE strategy has the advantage of making the problem concrete, and allowing (possibly empathic) models of the perceptual and behavioral processes to be used in a principled way. We illustrate these issues in the context of the specific problem of describing real objects in native domains using basic color language (e.g., "mostly brown," "partly red"). The term "native domains" refers to real-world environments that have not been tailored to suit the application.
Keywords: spike timing; vision; unsupervised learning; sparse coding; spiking neurons.
One of the guiding principles of sparse coding is that neurons should convey as much information as possible with every spike. However, sparse coding models have not lived up to this idea. Many models use neurons which output continuous values over time. This is justified by assuming they communicate by firing rates, but this disregards all temporal information from the spike. Newer models use exact spike times, but they also use synchronous firing chains. For neurons to convey as much information as possible per spike, their spikes must be as independent as possible. Synfire chains are, in contrast, very highly correlated spike trains. This model is to our knowledge the first sparse coding model on spiking neurons which does not use synfire chains. Instead, each neuron tries to make its spike train as independent as possible from those of its neighbors. The neurons are strictly local, with temporal receptive fields and recurrent inhibitory connections. The input can be reconstructed by summing the convolutions of each neuron's spike train with its receptive field. Finally, we argue that the algorithm is related to K-means clustering in a convolutional feature space.
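The reconstruction step described above (summing the convolution of each neuron's spike train with its temporal receptive field) can be sketched as follows; the learning rule and the recurrent inhibition are not shown, and the spike trains and receptive fields are random stand-ins.

    # Sketch: reconstructing a 1-D input signal as the sum of each neuron's
    # spike train convolved with its temporal receptive field.
    import numpy as np

    T, n_neurons, kernel_len = 500, 8, 25
    rng = np.random.default_rng(0)
    receptive_fields = rng.normal(size=(n_neurons, kernel_len))
    spikes = (rng.random((n_neurons, T)) < 0.02).astype(float)  # sparse spike trains

    reconstruction = np.zeros(T)
    for k in range(n_neurons):
        reconstruction += np.convolve(spikes[k], receptive_fields[k])[:T]
    print("reconstruction length:", reconstruction.shape[0])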
Keywords: pattern recognition; machine learning; image understanding; semantic scene classification; multi-label classifier; image organization; cross-training; Jaccard similarity.
In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.
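One common way to score such multi-label predictions, in the spirit of the per-example and per-class metrics discussed above, is sketched below; the paper's exact metric definitions may differ.

    # Sketch: Jaccard similarity per example and recall/precision per class
    # for multi-label scene predictions.
    def jaccard(pred, true):
        pred, true = set(pred), set(true)
        return len(pred & true) / len(pred | true) if (pred | true) else 1.0

    def class_recall_precision(preds, trues, label):
        tp = sum(label in p and label in t for p, t in zip(preds, trues))
        fn = sum(label not in p and label in t for p, t in zip(preds, trues))
        fp = sum(label in p and label not in t for p, t in zip(preds, trues))
        recall = tp / (tp + fn) if tp + fn else 0.0
        precision = tp / (tp + fp) if tp + fp else 0.0
        return recall, precision

    preds = [{"field", "mountain"}, {"beach"}, {"field"}]
    trues = [{"field", "mountain"}, {"beach", "sunset"}, {"urban"}]
    print([round(jaccard(p, t), 2) for p, t in zip(preds, trues)])
    print(class_recall_precision(preds, trues, "field"))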
Keywords: neural coding; cortical computation; probabilistic firing; irregular firing; distributed coding; sparse coding; synchrony.
One of the biggest challenges in systems neuroscience is a satisfactory model of neural signaling. From rate coding to temporal coding, models of neural signaling have been challenged by the fact that neurons fire highly irregularly. A typical interpretation of the variability is "noise other than signal," which not only has difficulty accounting for the speed, accuracy, efficiency and complexity of biological systems, but is also contradicted by recent studies that show both spike generation and transmission are highly reliable. Challenged with the discrepancy between theory and data, we take a fresh view of the subject with the proposal that the randomness associated with neuronal outputs is certain to have a purpose. In particular, we model neurons as probabilistic devices that not only compute probabilities but also fire probabilistically to signal their computations. According to our model, signaling of probabilities is done by having cells with similar receptive fields fire synchronously to achieve fast communication; this is consistent with observations of neurons coding as ensembles and with topographic map organization. Our proposal of probabilistic, distributed synchronous volleys as a neural signaling strategy not only accounts for variable neural responses, but also provides the advantage of robust and fast computation. Furthermore, the involvement of probabilistic firing and distributed coding explicates how synchronous firing can appear to be a rate code, accounting for the vast amount of data supporting a rate code assumption.
Any neural signaling model must support cortical computation in a biologically realistic fashion. Going beyond simply addressing the role of spikes in cortical cells' communication, we show that our distributed synchrony model can be implemented in a predictive coding framework and can be used to learn structures in the natural environment. Trained with patches from natural images, our model V1 cells develop localized and oriented receptive fields, consistent with V1 simple cell properties. Unlike most cortical computation models, our predictive coding model makes use of single spikes, instead of abstracting spikes away with analog quantities. This close resemblance to biology makes our model well suited for guiding experimental research.
Keywords: shadow geometry; cast shadow; convex hull; pinhole camera; self calibration; light source locating.
Shadows provide valuable information about the scene geometry, especially the whereabouts of the light source. This paper investigates the geometry of point light sources and cast shadows. It is known that there is redundancy in the object-shadow correspondences. We explicitly show that no matter how many such correspondences are available, it is impossible to locate a point light source from shadows with a single view. We discuss the similarity between a point light source and a conventional pinhole camera and show that the above conclusion is in accordance with traditional camera self calibration theory. With more views, however, the light source can be located by triangulation. We proceed to solve the problem of establishing correspondences between the images of an object with extended size and its cast shadow. We prove that a supporting line, which, put simply, is a tangent line of the image regions of the object and its shadow, provides one correspondence. We give an efficient algorithm to find supporting lines and prove that at most two supporting lines can be found. The intersection of these two lines gives the direction of the point light source. All this can be done without any knowledge of the object. Experimental results using real images are shown.
Keywords: shadow perception; depth cue; shadow creation; psychometric curve; augmented reality; virtual reality; lighting simulation.
We lay out the plans for a series of psychophysics experiments on human shadow perception. The Peg-in-Hole experiments are designed to find out whether shadows play a role in human 3D perception and how important the role is, compared with other depth cues such as stereoscopy. We also design quantitative experiments to extract the psychometric curves for shadow perception versus various characteristics of scene geometry. The curves will help us to decide different thresholds for our scene reconstruction algorithms. Some background on psychophysics is included for future reference. Expected outputs of the experiments are described. Some implementation details are also discussed.
Keywords: obstacle detection; robust method; planar homography; affine structure from motion.
In this report, we describe an obstacle identification method using affine structure from motion. We first identify a reference plane by tracking feature points across image sequences. We then compute the homographies between images induced by the reference plane from these feature points. Once the feature points are categorized as on the plane and off the plane, we reconstruct the affine structure from all the off-plane points. The obstacles are identified from the affine structure. Our method doesn't require a calibrated camera. Results of simulated and real experiments show that our method works very well.
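A minimal sketch of the plane/off-plane split described above, assuming point correspondences from a feature tracker are already available: the reference-plane homography is estimated with RANSAC and the outliers are kept as obstacle candidates. The affine reconstruction step is not shown, and the correspondences below are synthetic.

    # Sketch: splitting tracked points into on-plane and off-plane sets via a
    # RANSAC-estimated homography between two frames.
    import numpy as np
    import cv2

    def split_plane_points(pts1, pts2, reproj_thresh=3.0):
        """pts1, pts2: Nx2 arrays of corresponding points in two images."""
        H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
        on_plane = mask.ravel() == 1
        return H, pts1[on_plane], pts1[~on_plane]

    rng = np.random.default_rng(0)
    pts1 = rng.uniform(0, 640, size=(50, 2)).astype(np.float32)
    pts2 = pts1 + np.array([5.0, 2.0], dtype=np.float32)      # planar motion
    pts2[:10] += rng.uniform(20, 40, size=(10, 2)).astype(np.float32)  # obstacles
    H, plane_pts, obstacle_pts = split_plane_points(pts1, pts2)
    print("off-plane candidates:", len(obstacle_pts))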
Keywords: normal flow; structure from motion; ego-motion; flow clustering; optical flow.
This report documents our experience with different optical flow estimation methods and our attempt to use optical flow both qualitatively and quantitatively. Special attention is devoted to improving the Lucas-Kanade method to obtain dense flow. We use a simple clustering technique to find looming objects. This method has the potential of supporting obstacle avoidance using optical flow. Experiments using real images demonstrate that this simple clustering is effective for certain scenes. We also point out when this technique will fail. We try to use optical flow quantitatively to recover the structure of a piecewise planar environment. First, we use the widely-known 8-parameter planar flow equations to locate individual planes in the scene. Second, in lieu of full flow, we try to use normal flow to compute both the ego-motion and the structure. Both trials fail ungracefully, mostly due to noisy flow data. We describe the mathematics of both methods and our experimental results.
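As a rough illustration of the qualitative use of flow described above, the sketch below computes dense flow between two frames (OpenCV's Farneback method stands in for the report's improved Lucas-Kanade flow) and flags expanding, i.e., looming, regions by the sign of the flow divergence; the frames and threshold are synthetic placeholders.

    # Sketch: dense optical flow and a crude looming measure from divergence.
    import numpy as np
    import cv2

    # Synthetic stand-in frames: a bright square that expands between frames,
    # mimicking a looming (approaching) object.
    prev = np.zeros((240, 320), dtype=np.uint8)
    curr = np.zeros((240, 320), dtype=np.uint8)
    prev[100:140, 140:180] = 255
    curr[90:150, 130:190] = 255

    # Arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Divergence of the flow field: positive values indicate expansion.
    du_dx = np.gradient(flow[..., 0], axis=1)
    dv_dy = np.gradient(flow[..., 1], axis=0)
    divergence = du_dx + dv_dy
    print("pixels flagged as looming:", int((divergence > 0.5).sum()))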
Keywords: object recognition; active recognition; computer vision.
The task of generic object recognition involves learning to identify members of a class of objects based on a few exemplars from that class. Generic object classes are inherently ill-defined. Objects can be grouped into classes based on varying criteria such as form, function, color, size, etc. In this work, we develop two extensions to a well-studied, 3D view-based, rigid-object recognizer that improve its performance on generic object classes grouped on shape and a related class of objects we call loosely structured objects. The first extension uses clustering on the underlying local context features to discover features that recur within object classes. The modification improves performance for rigid, generic, and loosely structured classes, but it does not reliably discover recurrent features. Further analysis shows most of the performance improvement comes from a side-effect of the clustering algorithm. Namely, features that tend to create noise in the system become marginalized.
The second extension takes a principled approach to estimating the quality of each object model feature based on its robustness and commonness. Noisy features get a low quality score and thus contribute less noise to the recognition process. This approach further improves recognition for rigid, generic, and loosely structured object classes over the clustering method.
We also develop an active recognition system that achieves better recognition by utilizing additional information available in the active vision setting. The system uses change detection to perform foreground/background segmentation on the scene. This segmentation information is used to command a pan, tilt, zoom camera to acquire a high-resolution image of target regions in the scene. Furthermore, the segmentation information is used to reduce background clutter in these high-resolution target images.
Finally, we describe the Memory Assistant application built on top of the active recognition system. This application is designed to help people with mild to moderate memory loss keep track of important objects in a home environment. A prototype of this system is currently deployed at the Center for Future Health at the University of Rochester Medical Center.
Keywords: 3d shape; noisy images; noisy video; Bayesian modeling; nonrigid motion; scene-space based methods; Photo Hull Distribution.
We combine scene-space based methods with Bayesian modeling for recovering the geometric (3d shape, appearance) and dynamic (motion, deformation) properties of real-world scenes from noisy images. Bayesian modeling in scene space helps establish a direct mathematical relationship between the uncertainty in estimating scene properties (e.g., 3d shape and motion) and the uncertainty due to noise and errors in image measurements. This leads to algorithms that optimally recover 3d scene properties directly from image intensities. We apply this approach to two specific problems. The first problem we study is inferring 3d shape from a set of noisy images. We derive a general probabilistic theory of occupancy and emptiness to specify what one can infer about 3d shape for arbitrarily-shaped, Lambertian scenes and arbitrary viewpoint configurations. By modeling the problem in scene space, we formalize the notions of visibility, occupancy, emptiness, and photo-consistency, leading to the Photo Hull Distribution, the tightest probabilistic bound on the scene's true shape that is theoretically computable from the input images. We show how to express this distribution directly in terms of image measurements and represent it compactly by assigning an occupancy probability at every 3d point. We provide a stochastic algorithm that draws fair samples from the Photo Hull Distribution and converges to an optimal conservative estimate of the occupancy probability. We present experimental results for real, complex scenes.
The second problem we study is recovering nonrigid motion of deformable surfaces from noisy video. We develop linear methods for model-based tracking of nonrigid 3d objects in video. Uncertainty in image measurements is quantified and propagated through the inverse model to yield optimal 3d pose and deformation estimates directly from 2d image intensities. We obtain accurate and optimal closed-form nonrigid motion estimators by minimizing information loss from non-reversible operations. We demonstrate results with 3d nonrigid tracking, model refinement, and super-resolution texture lifting from low-quality, low-resolution video.
Keywords: image classification; low-level features; exemplar-based systems; model-based systems; semantic scene classification; semantic features.
Semantic scene classification, categorizing images into one of a set of physical (e.g., indoor/outdoor, orientation) or semantic categories (e.g., beach or party), is a relatively new field. Most of the existing techniques used primarily low-level features to classify scenes and achieved some success on constrained problems. We report on the state of the art, presenting summaries of major scene classification systems and identifying the features and inference engines they use.
Keywords: reinforcement learning.
We present a new algorithm, GM-Sarsa(0), for finding approximate solutions to multiple-goal reinforcement learning problems that are modeled as composite Markov decision processes. According to our formulation different sub-goals are modeled as MDPs that are coupled by the requirement that they share actions. Existing reinforcement learning algorithms address similar problem formulations by first finding optimal policies for the component MDPs, and then merging these into a policy for the composite task. The problem with such methods is that policies that are optimized separately may or may not perform well when they are merged into a composite solution. Instead of searching for optimal policies for the component MDPs in isolation, our approach finds good policies in the context of the composite task.
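A minimal sketch of the kind of update this suggests, assuming each sub-goal MDP keeps its own tabular Q-function, the shared action is chosen by the largest summed Q-value, and every table is updated with the action actually taken; the details are illustrative and may differ from the paper's formulation.

    # Sketch: Sarsa(0)-style learning for a composite task with shared actions.
    import numpy as np

    n_states, n_actions, n_goals = 10, 4, 2
    Q = np.zeros((n_goals, n_states, n_actions))
    alpha, gamma = 0.1, 0.95

    def choose_action(state):
        # Pick the single shared action with the largest summed Q-value.
        return int(np.argmax(Q[:, state, :].sum(axis=0)))

    def sarsa_update(state, action, rewards, next_state, next_action):
        """rewards: one reward per sub-goal MDP for the shared transition."""
        for g in range(n_goals):
            td = (rewards[g] + gamma * Q[g, next_state, next_action]
                  - Q[g, state, action])
            Q[g, state, action] += alpha * td

    # One illustrative transition: shared action in state 0, per-goal rewards.
    a0, a1 = choose_action(0), choose_action(1)
    sarsa_update(0, a0, rewards=[1.0, -0.5], next_state=1, next_action=a1)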
Keywords: scene reconstruction; image mosaicking; pose estimation.
We describe a method for reconstructing indoor scenes from image mosaics using prior knowledge of the cuboid structure of the environment. The method is inspired by traditional approaches to the camera pose estimation problems known as the PnL and PnA problems. We show that a cuboid can be reconstructed from the images of three of its corners. The necessary camera intrinsic parameters are obtained by self-calibration in the image mosaicking process. The major advantages of this method over methods such as single view metrology are (1) it can do metric reconstruction; (2) it is a closed form solution so it is numerically stable; and (3) it requires only minimal user interaction.
Keywords: image registration; image mosaicking; environment map.
In this report, we describe methods of acquiring an environment map by image mosaicking. We focus on technique details of the different algorithms involved. These algorithms include image matching, homography estimation, linear image warping and linear self calibration. Many of these details are distributed in various publications and we here bring them together. Furthermore, these algorithms are widely used in other circumstances and we provide all the C++ code for each of the algorithms.
Keywords: stereoscopic vision; 3D reconstruction; multi-view geometry; multi-baseline stereo; curve-based stereo; space carving; motion analysis; multi-view motion estimation; 3D flow; direct motion estimation methods; image warping; deformation analysis; 3D motion capture; reflectance modeling; bidirectional reflectance distribution function; Phong reflectance model; illumination modeling; differential geometry; surfels.
In this thesis we study the problem of recovering non-rigid motion, shape and reflectance properties of dynamic 3D scenes from image sequences. Our goals are both to advance towards a firmer mathematical understanding of the constraints that exist in this problem and to develop practical methods that extract the desired properties directly from visual data, using as little prior knowledge about the scenes being observed as possible. To recover motion, shape and reflectance simultaneously when they are all unknown and the scenes potentially have discontinuities, we observe that scenes composed of curves and surfaces with piecewise-smooth shape and motion trace manifolds embedded in 4D space-time as they move. Moreover, we show that these manifolds have a well-defined differential-geometric structure and, consequently, can be used as the basis to create spatiotemporally-distributed geometric and radiometric representations.
This insight is supported by a mathematical analysis of how multi-view image sequences constrain spatiotemporally-localized scene properties such as the instantaneous 3D velocity, position and orientation of individual scene points. Based on this analysis, we develop a general framework for visual reconstruction of dynamic scenes, and propose specific representational primitives that are both powerful enough to capture a broad class of scenes with arbitrarily-high accuracy and simple enough to be unambiguously recovered from visual data alone. The use of these primitives leads us to develop algorithms that break the complex problem of reconstructing entire dynamic scenes into collections of spatiotemporally-localized, well-posed optimization problems.
Experiments with complex real scenes (paper, clothing, skin, shiny objects) and scenes for which ground-truth geometry is known illustrate our methods' ability to (1) explain pixels and pixel variations in terms of their underlying physical causes---3D shape, surface reflectance, 3D motion, illumination, and visibility, (2) recover dense and non-rigid instantaneous velocity fields even in the presence of moving specularities, and (3) incorporate spatio-temporal coherence into computations for improved stability, and accuracy gains with respect to static multi-view analysis techniques.
Keywords: object discovery; temporal reasoning; computer vision; spatial intelligence.
We present the theory behind TOD (the Temporal Object Discoverer), a novel unsupervised system that uses only temporal information to discover objects across image sequences acquired by any number of uncalibrated cameras. The process is divided into three phases: (1) Extraction of each pixel's temporal signature, a partition of the pixel's observations into sets that stem from different objects; (2) Construction of a global schedule that explains the signatures in terms of the lifetimes of a set of quasi-static objects; (3) Mapping of each pixel's observations to objects in the schedule according to the pixel's temporal signature. Our Global Scheduling (GSched) algorithm provably constructs a valid and complete global schedule when certain observability criteria are met. Our Quasi-Static Labeling (QSL) algorithm uses the schedule created by GSched to produce the maximally-informative mapping of each pixel's observations onto the objects they stem from. Using GSched and QSL, TOD ignores distracting motion, correctly deals with complicated occlusions, and naturally groups observations across cameras. The sets of 2D masks recovered are suitable for unsupervised training and initialization of object recognition and tracking systems.
Keywords: fundamental matrix; three-view epipolar geometry; homography; stereo correspondence; reprojection; teleconference.
In a teleconference, reprojecting a face can make it appear to be looking in a particular direction. Thus reprojection can substitute for an individual camera for each member of the conference, saving on hardware and transmission bandwidth. Our reprojection algorithm has an off-line part, which calculates fundamental matrices expressing the relationships of different points of view. During operation, the on-line part converts a single image into possibly several others that give conference participants the consistent impression that the speaker is addressing a particular person. Our off-line algorithm needs to solve only an easy version of the generally difficult correspondence problem.
Keywords: active object recognition; segmentation; object tracking.
In this paper, we present a method for propagating segmentation information across a saccade for a foveating camera. In particular, we take a region of interest from a wide-angle, low-fidelity image and propagate its segmentation information to a zoomed, high-fidelity image containing that region. Our method uses normalized greyscale templates to estimate the change in translation and magnification required to transform the segmented region. This process is useful for systems which detect regions of interest at low-fidelity and then perform a saccade to provide a high-fidelity view of that region of interest. We show how using this method increases the performance of an active object recognition system.
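A simplified stand-in for the propagation step, using OpenCV's normalized greyscale template matching over a small set of candidate magnifications; the images and scale range below are synthetic placeholders, not the paper's data.

    # Sketch: locating a low-fidelity region of interest inside a zoomed
    # high-fidelity image with normalized template matching.
    import numpy as np
    import cv2

    rng = np.random.default_rng(0)
    # Synthetic stand-ins: a "zoomed" image and a low-fidelity ROI cut from a
    # 3x-downsampled version of it (in practice these come from the camera).
    zoomed = rng.integers(0, 256, size=(300, 300), dtype=np.uint8)
    wide_roi = cv2.resize(zoomed, (100, 100))[30:60, 30:60]

    best = (-1.0, None, None)                            # score, scale, location
    for scale in (2.0, 2.5, 3.0, 3.5, 4.0):              # candidate magnifications
        templ = cv2.resize(wide_roi, None, fx=scale, fy=scale)
        if templ.shape[0] > zoomed.shape[0] or templ.shape[1] > zoomed.shape[1]:
            continue
        result = cv2.matchTemplate(zoomed, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, scale, max_loc)
    print("best magnification and location:", best[1], best[2])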
Keywords: object recognition; activity recognition; memory assistance.
Tracking is frequently considered a frame-to-frame operation. As such, object recognition techniques are generally too slow to be used for tracking. There are domains, however, where the objects of interest do not move most of the time. In these domains, it is possible to watch for activity in the scene and then apply object recognition techniques to find the object's new location. This makes tracking a discrete process of watching for object disappearances and reappearances. We have developed a memory assistance tool that uses this approach to help people with slight to moderate memory loss keep track of important objects around the house. The system is currently deployed in a prototype smart home.
Keywords: single frame orientation; pattern recognition; image orientation.
Automatic single-frame image orientation detection is a difficult problem. In this report, we describe a system designed to classify the orientation of an image. An algorithm designed by researchers at Michigan State University is used as a baseline. First- and second-order spatial color moments are used as features. Learning Vector Quantization (LVQ) is used to estimate the underlying probability density function needed by a Bayesian classifier. We compare these results with those for 1NN and SVM classifiers, and with LDA used as a feature extractor. Extensive experiments were conducted to gain insight into how and why the MSU algorithm works. We present results both for a stock photo library (Corel) and for a set of consumer images (JBJL). Analyzing these results shows that certain prototypical images (e.g., those with sky at the top) can be classified correctly over 90% of the time, but that the general problem is much more difficult for low-level feature-based approaches. We obtained best results of 74% accuracy on the Corel set and 68% accuracy on JBJL, assuming equal prior among all four possible image orientations.
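The spatial color moment features can be sketched as below, assuming an N x N grid of image blocks with per-block means and standard deviations in a luminance-chrominance color space; the grid size and color space used in the report may differ, and the input image is a random stand-in.

    # Sketch: first- and second-order spatial color moments as orientation features.
    import numpy as np
    import cv2

    def color_moment_features(image_bgr, grid=5):
        luv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LUV).astype(np.float32)
        h, w = luv.shape[:2]
        feats = []
        for i in range(grid):
            for j in range(grid):
                block = luv[i * h // grid:(i + 1) * h // grid,
                            j * w // grid:(j + 1) * w // grid].reshape(-1, 3)
                feats.extend(block.mean(axis=0))   # first-order moments
                feats.extend(block.std(axis=0))    # second-order moments
        return np.array(feats)                     # grid * grid * 6 values

    image = np.random.default_rng(0).integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
    print(color_moment_features(image).shape)      # (150,) for a 5 x 5 grid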
Keywords: P3; evoked potential; brain-computer interface; BCI; virtual reality.
Recent advances in computer hardware and signal processing have made it feasible to use human EEG signals or "brain waves" to communicate with a computer. Locked-in patients now have a means to communicate with the outside world. Even with modern advances, such systems still suffer from communication rates on the order of 2-3 items/minute. In addition, existing systems are not likely to be designed with flexibility in mind, leading to slow systems that are difficult to improve. This dissertation presents a flexible brain-computer interface that is designed to facilitate changes in signal processing methods and user applications. In order to show the flexibility of the system, several applications, ranging from a brain-body actuated video game played with eye movements to a brain-computer interface for environmental control in a virtual apartment, are shown. The P3 evoked potential is a positive wave in the EEG signal peaking at around 300 milliseconds after task-relevant stimuli and it can be used as a binary control signal. A virtual driving experiment shows that the P3 can be reliably detected within a virtual environment. Several on-line algorithms for processing single trial P3 evoked potentials are presented and compared. It is important that actual EEG signals rather than signal artifacts are being recognized and thus false recognition of artifacts is shown to be small.
Results from an environmental control application within a virtual apartment are presented. Subjects do not perform significantly differently when controlling the application from a computer monitor versus when fully immersed in the virtual apartment, and subjects like the immersive VR environment better. This highlights the fact that the P3 component of the evoked potential is robust over different environments and that usability does not depend solely on performance, but on other factors as well. Future work is discussed within this context.
Keywords: performance analysis; learning; minimally supervised training; object recognition.
Due to recent advances in the art, object recognition may soon replace low-level feature extraction processes in automatic image database annotation. However, improvement in performance is still an important consideration. In addition, model acquisition for appearance-based object recognition is tedious, since such systems usually require training on a large set of segmentable example views that cover variation among class exemplars. These views have to be labeled with object identity and pose. In this thesis we first develop and analyze a feature-based object recognition system that demonstrates good recognition of a variety of 3D shapes, with full orthographic invariance. We report the results of large-scale tests that evaluate recognition performance in conditions of background clutter and partial occlusion, as well as generic capabilities of the system. We develop a statistical framework for predicting the performance in a variety of situations from a few basic measurements. We investigate the performance of object recognition systems to see which, if any, design axes of such systems hold the greatest potential for improving performance. One conclusion is that the greatest leverage seems to lie at the level of intermediate feature construction. We also analyze the effect of other improvements, such as parallelization and the use of multiple views.
We then formalize a system for constructing 3D recognition models using large, cluttered visual corpora, in a minimally supervised manner. After giving it a few seed pictures of an object class (say a couple of pictures of cars), the system is given access to an unlabeled image database containing, among other images, other pictures of the object. The system then explores the image database, augmenting its representation of the object (in this case the car) class to include new information whenever it finds a near enough match to the existing representation. After exposure to sufficient imagery, the system will usually have a general model of the car that can label cars in the entire database and other databases. We obtain a significant improvement in recognition performance when training the system from unlabeled cluttered background images, as opposed to training only on the labeled, black background seed image. The approach could use any appearance-based 3D object recognition system.
Keywords: N-view stereo; 3D occupancy; visibility; probabilistic photo-consistency.
This paper addresses the inference of 3D shape from a set of n noisy photos. We derive a probabilistic framework to specify what one can infer about 3D shape for arbitrarily-shaped, Lambertian scenes and arbitrary viewpoint configurations. Based on formal definitions of visibility, occupancy, emptiness, and photo-consistency, the theoretical development yields a formulation of the Photo Hull Distribution, the tightest probabilistic bound on the scene's true shape that can be inferred from the photos. We show how to (1) express this distribution in terms of image measurements, (2) represent it compactly by assigning an occupancy probability to each point in space, and (3) design a stochastic reconstruction algorithm that draws fair samples (i.e., 3D photo hulls) from it. We also show experimental results on two complex scenes.
Keywords: object recognition; learning; minimal supervision; cluttered backgrounds; recognition performance.
Appearance-based object recognition systems rely on training from imagery, which allows the recognition of objects without requiring a 3d geometric model. It has been little explored whether such systems can be trained from imagery that is unlabeled, and whether they can be trained from imagery that is not trivially segmentable. In this paper we present a method for minimally supervised training of a previously developed recognition system from unlabeled and unsegmented imagery. We show that the system can successfully extend an object representation extracted from one black background image to contain object features extracted from unlabeled cluttered images and can use the extended representation to improve recognition performance on a test set.
Keywords: object recognition; fixed cameras; relative camera position; appearance-based; multiple views; recognition performance.
Object recognition from a single view fails when the available features are not sufficient to determine the identity of a single object, either because of similarity with another object or because of feature corruption due to clutter and occlusion. Active object recognition systems have addressed this problem successfully, but they require complicated systems with adjustable viewpoints that are not always available. In this paper we investigate the performance gain available by combining the results of a single view object recognition system applied to imagery obtained from multiple fixed cameras. In particular, we address performance in cluttered scenes with varying degrees of information about relative camera pose. We argue that a property common to many recognition systems, which we term a weak target error, is responsible for two interesting limitations of multi-view performance enhancement: the lack of significant improvement in systems whose single-view performance is weak, and the plateauing of performance improvement as additional multi-view constraints are added.
Keywords: visual cue integration; sensory integration; adaptation; tracking; identification; fast re-weighting; virtual reality.
We study the dynamics of visual cue integration in a tracking / identification task, where subjects track a target object among distractors and identify the target after an occlusion. Objects are defined by three different attributes (color, shape, size) which change randomly within a single trial. When the attributes differ in their reliability (two change frequently, one is stable) our results show that subjects rapidly re-weight the different cues, putting more emphasis on the stable cue. The re-weighting takes place in less than one second. Our results suggest that cue integration can exhibit adaptive phenomena on a very fast time scale. We propose a probabilistic model with temporal dynamics that accounts for the observed effect.
Keywords: active vision; sensor parameter selection; mutual information; sequential decision process; state estimation.
In this paper we introduce a formalism for optimal sensor parameter selection for iterative state estimation in static systems. In contrast to common approaches, where a certain metric---for example, the mean squared error between true and estimated state---is optimized during state estimation, in this work the optimality is defined in terms of reduction in uncertainty in the state estimation process. The main assumption is that state estimation becomes more reliable if the uncertainty and ambiguity in the state estimation process can be reduced. We consider a framework based on Shannon's information theory and select the camera parameters that maximize the mutual information, i.e., optimize the information that the captured image conveys about the true state of the system. The technique implicitly takes into account the a priori probabilities governing the computation of the mutual information. Thus a sequential decision process can be formed by treating the a priori probability at a certain time step in the decision process as the a posteriori probability of the previous time step.
We demonstrate the benefits of our approach using an object recognition scenario and an active pan/tilt/zoom camera. During the sequential decision process the camera looks to parts of the object that allow the most reliable distinction of similar looking objects. We performed experiments with a discrete density representation as well as with continuous densities and Monte Carlo evaluation of the mutual information. The results show that the sequential decision process outperforms random gaze control, both in terms of recognition rate and in the number of views necessary to return a decision.
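For the discrete-density case, the parameter selection step amounts to computing the mutual information between the state and the observation under each candidate camera setting and choosing the maximizer, as in the sketch below; the probability tables are stand-ins, not the paper's models.

    # Sketch: pick the camera setting that maximizes I(S; O) for discrete densities.
    import numpy as np

    def mutual_information(prior, likelihood):
        """prior: p(s), shape (S,).  likelihood: p(o|s), shape (S, O)."""
        joint = prior[:, None] * likelihood                  # p(s, o)
        p_o = joint.sum(axis=0, keepdims=True)               # p(o)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = joint * np.log(joint / (prior[:, None] * p_o))
        return np.nansum(terms)

    prior = np.array([0.5, 0.3, 0.2])                        # belief over 3 objects
    # One p(o|s) table per candidate camera setting (pan/tilt/zoom choice).
    settings = {
        "view A": np.array([[0.8, 0.2], [0.7, 0.3], [0.6, 0.4]]),
        "view B": np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]),
    }
    best = max(settings, key=lambda a: mutual_information(prior, settings[a]))
    print("most informative setting:", best)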
Keywords: adaptation; neocortex; modular networks; correlations; synaptic plasticity; sensory integration; McGurk effect; Democratic Integration.
The human brain has to integrate the inputs it receives from different sensory modalities into a coherent description of its environment. This integration is often adaptive, showing recalibration or suppression of discordant sensory modalities. This paper proposes a qualitative theory of sensory integration which relates these adaptation phenomena to the anatomy of the neocortex and a rapid reversible synaptic mechanism as proposed in von der Malsburg's correlation theory of brain function.
Keywords: performance improvement; low-level feature detection; intermediate-level feature detection; object recognition; cluttered backgrounds; verification.
Appearance-based object recognition systems are currently the most successful approach for dealing with 3D recognition of arbitrary objects in the presence of clutter and occlusion. However, no current system seems directly scalable to human performance levels in this domain. In this report we describe a series of experiments on a previously described object recognition system that try to see which, if any, design axes of such systems hold the greatest potential for improving performance. We look at the potential effect of different design modifications and we conclude that the greatest leverage lies at the level of intermediate feature construction.
Keywords: object recognition; learning; view clusters; minimal supervision.
Image-based object recognition systems developed recently don't require the construction of a 3D geometric model, allowing recognition of objects for which current geometric recognition technologies do not apply. Such systems are typically trained with labeled, clean views that cover the whole viewing sphere and can sometimes handle generic, visually similar classes with moderate variation. It has been little explored whether such systems can be trained from imagery that is unlabeled, and whether they can be trained from imagery that is not trivially segmentable. In this report we investigate how an object recognition system developed previously can be trained from clean images of objects with minimal supervision. After training this system on a single or a small number of views of each object, a simple learning algorithm is able to attract additional views to the object representation, building clusters of views belonging to the same object. We explore how the learning performance improves by extending the set of views, introducing a small amount of supervision, or using more complicated learning algorithms.
Keywords: edge detection; boundary extraction; 3D object recognition.
In this report we describe a method for extracting curves from an image using directional pixel variances instead of gradient measures as low-level boundary evidence. The advantage of the variance over the image gradient is that we can accurately compute the direction of a local edge even if a sudden contrast change occurs in the background. This allows curves belonging to object contours to be followed more easily. We compared our method to a similar method based on the image gradient and we found that it obtains better results when run on synthetic and natural images. Our method also improved the performance of a contour-based 3D object recognition system in cluttered images.
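A simplified form of the directional-variance idea is sketched below: at a pixel, sample intensities along short line segments at several orientations and take the orientation of minimum variance as the local edge direction; the window size and angle sampling are illustrative, not the report's settings.

    # Sketch: local edge direction as the orientation of minimum intensity variance.
    import numpy as np

    def edge_direction(patch, n_angles=16, radius=3):
        """patch: square grayscale array with the pixel of interest at its center."""
        c = patch.shape[0] // 2
        best_angle, best_var = None, np.inf
        for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            ts = np.arange(-radius, radius + 1)
            ys = np.clip(np.round(c + ts * np.sin(angle)).astype(int), 0, patch.shape[0] - 1)
            xs = np.clip(np.round(c + ts * np.cos(angle)).astype(int), 0, patch.shape[1] - 1)
            var = patch[ys, xs].var()       # variance of samples along this direction
            if var < best_var:
                best_angle, best_var = angle, var
        return best_angle

    patch = np.tile(np.arange(9.0), (9, 1))    # intensity varies across columns only
    print(np.degrees(edge_direction(patch)))   # near 90 degrees: edge runs vertically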
Keywords: predictive coding; visual cortex; Kalman filter; rate coding; learning; neural networks.
Several decades of research have made many advances towards the goal of interpreting the neural spike train but a comprehensive understanding remains elusive. This paper pursues this goal in the context of a new class of models termed predictive models. Predictive models characterize the cortex as a memory whose parameters can be used to predict its input. This allows the input to be economically coded as a residual difference between itself and the prediction. Such models have recently had considerable success in modeling features of visual cortex. This paper shows that the predictive coding model can be extended to a lower level of detail that includes individual spikes as primitives. This is a significant improvement in perspicuity compared to the firing rate variables used by most current models. The specific model we describe exploits the use of coincidence of spike arrival times and the fact that neural representations can be distributed over large numbers of cells.
Keywords: autonomous driving; simulation tools; visual routines; real-time active vision.
To meet the demands of driving in complex environments, the perception subsystem of an intelligent vehicle must be able to extract the information needed for behaviors from the input video stream. An attractive way of achieving this is to have a library of basic image processing sub-functions (visual routines), which can be composed to subserve more elaborate goal-directed programs. The crucial compositional capability allows the visual routines to span the huge space of different task goals. The visual routines presented here are developed on a unique platform. The view from a car driving in a simulated world is fed into a Datacube pipeline video processor. The integration of photo-realistic simulation and real-time image processing represents a proof of concept for a new system design that allows testing computer vision algorithms under controllable conditions, thus leading to rapid prototyping. In addition to the simulations, the routines are also tested on similar images generated by driving in the real world to assure the generalizability of the simulation.
The simulator can also be used with human subjects who can drive a kart through the virtual environment while wearing head mounted displays (HMDs). A unique feature of the driving simulator we have built is the ability to track eye movements within a freely moving HMD. This allows the assessment of exigencies in complex situations that can be used to guide the development of automated routines.
Keywords: evoked potential; virtual reality; signal processing.
Virtual reality (VR) provides immersive and controllable experimental environments. It expands the bounds of possible evoked potential (EP) experiments by providing complex, dynamic environments in order to study cognition without sacrificing environmental control. The addition of quick, on-line analysis enables feedback to subjects, making the system we present ideal for safe Brain Computer Interface (BCI) research. In this context, we describe an experiment to recognize the existence of P300 EP epochs at red stoplights and the absence of this signal at yellow stoplights in a virtual driving environment. In order to determine the plausibility of single-trial on-line P300 epoch analysis in the artifact-ridden driving environment, we have compared the use of Independent Component Analysis (ICA), a Kalman filter, a robust Kalman filter, and correlation with the stoplight averages for recognition ability off-line. We report that while the other methods all perform better than simple correlation, the robust Kalman filter gives the highest recognition accuracy, and we discuss future work in this context.
Keywords: computer graphics; virtual reality; haptic interface; affine geometry; augmented reality.
Augmented reality is the merging of synthetic sensory information into a user's perception of a real environment. Until recently, it has presented a passive interface to its human users, who were merely viewers of the scene augmented only with visual information. In contrast, practically since its inception, computer graphics--and its outgrowth into virtual reality--has presented an interactive environment. It is our thesis that the augmented reality interface can be made interactive. We present: techniques that can free the user from restrictive requirements such as working in calibrated environments, results with haptic interface technology incorporated into augmented reality domains, and systems considerations that underlie the practical realization of these interactive augmented reality techniques.
Keywords: dynamic vision; optimal estimation; learning; eye movements; multiscale spatial filters; visual routines; object recognition; object location; visual search; hierarchical recognition; prediction; Kalman filters; attention; segmentation; transformation invariance; pose estimation; visual cortex; neural networks; cortical feedback; extra-classical effects; spatiotemporal filters; natural images.
Theories of vision have traditionally confined themselves to the passive analysis of static images, focusing on the extraction of task-independent, 3D reconstructions of the visual world. However, the images projected on the retina are seldom unrelated static signals. Rather, they represent measurements of a coherent and continuous stream of events occurring in the visual environment, constrained both by physical laws of nature and the observer's actions on the immediate environment. In short, vision is inherently a dynamic process. In this thesis, we propose two related theories of dynamic visual perception. The first theory exploits the ability to make eye movements for dynamically exploring the visual world. The resulting architecture uses appearance-based models of objects in lieu of hand-coded 3D models, and employs two visual routines, one for object identification and another for object location, for solving visual cognitive tasks. The second theory, which can be seen as an elaboration of the first, is based directly on the premise that vision is a stochastic, dynamic process. The task of visual perception is then reduced to the dual problems of optimally estimating visual events occurring in the external environment, and on a longer time scale, learning efficient internal models of the environment. Both estimation and learning are appearance-based, relying only on input images rather than hand-coded object/environment models. Using this framework, we derive estimation and learning algorithms for visual recognition, visual "attention," occlusion-handling, segmentation, prediction, hierarchical recognition, transformation-invariant recognition, and pose estimation. Experimental results are provided to corroborate the viability of these derived algorithms.
In addition to their potential applications in machine vision and robotics, the derived algorithms can also be used to understand human and mammalian vision. We use the visual routines theory to model saccade learning behaviors in infants, visual search/cognitive behaviors in adult subjects, and hemispatial neglect in patients with parietal cortex lesions. The optimal estimation and learning framework is used to interpret the hierarchical and laminar circuitry of the mammalian visual cortex, and to explain neuronal properties such as endstopping, response suppression during free viewing of natural images, and spatiotemporal receptive field development in primary visual cortex.
Keywords: 3D computer vision; shape recovery; stereo; shape-from-shading; volumetric representations.
In this paper we consider the problem of computing the 3D shape of an unknown, arbitrarily-shaped scene from multiple color photographs taken at known but arbitrarily-distributed viewpoints. By studying the equivalence class of all 3D shapes that reproduce the input photographs, we prove the existence of a special member of this class, the maximal photo-consistent shape, that (1) can be computed from an arbitrary volume that contains the scene, and (2) subsumes all other members of this class. We then give a provably-correct algorithm, called Space Carving, for computing this shape and present experimental results from applying it to the reconstruction of geometrically-complex scenes from several photographs. The approach is specifically designed to (1) build 3D shapes that allow faithful reproduction of all input photographs, (2) resolve the complex interactions between occlusion, parallax, shading, and their effects on arbitrary collections of photographs of a scene, and (3) follow a "least commitment" approach to 3D shape recovery.
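The photo-consistency test at the heart of such volumetric methods can be sketched as follows; this is a deliberately simplified illustration that omits the visibility ordering central to the actual Space Carving algorithm, and all names and numbers are invented.

    import numpy as np

    def project(P, X):
        # Project a 3D point X with a 3x4 camera matrix P (pinhole model).
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    def photo_consistent(X, cameras, images, tau=0.1):
        # Illustrative consistency test: a voxel is kept only if the pixel
        # colors it projects to agree across the views that see it.
        # (Visibility reasoning, a key part of the real algorithm, is omitted.)
        colors = []
        for P, img in zip(cameras, images):
            u, v = project(P, X)
            u, v = int(round(u)), int(round(v))
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                colors.append(img[v, u])
        if len(colors) < 2:
            return True        # a voxel seen in fewer than two views cannot be rejected
        return np.std(np.asarray(colors), axis=0).max() < tau

    # Tiny demo with two cameras and constant-color views; a carving loop would
    # sweep a voxel grid, deleting inconsistent voxels until nothing changes.
    P1 = np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
    P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [5.0]])])
    img = np.full((9, 9), 0.5)
    print(photo_consistent(np.zeros(3), [P1, P2], [img, img]))   # True: colors agree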
Keywords: perceptual grouping; object recognition; appearance-based representations.
In this report we consider the problem of 3D object recognition, and the role that perceptual grouping processes must play. In particular, we argue that a single level of perceptual grouping is inadequate, and that reliance on a single level of grouping is responsible for the specific weaknesses of several well-known recognition techniques. Instead, we argue that recognition must utilize a hierarchy of perceptual grouping processes, and describe an appearance-based system that uses four distinct levels of perceptual grouping, the upper two novel, to represent 3D objects in a form that not only allows recognition, but reasoning about 3D manipulation of a sort that has been supported in the past only by 3D geometric models.
Keywords: object recognition; appearance-based representations; visual learning.
We describe an appearance-based object recognition system using a keyed, multi-level context representation reminiscent of certain aspects of cubist art. Specifically, we utilize distinctive intermediate-level features, in this case automatically extracted 2D boundary fragments, as keys, which are then verified within a local context, and assembled within a loose global context to evoke an overall percept. This system demonstrates extraordinarily good recognition of a variety of 3D shapes, ranging from sports cars and fighter planes to snakes and lizards with full orthographic invariance. We report the results of large-scale tests, involving over 2000 separate test images, that evaluate performance with an increasing number of items in the database, in the presence of clutter, background change, and occlusion, and also the results of some generic classification experiments where the system is tested on objects never previously seen or modelled. To our knowledge, the results we report are the best in the literature for full-sphere tests of general shapes with occlusion and clutter resistance.
Keywords: EEG; virtual reality.
Since scalp EEG recordings are measured in microvolts, electrical signals may easily interfere during an experiment. As Spehlmann discusses, such interference may be introduced through the lights in the recording room, a nearby television, or even a computer monitor [Spehlmann, 1991]. Thus, when we consider performing EEG/EP/ERP experiments within a virtual reality helmet containing an eye tracker, electrical interference becomes a real possibility. We tested the effects of wearing a VR4 virtual reality (VR) helmet containing an ISCAN eye tracker while asking subjects to do a continuous performance task. The results of this task were then analyzed in the frequency domain and compared to results from the same experiment while looking at a computer screen in two different environments. Results indicate that in an environment with other computers, the vertical refresh from the back of a nearby row of computer monitors added more noise to the signal than wearing the VR helmet and eye tracker. Even in an environment without other computers, the noise while wearing the VR helmet and eye tracker is not significantly different from the noise obtained while viewing a laptop computer screen in the same location.
Keywords: 3D computer vision; shape recovery; stereo; shape-from-shading; volumetric representations.
In this paper we consider the problem of computing the 3D shape of an unknown, arbitrarily-shaped scene from multiple color photographs taken at known but arbitrarily-distributed viewpoints. By studying the equivalence class of all 3D shapes that reproduce the input photographs, we prove the existence of a special member of this class, the maximal photo-consistent shape, that (1) can be computed from an arbitrary volume that contains the scene, and (2) subsumes all other members of this class. We then give a provably-correct algorithm for computing this shape and present experimental results from applying it to the reconstruction of a real 3D scene from several photographs. The approach is specifically designed to (1) build 3D shapes that allow faithful reproduction of all input photographs, (2) resolve the complex interactions between occlusion, parallax, shading, and their effects on arbitrary collections of photographs of a scene, and (3) follow a "least commitment" approach to 3D shape recovery.
Keywords: visual navigation; pose estimation; tracking; Variable State Dimension Filter.
In this paper, we discuss how to process visual information in convoying applications, using only low-cost, off-the-shelf hardware. We introduce a numerical algorithm for real-time perspective pose estimation that uses strong task-specific constraints to achieve efficiency and stability. Through extensive experiments performed with synthetic data, we show that this approach yields more accurate recovery than a general-purpose structure-and-motion recovery framework known as the Variable State Dimension Filter, even when some of its fundamental task-specific assumptions are only partially valid. In addition, we discuss efficient ways to perform low-level vision with off-the-shelf hardware, and we present a two-level control strategy that uses high-frequency odometry data to stabilize visual control. Real-world convoying experiments show that our tracking-and-control system performs quite well in the sense that it manages to keep targets in view, tolerates changes in lighting conditions, and enables vehicles to keep up with complex maneuvers performed by other members of the convoy, such as 180-degree turns.
Keywords: pose recovery; extrinsic camera calibration; visual navigation; tracking; dual quaternions; numerical optimization.
In this paper we review and compare several techniques for model-based pose recovery (extrinsic camera calibration) from monocular images. We classify the solutions reported in the literature as analytical perspective, affine and numerical perspective. We also present reformulations for two of the most important numerical perspective solutions: Lowe's algorithm and Phong-Horaud's algorithm. Our improvement to Lowe's algorithm consists of eliminating some simplifying assumptions on its projective equations. A careful experimental evaluation reveals that the resulting fully projective algorithm has superexponential convergence properties for a wide range of initial solutions and, under realistic usage conditions, it is up to an order of magnitude more accurate than the original formulation, with arguably better computation-time properties. Our extension to Phong-Horaud's algorithm is, to the best of our knowledge, the first method for independent orientation recovery that actually exploits the theoretical advantages of point correspondences over line correspondences. We show that in the context of a specific real-life application (visual navigation), it is either more accurate than other similar techniques with the same computational cost, or more efficient with the same accuracy.
Keywords: Linux; pose estimation; motor control; graphics simulation; dynamical modeling; video pipeline architecture; obstacle avoidance; communication protocol.
After abandoning an attempt to build our own gasoline-powered automated outdoor vehicle in 1995, we purchased two M68332-controlled wheelchairs for indoor and outdoor mobile robotics research. Much of the first year has been spent on various infrastructure projects, several of which are described here. At this writing we are beginning to be in a position to do nontrivial applications and research using these platforms. This compendium of facts and experiences is meant to be useful in getting to know the organization and capabilities of our mobile robots. We first cover the basic hardware and the serial protocol used to communicate between the main computing engine and the microcontroller responsible for sensor management, motor control, and low-level sensori-motor control loops. We describe the interface to the video digitizer, a low-level obstacle avoidance routine, and a general software organization for a control architecture based on video streams. Dynamic nonholonomic models and a virtual environment for debugging and experimenting with them are described next, followed up by a visual servoing application that uses ``engineered vision'' and special assumptions.
Keywords: eye movements; visual search; computational models; reference frames; spatial memory.
Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual scene. Saccades are ballistic; a prespecified target location is computed prior to the movement and visual feedback is precluded. Once a target is fixated, gaze is typically held for about 300 milliseconds, although it can be held for both longer and shorter intervals. Despite these distinctive properties, there has been no specific computational model of the gaze targeting strategy employed by the human visual system during visual cognitive tasks. This paper proposes such a model that uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion with the target's largest scale filter responses being compared first. Task-relevant target locations are represented as saliency maps which are used to program eye movements. Once fixated, targets are remembered by using spatial memory in the form of object-centered maps. The model was empirically tested by comparing its performance with actual eye movement data from human subjects in natural visual search tasks. Experimental results indicate excellent agreement between eye movements predicted by the model and those recorded from human subjects.
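A toy version of the coarse-to-fine search step might look like the following sketch, in which a blurred (coarse-scale) correlation of the target against the scene stands in for the model's large-scale filter comparison; the filters, scales, and matching score are simplifications, not the model's actual machinery.

    import numpy as np

    def gaussian_blur(img, sigma):
        # Separable Gaussian blur; the coarse scale stands in for the model's
        # largest-scale filter responses.
        r = int(3 * sigma)
        x = np.arange(-r, r + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        k /= k.sum()
        tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, tmp)

    def saliency_map(scene, target, sigma=2.0):
        # Correlate coarse-scale responses of the target patch against the
        # scene; peaks mark candidate fixation points (coarse-to-fine step).
        s, t = gaussian_blur(scene, sigma), gaussian_blur(target, sigma)
        th, tw = t.shape
        sal = np.full(scene.shape, -np.inf)
        for y in range(scene.shape[0] - th):
            for x in range(scene.shape[1] - tw):
                sal[y, x] = -np.sum((s[y:y + th, x:x + tw] - t) ** 2)
        return sal

    scene = np.random.default_rng(5).random((64, 64))
    target = scene[20:36, 30:46].copy()                # the "object" to find
    fix = np.unravel_index(np.argmax(saliency_map(scene, target)), scene.shape)
    print("next fixation (y, x):", fix)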
Keywords: image manipulation; 2D painting and drawing; rendering algorithms; image-based rendering.
This paper presents a new class of interactive image editing operations designed to maintain physical consistency between multiple images of a physical 3D object. The distinguishing feature of these operations is that edits to any one image propagate automatically to all other images as if the (unknown) 3D object had itself been modified. The approach is useful first as a power-assist that enables a user to quickly modify many images by editing just a few, and second as a means for constructing and editing image-based scene representations by manipulating a set of photographs. The approach works by extending operations like image painting, scissoring, and morphing so that they alter an object's plenoptic function in a physically-consistent way, thereby affecting object appearance from all viewpoints simultaneously. A key element in realizing these operations is a new volumetric decomposition technique for reconstructing an object's plenoptic function from an incomplete set of camera viewpoints.
Keywords: appearance-based recognition; prediction; visual learning; Kalman filters; robust statistics.
Using results from the field of robust statistics, we derive a class of Kalman filters that are robust to structured and unstructured noise in the input data stream. Each filter from this class maintains robust optimal estimates of the input process's hidden state by allowing the measurement covariance matrix to be a non-linear function of the prediction errors. This endows the filter with the ability to reject outliers in the input stream. Simultaneously, the filter also learns an internal model of input dynamics by adapting its measurement and state transition matrices using two additional Kalman filter-based adaptation rules. We present experimental results demonstrating the efficacy of such filters in mediating appearance-based segmentation and recognition of objects and image sequences in the presence of varying degrees of occlusion, clutter, and noise.
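One simple way to realize the idea of making the measurement covariance a function of the prediction error is sketched below; the specific inflation rule and thresholds are illustrative assumptions, not the filters derived in the paper.

    import numpy as np

    def robust_kalman_step(x, P, z, A, C, Q, R0, c=3.0):
        # One predict/update cycle in which the effective measurement
        # covariance grows with the prediction error (illustrative sketch).
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        r = z - C @ x_pred                              # innovation
        S = C @ P_pred @ C.T + R0
        d = float(r.T @ np.linalg.inv(S) @ r) ** 0.5    # normalized error
        # Inflate R for large residuals (outlier rejection); a simple
        # quadratic inflation beyond threshold c is assumed here.
        R = R0 * (1.0 if d <= c else (d / c) ** 2)
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x_new = x_pred + K @ r
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new

    # Example: scalar constant signal with an occasional gross outlier.
    A = C = np.eye(1); Q = np.eye(1) * 1e-4; R0 = np.eye(1) * 0.01
    x, P = np.zeros(1), np.eye(1)
    for z in [1.0, 1.02, 0.99, 5.0, 1.01]:              # 5.0 is an outlier
        x, P = robust_kalman_step(x, P, np.array([z]), A, C, Q, R0)
    print("final estimate:", x)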
Keywords: augmented reality; registration; calibration; interaction; real-time tracking; invariance; affine representations.
We describe the design and implementation of a video-based augmented reality system capable of overlaying three-dimensional graphical objects on live video of dynamic environments. The key feature of the system is that it is completely uncalibrated: it does not use any metric information about the calibration parameters of the camera or the 3D locations and dimensions of the environment's objects. The only requirement is the ability to track across frames at least four feature points that are specified by the user at system initialization time and whose world coordinates are unknown. Our approach is based on the following observation: Given a set of four or more non-coplanar 3D points, the projection of all points in the set can be computed as a linear combination of the projections of just four of the points. We exploit this observation by (1) tracking lines and fiducial points at frame rate, and (2) representing virtual objects in a non-Euclidean, affine frame of reference that allows their projection to be computed as a linear combination of the projection of the fiducial points.
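The linear-combination observation can be sketched with synthetic data as follows: a virtual point is expressed in the affine basis defined by four non-coplanar fiducials, and its projection in a new frame is recovered from the tracked fiducial projections alone; the affine camera and point values are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    fiducials = rng.uniform(-1, 1, size=(4, 3))         # four non-coplanar 3D points
    virtual_pt = np.array([0.3, -0.2, 0.5])             # a point on a virtual object

    # Affine coordinates of the virtual point in the fiducial basis.
    B = (fiducials[1:] - fiducials[0]).T                # 3x3 basis matrix
    coords = np.linalg.solve(B, virtual_pt - fiducials[0])

    def affine_project(points3d, A, t):
        # Affine (weak-perspective style) camera: x = A X + t.
        return points3d @ A.T + t

    # In a new frame only the projections of the fiducials are observed.
    A_cam = rng.uniform(-1, 1, size=(2, 3)); t_cam = rng.uniform(-1, 1, size=2)
    fid_proj = affine_project(fiducials, A_cam, t_cam)

    # Reproject the virtual point as the same linear combination of the
    # projected fiducials; no camera calibration or 3D metric data is used.
    reproj = fid_proj[0] + (fid_proj[1:] - fid_proj[0]).T @ coords
    print(np.allclose(reproj, affine_project(virtual_pt[None], A_cam, t_cam)[0]))  # True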
Keywords: vision; real-time tracking; viewpoint estimation; pose determination.
David Lowe's influential and classic algorithm for tracking objects with known geometry is formulated with certain simplifying assumptions. A version implemented by Ishii et al. makes different simplifying assumptions. We formulate a full projective solution and apply the same algorithm (Newton's method). We report results of extensive testing of these three algorithms. We compute two image-space and six pose-space error metrics to quantify the effects of object pose, errors in initial solutions, and image noise levels. We consider several scenarios, from relatively unconstrained conditions to those that mirror real-world and real-time constraints. The conclusion is that the full projective formulation makes the algorithm orders of magnitude more accurate and gives it super-exponential convergence properties with arguably better computation-time properties.
Keywords: robot learning; dextrous manipulation; evolution strategy.
We present a method for autonomous learning of dextrous manipulation skills with multifingered robot hands. We use heuristics derived from observations made on human hands to reduce the degrees of freedom of the task and make learning tractable. Our approach consists of learning and storing a few basic manipulation primitives for a few prototypical objects and then using an associative memory to obtain the required parameters for new objects and/or manipulations. The parameter space of the robot is searched using a modified version of the evolution strategy, which is robust to the noise normally present in real-world complex robotic tasks. Given the difficulty of modeling and simulating accurately the interactions of multiple fingers and an object, and to ensure that the learned skills are applicable in the real world, our system does not rely on simulation; all the experimentation is performed by a physical robot, in this case the 16-degree-of-freedom Utah/MIT hand. Experimental results show that accurate dextrous manipulation skills can be learned by the robot in a short period of time.
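A minimal sketch of a noise-tolerant (mu + lambda) evolution strategy of this general flavor is given below, with a synthetic cost function standing in for actual manipulation trials on the hand; the particular operators and schedules are assumptions, not the modified strategy used in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    def noisy_cost(params):
        # Stand-in for a real manipulation trial: a quadratic bowl plus
        # evaluation noise (the real cost would come from the robot).
        target = np.array([0.4, -0.1, 0.7])
        return np.sum((params - target) ** 2) + 0.05 * rng.standard_normal()

    def evolution_strategy(dim=3, mu=5, lam=20, sigma=0.3, generations=40):
        # (mu + lambda)-ES with repeated evaluation to cope with noisy costs.
        pop = rng.uniform(-1, 1, size=(mu, dim))
        for _ in range(generations):
            parents = pop[rng.integers(0, mu, size=lam)]
            children = parents + sigma * rng.standard_normal((lam, dim))
            everyone = np.vstack([pop, children])
            # Average several noisy evaluations per candidate (noise robustness).
            costs = np.array([np.mean([noisy_cost(p) for _ in range(3)]) for p in everyone])
            pop = everyone[np.argsort(costs)[:mu]]
            sigma *= 0.97                               # slow step-size decay
        return pop[0]

    print("best parameters found:", evolution_strategy())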
Keywords: selective attention; short-term memory; memory-based learning; deictic representations; embodied cognition.
Reinforcement learning is a machine learning framework in which an agent manipulates its environment through a series of actions, and in response to each action, receives a reward value. The agent stores its knowledge about how to choose reward-maximizing actions in a mapping from agent-internal states to actions. Agents often struggle with two opposite, yet intertwined, problems regarding their internal state space. First, the agent's state space may have too many distinctions---meaning that an abundance of perceptual data has resulted in a state space so large that it overwhelms the agent's limited resources for computation, storage and learning experience. This problem can often be solved if the agent uses selective perception to prune away unnecessary distinctions, and focus its attention only on certain features. Second, even though there are too many distinctions, the agent's state space may simultaneously contain too few distinctions---meaning that perceptual limitations (such as field of view, acuity, and occlusions) have temporarily hidden crucial features of the environment from the agent. This problem, called hidden state, can often be solved by using memory of features from previous views to augment the agent's perceptual inputs.
This dissertation presents algorithms that use selective perception and short-term memory to simultaneously prune and augment the state space provided by the agent's perceptual inputs. During learning, the agent selects task-relevant state distinctions with a utile distinction test that uses robust statistics to determine when a distinction helps the agent predict reward. The dissertation also advocates using instance-based (or memory-based) learning for making efficient use of accumulated experience, and using a tree structure to hold variable-length memories. Four new algorithms are shown to perform a variety of tasks well---in some cases with more than an order-of-magnitude better performance than previous algorithms.
Keywords: dextrous manipulation; Utah/MIT hand.
In this paper we present a kinematic method for 6-degree-of-freedom manipulation of rigid objects using a dextrous robotic hand. Our method requires no prior models of the objects being manipulated; instead it obtains all the information needed directly from the hand's sensors. Its low computational cost makes real-time performance easy to achieve. We present experimental results showing the implementation of our method using the Utah/MIT dextrous hand. We also show that adding a Cartesian controller significantly improves the accuracy of the manipulation.
Keywords: image-based view synthesis; visual representations; active visual model acquisition.
Using a combination of techniques from visual representations, view synthesis, and visual-motor transfer function estimation, we present a method for animating movements of an active agent (e.g., robot), without the use of any prior models or explicit 3D information. The information needed to generate simulated images can be acquired either on-line or off-line, by watching the agent doing an arbitrary, possibly unrelated task. We present experimental results synthesizing image sequences of the movement of a simulated PUMA 760 robot arm, using both joint space and Cartesian world coordinate control. We have created a user interface, where a user can input a robot movement program, and then upon execution, view movies of the (simulated) robot executing the program, along with the instantaneous dynamic variables from the simulated robot.
Keywords: shape and object representation; active vision; occluding contour; affine-invariant representations; viewpoint control; smooth surfaces; purposive and qualitative vision.
We present an approach for building an affine representation of an unknown curved object viewed under orthographic projection from images of its occluding contour. It is based on the observation that the projection of a point on a curved, featureless surface can be computed along a special viewing direction that does not belong to the point's tangent plane. We show that by circumnavigating the object on the tangent plane of selected surface points, we can (1) compute two orthogonal projections of every point projecting to the occluding contour during this motion, and (2) compute the affine coordinates of these points. Our approach demonstrates that affine shape of curved objects can be computed directly, i.e., without Euclidean calibration or image velocity and acceleration measurements.
Keywords: Kanerva memories; common filters; visual cortex; basis functions.
Recent neurophysiological experiments appear to indicate that the responses of visual cortical neurons in a monkey freely viewing a natural scene can sometimes differ substantially from those obtained when the same image subregions are flashed during a conventional fixation task. These new findings attain significance from the fact that neurophysiological research in the past has been based predominantly on cell recordings obtained during fixation tasks, under the assumption that these data would be useful in predicting responses in more general situations. We describe a hierarchical model of visual memory that reconciles the two differing experimental results mentioned above by predicting neural responses in both fixating and free-viewing conditions. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to achieve optimal estimation of current state using a Kalman filter based framework. The architecture of the model posits a role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties.
Keywords: teleassistance; motor control; human guidance; deictic gestures; dexterous manipulation.
This thesis presents a bottom-up approach to understanding and extending robotic motor control by integrating human guidance. The focus is on dexterous manipulation using a Utah/MIT robot hand, but the ideas apply to other robotic platforms as well. Teleassistance is a novel method of human/robot interaction in which the human operator uses a gestural sign language to guide an otherwise autonomous robot through a given task. The operator wears a glove that measures finger joint angles to relay the sign language. Each sign serves to orient the robot within the task action sequence by indicating the next perceptual sub-goal and a relative spatial basis. Teleassistance merges robotic servo loops with human cognition to alleviate the limitations of either full robot autonomy or full human control alone.
The operator's gestures are deictic, from the Greek deiktikos meaning pointing or showing, because they circumscribe the possible interpretations of perceptual feedback to the current context and thereby allow the autonomous routines to perform with computational economy and without dependence on a detailed task model. Conversely, the use of symbolic gestures permits the operator to guide the robot strategically without many of the problems inherent to literal master/slave teleoperation, including non-anthropomorphic mappings, poor feedback, and reliance on a tight communication loop.
The development of teleassistance stems from an analysis of autonomous control, in light of recent advances in manipulator technology. This work also presents a qualitative, context-sensitive control strategy that exploits the many degrees of freedom and compliance of dexterous manipulators. The qualitative strategy governs the underlying autonomous routines in teleassistance.
Keywords: 3D object recognition; sparse distributed memory; iconic representations; principal component analysis (PCA); multiscale spatial filters.
A general-purpose object indexing technique is described that combines the virtues of principal component analysis with the favorable matching properties of high-dimensional spaces to achieve high precision recognition. An object is represented by a set of high-dimensional iconic feature vectors comprised of the responses of derivative of Gaussian filters at a range of orientations and scales. Since these filters can be shown to form the eigenvectors of arbitrary images containing both natural and man-made structures, they are well-suited for indexing in disparate domains. The indexing algorithm uses an active vision system in conjunction with a modified form of Kanerva's sparse distributed memory which facilitates interpolation between views and provides a convenient platform for learning the association between an object's appearance and its identity. The robustness of the indexing method was experimentally confirmed by subjecting the method to a range of viewing conditions and the accuracy was verified using a well-known model database containing a number of complex 3D objects under varying pose.
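The flavor of the iconic feature vectors can be conveyed by the following sketch, which stacks the responses of oriented derivative-of-Gaussian filters at a few scales into one vector at an image location; the kernel sizes, scales, and orientations are illustrative choices rather than the system's actual parameters.

    import numpy as np

    def gaussian_derivative_kernel(sigma, theta, size=None):
        # First derivative of a 2D Gaussian, steered to orientation theta.
        size = size or int(6 * sigma) | 1
        r = size // 2
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        gx, gy = -x / sigma**2 * g, -y / sigma**2 * g
        return np.cos(theta) * gx + np.sin(theta) * gy

    def iconic_vector(image, y, x, sigmas=(1, 2, 4), n_orient=4):
        # Responses of oriented derivative-of-Gaussian filters at (y, x),
        # stacked into one high-dimensional feature vector (sketch only).
        feats = []
        for sigma in sigmas:
            for k in range(n_orient):
                kern = gaussian_derivative_kernel(sigma, np.pi * k / n_orient)
                r = kern.shape[0] // 2
                patch = image[y - r:y + r + 1, x - r:x + r + 1]
                feats.append(np.sum(patch * kern))
        return np.array(feats)

    img = np.random.rand(64, 64)
    print(iconic_vector(img, 32, 32).shape)             # (12,): 3 scales x 4 orientations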
Keywords: image processing hardware; image processing software.
The Datacube MaxVideo 200 is a high-speed image processing system that can provide video rate processing of images. A user writes programs using the Datacube ImageFlow libraries to control the hardware. Learning the details of ImageFlow programming is a daunting task for the new user. This learning task is made more difficult because the manuals for the MaxVideo 200 hardware and ImageFlow software are rather obscure to the typical novice user. This user's guide describes several simple ImageFlow programs. Emphasis is placed on providing some of the folklore that is needed to get started in MV200/ImageFlow programming. Also, the organization of the information in the Datacube manuals is described so that new users can continue exploring, on their own, system features that are not covered by the sample programs.
Keywords: mobile robotics; LEGO; interactive C; MIT 6.270; sensors.
On 7 December 1994, four student-built autonomous robots demonstrated various strategic, tactical, and mechanical approaches to a delivery task. That event was preceded by approximately two years of history and two days of frenzied preparation. Our robotics efforts were based on materials from MIT's well-known 6.270 course. This report summarizes our experiences, from pedagogical goals and organizational matters through mechanical and electronic techniques. Our intended audience is future robot-builders, and organizers of robot-building courses. We assume familiarity with material in Jones and Flynn's Mobile Robotics text, and with the various materials available from MIT over the internet.
Keywords: robot control; adaptive control; visual feedback control; hand-eye coordination; real-time active vision; robot user interfaces.
We propose and implement a novel method for visual space trajectory planning, and adaptive high degree-of-freedom (DOF) visual feedback control. The method requires no prior information about either the kinematics of the manipulator or the placement and calibration of the cameras, and imposes no limitations on the number of degrees of freedom controlled or the number or kind of visual features utilized. The approach provides not only a means of low-level servoing but a means to integrate it with higher level visual space trajectory and task planning. We are thus able to specify and perform complex tasks composed of several primitive behaviors, using both visual servoing and open loop control, where the number of sensed and controlled signals varies during the task. We report experimental results demonstrating a factor of 5 improvement in the repeatability of manipulations using a PUMA arm when comparing visual closed-loop to traditional joint level servoing. We also present experimental statistics showing the advantages of adaptive over non-adaptive control systems, and of using redundant visual information when performing manipulation tasks. Finally, we demonstrate the usefulness of the approach by using it to specify and execute complex tasks involving real-world robot manipulation of rigid and non-rigid objects in up to 12 degrees of freedom. The manipulation is performed in the context of a semi-autonomous robot manipulation system.
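One common way to obtain calibration-free visual feedback of this general kind (not necessarily the exact scheme used in the report) is to estimate the image Jacobian online with a secant, Broyden-style update and servo with its pseudo-inverse, as in the hypothetical sketch below; read_features and move_joints are assumed interfaces to the camera and the arm.

    import numpy as np

    def servo_uncalibrated(read_features, move_joints, target, q0, n_feat, n_dof,
                           steps=200, gain=0.2, alpha=0.5):
        # Sketch of adaptive image-based servoing with an online-estimated
        # image Jacobian J (df = J dq); no kinematic or camera model is used.
        q = q0.copy()
        J = np.eye(n_feat, n_dof)                       # crude initial Jacobian
        f = read_features()
        for _ in range(steps):
            err = target - f
            if np.linalg.norm(err) < 1e-3:
                break
            dq = gain * np.linalg.pinv(J) @ err         # servo step in joint space
            move_joints(q + dq)
            f_new = read_features()
            df = f_new - f
            # Broyden secant update: correct J along the direction just explored.
            J += alpha * np.outer(df - J @ dq, dq) / (dq @ dq + 1e-12)
            q, f = q + dq, f_new
        return q

    # Toy stand-in for robot + camera: features are an unknown linear map of joints.
    true_J = np.array([[2.0, 0.3], [-0.4, 1.5]])
    state = {"q": np.zeros(2)}
    read_features = lambda: true_J @ state["q"]
    def move_joints(q_new): state["q"] = q_new
    servo_uncalibrated(read_features, move_joints, target=np.array([1.0, 0.5]),
                       q0=np.zeros(2), n_feat=2, n_dof=2)
    print("reached features:", read_features())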
Keywords: sensory-motor control; visual robotics; robot user interfaces.
We describe and demonstrate a construct termed a ``virtual tool'' that provides a flexible interface to sensory-motor control. This interface is, from a user standpoint, substantially less complex and more application-oriented than the raw devices. The basic idea is to use extra degrees of freedom present in a flexible system, in conjunction with sophisticated sensing (e.g. vision), to dynamically configure or ``tailor'' a manipulator so that it is matched to a particular situation and operation. This ``virtual tool'' is created by imposing customized, sensory modulated constraints between various degrees of freedom in the system. The remaining degrees of freedom constitute a small set of control parameters that are fitted to a particular operation. We argue that, within the confines of fairly broad application domains, a small set of ``tool classes'' can be defined that will serve as a general purpose sensory-motor toolbox for a wide variety of applications. We further argue that such class definitions can be made portable not only across tasks, but across platforms as well. The implementation of a number of basic tool classes, on various platforms, using vision and other sensory modalities, is described, and their use in performing multi-stage sensory-modulated manipulation tasks is illustrated.
Keywords: motion analysis; motion recognition; nonrigid motion; human movement.
The recognition of nonrigid motion, particularly that arising from human movement (and by extension from the locomotory activity of animals) has typically made use of high-level parametric models representing the various body parts (legs, arms, trunk, head, etc.) and their connections to each other. Such model-based recognition has been successful in some cases; however, the methods are often difficult to apply to real-world scenes, and are severely limited in their generalizability. The first problem arises from the difficulty of acquiring and tracking the requisite model parts, usually specific joints such as knees, elbows or ankles. This generally requires some prior high-level understanding and segmentation of the scene, or initialization by a human operator. The second problem is due to the fact that the human model is not much good for dogs or birds; for each new type of motion, a new model must be hand-crafted. In this paper, we show that the recognition of human or animal locomotion, and, in fact, any repetitive activity, can be done using low-level, non-parametric representations. Such an approach has the advantage that the same underlying representation is used for all examples, and no individual tailoring of models or prior scene understanding is required. We show in particular that repetitive motion is such a strong cue that the moving actor can be segmented, normalized spatially and temporally, and recognized by matching against a spatio-temporal template of motion features. We have implemented a real-time system that can recognize and classify repetitive motion activities in normal gray-scale image sequences. Results on a number of real-world sequences are described.
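The periodicity cue can be illustrated with a toy one-dimensional motion signature and an autocorrelation test, as sketched below; real signatures would be derived from optical-flow statistics rather than a synthetic sine wave.

    import numpy as np

    def dominant_period(signature, min_lag=2, threshold=0.3):
        # Illustrative periodicity test: the first local maximum of the
        # normalized autocorrelation of a zero-mean motion signature.
        s = signature - signature.mean()
        ac = np.correlate(s, s, mode="full")[len(s) - 1:]   # non-negative lags
        ac = ac / ac[0]
        for lag in range(min_lag, len(s) // 2):
            if ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1] and ac[lag] > threshold:
                return lag, ac[lag]
        return None, 0.0

    # Toy motion signature: e.g. total optical-flow magnitude per frame for a
    # walking figure, rising and falling once per stride (period 25 frames).
    t = np.arange(200)
    signature = np.sin(2 * np.pi * t / 25) + 0.3 * np.random.default_rng(3).standard_normal(200)
    period, strength = dominant_period(signature)
    print("estimated period:", period, "frames; autocorrelation peak:", round(strength, 2))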
Keywords: robotics; learning; intelligent control; motor skills.
In robot skill learning the robot must obtain data for training by executing expensive practice trials and recording their results. The thesis is that the high cost of acquiring training data is the limiting factor in the performance of skill learners. Since the data is obtained from practice trials, it is important that the system make intelligent choices about what actions to attempt while practicing. In this dissertation we present several algorithms for intelligent experimentation in skill learning. In open-loop skills the execution goal is presented and the controller must then choose all the control signals for the duration of the task. Learning is a high-dimensional search problem. The system must associate a sequence of actions with each commandable goal. We propose an algorithm that selects practice actions most likely to improve performance by making use of information gained on previous trials. On the problem of learning to throw a ball using a robot with a flexible link, the algorithm takes only 100 trials to find a ``whipping'' motion for long throws.
Most closed loop learners improve their performance by gradient descent on a cost function. The main drawback of this method is convergence to non-optimal local minima. We introduce the concept of cooperation as a means of escaping these local minima. We assume the existence of several coaches that each improve some aspect of the controller's performance. Switching training between coaches can help the controller avoid locally minimal solutions. On the task of curve tracing with an inverted pendulum the cooperative algorithm learns to track faster than with a traditional method.
In an integrated system with scarce sensor resources it is preferable to perform tasks without sensing. We observe that closed loop learning can function as an efficient search technique for open-loop control. Our system starts with closed loop learning. As it improves its ability to control the plant, it replaces sensor information with estimates. The result is a controller that tracks long segments of a reference curve open loop.
Keywords: 3-d object recognition; visual object recognition; memory-based learning; visual learning; principal views.
We describe a method of 3-D object recognition based on two-stage use of a general-purpose associative memory and a principal views representation. The basic idea is to make use of semi-invariant objects called keys. A key is any robustly extractable feature that has sufficient information content to specify a 2-D configuration of an associated object (location, scale, orientation) plus sufficient additional parameters to provide efficient indexing and meaningful verification. The recognition system utilizes an associative memory organized so that access via a key feature evokes associated hypotheses for the identity and configuration of all objects that could have produced it. These hypotheses are fed into a second-stage associative memory, which maintains a probabilistic estimate of the likelihood of each hypothesis based on statistics about the occurrence of the keys in the primary database. Because it is based on a merged percept of local features rather than global properties, the method is robust to occlusion and background clutter, and does not require prior segmentation. Entry of objects into the memory is an active, automatic procedure. We have implemented a version of the system that allows arbitrary definitions for key features. Experiments using keys based on perceptual groups of line segments are reported. Good results were obtained on a database derived from approximately 150 images representing different views of 7 polyhedral objects.
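A toy rendering of the two-stage idea is sketched below: a key feature indexes stored (object, pose) hypotheses, and a second stage accumulates evidence weighted by how diagnostic each key is in the database; the keys, objects, and weighting are invented for illustration.

    import numpy as np
    from collections import defaultdict

    # Stage 1: associative memory mapping a quantized key feature to the
    # (object, pose) hypotheses that could have produced it. Toy entries;
    # real keys would be perceptual groups of line segments.
    key_memory = {
        "key_A": [("mug", 0), ("mug", 1)],
        "key_B": [("mug", 0), ("stapler", 2)],
        "key_C": [("stapler", 2)],
    }

    # Stage 2: accumulate evidence for each hypothesis; rarer keys
    # (fewer entries in the database) are weighted as more diagnostic.
    def recognize(observed_keys):
        scores = defaultdict(float)
        total = sum(len(v) for v in key_memory.values())
        for key in observed_keys:
            hypotheses = key_memory.get(key, [])
            if not hypotheses:
                continue
            weight = np.log(total / len(hypotheses))
            for hyp in hypotheses:
                scores[hyp] += weight
        return max(scores.items(), key=lambda kv: kv[1]) if scores else None

    print(recognize(["key_A", "key_B"]))   # both observed keys vote for ('mug', 0)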
Keywords: redundant manipulators; dextrous manipulation; telemanipulation; grasping.
Manipulators with large numbers of degrees of freedom, from the human hand to the trunk of an elephant, are common in the biological world. These freedoms allow highly flexible and robust performance of complex tasks. However, progress in developing and controlling artificial high-degree-of-freedom manipulators has been slow. The main problem is that traditional robotics has focused on the solution of systems of kinematic equations where there is a unique solution. Such approaches tend not to generalize well to situations with a high-dimensional solution space, and controlling redundant systems has acquired a reputation as a hard problem. However, this need not be the case. In this paper, we describe a behavioral method for using extra degrees of freedom to simplify rather than complicate manipulation problems, while at the same time obtaining more flexibility than would be available with a simpler system. The method is developed in the context of a high DOF robot hand, but it has the potential to generalize to other sorts of manipulators. The basic idea stems from the observation that, for a particular task, using a custom-designed fitting can greatly simplify the control problem. Using a wrench sized for a particular nut is an extreme example. We use the extra degrees of freedom to dynamically configure or ``tailor'' the manipulator to match the particular object and task at hand. This creates a virtual tool. The tailoring is accomplished by imposing low-level, task-specific constraints on the degrees of freedom. These constraints are selected dynamically from a large set of potential constraints in response to the demands of the current task. The process of smoothly transitioning from one virtual tool to another in the course of task execution is referred to as morphing. We apply the technique to the control of a 16-DOF Utah/MIT hand, and perform fine manipulations on a range of objects using virtual tools that are dynamically instantiated on the basis of sensory information.
Keywords: active vision; visual routines; iconic representations; object recognition; principal components of natural images.
Active vision systems have the capability of continuously interacting with the environment. The rapidly changing environment of such systems means that it is attractive to replace static representations with visual routines that compute information on demand. Such routines place a premium on image data structures that are easily computed and used. The purpose of this paper is to propose a general active vision architecture based on efficiently computable iconic representations. This architecture employs two primary visual routines, one for identifying the visual image near the fovea (object identification), and another for locating a stored prototype on the retina (object localization). This design allows complex visual behaviors to be obtained by composing these two routines with different parameters.
The iconic representations are comprised of high-dimensional feature vectors obtained from the responses of an ensemble of Gaussian derivative spatial filters at a number of orientations and scales. These representations are stored in two separate memories. One memory is indexed by image coordinates while the other is indexed by object coordinates. Object localization matches a localized set of model features with image features at all possible retinal locations. Object identification matches a foveal set of image features with all possible model features. We present experimental results for a near real-time implementation of these routines on a pipeline image processor and suggest relatively simple strategies for tackling the problems of occlusions and scale variations. We also discuss two additional visual routines, one for top-down foveal targeting using log-polar sensors and another for looming detection, both of which are facilitated by the proposed architecture.
Keywords: grasping; genetic algorithm; visual servoing.
In this paper we present a system for vision-based planning and execution of fingertip grasps using a four-fingered dextrous hand. Our system does not rely on prior models of the objects to be grasped; it obtains all the information it needs from vision and from tactile sensors located at the fingertips of the hand. The grasp planner is based on a genetic algorithm modified to allow the use of real numbers as the basic representation unit. The grasp executer is based on differential visual feedback, which allows the system to specify goals and monitor progress in image space without needing absolute calibration between the camera and the hand. We present experimental results showing the application of the system to grasping unknown objects with the Utah/MIT hand.
Keywords: state estimation; filtering; Kalman filter; ARMA filter; image restoration; Wiener filter; least squares estimation.
This tutorial is dedicated to our long-suffering 442 students, and to the excellent authors from whom I shamelessly cribbed this work. It is a pure cut-and-paste job from my favorite sources on this material. This is not my own work---think of me as an editor working without his authors' permissions. Readers should know that original authors are usually easier to understand than rehashed versions. If this presentation helps you, good. If not, it at least helped me sort a few things out. I assume knowledge of all necessary linear systems theory, differential equations, statistics, control theory, etc. We start with the ideas of filtering, smoothing, prediction, and state estimation. Wiener filtering and its associated intellectual framework follows, with a brief foray into ARMA filtering. The idea of recursive estimation is introduced to give some motivation for the slog ahead, and then we start with basic concepts in maximum likelihood, maximum a posteriori, and least-squares estimation. The strategy is to work toward the Kalman filtering equations by showing how they are simply related to general least-squares estimation. After Kalman filtering, some simpler versions of recursive filters are presented. There are appendices on the orthogonality principle, the matrix inversion lemma, singular value decomposition, partial C and LISP code, and a worked example.
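As a concrete instance of the recursive-estimation idea the tutorial works toward, the running least-squares estimate of a constant can be updated one measurement at a time; this is a standard textbook example, not material drawn from the tutorial itself.

    import numpy as np

    # Recursive least-squares estimate of a constant x from noisy measurements
    # z_n = x + v_n. The batch average can be rewritten as the recursion
    #     x_hat_n = x_hat_{n-1} + (1/n) * (z_n - x_hat_{n-1}),
    # i.e. "new estimate = old estimate + gain * innovation", the template that
    # the Kalman filter generalizes to dynamic states and vector measurements.
    rng = np.random.default_rng(4)
    true_x = 2.5
    zs = true_x + 0.3 * rng.standard_normal(500)
    x_hat = 0.0
    for n, z in enumerate(zs, start=1):
        x_hat += (z - x_hat) / n
    print("recursive estimate:", x_hat, "  batch mean:", zs.mean())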
Keywords: real-world applications; SPARTAs; Ephor; runtime environments.
Designing real-world applications can involve coordinating many pieces of hardware and integrating multiple software components. Increased processing power has allowed complex real-world applications to be designed, and there has been increasing interest in the issues involved in designing both the applications and their support. In this paper we describe the issues involved in designing the application. The shepherding application we have chosen is representative of many real-world applications. This report focuses on technical details. We describe the underlying hardware, including the camera, vision processing boards, processors, and PUMA robot arm. We then discuss the software components we designed to integrate the hardware components in real time. At each stage we describe the trade-offs between the different possibilities and why the ones chosen were best suited for our environment. We also present results supporting our selection. At appropriate points we indicate underlying support that would have eased and improved our implementation.
Keywords: temporal texture; activity recognition; motion-based recognition; optical flow interpretation; statistical properties of motion; image sequence processing.
The goal of this thesis is to demonstrate the utility of low-level motion features for the purpose of recognition. Although motion plays an important role in biological recognition tasks, motion recognition, in general, has received little attention in the literature compared to the volume of work on static object recognition. It has been shown that in some cases, motion information alone is sufficient for the human visual system to achieve reliable recognition. Previous attempts at duplicating such capability in machine vision have been based on abstract higher-level models of objects, or have required building intermediate representations such as the trajectories of certain feature points of the object. In this work we demonstrate that motion recognition can be accomplished using lower-level motion features, without the use of abstract object models or trajectory representations. First, we show that certain statistical spatial and temporal features derived from the optic flow field have invariant properties, and can be used to classify regional motion patterns such as ripples on water, fluttering of leaves, and chaotic fluid flow. We then present a novel low-level computational approach for detecting and recognizing temporally repetitive movements, such as those characteristic of walking people or flying birds, on the basis of the periodic nature of their motion signatures. We demonstrate the techniques on a number of real-world image sequences containing complex non-rigid motion patterns. We also show that the proposed techniques are reliable and efficient by implementing a real-time activity recognition system.
Keywords: selective perception; object search; multi-resolution; gaze control strategy; multiple viewpoints; view sequences.
This dissertation studies the problem of searching for a target object with a visual sensor. In particular, it studies the task of selecting a sequence of viewpoints, viewing directions, and fields of view that efficiently examines the area being searched. This is made difficult by two problems, namely the need for high image resolution and the presence of obstacles that occlude portions of the search area from certain viewpoints. Searches for objects that require high image resolution to be recognizable can potentially require the examination of a large number of images; high resolution requires a narrow field of view, and hence more images are necessary to span a given visual angle. This dissertation considers a method for increasing search efficiency by searching only those subregions that are especially likely to contain the object. Searches that use this method, called indirect searches, repeatedly find a cheaply-locatable "intermediate" object that commonly participates in a spatial relationship with the target object, and then look for the target in the restricted region specified by this relationship. A decision-theoretic model of search efficiency is developed. The model identifies desiderata for useful intermediate objects and predicts that, in typical indoor situations, indirect search provides up to an eight-fold increase in efficiency. The model is also suitable for use in an on-line system for selecting intermediate objects.
The second problem facing a searcher is that portions of the area being searched are often hidden from view. Multiple viewpoints are therefore often necessary. This dissertation examines the selection of such viewpoints. Traditional viewpoint selection methods involve detailed maps of the scene portions viewed so far. Simpler model-free methods are presented that, though less selective about their viewpoints, find objects without significantly more effort than map-based methods. They suggest that the main requirement for selecting efficient viewpoint sequences is that the searcher possesses a mechanism for ensuring that it systematically traverses the viewpoint space. Such mechanisms can be much simpler than maps. One drawback of model-free methods is that when the object is not present, they can waste more effort before aborting. Suggestions for remedying this are presented.
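The flavor of the decision-theoretic comparison behind indirect search can be conveyed with a toy expected-cost calculation; every number below is an invented illustration, not a figure from the dissertation.

    # Toy expected-cost comparison of direct vs. indirect search.
    # All numbers are illustrative assumptions.
    n_views_direct = 80          # high-resolution views needed to scan the whole room
    cost_per_view = 1.0

    cost_intermediate = 6.0      # cheap wide-angle search for, e.g., a desk
    p_relation = 0.9             # P(target is on the desk | target is in the room)
    n_views_restricted = 8       # high-resolution views needed to scan the desk region

    expected_direct = n_views_direct * cost_per_view
    # If the spatial relation fails, fall back to the direct scan.
    expected_indirect = (cost_intermediate
                         + p_relation * n_views_restricted * cost_per_view
                         + (1 - p_relation) * n_views_direct * cost_per_view)

    print(f"direct: {expected_direct:.1f}   indirect: {expected_indirect:.1f}")
    # With these invented numbers the indirect strategy is roughly a 4x saving,
    # in the spirit of the up-to-eightfold gains the model predicts.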
Keywords: reinforcement learning; hidden state; memory-based learning.
When a reinforcement learning agent's next course of action depends on information that is hidden from the sensors because of problems such as occlusion, restricted range, bounded field of view and limited attention, we say the agent suffers from the Hidden State Problem. State identification techniques use history information to uncover hidden state. Previous approaches to encoding history include: finite state machines [Chrisman 1992; McCallum 1992], recurrent neural networks [Lin and Mitchell 1992], and genetic programming with indexed memory [Teller 1994]. A chief disadvantage of all these techniques is their long training time. This report presents Instance-Based State Identification, a new approach to reinforcement learning with state identification that learns with far fewer training steps. Noting that learning with history and learning in continuous spaces both share the property that they begin without knowing the granularity of the state space, the approach applies instance-based (or ``memory-based'') learning to history sequences---instead of recording instances in a continuous geometrical space, we record instances in action-perception-reward sequence space. The first implementation of this approach, called Nearest Sequence Memory, learns with an order of magnitude fewer steps than several previous approaches.
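The instance-based idea can be sketched as follows: the current history is matched against stored action-perception-reward sequences by the length of the common suffix, and the nearest stored instances vote on the next action; the data structures and scoring below are simplified stand-ins for the Nearest Sequence Memory algorithm.

    from collections import Counter

    def suffix_match_length(history, instance_prefix):
        # Length of the common suffix between the current history and the
        # experience preceding a stored instance (the neighborhood metric).
        n = 0
        while (n < len(history) and n < len(instance_prefix)
               and history[-1 - n] == instance_prefix[-1 - n]):
            n += 1
        return n

    def choose_action(history, episodes, k=3):
        # Vote among the k stored instances whose preceding experience best
        # matches the current history; prefer instances that led to reward.
        scored = []
        for ep in episodes:
            for t in range(1, len(ep)):
                prefix = ep[:t]                  # (action, obs, reward) triples so far
                match = suffix_match_length(history, prefix)
                action, _, reward = ep[t]
                scored.append((match + reward, action))
        top = sorted(scored, reverse=True)[:k]
        return Counter(a for _, a in top).most_common(1)[0][0]

    # Toy episodes of (action, observation, reward) triples.
    episodes = [
        [("fwd", "wall", 0), ("left", "open", 0), ("fwd", "goal", 1)],
        [("fwd", "wall", 0), ("right", "wall", 0), ("left", "open", 0), ("fwd", "goal", 1)],
    ]
    history = [("fwd", "wall", 0), ("left", "open", 0)]
    print("chosen action:", choose_action(history, episodes))   # "fwd"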
Keywords: genetic algorithms; learning; adaptive representation.
Machine learning aims at the acquisition of knowledge, based either on experience from interaction with the external environment or on analysis of internal problem-solving traces. Both approaches can be implemented in the Genetic Programming (GP) paradigm. Hillis [1990] shows in an ingenious way how the first approach can work. There have not been any significant tests demonstrating that GP can take advantage of its own search traces. This paper presents an approach to the automatic discovery of functions in GP based on the ideas of discovering useful building blocks by analyzing the evolution trace, generalizing blocks to define new functions, and finally adapting the problem representation on the fly. Adaptation of the representation determines a hierarchical organization of the extended function set, which enables a restructuring of the search space so that solutions can be found more easily. Complexity measures of solution trees are defined for an adaptive representation framework, and empirical results are presented.
Keywords: object recognition; steerable filters; active vision.
Locating objects in images is difficult owing to the view variance of geometric features, but it can be accomplished by developing view-insensitive descriptions of the intensities local to image points. In this work, view-insensitive descriptions are achieved by describing points in terms of the responses of steerable filters at multiple scales. Owing to the use of multiple scales, the vector for each point is, for all practical purposes, unique, and thus can be easily matched to other instances of the point in other images. We show that this method can be extended to handle the case where the area near a point of interest is partially occluded. The method uses a description of the occluder in the form of a template that can be obtained easily via active vision systems using a method such as disparity filtering.
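A hedged Python sketch of a multi-scale descriptor built from first-order Gaussian-derivative (steerable-basis) responses follows; the particular scales, the use of SciPy, and the nearest-neighbor matching rule are illustrative choices, not those of the report.

    # Hedged sketch: multi-scale point descriptor from x/y Gaussian-derivative
    # responses (a steerable basis); scales and matching rule are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def point_descriptor(image, y, x, scales=(1.0, 2.0, 4.0)):
        """Stack x- and y-derivative responses at several scales into one
        vector; responses at any orientation can be steered from this basis."""
        feats = []
        for s in scales:
            dx = gaussian_filter(image, s, order=(0, 1))[y, x]
            dy = gaussian_filter(image, s, order=(1, 0))[y, x]
            feats.extend([dx, dy])
        return np.array(feats)

    def match(desc, candidates):
        """Nearest-neighbor match of a descriptor against candidates."""
        dists = [np.linalg.norm(desc - c) for c in candidates]
        return int(np.argmin(dists))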
Keywords: active vision; Bayesian belief network; computer vision; decision theory; influence diagram; qualitative vision; selective perception.
A selective vision system sequentially collects evidence to support a specified hypothesis about a scene, as long as the additional evidence is worth the effort of obtaining it. Efficiency comes from processing the scene only where necessary, to the level of detail necessary, and with only the necessary operators. Knowledge representation and sequential decision-making are central issues for selective vision, which takes advantage of prior knowledge of a domain's abstract and geometrical structure and of models for the expected performance and cost of visual operators.
The TEA-1 selective vision system uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. It is the high-level control for an active vision system, enabling purposive behavior, the use of qualitative vision modules, and a pointable multiresolution sensor. TEA-1 demonstrates that Bayes nets and decision-theoretic techniques provide a general, reusable framework for constructing computer vision systems that are selective perception systems, and that Bayes nets provide a general framework for representing visual tasks. Control, or decision making, is the most important issue in a selective vision system. TEA-1's decisions about what to do next are based on general, hand-crafted ``goodness functions'' constructed around core decision-theoretic elements. Several goodness functions for different decisions are presented and evaluated.
The TEA-1 system solves a version of the T-world problem, an abstraction of a large set of domains and tasks. Some key factors that affect the success of selective perception are analyzed by examining how each factor affects the overall performance of TEA-1 when solving ensembles of randomly produced, simulated T-world domains and tasks. TEA-1's decision making algorithms are also evaluated in this manner. Experiments in the lab for one specific T-world domain, table settings, are also presented.
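For concreteness, a minimal sketch of benefit-cost action selection in the spirit of TEA-1's goodness functions is given below; the action names, the numbers, and the particular goodness function (benefit minus cost) are assumptions for illustration, not TEA-1's actual control code.

    # Minimal sketch of benefit-cost action selection. The benefit estimates
    # are assumed to come from a Bayes-net evaluation of current evidence;
    # the names and numbers below are hypothetical.

    def choose_action(actions):
        """`actions` maps an action name to (expected_benefit, cost)."""
        best, best_goodness = None, 0.0
        for name, (benefit, cost) in actions.items():
            goodness = benefit - cost          # one simple goodness function
            if goodness > best_goodness:
                best, best_goodness = name, goodness
        return best                            # None means: stop gathering evidence

    # Example: the camera action is worth running, the expensive detector is not.
    print(choose_action({"foveate_table": (0.30, 0.05), "run_detector": (0.10, 0.40)}))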
Keywords: Bayes nets; task-oriented vision; active vision; selective perception; Bayesian belief network; decision theory.
TEA-1 is a selective vision system that uses Bayes nets for representation and benefit-cost analysis for control of visual and non-visual actions. TEA-1 solves T-world problems, a class of problems involving static two-dimensional scenes. For example, TEA-1 has been demonstrated to answer questions about scenes of dinner tables.
This paper presents dTEA-1, an extension to TEA-1 that allows tasks to be performed on dynamic scenes. Currently, dTEA-1 successfully performs a task on a simulated train scene. The objects in the scene include a train on a track and a herd of cows, but the domain may be extended to include static objects and other classifications of moving objects. The task is to keep track of the locations of objects, and the system intelligently allocates its effort to keep uncertainty and cost to a minimum.
Keywords: motor skills; learning control; open-loop control; generalization.
Table lookup with interpolation is used for many learning and adaptation tasks. Redundant mappings capture the concept of ``motor skill,'' which is important in real, behaving systems. Few, if any, robot skill implementations have dealt with redundant mappings, in which the space to be searched to create the table has much higher dimensionality than the table itself. A practical method for inverting redundant mappings is important in physical systems with limited time for trials. We present the ``Guided Table Fill In'' algorithm, which uses data already stored in the table to guide the search through the space of potential table entries. The algorithm is illustrated and tested on a robot skill learning task, both in simulation and on a robot with a flexible link. Our experiments show that the ability to search high-dimensional action spaces efficiently allows skill learners to find new behaviors that are qualitatively different from those they were shown or that the system designer may have expected. Thus this technique can allow researchers to explore higher-dimensional action spaces for their systems rather than constraining the search space at the risk of excluding the best actions.
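The sketch below illustrates the general idea of using stored table entries to guide search for a new entry; the neighborhood rule, the Gaussian perturbation, and all names are assumptions of the sketch, not the Guided Table Fill In algorithm itself.

    # Hedged sketch of guided fill-in: when the cell for a desired outcome is
    # empty, seed the search with the action stored in the nearest filled cell
    # rather than sampling the much larger action space blindly.

    import random

    def guided_fill(table, target, try_action, n_trials=10, noise=0.1):
        """`table` maps a scalar outcome -> action vector; `try_action`
        executes an action and returns the outcome it actually produces
        (a stand-in for a trial on the robot)."""
        if table:
            nearest = min(table, key=lambda o: abs(o - target))
            base = table[nearest]              # guide search with a stored entry
        else:
            base = [0.0, 0.0, 0.0]             # arbitrary starting action
        best_a, best_err = base, float("inf")
        for _ in range(n_trials):
            candidate = [v + random.gauss(0.0, noise) for v in base]
            err = abs(try_action(candidate) - target)
            if err < best_err:
                best_a, best_err = candidate, err
        table[target] = best_a
        return best_a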
Keywords: reinforcement learning; active perception; selective perception; hidden state; incomplete perception; short-term memory.
An agent with selective perception focuses its sensors on those parts of the environment that are relevant to the task at hand. Selective perception is an efficient method of gathering information from the world, but it presents problems for a learning agent when different actions are required in situations for which the selective perception system cannot produce distinguishing outputs. If this happens the agent is said to have incomplete perception, and the agent may be able to use internal state determined by past perceptions and actions in order to choose the correct action.
I propose research on learning algorithms that use short-term memory to disambiguate the incomplete perception that arises with selective perception. I present the Utile Distinction Memory (UDM) algorithm, which addresses the incomplete perception problem by using a partially observable Markov decision process to represent the agent's internal state space. A significant feature of the algorithm is that it builds an internal state space proportionate to the task at hand, not as large as would be required to represent all of the perceivable world. A second algorithm, part of work in progress, will keep the advantages of UDM while improving learning speed and the ability to recognize the significance of memories that span multiple time steps.
Learning to use memory is difficult and will require a strong bias to learn efficiently. I will investigate ``learning by watching'' as a method of providing bias. Two applications I propose to study are: driving a simulated car using vision from the driver's point of view; and setting a table with human cooperation or interference. Using the results of psychophysical experiments, I will compare my algorithm's perceptual actions with the perceptual actions made by human subjects.
Keywords: vision; robotics; visual orientation; head-eye coordination.
This paper is about orienting, that is, establishing and maintaining a spatial relation between a motorized pair of cameras (the eye-head system) and a static or moving object tracked over time. Motivated by physiological evidence, the paper proposes a simple set of vision-based strategies for performing head, eye, and body movements in a complex environment. Fixation is shown to be an essential feature in visual servoing, and it is used to decouple control of the head's rotational degrees of freedom, making a metric-less approach to the orientation problem possible. A running implementation of these strategies, using a binocular camera system mounted on a PUMA 700, demonstrates the effectiveness of the approach.
Keywords: robot skill learning; trajectories; tunable.
We present a computational, constructive theory of tunable, open-loop trajectory skills. A skill is a controller whose outputs achieve any task in a space characterized by n parameters, n > 1. Throwing a ball at a target is a 3-dimensional task if the target may be anywhere within a 3-dimensional volume. Repetitious pick-and-place tasks are zero-dimensional, and thus not skills. Skills are performed open loop for speed reasons: we assume the entire command sequence is generated before any feedback can become available. We do not assume prior knowledge of plant or task models, so skills must be at least partly learned. A skill output is a vector of values---in our work so far it is generated as the sum of a base vector and a weighted change vector whose weight accomplishes the tuning. Learning consists of a search for the best set of base and change vectors. An interpretation process maps skill outputs into sequences of commands for the plant by using basis functions (given a priori in this paper). The basis functions may be arbitrarily complex. We claim that appropriate basis functions can speed up the learning process and overcome the limitations of the linear trajectory tuning algorithm. This report describes a skill learning algorithm and experiments done with various basis functions and control methods for a one-dimensional throwing task. It concludes with a discussion of future work in learning basis functions, higher dimensional tasks, and comparisons against common learning and control algorithms.
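A minimal sketch of the tunable-skill representation described above follows; the particular basis functions and the single scalar tuning weight are illustrative assumptions for the sketch.

    # Minimal sketch: skill output = base vector + weight * change vector,
    # interpreted through fixed basis functions into an open-loop command
    # trajectory. Basis choice and the weight mapping are assumptions.

    import numpy as np

    def skill_output(base, change, weight):
        """Tune the open-loop output by scaling the change vector."""
        return np.asarray(base) + weight * np.asarray(change)

    def to_commands(output, basis, t):
        """Interpret the output vector as coefficients on basis functions of
        time, producing a command trajectory evaluated at times `t`."""
        return sum(c * b(t) for c, b in zip(output, basis))

    # Example: two simple basis functions and a one-parameter tuning weight.
    t = np.linspace(0.0, 1.0, 50)
    basis = [lambda t: np.sin(np.pi * t), lambda t: t * (1.0 - t)]
    commands = to_commands(skill_output([1.0, 0.2], [0.5, -0.1], weight=0.8), basis, t)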
Keywords: object recognition; visual search; object search; knowledge-based vision; active vision.
This paper studies the task of using a mobile camera platform to search a region of space for a target object. Our goal is to maximize the efficiency of such searches. The problem is analyzed using a simple mathematical description of the factors that affect search efficiency. This analysis suggests that one way to improve efficiency is to take advantage of the spatial relationships in which the target object commonly participates. Searches that do so, which we call indirect searches, are modeled as two-stage processes that first find an intermediate object that commonly participates in a spatial relationship with the target object, and then look for the target in the restricted region specified by this relationship. A mathematical model of search efficiency is then used to analyze the efficiency of indirect search over a wide range of situations that vary the spatial structure of the domain as well as recognition performance. The model predicts that, for searches that involve rotating a camera about a fixed location, indirect searches improve efficiency by factors of 2 to 8. An implemented robot search system substantiates these predictions. Finally, we highlight some areas in need of further research if these efficiencies are to be achieved.
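As a back-of-the-envelope illustration of the efficiency argument (not the paper's exact model), the following sketch compares the cost of a direct high-resolution scan with a two-stage indirect search; all numbers are hypothetical.

    # Illustrative cost comparison: direct scan of the whole region at high
    # resolution versus a coarse search for an intermediate object followed by
    # a high-resolution scan of only the region it indicates.

    def direct_cost(n_highres_images):
        return n_highres_images

    def indirect_cost(n_coarse_images, coarse_cost, region_fraction, n_highres_images):
        """Coarse images are cheaper (coarse_cost < 1); the spatial relationship
        restricts attention to `region_fraction` of the full area."""
        return n_coarse_images * coarse_cost + region_fraction * n_highres_images

    # Hypothetical numbers: 400 narrow-field images to cover a room directly,
    # versus 40 cheap wide-field images to find a table, then a tenth of the room.
    print(direct_cost(400))                       # 400
    print(indirect_cost(40, 0.25, 0.10, 400))     # 50 -> an eight-fold saving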
Keywords: visual following; tracking; pursuit; vergence; active vision; animate vision; precategorical vision; sensory-motor systems; gaze stabilization; egomotion compensation; visual fixation; binocular vision; mobile robots.
Using a binocular, maneuverable visual system, a robot that holds its gaze on a visual target can enjoy improved visual perception and performance in interacting with the world. This dissertation examines the problem of holding gaze on a moving object from a moving platform, without requiring the ability to recognize the target. A novel aspect of the approach taken is the use of controlled camera movements to simplify the visual processing necessary to keep the cameras locked on the target. A gaze-holding system on the Rochester robot's binocular head demonstrates this approach. Even while the robot is moving, the cameras are able to track an object that rotates and moves in three dimensions.
The key observation is that visual fixation can help separate an object of interest from distracting surroundings. Camera vergence produces a horopter (a surface of zero stereo disparity) in the scene. Binocular features with no disparity can be extracted with a simple filter, showing the object's location in the image. Similarly, an object that is being tracked will be imaged near the center of the field of view, so spatially localized processing helps concentrate on the target. Rochester's binocular robot exploits these observations. The vergence and smooth tracking systems cooperate to hold the cameras on an object moving in three dimensions. The vergence system changes the vergence angle of the cameras to drive the disparity of the target to zero, relying on the tracking system to keep the target in the central field of view. The tracking system centers the cameras on the zero-disparity signals, relying on the vergence system to hold vergence on the target. Instead of requiring a way to recognize the target, the system relies on active control of camera movements and on segmentation by binocular fixation.
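A hedged sketch of the vergence-tracking cooperation follows; the proportional control laws, the gains, and the measurement function are stand-ins for illustration, not the system's actual controllers.

    # Hedged sketch: a proportional vergence loop drives target disparity to
    # zero while a proportional tracking loop re-centers the zero-disparity
    # region in the image. Gains and `measure` are illustrative stand-ins.

    def control_step(measure, k_vergence=0.5, k_track=0.5):
        """`measure()` is assumed to return (disparity, offset_x, offset_y)
        for the fixated, zero-disparity region, e.g. from a disparity filter."""
        disparity, dx, dy = measure()
        d_vergence = -k_vergence * disparity   # reduce residual disparity
        d_pan = -k_track * dx                  # re-center the target horizontally
        d_tilt = -k_track * dy                 # re-center the target vertically
        return d_vergence, d_pan, d_tilt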
Keywords: Polhemus sensor; Polhemus source; VPL DataGlove.
A Polhemus 3Space Isotrak sensor system is used on the VPL Research DataGlove Model 2 to detect the glove's absolute position and orientation. The sensor system has two components: the Polhemus sensor and the Polhemus source. The sensor is attached to the back of the DataGlove and sends analog signals to the DataGlove Control Unit, where they are converted into the Polhemus data: six parameters that represent the position and orientation of the sensor relative to the source. This document describes how the Polhemus devices work and how the Polhemus data should be interpreted. It also describes the transformation of the Polhemus output into the (X, Y, Z, O, A, T) space of the robot in the University of Rochester Robotics Laboratory. This transformation allows teleoperation of the robot through the use of the Polhemus devices.
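For illustration, the sketch below composes the sensor-in-source pose with a fixed source-in-robot pose using homogeneous transforms; the mounting transform used in the example and the omission of the conversion to the robot's (O, A, T) Euler angles are assumptions of the sketch, not the report's procedure.

    # Hedged sketch: chain homogeneous transforms to express the Polhemus
    # sensor pose in robot coordinates. Converting the resulting rotation into
    # the Puma's (O, A, T) Euler angles depends on the robot's convention and
    # is not shown here.

    import numpy as np

    def pose_to_matrix(position, rotation):
        """Build a 4x4 homogeneous transform from a position and a 3x3 rotation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = position
        return T

    def sensor_in_robot(sensor_in_source, source_in_robot):
        """Chain the two homogeneous transforms."""
        return source_in_robot @ sensor_in_source

    # Example with a source assumed to sit 1 m above the robot origin, unrotated.
    source_in_robot_T = pose_to_matrix([0.0, 0.0, 1.0], np.eye(3))
    sensor_in_source_T = pose_to_matrix([0.2, 0.1, -0.3], np.eye(3))
    print(sensor_in_robot(sensor_in_source_T, source_in_robot_T)[:3, 3])  # XYZ in robot frame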
Keywords: dexterous manipulation; grasping; Utah/MIT hand.
Traditional analytic robotics defines grasping in terms of precise knowledge of the task geometry and the forces acting on the manipulator. This approach is particularly important for non-compliant manipulators with few degrees of freedom, such as a parallel-jaw gripper, that overconstrain the solution space. In contrast, the advent of anthropomorphic, high degree-of-freedom grippers allows us to use closed-loop strategies that depend heavily on the task context but do not require precise positioning knowledge. To demonstrate, a robotic hand flips a plastic egg, using the finger-joint tendon tensions as the sole control signal. The manipulator is a compliant, sixteen degree-of-freedom Utah/MIT hand mounted on a Puma 760 arm. The completion of each subtask, such as picking up the spatula, finding the pan, and sliding the spatula under the egg, is detected by sensing when the tensions of the hand tendons pass a threshold. Beyond this use of tendon tensions and the approximate starting position of the spatula and pan, no model of the task is constructed. The routine is found to be robust to different spatulas and to changes in the location and orientation of the spatula, egg, and table, with some exceptions.
The egg-flipping example relies on interpreting fluctuating tension values within a known temporal sequence of actions. For instance, knowing when the manipulator is trying to touch the pan with the spatula provides the context to interpret changes in tendon tensions. Given the success of this task, we go on to propose a method for analyzing the temporal sensory output for tasks that have not been previously segmented. This method suggests a means for automatically generating robust force-control programs to perform previously teleoperated manipulation tasks.
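The following sketch illustrates the subtask-sequencing idea of waiting for tendon tensions to cross a threshold; the polling loop, the thresholds, and the function names are hypothetical stand-ins rather than the report's control code.

    # Illustrative sketch: each subtask runs open loop and is declared complete
    # when the summed finger-tendon tensions cross a threshold. The tension
    # source and thresholds below are hypothetical.

    def run_subtask(command, read_tensions, threshold, max_steps=1000):
        """Issue `command`, then poll tendon tensions until they exceed the
        subtask's completion threshold (e.g. the spatula contacting the pan)."""
        command()
        for _ in range(max_steps):
            if sum(read_tensions()) > threshold:
                return True                    # contact event: subtask done
        return False                           # timed out; abort or retry

    def flip_egg(subtasks, read_tensions):
        """`subtasks` is an ordered list of (command, threshold) pairs."""
        return all(run_subtask(cmd, read_tensions, th) for cmd, th in subtasks)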
Last Change: 27 Sep 2010 / marty@cs.rochester.edu