MACHINE VISION UNIT
Other Past Research
Our `non-range data analysis' research has covered a wide range of vision-related topics, from developing range scanner hardware to interpreting underwater sonar images. More conventional intensity images, in both monocular and stereo form, have also been analysed.
Facial expressions play an extremely important role in human interaction: they provide tacit clues to emotional state and regulate communication. Successful computer modelling of human expressions remains an open research topic, despite over twenty years of dedicated research. This project attempted to identify the psychological, biomechanical and engineering constraints on the development of a facial animation system incorporating emotional cues. An application titled Vital was developed, allowing interactive visualisation and deformation of three-dimensional facial models captured using passive-stereo cameras.
Most attempts to fit an ellipse to a set of points actually fit a general conic and hope that the result is an ellipse. Unfortunately, fitting a general conic is often unstable and can yield other conic types, such as hyperbolae. We have developed a fitting method that is guaranteed to return an ellipse.
Early work involved automatically inferring volumetric descriptions of articulated objects from a sequence of parallel planar cross-sections. The volumetric primitives were `sticks' and `blobs', which provided a rough approximation to the underlying structure of each part.
Later, more sophisticated work was possible, including a PhD project on automatically reconstructing the original form of mouse cross-sections from microtomed slices. The problem is that the deformation caused by the cutting process varies with the local characteristics of the tissue (e.g. bone deforms less than brain). Each slice had to be corrected separately before any three-dimensional reconstruction could be performed. The aim of this project was to develop a fully automatic method of reconstruction in order to provide a 3D atlas of mouse development as part of a gene expression database.
Fast recognition of complicated objects requires consideration of questions such as: how to represent the objects, given that surfaces are the observed features; how to isolate the objects in a cluttered image; how to invoke the correct model to explain the data; and how to fully recognise and locate an object that may be partially obscured. One key problem behind each of these topics is the representation of scale. Because objects have many features of varying importance, and because the description of a feature changes with the scale of analysis, one has to consider how to extract descriptions, represent objects and match features at different (and perhaps mixed) scales.
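One way to make the scale issue concrete is a Gaussian scale-space: the same image is described at several levels of smoothing, and even a simple edge descriptor gives a different answer at each level. This is a generic illustration (assuming NumPy and SciPy), not the group's actual representation scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, sigmas):
    """Gaussian scale-space: one progressively smoothed copy per scale."""
    return [gaussian_filter(image, sigma=s) for s in sigmas]

def edge_strength(image):
    """A crude edge descriptor: the maximum gradient magnitude."""
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy).max()
```

Running `edge_strength` over the levels of a scale-space shows the core difficulty: the same physical edge produces a strong, sharp response at fine scales and a weak, diffuse one at coarse scales, so descriptions extracted at different scales cannot be matched naively.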
Locating and characterising slow structural changes in soft tissues is helpful both to medical research and medical diagnosis. Quantifying the changes can help determine the cause, the likely effects, and track the progress of diseases such as Alzheimer's disease or AIDS. We use Magnetic Resonance Imaging (MRI) body scan data which records physical properties of the scanned tissues such as tissue density.
Tissue cross-sections taken from the same patient at different times were compared by matching each cell grouping in the `before' image to a corresponding grouping in the `after' image. Movement of cell groupings showed where changes were taking place in the tissue. Complications included differing patient orientation and differing scanner characteristics between slices, partly due to the long time scales between samples.
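A minimal sketch of the alignment step, assuming the orientation difference between two scans reduces to a translation: phase correlation estimates the integer shift between a `before' and `after' slice. Real scans also need rotation correction and intensity normalisation; this toy version recovers translation only.

```python
import numpy as np

def estimate_shift(before, after):
    """Phase correlation: the integer shift that rolls 'after' onto 'before'.

    A toy stand-in for slice registration: cross-power spectrum of the two
    images, normalised to unit magnitude, peaks at the relative translation.
    """
    F1 = np.fft.fft2(before)
    F2 = np.fft.fft2(after)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak coordinates into a signed shift range.
    if dy > before.shape[0] // 2:
        dy -= before.shape[0]
    if dx > before.shape[1] // 2:
        dx -= before.shape[1]
    return dy, dx
```

Once the slices are in a common frame, the per-grouping matching described above can proceed on corresponding regions rather than raw pixel coordinates.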
A related MRI/CT project was concerned with detecting the surfaces of solid structures and presenting their 3D structure to a trained analyst in a flexible and informative manner. A partial re-implementation on a parallel processor improved the algorithm's performance.
The main goal is to investigate how the early stages of face processing might work, by developing and testing a computational model of face detection. This should be robust enough to cope with different head orientations. The model will be situated in a mobile robot and it must have the capacity to detect a face in a scene, track it and detect changes in head position and orientation.
There are two main issues to be addressed. The first is the development of a face classifier capable of discriminating between face and non-face images at various orientations. The second is the investigation of a model for searching for faces in a scene, and of how the situatedness of the model in the world can provide constraints that limit the search space.
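The two components might interact as in this minimal sketch: a sliding-window search that defers each candidate region to a face/non-face classifier. The classifier here is a deliberately trivial stand-in (any scorer with the same interface would do), and in the situated setting the robot's knowledge of the scene would prune which positions are scanned at all.

```python
import numpy as np

def sliding_window_detect(image, classify, win=24, step=8):
    """Scan a greyscale image with a fixed-size window.

    Returns the (row, col) corners of every window the supplied
    face/non-face classifier accepts. 'classify' is a hypothetical
    callable taking a win-by-win patch and returning True/False.
    """
    hits = []
    H, W = image.shape
    for y in range(0, H - win + 1, step):
        for x in range(0, W - win + 1, step):
            if classify(image[y:y + win, x:x + win]):
                hits.append((y, x))
    return hits
```

Handling different head orientations would mean either one classifier trained across orientations or a bank of orientation-specific classifiers applied at each window; the search loop itself is unchanged either way.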