ASSEMBLY ROBOTICS GROUP
For more information contact: Chris Malcolm
The Assembly Robotics group in Edinburgh has a long and distinguished
history. Some of the highlights are mentioned below:
Currently the focus of our research is on finding more principled support for the familiar intuitive general guidelines of behaviour-based robotics, and on exploring its use in the hybrid version of behaviour-based robotics we use in assembly work (a hybridisation of a classical planner and a behaviour-based plan interpreter), with special emphasis on the sensor fusion problem.

Freddy, the Famous Scottish Robot

Freddy (mid-1960s to 1981) was one of the first robots able to assemble wooden models using vision to identify and locate the parts -- given a jumbled heap of toy wooden car and boat pieces it could assemble both in about 16 hours using a parallel gripper and a single camera (1973). The 16 hours was due to the slowness of the robot's movements, an artefact of the limited computational power available for movement control in those days. An Elliott 4130 computer with 64k 24-bit words, later upgraded to 128k, was the main computer. A Honeywell H316, initially with 4k 16-bit words, later upgraded to 8k, controlled the robot motors and cameras. The videos we now have of Freddy's assembly work have been dubbed from 16mm film, as video had not been invented then.

Even with today's knowledge, methodology, software tools, and so on, getting a robot to do this kind of thing would be a fairly complex and ambitious project. In those days, when the researchers had to design and build the robot, design and build the programming system, design and build the vision system, and so on, it was a heroic pioneering feat which had to be demonstrated in practice in order to convince some that it was even possible.

Key Reference

A. P. Ambler, H. G. Barrow, C. M. Brown, R. M. Burstall, and R. J. Popplestone, A Versatile Computer-Controlled Assembly System, Proc. Third Int. Joint Conf. on AI, Stanford, California, pp. 298-307, 1973.

RAPT

Freddy's famous car and boat assembly above was programmed as a list of end-effector positions. The great tedium and lack of generality of programming assembly robots in this way prompted the search for a higher level of assembly description. RAPT permitted robot positions and movements to be specified in terms of relationships (such as parallel, aligned, against) between geometric features (such as point, edge, and surface) of the parts being assembled. By 1980 this was well developed (largely by Popplestone, Ambler, and Bellos) and had been integrated with a solid geometric modeller front end (by Cameron) to facilitate part description and trajectory planning. The geometric nature of vision permitted its neat integration within the RAPT system, using such ideas as a plane of gaze (defined by the lens centre and two points in the image plane marking an object edge) projected out so as to touch the edge of the real object (by Yin). The attempt in the mid-1980s to include reasoning about uncertainty based on tolerances on dimensions and positions, errors in robot movements, and so on, foundered on a combinatorial explosion of computation. This raised the interesting question of whether this was an unavoidably hard problem which simply needed very much more powerful computers, or whether there was another, computationally cheaper, way of tackling it.

Key Reference

Popplestone, R. J., Ambler, A. P., and Bellos, I., An Interpreter for a Language for Describing Assemblies, Artificial Intelligence, Vol. 14, No. 1, pp. 79-107, 1980.
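To give a flavour of the relational idea, the sketch below (illustrative Python with hypothetical names, not RAPT's actual declarative syntax) solves a single "against" relation between two planar faces for a part pose. Note that one such relation underdetermines the pose -- the part can still rotate about the shared normal and slide in the plane -- so RAPT combined several relations to pin a pose down fully; this sketch simply picks one solution.

```python
# Illustrative sketch of a RAPT-style "against" relation between planar
# faces (hypothetical API, not RAPT syntax). A face is given by a
# reference point and an outward unit normal in its frame.
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    a = np.array(a, dtype=float); b = np.array(b, dtype=float)
    a /= np.linalg.norm(a); b /= np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.allclose(v, 0.0):                  # parallel or anti-parallel
        if c > 0.0:
            return np.eye(3)
        u = np.cross(a, [1.0, 0.0, 0.0])     # anti-parallel: rotate pi about
        if np.allclose(u, 0.0):              # any axis perpendicular to a
            u = np.cross(a, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / float(np.dot(v, v)))

def against(moving_point, moving_normal, fixed_point, fixed_normal):
    """One pose (R, t) placing the moving planar face flat against the
    fixed one: outward normals opposed, reference points coincident."""
    R = rotation_aligning(moving_normal, -np.asarray(fixed_normal, float))
    t = np.asarray(fixed_point, float) - R @ np.asarray(moving_point, float)
    return R, t

# Put a block's bottom face (outward normal -z in the part's own frame)
# against a table top through (0.3, 0.2, 0) with outward normal +z.
R, t = against([0, 0, 0], [0, 0, -1], [0.3, 0.2, 0.0], [0, 0, 1])
print(R)   # identity: the block is already the right way up
print(t)   # [0.3 0.2 0. ]: translate onto the table point
```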
The behaviour-based nature of the plan interpreter and robot controller was akin in general philosophy to the behaviour-based ideas being developed by Brooks at MIT in the domain of simple reactive mobile robots at the same time, but here they were designed to act as an assembly plan interpreter. In the early 1980s Brooks (with Lozano-Perez) had been working on the general strategy for developing an ambitious assembly system incorporating planning and uncertainty handling. Coming to realise that this strategy was doomed by its combinatorially explosive computational requirements, he decided that robotics research should switch to small, simple, reactive, but complete mobile robot systems. These would be incrementally developed into robot systems of greater complexity and scope, in an abbreviated recapitulation of the progress of biological evolution from simple to more complex animals. This coupled animal biology and ethology with robotics in what is sometimes called "biologically inspired robotics", a programme of research which saw robots as synthetic analogues of animals, picking up a tradition pioneered by Grey Walter in the 1950s.
The collapse of the ambitious assembly robotics systems with integrated planners was seen as demonstrating clearly that robotics researchers had got things very badly wrong. The idea was that starting with very simple mobile robot "animals" and working upwards like this would enable robotics researchers to learn the fundamentals of robotics, this time correctly. The rejection of what was now (mid-to-late 1980s) seen as the failed "classical" robotics architecture, based on "classical" or knowledge-based AI, led to a general horror of knowledge-based classical AI amongst the new breed of roboticists. Since plans and planners were paradigmatic examples of classical knowledge-based AI, and assembly robots had to have detailed plans of how to put things together, assembly robotics was generally seen as an unprofitable area in which to pursue robotics research.
The exception to this was Edinburgh, which saw behaviour-based robotics of this new kind as a way of revising the division of labour between planner and plan-performing agent in such a way that their marriage became a profitable and computationally economical collaboration, instead of the combinatorial explosion into intractability of the previous "classical" approach. The idea was to marry a classical idealised planner with a behaviour-based plan interpreter.
The chosen experimental domain for testing this hybrid architecture was the SOMASS puzzle, a kind of simple 3D jigsaw puzzle based on seven bent bricks which can be assembled into a cube. This had been invented by the mathematician Piet Hein, allegedly during a boring lecture by Heisenberg. He called it the "SOMA puzzle". It became a popular puzzle and is available at all good puzzle shops. The first version of the SOMASS (SOMa ASSembly) system (1985) used no sensors, dealing with uncertainty by using compliant motions, and was capable of planning and performing the assembly of a SOMA cube in dozens of different ways. This was generalised to handle the assembly of any shape from these parts, and from instances of these parts of any size, and by 1990 behaviour-based uncalibrated vision-guided part acquisition had been developed, which did not need to know camera parameters or position, or robot kinematics. What a system does not need to know it does not need to worry about, and things the system need not know are free to change without affecting the operation of the system. By way of humorous contrast with knowledge-based systems, this was sometimes jokingly referred to as an ignorance-based system.
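As a minimal sketch of this division of labour (illustrative Python with hypothetical names; not the actual SOMASS code), the planner below reasons in an idealised world and emits a purely symbolic plan, while the behaviour-based interpreter executes each step as a closed-loop sensing-and-acting routine that absorbs run-time uncertainty locally:

```python
# Hybrid architecture sketch: classical idealised planner plus
# behaviour-based plan interpreter. All names are hypothetical.
from typing import Callable, Dict, List, Tuple

Step = Tuple[str, str]  # (action, part), e.g. ("acquire", "L-piece")

def plan_assembly(parts: List[str]) -> List[Step]:
    """Idealised classical planner: reasons about a perfect world and
    never models uncertainty; it just orders the symbolic steps."""
    steps: List[Step] = []
    for part in parts:
        steps += [("acquire", part), ("place", part)]
    return steps

# Behaviours: closed-loop routines that run until a locally sensed
# termination condition holds, rather than following a fixed trajectory.
def acquire(part: str) -> bool:
    print(f"visually servo the gripper onto {part}, then grasp")
    return True  # success judged by local sensing, not by a world model

def place(part: str) -> bool:
    print(f"compliant guarded move: lower {part} until contact, then release")
    return True

BEHAVIOURS: Dict[str, Callable[[str], bool]] = {"acquire": acquire,
                                                "place": place}

def interpret(plan: List[Step]) -> bool:
    """Behaviour-based interpreter: dispatch each symbolic step to the
    behaviour that achieves it despite positional uncertainty; stop at
    the first failure."""
    return all(BEHAVIOURS[action](part) for action, part in plan)

interpret(plan_assembly(["L-piece", "T-piece", "Z-piece"]))
```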
Dealing with uncertainty by compliant force-controlled motion proved to work, but programming the robot in terms of force control routines was a difficult and expert task. Could any way be found to simplify the programming of the force control routines?
While people find it very difficult to describe force control tasks in formal terms, they are particularly good at learning the knack of accomplishing a new force control task. Would it be possible to equip a person with appropriate sensory feedback so that they could exploit their own natural skills and intuitions in learning how to control Eddie (the compliant robot) to accomplish a certain task? And would it be possible for the computer system, by observing this process, to learn how to accomplish the task autonomously, unaided by human control?
This is an ongoing research project, which has now shifted (with Graham Deacon) to the Mechatronic Systems and Robotics Research Group of the University of Surrey. The results so far, in the few tasks for which this has been tried, are encouraging.
Key Reference

Deacon, G., Accomplishing Task-Invariant Assembly Strategies by Means of an Inherently Accommodating Robot Arm, PhD Thesis, 1997.
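One plausible way to frame such learning from demonstration (an illustrative sketch only, not the method of the thesis above) is to log pairs of sensed wrench and human-commanded velocity while the person guides the compliant arm, and then to replay the mapping autonomously with a simple nearest-neighbour policy:

```python
# Sketch: learn a force-control skill from human demonstration by
# recording (sensed wrench -> commanded velocity) pairs, then replaying
# the mapping with a 1-nearest-neighbour lookup. Hypothetical framing.
import numpy as np

class DemonstratedSkill:
    def __init__(self):
        self.wrenches = []  # 6-vector force/torque readings during teaching
        self.twists = []    # 6-vector velocity commands the human gave

    def record(self, wrench, twist):
        self.wrenches.append(np.asarray(wrench, float))
        self.twists.append(np.asarray(twist, float))

    def policy(self, wrench):
        """Autonomous phase: command whatever the human commanded in the
        most similar sensed situation."""
        w = np.asarray(wrench, float)
        distances = [np.linalg.norm(w - x) for x in self.wrenches]
        return self.twists[int(np.argmin(distances))]

# Teaching phase: the human presses down, then slides sideways off a snag.
skill = DemonstratedSkill()
skill.record(wrench=[0, 0, -5, 0, 0, 0], twist=[0, 0, -0.01, 0, 0, 0])
skill.record(wrench=[2, 0, -5, 0, 0, 0], twist=[-0.01, 0, 0, 0, 0, 0])

# Autonomous phase: a similar sideways reaction force evokes the same slide.
print(skill.policy([1.8, 0, -4.5, 0, 0, 0]))  # -> [-0.01 0. 0. 0. 0. 0.]
```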
Since robots are built from mechanical, electronic, and computational components, they can take advantage, when possible, of the opportunity to solve this kind of "computational" problem outside the computer. Not all problems can be solved at all these levels, of course; that is why digital computers ousted analogue computers. But when they can, the result is often both a faster response and a lessened requirement for computational speed.
In assembly robotics, where it is often possible to engineer the parts and the environment to suit the task, one can often save computation in this way by taking advantage of local physics, such as by exploiting guiding chamfers on the edges of pegs and holes, or by using gravity's handy tendency to align the bottoms of objects with the tops of tables. In general, a great deal more than many imagine can be done by partly constrained and partly compliant motions of robot and parts, such as pushing and sliding, and we (largely Deacon and Wright) have contributed to the theory of pushing and sliding.
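As a toy illustration of such mechanical "computation" (hypothetical numbers and a deliberately crude model), the sketch below shows how a chamfered hole plus a laterally compliant wrist funnels away a peg's positional error without the controller ever measuring it:

```python
# Toy model: a chamfer plus lateral compliance corrects a peg's position
# mechanically, so the controller needs no sensing or computation for it.
CHAMFER_HALF_WIDTH = 2.0   # mm of lateral error the chamfer can catch
CHAMFER_SLOPE = 1.0        # mm of lateral correction per mm of descent

def insert(lateral_error_mm: float, descent_step_mm: float = 0.1) -> bool:
    """Push the peg straight down; while it rests on the chamfer, the
    downward force makes the compliant wrist slide it towards the hole."""
    if abs(lateral_error_mm) > CHAMFER_HALF_WIDTH:
        return False                    # peg misses the chamfer entirely
    while abs(lateral_error_mm) > 1e-6:
        correction = min(descent_step_mm * CHAMFER_SLOPE,
                         abs(lateral_error_mm))
        lateral_error_mm -= correction if lateral_error_mm > 0 else -correction
    return True                         # peg centred; a straight push finishes

print(insert(1.5))   # True: a 1.5 mm error is absorbed by the 2 mm chamfer
print(insert(3.0))   # False: beyond the chamfer, compliance alone cannot help
```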
Key References

Deacon, G., Wright, M., and Malcolm, C. A., Qualitative Transitions in Object Reorienting Behaviour, Part 1: the Effects of Varying Friction, Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2697-2704, Albuquerque, New Mexico, USA, April 1997. Edinburgh University DAI RP 863.

Deacon, G., Wright, M., and Malcolm, C. A., Qualitative Transitions in Object Reorienting Behaviour, Part 2: the Effects of Varying the Centre of Mass, as above. Edinburgh University DAI RP 864.