
ASSEMBLY ROBOTICS GROUP


Part of the Institute of Perception, Action, and Behaviour, in the School of Informatics, University of Edinburgh


Sister Groups: computer graphics, machine vision, mobile robotics



For more information contact: Chris Malcolm
Last updated: Mon May 1 14:47:17 2000

The Assembly Robotics group in Edinburgh has a long and distinguished history. Some of the highlights are mentioned below:

The current focus of our research is on finding more principled support for the familiar intuitive guidelines of behaviour-based robotics, and on exploring their use in the hybrid form of behaviour-based robotics we employ in assembly work (a hybridisation of a classical planner with a behaviour-based plan interpreter), with special emphasis on the sensor fusion problem.

Freddy, the Famous Scottish Robot

Freddy (mid-1960s - 1981) was one of the first robots able to assemble wooden models using vision to identify and locate the parts: given a jumbled heap of toy wooden car and boat pieces, it could assemble both in about 16 hours using a parallel gripper and a single camera (1973). The 16 hours was due to the slowness of the robot's movements, an artefact of the limited computational power available for movement control in those days. An Elliott 4130 computer with 64k 24-bit words, later upgraded to 128k, was the main computer. A Honeywell H316, initially with 4k 16-bit words, later upgraded to 8k, controlled the robot motors and cameras. The videos we now have of Freddy's assembly work were dubbed from 16mm film, as video recording was not available then. Even with today's knowledge, methodology, software tools, and so on, getting a robot to do this kind of thing would be a fairly complex and ambitious project. In those days, when the researchers had to design and build the robot, design and build the programming system, design and build the vision system, and so on, it was a heroic pioneering feat which had to be demonstrated in practice in order to convince sceptics that it was even possible.

Key Reference

A. P. Ambler, H. G. Barrow, C. M. Brown, R. M. Burstall, and R. J. Popplestone, A Versatile Computer-Controlled Assembly System, Proc. Third Int. Joint Conf. on AI, Stanford, California, pp. 298-307, 1973.

RAPT

Freddy's famous car and boat assembly above was programmed as a list of end-effector positions. The great tedium and lack of generality in programming assembly robots in this way prompted the search for a higher level of assembly description. RAPT permitted robot positions and movements to be specified in terms of relationships (such as parallel, aligned, against) between geometric features (such as points, edges, and surfaces) of the parts being assembled. By 1980 this was well developed (largely by Popplestone, Ambler, and Bellos) and had been integrated with a solid geometric modeller front end (by Cameron) to facilitate part description and trajectory planning. The geometric nature of vision permitted its neat integration within the RAPT system using such ideas as a plane-of-gaze (defined by the lens centre and two points in the image plane marking an object edge) projected out to touch the edge of the real object (by Yin). The attempt in the mid-1980s to include reasoning about uncertainty, based on tolerances on dimensions and positions, errors in robot movements, and so on, foundered on a combinatorial explosion of computation.
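To give the flavour of such a relational specification, here is a minimal sketch in Python. The Face representation and the against helper below are our own invention for illustration; RAPT's actual syntax and constraint solver were quite different:

    # Illustrative sketch only: hypothetical Python stand-ins for RAPT's
    # relational style of assembly specification.
    import numpy as np

    class Face:
        """A planar feature: a point on the plane and an outward unit normal."""
        def __init__(self, point, normal):
            self.point = np.asarray(point, float)
            n = np.asarray(normal, float)
            self.normal = n / np.linalg.norm(n)

    def against(moving_face, fixed_face):
        """The 'against' relation: with the moving part already oriented so
        the two normals are opposed, return the translation along the fixed
        face's normal that brings the faces into coincident contact."""
        n = fixed_face.normal
        assert np.allclose(moving_face.normal, -n), "orient the part first"
        return n * np.dot(fixed_face.point - moving_face.point, n)

    # Place a block, described in its own frame with its bottom face at
    # z = 0, against a table top lying at z = 0.7 in the world frame.
    block_bottom = Face(point=(0.5, 0.5, 0.0), normal=(0, 0, -1))
    table_top = Face(point=(0.0, 0.0, 0.7), normal=(0, 0, 1))
    print("translate block by", against(block_bottom, table_top))  # [0. 0. 0.7]

Roughly speaking, each such against relation between planar faces removes one translational degree of freedom once orientations are fixed; RAPT composed networks of such relations and solved them for complete part positions.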

This combinatorial explosion raised the interesting question of whether reasoning about uncertainty was an unavoidably hard problem which simply needed very much more powerful computers, or whether there was another, computationally cheaper, way of tackling it.

Key Reference

Popplestone, R.J., Ambler, A.P., and Bellos, I., An Interpreter for a Language for Describing Assemblies, Artificial Intelligence Vol. 14, No. 1, pp. 79-107, 1980.

SOMASS

The SOMASS system was intended to overcome the computational problems of dealing with uncertainty at the geometric modelling and reasoning level by dealing with it at a lower level in the system, by means of a behaviour-based assembly plan interpreter; the plan itself was constructed by a classical assembly planner which imagined it was dealing with an ideal world free of uncertainty. This changed the difficult problem of working out all possibilities in advance in the off-line system into the easier problem of handling the small subset of those possibilities actually encountered by the on-line system when running. The behaviour-based plan interpreter, by coping with the expected kinds and amounts of uncertainty, presented an idealised and certain world to the planner, and planners are good at handling perfect idealised worlds. Instead of trying to develop the planner to handle an imperfect realistic world, which had proved computationally intractable when attempted as an extension to RAPT, attention shifted to developing the capabilities of the plan interpreter so that perfect idealised plans could be used to handle imperfect realistic situations.
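The division of labour can be caricatured in a few lines of Python. This is a sketch under invented names, not the SOMASS implementation: the planner emits a symbolic plan valid in an ideal world, and each plan step is executed by a behaviour which absorbs run-time uncertainty:

    # Caricature of the SOMASS division of labour; all names are invented.
    def plan_assembly(parts):
        # 'Classical' idealised planner: no uncertainty, just a symbolic
        # ordering of assembly steps as if the world were perfect.
        plan = []
        for part in parts:
            plan += [("acquire", part), ("place", part)]
        return plan

    # Behaviours: each symbolic step maps to a routine designed to cope
    # with the expected kinds and amounts of uncertainty at run time.
    def acquire(part):
        print(f"acquire {part}: search visually, tolerate pose error, regrasp")

    def place(part):
        print(f"place {part}: compliant motion absorbs small misalignments")

    BEHAVIOURS = {"acquire": acquire, "place": place}

    def interpret(plan):
        # Behaviour-based plan interpreter: executes the ideal plan in the
        # imperfect world, handling only deviations actually encountered.
        for action, part in plan:
            BEHAVIOURS[action](part)

    interpret(plan_assembly(["corner piece", "L piece", "T piece"]))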

The behaviour-based nature of the plan interpreter and robot controller was akin in general philosophy to the behaviour-based ideas being developed at the same time by Brooks at MIT in the domain of simple reactive mobile robots, but here it was designed to act as an assembly plan interpreter. In the early 1980s Brooks (with Lozano-Perez) had been working on the general strategy for developing an ambitious assembly system incorporating planning and uncertainty handling. Coming to realise that this strategy was doomed by its combinatorially explosive computational requirements, he decided that robotics research should switch to small, simple, reactive, but complete mobile robot systems. These would be incrementally developed into robot systems of greater complexity and scope in an abbreviated recapitulation of the progress of biological evolution from simple to more complex animals. This coupled animal biology and ethology with robotics in what is sometimes called "biologically inspired robotics", a programme of research which saw robots as synthetic analogues of animals, picking up a tradition pioneered by Grey Walter in the 1950s.

The collapse of the ambitious assembly robotics systems with integrated planners was seen as demonstrating clearly that robotics researchers had got things very badly wrong. The idea was that starting with very simple mobile robot "animals" and working upwards like this would enable robotics researchers to learn the fundamentals of robotics, this time correctly. The rejection of what was by now (the mid-to-late 1980s) seen as the failed "classical" robotics architecture, based on "classical" knowledge-based AI, led to a general horror of knowledge-based classical AI amongst the new breed of roboticists. Since plans and planners were paradigmatic examples of classical knowledge-based AI, and assembly robots had to have detailed plans of how to put things together, assembly robotics was generally seen as an unprofitable area in which to pursue robotics research.

The exception to this was Edinburgh, where behaviour-based robotics of this new kind was seen as a way of revising the division of labour between planner and plan-performing agent so that their marriage became a profitable and computationally economical collaboration, instead of the combinatorial explosion into intractability of the previous "classical" approach. The idea was to marry a classical idealised planner with a behaviour-based plan interpreter.

The chosen experimental domain to test this hybrid architecture was the SOMA puzzle, a kind of simple 3D jigsaw puzzle based on seven bent bricks which can be assembled into a cube. This had been invented by the mathematician Piet Hein, allegedly during a boring lecture by Heisenberg. He called it the "SOMA puzzle". It became a popular puzzle and is available at all good puzzle shops. The first version of the SOMASS (SOMa ASSembly) system (1985) used no sensors, dealing with uncertainty by using compliant motions, and was capable of planning and performing the assembly of a SOMA cube in dozens of different ways. This was generalised to handle the assembly of any shape from these parts, and from instances of these parts of any size, and by 1990 behaviour-based uncalibrated vision-guided part acquisition had been developed, which did not need to know camera parameters or position, or robot kinematics. What a system does not need to know it does not need to worry about, and things the system need not know are free to change without affecting the operation of the system. By way of humorous contrast with knowledge-based systems, this was sometimes jokingly referred to as an ignorance-based system.
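The puzzle side of the domain is easy to reproduce for the curious: a few dozen lines of backtracking search will pack the seven pieces into the cube. The sketch below is plain Python and purely illustrative; the SOMASS planner proper also had to reason about grasps, part orderings, and achievable motions, not just the packing geometry:

    # Toy backtracking solver for the SOMA cube packing (illustration only).
    from itertools import product

    # The seven pieces as unit-cube coordinates: one tricube, six tetracubes.
    PIECES = {
        "V": [(0,0,0), (1,0,0), (0,1,0)],
        "L": [(0,0,0), (1,0,0), (2,0,0), (0,1,0)],
        "T": [(0,0,0), (1,0,0), (2,0,0), (1,1,0)],
        "Z": [(0,0,0), (1,0,0), (1,1,0), (2,1,0)],
        "A": [(0,0,0), (1,0,0), (1,1,0), (1,1,1)],  # right-handed screw
        "B": [(0,0,0), (1,0,0), (0,1,0), (0,1,1)],  # left-handed screw
        "P": [(0,0,0), (1,0,0), (0,1,0), (0,0,1)],  # branch
    }

    def normalise(cells):
        mx, my, mz = (min(c[i] for c in cells) for i in range(3))
        return tuple(sorted((x - mx, y - my, z - mz) for x, y, z in cells))

    def orientations(cells):
        # Close under 90-degree turns about x and y, which together
        # generate all 24 rotations of the cube.
        rx = lambda c: (c[0], -c[2], c[1])
        ry = lambda c: (c[2], c[1], -c[0])
        seen, stack = set(), [normalise(cells)]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack += [normalise([r(c) for c in cur]) for r in (rx, ry)]
        return seen

    def placements(cells):
        # Every orientation at every translation inside the 3x3x3 box.
        out = []
        for o in orientations(cells):
            span = [max(c[i] for c in o) for i in range(3)]
            for d in product(*(range(3 - s) for s in span)):
                out.append(frozenset((x + d[0], y + d[1], z + d[2])
                                     for x, y, z in o))
        return out

    PLACEMENTS = {name: placements(c) for name, c in PIECES.items()}
    CELLS = [(x, y, z) for z in range(3) for y in range(3) for x in range(3)]

    def solve(filled=frozenset(), left=frozenset(PIECES), sol=()):
        if not left:
            return sol
        # Some remaining piece must cover the first empty cell.
        target = next(c for c in CELLS if c not in filled)
        for name in left:
            for pl in PLACEMENTS[name]:
                if target in pl and not (pl & filled):
                    res = solve(filled | pl, left - {name}, sol + ((name, pl),))
                    if res:
                        return res
        return None

    for name, cells in solve():
        print(name, sorted(cells))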

Key References

Chris Malcolm, A Hybrid Behavioural/Knowledge-Based Approach to Robotic Assembly, Evolutionary Robotics '97, April 17-18, Tokyo, Japan, ed. Takashi Gomi, pp. 221-256, AAI Books, Ontario, Canada, 1997. Edinburgh University DAI RP 875.


Eddie

``What if all these problems of uncertainty which so bedevil assembly robotics were an artefact of trying to do assembly with a position-controlled robot?'' This was the question which Graham Deacon asked in 1994. To answer it he built a direct-drive (and therefore easily back-drivable) robot arm which was purely force-controlled. The hope was that once the parts to be assembled had been brought into initial contact, it would prove possible to devise part-fitting routines based purely on force control strategies, completely ignorant of position, which would accomplish typical assembly problems such as peg-into-hole and brick-into-corner. If this proved possible, then a peg-in-hole routine (for example) which successfully put a small peg in a small hole in one position should equally well, with no change, put a large peg into a large hole anywhere else. In other words, the kind of family generality of assembly task which others had sought with such high-level descriptions of the task as RAPT would in this case fall directly out of the basic bottom level of control of the robot. If it knew nothing about position, and needed to know nothing, it was not going to be worried by positional errors. Another example of the occasional virtue of ignorance.
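To make the idea concrete, here is a toy simulation of such a position-ignorant, force-guided peg-into-hole routine. Everything here, from the contact model to the numbers, is invented for illustration and bears no relation to Eddie's actual controller; the point is that the control routine never reads a position, only a force, so the identical routine succeeds wherever the hole happens to be:

    # Toy force-guided peg-into-hole (invented contact model and numbers).
    DT = 0.01  # control period, seconds

    class SimWorld:
        """A peg above a stiff surface at z = 0 containing a hole 0.02 m
        wide.  The controller below never reads x or z, only the force."""
        def __init__(self, hole_x):
            self.hole_x, self.x, self.z = hole_x, 0.0, 0.05
        def contact_force(self):
            over_hole = abs(self.x - self.hole_x) < 0.01
            if self.z >= 0.0 or over_hole:
                return 0.0
            return -5000.0 * self.z          # stiff surface: F = -k*z
        def move(self, vx, vz):
            self.x += vx * DT
            self.z += vz * DT
            if self.contact_force() > 0.0:   # surface resists penetration
                self.z = max(self.z, -0.002)

    def peg_into_hole(world):
        # 1. Guarded approach: descend until a contact force appears.
        while world.contact_force() < 1.0:
            world.move(0.0, -0.01)
        # 2. Slide while pressing: loss of force signals the hole mouth.
        while world.contact_force() > 0.5:
            world.move(0.01, -0.001)
        # 3. Comply downwards into the hole.
        for _ in range(300):
            world.move(0.0, -0.01)

    # The identical routine, unchanged, works wherever the hole is.
    for hole_x in (0.03, 0.11):
        w = SimWorld(hole_x)
        peg_into_hole(w)
        print(f"hole at x={hole_x}: peg at x={w.x:.3f}, z={w.z:.3f}")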

The hypothesis proved correct, but programming the robot in terms of force control routines was a difficult and expert task. Could any way be found to simplify the programming of the force control routines?

While people find it particularly difficult to describe force control tasks in formal terms, they are particularly good at learning the knack of accomplishing a new force control task. Would it be possible to equip a person with appropriate sensory feedback so they could exploit their own natural skills and intuitions in learning how to control Eddie (the compliant robot) to accomplish a certain task, and would it be possible for the computer system to learn from observing this process how to accomplish the task autonomously, unaided by human control?

This is an ongoing research project, which has now shifted (with Graham Deacon) to the Mechatronic Systems and Robotics Research Group of the University of Surrey. The results so far, in the few tasks for which this has been tried, are encouraging.

Key References

Deacon, G.; Malcolm, C.A., Robot System Designed for Task-Level Assembly, Proc. of the International Workshop on Advanced Robotics and Intelligent Machines, Salford, England, March 1994.

Deacon, G., Accomplishing Task-Invariant Assembly Strategies by Means of an Inherently Accommodating Robot Arm, PhD Thesis, University of Edinburgh, 1997.


Pushing and Sliding

A key element in the implementation of robot control is goal-seeking, which can be implemented in many ways. Watt's famous steam engine speed governor, which used centrifugal force to adjust the steam valve, was a purely mechanical implementation. In the 1960s we would have been more likely to solve this problem by electronically sensing the engine shaft speed, and using a differential amplifier to subtract this from the required speed, amplifying the difference and applying it to (say) a sprung solenoid controlling the steam valve. In the 1980s we would have been more likely to solve the problem by shifting the data into a computer, evaluating a control function of whatever complexity we cared to program, and using the result to control the steam valve. Thus this problem can be solved mechanically, electronically, or computationally.
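The 1980s computational version can be written down in a dozen lines. This is only a sketch: the engine model and gains are invented, and the control function here is just the proportional term which the 1960s differential amplifier computed in analogue electronics:

    # A miniature 'computational' governor: sense the shaft speed, evaluate
    # a control function, drive the steam valve.  Model and gains invented.
    DT = 0.05          # control period, seconds
    SETPOINT = 100.0   # desired shaft speed, rad/s
    KP = 0.1           # proportional gain

    def engine_step(speed, valve, load=20.0):
        # Toy engine: torque from the steam valve minus friction and load,
        # acting on unit rotational inertia.
        torque = 50.0 * valve - 0.3 * speed - load
        return speed + torque * DT

    speed = 0.0
    for _ in range(400):
        error = SETPOINT - speed                 # sense and subtract
        valve = min(1.0, max(0.0, KP * error))   # clip to valve travel
        speed = engine_step(speed, valve)

    print(f"settled at {speed:.1f} rad/s against a set-point of {SETPOINT}")
    # A purely proportional governor settles a little below the set-point
    # (droop); Watt's mechanical governor shows the same offset.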

Since robots are built from components from mechanical, electronic, and computational realms, they can take advantage, when possible, of the opportunity to solve this kind of ``computational'' problem outside the computer. Not all problems can be solved at all these levels of course; that's why digital computers ousted analogue computers. But when they can, the result is often both a faster response, and a lessened requirement for computational speed.

In assembly robots, where it is often possible to engineer the parts and the environment to suit the task, one can often save computation in this way by taking advantage of local physics, such as by exploiting guiding chamfers on the edges of pegs and holes, or using gravity's handy tendency to align the bottoms of objects with the tops of tables. In general, a great deal more than many imagine can be done by partly constrained and partly compliant motions of robot and parts, such as in pushing and sliding, and we (largely Deacon and Wright) have contributed to the theory of pushing and sliding.
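The flavour of such quasi-static reasoning is easy to show with the textbook tip-or-slide criterion for pushing a resting block (a standard result, given here for illustration rather than as our own pushing and sliding theory): pushing horizontally at height h on a block of half-width b and weight W, sliding needs a force of mu*W while tipping about the far bottom edge needs W*b/h, and whichever threshold is lower is what happens.

    # Textbook tip-or-slide criterion for pushing a resting block.
    def push_outcome(mu, half_width, push_height):
        slide_threshold = mu                        # force per unit weight
        tip_threshold = half_width / push_height    # force per unit weight
        return "tips" if tip_threshold < slide_threshold else "slides"

    print(push_outcome(mu=0.8, half_width=0.02, push_height=0.10))  # tips
    print(push_outcome(mu=0.1, half_width=0.02, push_height=0.10))  # slides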

Key References

Deacon, G.; Low, P.L.; Malcolm, C.A., Orienting Objects in a Minimum Number of Robot Sweeping Motions, Edinburgh University DAI PP 686.

Deacon, G.; Wright, M.; Malcolm, C.A., Qualitative Transitions in Object Reorienting Behaviour, Part 1: the Effects of Varying Friction, Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2697-2704, Albuquerque, New Mexico, USA, April 1997. Edinburgh University DAI RP 863.

Deacon, G.; Wright, M.; Malcolm, C.A., Qualitative Transitions in Object Reorienting Behaviour, Part 2: the Effects of Varying the Centre of Mass, as above. Edinburgh University DAI RP 864.
