Monday, December 16, 2019

Robotic Programming

A robot is any machine that replaces human effort, though it may not resemble a human being in appearance or perform its functions in a humanlike manner. Robotics is the branch of study, or engineering discipline, dealing with the design, construction, and operation of robots.


The key characteristic of robots is versatility: the same machine can be adapted to many tasks and serve many functions. This versatility derives from the generality of the robot's physical structure and control; in practice, however, the lack of adequate programming tools can make some tasks impossible to perform. The simplest form of robot programming involves manually moving the robot to each desired position and recording the internal joint coordinates corresponding to that position. In addition, operations such as closing the gripper or activating a welding gun are specified at some of these positions. The resulting "program" is a sequence of vectors of joint coordinates plus activation signals for external equipment, and it is executed by replaying the recorded positions and issuing the indicated signals.
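Such a guided "program", a sequence of joint-coordinate vectors plus activation signals, can be sketched as a small data structure. This is an illustrative sketch only; the names are invented and do not come from any real robot controller.

```python
from dataclasses import dataclass, field

@dataclass
class TeachPoint:
    """One recorded step: joint coordinates plus external signals."""
    joints: tuple          # internal joint coordinates at this position
    signals: tuple = ()    # e.g. ("CLOSE_GRIPPER",) or ("WELD_ON",)

@dataclass
class GuidedProgram:
    steps: list = field(default_factory=list)

    def record(self, joints, *signals):
        """Store the current position (and any activation signals)."""
        self.steps.append(TeachPoint(tuple(joints), signals))

    def play_back(self, move_to, signal):
        """Replay: drive the arm to each point, then fire its signals."""
        for step in self.steps:
            move_to(step.joints)
            for s in step.signals:
                signal(s)

# Example: record two positions, closing the gripper at the second.
prog = GuidedProgram()
prog.record([0.0, 1.2, -0.5])
prog.record([0.3, 1.0, -0.4], "CLOSE_GRIPPER")

trace = []
prog.play_back(move_to=lambda j: trace.append(("move", j)),
               signal=lambda s: trace.append(("signal", s)))
```

Playback simply replays the recorded steps in order, which is exactly the record-and-replay model described above.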

Many recent approaches to robot programming seek to provide the power of robot-level languages without requiring programming expertise. One approach is to extend the basic philosophy of guiding to include decision-making based on sensing. Another approach, known as task-level programming, requires specifying goals for the positions of objects, rather than the motions of the robot needed to achieve those goals. In particular, a task-level specification is meant to be completely robot-independent; no positions or paths that depend on the robot geometry or kinematics are specified by the user. Task-level programming systems require complete geometric models of the environment and of the robot as input; for this reason, they are also referred to as world-modeling systems. Task-level programming is still in the research stage, in contrast to guiding and robot-level programming, which have reached the commercial stage.

B. Goals of this Paper

The goals of this paper are twofold: first, to identify the requirements for advanced robot programming systems; second, to describe the major approaches to the design of these systems.

(Figure: A representative robot application.)

ROBOT APPLICATION

Robots are applied in many fields. Consider an assembly cell in which parts arrive on moving belts, observed by a vision system. The robots must:
1) determine the position and orientation of the parts;
2) grasp the parts on the moving belts;
3) place each part on a fixture, add it to the assembly, or put it aside for future use, depending on the state of the assembly.

The following sequence is one segment of the application. The task is to grasp a cover on the moving belt, place it on the pump base, and insert four pins so as to align the two parts.

1) Identify, using vision, the (non-overlapping) parts arriving on the belt, so that the position and orientation of each part becomes known to the robot.
2) Move ROBOT1 to the grasp point for the cover, relative to the cover's position and orientation as determined by the vision system.

3) Grasp the cover using a gripping force specified by the programmer.

4) Test the measured finger opening against the expected opening at the grasp point. If it is not within the expected tolerance, signal an error.

5) Place the cover on the base by moving to an approach position above the base and moving down until a programmer-specified upward force is detected by the wrist force sensor. During the downward motion, rotate the hand so as to null out any torques exerted on the cover by misalignment of the cover and the base. Release the cover and record its current position for future use.

6) In parallel with the previous steps, move ROBOT2 to acquire an aligning pin from the feeder. Bring the pin to a point above the position of the first hole in the cover, computed from the known position of the hole relative to the cover and the position of the cover recorded above.
7) Insert the pin. One strategy for this operation is to tilt the pin slightly, to increase the chances of the tip of the pin falling into the hole. If the pin does not fall into the hole, a spiral search can be initiated around that point.

8) In parallel with the insertion of the pin by ROBOT2, ROBOT1 fetches another pin and proceeds with its insertion when ROBOT2 is done. This cycle is repeated until all the pins are inserted. Care should be taken to avoid collisions between the robots.
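The spiral search mentioned in step 7 can be sketched as follows. This is a minimal, purely geometric version in Python; the step size and radius limit are made-up parameters, not values from any particular system.

```python
import math

def spiral_search(center, try_insert, step=0.5, max_radius=5.0):
    """Try insertion at points along an Archimedean spiral around `center`.

    `try_insert((x, y))` returns True when the pin falls into the hole.
    Returns the successful point, or None if the radius limit is reached.
    """
    x0, y0 = center
    theta = 0.0
    while True:
        r = step * theta / (2 * math.pi)   # radius grows one step per turn
        if r > max_radius:
            return None
        point = (x0 + r * math.cos(theta), y0 + r * math.sin(theta))
        if try_insert(point):
            return point
        theta += 0.3                        # advance along the spiral

# Example: the hole is actually 1 unit off in x from the nominal point.
hole = (1.0, 0.0)
found = spiral_search((0.0, 0.0),
                      lambda p: math.dist(p, hole) < 0.2)
```

In a real system `try_insert` would tilt the pin, press down, and use the wrist force sensor to decide whether the tip dropped into the hole.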

This application makes use of four types of sensors: 
1) Direct position sensors. The internal sensors in the robot joints and in the conveyor belts are used to determine the position of the robot and the belt at any instant of time. 
2) Vision sensors. The camera above each belt is used to determine the identity and position of parts and to inspect them.
3) Finger touch sensors. Sensors in the fingers are used to control the magnitude of the gripping force and to detect the presence or absence of objects between the fingers. 
4) Wrist force sensors. The positioning errors in the robot, uncertainty in part positions, errors in grasping position, and part tolerances all conspire to make it impossible to reliably position parts relative to each other with the accuracy the task requires; the wrist force sensor is used instead to detect contact and guide the parts into alignment in spite of these errors.
Sensors can be used to identify the position of parts, to inspect parts, to detect errors during manufacturing operations, and to accommodate unknown surfaces. Sensing places two key requirements on robot programming systems.
The first requirement is input and output mechanisms for acquiring sensory data. This requirement can be met simply by providing the I/O mechanisms available in most high-level computer programming languages; for a binary sensor, for example, 1 (true) can mean that an object is present and 0 (false) that it is absent.
The second requirement is versatile control mechanisms, such as force control, for using sensory data to determine robot motions. This need to specify parameters for sensor-based motions, and to specify alternate actions based on sensory conditions, is the primary motivation for using sophisticated robot programming languages. Sensors are used for different purposes in robot programs; each purpose has a separate impact on the system design.
The principal uses of sensing in robot programming are as follows:
1) initiating and terminating motions, e.g., starting a motion when a part arrives and stopping it on contact;
2) choosing among alternative actions, e.g., choosing between directions: if the sensors do not detect the object in one direction, the robot tries another;
3) obtaining the identity and position of objects and their features;
4) complying with external constraints.
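Use 1), terminating a motion on a sensed condition, is the classic guarded move. A minimal sketch follows; the contact sensor is simulated by a simple threshold test, and the step size and travel limit are invented.

```python
def guarded_move(position, direction, sensed_contact, step=1.0, max_steps=100):
    """Advance `position` along `direction` until `sensed_contact(position)`
    reports contact, or the travel limit is exceeded (an error)."""
    for _ in range(max_steps):
        if sensed_contact(position):
            return position            # motion terminated by the sensor
        position = tuple(p + step * d for p, d in zip(position, direction))
    raise RuntimeError("no contact within travel limit")

# Example: move down in z until a simulated surface at z = 3 is touched.
end = guarded_move((0.0, 0.0, 10.0), (0, 0, -1),
                   sensed_contact=lambda p: p[2] <= 3.0)
```

The same skeleton serves use 2): if the guarded move raises the travel-limit error, the program can catch it and try an alternative direction.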
A compliant motion specification requires the following information:
1) A coordinate frame in which the force sensor readings are to be resolved, known as the constraint frame. Common alternatives are a frame attached to the robot hand, a fixed frame in the room, or a frame attached to the object being manipulated.
2) The desired position trajectory of the robot, which specifies the robot's nominal position as a function of time.
3) Stiffnesses for each of the motion freedoms, relative to the constraint frame.
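The three items above can be collected into a record like the following sketch; the field names and the numeric values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CompliantMotionSpec:
    constraint_frame: str   # "hand", "station", or an object frame
    trajectory: list        # nominal positions as a function of time
    stiffness: tuple        # one stiffness per motion freedom,
                            # expressed relative to the constraint frame

# Example: an insertion that is stiff along z but soft laterally,
# so the part can comply with side forces from the hole.
spec = CompliantMotionSpec(
    constraint_frame="hand",
    trajectory=[(0, 0, 10 - t) for t in range(11)],   # straight descent
    stiffness=(5.0, 5.0, 500.0, 1.0, 1.0, 50.0),      # x y z rx ry rz
)
```

Low lateral stiffness lets the constraint (the hole) steer the part, while high stiffness along z keeps the nominal downward trajectory.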

It is possible to use the signal lines supported by most robot systems to coordinate multiple robots and machines. Some steps require that ROBOT1 and ROBOT2 be coordinated so as to minimize the duration of the operation while avoiding interference between the robots. If we let ROBOT1 be in charge, we can coordinate the operation using the following signal lines:
1) GET-PIN?: ROBOT2 asks if it is safe to get a new pin.
2) OK-TO-GET: ROBOT1 says it is OK.
3) INSERT-PIN?: ROBOT2 asks if it is safe to proceed to insert the pin.
4) OK-TO-INSERT: ROBOT1 says it is OK.
5) DONE: ROBOT1 says it is all over.

The basic operation of the control programs could be as follows:

ROBOT1:
       Wait for COVER-ARRIVED
       Signal OK-TO-GET
       Call Place-Cover-in-Fixture
       i := 1
    1: Wait for INSERT-PIN?
       Signal OK-TO-INSERT
       if (i < np) then do [Call Get-Pin-1; i := i + 1]
       else do [Signal DONE; Goto 2]
       Wait for GET-PIN?
       if (i < np) then do [Signal OK-TO-GET; i := i + 1]
       Call Insert-Pin-1
       Goto 1
    2: ...

ROBOT2:
    3: Signal GET-PIN?
       Wait for OK-TO-GET
       Call Get-Pin-2
       Signal INSERT-PIN?
       If signal DONE Goto 4
       Wait for OK-TO-INSERT
       Call Insert-Pin-2
       Goto 3
    4: ...

PROCEEDINGS OF THE IEEE, VOL. 71, NO. 7, JULY 1983

This illustration of how a simple coordination task could be done with only binary signals also serves to illustrate the limitations of the method.
1) The programs are asymmetric; one robot is the master of the operation. If the cover can arrive on either belt and be retrieved by either robot, then either an additional signal line is needed to indicate which robot will be the master, or both robot systems must be subordinated to a third controller.
2) If one of the robots finds a defective pin, there is no way for it to cause the other robot to insert an additional pin while it goes to dispose of the defective one. The program must allocate new signal lines for this purpose. In general, a large number of signals may be needed.
3) Because one robot does not know the position of the other, it is necessary to coordinate them on the basis of very conservative criteria, e.g., being engaged in getting a pin or inserting a pin. This will result in slow execution unless the tasks are subdivided very finely and tests performed at each division, which is cumbersome.
4) The position of the pump cover and the pin feeder must be known by each process independently. No information obtained during the execution of the task by one robot can be used by the other robot; it must discover the information independently.

The difficulties outlined above are due to limited communication between the processes. Signal lines are a simple, but limited, method of transferring information among processes. In practice, sophisticated tasks require efficient means for coordination and for sharing the world model (including the state of the robots) between processes. The issue of coordination between cooperating and competing asynchronous processes is one of the most active research areas in computer science. Many language mechanisms have been proposed for process synchronization, among them semaphores [17], events, conditional critical regions [39], monitors and queues [11], and communicating sequential processes [40]. Robot systems should build upon these developments, perhaps by using a language such as Concurrent Pascal [11] or Ada [42] as a base language. A few existing robot languages have adopted some of these mechanisms, e.g., AL and TEACH [81], [82]. Even the most sophisticated developments in computer languages do not address all the robot coordination problems, however. When the interaction among robots is subject to critical real-time constraints, the paradigm of nearly independent control with periodic synchronization is inadequate. An example occurs when multiple robots must cooperate physically, e.g., in lifting an object too heavy for any one of them.
Slight deviations from a pre-planned position trajectory would cause one of the robots to bear all the weight, leading to disaster. What is needed, instead, is cooperative control of both robots based on the force exerted on each robot by the load. The programming system should provide a mechanism for specifying the behavior of systems more complex than a single robot. Another example of the need for this kind of coordination is in the programming and control of multifingered grippers [84]. In summary, existing robot programming systems are based on the view of a robot system as a single robot weakly linked to other machines. In practice, many machines, including sensors, special grippers, feeders, conveyors, factory control computers, and several robots, may be cooperating during a task. Furthermore, the interactions between them may be highly dynamic, e.g., to maintain a force between them, or may require extensive sharing of information. No existing robot programming system adequately deals with all of these interactions. In fact, no existing computer language is adequate to deal with this kind of parallelism and real-time constraints.

LOZANO-PÉREZ: ROBOT PROGRAMMING

E. Programming Support

Robot applications do not occur in a vacuum. Robot programs often must access external manufacturing data, ask users for data or corrective action, and produce statistical reports. These functions are typical of most computer applications and are supported by all computer programming systems. Many robot systems neglect to support them, however. In principle, the exercise of these functions can be separated from the specification of the task itself but, in practice, they are intimately intertwined. A sophisticated robot programming system must first be a sophisticated programming system. New robot programming systems must be carefully designed not to overlook the "mundane" programming functions. A similar situation exists with respect to program development.
Robot program development is often ignored in the design of robot systems and, consequently, complex robot programs can be very difficult to debug.

The development of robot programs has several characteristics which merit special treatment.
1) Robot programs have complex side effects, and their execution time is usually long; hence it is not always feasible to re-initialize the program upon failure. Robot programming systems should allow programs to be modified on-line and immediately restarted.
2) Sensory information and real-time interactions are not usually repeatable. One useful debugging tool for sensor-based programs provides the ability to record the sensor outputs, together with program traces.
3) Complex geometry and motions are difficult to visualize; simulators can play an important role in debugging.
These are not minor considerations; they are central to the increased usefulness of robot programming systems.
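Point 2), recording sensor outputs together with program traces, can be sketched as a thin wrapper around sensor reads. This is an illustrative design, not an existing tool; all names are invented.

```python
import time

class TraceRecorder:
    """Log every sensor reading alongside the program step that requested it,
    so a non-repeatable run can be examined (or replayed) while debugging."""

    def __init__(self):
        self.records = []   # (timestamp, step, sensor_name, value)

    def read(self, step, sensor_name, sensor_fn):
        value = sensor_fn()
        self.records.append((time.monotonic(), step, sensor_name, value))
        return value

# Example: two reads of a simulated wrist force sensor from different steps.
rec = TraceRecorder()
force = rec.read("approach", "wrist_fz", lambda: 0.0)
force = rec.read("insert", "wrist_fz", lambda: 12.5)   # contact detected
```

Replaying `rec.records` through the program in place of the live sensors makes an otherwise unrepeatable sensor-driven run reproducible.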

Another characteristic is that robot systems are usually designed to stand alone, to be used directly by a single user without the mediation of other computers. This design made perfect sense when robots were not controlled by general-purpose computers; today it makes little sense. A robot system should support a high-speed command interface to other computers. Then, if a user wants to develop an alternate interface, he need not be limited by the performance of the robot system's user interface; on the other hand, he can take advantage of the control system and kinematics calculations in the existing system. This design would also facilitate the coordination of multiple robots and make sophisticated applications easier to develop.

IV. SURVEY OF ROBOT PROGRAMMING SYSTEMS

In this section, we survey several existing and proposed robot programming systems.
All robot programming systems support some form of guiding. The simplest form of guiding is to record a sequence of robot positions that can then be "played back"; we call this basic guiding.
In robot-level systems, guiding is used to define positions that are then sequenced by a program. The differences among basic guiding systems are a) in the way the positions are specified and b) in the repertoire of motions between positions. The most common ways of specifying positions are by specifying incremental motions on a teach pendant and by moving the robot through the motions, either directly or via a master-slave linkage. The incremental motions specified via the teach pendant can be interpreted as: independent motion of each joint between positions, straight lines in joint-coordinate space, or straight lines in Cartesian space relative to some coordinate system, e.g., the robot's base or the robot's end-effector. When using the teach pendant, only a few positions are usually recorded, on command from the instructor. The path of the robot is then interpolated between these positions using one of the three types of motion listed above. When moving the robot through the motions directly, the complete trajectory can be recorded as a series of closely spaced positions on a fixed time base. The latter method is used primarily in spray painting, where it is important to duplicate the input trajectory precisely. The primary advantage of guiding is its immediacy: what you see is what you get. In many cases, however, it is extremely cumbersome, as when the same position (or a simple variation) must be repeated at different points in a task or when fine positioning is needed. Furthermore, we have indicated repeatedly the importance of sensing in robotics and the limitations of guiding in the context of sensing. Another important limitation of basic guiding is in expressing control structures, which inherently require testing and describing alternate sequences.
1) Extended Guiding: The limitations of basic guiding with respect to sensing and control can be abated, though not completely abolished, by extensions short of a full programming language.
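The interpolation between a few taught positions described above is simplest in joint space: each joint moves along a straight line between the recorded configurations. A sketch (the function name and sample values are illustrative):

```python
def interpolate_joints(p, q, n):
    """Return n configurations on the straight line (in joint space)
    from taught position p to taught position q, endpoints included."""
    return [tuple(a + (b - a) * i / (n - 1) for a, b in zip(p, q))
            for i in range(n)]

# Two taught positions (joint angles in degrees), with two midpoints.
path = interpolate_joints((0.0, 90.0, 0.0), (90.0, 0.0, 30.0), n=4)
```

Straight lines in Cartesian space would instead interpolate the end-effector pose and solve the kinematics at each intermediate point, which is why the three motion types trace different paths through space.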
For example, one of the most common uses of sensors in robot programs is to determine the location of some object to be manipulated. After the object is located, subsequent motions are made relative to the object's coordinate frame. This capability can be accommodated within the guiding paradigm if taught motions can be interpreted as relative to some coordinate frame that may be modified at execution time. These coordinate frames can be determined, for example, by having the robot move until a touch sensor on the end-effector encounters an object. This is known as a guarded motion, or a search. This capability is part of some commercial robot systems, e.g., ASEA [3], Cincinnati Milacron [41], and IBM [32], [92]. This approach could be extended to the case when the coordinate frames are obtained from a vision system. Some guiding systems also provide simple control structures. For example, the instructions in the taught sequence are given numbers. Then, on the basis of tests on external or internal binary signals, control can be transferred to different points in the taught sequence. The ASEA and Cincinnati Milacron guiding systems, for example, both support conditional branching. These systems also support a simple form of procedures. The procedures can be used to carry out common operations performed at different times in the taught sequence, such as common machining operations applied to palletized parts. The programmer can exploit these facilities to produce more compact programs. These control structure capabilities are limited, however, primarily because guiding systems do not support explicit computation.

Fig. 4. Palletizing task.

To illustrate the capabilities of extended guiding systems, we present a simple task programmed in the ASEA robot's guiding system. The task is illustrated in Fig.
4; it consists of picking a series of parts of different heights from a pallet, moving them to a drilling machine, and placing them on a different pallet. The resulting program has the following structure:

No.   Instruction      Remarks
10    OUTPUT ON 17     Flag ON indicates do pickup
20    PATTERN          Beginning of procedure
30    TEST JUMP 17     Skip next instruction if flag is on
40    JUMP 170
50    OUTPUT OFF 17    Next time do put down
60    ...              Pickup operation (see below)
100   MOD              End of common code for pickup
110   ...              Positioning for first pickup
130   MOD              Execute procedure
140   ...              Positioning for second pickup
160   MOD              Execute procedure
170   OUTPUT ON 17     Machining and put down operation;
                       next time do pickup
200   MOD              End of common code for put down
210   ...              Position for first put down
220   MOD              Execute procedure
230   ...              Position for second put down
240   MOD              Execute procedure

Note that the MOD operation is used with two meanings: 1) to indicate the end of a common section of the PATTERN, and 2) to indicate where the common section is to be executed. The sequence of instructions executed would be: 10, 20, 30, 50, 60, ..., 100, ..., 130, 30, 40, 170, ..., 200, ..., 230, 30, 50, ...

The pickup operation itself is taught as follows:

Programmer action                        Remarks
Position vertically to P2.               Manual motion to the end position
Select speed to P2.                      of the search.
Key code for search and vertical         This code indicates that the motion
operation: PTPF                          that follows is a search in the
                                         vertical direction. Insert
                                         positioning command to P2 in program.
Set grip opening and select              Specify finger opening. Insert
waiting time: GRIPPERS                   command to actuate grippers (open).
Position to P3: PTPL                     Grasping position (relative to P2).
Select time for motion.                  Coordinated joint motion, relative
                                         to the position after the search.
Set grip opening and select              Specify finger closing. Insert
waiting time: GRIPPERS                   command to actuate grippers (close).

The put-down sequence would be programmed in a similar fashion.
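The earlier idea of interpreting taught motions relative to a coordinate frame located at execution time (by a guarded search or by a vision system) amounts to composing the taught offset with the located frame. A planar sketch, with an illustrative function name and made-up numbers:

```python
import math

def located_frame_point(frame_xy, frame_theta, taught_offset):
    """Map a point taught relative to an object's frame into world
    coordinates, once the frame's pose is found at execution time."""
    ox, oy = taught_offset
    c, s = math.cos(frame_theta), math.sin(frame_theta)
    fx, fy = frame_xy
    # Rotate the taught offset by the frame's orientation, then translate.
    return (fx + c * ox - s * oy, fy + s * ox + c * oy)

# Taught: grasp point 2 units along the object's own x axis.
# At run time, vision finds the object at (10, 5), rotated 90 degrees.
grasp = located_frame_point((10.0, 5.0), math.pi / 2, (2.0, 0.0))
```

The taught data never changes; only the frame in which it is interpreted is updated from the sensor, which is exactly what makes extended guiding tolerant of part-position variation.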
2) Off-Line Guiding: Traditional guiding requires that the workspace for the task, all the tooling, and any parts be available during program development. If the task involves a single large or expensive part, such as an airplane, ship, or automobile, it may be impractical to wait until a completed part is available before starting the programming; this could delay the complete manufacturing process. Alternatively, the task environment may be in space or underwater. In these cases, a mockup of the task may be built, but a more attractive alternative is available when a CAD model of the task exists. In this case, the task model together with a robot model can be used to define the program by off-line guiding. In this method, the system simulates the motions of the robot in response to a program or to guiding input from a teach pendant. Off-line guiding offers the additional advantages of safety and versatility. In particular, it is possible to experiment with different arrangements of the robot relative to the task so as to find one that, for example, minimizes task execution time [38].

B. Robot-Level Programming

In Section III we discussed a number of important functional issues in the design of robot programming systems. The design of robot-level languages, by virtue of its heritage in the design of computer languages, has inherited many of the controversies of that notoriously controversial field. A few of these controversial issues are important in robot programming:
1) Compiler versus interpreter. Language systems that compile high-level languages into a lower level language can achieve great efficiency of execution, as well as early detection of some classes of programming errors. Interpreters, on the other hand, provide enhanced interactive environments, including debugging, and are more readily extensible. These human-factors issues have tended to dominate; most robot language systems are interpreter based.
Performance limitations of interpreters have sometimes interfered with achieving some useful capabilities, such as functionally defined motions.
2) New versus old. Is it better to design a new language or extend an old one? A new one can be tailored to the needs of the new domain. An old one is likely to be more complete, to have an established user group, and to have supporting software packages. In practice, few designers can avoid the temptation of starting de novo; therefore, most robot languages are "new" languages. There are, in addition, difficulties in acquiring sources for existing language systems. One advantage of interpreters in this regard is that they are smaller than compilers and, therefore, easier to build.
In the remainder of the section, we examine some representative robot-level programming systems, in roughly chronological order. The languages have been chosen to span a wide range of approaches to robot-level programming. We use examples to illustrate the "style" of the languages; a detailed review of all these languages is beyond the scope of this paper. We close the section with a brief mention of some of the many other robot-level programming systems that have been developed in the past ten years.
1) MHI 1960-1961: The first robot-level programming language, MHI, was developed for one of the earliest computer-controlled robots, the MH-1 at MIT [18]. As opposed to its contemporary the Unimate, which was not controlled by a general-purpose computer and used no external sensors, the MH-1 was equipped with several binary touch sensors throughout its hand, an array of pressure sensors between the fingers, and photodiodes on the bottom of the fingers. The availability of sensors fundamentally affected the mode of programming developed for the MH-1. MHI (Mechanical Hand Interpreter) ran on an interpreter implemented on the TX-0 computer.
The programming style in MHI was framed primarily around guarded moves, i.e., moving until a sensory condition was detected. The language primitives were:
1) "move": indicate a direction and a speed;
2) "until": test a sensor for some specified condition;
3) "ifgoto": branch to a program label if some condition is detected;
4) "ifcontinue": branch to continue the action if some condition holds.
A sample program, taken from [18], follows:

a, move x for 120        ; Move along x with speed 120
until s1 10 rel          ; until sense organ 1 indicates a decrease of 10,
                         ; relative to the value at start of this step
                         ; (condition 1)
until s1 206 abs stp     ; or until sense organ 1 indicates 206 or less
                         ; absolute, then stop (condition 2)
ifgoto f1, b             ; if condition 1 alone is fulfilled,
                         ; go to sequence b
ifgoto f2, c             ; if at least condition 2 is fulfilled,
                         ; go to sequence c
ifcontinue t, a          ; in all other cases continue sequence a

MHI did not support arithmetic or any other control structure beyond sensor monitoring. The language, still, is surprisingly "modern" and powerful. It was to be many years before a more general language was implemented.
2) WAVE 1970-1975: The WAVE [74] system, developed at Stanford, was the earliest system designed as a general-purpose robot programming language. WAVE was a "new" language, whose syntax was modeled after the assembly language of the PDP-10. WAVE ran off-line as an assembler on a PDP-10 and produced a trajectory file which was executed on-line by a dedicated PDP-6. The philosophy in WAVE was that motions could be pre-planned and that only small deviations from these motions would happen during execution. This decision was motivated by the computation-intensive algorithms employed by WAVE for trajectory planning and dynamic compensation. Better algorithms and faster computers have removed this rationale from the design of robot systems today.
In spite of WAVE's low-level syntax, the system provided an extensive repertoire of high-level functions. WAVE pioneered several important mechanisms in robot programming systems; among these were:
1) the description of positions by the Cartesian coordinates of the end-effector (x, y, z, and three Euler angles);
2) the coordination of joint motions to achieve continuity in velocities and accelerations;
3) the specification of compliance in Cartesian coordinates.
The following program in WAVE, from [74], serves to pick up a pin and insert it into a hole:

        TRANS PIN ...            ; Location of pin
        TRANS HOLE ...           ; Location of hole
        ASSIGN TRIES 2           ; Number of pickup attempts
        MOVE PIN                 ; Move to PIN. MOVE first moves in +Z,
                                 ; then to a point above PIN, then -Z
PICKUP: CLOSE 1                  ; Pick up pin
        SKIPE 2                  ; Skip next instruction if Error 2 occurs
                                 ; (Error 2: fingers closed beyond arg to CLOSE)
        JUMP OK                  ; Error did not occur, goto OK
        OPEN 5                   ; Error did occur, open the fingers
        CHANGE Z, -1, NIL, 0, 0  ; Move down one inch
        SOJG TRIES, PICKUP       ; Decrement TRIES, if not negative
                                 ; jump to PICKUP
        WAIT NO PIN              ; Print "NO PIN" and wait for operator
        JUMP PICKUP              ; Try again when operator types PROCEED
OK:     MOVE HOLE                ; Move above hole
        STOP FV, NIL             ; Stop on 50 oz
        CHANGE Z, -1, NIL, 0, 0  ; Try to go down one inch
        SKIPE 23                 ; Error 23, failed to stop
        JUMP NOHOLE              ; Error did not occur (pin hit surface)
        FREE 2, X, Y             ; Proceed with insertion by complying
                                 ; with forces along x and y
        SPIN 2, X, Y             ; Also comply with torques about x and y
        STOP FV, NIL             ; Stop on 50 oz
        CHANGE Z, -2, NIL, 0, 0  ; Make the insertion
NOHOLE: WAIT NO HOLE             ; Failed

Note the use of compliance and guarded moves to achieve robustness in the presence of uncertainty and for error recovery. WAVE's syntax was difficult, but the language supported a significant set of robot functions, many of which are still not available in commercial robot systems.
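The FREE and SPIN idea, letting selected Cartesian freedoms comply with sensed forces while the rest hold position, can be sketched as a toy proportional scheme. The gain, the axis encoding, and the function name are invented for illustration.

```python
def compliant_correction(sensed_force, compliant_axes, gain=0.01):
    """Convert sensed forces into position corrections, but only along
    the axes selected as compliant (cf. WAVE's FREE); the remaining
    axes get no correction and simply hold their nominal position."""
    return tuple(gain * f if axis in compliant_axes else 0.0
                 for axis, f in zip("xyz", sensed_force))

# Pin jammed against the side of the hole: lateral force sensed in x.
# Complying in x and y lets the hole itself center the pin, while z
# (the insertion direction) remains position-controlled.
dx, dy, dz = compliant_correction((40.0, 0.0, -5.0),
                                  compliant_axes={"x", "y"})
```

The torque counterpart (cf. SPIN) would apply the same selection to sensed torques and angular corrections.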
3) MINI 1972-1976: MINI [90], developed at MIT, was not a "new" language; rather, it was an extension to an existing LISP system by means of a few functions. The functions served as an interface to a real-time process running on a separate machine. LISP has little syntax; it is a large collection of procedures with common calling conventions, with no distinction between user and system code. The robot control functions of MINI simply expanded the repertoire of functions available to the LISP programmer. Users could expand the syntax and semantics of the basic robot interface at will, subject to the limitations of the control system. The principal limitation of MINI was the fact that the robot joints were controlled independently. The robot used with MINI was Cartesian, which minimized the drawbacks of uncoordinated point-to-point motions. The principal attraction of "The Little Robot System" [44], [90] in which MINI ran was the availability of a high-quality 6-degree-of-freedom force-sensing wrist [44], [66] which enabled sensitive force control of the robot. Previous force-control systems either set the gains in the servos to control compliance [43], or used the error signals in the servos of the electric joint motors to estimate the forces at the hand [73]. In either case, the resulting force sensitivity was on the order of pounds; MINI's sensitivity was more than an order of magnitude better (approximately 1 oz). The basic functions in MINI set position or force goals for each of the degrees of freedom (SETM), read the position and force sensors (GETM), and wait for some condition to occur (WAIT). We will illustrate the use of MINI using a set of simple procedures developed by Inoue [44].
The central piece of a peg-in-hole program would be rendered as follows in MINI:

(DEFUN MOVE-ABOVE (P OFFSET)
  (X = (X-LOCATION P))
  (Y = (Y-LOCATION P))
  (Z = (PLUS (Z-LOCATION P) OFFSET))
  ; set x, y, z goals and wait till they are reached
  (WAIT '(AND (?X) (?Y) (?Z))))

(DEFUN INSERT (HOLE)
  (MOVE-ABOVE HOLE 0.25)
  ; define a target 1 inch below the current position
  (SETQ ZTARGET (DIFFERENCE (GETM ZPOS) 1.0))
  (FZ = LANDING-FORCE)
  ; move down until a contact force is met or until
  ; the position target is met
  (WAIT '(OR (?FZ) (SEQ (GETM ZPOS) ZTARGET)))
  (COND ((SEQ (GETM ZPOS) ZTARGET)
         ; if the position goal was met, i.e., no surface encountered,
         ; comply with lateral forces
         (FX = 0) (FY = 0)
         ; and push down until enough resistance is met
         (FZ = INSERTION-FORCE)
         (WAIT '(?FZ)))
        (T ; if a surface was encountered
         (ERROR INSERT))))

MINI did not have any of the geometric and control operations of WAVE built in, but most of these could easily be implemented as LISP procedures. The primary functional difference between the two systems lay in the more sophisticated trajectory-planning facilities of WAVE. The compensating advantage of MINI was that it did not require any pre-planning; the programs could use arbitrary LISP computations to decide on motions in response to sensory input.
4) AL 1974-Present: AL [24], [67] is an ambitious attempt to develop a high-level language that provides all the capabilities required for robot programming, as well as the programming features of modern high-level languages such as ALGOL and Pascal. AL was designed to support robot-level and task-level specification. The robot level has been completed and will be discussed here; the task-level development will be discussed in Section IV-C. AL, like WAVE and MINI, runs on two machines. One machine is responsible for compiling the AL input into a lower level language that is interpreted by a real-time control machine. An interpreter for the AL language has been completed as well [5].
AL was designed to provide four major kinds of capabilities: 1) The manipulation capabilities provided by the WAVE system: Cartesian specification of motions, trajectory planning, and compliance. 2) The capabilities of a real-time language: concurrent execution of processes, synchronization, and on-conditions. 3) The data and control structures of an ALGOL-like language, including data types for geometric calculations, e.g., vectors, rotations, and coordinate frames. 4) Support for world modeling, especially the AFFIXMENT mechanism for modeling attachments between frames, including temporary ones such as those formed by grasping. An AL program for the peg-in-hole task is:

    BEGIN "insert peg into hole"
      FRAME peg-bottom, peg-grasp, hole-bottom, hole-top;
      {The coordinate frames represent actual positions of object features,
       not hand positions}
      peg-bottom <- FRAME(nilrot, VECTOR(20, 30, 0)*inches);
      hole-bottom <- FRAME(nilrot, VECTOR(25, 35, 0)*inches);
      {Grasping position relative to peg-bottom}
      peg-grasp <- FRAME(ROT(xhat, 180*degrees), 3*zhat*inches);
      tries <- 2;
      grasped <- FALSE;
      {The top of the hole is defined to have a fixed relation to the bottom}
      AFFIX hole-top TO hole-bottom RIGIDLY
        AT TRANS(nilrot, 3*zhat*inches);
      OPEN bhand TO peg-diameter + 1*inches;
      {Initiate the motion to the peg, note the destination frame}
      MOVE barm TO peg-bottom * peg-grasp;
      WHILE NOT grasped AND i < tries DO
        BEGIN "Attempt grasp"
          CLOSE bhand TO 0*inches;
          IF bhand < peg-diameter/2 THEN
            BEGIN "No object in grasp"
              OPEN bhand TO peg-diameter + 1*inches;
              MOVE barm TO @ - 1*inches;   {@ indicates current location}
              i <- i + 1;
            END
          ELSE grasped <- TRUE;
        END
      IF NOT grasped THEN ABORT("Failed to grasp the peg");
      {Establish a fixed relation between arm and peg.
       }
      AFFIX peg-bottom TO barm RIGIDLY;
      {Note that we move the peg-bottom, not barm}
      MOVE peg-bottom TO hole-top;
      {Test if a hole is below us}
      MOVE barm TO @ - 1*inches
        ON FORCE(zhat) > 10*ounces DO ABORT("No Hole");
      {Exert downward force, while complying to side forces}
      MOVE peg-bottom TO hole-bottom DIRECTLY
        WITH FORCE-FRAME = station IN WORLD
        WITH FORCE(zhat) = -10*ounces
        WITH FORCE(xhat) = 0*ounces
        WITH FORCE(yhat) = 0*ounces
        SLOWLY;
    END "insert peg in hole"

LOZANO-PÉREZ: ROBOT PROGRAMMING 833

AL is probably the most complete robot programming system yet developed; it was the first robot language to be a sophisticated computer language as well as a robot control language. AL has been a significant influence on most later robot languages. 5) VAL 1975-Present: VAL [89], [98] is the robot language used in the industrial robots of Unimation Inc., especially the PUMA series. It was designed to provide a subset of the capabilities of WAVE on a stand-alone minicomputer. VAL is an interpreter; improved trajectory-calculation methods have enabled it to forego any off-line trajectory-calculation phase. This has improved the ease of interaction with the language. The basic capabilities of the VAL language are as follows: point-to-point, joint-interpolated, and Cartesian motions (including approach and deproach motions); specification and manipulation of Cartesian coordinate frames, including the specification of locations relative to arbitrary frames; integer variables and arithmetic, conditional branching, and procedures; and setting and testing binary signal lines, with the ability to monitor these lines and execute a procedure when an event is detected. VAL's support of sensing is limited to binary signal lines. These lines can be used for synchronization and also for limited sensory interaction, as shown earlier. VAL's support of on-line frame computation is limited to composition of constant coordinate frames and fixed translation offsets on existing frames.
It does support relative motion; this, together with the ability to halt a motion in response to a signal, provides the mechanisms needed for guarded moves. The basic VAL also has been extended to interact with an industrial vision system [30] by acquiring the coordinate frame of a part in the field of view. As a computer language, VAL is rudimentary; it most resembles the computer language BASIC. VAL only supports integer variables, not floating-point numbers or character strings. VAL does not support arithmetic on position data. VAL does not support any kind of data aggregate, such as arrays or lists, and, although it supports procedures, they may not take any arguments. A sample VAL program for the peg-in-hole task is shown below. VAL does not support compliant motion, so this operation assumes either that the clearance between the peg and hole is greater than the robot's accuracy or that a passive compliance device is mounted on the robot's end-effector [102]. This limits the comparisons that can be made to other, more general, languages. In the example, we assume that a separate processor is monitoring a force sensor and communicating with VAL via signal lines. In particular, signal line 3 goes high if the Z component of force exceeds a preset threshold.

        SETI TRIES = 2
        REMARK If the hand closes to less than 100 mm, go to the statement
        REMARK labelled 20. Otherwise continue at statement 30.
    10  GRASP 100, 20
        GOTO 30
        REMARK Open the fingers, displace down along the world Z axis,
        REMARK and try again.
    20  OPENI 500
        DRAW 0, 0, -200
        SETI TRIES = TRIES - 1
        IF TRIES GE 0 THEN 10
        TYPE NO PIN
        STOP
        REMARK Move 300 mm above HOLE following a straight line.
    30  APPROS HOLE, 300
        REMARK Monitor signal line 3 and call procedure ENDIT to STOP the
        REMARK program if the signal is activated during the next motion.
        REACTI 3, ENDIT
        APPROS HOLE, 200
        REMARK Did not feel force, so continue to HOLE.
        MOVES HOLE

VAL has been designed primarily for operations involving predefined robot positions, hence its limited support of computation, data structures, and sensing. A new version of the system, VAL-2, is under development which incorporates more support for computation and communication with external processes. 6) AML 1977-Present: AML [96] is the robot language used in IBM's robot products. AML, like AL, is an attempt at developing a complete "new" programming language for robotics that is also a full-fledged interpreted computer language. The design philosophy of AML is somewhat different from that of AL, however. Where AL focuses on providing a rich set of built-in high-level primitives for robot operations, AML has focused on providing a systems environment where different user robot-programming interfaces may be built. For example, extended guiding [92] and vision interfaces [50] can be programmed within the AML language itself. This approach is similar to that followed in MINI. AML supports operations on data aggregates, which can be used to implement operations on vectors, rotations, and coordinate frames, although these data types are part of recent releases of the language. AML also supports joint-space trajectory planning subject to position and velocity constraints, absolute and relative motions, and sensor monitoring that can interrupt motions. Recent AML releases support Cartesian motion and frame affixment, but not general compliant motion or multiple processes.
An AML program for peg-in-hole might be:

    PICKUP: SUBR(PART-DATA, TRIES);
      MOVE(GRIPPER, DIAMETER(PART-DATA)+0.2);
      MOVE(<1, 2, 3>, XYZ-POSITION(PART-DATA)+);
      TRY-PICKUP(PART-DATA, TRIES);
      END;

    TRY-PICKUP: SUBR(PART-DATA, TRIES);
      IF TRIES LT 1 THEN RETURN('NO PART');
      DMOVE(3, -1.0);
      IF GRASP(DIAMETER(PART-DATA)) = 'NO PART'
        THEN TRY-PICKUP(PART-DATA, TRIES - 1);
      END;

    GRASP: SUBR(DIAMETER, F);
      FMONS: NEW APPLY($MONITOR, PINCH-FORCE(F));
      MOVE(GRIPPER, 0, FMONS);
      RETURN(IF QPOSITION(GRIPPER) LE DIAMETER/2
               THEN 'NO PART'
               ELSE 'PART');
      END;

    INSERT: SUBR(PART-DATA, HOLE);
      FMONS: NEW APPLY($MONITOR, TIP-FORCE(LANDING-FORCE));
      MOVE(<1, 2, 3>, HOLE+);
      DMOVE(3, -1.0, FMONS);
      IF QMONITOR(FMONS) = 1
        THEN RETURN('NO HOLE');
      MOVE(3, HOLE(3) + PART-LENGTH(PART-DATA));
      END;

    PART-IN-HOLE: SUBR(PART-DATA, HOLE);
      PICKUP(PART-DATA, 2.);
      INSERT(PART-DATA, HOLE);
      END;

(Compliant motions at low speed could be written as user programs in AML by using its sensor I/O operations. For high-speed motions, the real-time control process would have to be extended.)

834 PROCEEDINGS OF THE IEEE, VOL. 71, NO. 7, JULY 1983

This example has shown the implementation of low-level routines, such as GRASP, that are available as primitives in AL and VAL. In general, such routines would be incorporated into a programming library available to users and would be indistinguishable from built-in routines. The important point is that such programs can be written in the language. The AML language design has adopted many decisions from the designs of the LISP and APL programming languages. AML, like LISP, does not make distinctions between system and user programs. Also, AML provides a versatile uniform data aggregate, similar to LISP's lists, whose storage is managed by the system. AML, like APL and LISP, provides uniform facilities for manipulating aggregates and for mapping operations over the aggregates.
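The monitor mechanism used in GRASP and INSERT above, a sensor check that can terminate an active motion, can be sketched as follows. This Python sketch is illustrative only; the incremental-motion model and the tip-force function are invented:

```python
# Sketch of the AML-style sensor-monitor pattern: a motion proceeds in
# small increments, and registered monitors are checked at each step;
# any monitor that trips terminates the motion. The joint model and
# force function are invented for illustration.

def move_with_monitors(position, target, monitors, step=0.1):
    """Move `position` toward `target`; return (final_pos, tripped_monitor)."""
    direction = 1 if target > position else -1
    while (target - position) * direction > 1e-9:
        position += direction * min(step, abs(target - position))
        for name, predicate in monitors.items():
            if predicate(position):
                return position, name   # motion interrupted mid-travel
    return position, None               # motion completed normally

# A fictitious tip-force model: force ramps up once the tool passes z = 0.4.
def tip_force(z):
    return max(0.0, (0.4 - z) * 100.0)  # ounces, invented model

monitors = {"FMONS": lambda z: tip_force(z) > 10.0}

pos, tripped = move_with_monitors(1.0, 0.0, monitors)
print(pos, tripped)   # motion is cut short once FMONS trips
```

The caller then branches on which monitor, if any, fired, just as the AML INSERT routine tests QMONITOR(FMONS) to distinguish "no hole" from a successful landing.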
The languages WAVE, MINI, AL, VAL, and AML are well within the mold of traditional procedural languages, both in syntax and in the semantics of all except a few of their operations. The next three languages we consider have departed from the main line of computer programming languages in more significant ways. 7) TEACH 1975-1978: The TEACH language [81], [82] was developed as part of the PACS system at Bendix Corporation. The PACS system addressed two important issues that have received little attention in other robot programming systems: the issue of parallel execution of multiple tasks with multiple devices, including a variety of sensors, and the issue of defining robot-independent programs. In addressing these issues, TEACH introduced several key innovations, among them the following: 1) Programs are composed of partially ordered sequences of statements that can be executed sequentially or in parallel. 2) The system supports very flexible mapping between the logical devices, e.g., robots and fixtures, specified in the program and the physical devices that carry them out. 3) All motions are specified relative to local coordinate frames, so as to enable simple relocation of the motion sequence. These features are especially important in the context of systems with multiple robots and sensors, which are likely to be common in future applications. Few attempts have been made to deal with the organization and coordination problems of complex tasks with multiple devices, not all of them robots. Ruoff [82] reports that even the facilities of TEACH proved inadequate in coping with very complex applications and argues for the use of model-based programming tools. 8) PAL 1978-Present: PAL [93] is very different in conception from the languages we have considered thus far. PAL programs consist primarily of a sequence of homogeneous coordinate equations involving the locations of objects and of the robot's end-effector.
Some of the transforms in these equations, e.g., those specifying the relative location of a feature to an object's frame, are defined explicitly in the program. Other coordinate frames are defined implicitly by the equations; leading the robot through an execution of the task establishes relations among these frames. Solving for the implicitly defined frames completes the program. PAL programs manipulate basic coordinate frames that define the position of key robot features: Z represents the base of the robot relative to the world, T6 represents the end of the sixth (last) robot link relative to Z, and E represents the position of the end-effector tool relative to T6. Motions of the tool with respect to the robot base are accomplished by specifying the value of Z + T6 + E, where + indicates composition of transforms. So, for example, Z + T6 + E = CAM + BKT + GRASP specifies that the end-effector should be placed at the grasp position on the bracket whose position is known relative to a camera, as discussed in Section III-B. The MOV command in PAL indicates that the "generalized" robot tool frame, ARM + TOL, is to be moved to the goal frame named in the command. For simple motions of the end-effector relative to the robot base, ARM is Z + T6 and TOL is E. We can rewrite ARM to indicate that the motion happens relative to another object; e.g., the example above can be rewritten as - BKT - CAM + Z + T6 + E = GRASP. In this case, ARM can be set to the transform expression - BKT - CAM + Z + T6. MOV GRASP will then indicate that the end-effector is to be placed on the grasp frame of the bracket, as determined by the camera. Similarly, placing the pin in the bracket's hole can be viewed as redefining the tool frame of the robot to be at the hole. This can be expressed as - FIXTURE + Z + T6 + E - GRASP + HOLE = PIN. By setting ARM to - FIXTURE + Z + T6 and TOL to E - GRASP + HOLE, MOV PIN will have the desired effect.
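If rotations are ignored so that each frame reduces to a translation, the algebra of these coordinate equations can be sketched directly: + becomes vector addition and - becomes negation. All frame values in this Python sketch are invented for illustration:

```python
# Sketch of PAL-style coordinate equations, restricted to translations
# so that '+' is vector addition and '-' is negation (rotations are
# omitted to keep the example short). All frame values are invented.

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def neg(a):    return tuple(-x for x in a)

# Known frames (translations in inches, invented values).
Z     = (0.0, 0.0, 10.0)   # robot base relative to the world
E     = (0.0, 0.0, 2.0)    # tool relative to the last link
CAM   = (30.0, 5.0, 15.0)  # camera in the world
BKT   = (2.0, 1.0, -3.0)   # bracket relative to the camera
GRASP = (0.5, 0.0, 1.0)    # grasp feature relative to the bracket

# Z + T6 + E = CAM + BKT + GRASP, solved for the implicit frame T6:
goal = add(add(CAM, BKT), GRASP)
T6 = add(neg(Z), add(goal, neg(E)))
print(T6)

# Composing the solved T6 reproduces the right-hand side of the equation.
assert add(add(Z, T6), E) == goal
```

With full homogeneous transforms, + would be matrix composition and - matrix inversion, but the solve-for-the-unknown-frame step is the same.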
Of course, the purpose of setting ARM and TOL is to simplify the expression of related motions in the same coordinate frame. PAL is still under development; the system described in [93] deals only with position data obtained from the user rather than from the robot. Much of the development of PAL has been devoted to the natural use of guiding to define the coordinate frames.
The basic idea is that sensory information serves to define the actual value of some coordinate frame in the coordinate equations. 9) MCL 1979-Present: MCL is an extension of the APT language for Numerically Controlled machining to encompass robot control, including the following capabilities: 1) data types, e.g., strings, booleans, reals, and frames; 2) control structures for conditional execution, iterative execution, and multiprocessing; 3) real-time input and output; 4) a vision interface, including the ability to define a shape to be located in the visual field. Extending APT provides some ease of interfacing with existing machining facilities, including interfaces to existing geometric databases. By retaining APT compatibility, MCL can also hope to draw on the existing body of skilled APT part programmers. On the other hand, the APT syntax, which was designed nearly 30 years ago, is not likely to gain wide acceptance outside of the NC-machining community. 10) Additional Systems: Many other robot language systems are reported in the literature; among these are the following: 1) ML is a low-level robot language developed at IBM, with operations comparable to those of a computer assembly language. The motion commands specified joint motions. The language provided support for guarded moves by means of SENSOR commands that enabled sensor monitors; when a monitor was activated by a sensor value outside of the specified range, all active motions were terminated. The language has a flexible way of defining and accessing input or output lines, either as single- or multiple-bit numbers.
There are three major types of commercial CAD systems, differing in their representations of solid objects: 1) line: objects are represented by the lines and curves needed to draw them; 2) surface: objects are represented as a set of surfaces; 3) solid: objects are represented as combinations of primitive solids. Line systems and some surface systems do not represent all the geometric information needed for task planning.
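The third representation style can be illustrated with a minimal constructive solid geometry (CSG) sketch, in which solids are boolean combinations of primitives tested by point membership. The primitives and the sample part below are invented:

```python
# Sketch of the 'solid' (CSG) representation style: objects are boolean
# combinations of primitive solids, here tested by point membership.
# The primitives and the sample part are invented for illustration.

def box(lo, hi):
    return lambda p: all(l <= x <= h for x, l, h in zip(p, lo, hi))

def cylinder_z(cx, cy, r, z0, z1):
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 <= r*r and z0 <= p[2] <= z1

def union(a, b):      return lambda p: a(p) or b(p)
def difference(a, b): return lambda p: a(p) and not b(p)

# A plate with a hole: a 4x4x1 block minus a radius-0.5 through-hole.
plate = difference(box((0, 0, 0), (4, 4, 1)),
                   cylinder_z(2, 2, 0.5, 0, 1))

print(plate((1, 1, 0.5)))   # True: solid material
print(plate((2, 2, 0.5)))   # False: inside the hole
```

A wireframe ("line") model of the same plate could not answer such inside/outside queries, which is exactly the information a task planner needs for collision and contact reasoning.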
The legal motions of an object are constrained by the presence of other objects in the environment and the form of the constraints depends in detail on the shapes of the objects. This is the fundamental reason why a task planner needs geometric descriptions of objects. There are additional constraints on motion imposed by the kinematic structure of the robot itself. If the robot is turning a crank or opening a valve, then the kinematics of the crank and the valve impose additional restrictions on the robot’s motion. The kinematic models provide the task planner with the information required to plan robot motions that are consistent with external constraints. The bulk of the information in a world model remains unchanged throughout the execution of a task. The kinematic descriptions of linkages are an exception, however. As a result of the robot’s operation, new linkages may be created and old linkages destroyed. For example, inserting a pin into a hole creates a new linkage with one rotational and one translational degree of freedom. Similarly, the effect of inserting the pin might be to restrict the motion of one plate relative to another, thus removing one degree of freedom from a previously existing linkage. The task planner must be apprised of these changes, either by having the user specify linkage changes with each new task state, or by having the planner deduce the new linkages from the task state description. In planning robot operations, many of the physical characteristics of objects play important roles. The mass and inertia of parts, for example, will determine how fast they can be moved or how much force can be applied to them before they fall over. Also, the coefficient of friction between a peg and a hole affects the jamming conditions during insertion.
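The linkage bookkeeping described above can be sketched as a small world-model structure that records the degrees of freedom between pairs of objects as operations create and destroy linkages. The object names and operations in this Python sketch are invented:

```python
# Sketch of linkage bookkeeping in a world model: operations such as
# inserting a pin create linkages with particular degrees of freedom,
# which the task planner must track. All object names are invented.

class WorldModel:
    def __init__(self):
        self.linkages = {}   # frozenset{obj_a, obj_b} -> degrees of freedom

    def add_linkage(self, a, b, dof):
        self.linkages[frozenset((a, b))] = dof

    def remove_linkage(self, a, b):
        self.linkages.pop(frozenset((a, b)), None)

    def dof(self, a, b):
        # 6 DOF (3 translational + 3 rotational) means unconstrained.
        return self.linkages.get(frozenset((a, b)), 6)

model = WorldModel()
# Inserting a pin into a hole: one rotational + one translational DOF remain.
model.add_linkage("pin", "plate1", dof=2)
# The pin also constrains plate1 against plate2, removing a degree of
# freedom from their previously recorded linkage.
model.add_linkage("plate1", "plate2", dof=2)
print(model.dof("pin", "plate1"))   # 2
print(model.dof("pin", "plate2"))   # 6: no direct linkage recorded
```

Either the user declares such changes with each new task state, or the planner deduces them from the state description; in both cases the model must end up with the bookkeeping shown here.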
The feasible operations of a robot are not sufficiently characterized by its geometric, kinematic, and physical descriptions. We have repeatedly stressed the importance of a robot's sensing capabilities: touch, force, and vision. For task-planning purposes, vision allows obtaining the position of an object to some specified accuracy at execution time. Force sensing allows performing guarded and compliant motions. Touch information could serve in both capacities, but its use remains largely unexplored [36]. In addition to sensing, there are many individual characteristics of robots that must be described in the world model: velocity and acceleration bounds, positioning accuracy of each of the joints, and workspace bounds, for example. Much of the complexity in a world model arises from modeling the robot, but this is done only once. Geometric, kinematic, and physical models of other objects must be provided for each new task, however.

[Figure: task description as a sequence of model states, showing the steps during execution of the task.]

An assembly of several parts illustrates one possible sequence of models for a simple task. All of the models in the task specification share the descriptions of the robot's environment and of the objects being manipulated; the steps in the sequence differ only in the positions of the objects. Hence, a task specification is, to a first approximation, a model of the robot's world together with a sequence of changes in the positions of the model components. A model state is given by the positions of all the objects in the environment. Hence, tasks may be defined, in principle, by sequences of states of the world model. The sequence of model states needed to fully specify a task depends on the capabilities of the task planner. The ultimate task planner might need only a description of the initial and final states of the task.
This has been the goal of much of the research on automatic problem solving within artificial intelligence. These problem-solving systems typically do not specify the detailed robot motions necessary to achieve an operation; they produce plans without specifying the robot path or any sensory operations. (The most prominent exception is STRIPS, which included mechanisms to carry out the plan in the real world.) In contrast to these systems, task planners need significant information about intermediate states, but they can be expected to produce a much more detailed robot program. The positions needed to specify a model state are essentially similar to those needed to specify positions to robot-level systems. The option of using the robot to specify positions is not open, however. The other techniques described in Section III-B are still applicable. The use of symbolic spatial relationships is particularly attractive for high-level task specifications. We have indicated that model states are simply sets of positions and task specifications are sequences of models. Therefore, given a method such as symbolic spatial relationships for specifying positions, we should be able to specify tasks. This approach has several important limitations, however. We noted earlier that a set of positions may overspecify a state. A typical example [23] of this difficulty arises with symmetric objects, for example a round peg in a round hole. The specific orientation of the peg around its axis given in a model is irrelevant to the task. This problem can be solved by treating the symbolic spatial relationships themselves as specifying the state, since these relationships can express families of positions. Another, more fundamental, limitation is that a geometric and kinematic model of an operation's final state is not always a complete specification of the desired operation. One example of this is the need to specify how hard to tighten a bolt during an assembly.
In general, a complete description of a task may need to include parameters of the operations used to reach one task state from another. The alternative to task specification by a sequence of model states is specification by a sequence of operations. Thus instead of building a model of an object in its desired position, we can describe the operation by which that position can be achieved. The description should still be object-oriented, not robot-oriented; for example, the target torque for tightening a bolt should be specified relative to the bolt and not the robot joints. Operations will also include a goal statement involving spatial relationships between objects. The spatial relationships given in the goal not only specify positions, they also indicate the physical relationships between objects that should be achieved by the operation. Specifying that two surfaces are AGAINST each other, for example, should produce a compliant motion that moves until the contact is actually detected, not a motion to the position where contact is supposed to occur. For these reasons, existing proposals for task-level programming languages have adopted an operation-centered approach to task specification. The task specified above as a sequence of model states can be specified by the following symbolic operations, assuming that the model includes names for objects and object features:

    PLACE BEARING1 SO (SHAFT FITS BEARING1.HOLE)
      AND (BEARING1.BOTTOM AGAINST SHAFT.LIP)
    PLACE SPACER SO (SHAFT FITS SPACER.HOLE)
      AND (SPACER.BOTTOM AGAINST BEARING1.TOP)
    PLACE BEARING2 SO (SHAFT FITS BEARING2.HOLE)
      AND (BEARING2.BOTTOM AGAINST SPACER.TOP)
    PLACE WASHER SO (SHAFT FITS WASHER.HOLE)
      AND (WASHER.BOTTOM AGAINST BEARING2.TOP)
    SCREW-IN NUT ON SHAFT TO (TORQUE = t0)

The first step in the task-planning process is transforming the symbolic spatial relationships among object features in the SO clauses above into equations on the position parameters of objects in the model.
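For the stacking portion of this task, that transformation can be sketched very simply: each AGAINST relation between a part's bottom and the surface below it fixes the part's height. The part heights in this Python sketch are invented:

```python
# Sketch of turning AGAINST relations into position equations for a
# stacking task: each AGAINST between a part's bottom and a supporting
# surface fixes that part's z-coordinate. Part heights are invented.

heights = {"bearing1": 1.0, "spacer": 0.5, "bearing2": 1.0, "washer": 0.2}

# (part, supporting surface) pairs read off the SO clauses, in order.
against = [("bearing1", "shaft-lip"), ("spacer", "bearing1"),
           ("bearing2", "spacer"), ("washer", "bearing2")]

z = {"shaft-lip": 0.0}           # top-surface heights solved so far
bottoms = {}
for part, support in against:
    bottoms[part] = z[support]               # part bottom AGAINST support top
    z[part] = z[support] + heights[part]     # this part's top surface

print(bottoms)
```

The FITS relations would similarly constrain the lateral position parameters to the shaft axis; a real system must of course solve such constraints in full 6-degree-of-freedom generality rather than along a single axis.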
These equations must then be simplified as far as possible to determine the legal ranges of positions of all objects [1], [78], [94]. The symbolic form of the relationships is used during program synthesis as well. We have mentioned that the actual positions of objects at task execution time will differ from those in the model; among the principal sources of error are part variation, robot position errors, and modeling errors. Robot programs must tolerate some degree of uncertainty if they are to be useful, but programs that guarantee success under worst case error assumptions are difficult to write and slow to execute. Hence, the task planner must use expectations on the uncertainty to choose motion and sensing strategies that are efficient and robust [44]. If the uncertainty is too large to guarantee success, then additional sensory capabilities or fixtures may be used to limit the uncertainty. For this reason, estimated uncertainties are a key part of a task specification. Robot Program Synthesis: The synthesis of a robot program from a task specification is the crucial phase of task planning. The major steps involved in this phase are grasp planning, motion planning, and plan checking. The output of the synthesis phase is a program composed of grasp commands, several kinds of motion specifications, sensor commands, and error tests. This program is in a robot-level language for a particular robot and is suitable for continuous execution.
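The role of estimated uncertainties in strategy choice can be sketched as follows: independent error sources are combined (by root-sum-square here) and compared with the task clearance. The numbers and the decision rule in this Python sketch are invented for illustration:

```python
# Sketch of using uncertainty estimates to choose a motion strategy:
# independent position errors are combined (root-sum-square here) and
# compared with the task clearance. All numbers are invented.

import math

def combined_uncertainty(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

def choose_strategy(clearance, *sigmas):
    if combined_uncertainty(*sigmas) < clearance:
        return "position-controlled insertion"
    return "guarded approach + compliant insertion"

# part variation, robot positioning error, modeling error (mm):
print(choose_strategy(1.0, 0.1, 0.2, 0.1))   # loose clearance: move directly
print(choose_strategy(0.05, 0.1, 0.2, 0.1))  # tight clearance: use sensing
```

A worst-case planner would sum the error bounds instead of taking the root-sum-square; the point is only that the specification must carry the uncertainty estimates for either computation to be possible.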

Grasping is a key operation in robot programs, since it affects all subsequent motions. The grasp planner must choose where to grasp objects so that no collisions will result when grasping and moving them. In addition, the grasp planner must choose grasp positions so that the grasped objects are stable and do not slip in the gripper. In particular, the grasp must withstand the forces generated during motion and during contact with other objects in the environment. The grasp operation should also be planned so that it reduces the uncertainty. Once the object is grasped, the task planner must synthesize motions that will achieve the desired goal of the operation reliably. We have seen that robot programs involve three basic kinds of motions: free, guarded, and compliant. Motions during an assembly operation, for example, may have up to four submotions: a guarded departure from the current position, a free motion towards the destination position of the task step, a guarded approach to contact at the destination, and a compliant motion to achieve the goal position.
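The grasp-planning constraints just listed, collision avoidance and stability, can be sketched as a filter over candidate grasps. This one-dimensional Python sketch uses invented geometry and thresholds:

```python
# Sketch of grasp planning as filtering: candidate grasp positions are
# rejected if the gripper would come too close to an obstacle or if the
# grasp is too far from the center of mass to be stable. All geometry
# and thresholds are invented for illustration.

def plan_grasp(candidates, obstacles, com, max_offset=1.0, min_clear=0.5):
    for g in candidates:
        # Reject grasps whose gripper would collide with nearby obstacles.
        if any(abs(g - ob) < min_clear for ob in obstacles):
            continue
        # Reject grasps too far from the center of mass (slip/tip risk).
        if abs(g - com) > max_offset:
            continue
        return g
    return None   # no feasible grasp: re-orient the part or report failure

# 1-D slice of a part: candidate grasp heights along its axis.
candidates = [0.2, 1.0, 2.8]
obstacles = [0.0]        # fixture surface at the bottom of the part
print(plan_grasp(candidates, obstacles, com=1.2))
```

A real grasp planner works with full object and gripper geometry and with the forces expected during the subsequent motions, but the structure, generate candidates and prune them against constraints, is the same.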
Compliant motions are designed to maintain contact among objects.
To guarantee contact with a surface, the robot must control forces rather than positions; this is the province of compliant motion. Compliant motions assume that the robot is already in contact with an object. A guarded motion in the presence of uncertainty, however, does not allow the program to determine completely the relative position of the objects; several outcomes may be possible as a result of the motion. A strategy of compliant motions, guarded motions, and sensing must therefore be synthesized to reliably achieve the specified goal; the strategy must guarantee that the desired final state is achieved.
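The multiple-outcome problem for guarded motions can be sketched as a classification of the position at which the motion halted. The contact locations and tolerance in this Python sketch are invented:

```python
# Sketch of disambiguating the outcomes of a guarded move: after the
# motion halts, the measured position is compared against the expected
# contact locations to decide what actually happened. Values invented.

def classify_outcome(stop_z, hole_bottom_z, surface_z, tol=0.05):
    """Decide what a downward guarded move actually hit."""
    if abs(stop_z - surface_z) < tol:
        return "hit-top-surface"      # missed the hole: recovery needed
    if abs(stop_z - hole_bottom_z) < tol:
        return "reached-hole-bottom"  # insertion succeeded
    return "unexpected-contact"       # e.g., jammed partway: replan

print(classify_outcome(1.02, 0.0, 1.0))   # stopped near the top surface
print(classify_outcome(0.01, 0.0, 1.0))   # stopped near the hole bottom
print(classify_outcome(0.50, 0.0, 1.0))   # stopped partway down
```

Each branch of such a classification corresponds to a different continuation in the synthesized strategy, which is why the planner must enumerate the possible outcomes rather than assume a single one.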

Most of the difficulty in motion synthesis stems from the need to operate under uncertainty in the positions of the objects and of the robot. These individual uncertainties can be modeled and their combined effect on positions computed. The requirements for successful completion of task steps can then be used to choose the strategy for execution. Hence, an important part of robot program synthesis should be the inclusion of sensory tests for error detection. Error detection and correction in robot programs is a very difficult problem.

A number of task-level language systems have been proposed, but no complete system has been implemented. As we saw above, many fundamental problems remain unsolved in this area; the languages have served primarily as a focus of research, rather than as usable systems.
Having chosen a strategy, the planner computes the additional parameters needed to specify the strategy's motions, such as grasp positions and approach positions. A program is produced by inserting these parameters into the procedure skeleton that implements the chosen strategy. The approach to strategy synthesis based on procedure skeletons assumes that the task geometry for common subtasks is predictable and can be divided into a manageable number of classes, each requiring a different skeleton. This assumption is needed because the sequence of motions in the skeleton will only be consistent with a particular class of geometries.
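Synthesis by procedure skeletons can be sketched as template instantiation: a fixed motion sequence with open parameters that the planner fills in for a particular task instance. The skeleton, command names, and positions in this Python sketch are invented:

```python
# Sketch of synthesis by procedure skeletons: a fixed motion sequence
# with open parameters (grasp, approach, and goal positions) that the
# planner fills in. Commands and positions are invented for illustration.

INSERTION_SKELETON = [
    "MOVE arm TO {grasp}",
    "CLOSE gripper",
    "MOVE arm TO {approach}",
    "GUARDED-MOVE arm TOWARD {goal}",
    "COMPLIANT-MOVE arm TO {goal}",
]

def instantiate(skeleton, **params):
    """Fill a skeleton's open parameters, yielding a robot-level program."""
    return [step.format(**params) for step in skeleton]

program = instantiate(INSERTION_SKELETON,
                      grasp="(20, 30, 3)",
                      approach="(25, 35, 5)",
                      goal="(25, 35, 0)")
print(program[0])
```

The skeleton embodies one class of task geometries (here, a vertical insertion); a different geometry class, such as a lateral insertion, would require a different fixed sequence, which is exactly the assumption noted above.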
Such a system does not deal with obstacle avoidance, automatic grasping, or sensory operations. Some robot-level language systems have proposed extensions to allow some task-level specifications.


A key problem in the development of robot languages has been the reluctance, on the part of users and researchers alike, to accept that a robot programming language must be a sophisticated computer language.

The evidence seems to point to the conclusion that a robot language should be a superset of an established computer programming language, not a subset. These developments should be matched with continuing efforts at raising the level of robot programming towards the task level. By automating many of the routine programming functions, we can simplify the programming process and thereby expand the range of applications available to robot systems. This should greatly stimulate development of the sophisticated robot programming systems that we will surely need in the future.
