2.1 Derivation of a control method




In general, an unknown visual-motor transfer function $y = f(x)$, $f: \mathbf{R}^n \rightarrow \mathbf{R}^m$, can be written as a multidimensional Taylor expansion. In particular we are interested in the linear approximation:

\[ f(x) = f(x_0) + J(x_0)\,(x - x_0) + R(x, x_0) \]

where for a sufficiently ``smooth'' $f$ the residual term $R(x, x_0)$ is small for small $\|x - x_0\|$. $J$ is called the visual motor Jacobian and is defined as

\[ J(x) = \frac{\partial f}{\partial x}, \qquad J_{ij} = \frac{\partial f_i}{\partial x_j}. \]

The visual motor Jacobian relates small changes in vision space to small changes in motor (control) space, e.g., for $x \in \mathbf{R}^n$, $y \in \mathbf{R}^m$, $\Delta x = x - x_0$ and $\Delta y = f(x) - f(x_0)$ we have:

\[ \Delta y \approx J(x_0)\,\Delta x. \]

Thus the visual motor Jacobian acts as a local model of the transfer function behavior around $x_0$ (and $y_0 = f(x_0)$).
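To make the role of the Jacobian concrete, the following Python sketch estimates $J$ by finite differences around $x_0$ and checks the linear prediction against a synthetic transfer function. This is purely illustrative: the function `f`, the probe size `eps` and the operating point are made-up stand-ins, and on a real robot $J$ must be estimated from observed moves (the topic of Sec. 2.2), not by analytic probing.

    import numpy as np

    def estimate_jacobian(f, x0, eps=1e-4):
        # Finite-difference estimate of the visual motor Jacobian J = df/dx.
        # f maps motor coordinates x (n,) to visual features y (m,).
        y0 = f(x0)
        J = np.empty((y0.size, x0.size))
        for j in range(x0.size):
            dx = np.zeros(x0.size)
            dx[j] = eps
            J[:, j] = (f(x0 + dx) - y0) / eps
        return J

    # Synthetic nonlinear transfer function standing in for camera + robot.
    f = lambda x: np.array([np.sin(x[0]) + x[1], x[0] * x[1], np.cos(x[1])])
    x0 = np.array([0.3, -0.2])
    J = estimate_jacobian(f, x0)

    # Local model: f(x0 + dx) ~ f(x0) + J dx; the residual R shrinks with |dx|.
    dx = np.array([0.01, 0.02])
    print(f(x0 + dx) - (f(x0) + J @ dx))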

Given that we observe an initial perceptual state represented in the feature vector $y_0$, and that we wish to bring about the (goal) state $y^*$, the best prediction, using our linear model $J$, of the required change $\Delta x$ in robot control signals is given by solving the system of equations:

\[ J\,\Delta x = y^* - y_0. \]

Executing this move, i.e. setting $x_1 = x_0 + \Delta x$, changes the perceived state to $y_1 = f(x_1)$. Unless $f$ is a linear transfer function, $y_1$ is typically not equal to $y^*$. However, if $\Delta x$ is small and $f$ smooth, $\|y^* - y_1\|$ is likely to be smaller than $\|y^* - y_0\|$, and the process can be repeated, yielding a sequence $y_1, y_2, \ldots$ successively closer to $y^*$. What we have is a quasi-Newton method for solving a set of nonlinear equations in $x$ (find $x^*$ such that $f(x^*) = y^*$). In a continuous setting we have a similar control law:

\[ \dot{x} = K J^{+} (y^* - y) \]

where $K$ is a gain matrix and $J^{+}$ denotes the (pseudo)inverse of $J$.
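The discrete iteration can be summarized in a few lines. The sketch below reuses the synthetic `f` and the `estimate_jacobian` stand-in from above, solves $J\,\Delta x = y^* - y_k$ in the least-squares sense at each step, and executes the move; the goal is chosen to be reachable so the quasi-Newton sequence converges.

    def servo(f, x, y_goal, steps=20, tol=1e-8):
        # Quasi-Newton "look-and-move" loop: re-linearize, solve, execute.
        for _ in range(steps):
            y = f(x)
            if np.linalg.norm(y_goal - y) < tol:
                break
            J = estimate_jacobian(f, x)                    # local model at x
            dx, *_ = np.linalg.lstsq(J, y_goal - y, rcond=None)
            x = x + dx                                     # execute the move
        return x

    y_star = f(np.array([0.5, 0.4]))             # a reachable visual goal
    x_star = servo(f, x0, y_star)
    print(np.linalg.norm(f(x_star) - y_star))    # -> ~0 on convergence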

These two control laws implement image-based look-and-move and image-based visual servoing, respectively. These or similar control laws have been used in previously published work, e.g. [Hosoda, Asada 94][Hager et al 93][Chong, Conkie 90a]. However, there are two compelling reasons why the above design is not sufficient. First, it is well known that Newton methods are not globally convergent [Gustafsson 91]. Even in cases where the continuous system should be stable, the low sampling frequency of TV-camera-based vision signals makes it difficult to obtain both stability and reasonable performance in the discrete-time version [Feddema et al. 92]. At the expense of generality in the visual processing, some researchers (e.g. [Wije et al 93]) have circumvented this problem by using special-purpose high-speed ``vision'' systems. Drawing on results in optimization, we instead use a globally convergent step-restricted method, adapting the step length to keep it within a range where our local model is sufficiently accurate. In spirit, this is similar to the control-theory approach of changing controller gains and limits dynamically to account for local conditions in different parts of the workspace.
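A minimal form of such step restriction simply clamps the Newton step to a trust radius; the radius itself would be adapted as described below, and the name `max_step` is our own:

    def restrict_step(dx, max_step):
        # Scale the Newton step so that ||dx|| <= max_step, keeping the move
        # inside the region where the local linear model is trusted.
        norm = np.linalg.norm(dx)
        return dx if norm <= max_step else dx * (max_step / norm)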

The second reason is that, in a high-DOF system, when the simple control scheme servos on a single, distant goal, it may drift substantially due to differences in model accuracy along different directions in the high-DOF space. Relying only on a distant goal may also result in unacceptable deviation from the desired path should some unexpected disturbance occur. The solution to this problem involves the generation of way points, which are essentially additional visual goals describing the visual situation at intermediate points. By requiring the system to pass near or through such way points, drift from the desired trajectory can be eliminated. The current low-level visual-space trajectory generator lays out way points along straight lines in $\mathbf{R}^m$, where $m$ is the dimension of the visual perception space. The distance between the way points is adapted to fulfill a Marquardt condition of model accuracy. The Marquardt algorithm is simple [Gustafsson 91]: if the model error at time $k$, $e_k = \|y_k - \hat{y}_k\|$ (the difference between the observed and the predicted visual state), satisfies $e_k > \epsilon_1$, adjust the step size downward, for instance halve it. If $e_k < \epsilon_2$, then increase the step size. The tuning parameters $\epsilon_1$ and $\epsilon_2$ are set differently for different types of movements (e.g. long-range transportation vs. fine manipulation), trading accuracy for time.
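A sketch of the two pieces, way-point generation and Marquardt-style step adaptation, might look as follows. The halving rule is from the text; doubling on small error and the names `eps1`/`eps2` are our assumptions:

    def way_points(y0, y_goal, step):
        # Intermediate visual goals along the straight line from y0 to y_goal
        # in R^m, roughly 'step' apart; the last point is y_goal itself.
        n = max(int(np.ceil(np.linalg.norm(y_goal - y0) / step)), 1)
        return [y0 + (y_goal - y0) * k / n for k in range(1, n + 1)]

    def adapt_step(step, model_error, eps1, eps2):
        # Marquardt rule [Gustafsson 91]: shrink the step on large model
        # error, grow it on small (eps1 > eps2, tuned per movement type).
        if model_error > eps1:
            return step / 2.0
        if model_error < eps2:
            return step * 2.0
        return step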

The implicit assumption in planning trajectories only along straight lines is that, if more complex trajectories are required, they will be provided by higher-level modules. Exact straight-line trajectories are typically not achievable, but for reasonably short distances in visual space straight-line plans have proven to be a good enough approximation to achieve convergence and stability.

  
Figure 2: Planning intermediate way points in visual space reduces trajectory error compared to pure servoing on the goal.

Dividing the transition between $y_0$ and $y^*$ into several subsegments, with intermediate visual goals along the straight line between them, also allows us to bring the manipulator back onto the desired trajectory as soon as possible in the event of a deflection, by servoing on the nearest subgoal between the current pose and the goal. This is illustrated in fig. 2. This behavior also reflects how the human arm reacts to a deflection.
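One simple way to realize this recovery behavior (a hypothetical helper, not necessarily the implementation used here) is to re-select, after each deflection, the closest not-yet-passed way point as the active subgoal:

    def nearest_subgoal(y_now, remaining):
        # Pick the closest remaining way point as the active visual subgoal;
        # way points before it are dropped so the system only moves forward.
        i = int(np.argmin([np.linalg.norm(g - y_now) for g in remaining]))
        return remaining[i], remaining[i:]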


