Successful application of machine vision and robotics in unstructured environments, without any a priori camera or kinematic models, has proven hard, yet there are many such environments where robots would be useful. We perform transfer function estimation, or ``learning'', on-line by estimating piecewise linear models. The robot controller uses the learned models to predict how to move in order to achieve new goals. We have shown how to improve a standard, Newton-type visual servoing algorithm. We use a trust region method to achieve convergence for difficult transfer functions, and ``embedding''[25] or homotopy methods to transform a positioning task on a non-convex domain of the transfer function into a series of smaller tasks, each on a smaller, convex domain. Intuitively, both techniques serve to synchronize actions with model acquisition, so that the actions never run too far ahead before the local model has been adapted to the new environment.
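To make the scheme concrete, the following is a minimal sketch, not our implementation, of how a Broyden-style secant update can maintain the piecewise linear model on-line while a trust region bounds each Newton-type step; all names (J, radius, etc.) are illustrative assumptions.
\begin{verbatim}
import numpy as np

def broyden_update(J, dx, dy):
    """Rank-one secant update so the new model satisfies J @ dx ~= dy,
    where dx is the executed motion and dy the observed feature change."""
    denom = float(dx @ dx)
    if denom > 1e-12:                 # skip degenerate (tiny) motions
        J = J + np.outer(dy - J @ dx, dx) / denom
    return J

def trust_region_step(J, error, radius):
    """Newton-type step toward zeroing the visual error, clipped to the
    trust-region radius inside which the local linear model is trusted."""
    dx, *_ = np.linalg.lstsq(J, -error, rcond=None)
    step = np.linalg.norm(dx)
    if step > radius:                 # scale the step back into the region
        dx *= radius / step
    return dx
\end{verbatim}
In a servo loop, each executed step and the resulting feature change would feed broyden_update, so model estimation and action proceed in lockstep, as described above.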
We have carried out extensive experiments and found that for typical robot arms (PUMA 761 and 762) and hands (Utah/MIT), repeatability is up to five times better under visual servo control than under traditional joint control. We also found that the adaptive visual servoing controller is very robust. The algorithm can successfully estimate the image Jacobian without any prior information while carrying out a 3 DOF manipulation task, and we showed how to bootstrap higher DOF tasks from the 3 DOF Jacobian estimate. We were also able to verify that redundant visual information is valuable: errors due to both imprecise tracking and imprecise goal specification were reduced. Furthermore, highly redundant systems allow us to detect outliers and to deal with partial occlusion.
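As an illustration of how redundancy can be exploited, the sketch below (our own simplification, with an assumed median-residual threshold k) solves the overdetermined system J dx = dy by least squares and flags features with large residuals, such as mistracked or partially occluded points, as outliers.
\begin{verbatim}
import numpy as np

def solve_with_outlier_rejection(J, dy, k=3.0):
    """With more feature measurements (rows) than motion DOF (columns),
    solve for the motion robustly: re-solve after dropping features whose
    residual exceeds k times the median absolute residual."""
    dx, *_ = np.linalg.lstsq(J, dy, rcond=None)
    r = np.abs(dy - J @ dx)              # per-measurement residuals
    scale = np.median(r) + 1e-12
    inliers = r < k * scale              # flag gross outliers
    if inliers.sum() >= J.shape[1]:      # enough rows left to solve
        dx, *_ = np.linalg.lstsq(J[inliers], dy[inliers], rcond=None)
    return dx, inliers
\end{verbatim}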
We have shown how the estimated models can be used for model-free, image-based view synthesis of an articulated agent. In that application we traded viewing quality for simplicity of use and speed of model acquisition. The system is currently limited by the performance of the visual front end, in which the raw intensity image is converted into a parameterized visual representation.
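A very small sketch of the underlying idea, under our own simplifying assumptions: the locally estimated visual-motor Jacobian predicts feature positions for an unseen configuration, and a renderer would then warp image patches to those positions (the warping stage is omitted here).
\begin{verbatim}
import numpy as np

def synthesize_feature_view(y0, x0, J, x_new):
    """Predict feature positions at configuration x_new from the local
    linear model around (x0, y0); an image warp driven by these predicted
    positions would produce the synthesized view (not shown)."""
    return y0 + J @ (x_new - x0)
\end{verbatim}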