Sensor Configuration for Error Detection and Recovery
Amy J. Briggs
Abstract
Much of the early work in robotics focused on developing guaranteed
plans for accomplishing tasks specified at a high level. Such task
specifications might be of the form "mesh these two gears", or "place
part A inside region B". It is not always possible, however,
especially in the realm of assembly planning, to generate guaranteed
plans. For example, errors in tolerancing of the parts might render an
assembly infeasible. The Error Detection and Recovery (EDR) framework
of Donald was developed to deal with these inadequacies of the guaranteed
planning framework. An EDR strategy either achieves the goal, if it is recognizably
reachable, or signals failure. Given a geometrically-specified
goal region G, an EDR strategy involves computing a failure region H and a
motion plan that will terminate recognizably either in G or H.
The question addressed in this work is how to compute sensing
strategies for determining which of G and H has been attained.
We propose a method for strengthening the guarantee of
reaching G or H into a guarantee of recognizability. In particular,
we show how to configure a sensor or set of sensors so that an object
in G or in H can be distinguished. Our approach assumes a general
sensor model, and builds on algorithms for computing partial and complete
visibility maps based on point-to-point visibility between objects in
an environment. We characterize recognizability and confusability
regions, that is, sensor placement regions from which an object in G or
in H can be distinguished, and regions from which attainment of G or H
could be confused.
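To make these regions concrete, the sketch below (not the paper's algorithm) uses a toy point-to-point visibility model in the plane: G and H are represented by hypothetical witness points, obstacles by line segments, and a candidate sensor placement is assumed to land in the recognizability region when it has line-of-sight to both witnesses (so it can observe which region holds the object) and in the confusability region when it sees neither. The function names and the classification rule are illustrative assumptions.

```python
# Toy sketch of recognizability vs. confusability regions under a
# simplified line-of-sight sensor model. All names and the
# classification rule are illustrative assumptions, not the paper's.

def _ccw(a, b, c):
    # Signed area test: positive if a->b->c turns counterclockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, a, b):
    # True if open segments pq and ab properly intersect.
    d1, d2 = _ccw(a, b, p), _ccw(a, b, q)
    d3, d4 = _ccw(p, q, a), _ccw(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(sensor, target, obstacles):
    # Line-of-sight: the sensor-target segment crosses no obstacle.
    return not any(segments_cross(sensor, target, a, b) for a, b in obstacles)

def classify(sensor, g_witness, h_witness, obstacles):
    sees_g = visible(sensor, g_witness, obstacles)
    sees_h = visible(sensor, h_witness, obstacles)
    if sees_g and sees_h:
        return "recognizability"  # can observe which of G, H was attained
    if not sees_g and not sees_h:
        return "confusability"    # cannot tell the two outcomes apart
    return "partial"              # sees only one witness point

# One vertical wall separating placements on the left from H.
wall = [((2.0, -1.0), (2.0, 1.0))]
g, h = (1.0, 0.0), (3.0, 0.0)
print(classify((0.0, 0.0), g, h, wall))  # sees G only: "partial"
print(classify((2.5, 2.0), g, h, wall))  # above the wall, sees both
```

Sampling `classify` over a grid of candidate placements would give a discrete approximation of the recognizability and confusability regions; the paper's approach instead characterizes these regions exactly from partial and complete visibility maps.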