On the one hand, control theory relies on proofs carried out before the experiment to validate an a priori (partially) known deterministic interaction between the robot and its environment; the gain of this methodology is the expected reliability of the real system, grounded in its model. On the other hand, statistical methods are used when the environment is assumed unknown; in return, the robot may possess adaptation capabilities, but its behavior is not predictable before the experiment and is not necessarily reliable.

We believe that the core issue with the first approach (which we call the deterministic approach) is that it cannot handle the variety of all possible situations in an unconstrained environment (we call this the variability issue). Conversely, we think that the statistical approach essentially lacks a validation stage before the experiment that could falsify a given model.

The aim of this article is to propose a methodology that combines a validation stage before the experiment with the construction of models that handle unknown environments. To do so, we build models that carry a validation statement, together with internal parameters that fix a compromise between falsifiability and robustness. For simple cases, we show that it is possible to set the internal parameters so as to meet these two antagonistic constraints. As a consequence, we stress that the precision of the model has a lower bound, and we derive a Heisenberg-like uncertainty principle.
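The abstract does not state the form of this principle; as a purely illustrative sketch (all symbols here are assumptions, not the paper's notation), such a Heisenberg-like trade-off between falsifiability and robustness might be written as

    \Delta_{\mathrm{falsifiability}} \cdot \Delta_{\mathrm{robustness}} \;\ge\; C,

where C > 0 would be a constant fixed by the model class, so that tightening one bound necessarily loosens the other.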