Gaze Control for Goal-Oriented Humanoid Walking

J. F. Seara, O. Lorch, and G. Schmidt
Institute of Automatic Control Engineering, Technische Universität München, D-80290 Munich, Germany

In this article a predictive task-dependent gaze control strategy for goal-oriented humanoid walking is presented. In the context of active vision systems we introduce an information theoretical approach for the maximization of visual information. Based on two novel concepts, the Information Content of a view situation and the Incertitude, we present a method for selecting optimal subsequent view directions, thus contributing to an improved performance in typical semi-autonomous robot locomotion tasks. Simulations and experimental results dealing with the duality of concurrent tasks during locomotion, i.e. obstacle avoidance and self localization, demonstrate the applicability of our approach to humanoid walking machines.

Keywords: Gaze Control, Active Vision, Vision Guided Humanoid Walking.

Research Directions

At Humanoids 2000 [8] we presented the ViGWaM emulation environment to support our theoretical studies on the interaction between vision and biped locomotion. This framework allows the validation and extrapolation of our vision-based control approaches to a variety of walking machines. Figure 1 shows the simplified overall control and emulation architecture and its coupling with the experimental set-up. Visual perception and image processing are coupled with ViGWaM, the Vision-Guided Virtual Walking Machine. Its operation is visualized in an augmented reality display, where the image of an external camera observing the real scenario is overlayed by a virtual 3D graphic animation of the walking machine.

Figure 1: Emulator architecture and experimental set-up with ViGWaM overlayed.

Theoretical investigations carried out in our group can be subdivided into three major directions: (i) development of strategies for walking pattern primitive synthesis and concatenation to a stable step sequence allowing situation-dependent walking, see [4]; (ii) algorithms for robust image processing and obstacle reconstruction considering the head motion in 6 DoFs during locomotion; and (iii) biologically motivated studies on intention-based gaze control with the objective to increase visual information in a task- and goal-dependent way [5]. In this article we present our approach to predictive task-dependent view direction control.

The article is organized as follows: We first present the "where to look next?" problem in the framework of active vision systems, the information theoretical approach taken, and its corresponding mathematical formalization. A general description of the proposed predictive gaze control strategy is then given together with the corresponding simulation results, followed by a summary of the benefits of the approach for the active vision system of ViGWaM. The article concludes with an outlook on possible improvements in gaze control obtainable by the proposed approach.
Active Vision and Gaze Control

Vision is one of the most powerful sensor modalities for both biological and artificial systems. It allows them to gather simultaneously "what" and "where" information about their environments. For biological agents, like humans or animals, vision proves to be the main aid to obstacle avoidance, localization, and navigation. Most of these visually guided locomotion "specialists" make use of an active vision system, i.e. a system capable of adjusting its visual parameters in a controlled manner to extract relevant information about the scene in space and time [2].

Figure 2 depicts the vision architecture of ViGWaM with the gaze control blocks highlighted. Image processing algorithms are adapted to the time-varying, task-dependent perception requirements. This is mirrored in the cascaded vision architecture presented here. The results of the image processing are fused with a 3D-map and are employed for step sequence adaptation and gaze control.

Figure 2: Cascaded vision architecture with gaze control highlighted.

Since in biologically inspired artificial systems the field of view is typically limited, a means of directing the visual sensor is required. This can be accomplished by a gaze control scheme which determines the orientation of the sensor, so that the relevant objects of the scene are within the field of view. Thus, the primary task of the gaze controller is to direct the visual sensor (overt attention in biological systems) rather than to determine the high-saliency regions in a given scene (covert attention). A gaze controller accomplishes the task of determining "where to look next?". Quite a few articles have been presented on the topic of view direction control, see for example [3, 6]. However, the "where to look next?" problem has so far been treated in a rather limited way: the goal was to determine the most conspicuous point or feature within an image in order to direct attention towards it, i.e. shifts of covert attention. In this article we present a novel approach to the problem with a focus on increasing system autonomy by task-oriented maximization of visual information.

Information Theoretical Approach

We consider a framework based on Shannon's Information Theory [9] for selecting the camera parameters that maximize the visual information, i.e. task-oriented optimization of the information that the captured image conveys about the state of the observer and the environment. The maximization of visual information implies an information quantification (a measure of information). Our approach to this quantification is related to the definition of the concept of Information Content.

A well-known definition of information based on Shannon's theory is: information is a measure of the decrement of uncertainty. To the best of our knowledge, no information theoretic treatment exists of how uncertainty can be employed in robotics without following a probabilistic analysis. Uncertainty is a concept normally applied in a probabilistic context (Kalman filtering, Markov models, etc., see for example [6, 7]). To point out that the novel approach presented here does not need an exhaustive probabilistic analysis of the visual sensor capabilities in order to come up with a task-dependent optimal gaze behavior, we refer to it with the notion of Incertitude.

We define Incertitude as a measure of the uncertainty (lack of reliable knowledge) related to the total amount of information necessary for the successful fulfillment of a given task. When expressed as a percentage, 0% Incertitude means an exact and sufficient amount of task-related knowledge, and 100% Incertitude means a total lack of useful information for the given task. We define the Information Content of a view situation, i.e. of the environment status and the sensor position and orientation, as the measure of how perception under these terms is capable of reducing Incertitude.

Applying these definitions within the framework of active vision systems, we present a methodological approach to quantify the Information Content of a view situation, i.e. to quantify the Incertitude variations in a specific perception status. We assume that the instantaneous visual perception of a scenario by the observer does not reduce its Incertitude to zero, i.e. there exist uncertainties in the measurements. It is noteworthy that this approach comes to stand in the general context of active robot perception and therefore requires a geometrical interpretation. The final goal is to improve an agent's performance in different autonomous locomotion tasks such as obstacle avoidance or navigation. The Information Content quantification of a view situation is based on the following observations:

• The visual perception of objects in the observer's immediate vicinity leads to a substantial reduction in the Incertitude of the type, location, and other features of these objects, which can be estimated by means of visual perception. A view of a nearby object implies a richer Information Content than the view of a distant one under the same perception conditions. Obviously, the quality of measurement decreases with distance, as close objects can be sensed with higher resolution. An approximation of this dependency is given by proportionality to the inverse of the squared object distance, see [7].

• The visual perception of an object projecting onto a central region of the visual field yields a greater reduction of the observer's Incertitude than the perception of objects in the image's outer parts (fixation is one of the more important tools in active vision). The Information Content of such view situations is richer due to the improved image quality for our purpose: there is less unmodeled optical distortion, because only the area near the optical axis is used and the features are imaged near the principal points. Moreover, when an object projects onto the image center, the probability that it lies completely within the image boundaries is higher and its analysis more reliable.

• The richness of the Information Content of a specific view situation depends not only on the above mentioned aspects but also on the scene perception time history. This is the history of gaze points and the accumulated useful information, i.e. the dynamical evolution of the whole process and hence the current Incertitude in the present situation.

Mathematical Formalization

In this section we present the mathematical formalization of the concepts introduced above. We represent the pose, i.e. position and orientation, of the active vision system in a global reference frame as

    Ω = [Ω_B  Ω_Ob]^T = [[q_1^B ... q_6^B] [q_1^Ob ... q_n^Ob]]^T,    (1)

where Ω_B is the pose of the base reference frame (B) of the active vision system and Ω_Ob describes the present pose of the observer (Ob) relative to B; the latter is related to the internal structural configuration of the vision system. The number of degrees of freedom n of, e.g., a stereo active vision system satisfies 1 ≤ n ≤ 12, being for example n = 3 for a stereo pan-tilt-roll camera head without vergence control. The case n = 12 would indicate independent relative motion between both cameras and between the cameras and the base reference frame, which is a rare case.
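For illustration, the pose parameterization in (1) can be written down directly as a data structure. The following is a minimal sketch under the pan-tilt assumption used later in the article (n = 2); all names are hypothetical and not part of the original implementation.

```python
from dataclasses import dataclass

@dataclass
class BasePose:
    """Omega_B: the 6-DoF pose q1..q6 of the base reference frame B."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

@dataclass
class ObserverConfig:
    """Omega_Ob: n-DoF observer configuration relative to B (here n = 2)."""
    pan: float = 0.0   # q1_Ob [rad]
    tilt: float = 0.0  # q2_Ob [rad]

@dataclass
class ViewState:
    """Full pose Omega = [Omega_B, Omega_Ob] of the active vision system."""
    base: BasePose
    observer: ObserverConfig
```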
active vision system a view direction control al- We assume the existence of an initial amount gorithm must be employed to determine how to of knowledge about the scene (i.e. statically pre- direct the sensor so that the interesting objects of acquired information which can be used to con- the scene are within its field of view (Fixation). A struct an initial 3D-geometrical map): pose infor- simple reactive controller neglecting the delay due mation about the objects (index i), i.e. the po- to the non-zero image acquisition and processing sition (index p) and orientation (index r) of the time, seems therefore inappropriate for active vi- objects, in the scenario considered in the task, sion application which require not only a reactive component but also a purposive task-dependent the initial values of a parameter set which deter- active behavior. Prediction proves to be an ap- mines the degree of Incertitude ν propriate method to improve performance in such In order to describe the perception process we systems. Biological experiments have proven that need to establish a model of the vision system.
humans make also heavy use of internal predic- This model defines the mapping between a 3D- tion []. The human brain uses pre-stored infor- point in the environment, Υ mation for navigation and guidance. This infor- i, and its correspon- ding projected 2D-image point, m mation, for example the approximate position of This imaging relationship can be modeled as m an oncoming obstacle on a walking trail, helps to direct the visual sensor. This is the biological ba- The individual Information Content (IC sis of the predictive component incorporated in the the presence of one object i in the field of view in gaze controller described here.
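The article leaves the concrete form of the imaging map open. As a hedged example, a standard pinhole camera is one common choice for m_i = P(Ω, Υ_i); the function below assumes the 3D point has already been transformed into the camera frame (z-axis along the optical axis) and is purely illustrative.

```python
from typing import Optional
import numpy as np

def project_pinhole(point_cam: np.ndarray, f: float = 1.0) -> Optional[np.ndarray]:
    """Illustrative pinhole model of m = P(Omega, Upsilon): map a 3D point
    given in the camera frame to a 2D image point; None if behind the camera."""
    x, y, z = point_cam
    if z <= 0.0:
        return None
    return np.array([f * x / z, f * y / z])
```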
The individual Information Content (IC_i) due to the presence of one object i in the field of view in a specific view situation at time t is given by

    IC_i(t) = F(t, Ω, Υ_i, ν_i),    (2)

and the total Information Content (IC) of a specific view situation in the presence of N visible objects is given by

    IC(t) = Σ_{i=1..N} IC_i(t).    (3)

The function F in (2) can then be expressed according to the definitions presented in the previous section as

    IC_i(t) = F(f_1(t, ν_i), f_2(t, d_i), f_3(t, m_i)),    (4)

where f_1 represents the influence of the perception time history defined above, f_2 the influence of the relative position between observer and object i, given by d_i = h(Ω, Υ_p,i), and f_3 the influence of the projection position of a feature on the image.

With this notation, we can express the relation between Information Content and variation of Incertitude as

    ν̇_i(t) = −C_1 · IC_i(t) + C_2 · I_SM(t),    (5)

where C_1 and C_2 are positive constant weighting factors and the term I_SM represents the increment of Incertitude due to the observer's self motion.
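A minimal sketch of how (3) and (5) can be evaluated numerically, assuming externally supplied per-object Information Content values; the explicit Euler step and the clamping of ν to the 0–100% range are our assumptions, not taken from the article.

```python
def total_information_content(ic_values):
    """Eq. (3): total IC of a view situation over the N visible objects."""
    return sum(ic_values)

def update_incertitude(nu_i, ic_i, i_sm, dt, c1=1.0, c2=1.0):
    """Explicit Euler step of Eq. (5), nu_dot = -C1*IC_i + C2*I_SM,
    with the result clamped to the 0..100 % Incertitude range."""
    nu = nu_i + dt * (-c1 * ic_i + c2 * i_sm)
    return min(100.0, max(0.0, nu))
```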
tive vision system parameters and () guarantees Proceedings of the IEEE-RAS International Conference on Humanoid Robots a steady gaze behavior without abrupt view direc- the pose of the obstacles in this scenario, Υinit, tion changes. The latter would negatively influ- but maximum (100%) Local Incertitude, see In- ence the perception process. Equation () takes certitude initial values in Figure .
into consideration the observer's system dynamics Furthermore, we assume that the active observer's and represents some sort of penalty function which position remains constant over the ground and in limits the performance (e.g. speed) of saccades.
lateral direction: The observer moves only in x- A predicted state is therefore specified by the pre- direction, cf. Figure with speed b, i.e. ΩB = dicted future pose of the observer and the variable value of sensor configuration parameters (for ex- a3, and b constants, see ground trajectory pro- ample pan-tilt-angles), which determine the field jection in Figure . The active observer has only of view. The Information Content of a given pre- two rotational degrees of freedom: pan and tilt, dicted state is quantified, depending on the cur- i.e. ΩOb = [qOb qOb]T = [pan tilt]T . Note that rent goal according to the relationships described this assumption does not constrain the applicabil- ity of the proposed approach to other hardware According to (), the predicted Incertitude ν in set-ups and that this pan-tilt configuration is very the system expressed in discrete time k evolves as common in robotic systems with active vision.
bi (k+1) = νi (k) − C1 · c ISM are the predicted values for the Information Content and the predicted gain of Incertitude due to the observer's self motion.
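A sketch of the selection rule (6) for the pan-tilt case: candidate Predicted States are sampled on a grid inside the actuator bounds and a rate limit around the current gaze, and the candidate with the highest predicted Information Content wins. The grid sampling and all parameter values are assumptions, and `predict_ic` stands in for the article's (unspecified) IC prediction.

```python
import itertools
import numpy as np

def select_gaze(pan, tilt, predict_ic,
                pan_bounds=(-1.0, 1.0), tilt_bounds=(-0.6, 0.6),
                max_step=0.2, n_grid=9):
    """Return the (pan, tilt) Predicted State maximizing predict_ic(pan, tilt),
    subject to actuator bounds and a saccade rate limit, cf. Eq. (6)."""
    pans = np.linspace(max(pan_bounds[0], pan - max_step),
                       min(pan_bounds[1], pan + max_step), n_grid)
    tilts = np.linspace(max(tilt_bounds[0], tilt - max_step),
                        min(tilt_bounds[1], tilt + max_step), n_grid)
    return max(itertools.product(pans, tilts),
               key=lambda pt: predict_ic(pt[0], pt[1]))
```

In a closed loop, the selected state would then be used to propagate the predicted Incertitude (7) one step ahead.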
Implementation and Simulation In order to demonstrate the validity of our ap-proach, we present an example application in which the advantages of using the before presented gaze controller in an arbitrary active visual system are pointed out compared to the traditional pas-sive approach.
The active observer's goal in this example is togather as much task-oriented information as possi- Figure 3: Simulated scenario with landmarks and ble. The two tasks considered are typical problems typical obstacles.
in autonomous robot navigation and guidance: self The heuristically determined function F used for localization and obstacle avoidance. While per- calculating the Information Content in a specific forming the two tasks, the active observer consid- view situation is given by ers two different types of Information Content, i.e.
two different Incertitudes. These are here noted as ICi = F f1(νi), f2( b Global Incertitude, νG, (referring to self localiza- tion) and Local Incertitude, νL, (referring to ob- stacle positions, where i denotes the number of existing obstacles). The main goal of the active where α is the camera horizontal view angle, α∗ is system is therefore to gather a sufficient amount the camera vertical view angle, b di is the predicted of task-oriented information to keep these two In- relative distance to obstacle i, and b certitudes within certain safety margins.
the predicted projection angles of the feature ref- margins guarantee that a sufficient amount of in- erence point on the image plane. These variable formation is available for the safe guidance and values are calculated by obstacle avoidance of the observer.
In this experiment we assume an active vision sys- tem, which moves in a finite 3D-space with obsta- cles and landmarks. At the start situation we as- sume a quite exact knowledge about the observer's Figure shows the function f3 for a horizontal pose in space, Ωinit, in this case 30% Global Incer- camera view angle α = 60◦. This function rep- titude, and a certain amount of knowledge about resents the influence of the projecting position of Proceedings of the IEEE-RAS International Conference on Humanoid Robots a feature on the image. Notice that the distribu- x-Position Object #4 tion representing this influence can be interpreted as an approximate model of a biologically foveated Local Incert. for Obj. #4 without Gaze Control er: Local Incert. pan-angle = 0 rad tilt-angle = 0.35 rad (fixed) self localization out of view Incertitude Local Incertitude S Figure 6: Local Incertitude results with (—) and Figure 4: Influence of a feature's projecting posi- without (- - -) gaze control for single lateral object tion on the image on the Information Content.
and Clobal Incertitude results with gaze control(· · ·).
The evolution of Global (G) and Local (L) Incer-titude is specified by the following equation object in motion is followed by the gaze in order to gather the maximum information about it.
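Since the exact closed forms of f_2, f_3, and I_SM are not reproduced here, the following is one concrete reading consistent with the description above: inverse-square distance falloff, a foveation-like weight that vanishes at the border of the α = 60° field of view, and an exponential saturation of I_SM towards the 100% bound. All coefficients are illustrative assumptions.

```python
import numpy as np

def f2_distance(d_hat, d_min=0.2):
    """Inverse-square dependency of measurement quality on object distance."""
    return 1.0 / max(d_hat, d_min) ** 2

def f3_foveation(phi_hat, alpha=np.radians(60.0)):
    """Foveation-like weight: 1 on the optical axis, 0 at the image border."""
    return max(0.0, 1.0 - (2.0 * phi_hat / alpha) ** 2)

def i_sm(covered_distance, k=0.5):
    """Self-motion Incertitude gain, saturating exponentially at 100 %."""
    return 100.0 * (1.0 - np.exp(-k * covered_distance))
```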
Simulation Results

Figure 5 shows a top view (upper figure) and a side view (lower figure) of the gaze behavior resulting from the simulation of the scenario presented in Figure 3. Five landmarks (□) and five objects (◦), represented in this example by their centers of gravity as point-objects, are present. Objects #1 to #4 are static, while object #5 moves with constant velocity during the interval t_1 ≤ t ≤ t_2 along the marked path, see Figure 5. The gaze behavior is represented as a vector field indicating the view direction while the observer moves through the scenario. Several properties of the gaze controller can be analyzed on the basis of this figure. It is obvious that the observer's attention is attracted by close objects. While moving, the observer's Incertitude about its own position increases and self localization becomes necessary. Therefore, after some distance the view direction is selected to point towards a landmark, see the results for Global Incertitude in Figure 6. It can also be seen how the moving object is followed by the gaze in order to gather the maximum information about it.

Figure 5: Top view (upper figure) and side view (lower figure) of the resulting gaze behavior.

Figure 6 presents results of the Local Incertitude for one of the objects (#4) obtained in the same scenario, comparing the two approaches mentioned before: the fixed gaze strategy and the active vision approach with the predictive gaze controller, corresponding to the gaze behavior shown in Figure 5. In this example the fixed gaze approach is defined by two constant angles, pan-angle = 0.0 rad and tilt-angle = 0.35 rad (visual sensor slightly pointing down). This configuration seems to be a reasonable compromise among the feasible fixed gazes for this scenario: the Local Incertitude values reached are comparatively low, i.e. objects remain within the field of view for a long time.

Figure 6: Local Incertitude results with (—) and without (- - -) gaze control for a single lateral object, and Global Incertitude results with gaze control (···).

Figure 6 also shows the security ranges considered: when passing an object, its corresponding Local Incertitude must lie within this range. This property is evaluated just before the observer reaches the object position. If the value is outside the considered security range, the simulation indicates a failure situation, as shown in the figure. It can be observed that in this experiment this failure does not occur when gaze control is applied.
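The pass-by evaluation described above can be sketched as a simple check: sample the Local Incertitude just before the observer's x-position reaches the object, and compare it against the security margin. The margin value and the discrete-trajectory representation are our assumptions.

```python
def check_pass_by(traj_x, nu_local_series, obj_x, margin=60.0):
    """True if the Local Incertitude (in %) lies inside the security range
    just before the observer reaches the object's x-position; False = failure."""
    idx = next((k for k, x in enumerate(traj_x) if x >= obj_x), None)
    if idx is None:
        return True  # object position never reached during the run
    return nu_local_series[max(idx - 1, 0)] <= margin
```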
tracted by close objects. When moving the ob- Figure shows the results obtained for the five ob- server's Incertitude about its position increases jects positioned in different configurations (Object and self localization is needed. Therefore, the view #1 to #5). In all five cases the results are better direction is selected to point towards a landmark for the gaze controller approach: The Incertitude after some distance, see the results for Global In- values are lower and the objects remain within the certitude in Figure It can also be seen how the field of view for a longer period of time, as indi- Proceedings of the IEEE-RAS International Conference on Humanoid Robots Figure 5: Top view (upper figure) and side view (lower figure) of the resulting gaze behavior.
cated by the global minima in Figure . This is The head motion is synchronized with a dynami- especially distinctive when the object is located cally simulated walking machine to achieve realis- quite aside from the observers path, see Object tic perception conditions. The cameras can travel over the scenario and emulate the ViGVVaM head It is obvious that the corresponding results for motion in 6 DoFs.
Global Incertitude are much better with the gaze Results on Gaze Control Behavior controller than without due to the lack of direct In this new experiment, the mission of ViGVVaM fixation in the fixed gaze approach. In addition, it is constrained to walking straight ahead towards a can be observed how the Local Incertitude curves staircase in a prototypical sidewalk scenario, not for the active vision system with gaze controller only avoiding collision with a barrier, but keeping present non smooth regions, which correspond to its Global Incertitude νG (referring to self localiza- the fact that in these regions the gaze is directed tion, here only one landmark is considered for this towards a landmark to assure the Global Incerti- purpose) within a safety margin, cf. Figure . In tude values remaining below the security margin, this application the time evolution of the Incerti- tude must be considered. Now, the task of the gaze Experiments with ViGVVaM controller is not only the instantaneous minimiza-tion of the Local Incertitude with respect to the location of the obstacles as in [The objective A reduced version of this goal-oriented predictive of gaze control is to ensure that objects of interest gaze controller has been implemented in the ac- remain within the field of view as long and often as tive vision system of ViGVVaM []. In that arti- necessary for step sequence adaptation and gather cle we proposed an emulation environment that enough information for navigation, cf. Section allows validation of our vision-based control ap- In our experimental setup, the camera pair is proach for humanoid walking. In this emulation mounted on a pan-tilt-head. This means that the environment, see Figure a set of stereo cameras gaze controller possesses of two variables pan- and mounted on a pan-tilt-head can be moved by use tilt-angle, ΩOb = [qOb qOb]T = [pan tilt]T , to con- of a mobile platform over a prototypical scenario.
trol perception. In this case the position and ori- Proceedings of the IEEE-RAS International Conference on Humanoid Robots Object #2: Stairs Object #1: Barrier Figure 8: Observer's view of the scenario for gazecontrol experiments with a barrier (#1), staircase Figure 7: Local Incertitude results with (—) and (#2), and landmark.
without (- - -) gaze control for the five objects inscenario.
In our experimental set-up, the camera pair is mounted on a pan-tilt head. This means that the gaze controller possesses two variables, the pan and tilt angles, Ω_Ob = [q_1^Ob q_2^Ob]^T = [pan tilt]^T, to control perception. In this case the position and orientation of the observer's base reference frame Ω_B is defined by the movement of the platform.

The type and dimensions of the obstacles are visually determined in an initialization process with an individual degree of Incertitude. The Information Content of a given state is quantified, depending on the current goal, by taking into account the number of objects in view, their distance to the walking machine, their relative position to the center of the camera view, and the present Global Incertitude. The term F used to calculate the Information Content of a view situation for a predicted state (PS) is given by

    IC_PS(Ω_Ob) = F(f_1(ν_G), f_2(d̂_i), f_3(m̂_i)).    (9)

The (quasi-)optimal pan-tilt angles are obtained by maximizing (9) for future locations of the observer. Figure 9 shows the resulting view directions adjusted by the gaze controller during motion. The images, from the upper left to the lower right corner, were taken by the left eye at equally spaced intervals of 0.2 m. The distance from the starting point to the barrier is 1.5 m and to the staircase 3.8 m. The landmark is located at a distance of 7.9 m ahead and 4.2 m to the right of the observer's initial position. It can be seen how the currently relevant objects remain focused while being analyzed and how the gaze is modified according to the locomotion task.

Figure 9: Image sequence while moving over the scenario shown in Figure 8 with optimized view direction for self localization (□) and obstacle avoidance (◦).
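A small helper illustrating the "objects of interest remain within the field of view" criterion for the pan-tilt head. The geometry (observer frame at the camera, x forward, z up) and the half-angle test against α and α* are our assumptions.

```python
import numpy as np

def in_field_of_view(obj_xyz, pan, tilt,
                     alpha=np.radians(60.0), alpha_star=np.radians(45.0)):
    """Check whether a point (given in the observer frame) lies inside the
    field of view spanned by the horizontal/vertical view angles alpha, alpha*."""
    x, y, z = obj_xyz
    azimuth = np.arctan2(y, x) - pan                  # offset from optical axis
    elevation = np.arctan2(z, np.hypot(x, y)) - tilt
    return abs(azimuth) <= alpha / 2.0 and abs(elevation) <= alpha_star / 2.0
```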
Conclusions and Future Work

In this article we have presented a novel biologically inspired predictive gaze control strategy for active vision systems. Our information theoretical approach leads to the definition of two new concepts, Information Content and Incertitude, together with the corresponding mathematical formalism. Based on these concepts, a systematic approach to the development of a task-dependent gaze control strategy was proposed. Simulations have shown the validity of the method for visual information maximization in robot navigation and obstacle avoidance tasks. The experimental results achieved in an emulation environment demonstrate the applicability and efficiency of the proposed approach for real robot navigation.
These results on gaze control as well as the com- [4] J. Denk and G. Schmidt. Walking Primitive plete biologically inspired vision architecture for Synthesis for an Anthropomorphic Biped us- goal oriented humanoid walking, presented in Hu- ing Optimal Control Techniques. In Proc. of manoids 2000 [], will be validated on real walk- the Int. Conf. on Climbing and Walking Robots ing machine hardware. As a first step towards the (CLAWAR), pages 819–826, Karlsruhe, Ger- integration into a real biped, a high-performance stereo camera head has been designed and con- [5] J. F. Seara et. al.
ViGWaM Active Vi- structed, cf. Figure to reach the high dynami- sion System— Gaze Control for Goal-Oriented cal requirements arising from the gaze control for Walking. In Proc. of the Int. Conf. on Climb- biped locomotion.
ing and Walking Robots (CLAWAR), pages265–272, Karlsruhe, Germany, 2001. Acknowledgements : This work was supported [6] I. Moon et. al. On-line Selection of Stable Vi- in part by the German Research Foundation sual Landmarks under Uncertainty. In Proc. of (DFG) within the "Autonomous Walking" Prior- the IEEE/RSJ Int. Conf. on Intelligent Robots ity Research Program.
and Systems (IROS), pages 781–786, Kyongju, [7] L. Matthies and S. A. Shafer. Error modeling [1] A. E. Patla. Understanding of roles of vision in stereo navigation. IEEE Journal of Robotics in the control of human locomotion. Gait and and Automation, 3(3), June 1987. Posture, 5:54–69, 1997. [8] O. Lorch et. al.
ViGWaM — An Emula- [2] Y. Aloimonos. Introduction: Active Vision Re- tion Environment for a Vision Guided Virtual visited. Technical report, University of Mary- Walking Machine. In Proc. of the IEEE/RAS Int. Conf. on Humanoid Robots, Cambridge, [3] A. J. Davison and N. Kita. Active Visual Lo- Massachusetts, USA, 2000. , calisation for Cooperating Inspection Robots.
[9] C. E. Shannon.
A Mathematical Theory of In Proc. of the IEEE/RSJ Int. Conf. on Intel- The Bell System Technical ligent Robots and Systems (IROS), 2000. Journal, 27:397–423, 623–659, 1948. Proceedings of the IEEE-RAS International Conference on Humanoid Robots

Source: http://www.robotic.dlr.de/fileadmin/robotic/stroblk/publications/seara_2001human.pdf
