2013 IEEE/RSJ International Conference on Intelligent Robots and Systems

5th Workshop on Planning, Perception and Navigation for Intelligent Vehicles

Best Paper Award with a $500 prize

Awarded to Frank Dellaert and Avdhut Joshi

Full Day Workshop (Room 608)

November 3rd, 2013, Tokyo, Japan

Workshop Proceedings, Program

Contact : Professor Philippe Martinet
IRCCYN-CNRS Laboratory, Ecole Centrale de Nantes,
1 rue de la Noë
44321 Nantes Cedex 03, France
Phone: +33 237406975, Fax: +33 237406934,
Email: Philippe.Martinet@irccyn.ec-nantes.fr,
Home page: http://www.irccyn.ec-nantes.fr/~martinet


Organizers

Professor Philippe Martinet, IRCCYN-CNRS Laboratory, Ecole Centrale de Nantes, 1 rue de la Noë, 44321 Nantes Cedex 03, France, Phone: +33 237406975, Fax: +33 237406934, Email: Philippe.Martinet@irccyn.ec-nantes.fr,
Home page: http://www.irccyn.ec-nantes.fr/~martinet

Research Director Christian Laugier, INRIA, Emotion project, INRIA Rhône-Alpes, 655 Avenue de l'Europe, 38334 Saint Ismier Cedex, France, Phone: +33 4 7661 5222, Fax : +33 4 7661 5477, Email: Christian.Laugier@inrialpes.fr,
Home page: http://emotion.inrialpes.fr/laugier

Professor Urbano Nunes, Department of Electrical and Computer Engineering of the Faculty of Sciences and Technology of University of Coimbra, 3030-290 Coimbra, Portugal, Office 3A.10, Phone: +351 239 796 287, Fax: +351 239 406 672, Email: urbano@deec.uc.pt,
Home page: http://www.isr.uc.pt/~urbano

Professor Christoph Stiller, Institut für Mess- und Regelungstechnik, Karlsruher Institut für Technologie (KIT), Engler-Bunte-Ring 21, Gebäude 40.32, 76131 Karlsruhe, Germany, Phone: +49 721 608-42325, Fax: +49 721 661874, Email: stiller@kit.edu
Home page: http://www.mrt.kit.edu/mitarbeiter_stiller.php

Professor Philippe Bonnifait, University of Technology of Compiègne, Heudiasyc UMR CNRS 7253, BP 20529, 60205 Compiègne, France, Phone: +33 3 44 23 44 81, Fax: +33 3 44 23 44 77, Email: philippe.bonnifait@hds.utc.fr
Home page: https://www.hds.utc.fr/~bonnif

General Scope

The purpose of this workshop is to discuss topics related to the challenging problems of autonomous navigation and driving assistance in open and dynamic environments. Technologies related to application fields such as unmanned outdoor vehicles and intelligent road vehicles will be considered from both theoretical and technological points of view. Several research questions at the cutting edge of the state of the art will be addressed. Among the many application areas that robotics is addressing, transportation of people and goods seems to be a domain that will dramatically benefit from intelligent automation. Fully automatic driving is emerging as the approach that can dramatically improve efficiency while moving toward the goal of zero fatalities. This workshop will address the robotics technologies at the very core of this major shift in the automobile paradigm. Achievements, challenges and open questions related to this area, such as autonomous outdoor vehicles, will be presented.

Main Topics

  • Road scene understanding
  • Lane detection and lane keeping
  • Pedestrian and vehicle detection
  • Detection, tracking and classification
  • Feature extraction and feature selection
  • Cooperative techniques
  • Collision prediction and avoidance
  • Advanced driver assistance systems
  • Environment perception, vehicle localization and autonomous navigation
  • Real-time perception and sensor fusion
  • SLAM in dynamic environments
  • Mapping and maps for navigation
  • Real-time motion planning in dynamic environments
  • 3D Modeling and reconstruction
  • Human-Robot Interaction
  • Behavior modeling and learning
  • Robust sensor-based 3D reconstruction
  • Modeling and Control of mobile robot
  • Multi-agent based architectures
  • Cooperative unmanned vehicles (not restricted to ground transportation)
  • Multi autonomous vehicles studies, models, techniques and simulations

International Program Committee

  • Philippe Bonnifait (Heudiasyc, UTC, France)
  • Alberto Broggi (VisLab, Parma University, Italy)
  • Paul Furgale (ETH Zurich, Switzerland)
  • Zhencheng Hu, (Kumamoto University, Japan)
  • Javier Ibanez-Guzman (Renault, France)
  • Christian Laugier (Emotion, INRIA, France)
  • Philippe Martinet (IRCCYN, Ecole Centrale de Nantes, France)
  • Urbano Nunes (Coimbra University, Portugal)
  • Anya Petrovskaya (Stanford University, USA)
  • Cedric Pradalier (GeorgiaTech Lorraine, France)
  • Christoph Stiller (Karlsruhe Institute of Technology, Germany)
  • Rafael Toledo Moreo (Universidad Politécnica de Cartagena, Spain)
  • Sebastian Thrun (Stanford University, USA)
  • Ming Yang (SJTU Shanghai, China)

Final program

    Introduction to the workshop 8:30

    Session I: Localization & mapping 8:30
    Chairman: Philippe Bonnifait (HEUDIASYC, France)

    • Title: New Concepts in Robotic Mapping: PHD Filter SLAM 8:40
      Keynote speaker: Martin Adams (Universidad de Chile, Santiago, Chile) 35min + 5min questions
      Presentation, Video1, Video2, Video3
      Co-authors: John Mullane, Keith Leung, Felipe Inostroza

      Abstract: Applications for autonomous robots have long been identified in challenging environments including built-up areas, mines, disaster scenes, underwater and in the air. Robust solutions to autonomous navigation remain a key enabling issue behind any realistic success in these areas. Arguably, the most successful robot navigation algorithms to date have been derived from a probabilistic perspective, which takes into account vehicle motion and terrain uncertainty as well as sensor noise. Over the past decades, a great deal of interest in the estimation of an autonomous robot's location state, and that of its surroundings, known as Simultaneous Localisation And Map building (SLAM), has been evident. This presentation will explain recent advances in the representations of robotic measurements and the map itself, and their consequences on the robustness of SLAM. Fundamentally, the concept of a set based measurement and map state representation allows all of the measurement information, spatial and detection, to be incorporated into joint Bayesian SLAM frameworks. Representing measurements and the map state as sets, rather than the traditionally adopted vectors, is not merely a triviality of notation. It will be demonstrated that a set based framework circumvents the necessity for any fragile data association and map management heuristics, which are necessary, and often the cause of failure, in vector based solutions. Implementation details of the Bayesian set based estimator - the Probability Hypothesis Density (PHD) Filter - and its application to SLAM will be the focus of the presentation. Experimental results demonstrating SLAM with laser, radar and vision based sensors in urban and marine environments will be presented. Comparisons of PHD Filter based SLAM and state of the art vector based implementations will demonstrate the robustness of the former to the realistic situations of sensor false alarms, missed detections and clutter.
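At the core of the Gaussian-mixture implementation of the PHD filter is a measurement update that reweights each map component against the detection probability and clutter intensity, with no explicit data association. The following is a minimal, illustrative 1D sketch of that update step; all parameter values and function names are hypothetical, not taken from the talk:

```python
import math

def gauss(x, mean, var):
    # 1D Gaussian density
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def phd_update(components, measurements, p_d=0.9, clutter=0.05, meas_var=0.25):
    """One GM-PHD measurement update in 1D.

    components: list of (weight, mean, var) of the predicted intensity.
    Returns the updated Gaussian mixture (weights need not sum to 1:
    their sum is the expected number of targets)."""
    # missed-detection terms keep every component with weight scaled by (1 - p_d)
    updated = [((1 - p_d) * w, m, v) for (w, m, v) in components]
    for z in measurements:
        # likelihood of z under each predicted component
        qs = [gauss(z, m, v + meas_var) for (_, m, v) in components]
        denom = clutter + sum(p_d * w * q for (w, _, _), q in zip(components, qs))
        for (w, m, v), q in zip(components, qs):
            k = v / (v + meas_var)  # Kalman gain (state observed directly)
            updated.append((p_d * w * q / denom, m + k * (z - m), (1 - k) * v))
    return updated

# one predicted map feature, one measurement close to it
mixture = phd_update([(1.0, 0.0, 1.0)], [0.1])
n_hat = sum(w for w, _, _ in mixture)  # expected number of targets
```

Note how a missed detection and a detection hypothesis coexist as mixture components, which is how the set-based formulation avoids hard data association decisions.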

    • Title: Large-Scale Dense 3D Reconstruction from Stereo Imagery 9:20
      Authors: Pablo F. Alcantarilla, Chris Beall, Frank Dellaert 17min + 3min questions
      Presentation, Video 1, Video 2, paper

      Abstract: In this paper we propose a novel method for large-scale dense 3D reconstruction from stereo imagery. Assuming that stereo camera calibration and camera motion are known, our method is able to accurately reconstruct dense 3D models of urban environments in the form of point clouds. We take advantage of recent stereo matching techniques that are able to build dense and accurate disparity maps from two rectified images. Then, we fuse the information from multiple disparity maps into a global model by using an efficient data association technique that takes into account stereo uncertainty and performs geometric and photometric consistency validation in a multi-view setup. Finally, we use efficient voxel grid filtering techniques to deal with storage requirements in large-scale environments. In addition, our method automatically discards possible moving obstacles in the scene. We show experimental results on large-scale real video sequences and compare our approach with other state-of-the-art methods such as PMVS and StereoScan.
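As a concrete illustration of the voxel grid filtering step mentioned in the abstract, here is a minimal pure-Python sketch: points falling into the same voxel are replaced by their centroid, which bounds storage in large-scale models. The paper presumably uses an optimized implementation; names and the voxel size below are illustrative:

```python
from collections import defaultdict

def voxel_grid_filter(points, voxel_size=0.5):
    """Downsample a 3D point cloud by replacing all points that fall
    into the same voxel with their centroid."""
    buckets = defaultdict(list)
    for p in points:
        # integer voxel index of the point
        key = tuple(int(c // voxel_size) for c in p)
        buckets[key].append(p)
    # one centroid per occupied voxel
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in buckets.values()
    ]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.0), (3.0, 0.0, 0.0)]
filtered = voxel_grid_filter(cloud, voxel_size=1.0)  # 3 points -> 2
```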

    • Title: Generation of Accurate Lane-Level Maps from Coarse Prior Maps and Lidar 9:40
      Authors: Avdhut Joshi, Michael R. James 17min + 3min questions
      Presentation, Video, paper

      Abstract: While many research projects on autonomous driving and advanced driver support systems make heavy use of highly accurate lane-level maps covering large areas, there is relatively little work on methods for automatically generating such maps. Here, we present a method that combines coarse, inaccurate prior maps from OpenStreetMap (OSM) with local sensor information from 3D Lidar and a positioning system. The algorithm leverages the coarse structural information present in OSM, and integrates it with the highly accurate local sensor measurements. The resulting maps have extremely good alignment with manually constructed baseline maps generated for autonomous driving experiments.

    Coffee Break 10:00

    Session II: Perception 10:30
    Chairman: Christian Laugier (INRIA, France)
    • Title: Vision-Controlled Micro Aerial Vehicles: from "calm" navigation to "aggressive" maneuvers 10:30
      Keynote speaker: Davide Scaramuzza (ETHZ, Zurich, Switzerland) 35min + 5min questions
      Presentation, paper1, paper2, paper3, paper4

      Abstract: In the last two years, we have heard a lot of news about drones, small autonomous flying vehicles. Flying robots have numerous advantages over ground vehicles: they can access environments that humans cannot and, furthermore, they have much more agility than any ground vehicle. Unfortunately, their dynamics makes them extremely difficult to control, and this is particularly true in GPS-denied environments. In this talk, I will present challenges and results for both ground vehicles and flying robots, from localization in GPS-denied environments to motion estimation. I will show several experiments and real-world applications where these systems perform successfully, and those where their application is still limited by the current technology.

    • Title: Enabling Efficient Registration using Adaptive Iterative Closest Keypoint 11:10
      Authors: Johan Ekekrantz, Andrzej Pronobis, John Folkesson, Patric Jensfelt 17min + 3min questions
      Presentation, paper

      Abstract: Registering frames of 3D sensor data is a key functionality in many robot applications, from multi-view 3D object recognition to SLAM. With the advent of cheap and widely available, so-called RGB-D sensors, acquiring such data has become possible also from small robots or other mobile devices. Such robots and devices typically have limited resources, and being able to perform registration in a computationally efficient manner is therefore very important. In our recent work [1] we proposed a fast and simple method for registering RGB-D data, building on the principle of the Iterative Closest Point (ICP) algorithm. This paper outlines this new method and shows how it can facilitate a significant reduction in computational cost while maintaining or even improving performance in terms of accuracy and convergence properties.
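For readers unfamiliar with the baseline the paper builds on, this is a toy 2D sketch of the classic point-to-point ICP loop: match each source point to its nearest target point, then solve the best rigid transform in closed form and iterate. The authors' Adaptive Iterative Closest Keypoint method replaces the dense matching with keypoints, which is not shown here:

```python
import math

def icp_2d(source, target, iterations=10):
    """Toy 2D point-to-point ICP aligning `source` onto `target`."""
    src = list(source)
    for _ in range(iterations):
        # 1. nearest-neighbour correspondences
        pairs = [(p, min(target, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in src]
        # 2. centroids of matched source and target points
        cs = [sum(p[i] for p, _ in pairs) / len(pairs) for i in (0, 1)]
        ct = [sum(q[i] for _, q in pairs) / len(pairs) for i in (0, 1)]
        # 3. closed-form 2D rotation minimizing squared error
        num = sum((p[0]-cs[0])*(q[1]-ct[1]) - (p[1]-cs[1])*(q[0]-ct[0]) for p, q in pairs)
        den = sum((p[0]-cs[0])*(q[0]-ct[0]) + (p[1]-cs[1])*(q[1]-ct[1]) for p, q in pairs)
        th = math.atan2(num, den)
        c, s = math.cos(th), math.sin(th)
        # 4. rotate about the source centroid, translate to the target centroid
        src = [(c*(x-cs[0]) - s*(y-cs[1]) + ct[0],
                s*(x-cs[0]) + c*(y-cs[1]) + ct[1]) for x, y in src]
    return src

target = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
moved = [(0.1, 0.1), (1.1, 0.1), (0.1, 1.1)]  # translated copy of target
aligned = icp_2d(moved, target)
```

The nearest-neighbour search in step 1 dominates the cost, which is exactly what keypoint-based variants such as the authors' method aim to reduce.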

    • Title: Information fusion and evidential grammars for object class segmentation 11:30
      Authors: Jean-Baptiste Bordes, Philippe Xu, Franck Davoine, Huijing Zhao, Thierry Denoeux 17min + 3min questions
      Presentation, paper

      Abstract: In this paper, an original method for traffic scene image understanding based on the theory of belief functions is presented. Our approach takes place in a multi-sensor context and decomposes a scene into objects through the following steps: first, an over-segmentation of the image is performed and a set of detection modules provides, for each segment, a belief function defined on the set of classes. Then, these belief functions are combined and the segments are clustered into objects using an evidential grammar framework. The tasks of image segmentation and object identification are then formulated as the search for the best parse graph of the image, which is its hierarchical decomposition from the scene to objects and segments while taking into account the spatial layout. A consistency criterion is defined for any parse tree, and the search for the optimal interpretation of an image is formulated as an optimization problem. We show that our framework is flexible enough to include new sensors as well as new classes of objects. The work is validated on real and publicly available urban driving scene data.

    Lunch break 11:50

    Session III: Interactive session 12:45
    Chairman: Philippe Martinet (IRCCYN, France)
    • Title: Hierarchical Traffic Control for Partially Decentralized Coordination of Multi AGV Systems in Industrial Environments
      Authors: Valerio Digani, Lorenzo Sabattini, Cristian Secchi, Cesare Fantuzzi 17min + 3min questions

      Abstract: This paper deals with decentralized coordination of Automated Guided Vehicles (AGVs) used for logistics operations in industrial environments. We propose a hierarchical traffic control algorithm that implements path planning on a two-layer architecture. The high-level layer describes the topological relationships among different areas of the environment. In the low-level layer, each area includes a set of fixed routes along which the AGVs have to move. In the proposed control architecture, each AGV autonomously computes its path on both layers. The coordination among the AGVs is obtained by exploiting shared resources (i.e. centralized information) and local negotiation (i.e. decentralized coordination). The proposed strategy is validated by means of simulations. This work is developed within the PAN-Robots European project.

    • Title: TLD based Real-Time Weak Traffic Participants Tracking for Intelligent Vehicles
      Authors: Linji Xue, Ming Yang, Yongkun Dong, Chunxiang Wang, Bing Wang 17min + 3min questions

      Abstract: Pedestrians and bicycles are the most vulnerable participants in urban traffic. Numerous pedestrian detection methods have been proposed in the last ten years. However, most of them cannot meet the real-time requirement of intelligent vehicles. At the same time, there are few papers on bicycle detection or tracking. This paper proposes a real-time pedestrian and bicycle tracking method based on TLD (Tracking-Learning-Detection), an award-winning, real-time algorithm for tracking unknown objects. In order to handle the background movement arising from the moving observation platform, the location of the feature points used by the tracking part of TLD is adjusted according to the characteristics of moving pedestrians and bicycles. Then a gradient feature is used instead of the gray feature of the original TLD algorithm, in order to handle the deformation of pedestrians and bicycles. Experimental results demonstrate the effectiveness and real-time performance of the proposed method.

    • Title: On Keyframe Positioning for Pose Graphs Applied to Visual SLAM
      Authors: Andru Putra Twinanda, Maxime Meilland, Désiré Sidibé, Andrew I. Comport 17min + 3min questions

      Abstract: In this work, a new method is introduced for localization and keyframe identification to solve a Simultaneous Localization and Mapping (SLAM) problem. The proposed approach is based on a dense spherical acquisition system that synthesizes spherical intensity and depth images at arbitrary locations. The images are related by a graph of 6 degrees-of-freedom (DOF) poses which are estimated through spherical registration. A direct image-based method is provided to estimate pose by using both depth and color information simultaneously. A new keyframe identification method is proposed to build the map of the environment by using the covariance matrix between relative 6 DOF poses, which is essentially the uncertainty of the estimated pose. This new approach is shown to be more robust than an error-based keyframe identification method. Navigation using the maps built from our method also gives less trajectory error than using maps from other methods.

    • Title: Online Spatiotemporal-Coherent Semantic Maps for Advanced Robot Navigation
      Authors: Ioannis Kostavelis, Konstantinos Charalampous, Antonios Gasteratos 17min + 3min questions

      Abstract: In this paper we introduce a novel online semantic mapping framework able to establish seamless cooperation between low-level geometrical information and high-level perception of the environment. Its main contribution is the online formation of a semantic map, relying on the memorization of abstract place representations and capitalizing on both space quantization and time proximity. A time-evolving Augmented Navigation Graph is formed, describing the semantic topology of the explored environment and the connectivity among the places visited, which is expressed as the inter-place transition probability. A side contribution of this paper is the utilization of the learned semantic maps for efficient navigation in the explored environment. Moreover, a specific human-robot interaction paradigm is proposed, illustrating a competent methodology to address go-to tasks. The performance of the proposed framework was evaluated on long-range robot datasets in an unstructured office environment, and it exhibits remarkable performance in inferring semantic maps in previously unexplored environments.

    • Title: Use of a Monocular Camera to Analyze a Ground Vehicle's Lateral Movements for Reliable Autonomous City Driving
      Authors: Young-Woo Seo, Ragunathan Rajkumar 17min + 3min questions

      Abstract: For safe urban driving, one prerequisite is to keep a car within a road-lane boundary. This requires human and robotic drivers to recognize the boundary of a road-lane and the location of the vehicle with respect to the boundary of the road-lane the vehicle happens to be driving in. We present a new computer vision system that analyzes a stream of perspective images to produce information about a vehicle's lateral movements, such as distances from the vehicle to a road-lane's boundary and detection of lane-crossing maneuvers. We improve existing work in this field and develop new algorithms to tackle more challenging cases, such as driving on inter-city highways. Tests on real inter-city highways showed that our system provides stable and reliable performance in terms of computing lateral distances, while yielding reasonable performance in detecting lane-crossing maneuvers.

    • Title: Object-Level View Image Retrieval via Bag-of-Bounding-Boxes
      Authors: Ando Masatoshi, Yuuto Chokushi, Yousuke Inagaki, Shogo Hanada, Kanji Tanaka 17min + 3min questions

      Abstract: We propose a novel bag-of-words (BoW) framework to build and retrieve a compact database of view images, toward robotic localization, mapping and SLAM applications. Our method does not explain an image by many small local features (e.g. bag-of-SIFT-features) as most previous methods do. Instead, the proposed bag-of-bounding-boxes (BoBB) approach attempts to explain an image by fewer, larger object patterns, which leads to a semantic and compact image descriptor. To make the view retrieval system more practical and autonomous, the object pattern discovery process is unsupervised, via common pattern discovery (CPD) between the input and known reference images, without requiring a pre-trained object detector. Moreover, our CPD subtask does not rely on good image segmentation techniques and is able to handle scale variations, exploiting the recently developed CPD technique of spatial random partition. Following traditional bounding-box-based object annotation and knowledge transfer, we compactly describe an image in the form of a bag-of-bounding-boxes (BoBB). With a slightly modified inverted file system, we efficiently index and search the BoBB descriptors. Experiments with the publicly available "RobotCar" dataset show that the proposed method achieves accurate object-level view image retrieval with significantly compact image descriptors, e.g. 20 words per image.
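The inverted file system mentioned in the abstract is the standard retrieval structure that BoW descriptors, including the paper's BoBB descriptors, plug into. The sketch below shows the generic, unmodified form (a word-to-images index with voting); the paper uses a slightly modified variant, and all names here are illustrative:

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal inverted-file index over bag-of-words image descriptors."""
    def __init__(self):
        self.postings = defaultdict(set)  # word id -> set of image ids

    def add(self, image_id, words):
        # index every visual word occurring in the image
        for w in words:
            self.postings[w].add(image_id)

    def query(self, words):
        # rank database images by number of shared visual words
        votes = defaultdict(int)
        for w in words:
            for img in self.postings.get(w, ()):
                votes[img] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

db = InvertedIndex()
db.add("img0", [3, 17, 42])
db.add("img1", [17, 99])
ranked = db.query([17, 42])  # img0 shares two words, img1 shares one
```

With only ~20 words per image, as the abstract reports, both the postings lists and the per-query voting stay very small, which is the point of a compact descriptor.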

    • Title: Cart-O-matic project : autonomous and collaborative multi-robot localization, exploration and mapping
      Authors: Antoine Bautin, Philippe Lucidarme, Remy Guyonneau, Olivier Simonin, Sebastien Lagrange, Nicolas Delanoue, Francois Charpillet 17min + 3min questions
      paper, poster

      Abstract: The aim of the Cart-O-matic project was to design and build a multi-robot system able to autonomously map an unknown building. This work has been done in the framework of a French robotics contest called Defi CAROTTE, organized by the General Delegation for Armaments (DGA) and the French National Research Agency (ANR). The scientific issues of this project deal with Simultaneous Localization And Mapping (SLAM), multi-robot collaboration and object recognition. In this paper, we will mainly focus on the first two topics: after a general introduction, we will briefly describe the innovative simultaneous localization and mapping algorithm used during the competition. We will next explain how this algorithm can deal with multi-robot systems and 3D mapping. The next part of the paper will be dedicated to the multi-robot path planning and exploration strategy. The last section will illustrate the results with 2D and 3D maps, collaborative exploration strategies and examples of planned trajectories.

    • Title: Driving Intention Assistance for Front-wheel-drive Personal Electric Vehicle
      Authors: Satoshi Fujimoto, Zhencheng Hu, Nobutomo Matsunaga, Claude Aynaud, Roland Chapuis, Han Wang 17min + 3min questions

      Abstract: The indoor personal electric vehicle "STAVi" was developed to reduce the burden of moving for elderly people, in the context of the aging population in Japan, in order to improve their quality of life. The STAVi is a front-wheel-drive EV which is operated by the driver through an 8-directional joystick. However, the over-steering caused by the two rear caster wheels leads to unstable vehicle dynamics and makes the vehicle difficult to control in some driving scenarios. This paper presents a novel Lidar SLAM based driving intention assistance algorithm which employs a Line Segment Matching SLAM technique for fast matching in indoor scenarios. Line Segment Matching provides more accurate results than conventional corner-based Scan Matching. A Model Error Compensator (MEC) is used in our feedback controller to assist the STAVi in moving correctly according to the driving intention. Real indoor experimental results show the effectiveness of the proposed algorithm.

    • Title: Ad-hoc heterogeneous (MAV-UGV) formations stabilized under a top-view relative localization
      Authors: Martin Saska, Vojtech Vonasek 17min + 3min questions

      Abstract: A stabilization and navigation technique for ad-hoc formations of autonomous ground and aerial robots is investigated in this paper. The algorithm, which enables composing heterogeneous teams via splitting and decoupling, is aimed at deployment of micro-scale robots in environments without any precise global localization system. The proposed approach is designed for the utilization of on-board visual navigation and a top-view relative localization of team members. The leader-follower formation driving method is based on a novel avoidance function, in which the entire 3D formation is represented by a convex hull projected along a desired path to be followed by the group. This representation of the formation shape is crucial to ensure that direct visibility between the team members is kept in environments with obstacles, which is the key requirement of the top-view relative localization. A Receding Horizon Control (RHC) concept is employed to integrate this avoidance function. The RHC scheme enables fluent splitting and decoupling of formations and responding to a dynamic environment and team members' failures. All these abilities are verified in simulations and experiments, which prove the possibility of formation driving based on visual navigation and top-view relative localization.

    • Title: Toward Smooth and Stable Reactive Mobile Robot Navigation using On-line Control Set-points
      Authors: Lounis Adouane 17min + 3min questions

      Abstract: This paper deals with the challenging issue of on-line mobile robot navigation in cluttered environments. The mobile robot considered in this work discovers the environment during its navigation; it must therefore react to unexpected events (e.g., obstacles to avoid) while still guaranteeing that it reaches its objective. In addition to avoiding these obstacles safely and on-line, we propose to enhance the smoothness of the obtained robot trajectories; suitable indicators are used to quantify this smoothness. Specifically, this paper proposes to appropriately link on-line set-points, defined using elliptic limit-cycle trajectories, with a multi-controller architecture that guarantees stability (via Lyapunov synthesis) and smooth switching between controllers. Moreover, a comparison between the fully reactive mode (the aim of this paper) and a planned mode is given through the proposed control architecture, which can exhibit both aspects. Many simulations in cluttered environments confirm the reliability and robustness of the overall proposed reactive control.

    Session IV: Navigation, Control, Planning 13:45
    Chairman: Philippe Bonnifait (HEUDIASYC, France)
    • Title: Intention Aware Planning for Autonomous Vehicles 13:45
      Keynote speaker: Tirthankar Bandyopadhyay (CSIRO, Australia) 35min + 5min questions
      Presentation, paper, Video1, Video2, Video3

      Abstract: As robots venture into new application domains, as autonomous vehicles on the road or as domestic helpers at home, they must recognise human intentions and behaviours in order to operate effectively. This generates a new class of motion planning problems with uncertainty in human intention. Identifying human intentions is difficult because of the diversity and subtlety of human behaviours and the lack of a powerful "intention sensor". The intentions have to be inferred from observations of the person's behaviour. This is especially true for many cases of interaction between autonomous vehicles and human agents on the road, where explicit communication channels are not always available. In this talk, I will present some of the work we have been doing in developing an intention-aware planning framework for autonomous vehicles on the road. I will present the framework in the context of Partially Observable Markov Decision Processes (POMDPs) and show how recent advances in the field make intention-aware planning practical on real systems.

    • Title: Safe highways platooning with minimized inter-vehicle distances of the time headway policy 14:25
      Authors: Alan Ali, Gaetan Garcia, Philippe Martinet 17min + 3min questions
      Presentation, paper

      Abstract: Optimizing the inter-distances between vehicles is very important to reduce traffic congestion on highways. Variable spacing and constant spacing are the two policies for the longitudinal control of a platoon. Variable spacing doesn't require a lot of data (position, speed...) from other vehicles, and string stability using only on-board information is obtained. However, inter-vehicle distances are very large, and hence traffic density is low. Constant spacing can offer string stability with high traffic density, but it requires at least data from the leader. In [1], we proposed a modification of the constant time headway control law. This modification leads to inter-vehicle distances that are close to those obtained with constant spacing policies, while requiring only low-rate information from the leader. In this paper, the work done in [1] is extended by taking into account the model of the motor. This makes it possible to reduce the distance between the vehicles to 1 meter, and it has been proved that the platoon is stable and safe in normal working mode. Simulation results are obtained using the TORCS simulation environment.
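For context, the constant time headway policy that the paper modifies makes the desired inter-vehicle distance an affine function of the follower's speed: a fixed standstill gap plus a term proportional to speed. A minimal sketch with illustrative parameter values (not taken from the paper):

```python
def spacing_error(x_leader, x_follower, v_follower, h=0.5, d_min=1.0, length=4.0):
    """Constant time headway (CTH) spacing policy for one follower.

    h:      time headway in seconds (hypothetical value)
    d_min:  standstill gap in meters (hypothetical value)
    length: vehicle length in meters
    Returns the spacing error: >0 means too far, <0 means too close."""
    gap = x_leader - x_follower - length  # actual inter-vehicle distance
    desired = d_min + h * v_follower      # CTH desired distance
    return gap - desired

# follower at 20 m/s, 15 m behind the leader's rear bumper
err = spacing_error(x_leader=100.0, x_follower=81.0, v_follower=20.0)
```

A longitudinal controller regulates this error to zero; shrinking the speed-dependent term toward zero is what pushes the inter-vehicle distances toward the small constant-spacing gaps (on the order of 1 m) that the paper targets.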

    • Title: Optical Flow Templates for Superpixel Labeling in Autonomous Robot Navigation 14:45
      Authors: Richard Roberts, Frank Dellaert 17min + 3min questions
      Presentation, paper

      Abstract: Instantaneous image motion in mobile robot onboard cameras contains rich information about the structure of the environment. We present a new framework, optical flow templates, for capturing this information and an experimental proof-of-concept that labels superpixels using them. Optical flow templates encode the possible optical flow fields due to egomotion for a specific environment shape and robot attitude. We label optical flow in superpixels with the environment shape they image according to how consistent they are with each template. Specifically, in this paper we employ templates highly relevant to mobile robot navigation. Image regions consistent with ground plane and distant structure templates likely indicate free and traversable space, while image regions consistent with neither of these are likely to be nearby objects that are obstacles. We evaluate our method qualitatively and quantitatively in an urban driving scenario, labeling the ground plane, and obstacles such as passing cars, lamp posts, and parked cars. One key advantage of this framework is low computational complexity, and we demonstrate per-frame computation times of 20ms, excluding optical flow and superpixel calculation.

    Coffee break 15:05

    Session V: Situation Awareness & Risk Assessment 15:30
    Chairman: Urbano Nunes (Coimbra University, Portugal)
    • Title: Road Scenes Understanding and Risk Assessment using Embedded Bayesian Perception 15:30
      Keynote speaker: Christian Laugier (INRIA, Grenoble, France) 35min + 5min questions
      Presentation, paper
      Co-Authors: Mathias Perrollaz, Christopher Tay Meng Keat, Stéphanie Lefevre

      Abstract: Robust analysis and understanding of dynamic scenes in road and urban traffic environments is needed to estimate and predict the collision risk level during vehicle driving. The risk estimation relies on monitoring the traffic environment of the vehicle, either by means of on-board sensors or by means of Vehicle-to-Vehicle (V2V) communications. In both cases, the collision risks are considered as stochastic variables. These variables are continuously evaluated and used by the vehicle's embedded system to generate emergency warnings to the human driver, or to decide on the best driving action to execute in the case of a fully autonomous vehicle. This talk addresses both the multi-modal embedded perception issue and the collision risk assessment problem. The perception issue is addressed using the new concept of "Bayesian Perception". The collision risk assessment problem has been solved using two complementary approaches, based respectively on a "trajectory prediction" paradigm and a novel "intention/expectation" concept. In the first approach, perception is performed using on-board sensors. Hidden Markov Models and Gaussian Processes are used to predict the likely behaviours of multiple dynamic agents in road scenes and to evaluate the related collision risks. This approach has been developed in collaboration with Toyota. In the second approach, V2V communication is used in the vicinity of road intersections for exchanging the dynamic states of the involved vehicles. In this context, we have shown that it is more efficient to identify dangerous situations by comparing "what drivers intend to do" with "what they are expected to do". What a driver intends to do is estimated from the motion of the vehicle, taking into account the layout of the intersection; what a driver is expected to do is derived from the current configuration of the vehicles and the traffic rules at the intersection. This approach has been developed in cooperation with Renault. Both approaches have been experimentally validated in simulation and on real experimental vehicles.
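      As a loose illustration of the trajectory-prediction side, behaviour beliefs of the kind produced by a Hidden Markov Model can be tracked with the standard forward algorithm. This is a toy sketch, not the models used in the talk: the behaviour set, matrices and observation encoding below are invented for illustration.

```python
import numpy as np

# Illustrative only: a tiny HMM over driver behaviours; all numbers
# are made up, not the models learned in the work described above.
states = ["keep_lane", "change_left", "brake"]
T = np.array([[0.8, 0.1, 0.1],    # behaviour transition probabilities
              [0.3, 0.6, 0.1],
              [0.2, 0.1, 0.7]])
E = np.array([[0.7, 0.2, 0.1],    # P(observation | behaviour); observations:
              [0.2, 0.7, 0.1],    # 0 = steady, 1 = lateral drift, 2 = decel
              [0.1, 0.1, 0.8]])

def forward(obs, prior):
    """Forward algorithm: normalised belief over behaviours after each observation."""
    belief = prior * E[:, obs[0]]
    belief /= belief.sum()
    for o in obs[1:]:
        belief = (T.T @ belief) * E[:, o]
        belief /= belief.sum()
    return belief

belief = forward([0, 2, 2], np.array([1/3, 1/3, 1/3]))
print(dict(zip(states, belief.round(3))))  # after two decelerations, "brake" is most likely
```

Evaluating such beliefs at every time step is what lets the risk level be treated as a continuously updated stochastic variable.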

    • Title: Detection of Unusual Behaviours for Estimation of Context Awareness at Road Intersections 16:10
      Authors: Alexandre Armand, David Filliat, Javier Ibanez-Guzman 17min + 3min questions
      Presentation, paper

      Abstract: Most Advanced Driving Assistance Systems (ADAS) warn drivers once a high-risk situation has been inferred. This is done under the assumption that all drivers react in the same manner; in reality, drivers react as a function of their own driving style. This paper proposes a framework for estimating a driver's degree of awareness of the focus object of the context that governs the vehicle behaviour (e.g. the arrival at an intersection). The framework learns the manner in which individual drivers behave in a given context, and then detects whether or not the driver is behaving differently under similar conditions. In this paper the principles of the framework are applied to a fundamental use case, the arrival at a stop intersection. Results from experiments under controlled conditions are included. They show that the formulation allows a coherent estimation of driver awareness while approaching such intersections.
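      The learn-then-detect idea can be caricatured in one dimension (purely illustrative; the paper's actual models are richer): fit a driver's typical approach speed at a stop intersection, then flag an approach that deviates strongly from it. All data below are invented.

```python
import math

# Hedged illustration, not the paper's method: model one driver's usual
# approach speed as mean/std and flag strong deviations as "unusual"
# (a possible indicator of missing context awareness).
def fit(samples):
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / (len(samples) - 1)
    return mu, math.sqrt(var)

def is_unusual(speed, mu, sigma, k=3.0):
    """True if the observed speed lies more than k standard deviations from the mean."""
    return abs(speed - mu) > k * sigma

mu, sigma = fit([18.2, 19.5, 17.8, 20.1, 18.9])  # km/h, made-up training data
print(is_unusual(35.0, mu, sigma))  # far above this driver's norm -> True
```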

    • Title: Enhancing Mobile Object Classification Using Geo-referenced Maps and Evidential Grids 16:30
      Authors: Marek Kurdej, Julien Moras, Veronique Cherfaoui, Philippe Bonnifait 17min + 3min questions
      Presentation, paper

      Abstract: Evidential grids have recently shown interesting properties for mobile object perception. Evidential grids are a generalisation of Bayesian occupancy grids using Dempster-Shafer theory. In particular, these grids can efficiently handle partial information. The novelty of this article is to propose a perception scheme enhanced by geo-referenced maps used as an additional source of information, which is fused with a sensor grid. The paper presents the key stages of such a data fusion process. An adaptation of the conjunctive combination rule is presented to refine the analysis of conflicting information. The method uses temporal accumulation to distinguish between stationary and mobile objects, and applies contextual discounting to model information obsolescence. As a result, the method is able to better characterise the occupied cells by differentiating, for instance, moving objects, parked cars, urban infrastructure and buildings. Experiments carried out on real-world data illustrate the benefits of such an approach.
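      For readers unfamiliar with evidential fusion, the unnormalised conjunctive Dempster-Shafer combination underlying such grids can be sketched in a few lines. The frame, mass values and "map prior" below are invented for illustration, not taken from the paper.

```python
from itertools import product

# Sketch of conjunctive Dempster-Shafer combination on a tiny frame
# {F (free), O (occupied)}. Focal elements are frozensets; the empty
# set accumulates the conflict mass between the two sources.
def conjunctive(m1, m2):
    out = {}
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + x * y
    return out

F, O = frozenset({"F"}), frozenset({"O"})
FO = F | O  # ignorance: "free or occupied"

sensor = {F: 0.6, O: 0.3, FO: 0.1}   # one evidential grid cell from the sensor
prior  = {O: 0.5, FO: 0.5}           # e.g. a geo-referenced map says "building likely"

fused = conjunctive(sensor, prior)
# fused[frozenset()] is the conflict (sensor says free, map says occupied);
# analysing this conflict is what the adapted combination rule refines.
```

Keeping mass on the ignorance set FO is how such grids represent partial information that a plain Bayesian occupancy grid cannot.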

    Award ceremony and Closing 16:50
    Best Paper Award PPNIV'13
    Author Information

      Format of the paper: Papers should be prepared according to the IROS13 final camera-ready format and should be 4 to 6 pages long. Detailed information on the paper format is available from the IROS13 page: http://www.iros2013.org/instructions.html. Papers must be sent to Philippe Martinet by email at Philippe.Martinet@irccyn.ec-nantes.fr

      Important dates (preliminary)

      • Deadline for paper submission: Extended to July 15th, 2013
      • Acceptance with review comments: July 30th, 2013
      • Deadline for final paper submission: August 21st, 2013, 12am at the latest

      Talk information

      • Invited talk: 40 min (35 min talk, 5 min question)
      • Other talk: 20 min (17 min talk, 3 min question)

      Interactive session

      • Interactive and open session: 1h00

    Previous workshops

      Previously, several workshops have been organized in closely related fields. The 1st edition, PPNIV'07, was held in Rome during ICRA'07 (around 60 attendees); the second, PPNIV'08, in Nice during IROS'08 (more than 90 registered attendees); the third, PPNIV'09, in Saint-Louis during IROS'09 (around 70 attendees); and the fourth edition, PPNIV'12, in Vilamoura during IROS'12 (over 95 attendees).
      In parallel, we have also organized SNODE'07 in San Diego during IROS'07 (around 80 attendees), SNODE'09 in Kobe during ICRA'09 (around 70 attendees), RITS'10 in Anchorage during ICRA'10 (around 35 attendees), and most recently PNAVHE11 in San Francisco during IROS'11 (around 50 attendees).

      A special issue of the IEEE Transactions on ITS, mainly focused on car and ITS applications, was published in September 2009. We are preparing a new proposal for a special issue on "Perception and Planning for Autonomous Vehicles" in the ITS Magazine.


      Proceedings: The workshop proceedings will be published on the IROS Workshop/Tutorial CD-ROM and electronically as a PDF file.

      Special issue: Selected papers will be considered for a special issue of the IEEE Intelligent Transportation Systems Magazine in connection with this workshop. We will issue an open call; submissions will go through a separate peer-review process.

      Best paper award: This year the IEEE-RAS TC on AGV-ITS will offer a best paper award with a prize of $500 for the winner.