Multi-modal sensing (Radar, Lidar, Imager, etc.)

For autonomous agents, it is essential to have a clear view and understanding of their surroundings in order to navigate safely. To generate this view, many different types of sensors can be used, including vision, lidar, radar, ultrasound, hyperspectral, and infrared. The fusion can be based on a time series of data from the same sensing modality or from several different modalities; the latter is called multi-modal sensor fusion. Multi-modal sensor fusion has the advantage that the different modalities capture complementary aspects of an object or scene, increasing the information gain. Because sensor measurements are inherently uncertain, fusing the (time-series) data is non-trivial, and as soon as the sensors are mounted on a moving agent, issues such as synchronization, drift, and blur make fusion more difficult still. Fusing sensors of different modalities is even more challenging because they perceive different aspects of the environment: one sensor may detect an object that another misses entirely because of the object's material, for instance a visual camera sees through glass while an ultrasound sensor receives a return from it. The sizes of objects and the distances to them are also often reported differently across modalities. Together, these aspects make multi-modal sensor fusion a challenging problem.
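As a minimal illustration of how complementary, uncertain measurements can be combined, consider two noisy range estimates of the same object, one from lidar and one from radar, that are assumed to be already synchronized and Gaussian. The sketch below uses inverse-variance weighting (the static, single-measurement case of a Kalman update); the function name and the noise values are illustrative assumptions, not taken from the source.

```python
import math

def fuse_range_estimates(z_lidar, var_lidar, z_radar, var_radar):
    """Fuse two noisy range measurements of the same object by
    inverse-variance weighting: the more certain sensor gets the
    larger weight, and the fused variance is smaller than either input."""
    w_lidar = 1.0 / var_lidar
    w_radar = 1.0 / var_radar
    fused = (w_lidar * z_lidar + w_radar * z_radar) / (w_lidar + w_radar)
    fused_var = 1.0 / (w_lidar + w_radar)
    return fused, fused_var

# Hypothetical example: lidar reports 10.2 m (std 0.05 m),
# radar reports 10.8 m (std 0.5 m) for the same target.
fused, fused_var = fuse_range_estimates(10.2, 0.05**2, 10.8, 0.5**2)
print(f"fused range: {fused:.2f} m, std: {math.sqrt(fused_var):.3f} m")
```

In practice the fusion must also handle the complications described above (time alignment, calibration between sensor frames, and objects visible to one modality but not another), which is why full systems typically use recursive filters or learned fusion models rather than a single weighted average.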

    Related Conferences of Multi-modal sensing (Radar, Lidar, Imager, etc.)

    May 22-23, 2024: 11th Global Meet on Wireless and Satellite Communications, Amsterdam, Netherlands
    July 25-26, 2024: 23rd International Conference on Big Data & Data Analytics, Amsterdam, Netherlands
    November 20-21, 2024: 5th World Congress on Robotics and Automation, Paris, France