Enhancing Indoor Navigation with MRDVS Ceiling Visual SLAM Solution

Introduction

Autonomous localization plays a critical role in modern automation and robotics. As applications multiply and demands grow, the methods and technologies behind it continue to evolve: from traditional sensor-based approaches to modern techniques built on vision and deep learning, autonomous localization has become a key driver of industrial automation and intelligence.

Challenges of Traditional Localization Methods

Laser Localization Technology: Laser localization has been widely adopted in industrial and commercial environments thanks to its high precision and stability. The approach scans the surroundings with a laser and computes the robot's position and orientation from the reflected returns. It has limitations, however. In dynamic environments with frequently moving people and transport equipment, laser localization can be disrupted, and in geometrically simple or repetitive environments (such as long corridors) its accuracy degrades.

Visual Localization Technology: Visual localization mimics the human eye, extracting rich texture and color information from the environment to help robots understand and recognize their surroundings. This makes it well suited to complex, dynamically changing environments and can significantly improve localization accuracy and reliability. Traditional 2D visual SLAM (Simultaneous Localization and Mapping) methods, however, face challenges of their own, such as sensitivity to lighting changes and the inability to acquire depth information directly.

MRDVS Ceiling Visual SLAM Solution for Indoor Navigation

To address these challenges, the MRDVS Ceiling Visual SLAM Solution fuses 2D visual data with 3D depth information from high-resolution visual sensors. By incorporating deep learning models and SAM (Segment Anything Model) segmentation technology, the system delivers high-precision localization and navigation in dynamic environments. Through high-precision recognition and tracking of key targets in industrial scenes, it achieves accurate, reliable autonomous localization and significantly enhances indoor navigation.

Ceiling-Vision SLAM

Technical Approach

Deep Learning Model Construction: The project builds deep learning networks tailored to industrial application scenarios, leveraging a large accumulated corpus of industrial data. Important target objects in the work environment (people, material carts, forklifts, goods, equipment, signage, etc.) are labeled, and the dataset is dynamically updated and expanded over time. The trained network generalizes well and is combined with SAM semantic segmentation to produce precise semantic segmentation of 2D images.
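As an illustration of how such a class taxonomy can be used downstream, the sketch below routes detector output through a dynamic/static split: dynamic classes are candidates for filtering during frame matching, while static classes can serve as landmarks. The `Detection` type, class names, and confidence threshold are illustrative assumptions, not the MRDVS API:

```python
from dataclasses import dataclass

# Assumed class taxonomy, following the target objects named above.
DYNAMIC_CLASSES = {"person", "material_cart", "forklift"}
STATIC_CLASSES = {"goods", "equipment", "signage"}

@dataclass
class Detection:
    label: str         # semantic class from the deep learning network
    confidence: float  # detector score in [0, 1]
    mask: object       # per-pixel segmentation mask (e.g. from SAM)

def split_detections(detections, min_conf=0.5):
    """Separate confident detections into dynamic and static groups;
    low-confidence detections are discarded."""
    dynamic, static = [], []
    for det in detections:
        if det.confidence < min_conf:
            continue
        if det.label in DYNAMIC_CLASSES:
            dynamic.append(det)
        elif det.label in STATIC_CLASSES:
            static.append(det)
    return dynamic, static
```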

Mapping Phase:

  • 2D-3D Data Acquisition: Simultaneously acquire 2D images and 3D point cloud data from high-resolution visual sensors at a 10 Hz frame rate, with odometry data collected at a higher frequency.
  • Multilevel Feature Extraction: Extract corner features from the RGB texture images, geometric information from the point cloud, and high-level semantic information via deep learning models.
  • 2D-3D Data Reprojection: Generate accurate 3D point clouds of objects of interest using multi-frame tracking and fusion.
  • Frame Matching: Estimate inter-frame motion transformations while filtering out the point clouds of dynamic objects (such as pedestrians and other AGVs).
  • Key Submap Establishment: Build local submaps by fusing multiple frames that share significant co-visible areas.
  • Loop Closure Optimization: Automatically search for loop-closure frames within close spatial distances to keep the map globally consistent.
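The 2D-3D reprojection step above relies on projecting each 3D point into the image so it can inherit a 2D semantic label from the segmentation result. A minimal sketch under a pinhole camera model, where the intrinsics (fx, fy, cx, cy) and the `mask_lookup` function standing in for the segmentation output are illustrative assumptions:

```python
def project_point(p, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z) in the camera frame onto the image
    plane with pinhole intrinsics; returns None for points behind the camera."""
    X, Y, Z = p
    if Z <= 0:
        return None
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return (u, v)

def label_cloud(points, mask_lookup, fx, fy, cx, cy, width, height):
    """Attach a 2D semantic label to each 3D point by reprojecting it into
    the image and reading the segmentation mask at the resulting pixel."""
    labelled = []
    for p in points:
        uv = project_point(p, fx, fy, cx, cy)
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        if 0 <= u < width and 0 <= v < height:
            labelled.append((p, mask_lookup(u, v)))
    return labelled
```

In the real pipeline the labelled points from several frames would then be tracked and fused into a single, denoised object point cloud.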

Localization Phase:

  • Real-Time Feature Extraction: Extract features from the current frame and rapidly match them against the 3D geometric and high-level semantic features stored in the map, enabling automatic online positioning from any starting location.
  • Position Calculation: Match features against the map in real time to obtain precise pose information.
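Once frame features are matched to map features, the pose can be recovered by rigid alignment of the matched pairs. The sketch below shows a closed-form least-squares alignment for the planar (2D) case; the actual system works with full 3D geometric and semantic features, so this is a simplified illustration only:

```python
import math

def estimate_pose_2d(frame_pts, map_pts):
    """Closed-form least-squares 2D rigid alignment from matched point
    pairs: returns (theta, tx, ty) such that rotating the frame points by
    theta and translating by (tx, ty) best overlays them on the map points."""
    n = len(frame_pts)
    # Centroids of both point sets.
    fx = sum(p[0] for p in frame_pts) / n
    fy = sum(p[1] for p in frame_pts) / n
    mx = sum(p[0] for p in map_pts) / n
    my = sum(p[1] for p in map_pts) / n
    # Accumulate the cross-covariance terms of the centered points.
    s_cos = s_sin = 0.0
    for (ax, ay), (bx, by) in zip(frame_pts, map_pts):
        ax, ay = ax - fx, ay - fy
        bx, by = bx - mx, by - my
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated frame centroid onto the map centroid.
    tx = mx - (c * fx - s * fy)
    ty = my - (s * fx + c * fy)
    return theta, tx, ty
```

A robust implementation would wrap this inside an outlier-rejection loop (e.g. RANSAC over the matches) before refining the pose.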

Results and Applications

Visual Localization Module Development: The MRDVS Ceiling Visual SLAM Solution includes a visual localization module composed of high-resolution visual sensors and visual mapping-and-localization algorithms. Testing has shown that the solution reconstructs spatial structures effectively, accurately reflecting the real environment without the loop-closure failures and scale drift common to traditional visual SLAM methods.

Industrial Applications:

  • Photovoltaic Industry: Successfully deployed at a leading domestic new-energy enterprise, replacing traditional 2D LiDAR localization. The system has been running stably for over a year in large-scale workshops.
  • Automotive Manufacturing: Deployed in a large, highly dynamic engine production workshop where traditional 2D LiDAR localization struggled.
  • Textile Manufacturing: Used in deep-storage warehouses, where the constantly changing placement of stored materials makes 2D LiDAR navigation difficult. The visual localization module is installed on automated forklift robots for efficient pallet transport.

The MRDVS Ceiling Visual SLAM Solution represents a significant advance in autonomous indoor navigation. By integrating 2D and 3D visual data with deep learning models, it delivers high accuracy and reliability in dynamic environments, providing robust and scalable navigation capabilities for a wide range of industrial applications.

Click to learn more about the solution: CV-SLAM: Advanced Localization and Mapping Solutions (mrdvs.com)
