
4D-BEV annotation solution for autonomous vehicles

From: Nexdata  Date: 2024-08-14

Bird's Eye View (BEV) offers a top-down, god's-eye perspective of the road scene: data from the vehicle's multiple sensors is collected and fed into a unified model for joint reasoning. Because the resulting bird's eye view presents data from all sensors in one shared perspective, it avoids the accumulation of errors across separate processing pipelines and addresses the core challenge of fusing multi-sensor data for autonomous driving.
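To make the idea concrete, here is a minimal sketch, assuming NumPy and purely illustrative grid ranges and resolution, of how lidar points in the ego frame might be rasterized into a BEV occupancy grid (this is not Nexdata's implementation):

```python
import numpy as np

def lidar_to_bev_grid(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), resolution=0.2):
    """Rasterize (N, 3) ego-frame lidar points into a 2D BEV occupancy grid.

    x_range / y_range are metric bounds around the ego vehicle and resolution
    is the size of one BEV cell in meters; all values here are illustrative.
    """
    width = int((x_range[1] - x_range[0]) / resolution)
    height = int((y_range[1] - y_range[0]) / resolution)
    grid = np.zeros((height, width), dtype=np.uint8)

    # Keep only points that fall inside the chosen BEV extent.
    mask = (
        (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
        (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
    )
    pts = points[mask]

    # Convert metric coordinates to integer cell indices and mark occupied cells.
    cols = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    rows = ((pts[:, 1] - y_range[0]) / resolution).astype(int)
    grid[rows, cols] = 1
    return grid

# Usage: 10,000 random points within +/-60 m become a 500 x 500 occupancy grid.
points = np.random.uniform(-60.0, 60.0, size=(10000, 3))
bev = lidar_to_bev_grid(points)
print(bev.shape, int(bev.sum()))
```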

 

Within the BEV space, once the coordinate systems are aligned, frames can also be fused along the time axis, forming a 4D space. However, given the massive volume of point cloud data involved, traditional 3D annotation techniques are no longer sufficient, and the industry has turned its attention to 4D annotation techniques tailored for BEV.
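As a rough illustration of that 4D space, the sketch below (shapes and channel counts are assumptions, not values from any real system) stacks per-frame BEV feature maps that already share a coordinate system along a new time axis:

```python
import numpy as np

# Assume one BEV feature map per timestamp, already expressed in a shared
# coordinate system: (channels, height, width). Shapes are illustrative only.
frames = [np.random.rand(3, 200, 200).astype(np.float32) for _ in range(10)]

# Stacking along a new leading time axis yields a 4D tensor (T, C, H, W) --
# the "4D space" that temporal fusion and 4D annotation operate on.
bev_sequence = np.stack(frames, axis=0)
print(bev_sequence.shape)  # (10, 3, 200, 200)
```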

 

I. 4D Annotation Based on BEV

 

4D-BEV annotation technology introduces a fourth dimension, the time axis, into AI data annotation. Working from the bird's eye view, annotators label objects such as vehicles, pedestrians, and traffic signs, recording their positions and sizes. At the same time, time-axis annotation records when each object enters and leaves the scene, helping algorithms track object trajectories more accurately and thereby improving the safety and decision support of autonomous driving.
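A hypothetical data structure for one such 4D annotation might look like the following sketch; the class and field names are illustrative assumptions, not Nexdata's export format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoxAtTime:
    """One 3D box observation of an object at a single timestamp."""
    timestamp: float  # seconds
    center: tuple     # (x, y, z) in the shared BEV/world frame, meters
    size: tuple       # (length, width, height), meters
    yaw: float        # heading angle, radians

@dataclass
class Track4D:
    """A 4D annotation: one object tracked along the time axis."""
    track_id: int
    category: str     # e.g. "vehicle", "pedestrian", "traffic_sign"
    boxes: List[BoxAtTime] = field(default_factory=list)

    @property
    def entry_time(self) -> float:
        # Time the object first appears in the sequence.
        return min(b.timestamp for b in self.boxes)

    @property
    def exit_time(self) -> float:
        # Time the object is last observed.
        return max(b.timestamp for b in self.boxes)

# Usage: a vehicle tracked across two consecutive frames.
track = Track4D(track_id=7, category="vehicle", boxes=[
    BoxAtTime(0.0, (12.0, 3.5, 0.9), (4.6, 1.9, 1.6), 0.02),
    BoxAtTime(0.1, (12.8, 3.5, 0.9), (4.6, 1.9, 1.6), 0.02),
])
print(track.entry_time, track.exit_time)
```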

 

To help clients quickly and cost-effectively build large amounts of high-quality 4D-BEV ground truth data for perception training and evaluation, Nexdata has launched a 4D-BEV annotation solution.

 

Nexdata's 4D annotation tool annotates in 3D space plus the temporal dimension. It supports multiple sensor fusion methods covering lidar, millimeter-wave radar, cameras, and overhead views, and also supports data alignment and fusion across sensors. The platform's built-in pre-recognition annotation technology significantly improves annotation efficiency and accuracy.
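Aligning data from different sensors usually comes down to applying extrinsic calibration transforms so that everything lands in a single reference frame. The sketch below assumes NumPy and an invented radar-to-lidar extrinsic purely for illustration:

```python
import numpy as np

def transform_points(points, T):
    """Apply a 4x4 rigid transform T to (N, 3) points via homogeneous coordinates."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (T @ homo.T).T[:, :3]

# Illustrative extrinsic: radar frame -> lidar frame (identity rotation,
# radar mounted 2 m ahead of and 0.5 m below the lidar).
T_radar_to_lidar = np.eye(4)
T_radar_to_lidar[:3, 3] = [2.0, 0.0, -0.5]

radar_points = np.array([[10.0, 1.0, 0.0], [25.0, -2.0, 0.1]])
print(transform_points(radar_points, T_radar_to_lidar))
```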

 

II. Highlights of the Annotation Tool

 

Smoothly handles point clouds containing billions of points.

 

The 4D point cloud annotation template uses Potree, a WebGL-based point cloud visualization framework, for rendering, enabling interactive display of large-scale point cloud data in the browser.

 

Obtains mapping parameters directly from the data collection stage, avoiding the parameter bias that can accumulate across multiple frames.

 

Personalized color settings for accurate discrimination of point cloud targets.

 

Built-in preloading function that effectively improves annotation efficiency.

 

Mature and efficient pre-recognition annotation processing capabilities.

 

III. Case Studies

 

4D Lane Marking Annotation:

Annotation of continuous frames of lidar point cloud data with the corresponding global pose information and the auxiliary image data for each moment.

 

Annotation of lane markings after frame stacking, including solid lines, dashed lines, double solid lines, double dashed lines, guide lines, etc.

 

Adjustment and pasting of the mapped 2D lane markings onto each camera image.
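Mapping annotated 3D lane markings back onto a camera image is typically a pinhole projection with that camera's intrinsics and extrinsics. The matrices and lane polyline below are illustrative assumptions, not real calibration data:

```python
import numpy as np

def project_to_image(points_3d, T_world_to_cam, K):
    """Project (N, 3) world-frame points into pixel coordinates with a pinhole model."""
    homo = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    cam = (T_world_to_cam @ homo.T).T[:, :3]   # points in the camera frame
    cam = cam[cam[:, 2] > 0]                   # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]              # perspective divide -> pixel coordinates

# Illustrative intrinsics (fx, fy, cx, cy) and a world-to-camera extrinsic.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
T_world_to_cam = np.eye(4)

# A short lane polyline in front of the camera (camera convention: z forward).
lane_points = np.array([[0.0, 1.0, 5.0], [0.1, 1.0, 10.0], [0.2, 1.0, 20.0]])
print(project_to_image(lane_points, T_world_to_cam, K))
```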

 

 

4D Segmentation Annotation:

Reconstruction and stacking of a sequence of frames based on pose parameters (a minimal sketch follows this case).

 

Annotation of semantic segmentation after frame stacking, including categories such as vegetation, drivable area, unknown obstacles, etc.
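The reconstruction-and-stacking step above generally means transforming each frame's points into a common world frame using its global pose and concatenating the results. The sketch below uses NumPy with made-up poses for illustration only:

```python
import numpy as np

def stack_frames(frames, poses):
    """Transform each (N_i, 3) frame into the world frame with its 4x4 pose, then concatenate."""
    stacked = []
    for points, T_ego_to_world in zip(frames, poses):
        homo = np.hstack([points, np.ones((points.shape[0], 1))])
        stacked.append((T_ego_to_world @ homo.T).T[:, :3])
    return np.vstack(stacked)

# Two toy frames while the ego vehicle moves 1 m forward along x between frames.
frames = [np.random.rand(1000, 3), np.random.rand(1000, 3)]
poses = [np.eye(4), np.eye(4)]
poses[1][:3, 3] = [1.0, 0.0, 0.0]

world_cloud = stack_frames(frames, poses)
print(world_cloud.shape)  # (2000, 3)
```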

 

Leveraging years of data collection and annotation experience and a comprehensive data solution, Nexdata collaborates closely with hundreds of autonomous driving companies worldwide, spanning OEMs, new EV startups, leading tech firms, mainstream algorithm companies, and top-tier Tier 1 suppliers. Going forward, Nexdata will continue to invest in research and development, keep improving its AI infrastructure, and help users train and deploy AI applications more conveniently.
