Dataset

EagleVision is a unified LiDAR-based perception benchmark for high-speed autonomous racing; all source domains are standardized into a common annotation format and coordinate convention.

Domains
Three standardized datasets:
  • IAC (Indy Autonomous Challenge): ROS bag recordings with GPS-based state; 3D boxes are manually labeled.
  • A2RL Simulator: Official simulator data with ground-truth 3D boxes exported to the unified format.
  • A2RL Real-World: Competition racing data (ROS bags) manually labeled under real sensor noise and occlusion.
What’s annotated
Detection + prediction
  • 3D Detection: single class Car with 3D bounding boxes (PSR).
  • Trajectory Prediction: per-frame ego/vehicle pose entries used to build observation/prediction windows.

Dataset statistics

Domain | LiDAR Hz | Annotated frames | Avg objects/frame | Points/scan (approx.)

Annotation format

3D Detection (PSR JSON)
Each box stores Position (x, y, z), Scale (l, w, h), and Rotation (yaw).

Boxes are defined in the ego-vehicle LiDAR frame, without motion compensation.
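As a minimal sketch of reading such a label, the snippet below parses a single PSR entry with Python's standard json module. The field names (obj_type, psr, position, scale, rotation) are illustrative assumptions, not the official EagleVision schema:

```python
import json
import math

# Hypothetical PSR (Position, Scale, Rotation) entry; field names are
# assumptions for this sketch, not the official EagleVision schema.
psr_entry = """
{
  "obj_type": "Car",
  "psr": {
    "position": {"x": 42.7, "y": -3.1, "z": 0.6},
    "scale":    {"x": 4.9,  "y": 1.9,  "z": 1.2},
    "rotation": {"x": 0.0,  "y": 0.0,  "z": 0.12}
  }
}
"""

box = json.loads(psr_entry)
pos = box["psr"]["position"]
yaw = box["psr"]["rotation"]["z"]  # yaw about the ego LiDAR z-axis, in radians
print(box["obj_type"], (pos["x"], pos["y"], pos["z"]), math.degrees(yaw))
```

Since labels are not motion-compensated, any downstream use across frames must transform boxes with the corresponding ego pose first.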

Trajectory Prediction (Pose JSON)
Frame-wise position + quaternion

Each entry stores frame id, timestamp, 3D position, and unit quaternion orientation.
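A hedged sketch of how such pose entries can be split into observation/prediction windows for the trajectory task; the key names ("frame", "t", "pos", "quat") and the helper make_windows are assumptions of this example, not the official schema:

```python
import json

# Illustrative pose entries: frame id, timestamp, 3D position, and a
# unit quaternion (w, x, y, z). Key names are assumed for this sketch.
poses = json.loads("""
[
  {"frame": 0, "t": 0.00, "pos": [0.0,  0.0, 0.0], "quat": [1, 0, 0, 0]},
  {"frame": 1, "t": 0.05, "pos": [4.1,  0.0, 0.0], "quat": [1, 0, 0, 0]},
  {"frame": 2, "t": 0.10, "pos": [8.3,  0.1, 0.0], "quat": [1, 0, 0, 0]},
  {"frame": 3, "t": 0.15, "pos": [12.4, 0.1, 0.0], "quat": [1, 0, 0, 0]}
]
""")

def make_windows(entries, obs_len, pred_len):
    """Slide over the pose sequence, yielding (observation, prediction) pairs."""
    span = obs_len + pred_len
    for i in range(len(entries) - span + 1):
        yield entries[i:i + obs_len], entries[i + obs_len:i + span]

for obs, pred in make_windows(poses, obs_len=2, pred_len=2):
    print([e["frame"] for e in obs], "->", [e["frame"] for e in pred])
```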


Coordinate convention (minimal)

  • All labels are in the ego-vehicle LiDAR coordinate frame.
  • 3D boxes are parameterized by center (x,y,z), size (l,w,h), and yaw.
  • Only one semantic class is used: Car.
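The (x, y, z, l, w, h, yaw) parameterization above implies a standard corner computation in the bird's-eye view. A minimal sketch (helper name and argument order are my own):

```python
import math

def box_corners_bev(x, y, l, w, yaw):
    """Four bird's-eye-view corners of a box given its center (x, y),
    size (l, w), and yaw, all in the ego-vehicle LiDAR frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    # Half-extent offsets in the box frame, rotated by yaw into the ego frame.
    return [(x + c * dx - s * dy, y + s * dx + c * dy)
            for dx, dy in [(l/2, w/2), (l/2, -w/2), (-l/2, -w/2), (-l/2, w/2)]]

# Axis-aligned box (yaw = 0): corners are simply center +/- half-extents.
print(box_corners_bev(10.0, 0.0, 4.0, 2.0, 0.0))
# -> [(12.0, 1.0), (12.0, -1.0), (8.0, -1.0), (8.0, 1.0)]
```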