For the ntu_ sequences, the Ouster lidar has 128 channels with 1024 points per channel. For the kth_ and tuhh_ sequences, the Ouster lidar has 64 channels with 1024 points per channel. In all sequences, the Livox mid70 lidar has a single non-repetitive scan line with 9984 points per line. All lidars output data at 10 Hz.
For the Ouster lidar, the C++ point struct is defined following the PCL convention as follows:
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

struct PointOuster
{
    PCL_ADD_POINT4D;            // float x, y, z plus padding
    float intensity;            // signal intensity
    uint32_t t;                 // time offset from the start of the scan, in ns
    uint16_t reflectivity;      // calibrated reflectivity
    uint16_t ambient;           // near-infrared ambient level
    uint32_t range;             // range measurement, in mm
    uint8_t ring;               // laser channel index
    EIGEN_MAKE_ALIGNED_OPERATOR_NEW
} EIGEN_ALIGN16;
POINT_CLOUD_REGISTER_POINT_STRUCT(PointOuster,
(float, x, x) (float, y, y) (float, z, z)
(float, intensity, intensity)
(uint32_t, t, t)
(uint16_t, reflectivity, reflectivity)
(uint16_t, ambient, ambient)
(uint32_t, range, range)
(uint8_t, ring, ring)
)
For Python programming, we recommend the pypcd package, which can read the point cloud directly from a ROS message into a structured numpy array with the field names intact. Please find the demo here.
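As an illustration, a minimal rospy subscriber using pypcd might look like the sketch below (the topic name /os_cloud_node/points is an assumption; check the bag files for the actual topic names):

import rospy
import pypcd
from sensor_msgs.msg import PointCloud2

def cloud_callback(msg):
    # Convert the ROS message into a numpy structured array
    pc = pypcd.PointCloud.from_msg(msg)
    x = pc.pc_data['x']          # per-point fields, accessed by name
    t = pc.pc_data['t']          # per-point time offset, in ns
    ring = pc.pc_data['ring']    # laser channel index
    print(x.shape, t.dtype, ring.max())

rospy.init_node('ouster_reader')
rospy.Subscriber('/os_cloud_node/points', PointCloud2, cloud_callback)
rospy.spin()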
For the Livox lidar, the points have a custom structure defined by the manufacturer as follows:
# Livox custom pointcloud format.
uint32 offset_time # offset time relative to the base time
float32 x # X axis, unit:m
float32 y # Y axis, unit:m
float32 z # Z axis, unit:m
uint8 reflectivity # reflectivity, 0~255
uint8 tag # livox tag
uint8 line # laser number in lidar
Please follow the instructions here to install the Livox driver, after which you should be able to import and manipulate the Livox messages in your ROS Python or C++ programs.
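For example, a minimal rospy subscriber for the Livox messages might look like the sketch below (the topic name /livox/lidar is an assumption; the CustomMsg layout follows the livox_ros_driver definition, where msg.timebase holds the base time of the frame):

import rospy
from livox_ros_driver.msg import CustomMsg

def livox_callback(msg):
    # offset_time is relative to the frame base time msg.timebase (ns)
    for p in msg.points[:5]:
        t_abs = msg.timebase + p.offset_time
        print(p.x, p.y, p.z, p.reflectivity, p.line, t_abs)

rospy.init_node('livox_reader')
rospy.Subscriber('/livox/lidar', CustomMsg, livox_callback)
rospy.spin()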
For MCD, please take note of the following:
The main coordinate systems in MCD are defined as follows:
First, the coordinate system of the prior maps is referred to as the World frame \(\mathtt{W}\). Then the Body frame \(\mathtt{B}\) coincides with the VN100 IMU in the NTU sequences, and with the VN200 IMU in the KTH and TUHH sequences. Each sensor has a Sensor frame \({\mathtt{S}}\) attached to it.
The extrinsics of the sensors in MCD are declared as transformation matrices \({}^{\mathtt{B}}_{\mathtt{S}}\bf{T} = \begin{bmatrix} {}^{\mathtt{B}}_{\mathtt{S}}\mathrm{R} & {}^{\mathtt{B}}_{\mathtt{S}}\mathrm{t} \\ 0 &1\end{bmatrix}\), where \({}^{\mathtt{B}}_{\mathtt{S}}\mathrm{R}\) and \({}^{\mathtt{B}}_{\mathtt{S}}\mathrm{t}\) are respectively the rotational and translational extrinsics.
Therefore if one observes a landmark \({}^{\mathtt{C}}\mathrm{f}\) in the camera frame \(\mathtt{C}\), its coordinate in the body frame \({}^{\mathtt{B}}\mathrm{f}\) is calculated as:
\[{}^{\mathtt{B}}\mathrm{f} = {}^{\mathtt{B}}_{\mathtt{C}}\mathrm{R}\, {}^{\mathtt{C}}\mathrm{f} + {}^{\mathtt{B}}_{\mathtt{C}}\mathrm{t}\]
The ground truth data in our csv files are the poses \(({}^{\mathtt{W}}_{\mathtt{B}_t}\mathrm{q}, {}^{\mathtt{W}}_{\mathtt{B}_t}\mathrm{p})\), where \({}^{\mathtt{W}}_{\mathtt{B}_t}\mathrm{q}\) and \({}^{\mathtt{W}}_{\mathtt{B}_t}\mathrm{p}\) are respectively the orientation (as a quaternion) and the position of the body frame at time \(t\) relative to the world frame.
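As a numerical illustration, this transformation can be carried out with numpy as in the sketch below (the extrinsic values here are placeholders; load the actual values from the MCD calibration files):

import numpy as np

R_BC = np.eye(3)                     # ^B_C R, rotational extrinsics (placeholder)
t_BC = np.array([0.1, 0.0, 0.05])    # ^B_C t, translational extrinsics in m (placeholder)

# The full homogeneous transformation matrix ^B_C T
T_BC = np.eye(4)
T_BC[:3, :3] = R_BC
T_BC[:3, 3] = t_BC

f_C = np.array([1.0, 2.0, 3.0])      # hypothetical landmark in the camera frame
f_B = R_BC @ f_C + t_BC              # the same landmark in the body frame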
In most cases, the SLAM estimate \({}^{\mathtt{B}_0}_{\mathtt{B}_t}\hat{\bf{T}}\) is expressed relative to the coordinate frame that coincides with the body frame at the initial time. It is therefore necessary to align the SLAM estimate with the ground truth before evaluation. The evo package is a popular tool for this task.
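A minimal sketch of such an alignment with evo's Python API, assuming both trajectories have been converted to the TUM text format (the file names are placeholders):

from evo.tools import file_interface
from evo.core import sync, metrics

traj_gt = file_interface.read_tum_trajectory_file('ground_truth.txt')
traj_est = file_interface.read_tum_trajectory_file('slam_estimate.txt')

# Associate poses by timestamp, then align the estimate to the ground truth
traj_gt, traj_est = sync.associate_trajectories(traj_gt, traj_est)
traj_est.align(traj_gt)    # SE(3) Umeyama alignment

ape = metrics.APE(metrics.PoseRelation.translation_part)
ape.process_data((traj_gt, traj_est))
print('APE RMSE:', ape.get_statistic(metrics.StatisticsType.rmse))

Alternatively, evo's command line tools (e.g. evo_ape with the -a flag) perform the same alignment without any scripting.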