Front View Projection
To flatten the front view of a LIDAR sensor into a 2D image, the point cloud in 3D space must be projected onto a cylindrical surface, which can then be unrolled into a flat plane. Following the paper Vehicle Detection from 3D Lidar Using Fully Convolutional Network, the code below performs this projection.
```python
import numpy as np

# h_res = horizontal resolution of the lidar sensor
# v_res = vertical resolution of the lidar sensor
# x_lidar, y_lidar, z_lidar = coordinates of each point in the cloud
x_img = np.arctan2(y_lidar, x_lidar) / h_res
y_img = np.arctan2(z_lidar, np.sqrt(x_lidar**2 + y_lidar**2)) / v_res
```
The problem is that this places the seam of the image directly to the right of the car. It makes more sense to position the seam at the very rear of the car, so that the more important regions to the front and sides remain unbroken. Keeping those important regions uninterrupted makes it easier for a convolutional neural network to recognize whole objects within them. The following code fixes this.
```python
# h_res = horizontal resolution of the lidar sensor
# v_res = vertical resolution of the lidar sensor
x_img = np.arctan2(-y_lidar, x_lidar) / h_res  # seam in the back
y_img = np.arctan2(z_lidar, np.sqrt(x_lidar**2 + y_lidar**2)) / v_res
```
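To make the formulas concrete, here is a small worked example (the point coordinates and resolution are illustrative values, not from the original post):

```python
import numpy as np

# Illustrative point: 10 m ahead, 5 m to the left, 1 m below the sensor,
# with both resolutions set to 0.4 degrees (converted to radians).
x_lidar, y_lidar, z_lidar = 10.0, 5.0, -1.0
h_res = v_res = 0.4 * np.pi / 180

x_img = np.arctan2(-y_lidar, x_lidar) / h_res
y_img = np.arctan2(z_lidar, np.sqrt(x_lidar**2 + y_lidar**2)) / v_res
print(x_img, y_img)  # approx -66.4, -12.8 (azimuth -26.6 deg, elevation -5.1 deg)
```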
Setting the Scale Along Each Axis
The variables h_res and v_res depend on the LIDAR sensor being used. The KITTI dataset was collected with a Velodyne HDL-64E sensor, which, according to its spec sheet, has the following important properties (a quick sanity check of the image dimensions these specs imply follows the list):
- A vertical field of view of 26.9 degrees at a resolution of 0.4 degrees, split into +2 degrees above the sensor and -24.9 degrees below it;
- A 360-degree horizontal field of view at a resolution of 0.08-0.35 degrees (depending on the rotation rate);
- A selectable rotation rate between 5 and 20 Hz.
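As promised, here is a quick sanity check of the image dimensions these specs imply (a sketch of my own, assuming the 20 Hz setting):

```python
# Image dimensions implied by the HDL-64E specs (illustrative check).
h_res = 0.35          # horizontal resolution in degrees at the 20 Hz rate
v_res = 0.4           # vertical resolution in degrees
v_fov = (-24.9, 2.0)  # vertical field of view in degrees (below, above)

n_cols = 360.0 / h_res                  # ~1029 columns for a full 360-degree sweep
n_rows = (v_fov[1] - v_fov[0]) / v_res  # ~67 rows for the 26.9-degree vertical FOV
print(n_cols, n_rows)                   # 1028.57..., 67.25
```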
The code is now updated to:
```python
# Resolution and Field of View of LIDAR sensor
h_res = 0.35         # horizontal resolution, assuming rate of 20Hz is used
v_res = 0.4          # vertical res
v_fov = (-24.9, 2.0) # Field of view (-ve, +ve) along vertical axis
v_fov_total = -v_fov[0] + v_fov[1]

# Convert to Radians
v_res_rad = v_res * (np.pi / 180)
h_res_rad = h_res * (np.pi / 180)

# Project into image coordinates
d_lidar = np.sqrt(x_lidar ** 2 + y_lidar ** 2)  # distance seen from above
x_img = np.arctan2(-y_lidar, x_lidar) / h_res_rad
y_img = np.arctan2(z_lidar, d_lidar) / v_res_rad
```
However, this leaves roughly half of the points at negative x values, and most of them at negative y values. To project onto a 2D image, the minimum values must be shifted to (0, 0):
```python
# SHIFT COORDINATES TO MAKE 0,0 THE MINIMUM
x_min = -360.0 / h_res / 2  # Theoretical min x value based on specs of sensor
x_img = x_img - x_min       # Shift
x_max = 360.0 / h_res       # Theoretical max x value after shifting

y_min = v_fov[0] / v_res    # Theoretical min y value based on specs of sensor
y_img = y_img - y_min       # Shift
y_max = v_fov_total / v_res # Theoretical max y value after shifting

y_max = y_max + 5           # UGLY: Fudge factor because the calculations based on
                            # the spec sheet do not seem to match the range of
                            # angles actually collected by the sensor in the data.
```
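A quick diagnostic (my own addition, not from the original post) shows why the fudge factor is needed: compare the actual range of the projected values against the theoretical, pre-fudge maximum.

```python
# Compare actual projected ranges against the theoretical maxima.
print("x_img range: %.2f to %.2f (theoretical max %.2f)"
      % (x_img.min(), x_img.max(), x_max))
print("y_img range: %.2f to %.2f (pre-fudge max %.2f)"
      % (y_img.min(), y_img.max(), v_fov_total / v_res))
```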
Plotting as a 2D Image
The 3D point cloud has now been projected onto points in a 2D coordinate system with a minimum of (0, 0), so these points can be drawn as a 2D image.
```python
pixel_values = -d_lidar  # Use depth data to encode the value for each pixel
cmap = "jet"             # Color map to use
dpi = 100                # Image resolution

fig, ax = plt.subplots(figsize=(x_max/dpi, y_max/dpi), dpi=dpi)
ax.scatter(x_img, y_img, s=1, c=pixel_values, linewidths=0, alpha=1, cmap=cmap)
ax.set_facecolor((0, 0, 0))  # Set regions with no points to black
                             # (use ax.set_axis_bgcolor on matplotlib < 2.0)
ax.axis('scaled')            # {equal, scaled}
ax.xaxis.set_visible(False)  # Do not draw axis tick marks
ax.yaxis.set_visible(False)  # Do not draw axis tick marks
plt.xlim([0, x_max])  # prevent drawing empty space outside of horizontal FOV
plt.ylim([0, y_max])  # prevent drawing empty space outside of vertical FOV
fig.savefig("/tmp/depth.png", dpi=dpi, bbox_inches='tight', pad_inches=0.0)
```
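Rendering via matplotlib's scatter is convenient but slow and resolution-dependent. As an alternative sketch (my own assumption, not code from the original post), the projected coordinates can be quantized to integer pixel indices and written directly into a dense numpy array, which is closer to the input a convolutional network would consume:

```python
# Alternative sketch: build the image as a dense numpy array instead of a
# matplotlib scatter plot. Assumes x_img, y_img, pixel_values, x_max, y_max
# from the code above.
img = np.zeros((int(y_max) + 1, int(x_max) + 1), dtype=np.float32)
x_idx = np.clip(x_img.astype(np.int32), 0, int(x_max))
y_idx = np.clip(y_img.astype(np.int32), 0, int(y_max))
# Row 0 of an array is the top of the image, so flip the vertical index.
img[int(y_max) - y_idx, x_idx] = pixel_values  # later points overwrite earlier ones
```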
First Results
Putting all of the above code into a single function:
```python
def lidar_to_2d_front_view(points, v_res, h_res, v_fov, val="depth",
                           cmap="jet", saveto=None, y_fudge=0.0):
    """ Takes points in 3D space from LIDAR data and projects them to a 2D
        "front view" image, and saves that image.

    Args:
        points: (np array)
            The numpy array containing the lidar points.
            The shape should be Nx4
            - Where N is the number of points, and
            - each point is specified by 4 values (x, y, z, reflectance)
        v_res: (float)
            vertical resolution of the lidar sensor used.
        h_res: (float)
            horizontal resolution of the lidar sensor used.
        v_fov: (tuple of two floats)
            (minimum_negative_angle, max_positive_angle)
        val: (str)
            What value to use to encode the points that get plotted.
            One of {"depth", "height", "reflectance"}
        cmap: (str)
            Color map to use to color code the `val` values.
            NOTE: Must be a value accepted by matplotlib's scatter function
            Examples: "jet", "gray"
        saveto: (str or None)
            If a string is provided, it saves the image as this filename.
            If None, then it just shows the image.
        y_fudge: (float)
            A hacky fudge factor to use if the theoretical calculations of
            vertical range do not match the actual data.
            For a Velodyne HDL 64E, set this value to 5.
    """
    # DUMMY PROOFING
    assert len(v_fov) == 2, "v_fov must be list/tuple of length 2"
    assert v_fov[0] <= 0, "first element in v_fov must be 0 or negative"
    assert val in {"depth", "height", "reflectance"}, \
        'val must be one of {"depth", "height", "reflectance"}'

    x_lidar = points[:, 0]
    y_lidar = points[:, 1]
    z_lidar = points[:, 2]
    r_lidar = points[:, 3]  # Reflectance
    # Distance relative to origin when looked from top
    d_lidar = np.sqrt(x_lidar ** 2 + y_lidar ** 2)
    # Absolute distance relative to origin
    # d_lidar = np.sqrt(x_lidar ** 2 + y_lidar ** 2 + z_lidar ** 2)

    v_fov_total = -v_fov[0] + v_fov[1]

    # Convert to Radians
    v_res_rad = v_res * (np.pi / 180)
    h_res_rad = h_res * (np.pi / 180)

    # PROJECT INTO IMAGE COORDINATES
    x_img = np.arctan2(-y_lidar, x_lidar) / h_res_rad
    y_img = np.arctan2(z_lidar, d_lidar) / v_res_rad

    # SHIFT COORDINATES TO MAKE 0,0 THE MINIMUM
    x_min = -360.0 / h_res / 2  # Theoretical min x value based on sensor specs
    x_img -= x_min              # Shift
    x_max = 360.0 / h_res       # Theoretical max x value after shifting

    y_min = v_fov[0] / v_res    # Theoretical min y value based on sensor specs
    y_img -= y_min              # Shift
    y_max = v_fov_total / v_res # Theoretical max y value after shifting

    y_max += y_fudge            # Fudge factor if the calculations based on
                                # the spec sheet do not match the range of
                                # angles collected in the data.

    # WHAT DATA TO USE TO ENCODE THE VALUE FOR EACH PIXEL
    if val == "reflectance":
        pixel_values = r_lidar
    elif val == "height":
        pixel_values = z_lidar
    else:
        pixel_values = -d_lidar

    # PLOT THE IMAGE
    dpi = 100  # Image resolution
    fig, ax = plt.subplots(figsize=(x_max / dpi, y_max / dpi), dpi=dpi)
    ax.scatter(x_img, y_img, s=1, c=pixel_values, linewidths=0, alpha=1,
               cmap=cmap)
    ax.set_facecolor((0, 0, 0))  # Set regions with no points to black
                                 # (ax.set_axis_bgcolor on matplotlib < 2.0)
    ax.axis('scaled')            # {equal, scaled}
    ax.xaxis.set_visible(False)  # Do not draw axis tick marks
    ax.yaxis.set_visible(False)  # Do not draw axis tick marks
    plt.xlim([0, x_max])  # prevent drawing empty space outside of horizontal FOV
    plt.ylim([0, y_max])  # prevent drawing empty space outside of vertical FOV

    if saveto is not None:
        fig.savefig(saveto, dpi=dpi, bbox_inches='tight', pad_inches=0.0)
    else:
        fig.show()
```

Note: the original flattened listing re-assigned cmap = "jet" just before plotting, which silently overrode the cmap parameter; that line has been dropped so the argument takes effect.
Here is an example of how it is used:
```python
import matplotlib.pyplot as plt
import numpy as np

HRES = 0.35          # horizontal resolution (assuming 20Hz setting)
VRES = 0.4           # vertical res
VFOV = (-24.9, 2.0)  # Field of view (-ve, +ve) along vertical axis
Y_FUDGE = 5          # y fudge factor for velodyne HDL 64E

# `lidar` is an Nx4 numpy array of (x, y, z, reflectance) points
lidar_to_2d_front_view(lidar, v_res=VRES, h_res=HRES, v_fov=VFOV, val="depth",
                       saveto="/tmp/lidar_depth.png", y_fudge=Y_FUDGE)
lidar_to_2d_front_view(lidar, v_res=VRES, h_res=HRES, v_fov=VFOV, val="height",
                       saveto="/tmp/lidar_height.png", y_fudge=Y_FUDGE)
lidar_to_2d_front_view(lidar, v_res=VRES, h_res=HRES, v_fov=VFOV,
                       val="reflectance", saveto="/tmp/lidar_reflectance.png",
                       y_fudge=Y_FUDGE)
```
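The `lidar` variable above must already hold an Nx4 array of points. For KITTI, a velodyne scan can be loaded from its .bin file like this (the file path is a placeholder):

```python
import numpy as np

# Each KITTI velodyne .bin file stores float32 values in groups of four:
# (x, y, z, reflectance). The path below is illustrative.
lidar = np.fromfile("data/velodyne/000000.bin", dtype=np.float32)
lidar = lidar.reshape((-1, 4))
```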
This generates the following three kinds of image:
Depth map:
Height map:
Reflectance (intensity) map:
Reference: http://ronny.rest/blog/post_2017_03_25_lidar_to_2d/