File name:
Uncertainty estimation of lidar point cloud projection onto camera images from a moving platform
Development tool:
File size: 3 MB
Downloads: 0
Upload date: 2019-10-20
Details: Combining multiple sensing devices to achieve advanced perception is a key requirement for autonomous vehicle navigation. Sensor fusion is used to obtain rich information about the surrounding environment. Fusing camera and lidar sensors provides accurate range information that can be projected onto the visual image data. This gives a high-level understanding of the scene which can be used to enable context-based algorithms such as collision avoidance and better navigation. The main challenge when combining these sensors is aligning the data into a common domain. This can be difficult because of errors in the intrinsic calibration of the camera, in the extrinsic calibration between the camera and the lidar, and errors caused by the motion of the platform. In this paper, we examine the algorithms required to provide motion correction for the lidar sensor. Since it is not possible to completely eliminate the errors that arise when lidar measurements are projected into a common odometry frame, the uncertainty of this projection must be considered when fusing the two different sensors. This work presents a new framework for predicting the uncertainty of lidar measurements (3D) projected into the image frame (2D) of a moving platform. The proposed approach fuses the uncertainty of the motion correction with the uncertainty caused by errors in the extrinsic and intrinsic calibration. By incorporating the main components of the projection error, the uncertainty of the estimation process can be better represented. Experimental results for our motion correction algorithm and the proposed extended uncertainty model are demonstrated using real data collected on an electric vehicle equipped with wide-angle cameras covering a 180-degree field of view and a 16-beam scanning lidar.
II. BACKGROUND

Fusing multi-modal sensor data is important to improving the perception of autonomous platforms. In [2], 3D data and colour information are combined to perform real-time tracking, and in [3] the multi-modal data is used to estimate the velocity of moving vehicles. Premebida et al. [4] exploit lidar and camera information for pedestrian detection, and Dou et al. [5] use lidar data to improve CNN based pedestrian detection conducted using image data. In [6], image and sparse lidar measurements are used to effectively recognize obstacles. All of these tracking, classification and detection algorithms assume well-aligned image and lidar data; therefore, the uncertainty of the projection has not been considered in these works.

Camera-lidar calibration is challenging as the two sensors generate data in different domains. It is important to convert this data into a single domain using accurate calibration. Le et al. [7] propose a framework to produce 3D data from a range of sensors. Camera and lidar calibration parameters are prone to error in real time operation. A framework to perform automatic online calibration while accounting for the gradual drifts in the sensors during live operation is introduced in [8]. In [9], lidar points are accumulated over time and matched with the corresponding image based on the intensity information to optimise the calibration parameters. Even though the process is automatic, the reliability of this approach relies heavily on the accuracy of the odometry. Scott et al. [10] propose a scene selection scheme to obtain more accurate calibration parameters. For practical reasons the extrinsic calibration process cannot be perfect, so it becomes important to have an accurate estimate of its uncertainty.

A novel framework for calibrating extrinsic parameters and timing offsets between multi-modal sensors such as cameras, 3D lidars, GPS and IMU is discussed in [11]. The authors use an improved version of a motion-based model with which they estimate the uncertainty of the final calibration parameters based on the sensor reading uncertainties using a probabilistic approach. Wendel et al. [12] estimate the 6D pose of the camera relative to the navigation reference frame with a maximum likelihood approach. This work also uses a Markov chain Monte Carlo method to estimate the uncertainty of the pose estimation.

In [13], the authors introduce a framework to optimize the extrinsic calibration between a camera and lidar sensors, accounting for the sensor drift due to the motion of the platform. They also apply motion correction to improve the accuracy of the calibration while the platform is moving. Underwood et al. [14] map 3D range data to a common navigation frame. A spatial error model is then developed based on the transformation used for mapping. This model encodes the key geometric and temporal components of the errors that occur during the mapping process. Using this model the accuracy of the mapping is estimated during operation.

III. METHOD

A. Experimental Platform

Our electric vehicles are equipped with a 16-beam lidar (Velodyne VLP-16) and six fixed-lens gigabit multimedia serial link (GMSL) cameras that provide a 360-degree view. Each camera has a 100-degree field of view. The camera images have a resolution of 1920 x 1208 and a frame rate of 30 Hz. The extrinsic camera calibration is calculated relative to the lidar sensor frame, and both are registered to the local frame of reference of the vehicle. Further, the platform also contains wheel encoders and an IMU containing gyroscopes, accelerometers and magnetometers.

B. Camera Calibration

The extrinsic camera calibration is challenging when working with wide-angle cameras due to the significant distortions in the lens. Tools which perform intrinsic and extrinsic calibration simultaneously can provide erroneous results due to the distinctive nature of the data required for each process. To account for the high level of distortion, the calibration checkerboard requires particular attention to samples close to the camera, as well as coverage of the entire field of view. In particular, the distortion is greatest in the corners, and the intrinsic calibration requires good quality samples in these areas to generate an accurate set of parameters.

Contrary to the intrinsic calibration process, the extrinsic calibration requires samples where the checkerboard is positioned considerably far from the camera, and at a variety of different ranges such that both the camera and lidar can observe the board. Because of this, we first compute the intrinsic parameters using the MATLAB camera calibrator, selecting the five distortion coefficient model and obtaining the camera matrix K and distortion coefficients D. Secondly, these intrinsic parameters are applied to the raw camera image, which is then passed to the extrinsic calibration process.

For this paper, we use the Autoware calibration toolbox [15], [16] to align the frames of the lidar and each camera. T^cam_lid is the transformation obtained between the lidar and camera frames. When projecting lidar points into wide-angle camera images, the distortion at the edges of the images can cause noise in the projection. Points that are beyond the horizontal field of view of the camera are warped in the projection due to the extreme distortion at the edges of the image. Therefore, it is important to make sure each lidar point is within the field of view of the camera (in the camera coordinate frame) before applying the transformation to the pixel coordinate frame.
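To make the field-of-view check above concrete, the following Python sketch projects lidar points into pixel coordinates, discarding points behind the camera or outside the horizontal field of view before the distortion model is applied. It is an illustrative fragment under our own naming (project_lidar_to_pixels, h_fov_deg), not the calibration toolchain used in the paper; it assumes T^cam_lid is available as a 4x4 homogeneous numpy array and uses the five-coefficient plumb-bob distortion model mentioned above.

import numpy as np

def project_lidar_to_pixels(points_lidar, T_cam_lid, K, D, h_fov_deg=100.0):
    """Project Nx3 lidar points into pixel coordinates, keeping only points
    inside the camera's horizontal field of view before distortion is applied.
    A minimal sketch; assumes the plumb-bob model D = [k1, k2, p1, p2, k3]."""
    # Transform points from the lidar frame to the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lid @ pts_h.T).T[:, :3]

    # Keep points in front of the camera and within the horizontal FOV,
    # so that extreme edge distortion does not warp the projection.
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    half_fov = np.radians(h_fov_deg / 2.0)
    keep = (z > 0.0) & (np.abs(np.arctan2(x, z)) < half_fov)
    x, y, z = x[keep], y[keep], z[keep]

    # Normalised coordinates, then radial/tangential distortion, then K.
    xn, yn = x / z, y / z
    k1, k2, p1, p2, k3 = D
    r2 = xn**2 + yn**2
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn**2)
    yd = yn * radial + p1 * (r2 + 2.0 * yn**2) + 2.0 * p2 * xn * yn
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return np.stack([u, v], axis=1), keep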
C. Motion Correction

Rectification of the lidar points is conducted using the method described in [17] when the vehicle is moving. This process requires precise odometry-based transformations and proper time synchronization between the cameras and the lidar. The Velodyne ROS lidar driver provides timestamps for individual parts of the scan. Each full-revolution lidar scan published by the driver is broken up into 75 packets, each of which contains its own timestamp. For computational reasons, we assume that all points in each packet, corresponding to approximately 5 degrees of the full revolution scan, are observed at the published timestamp. We consider one packet at a time, and apply the correction and alignment using the odometry frame. This assumption is important because it enables the algorithm to run in real time. The corrected lidar point for a given image, P^c_i, is defined as

    P^c_i = T^cam_veh  T_v  T^veh_lid  P_i,                                   (1)

where P_i refers to a 3D lidar point within a lidar data packet, T^veh_lid indicates the rigid transformation from the lidar coordinate frame to the vehicle base coordinate frame (at the centre of rotation of the vehicle), and Δo is the duration between two consecutive odometry readings. The two odometry readings are selected to overlap the timespan between the image timestamp and the lidar packet timestamp, Δc = (Velodyne packet timestamp) - (image timestamp). It is important to select the image closest in time to the nearest lidar packet. T_v denotes the ego motion of the platform during Δo. T_v is obtained from the difference between the absolute vehicle states measured from the vehicle odometry. Because the relative odometry frame is used, the global position sensor drift is negligible for the estimation of T_v. The vehicle-to-camera transformation can be obtained by T^cam_veh = T^cam_lid (T^veh_lid)^(-1). All the transformations are represented as 4-by-4 matrices in the format

    T = [ R_3x3   t_3x1 ]
        [ 0       1     ],

where R is the rotation matrix and t is the translation vector.
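The per-packet correction of Eq. (1) can be sketched as follows. This is a simplified illustration in Python rather than the authors' ROS implementation; the pose_at helper (returning the 4x4 vehicle pose in the odometry frame at a given time) and the function name are our own assumptions.

import numpy as np

def correct_packet(points_lidar, t_packet, t_image,
                   pose_at,     # callable: time -> 4x4 vehicle pose in odometry frame (assumed helper)
                   T_veh_lid,   # lidar frame -> vehicle base frame (extrinsic)
                   T_cam_veh):  # vehicle base frame -> camera frame
    """Motion-correct all points of one lidar packet so that they are expressed
    in the camera frame at the image timestamp (a sketch of Eq. (1))."""
    # Ego motion T_v of the platform between the image time and the packet time,
    # computed from relative odometry poses (global drift cancels out).
    T_odom_veh_img = pose_at(t_image)    # vehicle pose when the image was taken
    T_odom_veh_pkt = pose_at(t_packet)   # vehicle pose when the packet was observed
    T_v = np.linalg.inv(T_odom_veh_img) @ T_odom_veh_pkt

    # Chain of Eq. (1): lidar -> vehicle, apply ego motion, vehicle -> camera.
    T_total = T_cam_veh @ T_v @ T_veh_lid

    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T_total @ pts_h.T).T[:, :3]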
D. Uncertainty Modeling

The main contribution of this paper is the consistent and reliable fusion of camera and lidar data. This combines the uncertainty resulting from the intrinsic calibration of the cameras, the extrinsic calibration between the cameras and the lidar, and the motion correction of the lidar points. The projection of a point P_i = [x, y, z] in the lidar coordinate frame to P^c_i = [u, v] in the image pixel coordinate frame is performed, and in this paper we estimate the uncertainty of this transformation based on the calibration parameters and the motion correction parameters. Calibration of the cameras can be conducted using an offline or online technique. A Jacobian-based uncertainty model is derived to fuse the various uncertainty estimates. The main advantage of the Jacobian-based uncertainty model is that it can be used with any platform, independent of the particular techniques used to perform the camera calibration and motion correction.

We extend the method described in [1] for modeling the uncertainty of the projected points by adding the additional parameters related to motion correction. These parameters are denoted by Ψ = [Δx, Δy, Δz] and Φ = [Δroll, Δpitch, Δyaw], which are the linear and angular displacements of the platform, respectively, during the time between the camera observation and the related lidar packet. T_v is formulated from Ψ and Φ. The transformation of points into the image frame is denoted by

    P^c_i = g(D, K, T^cam_lid, T^veh_lid, Ψ, Φ, Δc, Δo, P_i).                 (2)

In this scenario we assume that the uncertainties of the lidar point observation P_i and of T^veh_lid are negligible. The variance of Ψ and Φ implicitly contains the uncertainty in Δc caused by inaccurate time synchronization, and Δo is a constant. Therefore, the uncertainty of the point P^c_i is caused by the uncertainty of D, K, T^cam_lid, Ψ and Φ, as denoted by the covariance matrix Σ_c. The covariance matrix of these 21 parameters (4 camera intrinsic parameters, 5 lens distortion coefficients, 6 camera extrinsic parameters and 6 platform motion parameters) is represented by Σ_p. It is important to mention that the uncertainties involved with Ψ and Φ include the errors due to time jitter, errors relating to time synchronization, IMU performance, wheel slip, and noise in the wheel encoders.

          [ Σ_i   0     0   ]
    Σ_p = [ 0     Σ_e   0   ]                                                 (3)
          [ 0     0     Σ_m ]

The covariance matrix Σ_p can be approximated as indicated in Eq. 3, where Σ_i is the covariance matrix for the intrinsic parameters (D, K), Σ_e is the covariance matrix for the parameters corresponding to T^cam_lid, and Σ_m is the covariance matrix for the parameters corresponding to Ψ and Φ. Then Σ_c can be obtained by

    Σ_c = J_g Σ_p J_g^T,                                                      (4)

where J_g denotes the Jacobian matrix of the projection function g in Eq. 2 with respect to the 21 parameters. The square root of the diagonal of Σ_c indicates the standard deviations of u_i and v_i.
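Eq. (4) can be implemented numerically even when the projection function g is only available as a black box. The Python sketch below uses central finite differences to form J_g and then propagates Σ_p; the function name and the finite-difference step are our own choices for illustration, not the authors' implementation.

import numpy as np

def propagate_projection_covariance(g, params, sigma_p, eps=1e-6):
    """Approximate Sigma_c = J_g Sigma_p J_g^T for one lidar point.

    g       : callable mapping the 21-element parameter vector to [u, v]
              (intrinsics, distortion, extrinsics, motion parameters), Eq. (2)
    params  : nominal 21-element parameter vector
    sigma_p : 21x21 block-diagonal covariance of the parameters, Eq. (3)
    """
    params = np.asarray(params, dtype=float)
    n = params.size
    jac = np.zeros((2, n))
    # Central finite differences, one parameter at a time.
    for k in range(n):
        step = np.zeros(n)
        step[k] = eps
        jac[:, k] = (np.asarray(g(params + step)) -
                     np.asarray(g(params - step))) / (2.0 * eps)
    sigma_c = jac @ sigma_p @ jac.T            # 2x2 covariance of [u, v]
    std_u, std_v = np.sqrt(np.diag(sigma_c))   # pixel standard deviations
    return sigma_c, (std_u, std_v)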
IV. EXPERIMENT RESULTS

The experimental section is organized as follows. Firstly, we estimate the uncertainty of the lidar to image frame projection caused by the intrinsic and extrinsic calibration errors. In this scenario we keep the autonomous platform static to avoid any projection errors due to motion. Secondly, we evaluate the performance of our motion correction algorithm. Finally, we evaluate the uncertainty of the lidar to image frame projection after the motion correction is applied. This approach considers the uncertainty added by the position-specific parameters in addition to the intrinsic and extrinsic calibration parameters, as described in Section III-D.

In order to evaluate the true projection errors, we used 1 cm wide reflective tapes placed in vertical and horizontal positions relative to the ground. The lidar points observed from the reflections intersecting the tape were accurately extracted based on the highly reflective returns. We assume that all the highly reflective laser point observations originate from the center line of the tapes. The points were then projected onto the image frame, and the tapes were manually labeled in the image frame. We then computed the error between the projected points and the corresponding center line of the vertical tapes along the u coordinate in the pixel frame. Independently of this, we computed the error between the projected points observed from the horizontal reflective tapes and the corresponding center lines of the horizontal tapes along the v coordinate in the pixel frame. In this manner, approximate errors in the u and v coordinates are obtained independently. Please refer to Fig. 1 for examples.

A. Covariance Matrices

We compute Σ_i and Σ_e using the Jackknife sampling method as explained in [1]. If the underlying calibration method provides an uncertainty estimate for the parameters, no explicit computation would be required. Theoretically, the Σ_m covariance matrix should change while the platform is moving. Nevertheless, we validated from experiments that it can be approximated by a constant matrix. We employed the variance values provided in the data sheets of the IMU sensors as an initial guess. The process was further fine-tuned by driving at different speeds and analyzing the estimated resultant variance of the projected points against the true error based on the ground truth values. We analyzed the results for a variety of lidar point observations at different ranges and angles. Lastly, we obtained a constant covariance matrix that provides a reasonably accurate estimate of the uncertainty.
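A generic leave-one-out Jackknife estimate of a calibration parameter covariance, such as Σ_i or Σ_e above, can be sketched as follows. The estimate_params callable (recomputing the calibration from a subset of board observations) is an assumed helper; the exact resampling procedure used in [1] may differ in detail.

import numpy as np

def jackknife_covariance(samples, estimate_params):
    """Leave-one-out jackknife covariance of calibration parameters.

    samples         : list of calibration observations (e.g. checkerboard views)
    estimate_params : callable mapping a list of samples to a parameter vector
                      (assumed helper; e.g. an intrinsics or extrinsics estimator)
    """
    n = len(samples)
    full = np.asarray(estimate_params(samples), dtype=float)
    loo = np.empty((n, full.size))
    for i in range(n):
        subset = samples[:i] + samples[i + 1:]   # drop the i-th sample
        loo[i] = estimate_params(subset)
    dev = loo - loo.mean(axis=0)
    # Standard jackknife scaling: (n - 1) / n times the sum of squared deviations.
    return (n - 1) / n * dev.T @ dev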
Fig. 1. Shifts of laser points in image pixel coordinates in the u and v directions are demonstrated in (a) and (b), respectively.

TABLE I
ESTIMATED AND TRUE ERRORS IN LIDAR TO CAMERA PROJECTION

Standard deviation in pixels                       u       v
Standard deviation of estimated minimum error      3.5     8.2
Standard deviation of estimated maximum error      30.9    17.0
Standard deviation of estimated average error      4.5     6.5
Measured average error                             4.49    4.5

B. Uncertainty Estimation of Lidar to Camera Projection for a Static Platform

In this process we have adopted the framework of [1] for measuring the variance of the u, v coordinates of the points projected from the lidar to the image frame when the vehicle is static. In this scenario all the uncertainties are assumed to correspond only to the intrinsic and extrinsic calibration parameters. The results are presented in Fig. 2, and Table I lists the standard deviations of the estimated and true errors of the u and v coordinates.

From Fig. 2(a) it is evident that the uncertainty along the u axis is larger at the edges of the image. This is due to the large distortion in the lens. We can also see that the uncertainty of the v coordinates is significantly greater than that of the u coordinates due to the sparsity of the lidar in the vertical direction. The underlying extrinsic calibration method exploits a plane fitting technique, and this process is more prone to error as the gap between the lidar beams increases. From this, it is evident that a denser lidar containing 32 or 64 beams should have an accordingly lower uncertainty in the v direction. Objects closer to the camera have a higher uncertainty than further objects. This is reasonable because the errors in the angular extrinsic parameters create a larger projection error for closer objects compared to more distant objects. Based on these observations, the uncertainty estimation is shown to accurately incorporate the major sources of uncertainty for a stationary vehicle.

Fig. 2. Lidar to camera projection uncertainty for a static platform. (a) and (b) present the uncertainties of the pixel coordinates along the u and v axes, respectively; (c) shows the color map.

Fig. 3. The experimental mobile platform. (a) shows the front of the vehicle and (b) demonstrates the mounting of the VLP-16 lidar and the front three GMSL cameras. The labels R, C, and L represent the right, centre, and left camera, respectively.
C. Motion Correction

The experimental vehicle platform is based on the ROS Kinetic environment. The platform publishes messages containing odometry, camera images and full-rotation lidar scans at frequencies of 100 Hz, 30 Hz and 10 Hz, respectively. The value of Δo for our experiments is 10 ms and Δc varies from 0 to 16 ms. In other words, the maximum difference between the lidar data packet timestamp and the nearest image timestamp is approximately 16 ms. Fig. 3 shows the mounting of the lidar sensor and the front three GMSL cameras on the experimental vehicle. These front cameras cover a 180-degree field of view. All the cameras are synchronized to trigger simultaneously.

Table II demonstrates the performance of the motion correction algorithm when driving in a circular path at various rotational velocities. The results validate the robustness of the motion correction for extreme rotational cases and show that the algorithm effectively corrects for the motion shift when the angular speed is lower than 40 deg/s. Fig. 4 depicts a histogram of the rotational velocity of our vehicle sampled every 2 seconds over a typical drive around the university campus. The recorded data consists of 45 minutes of normal driving following the campus road speed limits. This plot shows that in normal operation the maximum turning rate of the vehicle does not exceed 33 deg/s for the collected data. This demonstrates the usefulness of the motion correction approach, which has been validated to a reasonable accuracy at rates of up to 40 deg/s. Furthermore, most of the turning rate samples are concentrated at less than 20 deg/s, for which the model is very accurate.

TABLE II
MOTION CORRECTION ACCURACY FOR DRIVING IN CIRCULAR MOTION

Average error in u pixel coordinates    20 deg/s    40 deg/s    60 deg/s
Before motion correction                18.6        18.8        35.1
After motion correction                 5.2         8.6

Fig. 4. Histogram of turning rates during normal driving around the campus.

D. Uncertainty Estimation of Lidar to Camera Projection with Motion Correction

Once the motion correction is applied, the uncertainty of the projected lidar point can be computed using the Jacobian method. The experimentally obtained Σ_m becomes a constant diagonal matrix, where the standard deviations for Δx, Δy and Δz are 0.03 m, and for Δroll, Δpitch and Δyaw are 0.0031 rad.

Fig. 5 shows the uncertainty distribution of the projected lidar points in the image frame. As can be seen, the uncertainty distribution changes depending on the relative location of the camera and lidar. It is evident that the uncertainty on the left side of the image from the left camera is very high. The high uncertainty in this area of the image occurs because objects to the side of the vehicle have a higher relative velocity with respect to the camera frame; as a result, the errors due to the motion shift can be larger. This trend can also be observed, though with a smaller magnitude, in the centre camera image. This effect is due to the error introduced by the motion shift. It can also be observed by comparing the points projected onto the trees at close range with those on the more distant trees: close-range trees have a larger misalignment while further trees are well aligned, and these two cases are very well represented by the uncertainty estimation.

Finally, Fig. 6 shows an instance when the vehicle is turning at 45 deg/s. Since the angular speed is very high, the correction is prone to larger errors. Nevertheless, it can be seen that the ground truth still lies within the estimated variance bounds. The experimental results clearly demonstrate that the uncertainty model provides a good measure of the uncertainty of the alignment process. This information is essential for a data fusion process that considers vision and lidar information. We use the estimation error squared (EES) as a quantitative metric for evaluating the consistency of the proposed uncertainty estimation. The error squared for the projection coordinates is computed as

    D_i^2 = (x_i - x̄_i) S_i^(-1) (x_i - x̄_i),

where x_i is the x or y coordinate of the i-th projected lidar point, x̄_i refers to the ground truth value, and S_i is the variance of the estimate obtained using the uncertainty model. For the proposed uncertainty model, which is close to linear and Gaussian, D_i^2 should follow a χ² (chi-square) distribution with 1 degree of freedom, i.e., E[D_i^2] = 1. Moreover, the true estimated errors are consistent with the model-based variances if D_i^2 lies in [χ²_1(0.025), χ²_1(0.975)], the interval corresponding to the two-sided 95% probability bounds.

In the experiments, the estimates were produced and the EES values calculated. Fig. 7 shows in total 865 sample points obtained from driving at linear speeds ranging from 10 to 30 km/h and in circular paths with turning rates up to 42 deg/s. The actual percentage of points lying within the 95% χ² bounds is 95.6%, which indicates a good consistency of the uncertainty model. When a higher percentage of points exceeds the upper bound, it indicates that the uncertainty model tends to produce optimistic estimates. On the contrary, if a higher percentage of points stays below the lower bound, the model is considered more conservative.
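The EES consistency test can be expressed compactly. The sketch below is our own Python illustration, not the authors' evaluation script; it assumes per-sample arrays of projected coordinates, ground-truth coordinates and model variances, and uses scipy's chi-square quantiles (which reproduce the approximate bounds 0.00098 and 5.02 for the two-sided 95% interval).

import numpy as np
from scipy.stats import chi2

def ees_consistency(est, truth, var, prob=0.95):
    """Fraction of samples whose estimation error squared (EES) lies inside the
    two-sided chi-square bounds with 1 degree of freedom."""
    est, truth, var = map(np.asarray, (est, truth, var))
    d2 = (est - truth) ** 2 / var                  # EES per coordinate sample
    lo = chi2.ppf((1.0 - prob) / 2.0, df=1)        # ~0.00098 for 95%
    hi = chi2.ppf(1.0 - (1.0 - prob) / 2.0, df=1)  # ~5.02 for 95%
    inside = np.mean((d2 >= lo) & (d2 <= hi))
    return d2, (lo, hi), inside

For a consistent uncertainty model the returned fraction should be close to 0.95; the paper reports 95.6% for the 865 samples shown in Fig. 7.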
Fig. 5. Laser points projected onto the images when driving at 30 km/h. Points are colored based on their resultant variance in the pixel coordinate frame along the u and v axes. The left image is taken from the left camera and the right figure is a part of the image from the centre camera.

Fig. 6. Comparison of images before motion correction in (a) and after motion correction in (b). Red ellipses centered on each projected point denote the region with a probability of 95% for where the true point lies. The radii of the ellipse are 2√Σ_c(0,0) and 2√Σ_c(1,1) along the u and v directions, respectively.

Fig. 7. The distribution of the EES values for the projected lidar coordinates for linear (blue) and circular (green) motion. The red lines represent the 95% χ² confidence bounds. In total 865 samples were used to generate the figure.

V. CONCLUSIONS AND FUTURE WORK

In this paper we propose an approach to provide accurate uncertainty prediction for the projection of 3D lidar points into a 2D camera image frame. This approach takes into account the uncertainties caused by translational and rotational motion correction. The proposed framework enables errors in motion correction to be incorporated, as well as other sources of uncertainty such as those introduced by the extrinsic and intrinsic calibration. The uncertainty in the motion correction process is formulated using the variance of the linear and angular displacements between two odometry measurements from the vehicle. The uncertainty considering all sources of error is then projected into the image pixel frame using the Jacobian method. A comprehensive set of experimental results demonstrated the accuracy of the uncertainty estimation. Experiments were conducted with an electric vehicle equipped with lidar, cameras, GPS and IMU sensors, driven around a university campus environment. The consistent estimation of projection uncertainty is essential for a data fusion algorithm that combines lidar and camera data. In future work we expect to replace the approximated static covariance matrix of the motion parameters with the true dynamic covariance values. Furthermore, the model could be extended to incorporate the timing uncertainty explicitly.
REFERENCES
[1] T. Peynot and A. Kassir, "Laser-camera data discrepancies and reliable perception in outdoor robotics," in Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 2625-263.
[2] D. Held, J. Levinson, S. Thrun, and S. Savarese, "Robust real-time tracking combining 3D shape, color, and motion," The International Journal of Robotics Research, vol. 35, no. 1-3, pp. 30-49, 2016.
[3] D. Held, J. Levinson, and S. Thrun, "Precision tracking with sparse 3D and dense color 2D data," in Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), 2013, pp. 1138-1145.
[4] C. Premebida, O. Ludwig, and U. Nunes, "LIDAR and vision-based pedestrian detection system," Journal of Field Robotics, vol. 26, no. 9, pp. 696-711, 2009.
[5] J. Dou, J. Fang, T. Li, and J. Xue, "Boosting CNN-based pedestrian detection via 3D LiDAR fusion in autonomous driving," in Proceedings of the International Conference on Image and Graphics, 2017, pp. 3-13.
[6] Y. Wei, J. Yang, C. Gong, S. Chen, and J. Qian, "Obstacle detection by fusing point clouds and monocular image," Neural Processing Letters, pp. 1-13, 2018.
[7] Q. V. Le and A. Y. Ng, "Joint calibration of multiple sensors," in Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009, pp. 3651-3658.
[8] J. Levinson and S. Thrun, "Automatic online calibration of cameras and lasers," in Proceedings of Robotics: Science and Systems, vol. 2, 2013.
[9] H. J. Chien, R. Klette, N. Schneider, and U. Franke, "Visual odometry driven online calibration for monocular lidar-camera systems," in Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), 2016, pp. 2848-2853.
[10] T. Scott, A. A. Morye, P. Pinies, L. M. Paz, I. Posner, and P. Newman, "Choosing a time and place for calibration of lidar-camera systems," in Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 4349-4356.
[11] Z. Taylor and J. Nieto, "Motion-based calibration of multimodal sensor extrinsics and timing offset estimation," IEEE Transactions on Robotics, vol. 32, no. 5, pp. 1215-1229, 2016.
[12] A. Wendel and J. Underwood, "Extrinsic parameter calibration for line scanning cameras on ground vehicles with navigation systems using a calibration pattern," Sensors, vol. 17, no. 11, p. 2491, 2017.
[13] S. Nedevschi et al., "Online cross-calibration of camera and lidar," in Proceedings of the 2017 13th IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), 2017, pp. 295-301.
[14] J. P. Underwood, A. Hill, T. Peynot, and S. J. Scheding, "Error modeling and calibration of exteroceptive sensors for accurate mapping applications," Journal of Field Robotics, vol. 27, no. 1, pp. 2-20, 2010.
[15] S. Kato, S. Tokunaga, Y. Maruyama, S. Maeda, M. Hirabayashi, Y. Kitsukawa, A. Monroy, T. Ando, Y. Fujii, and T. Azumi, "Autoware on board: enabling autonomous vehicles with embedded systems," in Proceedings of the 9th ACM/IEEE International Conference on Cyber-Physical Systems, 2018, pp. 287-296.
[16] S. Kato, E. Takeuchi, Y. Ishiguro, Y. Ninomiya, K. Takeda, and T. Hamada, "An open approach to autonomous vehicles," IEEE Micro, vol. 35, no. 6, pp. 60-68, 2015.
[17] R. Varga, A. Costea, H. Florea, I. Giosan, and S. Nedevschi, "Super-sensor for 360-degree environment perception: Point cloud segmentation using image features," in Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.