Author: Maichuang Laser · Published: 2021-04-23 · Origin: MC
With the rapid development of digital image processing and computer vision, more and more researchers are adopting the camera as the primary sensor for fully autonomous mobile robots.
This is mainly because ultrasonic and infrared sensors perceive limited information and offer poor robustness, shortcomings that a vision system can compensate for.
Camera calibration algorithms: traditional camera calibration mainly includes the Faugeras calibration method, Tsai's two-step method, the direct linear transformation (DLT) method, Zhang Zhengyou's planar calibration method, and Weng's iterative method.
Self-calibration includes the self-calibration method based on the Kruppa equations, the hierarchical step-by-step self-calibration method, the self-calibration method based on the absolute quadric, and Pollefeys' modulus constraint method.
Active-vision calibration includes Ma Songde's three-orthogonal-translation method, Li Hua's orthogonal-plane calibration method, and Hartley's rotation-based intrinsic calibration method.
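All of the calibration methods above ultimately estimate the camera's intrinsic matrix K. As a minimal sketch of what that matrix does (the parameter values fx, fy, cx, cy here are illustrative assumptions, not calibrated results), the pinhole model projects a 3-D point in the camera frame onto the image plane as follows:

```python
import numpy as np

# Hypothetical intrinsic parameters (the quantities calibration estimates):
# focal lengths fx, fy and principal point cx, cy, all in pixels.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """Project a 3-D point in the camera frame onto the image plane."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]   # perspective division

# A point 2 m in front of the camera, offset 0.5 m to the right.
uv = project(K, np.array([0.5, 0.0, 2.0]))
print(uv)  # → [520. 240.]
```

Zhang's planar method, for example, recovers K from several images of a planar checkerboard; once K is known, pixel coordinates can be converted to normalized ray directions for positioning.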
CCD visual positioning algorithms: filter-based positioning algorithms mainly include the Kalman filter (KF), the sparse extended information filter (SEIF), the particle filter (PF), the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and so on.
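The common core of the KF family is a predict/update cycle. Below is a minimal one-dimensional sketch with assumed (illustrative) process noise q and measurement noise r; the EKF and UKF apply the same cycle to nonlinear motion and observation models by linearizing or by sampling sigma points:

```python
# Minimal 1-D Kalman filter: constant-position model with assumed
# process noise q and measurement noise r (illustrative values).
def kf_step(x, p, z, q=0.01, r=0.25):
    # Predict: the state estimate is unchanged, uncertainty grows by q.
    p_pred = p + q
    # Update: blend prediction and measurement z via the Kalman gain k.
    k = p_pred / (p_pred + r)
    x_new = x + k * (z - x)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0            # initial estimate and its variance
for z in [0.9, 1.1, 1.0]:  # noisy measurements of a true position near 1.0
    x, p = kf_step(x, p, z)
```

After a few measurements the estimate x converges toward the true position while the variance p shrinks, which is why these filters are widely used to fuse noisy visual observations with a motion model.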
A combination of monocular vision and an odometer can also be used.
The odometer readings serve as auxiliary information: the coordinates of feature points in the current robot frame are computed by triangulation. This three-dimensional coordinate computation must be carried out across one delayed time step, i.e. it uses observations from two successive poses.
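The triangulation step can be sketched with the standard linear (DLT) formulation: given two projection matrices (here the second pose is assumed to be a 1 m translation along x, as if reported by the odometer, and K reuses illustrative pinhole values), each view contributes two linear constraints on the homogeneous 3-D point:

```python
import numpy as np

# Linear (DLT) triangulation of one feature point from two views.
# P1, P2 are 3x4 projection matrices K[R|t]; the second camera is
# translated 1 m along x (an assumed odometry reading).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0, 1.0])       # homogeneous 3-D point
u1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]     # observation at time t
u2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]     # observation at time t+1

# Each view yields two linear constraints on the homogeneous point X;
# the solution is the null vector of A, found via SVD.
A = np.vstack([u1[0] * P1[2] - P1[0],
               u1[1] * P1[2] - P1[1],
               u2[0] * P2[2] - P2[0],
               u2[1] * P2[2] - P2[1]])
X = np.linalg.svd(A)[2][-1]
X = X[:3] / X[3]
print(X)  # recovers approximately [0.5, 0.2, 4.0]
```

The one-step delay mentioned above shows up here directly: the point can only be triangulated once both the time-t and time-t+1 observations are available.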
The basic process of the positioning algorithm:
From the video stream acquired by the camera (mainly grayscale images; in stereo VO the images can be either color or grayscale), denote the frames recorded at times t and t+1 as I_t and I_(t+1). The intrinsic parameters of the camera are obtained through camera calibration, e.g. with Matlab or OpenCV, and can then be treated as fixed quantities.