Object-point three-dimensional measuring system using multi-camera array, and measuring method
Updated Time 12 June 2019
Patent Registration Data
Publication Number
US10001369
Application Number
US15/118607
Application Date
14 April 2016
Publication Date
19 June 2018
Current Assignee
BEIJING QINGYING MACHINE VISUAL TECHNOLOGY CO., LTD.
Original Assignee (Applicant)
YIN, XING
International Classification
H04N 9/47, G01B 11/03, G01C 11/02, G06T 7/593, G01B 11/25
Cooperative Classification
G01B 11/2545, G01B 11/002, G01B 11/03, G01C 11/02, G06T 7/557
Inventor
YIN, XING
Patent Images
This patent contains figures and images illustrating the invention and its embodiment.
Abstract
A system for measuring object points on a three-dimensional object using a planar array of a multi-camera group, and a measuring method, are provided. The system is useful in the field of optical measuring technologies. The method includes establishing a measuring system of at least one four-camera group wherein digital cameras form a 2×2 array; matching an image object point acquired by the camera group; based on the matched object point image coordinates, calculating coordinates of spatial locations of respective object points; and, based on the coordinates of the spatial locations, calculating other three-dimensional dimensions of the measured object which need to be specially measured, to form three-dimensional point clouds and establish a three-dimensional point-cloud graph for performing three-dimensional stereoscopic reproduction. Here, full matching is performed for all measured points of the measured object by directly translating, superimposing, and comparing, point by point, the pixel points of measured images in the X- and Y-axis directions. In this way a three-dimensional object may be reproduced.
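Because the four cameras are row- and column-aligned, corresponding image points of a horizontal pair share a pixel row and those of a vertical pair share a pixel column, so the translate-superimpose-compare matching described above reduces to a one-dimensional search along that row or column. A minimal sketch of this idea for one horizontal pair; the sum-of-absolute-differences cost and the function name are illustrative assumptions, not taken from the patent:

```python
def match_row(left_row, right_row, x, max_disp, win=5):
    """Locate the correspondence of pixel x of `left_row` in `right_row`
    by translating a small window leftward one pixel at a time,
    superimposing it, and comparing point by point (sum of absolute
    differences). Returns the best disparity d, i.e. x_right = x - d."""
    half = win // 2
    patch = left_row[x - half : x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):                  # translate ...
        lo = x - d - half
        if lo < 0:
            break
        cand = right_row[lo : lo + win]            # ... superimpose ...
        cost = sum(abs(p - c) for p, c in zip(patch, cand))  # ... compare
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

A vertical camera pair would be matched the same way along an image column instead of a row.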
Claims
1. A multi-camera measuring system for measuring object points of a three-dimensional object, comprising: at least one four-camera group arranged in the form of a 2×2 array of digital cameras, wherein the digital cameras comprise a camera A, a camera B, a camera C, and a camera D, with the cameras A, B, C and D being arranged on a same plane and being of identical models with identical lenses; focal points Oa, Ob, Oc, and Od on imaging optical axes of the respective four cameras A, B, C, and D forming a rectangular plane, wherein each of the imaging optical axes of the cameras A, B, C and D is perpendicular to the rectangular plane; at least one vertical laser; at least one horizontal laser, wherein the vertical laser is configured to be located on a perpendicular bisector of a connecting line OaOb, and the horizontal laser is configured to be located on a perpendicular bisector of a connecting line OaOc; and a processor configured to receive images acquired from the at least one four-camera group, and perform a matching operation for object points of a measured object, wherein when performing the matching operation, full matching is carried out for the object points of the measured object by translating, superimposing, and comparing, point by point, pixel points of measured images of two pairs of horizontal cameras and two pairs of vertical cameras in a horizontal direction and in a vertical direction; wherein: the camera A is located on a position horizontal to the camera B such that a horizontal location of a Charge-Coupled Device (CCD) in the camera A corresponds to a horizontal location of a CCD in the camera B; the camera C is located on a position horizontal to the camera D such that a horizontal location of a CCD in the camera C corresponds to a horizontal location of a CCD in the camera D; the camera A is located on a position vertical to the camera C such that a vertical location of the CCD in the camera A corresponds to a vertical location of the CCD in the camera C; the camera B is located on a position vertical to the camera D such that a vertical location of the CCD in the camera B corresponds to a vertical location of the CCD in the camera D; the camera group formed by cameras A, B, C and D comprises four digital cameras where a focal point on an imaging optical axis of a chosen camera and focal points on imaging optical axes of three adjacent cameras form one rectangular shape, forming a rectangular plane; and all of the imaging optical axis central lines of the four cameras are perpendicular to the rectangular plane; and wherein: in the at least one four-camera group, a distance between two adjacent cameras in a horizontal direction is “m”; and a distance between two adjacent cameras in a vertical direction is “n”, and a range of “m” is 50 to 100 millimeters, and a range of “n” is 50 to 100 millimeters.
2. The multi-camera measuring system of claim 1, wherein the at least one four-camera group is provided in the form of a 2×3, 2×4, 2×5, 3×2, 3×3, 3×4, 3×5 or 4×4 array of cameras.
3. The multi-camera measuring system of claim 1, wherein in the at least one four-camera group each of the cameras has: sensors of a ⅔″ complementary metal-oxide-semiconductor (CMOS) type; a pixel dimension of 5.5 μm; a resolution of 1024×2048; and a lens focal length of 25 millimeters.
4. The multi-camera measuring system of claim 1, wherein the at least one four-camera group comprises a group of more than four cameras placed in a rectangular planar array, and the processor is configured to perform the matching operation with each group of four adjacent cameras.
5. The multi-camera measuring system of claim 1, wherein: the matching operation generates matched object point image coordinates; and the processor is configured to put the matched object point image coordinates into coordinate expressions of object points PN in the space of the measured object, and calculate coordinates of spatial locations of the respective object points.
6. The multi-camera measuring system of claim 5, wherein the processor is further configured to calculate a width dimension of each measured object through matched object points between two pairs of horizontal cameras, a height dimension of measured objects through matched object points between two pairs of vertical cameras, and a length dimension of the measured objects through matched object points between two pairs of horizontal cameras and two pairs of vertical cameras.
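Claims 6 and 7 rely on the same dimension being recoverable from more than one camera pair. A hedged sketch of one way such redundant estimates might be compared and fused; the outlier tolerance and the median-then-average policy are illustrative assumptions, not the patent's stated method:

```python
def fuse_redundant(estimates, tol=0.5):
    """Fuse redundant measurements of one dimension obtained from
    different camera pairs: discard estimates that disagree with the
    median by more than `tol`, then average the remaining ones."""
    s = sorted(estimates)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    kept = [e for e in estimates if abs(e - median) <= tol]
    return sum(kept) / len(kept)
```

For example, four depth estimates of one object point (from the pairs A-B, C-D, A-C, B-D) would be fused into a single value, with a gross mismatch flagged out by the median test.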
7. The multi-camera measuring system of claim 1, wherein all of the dimensions, having a redundant feature, are compared and analyzed using the redundant data, improving measuring accuracy and precision.
8. The multi-camera measuring system of claim 1, wherein the processor is further configured to, according to obtained coordinates of the spatial locations of respective object points, calculate other three-dimensional dimensions of the measured objects which need to be specially measured, to form three-dimensional point clouds and to establish a three-dimensional point-cloud graph for performing three-dimensional stereoscopic reproduction.
9. A method of measuring object points on a three-dimensional object, comprising the steps of: providing a three-dimensional measuring system comprising: at least one four-camera group arranged in the form of a 2×2 array of digital cameras, wherein the digital cameras comprise a camera A, a camera B, a camera C, and a camera D, with the cameras A, B, C and D being arranged on a same plane and being of identical models with identical lenses, forming a rectangular shape; focal points Oa, Ob, Oc, and Od on imaging optical axes of the four cameras A, B, C and D forming a rectangular plane, wherein each of the imaging optical axis central lines of the cameras A, B, C and D, respectively, is perpendicular to the rectangular plane; and wherein: the camera A is located on a position horizontal to the camera B, the camera C is located on a position horizontal to the camera D, the camera A is located on a position vertical to the camera C, and the camera B is located on a position vertical to the camera D; and dimensions of the rectangular shape and parameters of the cameras and the lenses are selected by considering factors including accuracy of the measuring system and a size of a measured object, while ensuring that the measured object is capable of simultaneously having corresponding imaging points on the four cameras, and if the measured object is out of the imaging range, additional measuring cameras need to be added in pairs, forming an array of the measuring cameras; and a processor configured to receive images acquired from the at least one four-camera group, and perform a matching operation for object points of a measured object, wherein when performing the matching operation, full matching is carried out for the object points of the measured object by translating, superimposing, and comparing, point by point, pixel points of measured images of two pairs of horizontal cameras and two pairs of vertical cameras in a horizontal direction and in a vertical direction, respectively; placing each of the cameras A, B, C and D in a field of view that enables simultaneous imaging; selecting dimensions of the rectangular shape and parameters of the cameras A, B, C and D and the lenses by considering factors including accuracy of the measuring system and a size of the measured object, wherein when high measuring accuracy is required, it is needed to improve resolution of the cameras and increase focal length of the lenses; using the measuring system, acquiring images of object points of the three-dimensional object; after acquisition of images is completed, performing a matching operation for object points of the images captured by the camera group, which comprises: a binocular stereoscopic vision measurement, wherein one of the object points of the measured object is a known point and a point corresponding to the known point is disposed on two or more alternate images; connecting the known point and the corresponding points on the two or more alternate images to form one plane; epipolar lines being defined as lines that intersect with the one plane and with the two or more alternate images, wherein the corresponding points are located on the epipolar lines and all epipolar lines of the measured object are parallel to the horizontal direction or the vertical direction; and processing the acquired images to form a three-dimensional stereoscopic reproduction; provided that where the at least one four-camera group comprises a group of more than four cameras placed in a rectangular planar array, and the processor is configured to perform the matching operation with each group of four adjacent cameras, finding out all object points whose spatial locations need to be calculated; according to matched object point image coordinates, calculating coordinates of a spatial location of each of the object points, wherein, using the processor, the matched object point image coordinates are put into coordinate expressions of any object point PN in the space of the measured object, to calculate coordinates of spatial locations of respective object points; wherein the processor is further configured to calculate a width dimension of each measured object through matched object points between two pairs of horizontal cameras, a height dimension of measured objects through matched object points between two pairs of vertical cameras, and a length dimension of the measured objects through matched object points between two pairs of horizontal cameras and two pairs of vertical cameras, wherein all of the dimensions above, having a redundant feature, are compared and analyzed using the redundant data, improving measuring accuracy and precision; and according to obtained coordinates of the spatial locations of respective object points, calculating other three-dimensional dimensions of the measured object which need to be specially measured to form three-dimensional point clouds and establish a three-dimensional point-cloud graph for creating the three-dimensional stereoscopic reproduction.
10. The method of claim 9, wherein the three-dimensional measuring system further comprises: at least one vertical laser; and at least one horizontal laser; wherein the vertical laser is configured to be located on a perpendicular bisector of a connecting line OaOb, and the horizontal laser is configured to be located on a perpendicular bisector of a connecting line OaOc.
11. The method of claim 9, wherein parameters of the cameras and the lenses, and the length and width dimensions of the rectangular shape, are selected based on: when a measuring distance is unchanged, the larger a volume of the measured object, the shorter the focal lengths required by the lenses; when the measuring distance is increased, a measurable range is also correspondingly increased; and a measuring resolution is improved by: improving the resolutions of the cameras, decreasing the measuring distance, and, in a condition that the measuring distance is unchanged, increasing values of the focal lengths and increasing dimensions of an array of centers of the optical axes of the four-camera group.
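These qualitative relations match the textbook depth-resolution estimate for a parallel-axis stereo pair, dZ ≈ Z²·p/(f·b), where p is the pixel pitch and b the baseline; the sketch below uses that standard estimate, not a formula quoted from the patent:

```python
def depth_resolution(z_mm, baseline_mm, focal_mm, pixel_mm):
    """Smallest resolvable depth change at distance z for a
    parallel-axis stereo pair: dZ ~ Z^2 * pixel / (f * baseline).
    Smaller Z, larger f, or a larger baseline all reduce (improve) dZ,
    as claim 11 states; a finer pixel pitch does the same."""
    return (z_mm ** 2) * pixel_mm / (focal_mm * baseline_mm)
```

With the claim-3 parameters (f = 25 mm, 5.5 μm pixels) and the maximum claim-1 baseline m = 100 mm, an object 1 m away would have a depth resolution on the order of a couple of millimeters.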
12. The method of claim 9, wherein, according to the matched object point image coordinates, formulas for calculating coordinates of the spatial location of the object point are: with a central point O of a rectangular plane of the focal points Oa, Ob, Oc, and Od of a group of four cameras, the camera A, the camera B, the camera C, and the camera D, serving as an origin, setting a rectangular coordinate system of the space of the measured object, wherein X is a horizontal direction, Y is a vertical direction, and Z is a length or depth direction; coordinates of a spatial location of a same point, point P1, of the measured object are P1 (P1x, P1y, P1z), and corresponding imaging points of the spatial three-dimensional coordinates of the point P1 in the group of four cameras, the camera A, the camera B, the camera C, and the camera D, are P1a (P1ax, P1ay), P1b (P1bx, P1by), P1c (P1cx, P1cy), and P1d (P1dx, P1dy); relational expressions of location coordinates are as follows: a horizontal operation formula of the camera A and the camera B:
a horizontal operation formula of the camera C and the camera D:
a vertical operation formula of the camera A and the camera C:
a vertical operation formula of the camera B and the camera D:
a depth operation formula of the camera A and the camera B:
a depth operation formula of the camera C and the camera D:
a depth operation formula of the camera A and the camera C:
a depth operation formula of the camera B and the camera D:
wherein: “m” is the OaOb side length of the rectangular plane; “n” is the OaOc side length of the rectangular plane; and “f” is the focal length of the four cameras.
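The operation formulas themselves are not reproduced in this text. For the geometry defined in the claim (parallel optical axes, origin at the center O of the focal-point rectangle, image x rightward and y upward, image coordinates in the same units as m, n, and f), the standard similar-triangle relations give the sketch below; it is a reconstruction under those assumptions, not the patent's verbatim formulas:

```python
def triangulate(p_a, p_b, p_c, m, n, f):
    """Spatial coordinates (X, Y, Z) of a point from its image
    coordinates in camera A (top-left), camera B (top-right), and
    camera C (below A), for a parallel-axis rig with horizontal
    baseline m, vertical baseline n, and focal length f:
        X = m (x_a + x_b) / (2 (x_a - x_b))
        Y = n (y_a + y_c) / (2 (y_c - y_a))
        Z = m f / (x_a - x_b)   (the horizontal-pair depth)"""
    x_a, y_a = p_a
    x_b, _ = p_b
    _, y_c = p_c
    Z = m * f / (x_a - x_b)                    # depth from pair A-B
    X = m * (x_a + x_b) / (2.0 * (x_a - x_b))  # horizontal coordinate
    Y = n * (y_a + y_c) / (2.0 * (y_c - y_a))  # vertical coordinate
    return X, Y, Z
```

The vertical pair A-C yields an equivalent depth, n·f/(y_c − y_a), which should agree with the horizontal result; that agreement is the redundancy exploited in claims 6 and 7.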
13. The method of claim 12, wherein a general expression for calculating coordinates of an object point, the spatial location coordinates of PN, is:
wherein let the focal points of the four cameras, the camera A, the camera B, the camera C, and the camera D, be Oa, Ob, Oc, and Od, wherein the focal points Oa, Ob, Oc, and Od are on the same plane and form one rectangular plane; let the OaOb length of the rectangular plane be “m” and the OaOc length be “n”, and the optical axes of the four cameras are parallel to each other and perpendicular to the rectangular plane; the group of four cameras, the camera A, the camera B, the camera C, and the camera D, use identical CCDs for imaging, and also identical lenses, with the focal length of the four cameras set to be “f”; setting a rectangular coordinate system of the space of the measured object, taking the central point O of the rectangular plane of Oa, Ob, Oc, and Od as the origin, wherein X is a horizontal direction parallel to an edge OaOb of the rectangular shape, Y is a vertical direction parallel to an edge OaOc of the rectangular shape, and Z is a length or depth direction pointing towards the measured object; and let any object point in the measured object be PN, and coordinates of projection points of PN on imaging planes of the group of four cameras, the camera A, the camera B, the camera C, and the camera D, be PNa (PNax, PNay), PNb (PNbx, PNby), PNc (PNcx, PNcy), and PNd (PNdx, PNdy); and then, let coordinates of a spatial location of the point PN be PN (PNx, PNy, PNz).
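The general expression of claim 13 is likewise missing from this text. Under the same parallel-axis assumptions as claim 12 (image x rightward, image y upward, camera A above camera C), the pair relations generalize to the following reconstruction, which is the standard derivation rather than the patent's verbatim expression:

```latex
P_{Nx} = \frac{m\,(P_{Nax}+P_{Nbx})}{2\,(P_{Nax}-P_{Nbx})},\qquad
P_{Ny} = \frac{n\,(P_{Nay}+P_{Ncy})}{2\,(P_{Ncy}-P_{Nay})},\qquad
P_{Nz} = \frac{m\,f}{P_{Nax}-P_{Nbx}} = \frac{n\,f}{P_{Ncy}-P_{Nay}}
```

The two expressions for P_Nz come from the horizontal pair A-B and the vertical pair A-C respectively, and their agreement provides the redundancy check described in the claims.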
14. A feature-point three-dimensional measuring system of a planar array of a four-camera group, comprising: at least one four-camera group formed by four digital cameras arranged in the form of a 2×2 array, wherein the four digital cameras comprise a camera a, a camera b, a camera c, and a camera d, and the camera a, the camera b, the camera c, and the camera d are arranged on a same plane; focal points Oa, Ob, Oc, and Od on imaging optical axes of the four cameras, the camera a, the camera b, the camera c, and the camera d, are on a same plane and form one rectangular shape, forming a rectangular plane; all of the imaging optical axis central lines of the four cameras, the camera a, the camera b, the camera c, and the camera d, are perpendicular to the rectangular plane; the four cameras, the camera a, the camera b, the camera c, and the camera d, are of identical models and have identical lenses; the camera a is located on a position horizontal to the camera b, and an extension line of a row where an X-coordinate value of a CCD in the camera a is located coincides with a row where a corresponding X-coordinate value of a CCD in the camera b is located; the camera c is located on a position horizontal to the camera d, and an extension line of a row where an X-coordinate value of a CCD in the camera c is located coincides with a row where a corresponding X-coordinate value of a CCD in the camera d is located; the camera a is located on a position vertical to the camera c, and an extension line of a column where a Y-coordinate value of a CCD in the camera a is located coincides with a column where a corresponding Y-coordinate value of a CCD in the camera c is located; the camera b is located on a position vertical to the camera d, and an extension line of a column where a Y-coordinate value of a CCD in the camera b is located coincides with a column where a corresponding Y-coordinate value of a CCD in the camera d is located; the four-camera group is formed by four digital cameras where a focal point on an imaging optical axis of a chosen camera and focal points on imaging optical axes of three adjacent cameras form one rectangular shape, forming a rectangular plane, and all of the imaging optical axis central lines of the four cameras are perpendicular to the rectangular plane; and wherein the feature-point three-dimensional measuring system further comprises: at least one vertical laser; at least one horizontal laser, wherein the vertical laser is configured to be located on a perpendicular bisector of a connecting line OaOb, and the horizontal laser is configured to be located on a perpendicular bisector of a connecting line OaOc; and a processor configured to receive images acquired from the at least one four-camera group, and perform a matching operation for object points of a measured object, wherein when performing the matching operation, full matching is carried out for the object points of the measured object by translating, superimposing, and comparing, point by point, pixel points of measured images of two pairs of horizontal cameras and two pairs of vertical cameras in a horizontal direction and in a vertical direction; and wherein in the four-camera group, a distance between two adjacent cameras in a horizontal direction is “m”; and a distance between two adjacent cameras in a vertical direction is “n”, and a range of “m” is 50 to 100 millimeters, and a range of “n” is 50 to 100 millimeters.
Claim Tree

11. A multicamera measuring system for measuring object points of a threedimensional object, comprising:

2. The multicamera measuring system of claim 1, wherein
 the at least one fourcamera group is provided in a form of 2×3, 2×4, 2×5, 3×2, 3×3, 3×4, 3×5 or 4×4 array of cameras.

3. The multicamera measuring system of claim 1, wherein
 in the at least one fourcamera group each of the cameras has: sensors of a type of ⅔″ complementary metaloxide semiconductor (CMOS); a pixel dimension of 5.5 μm; a resolution of 1024×2048; and a focal length of lens of 25 millimeters.

4. The multicamera measuring system of claim 1, wherein
 the at least one fourcamera group comprises

5. The multicamera measuring system of claim 1, wherein
 : the matching operation generates matched object point image coordinates; and the processor is configured to place coordinate expressions of object points PN in respective spaces of a measured object, and calculate coordinates of spatial locations of the respective object points.

7. The multicamera measuring system of claim 1, wherein
 all the dimensions having

8. The multicamera measuring system of claim 1, wherein
 the processor is further configured to, according to obtained coordinates of the spatial locations of respective object points, calculate other threedimensional dimensions of the measured objects which need to be specially measured to form threedimensional point clouds and to establish a threedimensional point clouds graph for performing threedimensional stereoscopic reproduction.


99. A method of measuring object points on a threedimensional object, comprising
 the steps of:providing a threedimensional measuring system comprising: at least one fourcamera group arranged in a form of a 2×2 array of digital cameras, wherein the digital cameras comprise a camera A, a camera B, a camera C, and a camera D, with the cameras A, B, C and D being arranged on a same plane and being of identical models with identical lenses, forming a rectangular shape
 focal points O_{a}, O_{b}, O_{c}, and O_{d }on imaging optical axes of the four cameras A, B, C and D, forming a rectangular plane wherein each of the imaging optical axis central lines of the cameras A, B, C and D, respectively, is perpendicular to the rectangular plane
 and wherein: the camera A is located on a position horizontal to the camera B, the camera C is located on a position horizontal to the camera D, the camera A is located on a position vertical to the camera C, and the camera B is located on a position vertical to the camera D
 and dimensions of the rectangular shape and parameters of the cameras and the lenses are selected by considering factors including accuracy of the measuring system and a size of a measured object, while ensuring that the measured object is capable of simultaneously having corresponding imaging points on the four cameras, and if the measured object is out of the imaging range, it is needed to increase the measuring cameras in pair, forming an array of the measuring cameras
 a processor configured to receive images acquired from the at least one fourcamera group, and perform a matching operation for object points of a measured object, wherein when performing the matching operation, full matching is carried out for the object points of the measured object by translating, superimposing, and comparing, point by point, pixel points of measured images of two pairs of horizontal cameras and two pairs of vertical cameras in a horizontal direction and in a vertical direction, respectively
 placing each of the cameras A, B, C and D in a field of view that enables simultaneous imaging
 selecting dimensions of the rectangular shape and parameters of the cameras A, B, C and D and the lenses are selected by considering factors including accuracy of the measuring system and a size of a measured object, wherein when high measuring accuracy is required, it is needed to improve resolution of the cameras and increase focal length of the lenses
 using the measuring system, acquiring images of object points of the threedimensional object
 after acquisition of images is completed, performing a matching operation for object points of the images captured by the camera group which comprises: a binocular stereoscopic vision measurement, wherein one of the object points of the measured object is a known point and a point corresponding to the known point is disposed on two or more alternate images
 connecting the known point and the corresponding points on the two or more alternate images to form one plane
 epipolar lines being defined as lines that intersect with the one plane and with the two or more alternate images, wherein the corresponding points are located on the epipolar lines and all corresponding points of the measured object are parallel to the horizontal direction or the vertical direction
 and processing the acquired images to form a threedimensional stereoscopic reproduction
 provided that where the at least one fourcamera group comprises a group of more than four cameras placed in a rectangular planar array, and the processor is configured to perform the matching operation with each group of four adjacent cameras, finding out all object points whose spatial locations need to be calculated
 according to matched object point image coordinates, calculating coordinates of a spatial location of each of the object points, wherein using the processor, matched object point image coordinates are put into coordinate expressions of any object point P_{N }in a space of the measured object, to calculate coordinates of spatial locations of respective object points
 wherein the processor is further configured to calculate a width dimension of each measured object through matched object points between two pairs of horizontal cameras, a height dimension of measured objects through matched object points between two pairs of vertical cameras, and a length dimension of the measured objects through matched object points between two pairs of horizontal cameras and two pairs of vertical cameras wherein all of the dimensions above, having a redundant feature, are compared and analyzed on redundant data, improving measuring accuracy and precision rate
 according to obtained coordinates of the spatial locations of respective object points, calculating other threedimensional dimensions of the measured object which need to be specially measured to form threedimensional point clouds and establish a threedimensional point clouds graph for creating the threedimensional stereoscopic reproduction.

10. The method of claim 9, wherein
 the threedimensional measuring system further comprises:

11. The method of claim 9, wherein
 parameters of the cameras and the lenses, and the length and width dimensions of the rectangular shape are selected based on: when a measuring distance is unchanged, the larger a volume of the measured object, the shorter the focal lengths required by the lenses; when the measuring distance is increased, a measurable range is also correspondingly increased; and a measuring resolution is improved in a way of: improving the resolutions of the cameras, decreasing the measuring distance, and in a condition that the measuring distance is unchanged, decreasing values of the focal lengths, and increasing dimensions of an array of centers of the optical axes of the fourcamera group.

12. The method of claim 9, wherein
 according to the matched object point image coordinates, formulas of calculating coordinates of the spatial location of the object point are: with a central point O of a rectangular plane of the focal points O_{a}, O_{b}, O_{c}, and O_{d }of a group of four cameras, the camera a, the camera B, the camera C, and the camera D, serving as an origin, setting a triangular coordinate system of the space of the measured object, wherein

1414. An featurepoint threedimensional measuring system of a planar array of a fourcamera group, comprising:
 at least one fourcamera group, the group formed by four digital cameras, wherein a fourcamera group formed by four digital cameras is arranged in a form of a 2×2 array
 the four digital cameras comprise a camera a, a camera b, a camera c, and a camera d, wherein the camera a, the camera b, the camera c, and the camera d are arranged on a same plane
 focal points O_{a}, O_{b}, O_{c}, and O_{d }on imaging optical axes of the four cameras, the camera a, the camera b, the camera c, and the camera d, are on a same plane and form one rectangular shape, forming a rectangular plane
 all of imaging optical axis central lines of the four cameras, the camera a, the camera b, the camera c, and the camera d, are perpendicular to the rectangular plane
 the four cameras, the camera a, the camera b, the camera c, and the camera d, are of identical models and have identical lenses
 the camera a is located on a position horizontal to the camera b, and an extension line of a row where an X-coordinate value of a CCD in the camera a is located coincides with a row where a corresponding X-coordinate value of a CCD in the camera b is located
 the camera c is located on a position horizontal to the camera d, and an extension line of a row where an X-coordinate value of a CCD in the camera c is located coincides with a row where a corresponding X-coordinate value of a CCD in the camera d is located
 the camera a is located on a position vertical to the camera c, and an extension line of a column where a Y-coordinate value of the CCD in the camera a is located coincides with a column where a corresponding Y-coordinate value of the CCD in the camera c is located
 the camera b is located on a position vertical to the camera d, and an extension line of a column where a Y-coordinate value of the CCD in the camera b is located coincides with a column where a corresponding Y-coordinate value of the CCD in the camera d is located
 the four-camera group formed by the four digital cameras is formed by four digital cameras where a focal point on an imaging optical axis of a chosen camera and focal points on imaging optical axes of three adjacent cameras form one rectangular shape, forming a rectangular plane, and all of the imaging optical axis central lines of the four cameras are perpendicular to the rectangular plane
 and wherein the feature-point three-dimensional measuring system further comprises: at least one vertical laser, wherein the vertical laser is configured to be located on a perpendicular bisector of a connecting line O_{a}O_{b}, and the horizontal laser is configured to be located on a perpendicular bisector of a connecting line O_{a}O_{c}
 at least one horizontal laser
 and a processor configured to receive images acquired from the at least one four-camera group, and perform a matching operation for object points of a measured object, wherein, when performing the matching operation, full matching is carried out for the object points of the measured object by translating, superimposing, and comparing, point by point, pixel points of measured images of two pairs of horizontal cameras and two pairs of vertical cameras in a horizontal direction and in a vertical direction
 and wherein in the four-camera group, a distance between two adjacent cameras in a horizontal direction is “m”
 and a distance between two adjacent cameras in a vertical direction is “n”, and a range of “m” is 50 to 100 millimeters, and a range of “n” is 50 to 100 millimeters.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of International Application PCT/CN2016/079274 filed Apr. 14, 2016. That application is entitled “Feature-Point Three-Dimensional Measuring System of Planar Array of Four-Camera Group and Measuring Method,” and is incorporated herein in its entirety by reference.
This application also claims priority to Chinese national patent application CN 2016/10046181.0 filed on Jan. 22, 2016 and Chinese national patent application CN 2016/10131645.8 filed on Mar. 8, 2016.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT
Not applicable.
BACKGROUND OF THE INVENTION
This section is intended to introduce various aspects of the art, which may be associated with exemplary embodiments of the present disclosure. This discussion is believed to assist in providing a framework to facilitate a better understanding of particular aspects of the present disclosure. Accordingly, it should be understood that this section should be read in this light, and not necessarily as admissions of prior art.
Field of the Invention
The present invention relates to a three-dimensional measuring system using a multi-camera array. The invention also pertains to the field of optical measuring, and particularly to methods of measuring the position and dimensions of three-dimensional objects using a digital camera group, and calculating three-dimensional coordinates of a viewed object through image processing.
Technology in the Field of the Invention
Different techniques have been used for the measurement of three-dimensional objects. These techniques calculate location coordinates of points on a viewed object to produce a three-dimensional stereoscopic measurement of its external dimensions or features.
I. The Single-Point Vision Measuring Method
A first technique is the single-point vision measuring method. All kinds of contactless length-measuring sensors can be regarded as single-point vision sensors. Examples include laser ranging sensors and laser scanners superimposed with a high-speed scanning function.
With single-point high-speed scanning, single-point vision data can be used to produce a three-dimensional stereoscopic image. However, single-point vision measuring methods have disadvantages; in particular, the morphology characteristics of a whole measured object cannot be quickly and entirely grasped. Further, in dynamic measurement, a quickly moving object will produce image deformation, resulting in measurement blind spots. The processing speed of three-dimensional points also needs to be improved.
II. The Planar Vision Measuring Method
A second technique is the planar vision measuring method. For planar vision measuring, various types of cameras are used for two-dimensional imaging, including video cameras and video surveillance equipment. Two-dimensional image online measurement is widely used in various assembly line tests, such as printing and packaging line quality tests and product quality and appearance tests for specific objects.
With planar vision measuring, an object within a field of view can be captured through two-dimensional imaging. A produced image of the object is subjected to analysis and intelligence processing through an edge classification algorithm. A drawback of planar vision measuring is that it is hard to directly calculate physical dimensions of the object from the plane image. For a three-dimensional test, if only independent image analysis is carried out, only qualitative analysis of the plane exterior profile can be made.
III. Three-Dimensional Vision Measurement
A third technique is three-dimensional vision measurement. There are several types of three-dimensional vision measuring technologies.
Optical Screenshot Technology and Line Laser Measurement
A first type is optical screenshot and line laser technology. Optical screenshot technology and line laser measurement address three-dimensional measurement by changing it into a two-dimensional problem through establishment of a laser plane. One laser plane is generated by one line laser generator, and an image is subjected to a binarization process after image capture by a digital camera arranged at a certain angle to this plane. An image of an intersecting line of the measured object and the laser line is obtained. The laser plane and the pixels of the two-dimensional image have a unique corresponding relationship. Accurate measurement of a laser cut line of the object can be realized through calibration. Currently, a line laser range finder can directly measure distances of various points on a laser line.
Binocular Vision Measuring Technology
A second technology is binocular, or multi-view, vision measuring technology. The reason why the human eye can quickly determine the distance and size of a viewed object is that human beings have two eyes, which have a fixed distance between them and which can dynamically adjust focal length and angle. The human brain has a computing speed that is hardly matched by the fastest computer at present. If two cameras with a fixed distance and focal length capture the same object at the same time, then, with respect to the same measured point of the measured object, there is a unique relationship between the images which they form. This is the principle of binocular vision measurement. Currently, 3D movies substantially use this method for filming and stereoscopic reproduction.
For the binocular vision measuring method, because there are still difficulties in the current technologies for the extraction of edge features of an object and in the binocular pixel matching algorithm, it is hard to quickly and accurately match the binocular images. The binocular vision measurement method has not yet been used on a large scale, and products with direct binocular measurement and image recognition have not been seen.
At present, a device which can truly realize direct acquisition of three-dimensional data has not yet appeared. So-called three-dimensional vision is formed through combination of related devices and technologies of one-dimensional and two-dimensional vision. Therefore, a need exists for an improved object-point, three-dimensional measuring system.
BRIEF SUMMARY OF THE INVENTION
The present invention first provides a three-dimensional measuring system for measuring object points. The system uses a multi-camera group positioned in a planar array. The system enables a user to quickly and accurately measure dimensions of a three-dimensional object using a multi-view camera system.
In order to achieve the technical solutions and methods of the present invention, a three-dimensional measuring system is offered. In one embodiment, the system includes an array of cameras of an identical model with identical lenses. In one embodiment, the multi-camera group comprises at least one four-camera group arranged in the form of a 2×2 array. The digital cameras in this array may be denoted as camera A, camera B, camera C, and camera D. The cameras A, B, C and D are arranged on the same plane.
In one embodiment, camera A is located at a position horizontal to camera B, while camera C is located at a position horizontal to camera D. Camera A is located at a position vertical to camera C, while camera B is located at a position vertical to camera D. This forms a four-sided profile.
Focal points O_{a}, O_{b}, O_{c}, and O_{d} reside on the imaging optical axes of the four cameras A, B, C, and D and lie on the same plane. Thus, cameras A, B, C and D form a polygonal plane, and preferably a rectangular plane, where the imaging optical axis of each of the four cameras A, B, C and D is perpendicular to the polygonal plane.
In one aspect, the array formed by the group of cameras is formed by four digital cameras where a focal point on an imaging optical axis of a chosen camera and focal points on imaging optical axes of three adjacent cameras form a rectangular shape, and all of the imaging optical axes of the four cameras are perpendicular to the rectangular plane.
The multi-camera group may alternatively be provided in the form of a 2×3, 2×4, 2×5, 3×2, 3×3, 3×4, 3×5 or 4×4 array.
In the multi-camera group, the cameras may have sensors of a ⅔″ CMOS type with a pixel dimension of 5.5 μm. In addition, the cameras may have a resolution of 1024×2048, and a lens having a focal length of 25 millimeters.
The system may further comprise at least one vertical laser and at least one horizontal laser. In this instance, the vertical laser is configured to be located on a perpendicular bisector of a connecting line O_{a}O_{b}, and the horizontal laser is configured to be located on a perpendicular bisector of a connecting line O_{a}O_{c}.
In the multi-camera group, a distance between two adjacent cameras in a horizontal direction may be denoted as “m”, and a distance between two adjacent cameras in a vertical direction may be denoted as “n”. Preferably, “m” ranges from 50 to 100 millimeters, while “n” also ranges from 50 to 100 millimeters.
An imaging method based on the three-dimensional measuring system described above is also provided herein. The method first comprises establishing a measuring system according to a three-dimensional planar array of a multi-camera group. The multi-camera group may include four or more identical cameras, whose optical axes are parallel, and whose focal points are on a same plane and form one rectangular profile.
The dimensions of the array and the parameters of the cameras and the lenses are selected based on the desired accuracy of the measuring system and the size of a measured object. When high measuring accuracy is required, it becomes necessary to improve the resolution of the cameras and increase the focal length of the lenses. It is also necessary to ensure that the measured object is capable of simultaneously having corresponding imaging points on the four cameras. If the measured object is out of the imaging range, the operator may add measuring cameras in pairs, forming a larger array of measuring cameras.
As part of the step of setting up the measuring system, the method also includes acquiring images. It is noted that in this first step of establishing a measuring system and acquiring images, the parameters of the cameras and the lenses, and the length and width dimensions of the rectangular shape, are selected. When the measuring distance is unchanged, the larger the volume of the measured object, the shorter the focal length required for the lenses. When the measuring distance is increased, the measurable range is also correspondingly increased.
Also in connection with this first step, the measuring resolution is improved by (i) improving the resolutions of the cameras, (ii) decreasing the measuring distance, (iii) in a condition that the measuring distance is unchanged, decreasing the values of the focal lengths, and (iv) increasing the dimensions of the array of centers of the optical axes of the four-camera group.
After acquisition of images is completed, the method next includes performing a matching operation for an object point of the images of the camera group. This is referred to herein as stereoscopic matching. In binocular stereoscopic vision measurement, stereoscopic matching means that one of the imaging points is known and a corresponding point of this imaging point is found on another image. Epipolar geometric constraint is a common matching constraint technology. Three points, the measured point and its imaging points on the corresponding images, are connected to form one plane. The intersecting lines of this plane with the two images in the imaging space are referred to as epipolar lines. The epipolar constraint condition is that the matching point(s) must be located on the epipolar line.
As to the epipolar algorithm: since, in the three-dimensional vision measuring method of a planar array of a multi-camera group, the optical axes of the cameras are parallel and the focal points form a rectangular shape on the same plane, the epipolar line may be simplified as a straight line parallel to the X axis or the Y axis. That is to say, all corresponding projection points of the measured object on the respective imaging planes are on a straight line parallel to the X axis or the Y axis. Thus, when performing the matching operation, full matching can be carried out for all measured points of the measured object by directly translating, superimposing, and comparing, point by point, the pixel points of the measured images of each pair in the X-axis and Y-axis directions.
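Since each epipolar line collapses to an image row (for a horizontal camera pair) or an image column (for a vertical pair), the matching step can be illustrated as a one-dimensional search along a row. The following is a minimal sketch, not the patented algorithm itself; the function name, the window size, and the sum-of-absolute-differences cost are illustrative assumptions.

```python
import numpy as np

def match_along_row(row_a, row_b, x_a, window=3, max_disparity=60):
    """For a pixel at column x_a in camera A's row, find the matching
    column in camera B's corresponding row by translating a small window
    and comparing, point by point (sum of absolute differences).
    Returns the best-matching column index in row_b."""
    half = window // 2
    patch_a = row_a[x_a - half : x_a + half + 1]
    best_x, best_cost = x_a, float("inf")
    # Camera B views the scene from the right of A, so the match appears
    # at a smaller or equal column index (non-negative disparity).
    for d in range(0, max_disparity + 1):
        x_b = x_a - d
        if x_b - half < 0:
            break
        patch_b = row_b[x_b - half : x_b + half + 1]
        cost = np.abs(patch_a - patch_b).sum()
        if cost < best_cost:
            best_cost, best_x = cost, x_b
    return best_x

# Synthetic check: a bright feature at column 40 in A appears at 28 in B.
row_a = np.zeros(100); row_a[39:42] = [50.0, 255.0, 50.0]
row_b = np.zeros(100); row_b[27:30] = [50.0, 255.0, 50.0]
print(match_along_row(row_a, row_b, 40))  # → 28
```

The same routine, applied along columns instead of rows, serves the vertical camera pairs.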
In the matching operation, it is required to perform the matching operations on the images of the multi-camera group, finding all object points whose spatial locations need to be calculated. If a camera array having more than four cameras is used for measurement, different matching operations should be performed in different four-camera groups, respectively.
A third step in the measuring method is calculating the coordinates of the spatial location of the object point. This is done according to the matched object point image coordinates. In operation, the matched object point image coordinates are put into the coordinate expressions of any object point P_{N} in the space of the measured object, to calculate the coordinates of the spatial locations of the respective object points.
According to the calculation formulas for the spatial location of the object point P_{N}, a width dimension of the measured object is calculated through matched object points between the two pairs of horizontal cameras. In addition, a height dimension of the measured object is calculated through matched object points between the two pairs of vertical cameras. Finally, a length dimension of the measured object is calculated through matched object points between the two pairs of horizontal cameras and the two pairs of vertical cameras. All of the dimensions above have a redundant feature; the redundant data can be compared and analyzed, improving measuring accuracy and precision rate.
In the third step of calculating coordinates of the spatial location of the object point, the formulas for calculating the coordinates of the spatial locations of the respective object points are:
 with a central point O of the rectangular plane of the focal points O_{a}, O_{b}, O_{c}, and O_{d} of the group of four cameras, the camera A, the camera B, the camera C, and the camera D, serving as an origin, setting a rectangular coordinate system of the space of the measured object, wherein X is a horizontal direction, Y is a vertical direction, and Z is a length or depth direction;
 the coordinates of a spatial location of a same point, point P_{1}, of the measured object are P_{1} (P_{1x}, P_{1y}, P_{1z}), and the corresponding imaging points of the spatial three-dimensional coordinates of the point P_{1} in the group of four cameras A, B, C and D are P_{1a} (P_{1ax}, P_{1ay}), P_{1b} (P_{1bx}, P_{1by}), P_{1c} (P_{1cx}, P_{1cy}), and P_{1d} (P_{1dx}, P_{1dy}); the relational expressions of the location coordinates are as follows:
(taking camera A as the upper-left camera, camera B as the upper-right, camera C as the lower-left, and camera D as the lower-right, with each imaging coordinate measured from the corresponding optical-axis center)
a horizontal operation formula of camera A and camera B: P_{1x} = m (P_{1ax} + P_{1bx}) / [2 (P_{1ax} − P_{1bx})]
a horizontal operation formula of camera C and camera D: P_{1x} = m (P_{1cx} + P_{1dx}) / [2 (P_{1cx} − P_{1dx})]
a vertical operation formula of camera A and camera C: P_{1y} = n (P_{1ay} + P_{1cy}) / [2 (P_{1cy} − P_{1ay})]
a vertical operation formula of camera B and camera D: P_{1y} = n (P_{1by} + P_{1dy}) / [2 (P_{1dy} − P_{1by})]
a depth operation formula of camera A and camera B: P_{1z} = f m / (P_{1ax} − P_{1bx})
a depth operation formula of camera C and camera D: P_{1z} = f m / (P_{1cx} − P_{1dx})
a depth operation formula of camera A and camera C: P_{1z} = f n / (P_{1cy} − P_{1ay})
a depth operation formula of camera B and camera D: P_{1z} = f n / (P_{1dy} − P_{1by})
wherein: “m” is the O_{a}O_{b} length of the rectangular plane;
 “n” is the O_{a}O_{c} length of the rectangular plane, and
 “f” is the focal length of the four cameras.
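Under a standard parallel-axis pinhole model (camera A upper-left, B upper-right, C lower-left, D lower-right, image coordinates measured from each optical-axis center), the horizontal, vertical, and depth operation formulas can be sketched in code. The function name and the averaging of the four redundant depth estimates are illustrative choices, not mandated by the text.

```python
def point_from_four_views(p_a, p_b, p_c, p_d, m, n, f):
    """Recover spatial coordinates (Px, Py, Pz) of an object point from its
    image coordinates in cameras A (upper-left), B (upper-right),
    C (lower-left), D (lower-right). Each p_* is (x, y) measured from that
    camera's optical-axis center; m and n are the horizontal and vertical
    baselines; f is the common focal length (all in consistent units)."""
    dx1 = p_a[0] - p_b[0]   # horizontal disparity of pair A, B
    dx2 = p_c[0] - p_d[0]   # horizontal disparity of pair C, D
    dy1 = p_c[1] - p_a[1]   # vertical disparity of pair A, C
    dy2 = p_d[1] - p_b[1]   # vertical disparity of pair B, D
    x = m * (p_a[0] + p_b[0]) / (2 * dx1)
    y = n * (p_a[1] + p_c[1]) / (2 * dy1)
    # Four redundant depth estimates, one per camera pair; average them.
    z = (f * m / dx1 + f * m / dx2 + f * n / dy1 + f * n / dy2) / 4.0
    return x, y, z

# Worked check: an object point at (10, 20, 500) mm with m=60, n=50, f=25
# projects to A:(2.0, -0.25), B:(-1.0, -0.25), C:(2.0, 2.25), D:(-1.0, 2.25).
print(point_from_four_views((2.0, -0.25), (-1.0, -0.25),
                            (2.0, 2.25), (-1.0, 2.25), 60, 50, 25))
# → (10.0, 20.0, 500.0)
```

The redundancy noted in the text shows up here directly: the horizontal and vertical pairs each yield an independent depth estimate that can be compared.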
For the third step of calculating the coordinates of the spatial locations of the object points, the general expression for calculating the spatial location coordinates of an object point P_{N} is:
 P_{Nx} = m (P_{Nax} + P_{Nbx}) / [2 (P_{Nax} − P_{Nbx})], P_{Ny} = n (P_{Nay} + P_{Ncy}) / [2 (P_{Ncy} − P_{Nay})], and P_{Nz} = f m / (P_{Nax} − P_{Nbx}) = f n / (P_{Ncy} − P_{Nay}),
 wherein
 let the focal points of the four cameras (the camera A, the camera B, the camera C and the camera D) be O_{a}, O_{b}, O_{c}, and O_{d}, wherein the focal points O_{a}, O_{b}, O_{c}, and O_{d} are on the same rectangular plane,
 let the O_{a}O_{b }length of the rectangular plane be “m” and the O_{a}O_{c }length be “n”, with the optical axes of the four cameras being parallel to each other and perpendicular to the rectangular plane,
 wherein the group of four cameras A, B, C and D, use identical cameras having identical lenses for imaging, and
 let the focal length of the lenses be set to be “f”;
 setting a rectangular coordinate system of the space of the measured object, taking the central point O of the rectangular plane of O_{a}, O_{b}, O_{c}, and O_{d} as the origin, wherein X is a horizontal direction parallel to an edge O_{a}O_{b} of the rectangular shape, Y is a vertical direction parallel to an edge O_{a}O_{c} of the rectangular shape, and Z is a length or depth direction and points towards the measured object; and
 let any object point in the measured object be P_{N}, and the coordinates of the projection points of P_{N} on the imaging planes of the group of four cameras A, B, C and D be P_{Na} (P_{Nax}, P_{Nay}), P_{Nb} (P_{Nbx}, P_{Nby}), P_{Nc} (P_{Ncx}, P_{Ncy}), and P_{Nd} (P_{Ndx}, P_{Ndy}), and then, let the coordinates of the spatial location of the point P_{N} be P_{N} (P_{Nx}, P_{Ny}, P_{Nz}).
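As a hedged illustration of this coordinate setup, the forward projection of an object point P_{N} into the four cameras can be sketched as follows, assuming camera A at (−m/2, +n/2), B at (+m/2, +n/2), C at (−m/2, −n/2), D at (+m/2, −n/2) on the focal-point rectangle, and a non-inverted pinhole image model; the function name is hypothetical.

```python
def project_to_four_cameras(P, m, n, f):
    """Forward pinhole model for the four-camera group: given object-space
    coordinates P = (Px, Py, Pz), with the origin at the center O of the
    focal-point rectangle and Z pointing toward the measured object,
    return the image coordinates (x, y) of P in cameras A, B, C, D.
    Assumed layout: A upper-left, B upper-right, C lower-left,
    D lower-right; non-inverted image model."""
    Px, Py, Pz = P
    def cam(cx, cy):
        # Image coordinates relative to this camera's optical-axis center.
        return (f * (Px - cx) / Pz, f * (Py - cy) / Pz)
    p_a = cam(-m / 2, +n / 2)
    p_b = cam(+m / 2, +n / 2)
    p_c = cam(-m / 2, -n / 2)
    p_d = cam(+m / 2, -n / 2)
    return p_a, p_b, p_c, p_d

# Example: with m=60, n=50, f=25 (mm), a point 500 mm away projects with a
# horizontal disparity of f*m/Pz = 3.0 between cameras A and B.
print(project_to_four_cameras((10, 20, 500), 60, 50, 25))
# → ((2.0, -0.25), (-1.0, -0.25), (2.0, 2.25), (-1.0, 2.25))
```

Running this forward model and then applying the operation formulas recovers the original P_{N}, which is a convenient way to sanity-check a calibration.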
Finally, a fourth step of the method includes calculating other three-dimensional dimensions of the measured object which need to be specially measured, to form three-dimensional data points (or point clouds) and establish a three-dimensional point-cloud graph for performing three-dimensional stereoscopic reproduction. This is done according to the obtained coordinates of the spatial locations of the respective object points.
By using the technical solutions and the methods of the present invention, since four or more digital cameras are arranged in a rectangular array on the same plane, after the multi-view matching algorithm is completed and the respective corresponding object points are found, the coordinates of the three-dimensional location of an object point of the viewed object are quickly and accurately calculated, to further realize accurate three-dimensional stereoscopic imaging of the external dimensions of the viewed object. Apart from being capable of quickly calculating the three-dimensional coordinates of the object point, since the arrangement of the planar array of the four-camera group is used, the methods can simplify the matching algorithm of the object point.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the present inventions can be better understood, certain illustrations, charts and/or flow charts are appended hereto. It is to be noted, however, that the drawings illustrate only selected embodiments of the inventions and are therefore not to be considered limiting of scope, for the inventions may admit to other equally effective embodiments and applications.
FIG. 1 is a plan schematic view of an arrangement of a planar, fourcamera array for the system of the present invention.
FIG. 2 is a perspective view of the planar, fourcamera array of FIG. 1.
FIG. 3 is a schematic view of an arrangement of a coordinate system of the planar, fourcamera array of the present invention.
FIG. 4 is a perspective schematic view of calculating horizontal dimensions of the fourcamera array of the present invention.
FIG. 5 is a plan schematic view of calculating horizontal dimensions of the fourcamera array of the present invention.
FIG. 6 is a perspective schematic view of calculating vertical dimensions of the fourcamera array of the present invention.
FIG. 7 is a plan schematic view of calculating vertical dimensions of the fourcamera array of the present invention.
FIG. 8 is a schematic view of an arrangement of a three-dimensional measuring system of a 3×2 camera array of the present invention.
FIG. 9 is a top view of the arrangement of the three-dimensional measuring system of the 3×2 camera array of FIG. 8.
FIG. 10 is a side view of the arrangement of the three-dimensional measuring system of the 3×2 camera array of FIG. 8.
DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
Below, the technical solutions and the methods of the present invention are further described in detail in conjunction with the figures, for understanding aspects of the present invention.
As shown in FIG. 1, FIG. 2, and FIG. 3, a three-dimensional measuring system using a multi-camera group is provided. The group of cameras is used to identify and measure object points on a three-dimensional object. The group is formed by an array of at least four digital cameras. In one aspect, the digital cameras are arranged in a 2×2 array. The digital cameras comprise a camera A, a camera B, a camera C and a camera D, wherein the cameras A, B, C and D are arranged on the same plane.
Focal points O_{a}, O_{b}, O_{c}, and O_{d} on the imaging optical axes of the four cameras (the camera A, the camera B, the camera C and the camera D) are on the same plane and form one rectangular shape, forming a rectangular plane. Cameras A, B, C and D are respectively located at the four corners of the rectangular shape (as in FIG. 1, FIG. 2, and FIG. 3). Thus, all of the imaging optical axes of the four cameras A, B, C and D are perpendicular to the rectangular plane.
The four cameras A, B, C and D are preferably of identical models and have identical lenses. The distance between two cameras, forming the length or width dimension of the rectangular shape, can be adjusted. For the selection of the resolutions of the cameras and other parameters, the focal lengths of the lenses, and the length and width dimensions of the rectangular shape, suitable parameters should be selected according to the location and dimensions of the measured object. The operator should ensure that the measured object is within the field of view of all cameras in the array, so that all cameras can image the measured object at the same time, and that the resolutions of the images meet the requirements of measurement accuracy.
The system further comprises at least one vertical laser and at least one horizontal laser. The so-called vertical laser is a laser provided in a vertical direction, while the so-called horizontal laser is a laser provided in a horizontal direction. Gas lasers, solid-state lasers, semiconductor lasers, free electron lasers or pulsed lasers can be chosen as the lasers.
As shown in FIGS. 1 and 2, the camera A is located on a position horizontal to camera B, while camera C is located on a position horizontal to camera D (parallel to camera A and camera B). The vertical laser is configured to be located on a perpendicular bisector of the connecting line of O_{a} and O_{b}. The vertical laser is located above camera A and camera B, at a distance of 1 to 3 times the length of the connecting line of the focal points of camera A and camera B.
As also shown in FIGS. 1 and 2, camera A is located on a position vertical to camera C, while camera B is located on a position vertical to camera D (parallel to camera A and camera C). The horizontal laser is configured to be located on a perpendicular bisector of the connecting line of O_{a} and O_{c}. The horizontal laser is located to the left of camera A and camera C, at a distance of 1 to 3 times the length of the connecting line of the focal points of camera A and camera C.
The camera group of digital cameras is formed by four digital cameras where a focal point on an imaging optical axis of a chosen camera and focal points on imaging optical axes of three adjacent cameras form one rectangular shape, forming a rectangular plane. Moreover, all of the imaging optical axes of the four cameras are perpendicular to this rectangular plane.
The multi-camera group may be provided in the form of a 2×3, a 2×4, a 2×5, a 3×2, a 3×3, a 3×4, a 3×5 or a 4×4 array. In order to be capable of measuring objects at different positions having different external dimensions, and of meeting requirements of different levels of measurement accuracy, the 2×2 array of the four-camera group is, as demanded, expanded in pairs into a larger rectangular array of cameras, so as to enlarge the measured range of the field of view. The basic array of the four-camera group is a 2×2 array; if the lateral range of the field of view is to be increased, another pair of cameras can be arranged in the lateral direction, turning the array into a 3×2 array.
A principle of the arrangement of the measuring camera array is that a focal point on an imaging optical axis of a camera and the focal points on the imaging optical axes of three adjacent cameras form one rectangular shape, and all of the imaging optical axes of the cameras are perpendicular to the rectangular plane.
Another principle of the arrangement of the measuring camera array is that, for an object point which needs to be measured, corresponding matching points can be found in all images in the camera array, or at least in one 2×2 four-camera array.
A calculating principle of the measuring camera array is that the calculation of image matching and the calculation of three-dimensional coordinates are performed on the basis of the 2×2 array of the four-camera group. If one pair of cameras is adjacent to two other pairs of cameras in the horizontal direction or the vertical direction, respectively, this pair of cameras can take part in operations of the 2×2 arrays of the four-camera group with each of the two adjacent pairs of cameras, respectively.
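This calculating principle can be illustrated by enumerating the overlapping 2×2 four-camera groups inside a larger array; the helper below is a sketch, with the (row, column) indexing convention assumed.

```python
def four_camera_groups(rows, cols):
    """Enumerate every 2x2 four-camera group in a rows x cols camera
    array. Matching and three-dimensional coordinate calculation are
    always performed within one such group; cameras are indexed
    (row, col), and adjacent groups share a pair of cameras."""
    groups = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            groups.append(((r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)))
    return groups

# A 3x2 array (as in FIG. 8) yields two overlapping four-camera groups;
# the shared middle pair of cameras takes part in both groups' operations.
print(len(four_camera_groups(3, 2)))  # → 2
```

For a 4×4 array the same enumeration yields nine overlapping groups, which is what allows the redundant comparison of results across groups.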
In the object-point measuring method of a planar array of a multi-camera group, one or more vertical line laser generators (i.e., vertical lasers) or horizontal line laser generators (i.e., horizontal lasers) can be provided. This is also shown in FIG. 1 and FIG. 2. The purpose of the one or more vertical or horizontal line laser generators is to quickly and accurately match, across the four-camera group images, the same measured point on the intersecting line of the laser and the viewed object through laser-structured light. (Since the laser light has high illuminance compared with a common light source, it is easy to obtain on the image a graph of the intersecting line with the object; moreover, it is easy to calculate three-dimensional coordinates of the intersecting line according to the optical screenshot principle, without ambiguity. Similar to an indicating line, it facilitates image matching. From another perspective, if the images are directly matched, the algorithm is complex, easily causes matching errors, and can hardly guarantee uniqueness.) But if a line laser generator is not provided, the measurement results will not be affected as long as quick matching of the image object points can be realized. Sometimes, in order to meet the requirements of measuring speed and accuracy, a plurality of horizontal or vertical laser lines can be arranged. It is recommended to arrange the laser lines parallel to each other.
In the four-camera group, the cameras may have sensors of a ⅔″ CMOS type with a pixel dimension of 5.5 μm. In addition, the cameras may have a resolution of 1024×2048, and a lens focal length of 25 millimeters.
In the four-camera group, the distance between two adjacent cameras in the horizontal direction is denoted as “m”, and the distance between two adjacent cameras in the vertical direction is denoted as “n”. The range of “m” may be 50 to 100 millimeters, and more preferably about 60 millimeters. The range of “n” may be 50 to 100 millimeters, and more preferably about 50 millimeters.
A measuring method based on the above mentioned threedimensional measuring system comprises the following specific steps:
Step 1: Establishing a measuring system and acquiring images according to a three-dimensional vision measuring method using a multi-camera array.
A principle of the establishing step is that four or more identical cameras are provided, whose optical axes are parallel and whose focal points are on the same plane and form one rectangular shape. For the selection of the dimensions of the rectangular shape and the parameters of the cameras and the lenses, the main factors to consider are the accuracy of the measuring system and the size of the measured object. When high measuring accuracy is required, improving the resolutions of the cameras and increasing the focal lengths of the lenses should be considered; at the same time, it is necessary to ensure that the measured object can simultaneously have corresponding imaging points on the four cameras. If the measured object is out of the imaging range, the measuring cameras can be increased in pairs, forming a larger array of measuring cameras.
It is noted that in the first step of establishing a measuring system, the parameters of the cameras and the lenses, and the length and width dimensions of the rectangular shape, are selected. When the measuring distance is unchanged, the larger the volume of the measured object, the shorter the focal lengths required for the lenses. When the measuring distance is increased, the measurable range is also correspondingly increased.
Also in connection with this first step, the measuring resolution is improved by (i) improving the resolutions of the cameras, (ii) decreasing the measuring distance, (iii) in a condition that the measuring distance is unchanged, decreasing the values of the focal lengths, and (iv) increasing the dimensions of the array of centers of the optical axes of the four-camera group.
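As a back-of-envelope illustration of items (i) to (iv): under a standard parallel-axis stereo model the depth relation is Z = f·m/d, so a one-pixel disparity error grows quadratically with distance and shrinks with a longer focal length or baseline. The helper name is hypothetical, and the specific numbers are taken from the example parameters given elsewhere in the text (f = 25 mm, m = 60 mm, 5.5 μm pixels).

```python
def depth_error_per_pixel(Z_mm, f_mm, m_mm, pixel_mm):
    """Approximate depth uncertainty for a one-pixel disparity error:
    from Z = f*m/d it follows that |dZ/dd| = Z**2 / (f*m), with the
    disparity d measured in mm on the sensor, so one pixel contributes
    roughly Z**2 * pixel / (f * m)."""
    return Z_mm**2 * pixel_mm / (f_mm * m_mm)

# Example parameters: f = 25 mm, m = 60 mm, 5.5 um (0.0055 mm) pixels.
# At a 1 m measuring distance, a one-pixel disparity error gives roughly:
print(round(depth_error_per_pixel(1000.0, 25.0, 60.0, 0.0055), 2))  # → 3.67
```

Halving the measuring distance cuts this error by a factor of four, while doubling the baseline m or the focal length f cuts it in half, consistent with items (ii) to (iv) above.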
As part of the step of setting up the measuring system, the method also includes acquiring images.
Step 2: After acquisition of images is completed, performing a matching operation for an object point of the images of the camera group.
A matching algorithm for the imaging points of the measured object on the images is carried out. This is done with reference to a process or algorithm executed by a processor. Since the multi-camera redundant and specifically structured arrangement method is used, corresponding matching is performed for the imaging points of the measured object on the images using the binocular and multi-view image matching algorithm. As to the epipolar algorithm in the binocular vision matching algorithm, the epipolar line is directly simplified as a straight line parallel to the X axis or the Y axis; that is to say, all corresponding projection points of the measured object on the respective imaging planes are on a straight line parallel to the X axis or the Y axis. Full matching is carried out for all measured points of the measured object by directly translating, superimposing, and comparing, point by point, the pixel points of the measured images of each pair in the X-axis and Y-axis directions. This method simplifies the otherwise complex algorithm of binocular matching.
In binocular stereoscopic vision measurement, stereoscopic matching means that, given an imaging point in one image, the corresponding point is found in the other image. The epipolar geometric constraint is a common matching constraint technique. We connect three points, the measured point and its imaging points on the corresponding images, to form one plane. The intersection lines of this plane with the two imaging planes are called epipolar lines. The epipolar constraint condition is that the matching points must be located on the epipolar lines.
As to the epipolar algorithm: since, in the three-dimensional vision measuring method of a planar array of a multi-camera group, the optical axes of the cameras are parallel and the focal points form a rectangle on the same plane, the epipolar line is directly simplified into a straight line parallel to the X axis or the Y axis; that is, all corresponding projection points of the measured object on the respective imaging planes lie on a straight line parallel to the X axis or the Y axis. Thus, when performing the matching operation, full matching can be carried out for all measured points of the measured object by directly translating, superimposing, and comparing, point by point, the pixel points of the measured images of each pair in the X-axis and Y-axis directions.
In the matching operation, matching must be performed on the images of the four-camera group to find all object points whose spatial locations need to be calculated. If a camera array having more than four cameras is used for measurement, separate matching operations should be performed in each four-camera group.
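Because the epipolar lines reduce to pixel rows (or columns), the matching step can be sketched as a one-dimensional block search. The following is an illustrative sketch, not the patent's own code, assuming 8-bit grayscale images from a horizontal camera pair and a sum-of-absolute-differences cost; the window size and disparity range are hypothetical parameters:

```python
import numpy as np

def match_horizontal(img_a, img_b, max_disp=64, win=5):
    """Brute-force scanline matching for a horizontal camera pair.

    Because the optical axes are parallel and the focal points lie on a
    rectangle, the epipolar line of any point is the same pixel row in the
    other image, so the search is a pure horizontal translation.  Returns,
    for each pixel of img_a, the disparity (x_a - x_b) minimising the sum
    of absolute differences over a square window.
    """
    h, w = img_a.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_a = img_a[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best, best_d = np.inf, 0
            for d in range(max_disp):   # translate along the same row only
                patch_b = img_b[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
                cost = np.abs(patch_a - patch_b).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A vertical camera pair is handled identically with the roles of rows and columns exchanged, and the redundant pairs can be used to cross-check each other's matches.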
Step 3: According to matched object point image coordinates, calculating coordinates of a spatial location of the object point.
In step 3, the matched object point image coordinates are substituted into the coordinate expressions of any object point P_{N} in the space of the measured object, to calculate the coordinates of the spatial locations of the respective object points. According to the calculation formulas of the spatial location of the object point, a width dimension of the measured object can be calculated through the object points matched between the two pairs of horizontal cameras, a height dimension through the object points matched between the two pairs of vertical cameras, and a length dimension through the object points matched between both the two pairs of horizontal cameras and the two pairs of vertical cameras. All of the above dimensions are redundant, so the redundant data can be compared and analyzed, improving the measuring accuracy and precision.
In the third step of calculating the coordinates of the spatial location of the object point, the formulas for calculating the coordinates of the spatial location of each object point are:
 with the central point O of the rectangular plane of the focal points O_{a}, O_{b}, O_{c}, and O_{d} of the group of four cameras (camera A, camera B, camera C, and camera D) serving as an origin, setting a rectangular coordinate system of the space of the measured object, wherein X is the horizontal direction, Y is the vertical direction, and Z is the length or depth direction;
 with the coordinates of the spatial location of a same point P_{1} of the measured object being P_{1} (P_{1x}, P_{1y}, P_{1z}), and the corresponding imaging points of the point P_{1} in the group of four cameras A, B, C, and D being P_{1a} (P_{1ax}, P_{1ay}), P_{1b} (P_{1bx}, P_{1by}), P_{1c} (P_{1cx}, P_{1cy}), and P_{1d} (P_{1dx}, P_{1dy}), the relational expressions of the location coordinates are as follows:
a horizontal operation formula of camera A and camera B: P_{1x} = m(P_{1ax} + P_{1bx}) / [2(P_{1ax} − P_{1bx})];
a horizontal operation formula of camera C and camera D: P_{1x} = m(P_{1cx} + P_{1dx}) / [2(P_{1cx} − P_{1dx})];
a vertical operation formula of camera A and camera C: P_{1y} = n(P_{1ay} + P_{1cy}) / [2(P_{1ay} − P_{1cy})];
a vertical operation formula of camera B and camera D: P_{1y} = n(P_{1by} + P_{1dy}) / [2(P_{1by} − P_{1dy})];
a depth operation formula of camera A and camera B: P_{1z} = mf / (P_{1ax} − P_{1bx});
a depth operation formula of camera C and camera D: P_{1z} = mf / (P_{1cx} − P_{1dx});
a depth operation formula of camera A and camera C: P_{1z} = nf / (P_{1ay} − P_{1cy});
a depth operation formula of camera B and camera D: P_{1z} = nf / (P_{1by} − P_{1dy});
wherein: “m” is the O_{a}O_{b} length of the rectangular plane;
 “n” is the O_{a}O_{c} length of the rectangular plane; and
 “f” is the focal length of the four cameras.
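Once the four imaging points of a point are matched, the operation formulas above can be applied directly. The following is a minimal sketch for illustration, not the patent's code; it assumes image coordinates measured from each CCD center in the same length unit as f, with camera A paired horizontally with B and vertically with C, and it averages the redundant depth estimates:

```python
def object_point(p1a, p1b, p1c, p1d, m, n, f):
    """Recover P1 = (x, y, z) from its four matched image points.

    p1a..p1d are (x, y) image coordinates in the same length unit as f,
    measured from each CCD centre Oa'..Od' with axes parallel to OXYZ.
    Applies the horizontal, vertical, and depth operation formulas; the
    redundant estimates are averaged.
    """
    # horizontal operation formulas (pairs A-B and C-D)
    x_ab = m * (p1a[0] + p1b[0]) / (2 * (p1a[0] - p1b[0]))
    x_cd = m * (p1c[0] + p1d[0]) / (2 * (p1c[0] - p1d[0]))
    # vertical operation formulas (pairs A-C and B-D)
    y_ac = n * (p1a[1] + p1c[1]) / (2 * (p1a[1] - p1c[1]))
    y_bd = n * (p1b[1] + p1d[1]) / (2 * (p1b[1] - p1d[1]))
    # depth operation formulas: four redundant estimates
    z_ab = m * f / (p1a[0] - p1b[0])
    z_cd = m * f / (p1c[0] - p1d[0])
    z_ac = n * f / (p1a[1] - p1c[1])
    z_bd = n * f / (p1b[1] - p1d[1])
    return ((x_ab + x_cd) / 2,
            (y_ac + y_bd) / 2,
            (z_ab + z_cd + z_ac + z_bd) / 4)
```

In practice the redundant estimates would first be compared against a tolerance before averaging, since a disagreement indicates a false match.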
Step 4: According to the obtained coordinates of the spatial locations of the respective object points, calculating any other three-dimensional dimensions of the measured object that need to be measured, forming three-dimensional point clouds, and establishing a three-dimensional point-cloud graph for three-dimensional stereoscopic reproduction.
See FIG. 3, FIG. 4, FIG. 5, FIG. 6, and FIG. 7 for the three-dimensional stereoscopic vision measuring principle of the present invention, wherein FIG. 4 and FIG. 5 are schematic views of calculating horizontal dimensions with camera A and camera B, and FIG. 6 and FIG. 7 are schematic views of calculating vertical dimensions with camera A and camera C.
We take FIG. 3, FIG. 4, FIG. 5, FIG. 6, and FIG. 7 as examples to describe the measuring principle.
In FIG. 3, let the focal points of the group of four cameras A, B, C, and D be O_{a}, O_{b}, O_{c}, and O_{d}; the focal points O_{a}, O_{b}, O_{c}, and O_{d} are on the same plane and form one rectangular plane. Let the O_{a}O_{b} length of the rectangular plane be “m”, and the O_{a}O_{c} length be “n”. The optical axes of the four-camera group are parallel to each other and perpendicular to this rectangular plane. The four cameras A, B, C, and D use identical Charge-Coupled Device (CCD) imaging, and also have identical lenses with a focal length set to f. Let the centers of the CCD imaging planes of the four-camera group A, B, C, and D be O_{a}′, O_{b}′, O_{c}′, and O_{d}′.
Let one object point of the measured object be P_{1}. Taking the central point O of the rectangular plane of O_{a}, O_{b}, O_{c}, and O_{d} as the origin, we set the rectangular coordinate system of the space of the measured object, wherein X is the horizontal direction, Y is the vertical direction, and Z is the length or depth direction; then, let the coordinates of the spatial location of the point P_{1} be P_{1} (P_{1x}, P_{1y}, P_{1z}).
As in FIG. 4, only the positional relationships of the spatial imaging of camera A and camera B are described. Let the imaging points of the point P_{1} on the imaging planes of camera A and camera B be P_{1a} and P_{1b}, and the projection point of the point P_{1} on the coordinate XY plane be P_{1}′, with coordinates P_{1}′ (P_{1x}, P_{1y}, 0). According to the imaging principle, the connecting line of the points P_{1} and P_{1a} passes through the point O_{a}, and the connecting line of the points P_{1} and P_{1b} passes through the point O_{b}. Taking O_{a}′ and O_{b}′ as centers, coordinate systems of the imaging planes of camera A and camera B, with directions consistent with the coordinate axes of the spatial coordinate system OXYZ of the object, are set respectively; then, the coordinates of P_{1a} are P_{1a} (P_{1ax}, P_{1ay}), and the coordinates of P_{1b} are P_{1b} (P_{1bx}, P_{1by}).
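The imaging relationships described above can be sketched as a pinhole forward projection. This is an illustrative model under an assumed non-inverted (virtual image plane) sign convention, with hypothetical numeric values:

```python
def project(p, cam_x, cam_y, f):
    """Project object point p = (x, y, z) through a pinhole at (cam_x, cam_y, 0).

    Returns image-plane coordinates measured from the CCD centre, with axes
    parallel to the object coordinate system OXYZ (non-inverted convention).
    """
    x, y, z = p
    return (f * (x - cam_x) / z, f * (y - cam_y) / z)

# cameras A and B share a row: same y, focal points separated by baseline m
m, f = 60.0, 25.0
p1 = (10.0, 4.0, 1000.0)              # hypothetical object point, mm
p1a = project(p1, -m / 2, 0.0, f)     # camera A at x = -m/2
p1b = project(p1, +m / 2, 0.0, f)     # camera B at x = +m/2
# the disparity p1a_x - p1b_x equals f*m/z, independent of the point's x
```

This forward model makes the triangle-similarity step concrete: the ray from P_{1} through the focal point lands on the imaging plane at a coordinate proportional to the point's offset from the camera axis, scaled by f/z.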
As in FIG. 5, the geometrical relationships of the projections of the three points P_{1a}, P_{1b}, and P_{1} in the XZ coordinate plane are described. According to the triangle similarity principle, we obtain:
P_{1ax}/f = (P_{1x} + m/2)/P_{1z} ①
P_{1bx}/f = (P_{1x} − m/2)/P_{1z} ②
According to Equations ① and ②, P_{1x} = m(P_{1ax} + P_{1bx}) / [2(P_{1ax} − P_{1bx})] ③
According to Equations ① and ②, P_{1z} = mf / (P_{1ax} − P_{1bx}) ④
As in FIG. 6, only the positional relationships of the spatial imaging of camera A and camera C are described. Let the imaging point of the point P_{1} on the imaging plane of camera C be P_{1c}, and the projection point of the point P_{1} on the coordinate XY plane be P_{1}′, with coordinates P_{1}′ (P_{1x}, P_{1y}, 0). According to the imaging principle, the connecting line of the points P_{1} and P_{1c} passes through the point O_{c}. Taking O_{c}′ as a center, a coordinate system of the imaging plane of camera C, with directions consistent with the coordinate axes of the spatial coordinate system OXYZ of the object, is set, wherein the coordinates of P_{1c} are P_{1c} (P_{1cx}, P_{1cy}).
As in FIG. 7, the geometrical relationships of the projections of the three points P_{1a}, P_{1c}, and P_{1} in the YZ coordinate plane are described. According to the triangle similarity principle, we obtain:
P_{1ay}/f = (P_{1y} + n/2)/P_{1z} ⑤
P_{1cy}/f = (P_{1y} − n/2)/P_{1z} ⑥
According to Equations ⑤ and ⑥, P_{1y} = n(P_{1ay} + P_{1cy}) / [2(P_{1ay} − P_{1cy})] ⑦
According to Equations ⑤ and ⑦, P_{1z} = nf / (P_{1ay} − P_{1cy}) ⑧
According to formulas ③, ④, ⑦, and ⑧, through the paired operations of cameras A and B and of cameras A and C respectively, we obtain the calculation formulas expressing the coordinates P_{1x}, P_{1y}, and P_{1z} of the spatial location of the point P_{1} in terms of the coordinates of the projection points P_{1a}, P_{1b}, and P_{1c} of the point P_{1} on cameras A, B, and C.
When the camera group is measuring the horizontal dimensions, cameras A and B or cameras C and D can be used for the paired operations; the operation principles and methods of cameras C and D are identical to those of cameras A and B. When the camera group is measuring the vertical dimensions, cameras A and C or cameras B and D can be used for the paired operations; the operation principles and methods of cameras B and D are identical to those of cameras A and C.
The measuring formulas are summarized as follows:
Taking the central point O of the rectangular plane of the focal points O_{a}, O_{b}, O_{c}, and O_{d} of the group of four cameras A, B, C, and D as the origin, the rectangular coordinate system of the space of the measured object is set,
wherein: X is the horizontal direction,
 Y is the vertical direction, and
 Z is the length or depth direction.
The coordinates of the spatial location of the same point of the measured object, point P_{1}, are P_{1} (P_{1x}, P_{1y}, P_{1z}). The relationship expressions of the spatial three-dimensional coordinates of the point P_{1} in terms of the location coordinates of the corresponding imaging points P_{1a}, P_{1b}, P_{1c}, and P_{1d} in the group of four cameras A, B, C, and D are as follows (where “m” is the O_{a}O_{b} length and “n” is the O_{a}O_{c} length of the rectangular plane, and “f” is the focal length of the four cameras):
the horizontal operation formula of camera A and camera B: P_{1x} = m(P_{1ax} + P_{1bx}) / [2(P_{1ax} − P_{1bx})];
the horizontal operation formula of camera C and camera D: P_{1x} = m(P_{1cx} + P_{1dx}) / [2(P_{1cx} − P_{1dx})];
the vertical operation formula of camera A and camera C: P_{1y} = n(P_{1ay} + P_{1cy}) / [2(P_{1ay} − P_{1cy})];
the vertical operation formula of camera B and camera D: P_{1y} = n(P_{1by} + P_{1dy}) / [2(P_{1by} − P_{1dy})];
the depth operation formula of camera A and camera B: P_{1z} = mf / (P_{1ax} − P_{1bx});
the depth operation formula of camera C and camera D: P_{1z} = mf / (P_{1cx} − P_{1dx});
the depth operation formula of camera A and camera C: P_{1z} = nf / (P_{1ay} − P_{1cy});
the depth operation formula of camera B and camera D: P_{1z} = nf / (P_{1by} − P_{1dy}).
In step 3, the general expression for calculating the coordinates of an object point, the spatial location coordinates of P_{N}, is as follows:
 let the focal points of the four cameras (the camera A, the camera B, the camera C and the camera D) be Oa, Ob, Oc, and Od, wherein the focal points Oa, Ob, Oc, and Od are on the same rectangular plane,
 let the OaOb length of the rectangular plane be “m” and the OaOc length be “n”, with the optical axes of the four cameras being parallel to each other and perpendicular to the rectangular plane,
 wherein the group of four cameras A, B, C, and D uses identical cameras having identical lenses for imaging, and
 let the focal length of the lenses be set to be “f”;
 setting a rectangular coordinate system of the space of the measured object, taking the central point O of the rectangular plane of O_{a}, O_{b}, O_{c}, and O_{d }as the origin, wherein X is a horizontal direction parallel to an edge O_{a}O_{b }of the rectangular shape, Y is a vertical direction parallel to an edge O_{a}O_{c }of the rectangular shape, and Z is a length or depth direction and points towards the measured object; and
 let any object point in the measured object be P_{N}, and coordinates of projection points of P_{N }on imaging planes of the group of four cameras A, B, C and D, be P_{Na }(P_{Nax}, P_{Nay}), P_{Nb }(P_{Nbx}, P_{Nby}), P_{Nc }(P_{Ncx}, P_{Ncy}), and P_{Nd }(P_{Ndx}, P_{Ndy}), and then, let coordinates of a spatial location of the point P_{N }be P_{N }(P_{Nx}, P_{Ny}, P_{Nz}).
Generally, in the object-point three-dimensional measuring method of a planar array of a multi-camera group, the expressions of the coordinates of any object point P_{N} in the space of the measured object are as follows:
P_{Nx} = m(P_{Nax} + P_{Nbx}) / [2(P_{Nax} − P_{Nbx})] = m(P_{Ncx} + P_{Ndx}) / [2(P_{Ncx} − P_{Ndx})];
P_{Ny} = n(P_{Nay} + P_{Ncy}) / [2(P_{Nay} − P_{Ncy})] = n(P_{Nby} + P_{Ndy}) / [2(P_{Nby} − P_{Ndy})];
P_{Nz} = mf / (P_{Nax} − P_{Nbx}) = mf / (P_{Ncx} − P_{Ndx}) = nf / (P_{Nay} − P_{Ncy}) = nf / (P_{Nby} − P_{Ndy}).
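The redundancy in the general expressions gives four independent depth estimates per object point, which can also be used to validate a candidate four-way match. A hypothetical helper illustrating this idea (the tolerance and return convention are assumptions for illustration, not from the patent):

```python
def redundancy_check(pna, pnb, pnc, pnd, m, n, f, tol=0.5):
    """Compare the four redundant depth estimates of a candidate match.

    pna..pnd are (x, y) image coordinates of one candidate object point
    in cameras A..D, in the same length unit as f.  Returns (ok, z_mean):
    ok is True when the four depth estimates agree within tol, and z_mean
    is their average.  A large spread indicates a false match.
    """
    z_ab = m * f / (pna[0] - pnb[0])   # depth from horizontal pair A-B
    z_cd = m * f / (pnc[0] - pnd[0])   # depth from horizontal pair C-D
    z_ac = n * f / (pna[1] - pnc[1])   # depth from vertical pair A-C
    z_bd = n * f / (pnb[1] - pnd[1])   # depth from vertical pair B-D
    zs = (z_ab, z_cd, z_ac, z_bd)
    return max(zs) - min(zs) <= tol, sum(zs) / 4
```

Such a check exploits the redundant feature of the four-camera group: a correctly matched point must yield consistent depths from all four pairings.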
The present invention offers at least the following advantages:
1. The three-dimensional vision measuring method of a planar array of a multi-camera group can calculate the three-dimensional stereoscopic coordinates of a same point of the measured object according to the changes of the positions of the imaging points of that point in the different cameras, wherein the horizontal dimensions are calculated by the two pairs of horizontally arranged cameras, the vertical dimensions are calculated by the two pairs of vertically arranged cameras, and the depth dimensions can be calculated by both the two pairs of horizontal cameras and the two pairs of vertical cameras.
2. The three-dimensional vision measuring method of a planar array of a multi-camera group can calculate the three-dimensional stereoscopic data of an object point of the measured object through algebraic calculations on the object point image coordinates alone. The calculation accuracy of the coordinates of a point of the measured object is related only to the camera accuracy and resolution and to the mutual positional accuracy and spacing of the cameras. Compared with the existing optical screenshot algorithm and other algorithms that require calibration in advance, no complex calibration formulas are needed, which simplifies the calculation of the spatial dimensions and, at the same time, prevents errors of a calibrator and of a calibration process from entering the measurement results.
3. The three-dimensional vision measuring method of a planar array of a multi-camera group is a redundant, specifically structured multi-camera arrangement method. As to the epipolar algorithm in binocular vision matching, the epipolar line is directly simplified into a straight line parallel to the X axis or the Y axis; that is, all corresponding projection points of the measured object on the respective imaging planes lie on a straight line parallel to the X axis or the Y axis. Full matching can be made for all measured points of the measured object by directly translating, superimposing, and comparing, point by point, the pixel points of the measured images of each pair in the X-axis and Y-axis directions. This method greatly simplifies the complex algorithm of binocular matching.
Detailed descriptions are made below with one example.
As shown in FIG. 8, FIG. 9, and FIG. 10, a camera 1, a camera 2, a camera 3, a camera 4, a camera 5, and a camera 6 form a three-dimensional measuring system of one 3×2 camera array; the camera 1 through the camera 6 are respectively called 1, 2, 3, 4, 5, and 6 for short.
The six cameras each have a ⅔″ CMOS sensor, a pixel size of 5.5 μm, a resolution of 1024×2048, and a lens focal length of 25 mm. Three cameras are arranged in the horizontal direction and two in the vertical direction, forming a three-dimensional measuring system of a 3×2 camera array, wherein the distance between cameras in the horizontal direction is m=60 mm, and the distance between cameras in the vertical direction is n=50 mm.
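As a rough, illustrative estimate not stated in the text: from the depth formula P_{z} = mf/(disparity), one pixel of disparity change at distance Z corresponds to a depth step of about Z²·(pixel size)/(f·m). For the example parameters:

```python
# Illustrative depth-quantisation estimate for the example 3x2 system
# (assumed relation dZ ~ Z^2 * pixel / (f * baseline), derived from z = f*m/d).
pixel = 0.0055   # 5.5 um pixel size, in mm
f = 25.0         # lens focal length, mm
m = 60.0         # horizontal camera spacing, mm
n = 50.0         # vertical camera spacing, mm

def depth_step(z_mm, baseline):
    """Approximate depth change per pixel of disparity at distance z_mm."""
    return z_mm ** 2 * pixel / (f * baseline)

for z in (500.0, 1000.0, 2000.0):
    print(f"Z = {z:6.0f} mm: ~{depth_step(z, m):.2f} mm (horizontal pairs), "
          f"~{depth_step(z, n):.2f} mm (vertical pairs)")
```

This matches the earlier remark that resolution improves with a shorter measuring distance and larger array dimensions: the depth step grows quadratically with Z and shrinks with a larger baseline.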
Let the measured object be a cuboid, and the two measured points at the top of the cuboid be P1 and P2. It is seen from FIG. 9 (a top view) that the measured object cannot be placed at a distance smaller than L1; otherwise it cannot be ensured that the measured object is imaged within the intersecting area of the view fields of at least two cameras in the horizontal direction. It is seen from FIG. 10 (a side view) that the measured object cannot be placed at a distance smaller than L2; otherwise it cannot be ensured that the measured object is imaged within the intersecting area of the view fields of the two cameras in the vertical direction. L1 and L2 may be called the nearest measuring distances. It is seen from FIG. 8 that L2>L1. The area with a measuring distance smaller than L2 is taken as a blind area of this measuring system. Outside the blind area, the areas indicated by oblique lines in the figure are the areas where measurement can be realized.
It is seen from the figures that the measured point P1 is located in the measuring area of the 1-2-4-5 four-camera group, and P2 is located in the measuring area of the 2-3-5-6 four-camera group. In this measuring system, the 1-2-4-5 four-camera group is first used for operations to calculate the respective measured points of the measured object, including the point P1, that can be imaged by the 1-2-4-5 four-camera group; then, the 2-3-5-6 four-camera group is used for operations to calculate the respective measured points, including the point P2, that can be imaged by the 2-3-5-6 four-camera group; finally, the results of the two calculations are comprehensively analyzed, preferably choosing the data that can be measured in both groups, completing the three-dimensional measurement of all points.
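The final analysis step, combining the results of the two four-camera groups, might be sketched as follows; the point labels, data layout, and averaging rule are assumptions for illustration only:

```python
def merge_groups(points_1245, points_2356):
    """Combine the per-group measurement results into one point set.

    points_1245 / points_2356 map a point label to its (x, y, z) as measured
    by the 1-2-4-5 and 2-3-5-6 four-camera groups.  For points seen by both
    groups, the two redundant measurements are averaged; other points are
    taken from whichever group imaged them.
    """
    merged = {}
    for label in points_1245.keys() | points_2356.keys():
        a, b = points_1245.get(label), points_2356.get(label)
        if a is not None and b is not None:
            # point measurable in both groups: average the redundant data
            merged[label] = tuple((u + v) / 2 for u, v in zip(a, b))
        else:
            merged[label] = a if a is not None else b
    return merged
```

A real implementation would first express both groups' results in one common coordinate system and could weight or reject outliers instead of averaging blindly.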
As to the parts of the three-dimensional stereoscopic surfaces of the measured object that cannot be imaged by the four-camera group, the problem can be solved by measuring several times or by adding other measuring systems. Furthermore, the present invention employs a processor, which is used for processing digital image data and three-dimensional point clouds. The processor may be coupled with the three-dimensional measuring system. The processor can also be used to implement the measuring method.
Further variations of the method for measuring a three-dimensional object using a four-camera group may fall within the spirit of the claims below. It will be appreciated that the invention is susceptible to modification, variation, and change without departing from the spirit thereof.