Three-dimensional digital image correlation system for deformation measurement in experimental mechanics

Abstract: A three-dimensional (3D) digital image correlation system for deformation measurement in experimental mechanics has been developed. The key technologies applied in the system are discussed in detail, including stereo camera calibration, digital image correlation, 3D reconstruction and 3D displacement/strain computation. A stereo camera self-calibration algorithm based on photogrammetry is proposed. In the algorithm, the interior and exterior orientation parameters of the stereo cameras and the 3D coordinates of the calibration target points are estimated together using the bundle adjustment technique, so the 3D coordinates of the calibration target points need not be known in advance to obtain a reliable camera calibration result. An efficient, high-precision image correlation scheme is developed using the iterative least-squares nonlinear optimization algorithm, and a seed-point-based method is proposed to provide reliable initial values for the nonlinear optimization. After the 3D coordinates of the object points are calculated using the triangulation method, the 3D displacement/strain field can be obtained from them. After calibration, the system accuracy for static profile, displacement and strain measurement is evaluated through a series of experiments. The experimental results confirm that the proposed system is accurate and reliable for deformation measurement in experimental mechanics.
Key words: stereo vision; digital image correlation; self-calibration; photogrammetry; seed point



1 Introduction
Full-field deformation measurement (displacement/strain) under various loading conditions is a key task in experimental mechanics. The digital image correlation method, originally developed by Sutton et al. in the 1980s [1, 2], is widely used [3-7] for full-field deformation measurement because of its simple equipment, high precision and non-contact nature. Two-dimensional (2D) digital image correlation [8], which uses a single camera, can only measure in-plane displacement/strain fields on planar objects. To overcome this drawback, Luo et al. [9] proposed a 3D digital image correlation technique, which combines digital image correlation with stereo vision and can measure the 3D displacement field and surface strain field of a 3D object.
  It can be seen from the principle of 3D digital image correlation that the two key technologies are stereo camera calibration and digital image correlation. Much work has been done on camera calibration. Luo et al. [10] calibrated the cameras by moving a precision object to multiple positions, which is laborious and time-consuming. A popular and practical algorithm was developed by Tsai [11] using the radial alignment constraint, but this method requires initial camera parameters and considers only radial lens distortion. Zhang [12] proposed a flexible technique for camera calibration by viewing a plane from several arbitrary orientations, in which the calibration target is assumed to be an ideal plane and its manufacturing errors are ignored. For digital image correlation, the Newton-Raphson (N-R) method [13] is the most commonly used. Compared with the N-R method, the iterative least-squares (ILS) algorithm [14] is simpler and easier to implement, and is used in our algorithm. Both methods are nonlinear optimization algorithms, and finding reliable initial values for them efficiently is a key issue.
  Several commercial 3D digital image correlation systems are on the market, such as the ARAMIS system of GOM (Germany) and the VIC-3D system of Correlated Solutions (USA). But these systems are often too expensive for many research institutes, especially in China, so low-cost 3D digital image correlation systems are still needed. Recently, a 3D digital image correlation system (XJTUDIC) has been developed at Xi'an Jiaotong University, China. The XJTUDIC deformation measurement system is described in detail in this paper. Much attention has been paid to high-precision camera calibration and the digital image correlation method. A stereo camera self-calibration algorithm based on photogrammetry is proposed, in which a 10-parameter lens distortion model is adopted. Using the proposed method, the stereo cameras can be calibrated with high precision without any accurate calibration target. High-precision image correlation is realized using the ILS algorithm, and to solve the problem of calculating initial values for the nonlinear optimization, a seed-point-based method is developed that provides reliable initial values. After calibration, three experiments are carried out to validate the XJTUDIC system, and the experimental results show that it can satisfy the requirements of deformation measurement in experimental mechanics.
2 System Description
2.1 Hardware components
Fig. 1 shows the hardware components of the XJTUDIC system developed at Xi'an Jiaotong University, which consists of the following parts: (1) two CMOS cameras (1280×960 pixels, 8 bits) for image acquisition, (2) two high-frequency LED lights for illumination, (3) a control box for the control of the cameras and LED lights, (4) a tripod for support and (5) a computer on which the software is installed.
2.2 XJTUDIC software
The software of the XJTUDIC system is developed in the C++ programming language. Figure 2 shows the software interface, which has the following screen elements: (1) toolbar, (2) menu bar, (3) OpenGL 3D view for result display (3D points/displacement/strain), (4) project tree window for image list display, (5) control panel for camera and light control, (6) 2D view for the left camera image, (7) 2D view for the right camera image, (8) curve display window and (9) status bar.
2.3 System workflow
Figure 3 shows the workflow of XJTUDIC system, which mainly consists of the following steps:
(1) Spray the specimen with a stochastic speckle pattern if the specimen surface does not have enough features.
(2) Calibrate the stereo cameras before first use or whenever the relative position of the stereo cameras changes.
(3) Capture the images during the deformation, e.g. tensile test.
(4) Select the calculation area in the left image of the first stage, and the software will divide the calculation area into subsets (one subset represents a point). Then all the other images are processed using the digital image correlation method and corresponding points in all the stages are obtained.
(5) Reconstruct the 3D coordinates of all the points using the triangulation method.
(6) Calculate the 3D displacement/strain using the 3D coordinates of the points.
(7) Display the displacement/strain field in the OpenGL 3D view.
3 Stereo Camera Self-Calibration
Stereo vision is a technique for building a 3D description of a scene from two different viewpoints [15]. As shown in Fig. 4, $O_w\text{-}X_wY_wZ_w$ is the world coordinate system, $O_l\text{-}X_lY_lZ_l$ is the left camera coordinate system, $O_r\text{-}X_rY_rZ_r$ is the right camera coordinate system, and $o\text{-}xy$ is the image coordinate system. $p_l$ and $p_r$ are the two corresponding image points of object point $P$ in the two cameras. The 3D coordinates of object point $P$ can be obtained by the triangulation method [16] if (a) the stereo cameras are calibrated and (b) the image coordinates of $p_l$ and $p_r$ are known. A physical stereo vision setup is shown in Fig. 5.
The mathematical model of self-calibration based on photogrammetry is the well-known collinearity equations [17], which represent the transformation between the 2D image space and the 3D object space:
\[
\begin{aligned}
x - x_0 + \Delta x &= -f\,\frac{a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}\\
y - y_0 + \Delta y &= -f\,\frac{a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)}{a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)}
\end{aligned}
\qquad (1)
\]
where $(X, Y, Z)$ are the world coordinates of the object point, $(X_S, Y_S, Z_S)$ are the coordinates of the perspective center, $(x, y)$ are the measured image coordinates, $(x_0, y_0)$ are the coordinates of the camera principal point, $(\Delta x, \Delta y)$ accounts for the lens distortions, $f$ is the principal distance, and
\[
R = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}
\]
is the rotation matrix between the world coordinate system and the camera coordinate system.
The calibration terms in Eq. (1) include the interior orientation parameters (the coordinates of the principal point $(x_0, y_0)$ and the principal distance $f$) and the lens distortion parameters. To improve the calibration precision, a more complete lens distortion model [18] is adopted:
\[
\begin{aligned}
\Delta x &= \bar{x}(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1(r^2 + 2\bar{x}^2) + 2 p_2 \bar{x}\bar{y} + s_1 r^2\\
\Delta y &= \bar{y}(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_2(r^2 + 2\bar{y}^2) + 2 p_1 \bar{x}\bar{y} + s_2 r^2
\end{aligned}
\qquad (2)
\]
where $\bar{x} = x - x_0$, $\bar{y} = y - y_0$, $k_1, k_2, k_3$ are the radial distortion coefficients, $p_1, p_2$ are the tangential distortion coefficients, $s_1, s_2$ are the thin prism distortion coefficients, and $r = \sqrt{\bar{x}^2 + \bar{y}^2}$ is the radial distance from the principal point. So there are altogether 10 parameters $(x_0, y_0, f, k_1, k_2, k_3, p_1, p_2, s_1, s_2)$ used in the self-calibration algorithm.
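As an illustrative sketch of Eq. (2), the distortion corrections can be evaluated as below (a minimal Python sketch; the coefficient names follow the 10-parameter model above, and the sign conventions of the system's actual implementation are assumed):

```python
def distort(x, y, x0, y0, k1, k2, k3, p1, p2, s1, s2):
    """Lens distortion corrections (dx, dy) of the 10-parameter model:
    radial (k1, k2, k3), tangential (p1, p2) and thin prism (s1, s2)
    terms, with image coordinates taken relative to the principal
    point (x0, y0)."""
    xb, yb = x - x0, y - y0
    r2 = xb * xb + yb * yb                      # r^2
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3  # k1 r^2 + k2 r^4 + k3 r^6
    dx = xb * radial + p1 * (r2 + 2 * xb * xb) + 2 * p2 * xb * yb + s1 * r2
    dy = yb * radial + p2 * (r2 + 2 * yb * yb) + 2 * p1 * xb * yb + s2 * r2
    return dx, dy
```

At the principal point the correction vanishes, and with only $k_1$ nonzero the model reduces to pure radial distortion.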
  A planar target with 17 coded points and 208 un-coded points is employed, as shown in Fig. 6. In traditional methods, calibration targets must be manufactured with very high precision to achieve accurate calibration results, which is quite time-consuming and expensive. In this study, a more accurate and flexible calibration method based on photogrammetry is proposed, with which a reliable calibration result can be achieved without any accurate calibration target. That is, the accurate positions of the points on the calibration target need not be known in advance; all that is needed is an accurately known distance between two diagonal coded points, which serves as a scale.
  The structure of Eq. (1) allows the direct formulation of the primary observed values (image coordinates) as functions of all unknown parameters (3D coordinates of object points, interior and exterior orientation parameters, lens distortion parameters). All these unknowns can be determined iteratively using the image coordinates as observations. The observation equations are obtained by linearizing Eq. (1):
\[
V = A\,\delta_{\mathrm{int}} + B\,\delta_{\mathrm{ext}} + C\,\delta_{X} - L \qquad (3)
\]
where $V$ is the vector of re-projection residuals, $L$ is the observation vector, and $A$, $B$ and $C$ are the matrices of partial derivatives with respect to the interior orientation parameters (including the distortion parameters), the exterior orientation parameters and the 3D coordinates of the object points, whose corrections are $\delta_{\mathrm{int}}$, $\delta_{\mathrm{ext}}$ and $\delta_{X}$, respectively.
  For Eq. (3), if the interior orientation parameters and the 3D coordinates of at least three object points are known, the exterior orientation parameters of a single image can be obtained by space resection. Similarly, if the interior and exterior orientation parameters are known, the 3D coordinates of an object point can be computed via space intersection. If the interior and exterior orientation parameters and the 3D coordinates of the object points are refined simultaneously, the procedure is called bundle adjustment [19]. The camera self-calibration algorithm based on photogrammetry is a combination of space resection, space intersection and bundle adjustment.
  The calibration algorithm consists of the following six steps:
(1) Place the calibration target 360 mm from the measurement device, and capture eight pairs of images at different locations by moving the calibration target.
(2) Determine the image coordinates of both the coded and un-coded points in the eight pairs of images. The Canny operator is first used to detect the edges of the circular points. Then sub-pixel edge positions are obtained using the gradients of adjacent pixels. Finally, a least-squares fitting algorithm is adopted to locate the center coordinates of the circular points. In addition, the IDs of the coded points are recognized.
(3) Calculate the relative orientation of the first two images using the co-planarity equation, and reconstruct the 3D coordinates of the coded points.
(4) Compute the exterior orientation parameters of other images using space resection, and reconstruct the 3D coordinates of all the un-coded points using space intersection.
(5) Optimize all the interior and exterior orientation parameters of two cameras and the 3D coordinates of object points iteratively using bundle adjustment method.
(6) Compute the rotation matrix R and translation vector T between the two camera coordinate systems using the calculated exterior orientation parameters.
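Step (6) above can be sketched as follows, assuming each camera's exterior orientation is expressed as a world-to-camera transform $x_c = R_i x_w + t_i$ (the function name is illustrative):

```python
import numpy as np

def relative_pose(R_l, t_l, R_r, t_r):
    """Rotation R and translation T mapping left-camera coordinates to
    right-camera coordinates, x_r = R x_l + T, given the exterior
    orientations x_l = R_l x_w + t_l and x_r = R_r x_w + t_r."""
    R = R_r @ R_l.T
    T = t_r - R @ t_l
    return R, T
```

Composing the two exterior orientations this way eliminates the world frame, leaving only the fixed relative pose of the stereo rig.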
4 Digital Image Correlation Method
4.1 Mathematical model
The digital image correlation method uses the random speckle pattern to precisely match corresponding points in two images. As shown in Fig. 7, the left image is the reference image and the right image is the deformed image. In the reference image, a square reference subset of $(2M+1)\times(2M+1)$ pixels centered at point $P(x_0, y_0)$ is picked. The matching procedure is to find the corresponding subset centered at point $P'(x'_0, y'_0)$ in the deformed image that has the maximum similarity with the reference subset. The two center points $P$ and $P'$ are then a pair of corresponding points in the two images. Since the relative distribution of gray levels in the reference subset is preserved in the deformed image, any point $Q(x_i, y_j)$ in the reference subset can be mapped to a point $Q'(x'_i, y'_j)$ in the deformed image according to a mapping function. A first-order mapping function is used in our algorithm, which allows translation, rotation, shear, normal strains and their combinations of the subset:
\[
\begin{aligned}
x'_i &= x_i + u + u_x \Delta x + u_y \Delta y\\
y'_j &= y_j + v + v_x \Delta x + v_y \Delta y
\end{aligned}
\qquad (4)
\]
where $\Delta x = x_i - x_0$ and $\Delta y = y_j - y_0$ are the distances from the subset center to point $Q$, $u$ and $v$ are the displacement components of the reference subset center in the $x$ and $y$ directions, and $u_x, u_y, v_x, v_y$ are the first-order displacement gradients of the reference subset.
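The mapping of Eq. (4) is straightforward to implement; a minimal sketch (function and argument names are illustrative):

```python
def first_order_map(x0, y0, xi, yj, u, v, ux, uy, vx, vy):
    """Map point (xi, yj) of the reference subset centered at (x0, y0)
    into the deformed image with the first-order shape function of
    Eq. (4)."""
    dx, dy = xi - x0, yj - y0
    xp = xi + u + ux * dx + uy * dy
    yp = yj + v + vx * dx + vy * dy
    return xp, yp
```

With all four gradients set to zero, the subset undergoes a pure translation $(u, v)$.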
  The gray values of points $Q$ and $Q'$ are $f(x_i, y_j)$ and $g(x'_i, y'_j)$, respectively. They are theoretically identical but in practice differ because of illumination changes and random noise, so the relationship between them can be expressed as:
\[
g(x'_i, y'_j) = a\,f(x_i, y_j) + b + n \qquad (5)
\]
where $n$ stands for the noise component, and $a$ and $b$ compensate for the gray value difference caused by illumination variation. Note that an interpolation scheme is needed in the implementation because the coordinates of points in the deformed image are not integer pixels; a bicubic spline interpolation scheme [20] is adopted in our algorithm.
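A bicubic-spline gray-value lookup of this kind can be sketched with SciPy (an illustration only; the system's own C++ interpolation kernel is not shown here):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# A toy "image": a smooth intensity surface sampled on an integer pixel grid.
rows = np.arange(32.0)
cols = np.arange(32.0)
img = np.sin(rows[:, None] / 5.0) * np.cos(cols[None, :] / 7.0)

# kx = ky = 3 yields a bicubic spline; the default smoothing s=0 makes it
# pass exactly through the pixel values.
spline = RectBivariateSpline(rows, cols, img, kx=3, ky=3)

# Gray value at a subpixel location in the deformed image.
g = float(spline.ev(10.37, 21.84))
```

Evaluating the spline at integer pixel positions reproduces the original gray values, while non-integer positions give the interpolated intensities needed by Eq. (5).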
  Assume that there are $n$ pixels in the reference subset and that the image pixels are corrupted by independent and identically distributed noise. The corresponding subset in the deformed image with the maximum similarity to the reference subset can be obtained by minimizing the following function:
\[
C(\mathbf{p}) = \sum_{i,j} \left[ a\,f(x_i, y_j) + b - g(x'_i, y'_j) \right]^2 \qquad (6)
\]
where $\mathbf{p} = (u, u_x, u_y, v, v_x, v_y, a, b)^T$ represents the vector of correlation parameters. This is a nonlinear minimization problem, which can be solved using the ILS algorithm, but the initial values of the correlation parameters must be provided in advance. In the traditional method, the initial values of $u$ and $v$ are obtained by a coarse pixel-by-pixel search, and the initial values of the remaining correlation parameters are set as follows:
\[
u_x = u_y = v_x = v_y = 0, \quad a = 1, \quad b = 0 \qquad (7)
\]
4.2 Calculating initial values of correlation parameters based on a seed point
As mentioned above, initial values of the correlation parameters are needed in the ILS algorithm. Inaccurate initial values may slow down the calculation or even lead to wrong convergence. As shown in Fig. 8(a), a calculation area is usually specified and divided into evenly spaced subsets (green rectangles) in the reference image. In the traditional method, a coarse pixel-by-pixel search is used to obtain the initial values of the correlation parameters for each subset to be matched, which is quite time-consuming and unstable. Moreover, only the initial values of u and v are considered, so the method may fail, especially in large-deformation situations.
  A seed-point-based method is proposed to calculate the initial values of the correlation parameters. As shown in Fig. 8(a), after the calculation area is specified and divided into subsets, one subset is chosen as the seed point (red rectangle). The traditional method is used to match the seed point first. Considering the continuity of deformation, the seed point is then used to calculate the initial values of the correlation parameters for its four neighbors (left, right, up, down). An estimate of the location of each neighbor in the deformed image can be obtained from Eq. (4), which directly provides the initial values of u and v; the initial values of the remaining correlation parameters are set equal to those of the seed point. The ILS algorithm is then used to refine the correlation parameters of the neighbors. Once the four neighbors are matched successfully, they act as seed points for their own neighbors. The process repeats until all the points are matched, as shown in Fig. 8(b). This method not only greatly reduces the computing time but also improves the precision of the initial values.
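The propagation order described above amounts to a breadth-first traversal of the subset grid starting from the seed; a minimal sketch (names are illustrative):

```python
from collections import deque

def match_order(rows, cols, seed):
    """Breadth-first processing order over a rows x cols subset grid,
    starting from the seed subset. Every subset after the first enters
    the queue only once one of its four neighbors has been visited, so
    that matched neighbor can supply the initial correlation parameters."""
    order, seen, queue = [], {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        order.append((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return order
```

The traversal guarantees that every subset (except the seed) has an already matched four-neighbor when its turn comes.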
5 3D Reconstruction and 3D Displacement/Strain Calculation
5.1 3D reconstruction
3D reconstruction involves all the stages of the deformation process, and each stage has two images captured by the stereo cameras. Figure 9 shows the whole matching process. First, the calculation area is specified and divided into subsets in the left image of the reference stage (stage 1). Then all the images are matched according to the following rules: the left image of each stage is matched with the left image of the reference stage, and the right image of each stage is matched with the left image of the same stage. After all the images are matched, the 3D coordinates of all the points in each stage can be obtained through the triangulation method using the calibration parameters of the stereo cameras and the corresponding image points in the left and right images.
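One standard way to implement the triangulation step is the linear (DLT) method: each matched image point contributes two rows to a homogeneous system that is solved by SVD. A sketch, under the assumption that each calibrated camera is summarized by a 3×4 projection matrix:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of a 3D point from its projections
    pt1 = (x1, y1) and pt2 = (x2, y2) under the 3x4 camera projection
    matrices P1 and P2 of the calibrated stereo pair."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.array([x1 * P1[2] - P1[0],
                  y1 * P1[2] - P1[1],
                  x2 * P2[2] - P2[0],
                  y2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]
```

With noisy image points the SVD gives the least-squares solution of the homogeneous system, which is why this linear method is a common starting point for triangulation.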
5.2 3D displacement/strain calculation
Once the 3D reconstruction of all the stages is finished, the 3D displacement of any point in a stage can be obtained directly by comparing its 3D coordinates in the current stage and the reference stage.
  The calculation of strain is relatively complex [21]. As shown in Fig. 10, the 3D coordinates of the eight neighboring points are used to calculate the strain of point P. The detailed calculation steps are as follows:
(1) In the reference stage, fit a tangential plane using the neighboring points of point P. Project the neighboring points onto the tangential plane to obtain a set of 2D points (Pr) in an arbitrary 2D coordinate system.
(2) Repeat exactly the same process in the current stage to obtain a set of 2D points (Pc) in an arbitrary 2D coordinate system.
(3) Calculate the deformation gradient tensor F (a 2×2 matrix) from the two point sets Pr and Pc, which are related as follows:
\[
P_c = F\,P_r + u \qquad (8)
\]
where u stands for the rigid-body translation between Pr and Pc. F can be solved for with a standard least-squares algorithm. The deformation gradient tensor can be split into a rotation matrix R and a stretch tensor U by the polar decomposition F = RU, and the strain of point P in the current stage can then be obtained directly from U.
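Steps (1)-(3) can be sketched as follows. The least-squares fit and the SVD-based polar decomposition are standard; the Green-Lagrange tensor E = (FᵀF − I)/2 returned at the end is one common strain measure derived from F, used here for illustration (the exact strain definition adopted by the system is assumed):

```python
import numpy as np

def strain_from_point_sets(Pr, Pc):
    """Estimate the 2x2 deformation gradient F relating the projected
    reference points Pr to the current points Pc (both n x 2 arrays),
    split F = R U by polar decomposition, and form the Green-Lagrange
    strain tensor E = (F^T F - I) / 2."""
    Qr = Pr - Pr.mean(axis=0)          # centering removes the rigid-body
    Qc = Pc - Pc.mean(axis=0)          # translation u of Eq. (8)
    Ft, _, _, _ = np.linalg.lstsq(Qr, Qc, rcond=None)  # Qc ~ Qr F^T
    F = Ft.T
    W, S, Vt = np.linalg.svd(F)        # F = W diag(S) Vt
    R = W @ Vt                         # rotation part
    U = Vt.T @ np.diag(S) @ Vt         # symmetric stretch tensor
    E = 0.5 * (F.T @ F - np.eye(2))
    return F, R, U, E
```

Because E depends on F only through FᵀF = UᵀU, it is insensitive to the rigid-body rotation R, as a strain measure should be.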
6 Experimental Results and Analysis
6.1 Camera calibration experiment
The stereo cameras are calibrated using the method described in Section 3. Figure 11 shows a pair of images used in the calibration procedure. The calibrated interior orientation and lens distortion parameters of the stereo cameras are listed in Table 1.
  The rotation matrix R and translation vector T between the two camera coordinate systems are also obtained as follows:
  Re-projecting the 3D points of the calibration target using the calibrated parameters gives an average re-projection residual of about 0.03 pixel, much smaller than that of Zhang's method (0.33 pixel). This result indicates that the proposed calibration method has high precision.
6.2 Static profile measurement of standard cylinder
The radius of the standard cylinder (Fig. 12(a)) is 50.12 mm (CMM result). To measure the static profile of an object, only one pair of stereo images is needed. After a speckle pattern is sprayed onto the cylinder, two measurements are carried out with the XJTUDIC system. In the image matching process, the subset size is set to 15×15, 25×25 and 35×35 pixels, respectively. Six sets of points are obtained in total, which are imported into the IMAGEWARE software and fitted to cylinders (Fig. 12(b)). The measurement results are listed in Table 2. Compared with the standard value (50.12 mm), the maximum error of a single measurement is 0.035 mm and the average error is 0.024 mm. The relative error of static profile measurement is about 0.05% (ratio of the average error to the standard value).
6.3 Translation experiment
A plate with a speckle pattern is moved 11 mm (standard value) using a high-precision translation platform with an accuracy of 0.001 mm, and the XJTUDIC system is used to measure its 3D displacement. The subset size is set to 15×15 pixels, and the 3D coordinates and displacement vectors of 225 points are obtained, which are displayed in the OpenGL view as shown in Fig. 13. The curve of displacement magnitude is shown in Fig. 14. Compared with the standard value, the maximum error of a single point is 0.011 mm and the average error is 0.005 mm. The relative error of displacement measurement is 0.05% (ratio of the average error to the standard value).
6.4 Tensile test
The tensile test is carried out on an RGM4100 universal testing machine made by the Riger Corporation, which can provide up to 100 kN of tensile force. The dimensions of the specimen are given in Fig. 15(a); the dashed rectangle in the middle is the calculation area. The specimen material is 45# steel, as shown in Fig. 15(b).
  Figure 16 shows the tensile test scene. An extensometer is used to verify the strain measurement accuracy of the XJTUDIC system during the experiment. The extensometer, whose gage length is 50 mm, can measure strains up to 50% with an accuracy of 0.5%. The speed of the tensile testing machine is set to 5 mm/min, and the camera frame rate is set to 1 frame/s. To keep correspondence with the extensometer data, all the data from the testing machine (including force, displacement, strain, etc.) are collected through a serial port each time a stage is captured by the XJTUDIC system. There are 340 stages captured throughout the experiment. While the extensometer can only provide the average strain over the specified gage length, the XJTUDIC system achieves full-field strain measurement, providing the strain of every single point. To compare with the extensometer data, the first 120 stages with uniform strain are used, and the average strain over the calculation area is computed by XJTUDIC. The strains of each stage measured by the extensometer and the XJTUDIC system are used as the x and y coordinates of a point, respectively, giving 120 points. As shown in Fig. 17, the line fitted to these points is y = 0.9961x + 0.005, from which it can be seen that only about 0.4% deviation exists between the two methods. Because the measurement accuracy of the employed extensometer is 0.5%, the measurement accuracy of the XJTUDIC system is not lower than 0.5%.
  As mentioned before, the extensometer can only provide the average strain over the specified gage length, so it cannot describe the local strain of a single point in the case of non-uniform deformation, while the XJTUDIC system can. The measurement results of four different stages obtained by XJTUDIC are shown in Fig. 18. As can be seen in Fig. 18(d), when necking happens, the major strain distribution over the field is clearly visible; the major strain at the necking position is much larger than at other positions, up to 127.8%. Compared with traditional gage methods, the XJTUDIC system is more comprehensive, accurate and intuitive for displacement/strain measurement.
7 Conclusion
This paper presents a portable 3D digital image correlation system for deformation measurement in experimental mechanics. The related key technologies are discussed in detail, including stereo camera self-calibration based on photogrammetry, digital image correlation, 3D reconstruction and 3D displacement/strain calculation. Experimental results show that the average re-projection residual of the calibration is 0.03 pixel, the accuracy of static profile and displacement measurement is 0.05%, and the accuracy of strain measurement is not lower than 0.5%. In conclusion, the proposed XJTUDIC system provides acceptable accuracy for deformation measurement in experimental mechanics, and offers advantages over conventional gage methods such as non-contact operation, full-field displacement/strain measurement and intuitive results.
Acknowledgments
The authors acknowledge the support of the National Natural Science Foundation of China (Grant No. 50975219).

References
1. T. Chu, W. Ranson and M. Sutton, "Applications of digital-image-correlation techniques to experimental mechanics," Experimental Mechanics 25(3), 232-244 (1985)
2. M. A. Sutton, M. Cheng, W. H. Peters, Y. J. Chao and S. R. McNeill, "Application of an optimized digital correlation method to planar deformation analysis," Image Vision Comput. 4(3), 143-150 (1986)
3. H. Schreier, D. Garcia and M. Sutton, "Advances in light microscope stereo vision," Experimental Mechanics 44(3), 278-288 (2004)
4. J. D. Helm, S. R. McNeill and M. A. Sutton, "Improved three-dimensional image correlation for surface displacement measurement," Optical Engineering 35(7), 1911-1920 (1996)
5. J. D. Helm, M. A. Sutton and S. R. McNeill, "Deformations in wide, center-notched, thin panels, part I: three-dimensional shape and deformation measurements by computer vision," Optical Engineering 42(5), 1293-1305 (2003)
6. M. A. Sutton, J. Yan, X. Deng, C.-S. Cheng and P. Zavattieri, "Three-dimensional digital image correlation to quantify deformation and crack-opening displacement in ductile aluminum under mixed-mode I/III loading," Optical Engineering 46(5), 051003-051017 (2007)
7. M. A. Sutton, S. R. McNeill, J. D. Helm and Y. J. Chao, "Advances in two-dimensional and three-dimensional computer vision," Photomechanics, Topics in Applied Physics 77, 323-372 (2000)
8. B. Pan, K. M. Qian, H. M. Xie and A. Asundi, "Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review," Measurement Science & Technology 20(6), 062001 (2009)
9. P. Luo, Y. Chao, M. Sutton and W. Peters, "Accurate measurement of three-dimensional deformations in deformable and rigid bodies using computer vision," Experimental Mechanics 33(2), 123-132 (1993)
10. P.-F. Luo, Y. J. Chao and M. A. Sutton, "Application of stereo vision to three-dimensional deformation analyses in fracture experiments," Optical Engineering 33(3), 981-990 (1994)
11. R. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation 3(4), 323-344 (1987)
12. Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence 22(11), 1330-1334 (2000)
13. H. Bruck, S. McNeill, M. Sutton and W. Peters, "Digital image correlation using Newton-Raphson method of partial differential correction," Experimental Mechanics 29(3), 261-267 (1989)
14. B. Pan, A. Asundi, H. Xie and J. Gao, "Digital image correlation using iterative least squares and pointwise least squares for displacement field and strain field measurements," Optics and Lasers in Engineering 47(7-8), 865-874 (2009)
15. A. O. Ozturk, U. Halici, I. Ulusoy and E. Akagunduz, "3D face reconstruction using stereo images and structured light," in Proc. IEEE 16th Signal Processing, Communication and Applications Conference (SIU 2008), pp. 1-4 (2008).
16. T. Luhmann, S. Robson, S. Kyle and I. Harley, Close Range Photogrammetry: Principles, Techniques and Applications, Whittles Publishing, UK (2006).
17. A. Gruen and T. S. Huang (eds.), Calibration and Orientation of Cameras in Computer Vision, Springer-Verlag, New York (2001).
18. J.-W. Liu, J. Liang, X.-H. Liang and Z.-Z. Tang, "Videogrammetric system for dynamic deformation measurement during metal sheet welding processes," Optical Engineering 49(3), 033601-033608 (2010)
19. B. Triggs, P. F. McLauchlan, R. I. Hartley and A. W. Fitzgibbon, "Bundle adjustment -- a modern synthesis," in Vision Algorithms: Theory and Practice, Lecture Notes in Computer Science 1883, Springer, pp. 298-372 (2000).
20. H. W. Schreier, J. R. Braasch and M. A. Sutton, "Systematic errors in digital image correlation caused by intensity interpolation," Optical Engineering 39(11), 2915-2921 (2000)
21. B. Pan, H. Xie, Z. Guo and T. Hua, "Full-field strain measurement using a two-dimensional Savitzky-Golay digital differentiator in digital image correlation," Optical Engineering 46(3), 033601-033610 (2007)
