Ultrahigh Accuracy Camera Calibration

Opticist.org has developed an ultrahigh-accuracy camera calibration algorithm. Its performance is considerably better than the widely used Matlab camera calibration toolbox and the OpenCV camera calibration function. The algorithm has been incorporated into our program: http://opticist.org/node/73

The instructions can be found here: http://faculty.cua.edu/wangz/publications/advanced_camera_calibration.pdf

FAQ can be found here: http://www.opticist.org/node/338

Sample images can be downloaded on this page after logging in.

If you use our program, please cite our paper so that more people can learn and use this technique:

M. Vo, Z. Wang, L. Luu, and J. Ma, “Advanced geometric camera calibration for machine vision,” Optical Engineering, Vol. 50, No. 11, 110503, 2011.

This project is supported by US Army Research Office under grant W911NF-10-1-0502.

Distorted lens calibration failed

Dear Admin,

First of all, great work! Thanks for sharing.

As I was using your software to calibrate a highly distorted lens (Distortion (TV) = -28.0%), I always got strange distortion coefficients: the 3rd to 5th radial coefficients and the 3rd and 4th prism coefficients were all zero. I tried all the lens models, but they all gave me similar results. Could you tell me why?

Any help is appreciated!

Reply to: Distorted lens calibration failed

Could you send us your calibration images? Email address: 29nguyen@cua.edu

Awesome. This would be

Awesome. This would be perfect for security cams, IP cameras, or anything of the sort.

Questions regarding Camera Calibration in MOIRE

Dear Admin,


Sorry about the long post.


I am struggling with understanding the Camera Calibration function of MOIRE.

If I run an experiment with virtually generated sample images (say, a virtual camera with a 10 mm focal length and a 1/4" sensor, and 25-30 high-resolution (1024x768) close-ups of the circular pattern sheet), MOIRE fails to recognize almost all of the patterns in images where the circular pattern's normal vector is not nearly parallel to the virtual camera's optical axis. For some of these images MOIRE says "Ellipse cannot be that big!" (or similar); for others it states that it cannot find all three control circles, or that it detected only a subset of all (70) circles.

This problem does not seem to occur with the chess pattern: MOIRE seems able to identify the reference points even at very large rotations of the pattern's normal vector relative to the camera's optical axis.


Question: Does the normal vector of all the circular pattern images in the sample set have to be as parallel as possible to the Z axis (the camera's optical axis), or am I doing something wrong in the way I generate the sample images? Is there a minimum resolution, or a minimum/maximum focal length, sensor size, etc.? In other words, are there bounds on any of these parameters that MOIRE expects to be satisfied?

Nevertheless, even though many of the significantly rotated patterns are dropped while MOIRE processes them, with whatever is left (mostly non-rotated patterns) I do get what I believe are nice results from this virtual experiment, with the RMS reprojection error dropping to about 1E-02 with the advanced circular pattern option.


My real-world results using the circular pattern are not as good (as would be expected anyhow), with an RMS reprojection error of about 5E-01 for a 2.8 mm focal length, 1/4", 1280x720 IP camera. Even though the RMS reprojection error is not very high, MOIRE estimates the focal length at around 3.0 mm (I get the focal length in millimeters after converting it from px to mm with the formula FL_in_px = (FL_in_mm / sensor_width_in_mm) * image_width_in_px).
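For concreteness, this is the conversion I use (a small sketch; the 3.2 mm sensor width below is just a nominal value I am assuming for a 1/4" sensor):

```python
def fl_mm_to_px(fl_mm, sensor_width_mm, image_width_px):
    """Focal length in pixels, per the formula above:
    FL_in_px = (FL_in_mm / sensor_width_in_mm) * image_width_in_px."""
    return (fl_mm / sensor_width_mm) * image_width_px

def fl_px_to_mm(fl_px, sensor_width_mm, image_width_px):
    """Inverse conversion: pixels back to millimeters."""
    return fl_px * sensor_width_mm / image_width_px

# Assumed nominal 1/4" sensor width of ~3.2 mm, 1280 px wide image
f_px = fl_mm_to_px(2.8, 3.2, 1280)   # nominal focal length in pixels
f_mm = fl_px_to_mm(f_px, 3.2, 1280)  # round-trips back to 2.8 mm
```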


I must be doing something wrong to be getting such a large error with the real-world sample data.


Please help!


Thank you!!!!!!

Reply to CC using Moire Software



Assuming you are working on 2D DIC, only the first board position needs to be normal to the camera. The notification "Ellipse cannot be that big!" seems normal; we also ran into this problem in our experiments. First, focus the camera as well as possible; turn off focusing lights or any lights projected onto the board or object, use ambient light only, and turn the ring to maximum to make the contrast brightest. Then turn the focus ring and reduce the contrast.

If you get that notification, you can try the binarization methods to adjust the contrast until the rings on the board are clearly separated. As I remember, 1 is global binarization and 2 is local-threshold binarization. Make sure there are no disconnected rings or rings linked together.
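To illustrate the idea (this is only my rough sketch, not the program's actual code; I am assuming 1 means a global fixed threshold and 2 means a local mean threshold):

```python
def global_binarize(img, thresh):
    """Global binarization: img is a list of rows of 0-255 ints;
    every pixel above thresh becomes 255, the rest become 0."""
    return [[255 if p > thresh else 0 for p in row] for row in img]

def local_binarize(img, radius=1, offset=0):
    """Local-threshold binarization: each pixel is compared against
    the mean of its (2*radius+1)^2 neighborhood plus an offset."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            mean = sum(vals) / len(vals)
            out[y][x] = 255 if img[y][x] > mean + offset else 0
    return out
```

A local threshold helps when the board is unevenly lit, since each ring is judged only against its own neighborhood.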

Once you get through this, you should get the result. Let me know if you have more questions.


Hieu Nguyen

Scaling Calibration

Hi Admin,

I have a webcam and I wish to estimate its intrinsic parameters at the 640 x 480 resolution (aspect ratio 4:3), since that is what my OpenCV-based software has to use. I have tried Moire Software, but I have difficulty making it detect the ellipse patterns at such a low resolution. So I tried the maximum (sensor) resolution, 2304 x 1536 (aspect ratio 3:2). Now the problem is: how can I scale the computed intrinsic parameters (if that is possible at all)? I tried, for example, dividing the focal lengths and center point by 3.6 and 3.2 (2304/640 = 3.6, 1536/480 = 3.2), but the results seem to be wrong. The OpenCV calibration tool (with the chessboard) gives totally different values. Some numeric examples (at 2304 x 1536):

Fx: 1648.6126919684557
Fy: 1647.8159326709772
U0: 1190.9148600053952
V0: 740.16564662442261

I tried to scale them, dividing Fx, Fy, and U0 by 3.6 and V0 by 3.2:

Fx: 457.94796999123769444444444444444
Fy: 457.72664796416033333333333333333
U0: 330.809683334832
V0: 231.301764570132065625 

What OpenCV calibration tool returns (using images directly acquired at 640 x 480) is:

Fx: 618.68688179436549
Fy: 617.31182264382346
U0: 324.61937746267364 
V0: 237.61735112968250 

So the focal lengths are about 618 vs 457. According to the tech specification of my webcam, the sensor is 2304 x 1536 pixels, 4.8 x 3.6 mm (4:3), with a focal length of 3.67 mm.
The OpenCV result is better because, applying bundle adjustment in my 3D reconstruction software, the resulting error is far smaller using the calibration matrices computed by OpenCV. My questions are:

1) How do I scale K?
2) How do I deal with the different aspect ratios?
3) Is it better to calibrate a camera at the highest resolution and then scale, or to work at the desired resolution? And if the latter, how can I use Moire Software with 640 x 480 images? (Ellipse detection fails no matter which algorithm and parameters I use.)
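For concreteness, here is a sketch of the scaling I understand to be the textbook one for a pure resize (note that fy should use the vertical factor; and since the aspect ratio changes between my two modes, a pure resize cannot be the whole story, because the camera must additionally crop and/or shift the principal point; the numbers are just my values from above):

```python
def scale_intrinsics(fx, fy, u0, v0, sx, sy):
    """Rescale pinhole intrinsics for an image resized by factors
    sx (horizontal) and sy (vertical). Valid only for a pure resize;
    a crop additionally shifts the principal point."""
    return fx * sx, fy * sy, u0 * sx, v0 * sy

# 2304x1536 -> 640x480: sx = 1/3.6, sy = 1/3.2. The aspect ratio
# differs (3:2 vs 4:3), so this model alone cannot match the
# camera's real 640x480 mode.
fx, fy, u0, v0 = scale_intrinsics(
    1648.6126919684557, 1647.8159326709772,
    1190.9148600053952, 740.16564662442261,
    640 / 2304, 480 / 1536)
```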

Long post :) Thanks for your time.

Best Regards,

- Gianluca A. (a.k.a. AGPX) 

strange band

Dear Admin,

First of all, congratulations on this incredible software. I have managed to calibrate a camera and I would like to undistort the images. The problem arises when I use the menu Advanced Analysis/Misc/E (correct lens distortion): the resulting images have a strange band.

Original image:


Postprocessed image:


What's the problem?

Thanks in advance.

Reply to strange band

The lens distortion correction is already applied when you do the calibration.

Stereo Calibration Problem


I'm trying to do the stereo calibration but nothing happens after I accept the calibration data.

I'm probably doing something wrong, maybe you can help me. 

This is what I do:

1. I select the 16 calibration images taken by the two cameras (I put them together in one folder). Does the software automatically recognize which belong to the first camera and which to the second?


2. I'm using a regular 9x7 chessboard grid, so I chose "Checker: regular".


3. Control points per row: 9

   Control points per col: 7

   Stereo Calibration: 1

   Lens Model: 4 (6th order radial distortion with 3 parameters)

   Iter Threshold: 0.001

   For_debug: corners & RTs: 0 


   I'm not really sure about the meaning of "Control Points"


Thanks in advance. 

Reply to Stereo Calibration Problem

Hello Ale_Tuffa,


Sorry for getting back late. I will transfer your question to Admin, he will try to answer your question soon.

Thank you.

If you have further questions, can you contact me directly through this email: 29nguyen@cardinalmail.cua.edu

Hieu Nguyen.

Stereo Calibration

Thanks for providing your software!

I am trying to use Stereo Calibration but have difficulties with obtaining the results...

First, I give the images in the following order: the 15 left images, then the 15 right images.

I use lens model no. 3 (consistent with OpenCV).

When I click Yes when prompted whether I want to accept the calibration data, nothing happens (the calibration result is not visible).

If I look in the file 31415926535A.txt created in C:\Windows\system32, I get 15 lines of 30 parameters for the 30 images given as input. What do they correspond to? I would have expected two intrinsic matrices for the two cameras, and either an estimated 3D transform between the cameras or the 30 estimated extrinsic matrices.

Thanks for your answer!


Distance between ring centers


    I am using your software, and it performs very well. But I have a little question: how do I know the exact distance between ring centers? Should I just measure it by hand, or is it constant since the pattern is always printed from the same file (CC_RingBoar.doc)? I notice that in the example shown in the paper "Advanced Geometric Camera Calibration for Machine Vision" the distance is 25.4 mm, but in my case the result obtained with a vernier caliper is about 24.7 mm. Is there something I should pay attention to when printing the pattern?




RE: Distance between ring centers

The Word file should give you exactly, or very close to, 25.4 mm (i.e., 1 in); please make sure that you do not shrink the image size when printing it.

Anyway, since the printer will not give perfect results, you can measure the distance between multiple ring centers, and calculate the average. This will help to minimize the error. The program will automatically handle the non-uniformity issue.

Distance between ring centers

Thanks very much!!!

I've found the problem: the printer was not set up correctly. Now the distance is close to 25.4 mm.

My tests..

Hi! The software looks great, but I don't get results as accurate as I am looking for. I use the chessboard analysis; the advanced option gives me a smaller RMS error, but the undistorted picture is worse than the standard checker analysis result? Perhaps you can help me. I have some data here... You can find some additional test images at (with the same radial distortion)

Distorted stereo with noise


Distorted mono images with optic grid


The same but with stereo


And how do you use stereo images?

The checker pattern is 30 x 30 mm.

Image Correction

I might be doing something wrong. Where do the corrected images show up, and what should they look like? I only get an image with a # in front of the name and some graphics added at the bottom of the image, but the actual image is not corrected for radial distortion.

RE: Image Correction

(1) Our program only supports 256-grayscale (8-bit grayscale) images. Your images are 32-bit images, so you need to convert the color depth.

(2) Since the distortion is large at the corners, you can try to (a) generate larger patterns, (b) use more images, (c) rotate images more. Look at the sample images for details.


(3) After calibration is done, accept the results, then run "Advanced Analysis" --> "C (lens correction)" to get the undistorted images.
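For step (1), any image tool can do the conversion; as a reference point, the standard luma weighting for collapsing an RGB pixel to an 8-bit gray value is (a generic sketch, not our program's code):

```python
def rgb_to_gray8(r, g, b):
    """ITU-R BT.601 luma: collapse an RGB pixel to a single 8-bit
    grayscale value, clamped to the 0-255 range."""
    y = int(round(0.299 * r + 0.587 * g + 0.114 * b))
    return max(0, min(255, y))

# Pure white maps to 255, pure black to 0; apply per pixel to
# convert a 32-bit color image to 8-bit grayscale.
```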



I have just checked the updated version. It works great. Thank you for your effort.

Also, the output text file is better organized and easier to understand. However, I wish you could provide the detected ring centers as output.

I guess the manual is still for the old version (or I couldn't find a new one). Are the 6 parameters for the extrinsic calibration in axis-angle form? Can you tell me the order?

Thank you,




Unusual case detected more rings than actual

I'm using the standard ring target 10x7 doc included with the software.

Whatever parameters I use I get an error "more rings than actual" on every frame.

I've checked the images, they are all in range of the hi/lo parameters, I have masked out the background so there are no other marks that could look like rings.

Any ideas?

Adjust high/low parameters

Set "Num. of feature pattern edges" to 4.

Setting the hi/lo parameters to 0 will pop up a window for you to select the best values.

Wish list

1. Option to fix "skew factor" to zero.
2. Option to fix fx/fy ratio

3. Data for calibration verification.
- Detected centers of the rings (in input image plane)
- Calibration params (already present)
- Extrinsics for every image with calibration pattern
- Modeled ring center coordinates

A verification process would enable the user of the software to project all ring centers using the extrinsics and calculate residuals. It would also help verify the correctness of the camera model parameters.
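For example, the residual check itself is trivial once the data above is exported (a sketch, assuming matched lists of projected and detected 2D ring centers):

```python
import math

def rms_reprojection_error(projected, detected):
    """RMS of the Euclidean distances between projected and detected
    2D ring centers (both are lists of (u, v) tuples in pixels)."""
    sq = [(pu - du) ** 2 + (pv - dv) ** 2
          for (pu, pv), (du, dv) in zip(projected, detected)]
    return math.sqrt(sum(sq) / len(sq))
```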


Good comments

Some of your comments have been answered here: http://www.opticist.org/node/338

Pixel coordinates of the ring centers

Hi Admin,

Thanks for your update. Now we can see the world coordinates of the ring centers in the final results using bundle adjustment. I wonder if you could also provide the final extracted pixel coordinates of the ring centers, to compare your results with those from the conventional Matlab calibration toolbox; sometimes there are significant differences. BTW, which lens error model is recommended in terms of stability and convergence? Some lens models seem unstable, since they give different results even on the same data set. Thank you.

Rings pattern detection

Hi Admin. Is it possible to visualize the intermediate steps of the ring detection? It would be helpful for preparing images for stable detection, since detection currently depends on resolution, brightness, contrast, and other factors. Thanks.

Calibration and Reprojection error



 Hi Admin,

I posted my question several days ago; since I have not figured it out yet, could you or someone give me a hint? I will post my question again here:


"Thank you very much for your update; it works well now. However, with my calibration checkerboard, the reprojection error is around 0.8 pixels even when I choose lens model 0 and a high iteration parameter. I used 12 images in total and rotated the pattern between different views. Is there any problem with my calibration pattern? To get more accurate results, are there any tips for positioning the calibration pattern?

These are the calibration images I used in my calibration:


When taking pictures, do you require large angles between views, or do you prefer narrow angles? And what about rotation?"



Calibration and Reprojection error

0.8 pixels is too big. A reasonable value is 0.005-0.05 pixels for concentric (ring) patterns. Normally, we get ~0.01 pixels.

The board positions can be arbitrary (but not too close to each other); try to rotate the board around all of the x, y, and z axes.

question about uncertainties in extrinsic parameters

Dear Admin,

I have a question about the uncertainties in the calibration program. Since the reprojection errors are greatly reduced using this software, does that mean the uncertainties in estimating the extrinsic parameters (such as rotation and translation vectors) are also reduced?


question about uncertainties in extrinsic parameters

Yes, a small reprojection error usually means better estimation of the camera parameters. However, if the actual lens does not follow our lens model, exceptions can happen.

We have used our camera calibration technique to calibrate our fringe-projection profilometry system for 3D imaging. That is, we do not use a calibration gauge to calibrate our system; instead, we use the camera calibration board to obtain the 3D coordinates of points as the "gauge". With this approach, the full-field accuracy (i.e., in both corner and center regions) can reach 0.04 mm (out-of-plane) over a 400 mm wide field. The highest accuracy can reach 0.005 mm (out-of-plane). This helps verify the robustness of our camera calibration technique.

Memory crash

I'm having memory crashes after the first pattern detection (Insufficient Memory). I kept trying different parameters, but it's still the same. Could it be that the pixel count of the image is too large (35 Mpx)?

Memory crash

Hi Daniel_c, 

That could be the reason. The largest size I have ever tried is 12 Mpx.


Calibration not respond


I am using your program Moire_0.952 to do my camera calibration. I printed my checkerboard pattern with a known size (5.08 mm), 12 images in total, and followed the instructions on your website to do the calibration. If I choose the 'Checker regular' option, it works fine. However, if I choose the 'Checker advanced' option and choose 'Refi. para = 1' and 'Lens Model = 4' in the following step, the calibration hangs (not responding) forever. Can you please help me solve this problem?

I tried to print the chessboard from the Word file in your zip file, but the distance between two rings is not exactly 25.4 mm in my printed pattern. I did not use any scaling when printing. Can you help me with this too?




The problem has been fixed.

There was a little bug in the program, and it has been fixed. Please get the new version.

BTW, if you used the original version, you can set Refi. para to 0 and Lens model to 0.

If the distance is not exactly 25.4mm, you can measure it and input the actual value into the dialog box.

Reprojection Error


Thank you very much for your update; it works well now. However, with my calibration checkerboard, the reprojection error is around 0.8 pixels even when I choose lens model 0 and a high iteration parameter. I used 12 images in total and rotated the pattern between different views. Is there any problem with my calibration pattern? To get more accurate results, are there any tips for positioning the calibration pattern?

These are the calibration images I used in my calibration:




Calibration pattern problem

Hi Admin,

I have a problem with camera calibration. I used my own ring pattern, with 10 points per row and 16 points per column, instead of the default one in the package, because I have a large field of view. I modified the parameters before the calibration starts, but the program crashes when I run it with my images. Does that mean I can only use the default pattern for the calibration?

Thank you in advance for your answer.

Hao Yu

Calibration pattern

Hi Hao,

Our experience shows that a 10x16 pattern works fine. The crash probably occurred because some parameters were modified incorrectly. Can you upload several of your images online so that I can test them?



Calibration pattern

Hi Minh,

I uploaded 4 images at


BTW, for my previous two calibration runs using the default pattern, the two calibration results are not very consistent (the principal point varied quite a lot). Do you have any idea of the reason, and how I should avoid that and get consistent results? Thank you very much.


Try the new version

The program has been updated. Please let us know how it works.

Try the new version

Hi Minh,

I tried the updated version. It works nicely! Thank you for your great work. Another question: you can see that the current 16x10 ring pattern does not cover the whole field of view. I wonder if there is a limit on the number of ring patterns per row or column beyond which your algorithm can no longer be used. Thanks.


Use larger patterns or capture more images

We normally do not use a large number of concentric circle patterns because this slows down the processing. For your question, there are two simple solutions:

(1) Capture more images at different locations so that the entire field of view can be covered.

(2) Use larger circles. You can use a photocopier to magnify the calibration pattern.

Other version?

Hi Admin

Your software is great! I wonder if you have other versions of your software for my research. If it is not free, how much does it cost? I am really interested in your camera calibration tool. Thank you.

Calibration results


I have another question: when I perform calibration with the software, I get several lines with different parameters. Which line is the correct one?

Thank you in advance for your answer

France Vigouroux

Calibration results

As a user, I think the top one is your most recent result.

Error unit


First of all, thank you for this software. I was wondering what the unit of the error is. In the Bouguet Toolbox the unit is pixels; is it pixels here as well? I ask because in the Bouguet Toolbox there are two values, x and y.

Thank you in advance for your answer.


Reference Coordinates of the Calibration Pattern


In "Advanced geometric camera calibration for machine vision", it is said that the coordinates of the ring centers are also assumed to be unknown and are determined during the calibration process. I wonder if this assumption is made only before finding the frontal images, or if it is still valid at the final calibration step.

Are the localization errors in Fig. 2 of the same paper calculated with respect to the ring center coordinates computed on the fly (i.e., which coordinates serve as the reference in the error calculation)?

Thank you,


Reference coordinates of the calibration pattern

Hi Emrah,

(1) Yes, the assumption is only valid before finding the frontal images.

(2) Figure 2 was plotted using synthesized data; it shows the differences between the detected ring centers and the ground truth for two methods: conventional and frontal-based.


Detected Ring Centers

Dear Admin,

Is it possible to get the coordinates of the detected ring centers? It would be useful for debugging.

Thank you,

Ring centers

Bala, we do not provide this feature in the public version.


Projecting Control Points on Calibration Image

Dear Admin,

First of all, thank you for the software. 

I have problems visualizing the output data. I used lens model "3" so that I could reuse the code I had already implemented for OpenCV calibration.

I put the distortion coefficients in the right order (OpenCV convention). When I undistort the image myself with the intrinsic parameters, the undistorted image seems quite right.

But I wanted to see where the control points of the ring pattern are projected on the input image. To do this, I generated the control point coordinates (X) with respect to the pattern coordinate system, i.e. (0,0,0), (25,0,0), (50,0,0), ..., (225,150,0). (The ring pattern has 11x7 rings with a 25 mm control point distance.) After projecting X with K, R, T, and the distortion coefficients, the image coordinates I get are wrong.

I assumed that the R, T given in the calibration result is the pose of the pattern in the first calibration image with respect to the camera coordinate system (I selected the first image as the reference). If so, the operation I performed should have worked, I guess. Can you give me advice about the situation?
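For reference, the projection I am attempting is the plain pinhole model (distortion omitted in this sketch, and the K, R, T values below are made up just to exercise the code):

```python
def project_point(K, R, t, X):
    """Pinhole projection of one 3D pattern point X (in mm):
    x ~ K (R X + t). Distortion is omitted in this sketch."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = (K[0][0] * Xc[0] + K[0][1] * Xc[1] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v

# Control points on the Z=0 pattern plane, 25 mm pitch, 11x7 grid
X_grid = [(25.0 * i, 25.0 * j, 0.0) for j in range(7) for i in range(11)]

# Hypothetical intrinsics and pose: identity rotation, board ~1 m away
K = [[1600.0, 0.0, 640.0], [0.0, 1600.0, 360.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [-125.0, -75.0, 1000.0]

uv = [project_point(K, R, t, X) for X in X_grid]
```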

Here is a link to the output image ((0,0,0) corresponds to the bottom-left point):


Thank you very much,

Projecting points

Hi Emrah,

I strongly suspect that you mistakenly chose the last image as the reference instead of the first one.

Other reasons could be confusion caused by our choice of coordinate system. For this issue, please look into our manual for more information.

I have attached my own testing images and my very rudimentary testing code, which ignores the distortion parameters: http://www.mediafire.com/?3frebr8szfm8ie6

I hope this helps.


Projecting Points

Dear Minh,

Thank you for the test code. It seems there was a mistake in my code.

But when I undistorted the image in Moire Software and projected the points with the code you sent, I noticed a growing error in the x direction. The mean error of the calibration is 4.139235e-002, and the mean error for the current image is 3.722534e-002. The output image is here:


Do you have any advice about this problem?
