Geometric Calibration

For light fields acquired using our camera array, calibration was performed by tracking an LED moving in front of the cameras to recover corresponding points. From these correspondences, the camera intrinsics and extrinsics were recovered via bundle adjustment. From these values, homographies were computed and applied to rectify the camera images. Since the cameras lie on a plane, corresponding points between the rectified camera images are defined by a single relative-depth value. Alternatively, the plane + parallax approach described in [5] could be used.
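As a rough illustration of the rectification step, the sketch below applies a precomputed 3x3 rectifying homography to one camera image. It assumes the homography has already been derived from the bundle-adjusted intrinsics and extrinsics, and it uses OpenCV for the warp; the function and file names are ours, not part of the dataset tools.

    import cv2
    import numpy as np

    def rectify_image(image, H, out_size):
        """Warp a raw camera image with its rectifying homography.

        image    : HxWx3 array from one camera
        H        : 3x3 rectifying homography for that camera
        out_size : (width, height) of the rectified output
        """
        return cv2.warpPerspective(image, H, out_size)

    # Example usage (hypothetical file names):
    # img = cv2.imread("cam03_raw.png")
    # H = np.loadtxt("cam03_homography.txt")  # 3x3 matrix
    # rectified = rectify_image(img, H, (img.shape[1], img.shape[0]))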

The cameras are equally spaced to high accuracy, and our calibration confirms this spacing. Thus the camera positions (up to an unknown scale) are simply the camera numbers: (0,0), (1,0), ..., (7,0).

For light fields acquired using the linear gantry, the images are also rectified and the camera positions are equally spaced. The positions are again defined only up to a scale and for N camera positions are simply: (0,0), (1,0), ..., (N-1,0).
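Because the cameras are rectified and coplanar, a single relative-depth value determines where a point reappears in every other view. The sketch below shows this mapping under the assumption that the relative depth is expressed as pixels of disparity per unit of camera spacing; the names and the disparity convention are ours.

    def corresponding_pixel(u, v, cam_pos, ref_pos, d):
        """Map a pixel (u, v) in a reference rectified image to the
        matching pixel in another rectified camera.

        cam_pos, ref_pos : integer camera positions, e.g. (3, 0) and (0, 0)
        d                : relative depth, taken here as pixels of disparity
                           per unit of camera spacing (an assumption; the
                           dataset files define the actual scale)
        """
        du = d * (cam_pos[0] - ref_pos[0])
        dv = d * (cam_pos[1] - ref_pos[1])
        return u + du, v + dv

    # Example: a point with relative depth 1.5 seen at (100, 200) in camera
    # (0, 0) would appear near (110.5, 200) in camera (7, 0).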

Color Calibration

The datasets here were captured as raw data and then demosaiced using a high-quality demosaicing algorithm. The data are linear (i.e., no gamma has been applied). Our procedure for compensating for the variations in color response among the different cameras of our array is described in [7]. For light fields acquired using the gantry, no color calibration is needed, as all images were taken with the same camera.
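The actual color-calibration procedure is given in [7]. Purely as an illustration of the general form such a per-camera correction can take, the sketch below applies a 3x3 linear color transform to the linear RGB data; the matrix and its provenance are hypothetical and are not drawn from [7].

    import numpy as np

    def apply_color_correction(image_linear, M):
        """Apply a per-camera 3x3 linear color transform.

        image_linear : HxWx3 float array of linear (no gamma) RGB values
        M            : 3x3 correction matrix for this camera (hypothetical;
                       the actual procedure is described in [7])
        """
        h, w, _ = image_linear.shape
        corrected = image_linear.reshape(-1, 3) @ M.T
        return corrected.reshape(h, w, 3)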

Radiometric Falloff Calibration

Radiometric falloff is corrected by computing per-pixel multipliers such that an image of a uniformly illuminated surface becomes a "flat-field" image. We computed these multipliers by taking several images of a light panel and solving for the multipliers that make the average of those images flat. We found that radiometric falloff correction was essential for both the camera array and the linear gantry, as the falloff was up to 40% from the center to the edge of the image for some of our cameras.
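A minimal sketch of this flat-field computation, assuming the light-panel images are available as linear floating-point arrays; the function names are ours.

    import numpy as np

    def flat_field_multipliers(panel_images, eps=1e-6):
        """Compute per-pixel multipliers that flatten the average of several
        images of a uniformly lit panel.

        panel_images : list of HxW (or HxWx3) linear float arrays
        Returns an array of the same shape whose product with the average
        panel image is constant (equal to the mean brightness).
        """
        mean_img = np.mean(np.stack(panel_images), axis=0)
        # Target a flat image at the overall mean level.
        return mean_img.mean() / np.maximum(mean_img, eps)

    def correct_falloff(image, multipliers):
        """Apply the falloff correction to a linear image."""
        return image * multipliers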