# Willy Nolan


There is a rich body of research surrounding camera calibration. The process is usually broken up into geometric and radiometric / photometric calibration.

Projectors can essentially be thought of as the inverse of a camera: instead of projecting a scene onto an image plane, they project an image plane onto an environment.

Due to this similarity, projector calibration is also a topic of interest in the interactive computer graphics field. In a projection mapping context, "projector calibration" usually means "putting content where it is supposed to go in a real, 3D environment".

There are many strategies for performing camera calibration, but using structured light is a popular one.

Structured light calibration is built into both the MadMapper and disguise software packages.

In this research, Mike Walczyk and I explored the structured light process as described in its original academic paper.

To explain this process, imagine a room in which, for some reason, you want to project an image directly onto the spot where two walls meet.

In the picture above, that would be the corner at the top of the image. For illustration purposes, let's say the image you want to project is this simple checkerboard with the letter F added to make distortion visible:

If you project that image directly onto the corner, the image will show obvious distortion.

This problem can be fixed using structured light: projecting a sequence of black-and-white patterns and photographing each one as it is displayed.
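The text does not specify which black-and-white patterns are used; a common choice in structured light work is binary-reflected Gray-code stripes, where each projector column gets a unique codeword and adjacent columns differ in exactly one pattern. A minimal sketch under that assumption (the function name and NumPy usage are illustrative, not from our actual implementation):

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Generate vertical Gray-code stripe patterns for a projector of the
    given width. Returns an array of shape (n_bits, width) with 0/1 values;
    row b is the b-th stripe pattern (most significant bit first)."""
    cols = np.arange(width)
    # Binary-reflected Gray code of each column index
    gray = cols ^ (cols >> 1)
    # Bit b of each column's codeword becomes pattern b
    return np.array([(gray >> (n_bits - 1 - b)) & 1 for b in range(n_bits)])

# A 1024-pixel-wide projector needs ceil(log2(1024)) = 10 stripe patterns
pats = gray_code_patterns(1024, 10)
```

The same construction with rows instead of columns gives the horizontal stripe set, so a full run projects roughly 2·log2(resolution) patterns.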

A structured light process allows for the acquisition of a complete mapping (which the source paper refers to as $R$) from the camera pixels to the projector pixels.
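As a sketch of how such a mapping can be recovered (again assuming Gray-code stripes, which the text above does not commit to): each camera pixel observes one bit per captured pattern, and packing those bits and undoing the Gray encoding yields the projector column that illuminated it. The names below are hypothetical:

```python
import numpy as np

def decode_gray(bit_images):
    """Decode a stack of thresholded camera images, shape (n_bits, H, W),
    of Gray-code stripes (MSB first) into, for each camera pixel, the
    projector column that lit it."""
    n_bits = bit_images.shape[0]
    # Pack the observed bits into one Gray-code integer per camera pixel
    gray = np.zeros(bit_images.shape[1:], dtype=np.int64)
    for b in range(n_bits):
        gray = (gray << 1) | bit_images[b]
    # Convert Gray code back to binary with the doubling-shift XOR trick
    binary = gray.copy()
    shift = 1
    while shift < n_bits:
        binary ^= binary >> shift
        shift <<= 1
    return binary
```

Running the same decode on the horizontal stripe set gives projector rows, and the two together form the camera-to-projector mapping $R$.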

This mapping can then be used to pre-warp content so that it appears undistorted from the position of the viewer. At the end of the process, the projected checkerboard looks correct.
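One way the correction step might look, assuming the mapping is stored as per-camera-pixel projector coordinates (`prewarp` and its arguments are names of my own, not from the paper): for every camera pixel, copy the content that should appear there into the projector pixel that hits it.

```python
import numpy as np

def prewarp(content, proj_x, proj_y, proj_w, proj_h):
    """Build the projector image from content defined in camera space.
    proj_x, proj_y give, for each camera pixel, the projector pixel that
    illuminates it (the structured-light mapping)."""
    out = np.zeros((proj_h, proj_w) + content.shape[2:], dtype=content.dtype)
    cam_h, cam_w = content.shape[:2]
    cy, cx = np.mgrid[0:cam_h, 0:cam_w]
    # Keep only camera pixels whose mapped projector pixel is on-screen
    valid = (proj_x >= 0) & (proj_x < proj_w) & (proj_y >= 0) & (proj_y < proj_h)
    out[proj_y[valid], proj_x[valid]] = content[cy[valid], cx[valid]]
    return out
```

When this projector image is displayed, the environment's distortion is cancelled from the camera's viewpoint; a real implementation would also fill holes where no camera pixel mapped to a projector pixel.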

Our research included a Python implementation as well as a projector/camera simulator.

The algorithm we implemented was based on the academic paper: