Development of the vDome software prototype continued last month as we move closer to our projected prototype completion date of September 20, 2012. We finalized the spatial calibration tools and began adding color correction and edge blending to the software.
We decided to push for completion of the Max prototype by the third week of September. This will allow the students and faculty at IAIA to use the software during the upcoming fall semester and allow us to showcase it at the ISEA2012 conference. After completing the prototype, we'll take a break from development to focus on user testing. Internally, we'll create demonstrations and interactive art and work through different use-case scenarios in order to evaluate the software design. We'll also share the software with our partners and other select alpha testers to work through any flaws and improve the design. In 2013, and continuing through the rest of the grant, we'll write the final version of the software in C++.
There are three main issues to consider when creating a seamless image from multiple projectors: the first is spatial composition and distortion, the second is color correction, and the third is projector blending. All three are equally important and need to be addressed in the projection software. vDome currently has a flexible way to map input matrices onto any type of 3D surface, and we have achieved near-perfect spatial calibration using it on the dome. Last month, we began developing color correction and edge blending.
Color correction must be done per projector because of slight differences in the projectors themselves, the projector bulbs, or the environmental conditions. We have implemented color correction through a user-interface spline with control points that lets one adjust the individual color channels of each projector image; the interface is similar to the RGB curves tool in Photoshop. We are working with Steve Smith on developing the color calibration tools, as well as on other issues such as edge blending.
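To give a sense of how a curve-based correction like this can work, here is a minimal sketch in C++ (the language planned for the final version; the current prototype is in Max). The function name and the piecewise-linear interpolation are illustrative assumptions, not vDome's actual implementation, which uses a spline:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical per-channel curve: control points map an input level to an
// output level, interpolated piecewise-linearly (a production tool would
// likely use a smooth spline, as in Photoshop's curves). Points are
// (in, out) pairs in [0, 255]; the result is a 256-entry lookup table
// applied to one color channel of one projector's image.
std::array<uint8_t, 256> buildCurveLUT(std::vector<std::pair<int, int>> pts) {
    std::sort(pts.begin(), pts.end());
    std::array<uint8_t, 256> lut{};
    for (int x = 0; x < 256; ++x) {
        // Clamp outside the first/last control point.
        if (x <= pts.front().first) { lut[x] = (uint8_t)pts.front().second; continue; }
        if (x >= pts.back().first)  { lut[x] = (uint8_t)pts.back().second;  continue; }
        // Find the segment containing x and interpolate linearly.
        size_t i = 1;
        while (pts[i].first < x) ++i;
        auto [x0, y0] = pts[i - 1];
        auto [x1, y1] = pts[i];
        lut[x] = (uint8_t)(y0 + (y1 - y0) * (x - x0) / double(x1 - x0) + 0.5);
    }
    return lut;
}
```

Lifting the middle control point, for example, brightens one projector's midtones to match its neighbors while leaving black and white points untouched.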
The type of edge blending one must use depends on the projection surface. The simplest case is multiple projectors on a flat wall. Here it may be possible to skip edge blending entirely, since the projected images can be placed right next to each other. Even when edge blending is necessary, a simple edge gradient on the alpha channel is enough to blend the images together in this scenario.
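The flat-wall gradient can be sketched in a few lines. This is an illustrative function (the name and the linear ramp are our assumptions, not vDome's code): each projector's image fades out over an overlap band so that where two images overlap, their alphas sum to roughly one and the combined brightness stays approximately constant. A real blend would also account for projector gamma.

```cpp
#include <algorithm>

// Alpha for a pixel column x in an image `width` pixels wide, with an
// overlap band of `blend` pixels on each edge. The left edge ramps
// 0 -> 1, the right edge ramps 1 -> 0, and the interior is fully opaque.
double edgeAlpha(int x, int width, int blend) {
    double a = 1.0;
    if (x < blend)          a = x / double(blend);
    if (x >= width - blend) a = std::min(a, (width - 1 - x) / double(blend));
    return a;
}
```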
The dome presents a more interesting case, as the projection surface is neither flat nor (in our case) uniform. The dome at IAIA is already fitted with physical masks around each projector, which gives a good, but less than perfect, projector blend. To compensate, we have begun experimenting with various techniques to make the blend more seamless. One technique we have employed is importing custom alpha masks as either vector or raster image files, which lets the user create as complicated a mask as required in their favorite drawing program. For further development, we will refer to Santa Fe Complex's work developed under our collaborative NSF grant.
Over the next two months we will be implementing the final missing pieces of the software and continuing to test use-case scenarios. The next post will discuss some of the proof-of-concept examples we've already realized, as well as some new fulldome software tools, such as a real-time domemaster encoder that converts more typical image resolutions into the domemaster format.
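For readers unfamiliar with the format: a domemaster frame is a square image holding the dome's hemisphere as a circular fisheye, where distance from the image center encodes elevation and angle encodes azimuth. The sketch below is our reading of that mapping (an equidistant fisheye; the function name and details are assumptions, not the encoder's actual code), converting a pixel to angles on the dome:

```cpp
#include <cmath>
#include <optional>
#include <utility>

// Map a domemaster pixel (px, py) in a size-by-size frame to
// (azimuth, elevation) in radians, or nothing if the pixel lies
// outside the fisheye circle (the unused corners of the square frame).
std::optional<std::pair<double, double>> domemasterToAngles(int px, int py, int size) {
    const double PI = std::acos(-1.0);
    double half = size / 2.0;
    double dx = (px + 0.5 - half) / half;   // normalized to [-1, 1]
    double dy = (py + 0.5 - half) / half;
    double r = std::sqrt(dx * dx + dy * dy);
    if (r > 1.0) return std::nullopt;       // corner pixel, not on the dome
    double azimuth = std::atan2(dy, dx);
    double elevation = (1.0 - r) * PI / 2.0; // r=0 is the zenith, r=1 the horizon
    return std::make_pair(azimuth, elevation);
}
```

An encoder working from a flat source would run this mapping in reverse for every output pixel, sampling the source image accordingly.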
We are also spec'ing out a more robust machine to replace our 12-core Mac Pro, which limits us to two 2GB Nvidia Quadro 4000 GPUs. We are working with a local computer company to custom-build a machine that can handle four more powerful GPUs. Our goal is to increase the number of projectors we can output to, as well as to reach our goal of playing back a 4K domemaster with real-time splicing.