Wednesday, April 28, 2010

CS: Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography

Ramesh Raskar just let me know of his group's outstanding new multiplexing imager, which is described in the following video:



We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo as either a high spatial resolution image, a refocusable image stack or a video for different parts of the scene in post-processing.

A lightfield camera or a video camera forces an a priori choice in space-angle-time resolution. We demonstrate a single prototype which provides flexible post-capture abilities not possible using either a single-shot lightfield camera or a multi-frame video camera. We show several novel results including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.
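
To make the capture model concrete, here is a toy forward model written the way I picture it from the abstract: a time-varying light field is modulated by a dynamic mask in the aperture and a static mask near the sensor, and everything integrates into a single coded photo. This is only my own sketch in Python/numpy, not the authors' code; the array sizes and mask patterns are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: spatial resolution, angular samples, temporal samples.
NX, NY, NU, NT = 64, 64, 4, 4

# A time-varying light field L(x, y, u, t) stands in for the scene.
L = rng.random((NX, NY, NU, NT))

# Dynamic aperture mask: one binary value per aperture position and time step.
aperture_mask = rng.integers(0, 2, size=(NU, NT)).astype(float)

# Static mask close to the sensor: a fixed binary spatial pattern.
sensor_mask = rng.integers(0, 2, size=(NX, NY)).astype(float)

# Single snapshot: every (angle, time) slice is modulated by both masks
# and integrated onto the same sensor image.
photo = np.zeros((NX, NY))
for u in range(NU):
    for t in range(NT):
        photo += aperture_mask[u, t] * sensor_mask * L[:, :, u, t]

print(photo.shape)  # (64, 64): one coded 2D photo encoding space, angle and time

The demultiplexing question is then how much of L one can recover from that single photo, which is exactly where the space-angle-time tradeoff shows up.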


The webpage for the project is here.

Of interest is the implementation:

I am looking forward to seeing the reconstruction stage, but for now they do not seem to be using compressive sensing reconstruction techniques that exploit the fact that the scene is sparse in some fashion. Let us not mistake ourselves here: this Reinterpretable Imager is already performing compressed sensing. The subtle issue is whether we can get more data out of this system by using some of the powerful compressive sensing solvers that come with the assumption that the scene is sparse. I note the authors' mention of the Random Lens Imager.

When I see the images produced by this camera, I cannot but be reminded of Roummel Marcia and Rebecca Willett's slides (the paper is Compressive Coded Aperture Superresolution Image Reconstruction) or this more recent video (see here for more information on the Compressive Coded Aperture work at Duke/UMerced), where it is shown that coded apertures can use compressive sensing techniques to provide better de-multiplexing than traditional inversion methods. In a previous entry, I noted that Ramesh and his team had made some headway in the world of compressive sensing (Coded Strobing Photography: Compressive Sensing of High-speed Periodic Events, "Things could be worse"). Once they do the same for this type of imager, one would hope to get more information out of it rather than simply trading away spatial resolution for angle and time, as seems to be the case right now.
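
For readers wondering what a sparsity-exploiting reconstruction could look like here, the sketch below runs iterative soft-thresholding (ISTA) on a toy demultiplexing problem. The random measurement matrix merely stands in for whatever mixing the two masks induce; none of the names or numbers come from the paper, and a real reconstruction would also need a sparsifying transform adapted to the scene.

import numpy as np

def ista(Phi, y, lam=0.01, n_iter=500):
    # Iterative soft-thresholding for min_x 0.5*||Phi x - y||^2 + lam*||x||_1.
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)              # gradient of the quadratic term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy demultiplexing: m coded measurements of an n-dimensional scene
# that is k-sparse in the chosen representation.
rng = np.random.default_rng(1)
n, m, k = 256, 96, 8
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in for the mask-induced mixing
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x_true

x_hat = ista(Phi, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative recovery error on the toy problem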

I am repeating myself, but Ramesh continues to advertise for people to join his group:

We are actively looking for Graduate Students, MEng, PostDocs and UROPs starting Spring and Fall 2010. Please see here if you are interested.


Forget your 9-to-5 job and go become a starving student at MIT: this is the Future!
