Wednesday, December 14, 2011

New MIT video camera shoots a trillion frames per second

Video: Media Lab postdoctoral associate Andreas Velten explains how the camera works

We've been hearing about trillions in the news so much lately that it's easy to become desensitized to just what a colossal number that is. Recently, a team of researchers at MIT's Media Lab (ML) built an imaging system capable of making an exposure every picosecond, or one trillionth of a second. Just how fast is that? Why, a thousand times shorter than a nanosecond, of course. Put another way, one picosecond is to one second as one second is to about 31,700 years. That's fast. So fast, in fact, that this system can effectively show light itself in slow motion, and it does so in a manner unlike any other camera.
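If you'd like to sanity-check that comparison yourself, a few lines of Python will do it. This is purely a back-of-the-envelope calculation, nothing to do with the MIT system itself:

    # Back-of-the-envelope check of the picosecond comparison above.
    picoseconds_per_second = 1e12                    # one trillion picoseconds in a second

    seconds_per_year = 60 * 60 * 24 * 365.25         # about 31.6 million seconds
    trillion_seconds_in_years = picoseconds_per_second / seconds_per_year

    print(f"One second contains {picoseconds_per_second:.0e} picoseconds")
    print(f"A trillion seconds is roughly {trillion_seconds_in_years:,.0f} years")
    # prints roughly 31,689 years - the "about 31,700 years" quoted in the text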
High-speed photography pioneer Harold "Doc" Edgerton established his groundbreaking laboratory at MIT, so it seems fitting that such work continues there today. But aside from the high-speed aspect, any similarity between Edgerton's famous bullet-through-the-apple shots and the Media Lab's project abruptly ends. Whereas Doc shot on film illuminated by powerful strobes, the new "picocam" employs a bright titanium-sapphire laser light source and captures images with an array of about 500 sensors sequentially triggered one trillionth of a second apart. In the words of ML postdoctoral associate Andreas Velten, one of the system's developers, "there's nothing in the universe that looks fast to this camera."
A key component in this complex imaging system is a re-purposed streak camera, an instrument originally designed for measuring temporal variation in the intensity of light pulses. On the ML rig, the streak camera's aperture consists of a narrow slit. Light particles, or photons, enter here and encounter a rapidly changing electrical field that deflects them in a direction perpendicular to the slit. This field variation causes later-arriving photons to deflect more than those that arrived earlier, so the 2D images created represent time in one dimension (the degree of deflection) and space in the other (defined by the direction of the slit).
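To make that time-to-space mapping concrete, here is a toy simulation in Python. Everything in it (the photon counts, the arrival-time distribution, the linear sweep rate, the bin sizes) is invented for illustration; it is a sketch of the general idea, not the actual streak-camera electronics:

    import numpy as np

    # Toy model of the streak-camera idea: photons enter through a slit, and a
    # rapidly ramping field deflects later arrivals farther, so arrival time
    # becomes the second image axis.  All values here are illustrative.
    rng = np.random.default_rng(0)
    n_photons = 10_000
    slit_position = rng.uniform(0.0, 1.0, n_photons)                    # position along the slit (space axis)
    arrival_time_ps = rng.gamma(shape=2.0, scale=50.0, size=n_photons)  # arrival times in picoseconds

    # Assume a linear sweep: deflection grows in proportion to arrival time.
    sweep_rate = 0.01                                # deflection units per picosecond (made-up constant)
    deflection = sweep_rate * arrival_time_ps

    # The recorded 2D "streak image": one axis is space (slit), the other is time (deflection).
    streak_image, _, _ = np.histogram2d(slit_position, deflection, bins=(64, 64))
    print(streak_image.shape)                        # (64, 64): space x time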
Unfortunately, individual one-dimensional slices of space do not an image make. To get a fully recognizable video of an event, say, a light pulse traveling the length of a one-liter plastic bottle, the event must be precisely repeatable thousands of times. After each light pulse, a mirror supplying the image to the streak camera is repositioned slightly until, slice by slice, the entire subject is captured - an additive process reminiscent of 3D printing that incorporates a bevy of delicate instruments and requires about an hour to complete. The voluminous raw data is then crunched into viewable two-dimensional image sequences by special algorithms developed by other members of the team. It's a lengthy process overall, and the irony isn't lost on ML Associate Professor Ramesh Raskar, who dubbed the system "the world's slowest fastest camera."
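Conceptually, that assembly step works something like the sketch below, which stacks one simulated streak image per scanline and regroups the data into frames. The array shapes and random data are placeholders, not the team's actual algorithms:

    import numpy as np

    # Simplified illustration of the slice-by-slice assembly: each repetition of
    # the event yields one streak image covering a single scanline (a 1D slice of
    # space) across all time bins.  Stacking the scanlines gives one full 2D
    # frame per time bin.  Shapes and data here are invented.
    n_scanlines, n_columns, n_time_bins = 100, 64, 480

    # Pretend we recorded one streak image per scanline: (columns x time bins) each.
    streak_images = [np.random.rand(n_columns, n_time_bins) for _ in range(n_scanlines)]

    # Rearrange into a video: for each time bin, gather that column of every streak image.
    video = np.stack(streak_images, axis=0)          # (scanlines, columns, time)
    video = np.moveaxis(video, 2, 0)                 # (time, scanlines, columns) = frames
    print(video.shape)                               # (480, 100, 64): 480 frames of 100 x 64 pixels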
Obviously, the need for exact repeatability is a significant limitation for a video camera, but the potential discoveries about the picosecond properties of light alone will probably more than make up for that. At US$250,000 for the laser and camera alone, it's not a system likely to be available for home use any time soon, but this valuable new tool will hopefully begin unlocking more of light's hidden secrets... quick as a flash!
