If TMZ.com has taught us anything, it's that there's a lot of cell phone footage out there. Researchers at Microsoft's lab in Egypt are doing something cool with all that content: combining feeds from multiple phones into multi-angle live broadcasts.
Dubbed Mobicast, the system requires two pieces of software: one for the phone and one for the server receiving the footage. When two or more phones are in the same place capturing the same scene, the software synchronizes their clocks so the frames line up in time. Image-recognition software on the server then figures out how the footage fits together spatially, using features of the landscape or scene to find matching regions across the images. It then blends the images into a single wider, more detailed view of the scene, sort of like Photosynth for video (though without the 3-D, for now).
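Microsoft hasn't published the details of Mobicast's pipeline, but here's a minimal sketch of how the stitching step might look for two time-aligned frames, using off-the-shelf OpenCV feature matching. The choice of ORB features, the match count, and the naive overwrite blend are all assumptions for illustration, not the actual system.

```python
# A sketch of stitching two time-synchronized frames from different phones.
# ORB features, RANSAC homography, and the blend strategy are assumptions;
# Mobicast's real pipeline is not public.
import cv2
import numpy as np

def stitch_frames(frame_a, frame_b):
    """Warp frame_b into frame_a's coordinate frame and combine them."""
    # Detect keypoints and descriptors in each frame.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate a homography mapping frame_b onto frame_a; RANSAC discards
    # feature matches that don't agree with the dominant transform.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Render both frames onto a canvas wide enough for the combined view.
    h, w = frame_a.shape[:2]
    canvas = cv2.warpPerspective(frame_b, H, (w * 2, h))
    canvas[0:h, 0:w] = frame_a  # naive overwrite; a real system would blend seams
    return canvas
```

Run per frame pair, this turns two overlapping phone shots into one wider composite, which is the core of the "PhotoSynth for video" effect the researchers describe.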
The coolest part, of course, is that Mobicast does all this in real time, so an event can be captured by several cameras at once and broadcast live to the Web. Contributors also get feedback on their phones: stills of the stitched video with their own footage highlighted, showing them how to reposition for a better shot.
Before going public, there are some issues to sort out, like how to tell whether several phones in the same vicinity are filming the same scene (GPS?). Until then, all we can do is keep on filming and dream of the day celeb scandals break in full 360-degree 3-D.
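If GPS is the answer, the proximity check itself is simple. Here's a hedged sketch of one way it could work; the 50-meter radius and the haversine grouping are illustrative assumptions, not anything Mobicast has announced.

```python
# A sketch of a GPS-based "same vicinity" check. The radius and the
# approach are assumptions for illustration only.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def same_scene(phone_a, phone_b, radius_m=50.0):
    """Treat two phones as candidates for the same scene if their
    reported GPS fixes fall within radius_m of each other."""
    return haversine_m(phone_a["lat"], phone_a["lon"],
                       phone_b["lat"], phone_b["lon"]) <= radius_m

# Example: two phones roughly 20 meters apart in downtown Cairo.
print(same_scene({"lat": 30.0444, "lon": 31.2357},
                 {"lat": 30.0445, "lon": 31.2359}))  # True
```

Location alone only narrows the candidates, of course; the server's feature matching would still have to confirm the phones are actually pointed at the same thing.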
[New Scientist]