wraggster
March 20th, 2010, 11:04
http://www.blogcdn.com/www.engadget.com/media/2010/03/livemove-01-top.jpg
If you've been following closely, there are really two sorts of input available to the PlayStation Move. The one that gets the most love and screen time is the camera-based, 3D meatspace tracking that the PlayStation Eye performs in conjunction with the fancy colored ball at the end of the PlayStation Move wand, but most of the actual gameplay we've seen is in truth much more similar to the Wii's MotionPlus than Sony might want to let on. The MotionPlus and PS Move have very similar configurations of gyroscopes and accelerometers, and actually use the same software from AiLive (co-creators of MotionPlus) for developing the gesture recognition that goes into games.

We actually got to see the LiveMove 2 development environment in action, and it's pretty impressive: basically, you tell the computer what gesture you want to perform (a "fist pump," for instance) and then perform a bunch of examples of that movement. LiveMove then figures out the range of allowable movement, and in playback mode shows you whether you're hitting the mark. AiLive showed us gestures as complicated as Graffiti-style handwriting recognition (of Palm OS yore) performed in the air, built from just a few example movements recorded by people back at their offices.

So, this is great news for developers dealing with the significant complication of all these sensors, but at the same time we can't help but be a little disappointed. LiveMove 2 doesn't even use the PlayStation Eye, and as we mentioned in our hands-on impressions of PlayStation Move, we could really sense that a lot of our in-game actions were built from predefined gestures, not us interacting with the 3D environment in any "real" or physics-based way. It's great tech either way, but hopefully that's something that can be improved upon by launch or soon after.
http://www.engadget.com/2010/03/19/ailive-shows-off-its-livemove-2-software-for-building-motionplus/
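AiLive hasn't published LiveMove 2's internals, but the tell-then-demonstrate workflow described above can be sketched with a simple template-matching approach. The snippet below is a minimal, hypothetical illustration in Python, not AiLive's actual API or algorithm: it "learns" a gesture from a few recorded sensor traces (accelerometer plus gyro samples), derives a tolerance from how much the examples differ from each other, and then scores a new motion against the templates using dynamic time warping.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sensor traces.
    Each trace is an (N, D) array of per-frame readings
    (e.g. 3-axis accelerometer + 3-axis gyro -> D = 6)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m] / (n + m)  # length-normalised

class GestureTemplate:
    """A gesture 'trained' from a handful of example performances,
    in the spirit of the tell-then-demonstrate workflow above."""
    def __init__(self, name, examples):
        self.name = name
        self.examples = [np.asarray(e, dtype=float) for e in examples]
        # Use the spread between the examples themselves to set a
        # tolerance: how far a new attempt may stray and still count.
        dists = [dtw_distance(x, y)
                 for i, x in enumerate(self.examples)
                 for y in self.examples[i + 1:]]
        self.threshold = (max(dists) * 1.5) if dists else 1.0

    def score(self, attempt):
        """Return (matched, distance) for a new motion trace."""
        attempt = np.asarray(attempt, dtype=float)
        best = min(dtw_distance(attempt, ex) for ex in self.examples)
        return best <= self.threshold, best

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Fake "fist pump" examples: a sharp rise-and-fall on one axis,
    # with a little noise so each performance differs slightly.
    def fake_pump():
        t = np.linspace(0, 1, 40)
        trace = np.zeros((40, 6))
        trace[:, 1] = np.sin(np.pi * t) * 9.8
        return trace + rng.normal(0, 0.3, trace.shape)

    pump = GestureTemplate("fist pump", [fake_pump() for _ in range(5)])
    ok, dist = pump.score(fake_pump())
    print(f"fist pump recognised: {ok} (distance {dist:.2f})")
    ok, dist = pump.score(rng.normal(0, 1, (40, 6)))  # random wiggling
    print(f"random motion recognised: {ok} (distance {dist:.2f})")
```

This kind of template matching also makes clear why the resulting gameplay feels like triggering predefined gestures rather than free-form physics: the recogniser only knows the motions it was shown, not the 3D position the PlayStation Eye could provide.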