My latest project makes use of the new Kinect sensor and the capabilities of WebGL.
Using the source provided in this post, you’ll be able to record a 3D video of yourself (and those in the proximity of your camera) using the new Kinect for Windows V2, and play it back in a 3D environment running in the browser. The 3D environment is created using Three.js, and it even supports viewing the video using the Oculus Rift DK2.
Because the video plays back in the browser, you can send a 3D video of yourself without the recipient having to install anything. Great, isn't it?
To run the recorder, you'll need:

- Windows 8
- A Kinect for Windows V2
- Visual Studio 2012 for Desktop
The recorder tracks people entering the camera's view and stores a 3D point cloud of each person (multiple people can be tracked). It keeps capturing frames for as long as it tracks a person and the maximum number of frames hasn't been reached.
Each point cloud is stored as a .PLY file. When you quit the recording, all point clouds are compressed into a .zip file. To play it back in the browser, you simply upload this archive.
Since each cloud is stored as a .PLY file, you can easily extract individual frames for viewing in 3D software such as MeshLab.
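For reference, an ASCII .PLY file for a colored point cloud is quite simple: a small text header declaring the vertex count and properties, followed by one line per point. The exact properties written by the recorder may differ, but a minimal file looks roughly like this:

```
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
0.10 0.25 1.50 200 180 160
0.11 0.25 1.51 198 181 162
0.12 0.26 1.49 201 179 158
```

This plain-text structure is why tools like MeshLab (and the browser player) can read the frames directly.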
To play back a recording, you'll need:

- Just a WebGL-capable browser (any popular up-to-date browser)
The player loads the .zip generated by the recorder, converts each .PLY file into a three.js point cloud, and loops through the clouds, effectively playing them back as a video. Since this all happens in a 3D environment, you can move the camera around much as you would in an RPG game or a 3D editor.
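The core of that pipeline can be sketched in plain JavaScript. This is not the actual player source, just an illustration under the assumption of ASCII .PLY input: a tiny parser that extracts vertex positions, and a loop that cycles through frames at a fixed rate (in the real player, each frame would be handed to three.js as a `THREE.Points` object and shown in turn).

```javascript
// Sketch: parse a minimal ASCII .PLY point cloud into an array of
// [x, y, z] vertices. Color properties, if present, are ignored here.
function parsePly(text) {
  const lines = text.split(/\r?\n/);
  let vertexCount = 0;
  let i = 0;
  // Read the header up to and including "end_header".
  for (; i < lines.length; i++) {
    const line = lines[i].trim();
    const m = line.match(/^element vertex (\d+)$/);
    if (m) vertexCount = parseInt(m[1], 10);
    if (line === "end_header") { i++; break; }
  }
  // Each body line starts with "x y z"; keep the first three floats.
  const vertices = [];
  for (let n = 0; n < vertexCount; n++, i++) {
    const parts = lines[i].trim().split(/\s+/).map(Number);
    vertices.push(parts.slice(0, 3));
  }
  return vertices;
}

// Cycle through the parsed frames like a video, calling onFrame for
// each one; the callback would swap which point cloud is visible.
function makeFrameLooper(frames, onFrame, fps = 30) {
  let index = 0;
  return setInterval(() => {
    onFrame(frames[index]);
    index = (index + 1) % frames.length;
  }, 1000 / fps);
}
```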
The player is hosted on my blog, and a high-resolution demo recording that you can upload to the player can be downloaded from here. (It takes 5-20 seconds from when you upload the .zip file until playback starts.)
In order to keep the size of the recording shown in the YouTube video reasonably small, half of the pixels were removed from the original Kinect sensor capture. That example recording is about 38 MB.
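That kind of halving can be done with a simple stride filter. A hypothetical illustration (the actual recorder presumably drops depth pixels before writing the .PLY files, not afterwards):

```javascript
// Keep every `stride`-th point of a cloud; stride = 2 discards half
// the points and roughly halves the size of each frame on disk.
function decimate(points, stride = 2) {
  return points.filter((_, i) => i % stride === 0);
}
```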