In this post you’ll find the code for a Java program that reads the sensor data from the Oculus Rift DK2 (position and orientation) and transmits it over a WebSocket connection. It’s all contained in a single class, so it’s easy to copy and paste it into your favorite IDE.
My latest project makes use of the new Kinect sensor and the capabilities of WebGL.
Using the source provided in this post, you’ll be able to record a 3D video of yourself (and those in the proximity of your camera) using the new Kinect for Windows V2, and play it back in a 3D environment running in the browser. The 3D environment is created using Three.js, and it even supports viewing the video using the Oculus Rift DK2.
Because the video plays back in the browser, you can send a 3D video of yourself without the recipient needing to install anything. Great, isn’t it?
The code posted here allows you to create and store a colored 3D point cloud captured using Kinect for Windows v2. The point cloud can easily be imported into software such as MeshLab for visualization, since the data is stored in the common .PLY format.
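To give an idea of how simple the .PLY output is, here is a minimal sketch of an ASCII .PLY writer for a colored point cloud. The buffer layouts (x, y, z floats and r, g, b values per point) are assumptions for illustration; the Kinect SDK delivers depth and color data in its own formats, which the actual code maps into these arrays.

```java
import java.util.Locale;

// Minimal ASCII .PLY serializer for a colored point cloud.
// xyz holds 3 floats per point, rgb holds 3 values (0-255) per point.
// These layouts are illustrative, not the Kinect SDK's native buffers.
public class PlyWriter {

    public static String toPly(float[] xyz, int[] rgb) {
        int count = xyz.length / 3;
        StringBuilder sb = new StringBuilder();
        // Standard .PLY header: vertex count plus one property per column.
        sb.append("ply\n")
          .append("format ascii 1.0\n")
          .append("element vertex ").append(count).append('\n')
          .append("property float x\n")
          .append("property float y\n")
          .append("property float z\n")
          .append("property uchar red\n")
          .append("property uchar green\n")
          .append("property uchar blue\n")
          .append("end_header\n");
        // One line per vertex: position followed by color.
        for (int i = 0; i < count; i++) {
            sb.append(String.format(Locale.US, "%.4f %.4f %.4f %d %d %d\n",
                xyz[3 * i], xyz[3 * i + 1], xyz[3 * i + 2],
                rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]));
        }
        return sb.toString();
    }
}
```

The resulting string can be written straight to a file and opened in MeshLab.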
As of Three.js r69, an updated version of the OculusRiftEffect can be found here.
Currently, three.js includes a function that renders an Oculus Rift-friendly image to the screen. However, the function was made for the first version of the Rift, so it will not work with the updated DK2.
In order to get it working with the DK2, only a few lines of code need to be altered.
The updated function may be found here.
It is available from his GitHub page.
Prebuilt JARs are available from the Maven repository (the native .dlls are embedded).
Here is a simple WebSocket server I’ve written that provides clients with the tracking info:
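The server itself depends on a WebSocket library, so as a self-contained sketch, here is how the DK2 pose might be serialized into the JSON payload each connected client receives. The field names (`position`, `orientation`) are assumptions for illustration, not necessarily the exact wire format used by the server.

```java
import java.util.Locale;

// Sketch of the tracking message a server like this could push to clients.
// Position is in meters; orientation is a quaternion (x, y, z, w).
// The JSON field names are illustrative assumptions.
public class TrackingMessage {

    public static String toJson(float px, float py, float pz,
                                float qx, float qy, float qz, float qw) {
        // Locale.US guarantees '.' as the decimal separator in the JSON.
        return String.format(Locale.US,
            "{\"position\":[%f,%f,%f],\"orientation\":[%f,%f,%f,%f]}",
            px, py, pz, qx, qy, qz, qw);
    }
}
```

On each tracking update, the server would build such a string from the latest Rift pose and broadcast it to every open connection; a browser client can then `JSON.parse` it and drive the Three.js camera.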
Now go code some awesome Oculus demos 😉