Record 3D Video with Kinect V2 and play it back in the browser

My latest project makes use of the new Kinect sensor and the capabilities of WebGL.

Using the source provided in this post, you’ll be able to record a 3D video of yourself (and those in the proximity of your camera) using the new Kinect for Windows V2, and play it back in a 3D environment running in the browser. The 3D environment is created using Three.js, and it even supports viewing the video using the Oculus Rift DK2.

By having the browser play back your video, you can send a 3D video of yourself without the recipient needing to install anything. Great, isn’t it?

Recording playing in the browser



Recording requirements:

  • Windows 8
  • A Kinect for Windows V2
  • Visual Studio 2012 for Desktop

The C# recorder uses the Kinect V2 sensor together with the Kinect SDK 2.0. For your convenience, a Visual Studio 2012 project containing the code can be found here.

The code tracks people entering the camera view and stores a 3D snapshot of each tracked person (multiple people can be tracked simultaneously). It keeps taking shots as long as it is tracking someone and hasn’t reached the maximum number of frames.
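The per-pixel step behind each snapshot is mapping a depth reading to a 3D point, which the Kinect SDK handles via its coordinate mapper. As a rough illustration of the underlying math, here is a Python sketch of pinhole back-projection; the focal lengths and principal point below are illustrative placeholders, not the real Kinect V2 calibration.

```python
# Sketch of the depth-to-camera-space mapping that the Kinect SDK's
# coordinate mapper performs internally. The intrinsics (fx, fy, cx, cy)
# are illustrative values, NOT the real Kinect V2 calibration.

def depth_to_point(u, v, depth_mm, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project depth pixel (u, v), with depth in millimetres,
    into a 3D camera-space point in metres (pinhole model)."""
    z = depth_mm / 1000.0          # mm -> m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A pixel at the principal point maps straight down the optical axis:
print(depth_to_point(256, 212, 2000))  # -> (0.0, 0.0, 2.0)
```

Running this over every valid depth pixel of a frame yields the point cloud for that shot.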

Each point cloud is stored as a .PLY file. When you quit the recording, all point clouds are compressed into a single .zip file; to play the recording back in the browser, you simply upload this file.
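The ASCII .PLY format itself is plain text and easy to generate. Here is a minimal Python sketch of a per-frame writer; the actual recorder is C#, and it may store additional vertex properties (such as colour) beyond the bare positions shown here.

```python
# Minimal ASCII .PLY writer, sketching the recorder's per-frame output.
# The header follows the standard PLY vertex layout; the real C# recorder
# may include extra properties (e.g. per-point colour).

def write_ply(path, points):
    """points: iterable of (x, y, z) tuples."""
    points = list(points)
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ply("frame_000.ply", [(0.0, 0.0, 2.0), (0.1, -0.2, 1.9)])
```

One such file per captured frame, zipped together, is what the player consumes.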

Since each cloud is stored in its own .PLY file, you can easily extract individual clouds for viewing in 3D software such as MeshLab.



Playback requirements:

  • A WebGL-capable browser (any popular, up-to-date browser)

The player loads the generated .zip from the recorder, converts each .PLY file to a three.js point cloud, and loops through the clouds, effectively turning them into a video. Since this happens in a 3D environment, you can manipulate the view much as you would in an RPG game or a 3D editor.
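The per-file work the player does can be sketched as follows (in Python for brevity; the real player is JavaScript): parse an ASCII .PLY into a flat vertex array of the kind you would hand to a three.js geometry as its position attribute.

```python
# Sketch of the player's per-file step: turn an ASCII .PLY into a flat
# [x0, y0, z0, x1, y1, z1, ...] list, the shape a three.js BufferGeometry
# position attribute expects. The real player does this in JavaScript.

def parse_ply(text):
    lines = text.splitlines()
    n = 0
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n = int(line.split()[-1])
        if line == "end_header":
            body = lines[i + 1 : i + 1 + n]
            break
    # Keep only x, y, z even if the file carries extra properties.
    return [float(v) for row in body for v in row.split()[:3]]

sample = ("ply\nformat ascii 1.0\nelement vertex 2\n"
          "property float x\nproperty float y\nproperty float z\n"
          "end_header\n0 0 2\n0.1 -0.2 1.9\n")
print(parse_ply(sample))  # -> [0.0, 0.0, 2.0, 0.1, -0.2, 1.9]
```

Looping over the parsed clouds at a fixed frame rate is what makes the sequence play as a video.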

The player is hosted on my blog, and a high resolution demo recording that you can upload to the player can be downloaded from here. (It takes 5-20 seconds from when you upload the .zip file until it starts playing.)

To keep the size of the recording shown in the YouTube video reasonably small, half of the pixels have been removed from the original Kinect sensor capture. That example recording is about 38 MB.

Project files can be retrieved from here. (three.js is a dependency for playback, while the Kinect SDK 2.0 is a dependency for recording.)

22 thoughts on “Record 3D Video with Kinect V2 and play it back in the browser”

  1. Hey, I can’t seem to select a file when I go to ‘choose file’. It just doesn’t do anything; does this viewer have issues with Chrome? Thanks, cool stuff, very keen to talk in detail at a later date. I delve into ‘photogrammetry’ but am also an avid fan of 4D point cloud capture.

    1. Hi, I think the reason is that you pressed the lower right corner of the button. Try the upper left. I just tested, and it does work in Chrome. Not being able to press anywhere on the button could be considered a bug; the reason it behaves like that is that I’m not very good at HTML 😛

  2. Nice, I really like the possibilities of webGL.

    No head tracking nor positional tracking in your demo when using the DK2, right? I had great luck with Cupola using the DK1 and now it looks like browsers are working on native implementations.

    Also, have you seen this Kinect2 capture project? It has a capture program and uses Unity for playback. I played with it back when he was using point clouds. Now he’s switched to polygons.

    1. Hi,
      No, I haven’t had the time to implement head tracking. However, it would be a quick thing to do. I already did it here:
      But as you can see, it’s dependent on a Java server running in the background.

      I’ve seen the project you posted; however, it did not work with SDK 2.0 the last time I tried it. Nor is the source code available.

      1. Thank you for making your source code available with the MIT license. Hopefully soon the native browser implementations give us low-latency tracking.

        Your approach is a promising one, and deserves attention. I only have the Kinect V1 which doesn’t have nearly the fidelity, but the results of the home point cloud “videos” are spectacular viewed in VR. It’s got a very holographic feel to it and really captures the essence of the subject differently than stereoscopic video even.

        I haven’t dug into the point cloud format, but it should be straightforward to composite them together to create larger scenes, right?

    1. No, unfortunately. I only had the Kinect for a short period of time, and haven’t done anything more with it since I posted this a while back.

  3. This is really great and goes most of the way to solving my problem!

    However, my application of your tool is not to create a video but merely to create a single .ply model of a human.

    I’m wondering if it would be possible to increase the resolution of the .ply image even if this means dramatically reducing the number of frames?

    Thanks! Amazing work!

    Ben Biggs

      1. From the provided code, if you set step = 1 you’ll have twice the resolution compared to the default. Then you can take any .ply file from the .zip and you’ll have a valid point cloud file (or change the source to output a single .ply when hitting the button). But I no longer have access to a Kinect, and it’s been a long time since I used the SDK, so I’m not sure whether you can increase the resolution even more…
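To make the effect of the recorder’s `step` variable concrete, here is a rough Python sketch; it assumes a per-axis sampling stride over the Kinect V2’s 512×424 depth frame, which is one plausible reading of how the C# source uses `step`.

```python
# Rough sketch of how a per-axis sampling stride (the recorder's `step`
# variable, under the assumption it strides both axes) affects the number
# of captured points. The Kinect V2 depth frame is 512x424 pixels.

def point_count(width=512, height=424, step=2):
    return len(range(0, width, step)) * len(range(0, height, step))

print(point_count(step=1))  # -> 217088 (every depth pixel)
print(point_count(step=2))  # -> 54272  (one pixel kept per 2x2 block)
```

Under this assumption, step = 1 doubles the resolution along each axis relative to step = 2, at the cost of four times as many points per frame.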

  4. Hello Laht,

    Thank you very much for the nice tutorial. I wrote my code for recording the depth stream, but when I extracted the depth images from the depth video in MATLAB, each depth image had 3 channels. Is that correct?

    1. I don’t know, to be honest.
      I haven’t worked with the API for a very long time, but it should be covered in the API documentation?

  5. Ok I finally got everything to work, just a quick question: is there a way to preload a file with your new build instead of using the button to select a file? Thanks!

    1. It’s been a while since I was fiddling with this, but you could save a collection of recordings on your web server and let the user choose which one to play.

      1. Thanks for the response, that’d be a fun thing to set up!
        I’m interested in having a file or two files automatically start playing instead of needing to be selected like your original build, is this still possible?

        1. Yes, you just need to hardcode the path to the file in your code. Similar to how you would include images, 3D models etc.

  6. Hey Laht,

    Thank you so much for this great project. I realize that it has been more than three years since you worked on this so you might not know anything about it anymore.
    I just downloaded the recorder and wanted to record but the .zip file is empty. Do you have any idea what could be the reason for that?

    Thank you in advance

    1. You are right; unfortunately, I don’t know much about this anymore. My experience with the Kinect was short-lived.
      I think you’ll just have to debug the code. As it’s all contained in a single file, it should be manageable.

      Lars Ivar
