Kinect for Processing Library

Updates


05/07/2014 – The library is renamed again, this time to Kinect4WinSDK, in order not to use the prefix P or P5. It is built with Windows 7, Kinect for Windows SDK 1.8, Java JRE 1.7u60 and Processing 2.2.1.

05/04/2014 – The library is renamed to P5Kinect following a suggestion from the Processing community, in order not to mix it up with the official Processing classes.

28/03/2014 – The library is updated for the use of Kinect for Windows SDK 1.8, Java JRE 1.7u51 and Processing 2.1.1.

Introduction

The Kinect for Processing library is a Java wrapper of the Kinect for Windows SDK and, of course, runs only on the Windows platform. At this moment, I have only tested it in Windows 7. The following four functions are implemented. All images are currently 640 x 480.

  • Obtain the RGB image from the Kinect camera.
  • Obtain the depth image from the Kinect camera.
  • Align the RGB image with the depth image.
  • Obtain player and skeleton information.

API description

GetImage() returns a 640 x 480 ARGB PImage.

GetDepth() returns a 640 x 480 ARGB PImage. The image is, however, grey scale only. Its resolution is also reduced from the original 13 bits to 8 bits for compatibility with the 256-level grey scale image.

GetMask() returns a 640 x 480 ARGB PImage. The background is made transparent using the alpha channel. Only the areas occupied by players are opaque, filled with the aligned RGB images of the players.
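As a minimal sketch (assuming a Kinect instance named kinect, as in the example below), the reduced 8-bit depth values can be read back from the grey scale image that GetDepth() returns:

PImage depth = kinect.GetDepth();
depth.loadPixels();
// The brightness of each grey scale pixel encodes the 8-bit depth value (0-255).
int centre = depth.width / 2 + (depth.height / 2) * depth.width;
println("Depth value at the image centre: " + brightness(depth.pixels[centre]));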

Skeleton tracking is a bit more complicated. The library expects three event handlers in your Processing sketch. Each event handler takes one or two arguments of type SkeletonData (explained later). Each SkeletonData represents a human figure that appears, disappears or moves in front of the Kinect camera.

appearEvent – it is triggered whenever a new figure appears in front of the Kinect camera. The SkeletonData keeps the id and position information of the new figure.

disappearEvent – it is triggered whenever a tracked figure disappears from the screen. The SkeletonData keeps the id and position information of the departing figure.

moveEvent – it is triggered whenever a tracked figure stays within the screen and may move around. The first SkeletonData keeps the old position information and the second SkeletonData maintains the new position information of the moving figure.

Please note that a new figure may not correspond to a genuinely new human player. An existing player who goes off screen and comes back may be treated as new.
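For reference, the three handlers are declared in the sketch as shown in this bare skeleton (their bodies are filled in the full example below):

void appearEvent(SkeletonData _s) {
  // a new figure has been detected
}

void disappearEvent(SkeletonData _s) {
  // a tracked figure has been lost
}

void moveEvent(SkeletonData _b, SkeletonData _a) {
  // _b holds the old data, _a the new data of the moving figure
}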

Data structure description

The SkeletonData class is a subset of the NUI_SKELETON_DATA structure. It exposes the following public fields:

public int trackingState;
public int dwTrackingID;
public PVector position;
public PVector[] skeletonPositions;
public int[] skeletonPositionTrackingState;
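
As an illustration, and assuming the library mirrors the SDK joint index constants such as NUI_SKELETON_POSITION_HEAD (as the bundled examples do), the fields can be read like this:

void printHead(SkeletonData s) {
  if (s.trackingState == Kinect.NUI_SKELETON_NOT_TRACKED) {
    return;
  }
  // Joint position and its per-joint tracking state share the same index.
  PVector head = s.skeletonPositions[Kinect.NUI_SKELETON_POSITION_HEAD];
  int state = s.skeletonPositionTrackingState[Kinect.NUI_SKELETON_POSITION_HEAD];
  println("Figure " + s.dwTrackingID + " head at " + head + ", state " + state);
}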

Example

import kinect4WinSDK.Kinect;
import kinect4WinSDK.SkeletonData;

Kinect kinect;
ArrayList<SkeletonData> bodies;

void setup()
{
  size(640, 480);
  background(0);
  kinect = new Kinect(this);
  smooth();
  bodies = new ArrayList<SkeletonData>();
}

void draw()
{
  background(0);
  image(kinect.GetImage(), 320, 0, 320, 240);
  image(kinect.GetDepth(), 320, 240, 320, 240);
  image(kinect.GetMask(), 0, 240, 320, 240);
  // Draw the tracking ID of each tracked figure at its position.
  for (int i=0; i<bodies.size(); i++) 
  {
    noStroke();
    fill(0, 100, 255);
    text(str(bodies.get(i).dwTrackingID), 
      bodies.get(i).position.x*width/2, 
      bodies.get(i).position.y*height/2);
  }
}

void appearEvent(SkeletonData _s) 
{
  if (_s.trackingState == Kinect.NUI_SKELETON_NOT_TRACKED) 
  {
    return;
  }
  synchronized(bodies) {
    bodies.add(_s);
  }
}

void disappearEvent(SkeletonData _s) 
{
  synchronized(bodies) {
    for (int i=bodies.size()-1; i>=0; i--) 
    {
      if (_s.dwTrackingID == bodies.get(i).dwTrackingID) 
      {
        bodies.remove(i);
      }
    }
  }
}

void moveEvent(SkeletonData _b, SkeletonData _a) 
{
  if (_a.trackingState == Kinect.NUI_SKELETON_NOT_TRACKED) 
  {
    return;
  }
  synchronized(bodies) {
    for (int i=bodies.size ()-1; i>=0; i--) 
    {
      if (_b.dwTrackingID == bodies.get(i).dwTrackingID) 
      {
        bodies.get(i).copy(_a);
        break;
      }
    }
  }
}

Capture midi messages in Processing during playback

The second midi in Processing example uses the Receiver interface to capture all the midi messages during the playback of a midi file. The program uses a custom GetMidi class to implement the Receiver interface. During playback, it displays each NOTE_ON message with its channel, octave and note information.
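As a rough sketch of how such a Receiver can be written (this is only an illustration; the actual GetMidi class in the example may differ in detail):

import javax.sound.midi.*;

// Illustrative Receiver that prints every NOTE_ON message it receives.
class GetMidi implements Receiver {
  public void send(MidiMessage msg, long timeStamp) {
    if (msg instanceof ShortMessage) {
      ShortMessage sm = (ShortMessage) msg;
      // NOTE_ON with velocity 0 is effectively a note-off, so skip it.
      if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
        int note = sm.getData1();
        int octave = note / 12 - 1;   // common midi octave convention
        println("NOTE_ON  channel " + sm.getChannel()
          + "  octave " + octave
          + "  note " + (note % 12));
      }
    }
  }

  public void close() {
  }
}

Such a receiver could then be attached to the sequencer with, for example, player.getTransmitter().setReceiver(new GetMidi()), where player is the Sequencer used for playback.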

The source code of the example is also in the Magicandlove GitHub repository.

Sample Processing screen during midi playback

Using midi in Processing for playback

This is my first use of midi in Processing. I do not use the MidiBus library for Processing. Instead, I use the standard midi package in Java (javax.sound.midi). The standard Java SE 8 documentation also contains the javadoc for the package.

Screenshot of the Processing sketch

The Processing source code and sample midi files are in the Magicandlove GitHub repository. The midi example files are downloaded from the midiworld website.

The code basically needs a Synthesizer class to render the midi instruments into audio and a Sequencer class to play back the midi sequence.

import javax.sound.midi.*;

Synthesizer synth = MidiSystem.getSynthesizer();
Sequencer player = MidiSystem.getSequencer();
synth.open();
player.open();

All the midi music files are in the data folder of the Processing sketch. To play back each piece of midi music, we need to convert it into a Java File object and use the following code to play it. The variable f is a File object pointing to the midi file in the data folder.

Sequence music = MidiSystem.getSequence(f);
player.setSequence(music);
player.start();
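
The File object itself can be created with Processing's dataPath(); the file name example.mid below is only a placeholder for any midi file placed in the data folder:

// "example.mid" is a placeholder name; dataPath() resolves files in the sketch's data folder.
File f = new File(dataPath("example.mid"));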

First try of P5 and OpenCV JS in Electron

This is my first try of p5.js together with the official release of OpenCV JavaScript. I decided not to use any browser and experimented with the integration in the Electron environment with Node.js. The first experiment is a simple image processing application using the Canny edge detector. The IDE I chose to work with is the free Visual Studio Code, which is also available on multiple OS platforms. I have tested it in both Windows 10 and macOS Mojave. On macOS, I first install Node.js with Homebrew.

brew update
brew install node

Then I install Electron as a global package with npm.

npm install -g electron

For Visual Studio Code, I also include the JavaScript support and the ESLint plugin. The next step is to download the p5.js and p5.dom.js files from the p5.js website to a local folder. I put them into a libs folder outside of my application folders. For OpenCV, the pre-built opencv.js is actually included in its documentation repository. The version I used here is 3.4.3. The only documentation I can find for OpenCV JS is this tutorial.

For each Node.js application, you can initialise it with the following command in its folder. Alternatively, you can do it within the Terminal window of Visual Studio Code. Fill in the details when prompted.

npm init

In Visual Studio Code, you have to add a configuration to use the electron command to run the main program, main.js, rather than the default node command. After adding the configuration, it generates a launch.json file like the following:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Electron Main",
            "runtimeExecutable": "electron",
            "program": "${workspaceFolder}/main.js",
            "protocol": "inspector"
        }
    ]
}

For the programming part, I use a main.js to define the Electron window and its related functions. The window loads the index.html page, which is the main webpage for the application. It then calls sketch.js to perform the p5.js and OpenCV core functions. p5.js and OpenCV communicate through the canvas object, using the GUI functions imread() and imshow(). This example switches on the default webcam to capture live video and performs a blur and Canny edge detection.

Source code is now available at my GitHub repository.