Updated release of the PKinect library

Finally, I updated the original PKinect Processing library for the Microsoft Kinect camera to use the latest Kinect for Windows SDK 1.8, and built it with Java JRE 1.7 update 51. It was tested in the latest Processing version 2.1.1.

The testing library can be temporarily downloaded here. Place the code folder into your Processing sketch folder. The latest sources will be released on GitHub soon.

 

OpenCV Motion Template Example in Processing

The following example ports the original OpenCV motion template sample code from C to Java/Processing. The original source is the motempl.c file in the OpenCV distribution.

The program starts the default video capture device and passes each frame to the class Motion. The class uses accumulated difference images to segment the frame into different motion regions, and delivers back a list of rectangles indicating where the motion components are. The result is returned to the Processing main program as an ArrayList of the class Result, as sketched below.
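The full sketch is not reproduced here. As a rough illustration only, the main program might consume those results along these lines; the Motion constructor, its update() method and the rect field of Result are hypothetical names standing in for the actual class interface.

import processing.video.*;
 
Capture cap;
Motion motion;   // hypothetical wrapper around the ported motion template code
 
void setup() {
  size(640, 480);
  cap = new Capture(this, width, height);
  cap.start();
  motion = new Motion(this, width, height);   // hypothetical constructor
}
 
void draw() {
  if (!cap.available()) 
    return;
  cap.read();
  image(cap, 0, 0);
  // Hypothetical call: segment the accumulated motion and get back the components.
  ArrayList<Result> results = motion.update(cap);
  noFill();
  stroke(255, 0, 0);
  for (Result r : results) {
    // Each Result is assumed to carry the bounding rectangle of one motion component.
    rect(r.rect.x, r.rect.y, r.rect.width, r.rect.height);
  }
}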
 

 

Conversion between Processing PImage and OpenCV cv::Mat

This example illustrates the use of the Java version of OpenCV. I built OpenCV 2.4.8 on Mac OS X 10.9. After building, I copied the following two files into the code folder of the Processing sketch:

  • opencv-248.jar
  • libopencv_java248.dylib

The program initialises the default video capture device. Each captured PImage is first converted into an OpenCV matrix – Mat. The matrix is duplicated into another copy and then converted back to another PImage for display with the image() command.
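A condensed sketch of that round trip, following the same BufferedImage approach as the convert() functions in the posts below, could look like this. The function name toPImageCopy is mine, and it assumes the usual org.opencv.core imports and that the two files above sit in the sketch's code folder.

PImage toPImageCopy(PImage src) {
  // Copy the PImage pixels into a 4-byte ABGR BufferedImage and read them back as raw bytes.
  BufferedImage bi = new BufferedImage(src.width, src.height, BufferedImage.TYPE_4BYTE_ABGR);
  bi.setRGB(0, 0, src.width, src.height, src.pixels, 0, src.width);
  byte [] bytes = new byte[src.width*src.height*4];
  bi.getRaster().getDataElements(0, 0, src.width, src.height, bytes);
 
  // Wrap the bytes into a 4-channel Mat and duplicate it.
  Mat m1 = new Mat(src.height, src.width, CvType.CV_8UC4);
  m1.put(0, 0, bytes);
  Mat m2 = m1.clone();
 
  // Convert the duplicate back into a new PImage for display with image().
  PImage out = createImage(src.width, src.height, ARGB);
  m2.get(0, 0, bytes);
  bi.getRaster().setDataElements(0, 0, src.width, src.height, bytes);
  bi.getRGB(0, 0, src.width, src.height, out.pixels, 0, src.width);
  out.updatePixels();
  m1.release();
  m2.release();
  return out;
}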
 
It can achieve around 60 frames per second on my old iMac. Here is the screenshot.
 

 

The new face detection and Processing

Here is a slightly modified version of the face detection example from the OpenCV Java tutorial, ported to Processing.

import processing.video.*;
 
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.CvType;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.objdetect.Objdetect;
import org.opencv.imgproc.Imgproc;
 
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.awt.image.Raster;
 
Capture cap;
int pixCnt;
BufferedImage bm;
 
CascadeClassifier faceDetector;
MatOfRect faceDetections;
 
void setup() {
  size(640, 480);
  // Load the OpenCV native library.
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  println(Core.VERSION);
 
  cap = new Capture(this, width, height);
  cap.start();
  bm = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
  pixCnt = width*height*4;
 
  // The Haar cascade file is expected in the sketch's data folder.
  faceDetector = new CascadeClassifier(dataPath("haarcascade_frontalface_default.xml"));
  faceDetections = new MatOfRect();
}
 
void convert(PImage _i) {
  // Copy the PImage pixels into the BufferedImage and read them back as raw ABGR bytes.
  bm.setRGB(0, 0, _i.width, _i.height, _i.pixels, 0, _i.width);
  Raster rr = bm.getRaster();
  byte [] b1 = new byte[pixCnt];
  rr.getDataElements(0, 0, _i.width, _i.height, b1);
  // Wrap the bytes into a 4-channel OpenCV matrix.
  Mat m1 = new Mat(_i.height, _i.width, CvType.CV_8UC4);
  m1.put(0, 0, b1);
 
  // Convert to greyscale for the cascade classifier.
  Mat m2 = new Mat(_i.height, _i.width, CvType.CV_8UC1);
  Imgproc.cvtColor(m1, m2, Imgproc.COLOR_BGRA2GRAY);
 
  // Detect faces between 40x40 and 240x240 pixels.
  faceDetector.detectMultiScale(m2, faceDetections, 3, 1, 
    Objdetect.CASCADE_DO_CANNY_PRUNING, new Size(40, 40), new Size(240, 240));
 
  bm.flush();
  m2.release();
  m1.release();
}
 
void draw() {
  if (!cap.available()) 
    return;
  background(0);
  cap.read();
  // Run face detection on the current frame.
  convert(cap);
  image(cap, 0, 0);
  // Draw a red rectangle around each detected face.
  for (Rect rect: faceDetections.toArray()) {
    noFill();
    stroke(255, 0, 0);
    rect(rect.x, rect.y, rect.width, rect.height);
  }
}
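
One note on the cascade file: haarcascade_frontalface_default.xml ships with the OpenCV distribution (under data/haarcascades) and has to be copied into the sketch's data folder so that dataPath() can find it.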

OpenCV 2.4.4 and Processing

The latest OpenCV 2.4.4 supports the desktop version of Java, so we can use OpenCV functions in the Processing environment. Before writing a library for it, we can temporarily test it by putting opencv-244.jar and libopencv_java244.dylib into the code folder of the sketch. On Windows, the corresponding DLL goes there as well, as in the layout sketched below.
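
For example, a sketch named OpenCVTest (the name is only for illustration) would be laid out like this on Mac OS X:

OpenCVTest/
  OpenCVTest.pde
  code/
    opencv-244.jar
    libopencv_java244.dylib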

We also need to convert between the Processing PImage and the OpenCV Mat. I do not want to loop through the pixels array with bitwise operations, so I use the Java BufferedImage and WritableRaster classes instead, with the standard 8-bit, 4-channel image format.

Here is the example code that applies the GaussianBlur and Canny functions.

import processing.video.*;
 
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.CvType;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
 
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.awt.image.Raster;
 
Capture cap;
int pixCnt;
BufferedImage bm;
PImage img;
 
void setup() {
  size(640, 480);
  // Load the OpenCV native library.
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  println(Core.VERSION);
 
  cap = new Capture(this, width, height);
  cap.start();
  bm = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
  img = createImage(width, height, ARGB);
  pixCnt = width*height*4;
}
 
void convert(PImage _i) {
  // Copy the PImage pixels into the BufferedImage and read them back as raw ABGR bytes.
  bm.setRGB(0, 0, _i.width, _i.height, _i.pixels, 0, _i.width);
  Raster rr = bm.getRaster();
  byte [] b1 = new byte[pixCnt];
  rr.getDataElements(0, 0, _i.width, _i.height, b1);
  // Wrap the bytes into a 4-channel OpenCV matrix.
  Mat m1 = new Mat(_i.height, _i.width, CvType.CV_8UC4);
  m1.put(0, 0, b1);
 
  // Convert to greyscale, blur, and run the Canny edge detector.
  Mat m2 = new Mat(_i.height, _i.width, CvType.CV_8UC1);
  Imgproc.cvtColor(m1, m2, Imgproc.COLOR_BGRA2GRAY);
  Imgproc.GaussianBlur(m2, m2, new Size(7, 7), 1.5, 1.5);
  Imgproc.Canny(m2, m2, 0, 30, 3, false);
 
  // Convert back to 4 channels and write the result into the output PImage.
  Imgproc.cvtColor(m2, m1, Imgproc.COLOR_GRAY2BGRA);
  m1.get(0, 0, b1);
  WritableRaster wr = bm.getRaster();
  wr.setDataElements(0, 0, _i.width, _i.height, b1);
  bm.getRGB(0, 0, _i.width, _i.height, img.pixels, 0, _i.width);
  img.updatePixels();
  bm.flush();
  m2.release();
  m1.release();
}
 
void draw() {
  if (cap.available()) {
    background(0);
    cap.read();
    convert(cap);
    image(img, 0, 0);
  }
}