processing.org

OpenCL and Processing

I have done a number of tests with various Java implementations of OpenCL in Processing. The major Java bindings of OpenCL include JOCL, JogAmp's Java OpenCL, and JavaCL.

There are two major OpenCL libraries for Processing at the time of testing: MSAOpenCL, which builds on JavaCL, and openclp5, which builds on JOCL. Rather than using these Processing libraries, I call the Java bindings directly. Each binding ships with a Hello World example whose kernel performs a calculation across a large array. For my test, I modify the kernel to multiply two floating-point numbers per cell, with an array size of one million cells.

The sample kernel program is:

__kernel void sampleKernel(__global const float *a,
        __global const float *b,
        __global float *c,
        int n)
{
	int gid = get_global_id(0);
	if (gid >= n)
		return;
	c[gid] = a[gid] * b[gid];
}

I use a MacBook Pro for the testing. The operating system is OS X Lion, which ships with an OpenCL implementation by default. The graphics card is an Nvidia GeForce 9400M with 256 MB of graphics memory. The Processing version is the alpha build 2.0a5, running in 64-bit mode. For each implementation, the following JAR and native library files have to be copied into the sketch's code folder.

JOCL

  • JOCL-0.1.7.jar

Java OpenCL (jogamp)

  • jocl.jar
  • gluegen-rt.jar
  • gluegen-rt-natives-macosx-universal.jar
  • libjocl.dylib

JavaCL

  • javacl-1.0.0-RC2-shaded.jar

I did ten consecutive runs of each implementation; the final figure is the average of the ten runs. Each measurement is taken just before and just after the kernel execution. The first implementation (JOCL) shows more fluctuation across runs. The second implementation (JogAmp's Java OpenCL) wraps the kernel execution and the read-back of results in a single command, so it is not easy to single out the GPU execution time. The third implementation (JavaCL) is clearly the fastest and the most stable in terms of parallel execution.
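
The timing itself is straightforward. Below is a minimal sketch of how such a measurement can be taken with JOCL, assuming the context, command queue, kernel, and buffers have already been set up; the variable names are illustrative, not taken from the actual test harness.

import static org.jocl.CL.*;

// ... commandQueue, kernel, and the argument buffers created beforehand ...
long start = System.nanoTime();
clEnqueueNDRangeKernel(commandQueue, kernel, 1, null,
    new long[]{ n }, null, 0, null, null);
clFinish(commandQueue);  // wait for the GPU to finish before stopping the clock
long elapsed = System.nanoTime() - start;
println("kernel: " + (elapsed / 1.0e6) + " ms");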

Performance with one million cells

Implementation   Time (ms)
JOCL             3.207
Java OpenCL      17.0008
JavaCL           2.9865
CPU              10.952

Performance with half a million cells

Implementation   Time (ms)
JOCL             3.9424
Java OpenCL      8.5672
JavaCL           2.9354
CPU              9.2368
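
For reference, the CPU row measures the same element-wise multiplication in a plain Java loop on the host. A minimal sketch of that baseline, assuming the same arrays and size n as the kernel (the harness details here are mine, not from the original test):

float[] a = new float[n], b = new float[n], c = new float[n];  // filled elsewhere
long start = System.nanoTime();
for (int i = 0; i < n; i++) {
  c[i] = a[i] * b[i];  // same element-wise multiply as the kernel
}
long elapsed = System.nanoTime() - start;
println("CPU: " + (elapsed / 1.0e6) + " ms");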

The source files of the tests can be downloaded here:

DirectShow for Processing – OpenGL

I try to work out another version of the DirectShow for Processing classes from the last post. In this version, I write the movie data directly to an OpenGL texture object. Below is the modified version of the DMovie class; the DCapture class can be modified in the same way.
 
The modified DMovie class

import de.humatic.dsj.*;
import java.awt.image.BufferedImage;
import com.sun.opengl.util.texture.*;
 
class DMovie implements java.beans.PropertyChangeListener {
 
  private DSMovie movie;
  public int width, height;
  public Texture tex;
 
  DMovie(String _s) {
    movie = new DSMovie(dataPath(_s), DSFiltergraph.DD7, this);
    movie.setVolume(1.0);
    movie.setLoop(false);
    movie.play();
    width = movie.getDisplaySize().width;
    height = movie.getDisplaySize().height;
    tex = TextureIO.newTexture(movie.getImage(), false);
  }
 
  public void updateImage() {
    BufferedImage bimg = movie.getImage();
    TextureData td = TextureIO.newTextureData(bimg, false);
    tex.updateImage(td);
  }
 
  public void loop() {
    movie.setLoop(true);
    movie.play();
  }
 
  public void play() {
    movie.play();
  }
 
  public void propertyChange(java.beans.PropertyChangeEvent e) {
    switch (DSJUtils.getEventType(e)) {
      // no DirectShow events are handled in this example
    }
  }
}

 
Sample code that uses the new DMovie class

import processing.opengl.*;
import javax.media.opengl.*;
 
DMovie mov;
PGraphicsOpenGL pgl;
 
void setup()
{
  size(1280, 692, OPENGL);
  pgl = (PGraphicsOpenGL) g;
  GL gl = pgl.beginGL();
  background(0);
  mov = new DMovie("Hugo.mp4");
  mov.loop();
  mov.tex.bind();
  pgl.endGL();
}
 
void draw()
{
  GL gl = pgl.beginGL();
  mov.updateImage();
  mov.tex.enable();
  gl.glBegin(GL.GL_QUADS);
  gl.glTexCoord2f(0, 0); 
  gl.glVertex2f(0, 0);
  gl.glTexCoord2f(1, 0); 
  gl.glVertex2f(width, 0);
  gl.glTexCoord2f(1, 1); 
  gl.glVertex2f(width, height);
  gl.glTexCoord2f(0, 1); 
  gl.glVertex2f(0, height);
  gl.glEnd();  
 
  mov.tex.disable();
  pgl.endGL();
}

DirectShow for Processing

I adapt the DirectShow Java Wrapper (dsj) to work in Processing with two classes, one for movie playback and one for video capture. At this moment, they are just two Java classes, not a standalone library yet. Since it builds on DirectShow, it is of course Windows only. You have to package the dsj.jar and the dsj.dll (32-bit or 64-bit, according to your platform) into your code folder.
 
The DMovie class for movie playback

import de.humatic.dsj.*;
import java.awt.image.BufferedImage;
 
class DMovie implements java.beans.PropertyChangeListener {
 
  private DSMovie movie;
  public int width, height;
 
  DMovie(String _s) {
    movie = new DSMovie(dataPath(_s), DSFiltergraph.DD7, this);
    movie.setVolume(1.0);
    movie.setLoop(false);
    movie.play();
    width = movie.getDisplaySize().width;
    height = movie.getDisplaySize().height;
  }
 
  public PImage updateImage() {
    PImage img = createImage(width, height, RGB);
    BufferedImage bimg = movie.getImage();
    bimg.getRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);
    img.updatePixels();
    return img;
  }
 
  public void loop() {
    movie.setLoop(true);
    movie.play();
  }
 
  public void play() {
    movie.play();
  }
 
  public void propertyChange(java.beans.PropertyChangeEvent e) {
    switch (DSJUtils.getEventType(e)) {
      // no DirectShow events are handled in this example
    }
  }
}

 
Sample code that uses the DMovie class

DMovie mov;
 
void setup()
{
  size(1280, 692);
  background(0);
  mov = new DMovie("Hugo.mp4");
  mov.loop();
  frameRate(25);
}
 
void draw()
{
  image(mov.updateImage(), 0, 0);
}

 
The DCapture class that performs video capture with the available webcam

import de.humatic.dsj.*;
import java.awt.image.BufferedImage;
 
class DCapture implements java.beans.PropertyChangeListener {
 
  private DSCapture capture;
  public int width, height;
 
  DCapture() {
    DSFilterInfo[][] dsi = DSCapture.queryDevices();
    capture = new DSCapture(DSFiltergraph.DD7, dsi[0][0], false, 
    DSFilterInfo.doNotRender(), this);
    width = capture.getDisplaySize().width;
    height = capture.getDisplaySize().height;
  }
 
  public PImage updateImage() {
    PImage img = createImage(width, height, RGB);
    BufferedImage bimg = capture.getImage();
    bimg.getRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);
    img.updatePixels();
    return img;
  }
 
  public void propertyChange(java.beans.PropertyChangeEvent e) {
    switch (DSJUtils.getEventType(e)) {
      // no DirectShow events are handled in this example
    }
  }
}

Sample code that uses the DCapture class

DCapture cap;
 
void setup() 
{
  size(640, 480);
  background(0);
  cap = new DCapture();
}
 
void draw()
{
  image(cap.updateImage(), 0, 0, cap.width, cap.height);
}

Video Playback Performance – Processing

I try out different video playback mechanisms in Processing to compare their performance. The digital video is the one I used in the last post, the trailer of the film Hugo. The details are: 1280 x 692, H.264/AAC, bitrate 2,093.

The computer I am using is an iMac with a 3.06 GHz Intel Core 2 Duo, 4 GB RAM, and an ATI Radeon HD 4670 graphics card with 256 MB of memory. Processing is the latest version, 1.5.1.

For the video playback classes, I tested the default QuickTime Video library, a FasterMovie class, the GSVideo library, and the JMCVideo library with JavaFX 1.2 SDK.

To render the video, I start with the standard image() function, and proceed to test with various OpenGL texturing methods, including the GLGraphics library.

Again, I sample the CPU and memory usage with the Activity Monitor from the Mac OS X utilities, at 30-second intervals. The results are the average of 5 samples.

Performance with 2D image function

Implementation   CPU (%)   Memory (MB)
QuickTime        175       221
FasterMovie      137       275
GSVideo          151       118
JMCVideo         147       87

The OpenGL and QuickTime video libraries have problems working together: the program stops at the size() statement. I have to either put the first video-related command before size() or add a dummy line:

println(Capture.list());
size(1280, 692, OPENGL);

The second batch of tests uses the standard OpenGL vertex and texture functions, along the lines of the sketch below.
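
As a rough illustration of this rendering path, here is what the test looks like with the QuickTime Movie class; the other playback classes slot in the same way (this is a sketch of the approach, not the exact test code):

import processing.video.*;
import processing.opengl.*;
 
Movie mov;
 
void setup()
{
  size(1280, 692, OPENGL);
  background(0);
  mov = new Movie(this, "Hugo.mp4");
  mov.loop();
  textureMode(NORMALIZED);
}
 
void draw()
{
  beginShape(QUADS);
  texture(mov);  // the current movie frame as the quad's texture
  vertex(0, 0, 0, 0);
  vertex(width, 0, 1, 0);
  vertex(width, height, 1, 1);
  vertex(0, height, 0, 1);
  endShape();
}
 
void movieEvent(Movie _m)
{
  _m.read();
}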

Performance with OpenGL texture and vertex functions

Implementation   CPU (%)   Memory (MB)
QuickTime        158       430
FasterMovie      143       610
GSVideo          147       315
JMCVideo         142       397

The third batch of tests involves custom arrangements in OpenGL. Both GSVideo and JMCVideo come with their own functions to write directly to an OpenGL texture. For the FasterMovie test, I combine it with the pixel buffer object technique shown in my previous post.
 
Performance with custom OpenGL texture method

Implementation       CPU (%)   Memory (MB)
FasterMovie+PBO      69        275
GSVideo+GLGraphics   58        120
JMCVideo+OpenGL      57        91

 
Sample code for GSVideo and GLGraphics (from codeanticode)

import processing.opengl.*;
import codeanticode.glgraphics.*;
import codeanticode.gsvideo.*;
 
GSMovie mov;
GLTexture tex;
 
void setup() 
{
  size(1280, 692, GLConstants.GLGRAPHICS);
  background(0);   
  mov = new GSMovie(this, "Hugo.mp4");
  tex = new GLTexture(this);
  mov.setPixelDest(tex);  
  mov.loop();
}
 
void draw() 
{
  if (tex.putPixelsIntoTexture()) 
  {
    image(tex, 0, 0);
  }
}
 
void movieEvent(GSMovie _m) {
  _m.read();
}

 
Sample code for JMCVideo (from Angus Forbes)

import jmcvideo.*;
import processing.opengl.*;
import javax.media.opengl.*; 
 
JMCMovieGL mov;
PGraphicsOpenGL pgl;
 
void setup() 
{
  size(1280, 692, OPENGL);
  background(0);
  mov = new JMCMovieGL(this, "Hugo.mp4", ARGB);
  mov.loop();
 
  pgl = (PGraphicsOpenGL) g;
  GL gl = pgl.beginGL();  
  gl.glViewport(0, 0, width, height);
  pgl.endGL();
}
 
void draw() 
{
  GL gl = pgl.beginGL();  
  mov.image(gl, 0, 0, width, height);
  pgl.endGL();
}

Pixel Buffer Object in Processing

People switch from Processing to OpenFrameworks or other more serious development platforms due to performance considerations. I have done a few searches and found that there are a number of libraries using different Java bindings of OpenCL, vertex buffer objects, pixel buffer objects, and even DirectShow. I wonder whether it is feasible to use Processing in production environments where performance is important.

I have done a test comparing a live webcam video stream rendered with the traditional texture method against one rendered with a pixel buffer object (PBO). The performance difference on my MacBook Pro is noticeable and significant. I do not record the videos, as doing so may distort the real-time performance.

This is the ‘traditional’ method.

import processing.video.*;
import processing.opengl.*;
 
float a;
 
Capture cap;
PImage img;
 
void setup()
{
  println(Capture.list());
  size(640, 480, OPENGL);
  hint(ENABLE_OPENGL_2X_SMOOTH);
  hint(DISABLE_DEPTH_TEST);
  a = 0;
 
  img = loadImage("tron.jpg");
  frameRate(30);
  cap = new Capture(this, width, height, 30);
  cap.read();
  textureMode(NORMALIZED);
}
 
void draw()
{
  background(0);
  image(img, 0, 0);
  translate(width/2, height/2, 0);
  float b = a*PI/180.0;
  rotateY(b);
  rotateX(b);
  beginShape(QUADS);
  texture(cap);
  vertex(-320, -240, 0, 0, 0);
  vertex( 320, -240, 0, 1, 0);
  vertex( 320, 240, 0, 1, 1);
  vertex(-320, 240, 0, 0, 1);
  endShape();
  a += 1;
  a %= 360;
}
 
void captureEvent(Capture _c)
{
  _c.read();
}

 

 
This is the PBO method.

import processing.video.*;
import processing.opengl.*;
import javax.media.opengl.*;
import java.nio.IntBuffer;
 
float a;
PGraphicsOpenGL pgl;
GL gl;
PImage img;
 
int [] tex = new int[1];
int [] pbo = new int[1];
 
Capture cap;
 
void setup()
{
  println(Capture.list());
  size(640, 480, OPENGL);
  hint(ENABLE_OPENGL_2X_SMOOTH);
  hint(DISABLE_DEPTH_TEST);
  a = 0;
 
  img = loadImage("tron.jpg");
  frameRate(30);
  pgl = (PGraphicsOpenGL) g;
  cap = new Capture(this, width, height, 30);
  cap.read();
 
  gl = pgl.gl;
 
  gl.glGenBuffers(1, pbo, 0);
  gl.glBindBuffer(GL.GL_PIXEL_UNPACK_BUFFER, pbo[0]);  
  gl.glBufferData(GL.GL_PIXEL_UNPACK_BUFFER, 4*cap.width*cap.height, null, GL.GL_STREAM_DRAW);
  gl.glBindBuffer(GL.GL_PIXEL_UNPACK_BUFFER, 0);
 
  gl.glGenTextures(1, tex, 0);
  gl.glBindTexture(GL.GL_TEXTURE_2D, tex[0]);
 
  gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST);
  gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_NEAREST);
  gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP);
  gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP);
 
  gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, cap.width, cap.height, 0, GL.GL_BGRA, GL.GL_UNSIGNED_BYTE, null);
  gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
}
 
void draw()
{
  background(0);
  image(img, 0, 0);
 
  gl = pgl.beginGL();
  gl.glColor3f( 1.0f, 1.0f, 1.0f);	
 
  gl.glEnable(GL.GL_TEXTURE_2D);
 
  gl.glBindTexture(GL.GL_TEXTURE_2D, tex[0]);
  gl.glBindBuffer(GL.GL_PIXEL_UNPACK_BUFFER, pbo[0]);
 
  // With a PBO bound to GL_PIXEL_UNPACK_BUFFER, the last argument of
  // glTexSubImage2D is an offset into the buffer, so the texture is
  // updated from the PBO contents (the frame written on the previous pass).
  gl.glTexSubImage2D(GL.GL_TEXTURE_2D, 0, 0, 0, cap.width, cap.height, GL.GL_BGRA, GL.GL_UNSIGNED_BYTE, 0);
 
  // Orphan the buffer: allocating fresh storage lets us write the new
  // frame without waiting for the GPU to finish reading the old one.
  gl.glBufferData(GL.GL_PIXEL_UNPACK_BUFFER, 4*cap.width*cap.height, null, GL.GL_STREAM_DRAW);
 
  // Map the PBO into client memory and copy the current capture pixels in.
  IntBuffer tmp1 = gl.glMapBuffer(GL.GL_PIXEL_UNPACK_BUFFER, GL.GL_WRITE_ONLY).asIntBuffer();
  tmp1.put(cap.pixels);
 
  gl.glUnmapBuffer(GL.GL_PIXEL_UNPACK_BUFFER);
  gl.glBindBuffer(GL.GL_PIXEL_UNPACK_BUFFER, 0);
 
  gl.glTranslatef(width/2, height/2, 0);
  gl.glRotatef(a, 1, 1, 0);
 
  gl.glBegin(GL.GL_QUADS);	
  gl.glTexCoord2f(0.0f, 0.0f);			
  gl.glVertex3f(-320, -240, 0);
  gl.glTexCoord2f(1.0f, 0.0f);
  gl.glVertex3f( 320, -240, 0);
  gl.glTexCoord2f(1.0f, 1.0f);
  gl.glVertex3f( 320, 240, 0);
  gl.glTexCoord2f(0.0f, 1.0f);
  gl.glVertex3f(-320, 240, 0);
  gl.glEnd();
  gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
  pgl.endGL();
  a += 1.0;
  a %= 360;
}
 
void captureEvent(Capture _c)
{
  _c.read();
}

 

Neurosky MindWave and Processing

This is my first trial run of a Neurosky MindWave sensor with a custom program written in Processing. The connection architecture is quite straightforward. The ThinkGear connector is a background process that reads from the headset's serial port to obtain the brainwave signals and distributes them through a TCP socket server (localhost, port 13854).
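
Any program that can read from a TCP socket can therefore consume the data. As a minimal sketch of the protocol (a hypothetical helper for illustration, not the library code used below): connect to port 13854, optionally send a JSON configuration string, then read newline-delimited JSON records.

import java.net.Socket;
import java.io.*;

// Hypothetical illustration of talking to the ThinkGear connector directly.
void readThinkGear() throws IOException {
  Socket s = new Socket("127.0.0.1", 13854);
  PrintWriter out = new PrintWriter(s.getOutputStream(), true);
  // ask the connector for JSON-formatted output
  out.println("{\"enableRawOutput\": false, \"format\": \"Json\"}");
  BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
  String line;
  while ((line = in.readLine()) != null) {
    // e.g. {"eSense":{"attention":57,"meditation":60}, ...}
    println(line);
  }
}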
 

 
There are a number of Java socket client implementations. I use the ThinkGear Java library from Creation.

Eric Blue has a Processing-based visualizer using the MindWave.

ZeroShore has another implementation, with an animation called HyperCat.
 
Sample Code

import processing.video.*;
import neurosky.*;
import org.json.*;
 
ThinkGearSocket neuroSocket;
int attention = 0;
int meditation = 0;
int blinkSt = 0;
PFont font;
int blink = 0;
Capture cap;
 
void setup() 
{
  size(640, 480);
  // assign to the field (not a local variable) so stop() can close it later
  neuroSocket = new ThinkGearSocket(this);
  try 
  {
    neuroSocket.start();
  } 
  catch (ConnectException e) {
    e.printStackTrace();
  }
  smooth();
  font = loadFont("MyriadPro-Regular-24.vlw");
  textFont(font);
  frameRate(25);
  cap = new Capture(this, width, height);
  noStroke();
}
 
void draw() 
{
  background(0);
 
  image(cap, 0, 0);
  fill(255, 255, 0);
  text("Attention: "+attention, 20, 150);
  fill(255, 255, 0, 160);
  rect(200, 130, attention*3, 40);
  fill(255, 255, 0);
  text("Meditation: "+meditation, 20, 250);
  fill(255, 255, 0, 160);
  rect(200, 230, meditation*3, 40);
 
  if (blink>0) 
  {
    fill(255, 255, 0);
    text("Blink: " + blinkSt, 20, 350);
    if (blink>15) 
    {
      blink = 0;
    } 
    else 
    {
      blink++;
    }
  }
}
 
void captureEvent(Capture _c) 
{
  _c.read();
}
 
void attentionEvent(int attentionLevel) 
{
  attention = attentionLevel;
}
 
void meditationEvent(int meditationLevel) 
{
  meditation = meditationLevel;
}
 
void blinkEvent(int blinkStrength) 
{
  blinkSt = blinkStrength;
  blink = 1;
}
 
void stop() {
  neuroSocket.stop();
  super.stop();
}