Java and time with Processing

Instead of using the Processing millis() function or the Java Timer class, we can also make use of the Instant and Duration classes introduced in Java 8. Here is a simple example for demonstration.

The program uses two Instant values, start and end, and computes the elapsed time between them with the Duration.between() method.

import java.time.Instant;
import java.time.Duration;
 
Instant start;
PFont font;
 
void setup() {
  size(640, 480);
  start = Instant.now();
  frameRate(30);
  font = loadFont("AmericanTypewriter-24.vlw");
  textFont(font, 24);
}
 
void draw() {
  background(0);
  Instant end = Instant.now();
  Duration elapsed = Duration.between(start, end);
  long ns = elapsed.toNanos();
  text(Long.toString(ns), 230, 230);
}
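
The nanosecond figure is only for demonstration. The same Duration can report other units directly; here is a minimal variation of the draw() function above, with only the display lines changed:

void draw() {
  background(0);
  Instant end = Instant.now();
  Duration elapsed = Duration.between(start, end);
  // convert the same Duration to milliseconds and whole seconds
  long ms = elapsed.toMillis();
  long s = elapsed.getSeconds();
  text(ms + " ms / " + s + " s", 230, 230);
}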

 

Screen capture in Processing

This sketch demonstrates the use of the Java Robot class to perform screen capture in Processing. Because the captured screen is drawn back into a window that is itself part of the screen, it creates a Jodi-like feedback effect. Have fun with it.

Here is the code, which makes use of the Robot class.

 
import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.awt.Rectangle;
 
Robot robot;
 
void setup() {
  size(640, 480);
  try {
    robot = new Robot();
  } 
  catch (Exception e) {
    println(e.getMessage());
  }
}
 
void draw() {
  background(0);
  Rectangle r = new Rectangle(mouseX, mouseY, width, height);
  BufferedImage img1 = robot.createScreenCapture(r);
  PImage img2 = new PImage(img1);
  image(img2, 0, 0);
}
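
If you also want to keep a frame, the BufferedImage returned by the Robot can be written straight to disk with the standard ImageIO class. The keyPressed() function below is an addition of mine, not part of the original sketch; the full-screen rectangle and the file name screenshot.png are arbitrary choices:

import javax.imageio.ImageIO;
import java.io.File;
 
void keyPressed() {
  // capture the whole screen once and save it into the sketch data folder
  Rectangle full = new Rectangle(0, 0, displayWidth, displayHeight);
  BufferedImage shot = robot.createScreenCapture(full);
  try {
    ImageIO.write(shot, "png", new File(dataPath("screenshot.png")));
  }
  catch (IOException e) {
    e.printStackTrace();
  }
}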

Save video in Processing with JCodec

As a side product of my current research, I managed to save the Processing screen to an MP4 video file with the use of the JCodec library. Download the older jcodec-0.1.5.jar into the code folder of your Processing sketch. The simplest way is to use the SequenceEncoder class to add each frame as a BufferedImage to the MP4 video. Remember to finish() the video file before the sketch ends.

The following example captures the live video stream from a webcam and writes it to an MP4 file in the data folder. A mouse click toggles the recording, and the SPACE key finishes the video file.

Here is the source code.

import processing.video.*;
import org.jcodec.api.SequenceEncoder;
import java.awt.image.BufferedImage;
import java.io.File;
 
Capture cap;
SequenceEncoder enc;
String videoName;
boolean recording;
 
void setup() {
  size(640, 480);
  background(0);
  cap = new Capture(this, width, height);
  videoName = "bear.mp4";
  recording = false;
  frameRate(25);
  smooth();
  noStroke();
  fill(255);
  cap.start();
  try {
    enc = new SequenceEncoder(new File(dataPath(videoName)));
  } 
  catch (IOException e) {
    e.printStackTrace();
  }
}
 
void draw() {
  image(cap, 0, 0);
  String fStr = nf(round(frameRate));
  text(fStr, 10, 20);
  if (recording) {
    // undocumented: grab the raw image of the sketch window
    BufferedImage bi = (BufferedImage) this.getGraphics().getImage();
    try {
      enc.encodeImage(bi);
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mousePressed() {
  recording = !recording;
  println("Recording : " + recording);
}
 
void keyPressed() {
  if (keyCode == 32) {
    try {
      enc.finish();
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}

The program also uses the undocumented functions getGraphics() and getImage() to obtain the raw image of the Processing sketch window.
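
If you prefer to avoid the undocumented calls, one possible alternative, sketched here only as a suggestion and not taken from the original program, is to copy the pixels of a PImage (for example the Capture frame) into a BufferedImage yourself before passing it to the encoder:

// build a BufferedImage from a PImage by copying its ARGB pixel array
BufferedImage toBufferedImage(PImage src) {
  src.loadPixels();
  BufferedImage bi = new BufferedImage(src.width, src.height, BufferedImage.TYPE_INT_RGB);
  bi.setRGB(0, 0, src.width, src.height, src.pixels, 0, src.width);
  return bi;
}

With such a helper, enc.encodeImage(toBufferedImage(cap)) inside draw() would record the webcam frames directly instead of the whole sketch window.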

Artificial Neural Network in OpenCV with Processing

This is the first trial of the machine learning module, an artificial neural network, in OpenCV with Processing. I used the same OpenCV 3.1.0 Java build files. The program took the live video stream (PImage) from the webcam and down-sampled it to a grid of just 8 x 6 greyscale pixels. It started by default in training mode, so that I could click on the left-hand side of the screen for an image without a hat, and on the right-hand side for an image of myself wearing a hat. Pressing the SPACE key switched to predict mode, where clicking the video sent the image to the neural network to check whether I was wearing a hat or not. I used around 20 images for the positive response and 20 images for the negative response.

Here is the source code.
 
The main program

import processing.video.*;
import org.opencv.core.Core;
 
Capture cap;
boolean training;
ANN ann;
int w, h;
 
void setup() {
  size(640, 480);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  println(Core.VERSION);
  cap = new Capture(this, width, height);
  cap.start();
  background(0);
  training = true;
  w = 8;
  h = 6;
  ann = new ANN(w*h);
}
 
void draw() {
  image(cap, 0, 0);
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mousePressed() {
  PImage img = new PImage(w, h, ARGB);
  img.copy(cap, 0, 0, width, height, 0, 0, w, h);
  img.updatePixels();
  img.filter(GRAY);
  String fName = "";
  float [] grey = getGrey(img);
  if (training) {
    float label = 0.0;
    if (mouseX < width/2) {
      label = 0.0;
    } else {
      label = 1.0;
    }
    ann.addData(grey, label);
    fName = (label == 0.0) ? "Negative" : "Positive";
    fName += nf(ann.getCount(), 4) + ".png";
    img.save(dataPath("") + "/" + fName);
  } else {
    float val = ann.predict(grey);
    float [] res = ann.getResult();
    val = res[0];
    float diff0 = abs(val);
    float diff1 = abs(val - 1);
    if (diff0 < diff1) {
      println("Without hat");
    } else {
      println("With hat");
    }
  }
}
 
float [] getGrey(PImage m) {
  float [] g = new float[w*h];
  if (m.width != w || m.height != h) 
    return g;
  for (int i=0; i<m.pixels.length; i++) {
    color c = m.pixels[i];
    g[i] = red(c) / 256.0;
  }
  return g;
}
 
void keyPressed() {
  if (keyCode == 32) {
    training = !training;
    if (!training) 
      ann.train();
  }
  println("Training status is " + training);
}

The Artificial Neural Network class

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.core.MatOfFloat;
import org.opencv.ml.ANN_MLP;
import org.opencv.ml.Ml;
 
public class ANN {
  final int MAX_DATA = 1000;
  ANN_MLP mlp;
  int input;
  int output;
  ArrayList<float []>train;
  ArrayList<Float>label;
  MatOfFloat result;
  String model;
 
  public ANN(int i) {
    input = i;
    output = 1;
    mlp = ANN_MLP.create();
    MatOfInt m1 = new MatOfInt(input, input/2, output);
    mlp.setLayerSizes(m1);
    mlp.setActivationFunction(ANN_MLP.SIGMOID_SYM);
    mlp.setTrainMethod(ANN_MLP.RPROP);
    result = new MatOfFloat();
    train = new ArrayList<float[]>();
    label = new ArrayList<Float>();
    model = dataPath("trainModel.xml");
  }
 
  void addData(float [] t, float l) {
    if (t.length != input) 
      return;
    if (train.size() >= MAX_DATA) 
      return;
    train.add(t);
    label.add(l);
  }
 
  int getCount() {
    return train.size();
  }
 
  void train() {
    float [][] tr = new float[train.size()][input];
    for (int i=0; i<train.size(); i++) {
      for (int j=0; j<train.get(i).length; j++) {
        tr[i][j] = train.get(i)[j];
      }
    }
    MatOfFloat response = new MatOfFloat();
    response.fromList(label);
    float [] trf = flatten(tr);
    Mat trainData = new Mat(train.size(), input, CvType.CV_32FC1);
    trainData.put(0, 0, trf);
    mlp.train(trainData, Ml.ROW_SAMPLE, response);
    trainData.release();
    response.release();
    train.clear();
    label.clear();
  }
 
  float predict(float [] i) {
    if (i.length != input) 
      return -1;
    Mat test = new Mat(1, input, CvType.CV_32FC1);
    test.put(0, 0, i);
    float val = mlp.predict(test, result, 0);
    return val;
  }
 
  float [] getResult() {
    float [] r = result.toArray();
    return r;
  }
 
  float [] flatten(float [][] a) {
    if (a.length == 0) 
      return new float[]{};
    int rCnt = a.length;
    int cCnt = a[0].length;
    float [] res = new float[rCnt*cCnt];
    int idx = 0;
    for (int r=0; r<rCnt; r++) {
      for (int c=0; c<cCnt; c++) {
        res[idx] = a[r][c];
        idx++;
      }
    }
    return res;
  }
}
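
One small observation: the model field holds the path of trainModel.xml but is never used in the class. If you want to keep the trained network between runs, a possible extension, assuming the Algorithm.save() method that the OpenCV 3.x Java binding exposes, is a method like the following added to the class (the name saveModel is my own):

  // persist the trained multi-layer perceptron to the XML file in the data folder
  void saveModel() {
    mlp.save(model);
  }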

Enumerate all files in the data folder of Processing

There are lots of ways to enumerate all the files inside the data folder of a Processing sketch. Here are two of them. The first uses the Java DirectoryStream class; the second uses the static method walkFileTree from the Files class.
 
Example with DirectoryStream

import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
 
// try-with-resources closes the stream automatically
try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(dataPath("")))) {
  for (Path file : stream) {
    println(file.getFileName());
  }
}
catch (IOException e) {
  e.printStackTrace();
}

Example with Files.walkFileTree

import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
 
try {
  Files.walkFileTree(Paths.get(dataPath("")), new SimpleFileVisitor<Path>() {
    @Override
    public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
      println(file.getFileName());
      return FileVisitResult.CONTINUE;
    }
  });
}
catch (IOException e) {
  e.printStackTrace();
}
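
For comparison, a minimal sketch with the plain java.io.File.listFiles() method also works and needs no extra imports in Processing:

File folder = new File(dataPath(""));
File [] files = folder.listFiles();
if (files != null) {
  for (File f : files) {
    println(f.getName());
  }
}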

 

Looping through a List in Processing

Here is a piece of demonstration code showing various ways to loop through a List or ArrayList in Processing, i.e. Java. The first two examples use a for loop and the third uses a while loop with an Iterator.
 
We initialize the ArrayList with random integers.

ArrayList<Integer> iList = new ArrayList<Integer>();
iList.add(floor(random(10)));
iList.add(floor(random(10)));
iList.add(floor(random(10)));

The first method is the traditional way to loop with an index.

for (int i=0; i<iList.size(); i++) {
    println(iList.get(i));
}

The second method also uses the for loop, but with the enhanced for-each syntax.

for (int i : iList) {
    println(i);
}

The third method uses an Iterator to step through the ArrayList. Note that java.util.Iterator is not among the classes Processing imports automatically, so the sketch needs an explicit import.

import java.util.Iterator;
 
Iterator<Integer> it = iList.iterator();
while (it.hasNext()) {
    int i = it.next();
    println(i);
}
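
As an aside, Java 8 lists also offer a forEach() method. The sketch below uses an anonymous Consumer rather than a lambda, since older Processing preprocessors do not accept lambda syntax:

import java.util.function.Consumer;
 
iList.forEach(new Consumer<Integer>() {
  public void accept(Integer i) {
    println(i);
  }
});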

Processing Test with the PGraphics

To simplify the use of a dynamic mask with an image, I try to use the PGraphics class as an off-screen buffer to store the image for a subsequent mask operation. The foreground image is the live video input from the webcam. Dragging the mouse draws a dynamic mask that reveals the webcam image. This makes use of the fact that PGraphics is a subclass of PImage, so the mask() function can take the PGraphics instance directly as input. Here is a sample screen shot.
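
The full listing belongs to the original post; what follows is only a minimal sketch written from the description above, not the post's actual code. It assumes a webcam Capture named cap and an all-black PGraphics mask, so that dragging the mouse paints white circles which reveal the video:

import processing.video.*;
 
Capture cap;
PGraphics pg;
 
void setup() {
  size(640, 480);
  cap = new Capture(this, width, height);
  cap.start();
  // off-screen buffer that acts as the mask
  pg = createGraphics(width, height);
  pg.beginDraw();
  pg.background(0);
  pg.endDraw();
}
 
void draw() {
  background(0);
  // PGraphics is a PImage, so it can be passed to mask() directly
  cap.mask(pg);
  image(cap, 0, 0);
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mouseDragged() {
  // paint white into the mask; white areas reveal the webcam image
  pg.beginDraw();
  pg.noStroke();
  pg.fill(255);
  pg.ellipse(mouseX, mouseY, 40, 40);
  pg.endDraw();
}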
 
