Face landmark detection in OpenCV Face module with Processing

The 2nd exercise is a demonstration using the Face module of the OpenCV contribution libraries. The official documentation for OpenCV 3.4.2 has a tutorial on face landmark detection. The Face module distribution also has a sample, Facemark.java, from which this exercise is derived. There are 2 extra parameter files. One is the Haar Cascades file, haarcascade_frontalface_default.xml, that we used in the last post for general face detection. The other one is the face landmark model file, face_landmark_model.dat, which is downloaded during the build process of OpenCV. It is also available at this GitHub link.

The program uses the Facemark class with the instance variable fm.

Facemark fm;

It is created by the command.

fm = Face.createFacemarkKazemi();

The model file is then loaded with the following,

fm.loadModel(dataPath(modelFile));

where modelFile is the string variable containing the model file name.
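Once the model is loaded, the detection itself is a two step process: find the face rectangles first and then fit the landmark points on each face. Below is a minimal sketch of that flow, assuming im is the CVImage holding the photo and faceFile is the Haar Cascades file name, as in the face detection post later on this page.

MatOfRect faces = new MatOfRect();
Face.getFacesHAAR(im.getBGR(), faces, dataPath(faceFile));
ArrayList<MatOfPoint2f> shapes = new ArrayList<MatOfPoint2f>();
if (fm.fit(im.getBGR(), faces, shapes)) {
  // Each MatOfPoint2f holds the 68 landmark points of one face.
  for (MatOfPoint2f shape : shapes) {
    for (Point p : shape.toArray()) {
      ellipse((float) p.x, (float) p.y, 4, 4);
    }
  }
}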

Complete source code is in this GitHub repository.

Face detection with the OpenCV Face module in Processing

This is the first in a series of tutorials elaborating on the OpenCV face swap example. It is a demonstration of face detection with the Face module, instead of using the Object Detection module. The sample program detects faces in 2 photos, using the Haar Cascades file, haarcascade_frontalface_default.xml, located in the data folder of the Processing sketch.

The major command is

Face.getFacesHAAR(im.getBGR(), faces, dataPath(faceFile));

where im.getBGR() is the photo Mat returned from the CVImage object im, faces is a MatOfRect variable returning the rectangles of all the faces detected, and faceFile is a string variable containing the file name of the Haar Cascades XML file.
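As a minimal sketch of how the result can be used, the rectangles returned in faces can be drawn back onto the canvas, assuming the photo is already displayed at the origin:

MatOfRect faces = new MatOfRect();
Face.getFacesHAAR(im.getBGR(), faces, dataPath(faceFile));
noFill();
stroke(255, 0, 0);
for (Rect r : faces.toArray()) {
  // One rectangle per detected face.
  rect(r.x, r.y, r.width, r.height);
}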

Complete source code is in the website GitHub repository, ml20180818a.

Darknet YOLO v3 testing in Processing with the OpenCV DNN module

This is the third demo of the OpenCV Deep Neural Network (dnn) module in Processing with my latest CVImage library. In this version, I used the Darknet YOLO v3 pre-trained model for object detection. It is based on the object_detection sample from the latest OpenCV distribution. The configuration and weights files for the COCO dataset are also available on the Darknet website. In the data folder of the Processing sketch, you will have the following 3 files:

  • yolov3.cfg (configuration file)
  • yolov3.weights (pre-trained model weight file)
  • object_detection_classes_yolov3.txt (label description file)
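Below is a minimal sketch of the loading and inference steps with the dnn module, assuming im is a CVImage holding the current frame as in the other posts; the 416×416 input size and the 1/255 scale factor follow the values in the yolov3.cfg file.

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

Net net = Dnn.readNetFromDarknet(dataPath("yolov3.cfg"),
  dataPath("yolov3.weights"));
// Convert the frame into a 4-dimensional blob and run a forward pass.
Mat blob = Dnn.blobFromImage(im.getBGR(), 1.0/255.0,
  new Size(416, 416), new Scalar(0), true, false);
net.setInput(blob);
// YOLO v3 has multiple output layers, so fetch them all.
List<Mat> outs = new ArrayList<Mat>();
net.forward(outs, net.getUnconnectedOutLayersNames());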


You can download the source code in my GitHub repositories.

OpenPose in Processing and OpenCV (DNN)

This is the 2nd test of the OpenCV dnn module in Processing through my CVImage library. It uses the OpenPose pre-trained Caffe model.

Since the OpenCV dnn module can read a Caffe model through the readNetFromCaffe() function, the demo sends the real-time webcam image to the model for human pose detection. It makes use of the configuration file openpose_pose_coco.prototxt and the saved model pose_iter_440000.caffemodel. The original references for the demo are the openpose.cpp official OpenCV sample and the Java implementation from the GitHub of berak. You can download the model files below.
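The loading and inference steps are short. Below is a minimal sketch, assuming im is the CVImage with the current webcam frame and the two model files sit in the data folder; the 368×368 input size is the one commonly used with the COCO body model.

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

Net net = Dnn.readNetFromCaffe(dataPath("openpose_pose_coco.prototxt"),
  dataPath("pose_iter_440000.caffemodel"));
// The result is a set of heatmaps, one per body part.
Mat blob = Dnn.blobFromImage(im.getBGR(), 1.0/255.0,
  new Size(368, 368), new Scalar(0), false, false);
net.setInput(blob);
Mat result = net.forward();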

The description of the OpenPose output can be found on their official GitHub site. The figure below shows the posture information I used in my demo.

Again, the source code is maintained in the Magicandlove repositories of my GitHub. You can download it from here.

Deep Neural Network (dnn) module with Processing

This is my first demo run of the dnn (deep neural network) module in OpenCV 3.4.2 with Processing, using my CVImage library. The module can load pre-trained models from Caffe, TensorFlow, Darknet, and Torch. In this example, I used the TensorFlow model Inception v2 SSD COCO from here. I also obtained the label map file from the TensorFlow GitHub. The following 3 files are in the data folder of the Processing sketch:

  • frozen_inference_graph.pb
  • ssd_inception_v2_coco_2017_11_17.pbtxt
  • mscoco_label_map.pbtxt
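Loading the frozen graph together with its text graph description is again a one-liner. Below is a minimal sketch, assuming im is the CVImage with the current frame; the 300×300 input size is the standard one for SSD models.

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;

Net net = Dnn.readNetFromTensorflow(dataPath("frozen_inference_graph.pb"),
  dataPath("ssd_inception_v2_coco_2017_11_17.pbtxt"));
// Each row of the detection output holds a class id, a confidence and a box.
Mat blob = Dnn.blobFromImage(im.getBGR(), 1.0,
  new Size(300, 300), new Scalar(0), true, false);
net.setInput(blob);
Mat detections = net.forward();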

The source code is in my GitHub repository of this website here.

TensorFlow in Processing

The Java binding for the Google deep learning library TensorFlow is now available. The binary library files for version 1.1.0-rc1 are also available for download here. Below is the Hello World program included in the distribution, modified for Processing.

import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.TensorFlow;
 
void setup() {
  size(640, 480);
  noLoop();
}
 
void draw() {
  background(0);
  // Build a graph with a single constant node that holds the greeting.
  Graph g = new Graph();
  String value = "Hello from " + TensorFlow.version();
  Tensor t = null;
  try {
    t = Tensor.create(value.getBytes("UTF-8"));
    g.opBuilder("Const", "MyConst")
      .setAttr("dtype", t.dataType())
      .setAttr("value", t)
      .build();
    // Run the graph in a session and fetch the constant back.
    Session s = new Session(g);
    Tensor output = s.runner()
      .fetch("MyConst")
      .run()
      .get(0);
    println(new String(output.bytesValue(), "UTF-8"));
    // Release the native resources.
    output.close();
    s.close();
  } 
  catch (Exception e) {
    println(e.getMessage());
  }
  finally {
    if (t != null) t.close();
    g.close();
  }
}

OpenCV 3.2 Java Build

In preparing for the forthcoming book on Processing and OpenCV, I tried to build the Java binding of OpenCV 3.2. It worked easily for the basic components. Nevertheless, when I included the contribution module optflow, it failed. After a number of attempts on various platforms, I traced the problem to the gen_java.py script in the folder opencv-3.2.0/modules/java/generator. Adding back the import details for the class DenseOpticalFlow in that script made the build work again.

For those who do not want to build it yourselves, you can download a pre-built version of the OpenCV 3.2 Java library and use it with Processing immediately. I have tested it with the current Processing release, 3.3. It contains the following files for various platforms, all 64 bit (a minimal test sketch follows the list):

  • libopencv_java320.dylib
  • libopencv_java320.so
  • opencv_java320.dll
  • opencv-320.jar
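Below is a minimal test sketch, assuming opencv-320.jar is in the code folder of the sketch and the native library for your platform can be found on the Java library path.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

void setup() {
  // Load the native part (e.g. opencv_java320.dll or libopencv_java320.so).
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  // Print a 3x3 identity matrix to verify that OpenCV works.
  Mat m = Mat.eye(3, 3, CvType.CV_8UC1);
  println(m.dump());
}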

Enjoy and happy coding.

Save Processing screen as video with jCodec – new

It may not be easy for readers to get the old jcodec-0.1.5.jar that I used in the last post. I tried to work out a newer solution but found that the latest version has changed quite a lot. The latest jcodec release is 0.2.0. I built the following two files for the Processing test:

  • jcodec-0.2.0.jar
  • jcodec-javase-0.2.0.jar

You can download a compressed file of the code folder, which you can extract inside the Processing sketch folder. The Processing code also changes to reflect the new class structure. Here it is.

// Save video file
import processing.video.*;
import org.jcodec.api.awt.AWTSequenceEncoder8Bit;
 
import java.awt.image.BufferedImage;
import java.io.File;
 
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.apache.log4j.BasicConfigurator;
 
static Logger log;
Capture cap;
AWTSequenceEncoder8Bit enc;
String videoName;
String audioName;
boolean recording;
 
void setup() {
  size(640, 480);
  background(0);
  log = LoggerFactory.getLogger(this.getClass());
  BasicConfigurator.configure();
  cap = new Capture(this, width, height);
  videoName = "bear.mp4";
  recording = false;
  int fRate = 25;
  frameRate(fRate);
  cap.start();
  try {
    enc = AWTSequenceEncoder8Bit.createSequenceEncoder8Bit(new File(dataPath(videoName)), fRate);
  } 
  catch (IOException e) {
    e.printStackTrace();
  }
}
 
void draw() {
  image(cap, 0, 0);
  if (recording) {
    BufferedImage bi = (BufferedImage) cap.getNative();
    try {
      enc.encodeImage(bi);
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mousePressed() {
  recording = !recording;
  log.info("Recording : " + recording);
}
 
void keyPressed() {
  // Press the space bar (keyCode 32) to finish and close the video file.
  if (keyCode == 32) {
    try {
      enc.finish();
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}

Save video in Processing with JCodec

As a side product of my current research, I managed to save the Processing screen to an MP4 video file with the JCodec library. Download the former jcodec-0.1.5.jar into the code folder of your Processing sketch. The simplest way is to use the SequenceEncoder class to add a BufferedImage to the MP4 video. Remember to call finish() on the encoder before the sketch ends.

The following example captures the live video stream from a webcam and outputs it to an external MP4 file in the data folder. Use a mouse click to control the recording.

Here is the source code.

import processing.video.*;
import org.jcodec.api.SequenceEncoder;
import java.awt.image.BufferedImage;
import java.io.File;
 
Capture cap;
SequenceEncoder enc;
String videoName;
boolean recording;
 
void setup() {
  size(640, 480);
  background(0);
  cap = new Capture(this, width, height);
  videoName = "bear.mp4";
  recording = false;
  frameRate(25);
  smooth();
  noStroke();
  fill(255);
  cap.start();
  try {
    enc = new SequenceEncoder(new File(dataPath(videoName)));
  } 
  catch (IOException e) {
    e.printStackTrace();
  }
}
 
void draw() {
  image(cap, 0, 0);
  String fStr = nf(round(frameRate));
  text(fStr, 10, 20);
  if (recording) {
    // Undocumented call: grab the sketch window as a BufferedImage.
    BufferedImage bi = (BufferedImage) this.getGraphics().getImage();
    try {
      enc.encodeImage(bi);
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}
 
void captureEvent(Capture c) {
  c.read();
}
 
void mousePressed() {
  recording = !recording;
  println("Recording : " + recording);
}
 
void keyPressed() {
  // Press the space bar (keyCode 32) to finish and close the video file.
  if (keyCode == 32) {
    try {
      enc.finish();
    } 
    catch (IOException e) {
      e.printStackTrace();
    }
  }
}

The program also uses the undocumented functions getGraphics() and getImage() to obtain the raw image of the Processing sketch window.

Searching in Weka with Processing

Further to the last Weka example, I used the same CSV data file for a neighbourhood search. Pressing the mouse button generates a random sequence of numbers between 1 and 4. The program uses the sequence as an instance to match against the database from the CSV data file. The closest match is shown, together with the distance between the test case (random) and that match from the database.

A sample screenshot:

Source code:

import weka.core.converters.CSVLoader;
import weka.core.Instances;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.neighboursearch.LinearNNSearch;
import java.util.Enumeration;
import java.io.File;
 
Instances data;
String csv;
LinearNNSearch lnn;
boolean search;
int idx;
float dist;
String testCase;
String matchCase;
String distance;
 
void setup() {
  size(500, 500);
  csv = "Testing.csv";
  try {
    loadData();
    buildModel();
  } 
  catch (Exception e) {
    e.printStackTrace();
  }
  search = false;
  idx = -1;
  dist = 0.0;
  testCase = "";
  matchCase = "";
  distance = "";
  fill(255);
}
 
void draw() {
  background(0);
  if (search) {
    text(testCase, 100, 100);
    text(matchCase, 100, 150);
    text(distance, 100, 200);
  }
}
 
void loadData() throws Exception {
  // load external CSV data file, without header row.
  CSVLoader loader = new CSVLoader();
  loader.setNoHeaderRowPresent(true);
  loader.setSource(new File(dataPath(csv)));
  data = loader.getDataSet();
  data.setClassIndex(0);
 
  println("Attributes : " + data.numAttributes());
  println("Instances : " + data.numInstances());
  println("Name : " + data.classAttribute().toString());
 
  Enumeration all = data.enumerateInstances();
  while (all.hasMoreElements()) {
    Instance single = (Instance) all.nextElement();
    println("Instance : " + (int) single.classValue() + ": " + single.toString());
  }
}
 
void buildModel() throws Exception {
  // Build linear search model.
  lnn = new LinearNNSearch(data);
  println("Model built ...");
}
 
void test() throws Exception {
  // Construct a test case and do a linear searching.
  double [] val = new double[data.numAttributes()];
  val[0] = 0;
  testCase  = "Test case:  ";
  matchCase = "Match case: ";
  distance  = "Distance:   ";
  for (int i=1; i<val.length; i++) {
    val[i] = floor(random(4))+1;
    testCase += (nf((float)val[i]) + ",");
  }
  testCase = testCase.substring(0, testCase.length()-1);
  DenseInstance x = new DenseInstance(1.0, val);
  x.setDataset(data);
  Instance c = lnn.nearestNeighbour(x);
  double [] tmp = lnn.getDistances();
  dist = (float) tmp[0];
  idx = (int) c.classValue();
  matchCase += data.instance(idx).toString();
  distance += nf(dist);
  saveFrame("weka####.png");
}
 
void mousePressed() {
  try {
    test();
  } 
  catch (Exception e) {
    e.printStackTrace();
  }
  search = true;
}