Darknet YOLO v3 testing in Processing with the OpenCV DNN module

This is the third demo of the OpenCV Deep Neural Network (dnn) module in Processing with my latest CVImage library. In this version, I used the Darknet YOLO v3 pre-trained model for object detection. It is based on the object_detection sample from the latest OpenCV distribution. The configuration and pre-trained weight files for the COCO dataset are also available on the Darknet website. In the data folder of the Processing sketch, you will have the following 3 files:

  • yolov3.cfg (configuration file)
  • yolov3.weights (pre-trained model weight file)
  • object_detection_classes_yolov3.txt (label description file)
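
Loading and running the model with the dnn module takes only a few calls. Below is a minimal loading sketch, not the demo itself: the 416x416 input size, the 1/255 scale factor, and the yolo_82/yolo_94/yolo_106 output layer names are assumptions based on the standard yolov3.cfg, and getBGR() is the CVImage method that returns the current frame as an OpenCV BGR Mat.

import processing.video.*;
import cvimage.*;
import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
import java.util.ArrayList;
import java.util.List;
 
Capture cap;
CVImage img;
Net net;
 
void setup() {
  size(640, 480);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  cap = new Capture(this, width, height);
  cap.start();
  img = new CVImage(width, height);
  // Read the Darknet configuration and weight files from the data folder.
  net = Dnn.readNetFromDarknet(dataPath("yolov3.cfg"),
    dataPath("yolov3.weights"));
}
 
void draw() {
  if (!cap.available())
    return;
  cap.read();
  image(cap, 0, 0);
  img.copy(cap, 0, 0, cap.width, cap.height,
    0, 0, img.width, img.height);
  img.copyTo();
  // YOLO v3 is usually fed a 416x416 blob scaled to 0..1 in RGB order.
  Mat blob = Dnn.blobFromImage(img.getBGR(), 1.0/255.0,
    new Size(416, 416), new Scalar(0, 0, 0), true, false);
  net.setInput(blob);
  // The three detection layers of the standard yolov3.cfg.
  List<String> outNames = new ArrayList<String>();
  outNames.add("yolo_82");
  outNames.add("yolo_94");
  outNames.add("yolo_106");
  List<Mat> outs = new ArrayList<Mat>();
  net.forward(outs, outNames);
  // Each output row holds the box geometry, the objectness score,
  // and the 80 COCO class scores.
  println(outs.get(0).rows() + " candidates at the first scale");
  blob.release();
}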


You can download the source code from my GitHub repositories.

OpenPose in Processing and OpenCV (DNN)

This is the second test of the OpenCV dnn module in Processing through my CVImage library. It used the OpenPose pre-trained Caffe model.

Since the OpenCV dnn module can read a Caffe model through the readNetFromCaffe() function, the demo sends the real-time webcam image to the model for human pose detection. It made use of the configuration file openpose_pose_coco.prototxt and the saved model pose_iter_440000.caffemodel. The original references of the demo are the openpose.cpp official OpenCV sample and the Java implementation from the GitHub of berak. You can download the model details below.
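
As a sketch of that pipeline, the model can be loaded once and then run on every frame as below. The 368x368 input size and the 57-channel output layout are assumptions based on the published COCO OpenPose model, not code taken from the demo.

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
 
Net net;
 
void setup() {
  size(640, 480);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  // A Caffe model needs both the prototxt description and the weights.
  net = Dnn.readNetFromCaffe(dataPath("openpose_pose_coco.prototxt"),
    dataPath("pose_iter_440000.caffemodel"));
}
 
// Run one pose estimation pass on a frame given as an OpenCV BGR Mat.
Mat estimatePose(Mat frame) {
  // The COCO OpenPose model is commonly fed a 368x368 blob scaled to 0..1.
  Mat blob = Dnn.blobFromImage(frame, 1.0/255.0, new Size(368, 368),
    new Scalar(0, 0, 0), false, false);
  net.setInput(blob);
  // The output shape is [1, 57, H, W]: 18 keypoint heat maps, one
  // background map, and 38 part affinity field channels, each at
  // roughly 1/8 of the input resolution.
  Mat out = net.forward();
  blob.release();
  return out;
}

The keypoint positions are then the maxima of the first 18 heat maps, scaled back up to the webcam resolution.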

The description of the OpenPose output can be found on their official GitHub site. The figure below is the posture information I used in my demo.

Again, the source code is maintained in the Magicandlove repositories of my GitHub. You can download it from here.

Deep Neural Network (dnn) module with Processing

This is my first demo run of the dnn (deep neural network) module in OpenCV 3.4.2 with Processing, using my CVImage library. The module can load pre-trained models from Caffe, Tensorflow, Darknet, and Torch. In this example, I used the Tensorflow model Inception v2 SSD COCO from here. I also obtained the label map file from the Tensorflow GitHub. The following 3 files are in the data folder of the Processing sketch.

  • frozen_inference_graph.pb
  • ssd_inception_v2_coco_2017_11_17.pbtxt
  • mscoco_label_map.pbtxt
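
A minimal detection pass with the dnn function readNetFromTensorflow() is sketched below. The 300x300 input size, the 0.5 confidence threshold, and the [1, 1, N, 7] output layout follow common usage of the OpenCV SSD samples and are assumptions here; frame stands for any OpenCV BGR Mat, for example a webcam frame from CVImage.

import org.opencv.core.*;
import org.opencv.dnn.Dnn;
import org.opencv.dnn.Net;
 
Net net;
 
void setup() {
  size(640, 480);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  // The frozen graph carries the weights; the pbtxt describes the graph.
  net = Dnn.readNetFromTensorflow(
    dataPath("frozen_inference_graph.pb"),
    dataPath("ssd_inception_v2_coco_2017_11_17.pbtxt"));
}
 
void detect(Mat frame) {
  // SSD models are commonly fed a 300x300 blob in RGB channel order.
  Mat blob = Dnn.blobFromImage(frame, 1.0, new Size(300, 300),
    new Scalar(0, 0, 0), true, false);
  net.setInput(blob);
  Mat out = net.forward();
  // Reshape [1, 1, N, 7] into N rows of
  // [batchId, classId, confidence, left, top, right, bottom].
  Mat det = out.reshape(1, (int) out.total() / 7);
  for (int i = 0; i < det.rows(); i++) {
    double conf = det.get(i, 2)[0];
    if (conf < 0.5) continue;
    int classId = (int) det.get(i, 1)[0];
    // Box corners are normalised to the 0..1 range of the input frame.
    float x1 = (float) (det.get(i, 3)[0] * frame.cols());
    float y1 = (float) (det.get(i, 4)[0] * frame.rows());
    float x2 = (float) (det.get(i, 5)[0] * frame.cols());
    float y2 = (float) (det.get(i, 6)[0] * frame.rows());
    println("class " + classId + ", confidence " + conf +
      ", box (" + x1 + ", " + y1 + ") - (" + x2 + ", " + y2 + ")");
  }
  blob.release();
  out.release();
}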

The source code is in the GitHub repository of this website, here.

OpenCV 3.4.2 Java Build

After the release of OpenCV 3.4.2, I have prepared pre-built versions of the Java libraries for the OSX, Ubuntu, and Windows 8.1 platforms (64-bit). In this release, there is more extensive support for the Java binding. I have also packaged the library as the Processing library, CVImage. Please refer to my latest book for details. In addition to the optflow contributed module, it also includes additional contributed modules, such as bgsegm, face, and tracking.

CVImage for OpenCV 3.4.2


Saving video from Processing with jCodec 0.2.3

In a former post, I tested using jCodec 0.1.5 and 0.2.0 to save the Processing screen into an MP4 file. The latest version of jCodec, 0.2.3, has however changed its API for AWT-based applications. Here is the new code for Processing to use jCodec 0.2.3 to save any BufferedImage to an external MP4 file.

To use the code, you need to download the following two jar files from the jCodec website and put them into the code folder of your Processing sketch.

  • jcodec-0.2.3.jar
  • jcodec-javase-0.2.3.jar

The following code writes a frame of your Processing screen into the MP4 file on every mouse press.

import processing.video.*;
import java.awt.image.BufferedImage;
import org.jcodec.api.awt.AWTSequenceEncoder;
 
Capture cap;
AWTSequenceEncoder enc;
 
public void settings() {
  size(640, 480);
}
 
public void setup() {
  cap = new Capture(this, width, height);
  cap.start();
  String fName = "recording.mp4";
  enc = null;
  try {
    // Create an encoder writing 25 frames per second into the data folder.
    enc = AWTSequenceEncoder.createSequenceEncoder(new File(dataPath(fName)), 25);
  } 
  catch (IOException e) {
    println(e.getMessage());
  }
}
 
public void draw() {
  image(cap, 0, 0);
}
 
public void captureEvent(Capture c) {
  // Read each new frame as it arrives from the webcam.
  c.read();
}
 
private void saveVideo(BufferedImage i) {
  // Skip if the encoder failed to initialise in setup().
  if (enc == null) 
    return;
  try {
    // Append one frame to the MP4 sequence.
    enc.encodeImage(i);
  } 
  catch (IOException e) {
    println(e.getMessage());
  }
}
 
public void mousePressed() {
  saveVideo((BufferedImage) this.getGraphics().getImage());
}
 
public void exit() {
  // Flush and close the encoder before the sketch quits.
  try {
    if (enc != null) 
      enc.finish();
  } 
  catch (IOException e) {
    println(e.getMessage());
  }
  super.exit();
}

To save only the captured webcam image instead of the whole screen, replace the saveVideo call in mousePressed() with the following.

saveVideo((BufferedImage) cap.getNative());

CVImage and PixelFlow in Processing

This is a quick demonstration of using the CVImage library from the book Pro Processing for Images and Computer Vision with OpenCV, together with the PixelFlow library from Thomas Diewald.
 
Here is the video documentation.

The full source code is below with one additional class.

Main Processing sketch

import cvimage.*;
import processing.video.*;
import com.thomasdiewald.pixelflow.java.DwPixelFlow;
import com.thomasdiewald.pixelflow.java.fluid.DwFluid2D;
import org.opencv.core.*;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.objdetect.Objdetect;
 
// Face detection size
final int W = 320, H = 180;
Capture cap;
CVImage img;
CascadeClassifier face;
float ratio;
DwFluid2D fluid;
PGraphics2D pg_fluid;
MyFluidData fluidFunc;
 
void settings() {
  size(1280, 720, P2D);
}
 
void setup() {
  background(0);
  // Load the OpenCV native library before using any OpenCV class.
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  println(Core.VERSION);
  cap = new Capture(this, width, height);
  cap.start();
  // Face detection runs on a smaller image for speed.
  img = new CVImage(W, H);
  face = new CascadeClassifier(dataPath("haarcascade_frontalface_default.xml"));
  // Scale factor from the detection image back to the screen size.
  ratio = float(width)/W;
 
  DwPixelFlow context = new DwPixelFlow(this);
  context.print();
  context.printGL();
  fluid = new DwFluid2D(context, width, height, 1);
  fluid.param.dissipation_velocity = 0.60f;
  fluid.param.dissipation_density = 0.99f;
  fluid.param.dissipation_temperature = 1.0f;
  fluid.param.vorticity = 0.001f;
 
  fluidFunc = new MyFluidData();
  fluid.addCallback_FluiData(fluidFunc);
  pg_fluid = (PGraphics2D) createGraphics(width, height, P2D);
  pg_fluid.smooth(4);
}
 
void draw() {
  if (!cap.available()) 
    return;
  background(0);
  cap.read();
  cap.updatePixels();
 
  // Downsample the webcam frame into the CVImage buffer.
  img.copy(cap, 0, 0, cap.width, cap.height, 
    0, 0, img.width, img.height);
  img.copyTo();
 
  Mat grey = img.getGrey();
  MatOfRect faces = new MatOfRect();
 
  // Haar cascade face detection on the greyscale image.
  face.detectMultiScale(grey, faces, 1.15, 3, 
    Objdetect.CASCADE_SCALE_IMAGE, 
    new Size(60, 60), new Size(200, 200));
  Rect [] facesArr = faces.toArray();
  fluidFunc.findFace(facesArr.length > 0);
  // Drive the fluid emitter with the centre of each detected face.
  for (Rect r : facesArr) {
    float cx = r.x + r.width/2.0;
    float cy = r.y + r.height/2.0;
    fluidFunc.setPos(new PVector(cx*ratio, cy*ratio));
  }
  fluid.update();
  // Compose the webcam image and the fluid layer in an offscreen buffer.
  pg_fluid.beginDraw();
  pg_fluid.background(0);
  pg_fluid.image(cap, 0, 0);
  pg_fluid.endDraw();
  fluid.renderFluidTextures(pg_fluid, 0);
  image(pg_fluid, 0, 0);
  pushStyle();
  noStroke();
  fill(0);
  // Display the current frame rate.
  text(nf(round(frameRate), 2, 0), 10, 20);
  popStyle();
  grey.release();
  faces.release();
}

The class definition of MyFluidData

private class MyFluidData implements DwFluid2D.FluidData {
  float intensity;
  float radius;
  float temperature;
  color c;
  boolean first;
  boolean face;
  PVector pos;
  PVector last;
 
  public MyFluidData() {
    super();
    intensity = 1.0f;
    radius = 25.0f;
    temperature = 5.0f;
    c = color(255, 255, 255);
    first = true;
    pos = new PVector(0, 0);
    last = new PVector(0, 0);
    face = false;
  }
 
  public void findFace(boolean f) {
    face = f;
  }
 
  public void setPos(PVector p) {
    // Keep the previous position so update() can derive a velocity.
    if (first) {
      pos.x = p.x;
      pos.y = p.y;
      last.x = pos.x;
      last.y = pos.y;
      first = false;
    } else {
      last.x = pos.x;
      last.y = pos.y;
      pos.x = p.x;
      pos.y = p.y;
    }
  }
 
  @Override
  public void update(DwFluid2D f) {
    if (face) {
      // The fluid texture origin is at the bottom left, so flip y.
      float px = pos.x;
      float py = height - pos.y;
      // Derive an emission velocity from the face movement between frames.
      float vx = (pos.x - last.x) * 10.0f;
      float vy = (pos.y - last.y) * -10.0f;
      c = color(random(100, 255), random(100, 255), random(50, 100));
      f.addVelocity(px, py, radius, vx, vy);
      f.addDensity(px, py, radius, 
        red(c)/255, green(c)/255, blue(c)/255, 
        intensity);
      f.addTemperature(px, py, radius, temperature);
    }
  }
}

Charts in Processing

Here is the first test of using charts from JavaFX in Processing. In recent versions of Processing, we are able to use the FX2D renderer. The following is a simple pie chart example.

import javafx.scene.canvas.Canvas;
import javafx.scene.Scene;
import javafx.scene.layout.StackPane;
import javafx.collections.ObservableList;
import javafx.collections.FXCollections;
import javafx.scene.chart.*;
import javafx.geometry.Side;
 
void setup() {
  size(640, 480, FX2D);
  background(255);
  noLoop();
}
 
void draw() {
  pieChart();
}
 
void pieChart() {
  // The FX2D drawing surface is a JavaFX Canvas inside a StackPane.
  Canvas canvas = (Canvas) this.getSurface().getNative();
  Scene scene = canvas.getScene();
  StackPane pane = (StackPane) scene.getRoot();
 
  ObservableList<PieChart.Data> pieChartData =
    FXCollections.observableArrayList(
    new PieChart.Data("Fat Bear", 10), 
    new PieChart.Data("Pooh San", 20), 
    new PieChart.Data("Pig", 8), 
    new PieChart.Data("Rabbit", 15), 
    new PieChart.Data("Chicken", 2));
  PieChart chart = new PieChart(pieChartData);
  chart.setTitle("Animals");
  chart.setLegendSide(Side.RIGHT);
 
  // Add the chart node on top of the Processing drawing surface.
  pane.getChildren().add(chart);
}

OpenCV 3.3 Java Build

The new release of OpenCV 3.3 is out now. I have again prepared the Java build for use with the CVImage Processing library. It also includes the optflow extra module for motion history applications. Here is the list of the 3 OpenCV releases.

The book Pro Processing for Images and Computer Vision with OpenCV will be released soon. It will include detailed build instructions for multiple platforms.