Capture midi messages in Processing during playback

The 2nd midi in Processing example uses the Receiver interface to capture all the midi messages during the playback of a midi file. The program uses a custom GetMidi class to implement the Receiver interface. During playback, it displays each NOTE_ON message together with its channel, octave, and note information.
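
The post does not list the GetMidi class here. A minimal sketch of what such a Receiver implementation could look like follows; the note-name table and the octave arithmetic are my own assumptions, not code from the repository.

import javax.sound.midi.MidiMessage;
import javax.sound.midi.Receiver;
import javax.sound.midi.ShortMessage;

class GetMidi implements Receiver {
  // Note names within one octave, indexed by the midi key number modulo 12.
  final String[] names = {"C", "C#", "D", "D#", "E", "F",
                          "F#", "G", "G#", "A", "A#", "B"};

  public void send(MidiMessage msg, long timeStamp) {
    if (msg instanceof ShortMessage) {
      ShortMessage sm = (ShortMessage) msg;
      // Report only NOTE_ON messages with a non-zero velocity.
      if (sm.getCommand() == ShortMessage.NOTE_ON && sm.getData2() > 0) {
        int key = sm.getData1();
        int octave = key / 12 - 1;  // midi key 60 is middle C (C4)
        println("channel: " + sm.getChannel() +
                ", octave: " + octave +
                ", note: " + names[key % 12]);
      }
    }
  }

  public void close() {
  }
}

To capture the messages, the sketch attaches an instance to the sequencer with player.getTransmitter().setReceiver(new GetMidi()); so that every message sent during playback also reaches the class.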

The source code of the example is also in the Magicandlove GitHub repository.

Sample Processing screen during midi playback

Using midi in Processing for playback

This is my first use of midi in Processing. I do not use the MidiBus library for Processing. Instead, I try to use the standard midi package (javax.sound.midi) that comes with Java. The Java SE 8 documentation also contains the javadoc for the package.

Screenshot of the Processing sketch

The Processing source code and sample midi files are in the Magicandlove GitHub repository. The midi example files are downloaded from the midiworld website.

The code basically needs a Synthesizer class to render the midi instruments into audio and a Sequencer class to play back the midi sequence.

// Obtain the default synthesizer and sequencer, and open both devices.
Synthesizer synth = MidiSystem.getSynthesizer();
Sequencer player = MidiSystem.getSequencer();
synth.open();
player.open();

All the midi music files are in the data folder of the Processing sketch. To play back a piece of midi music, we need to convert it into a Java File object and use the following code to play it. The variable f is a File instance pointing to the midi file in the data folder.

// Load the midi file into a Sequence and start playback.
Sequence music = MidiSystem.getSequence(f);
player.setSequence(music);
player.start();
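
Putting the pieces together, a minimal sketch along the lines described above could look like the following. The file name demo.mid and the exception handling are assumptions for illustration only.

import javax.sound.midi.*;

Sequencer player;

void setup() {
  size(480, 120);
  try {
    // Obtain and open the default synthesizer and sequencer.
    Synthesizer synth = MidiSystem.getSynthesizer();
    player = MidiSystem.getSequencer();
    synth.open();
    player.open();
    // Load a midi file from the data folder and start playback.
    File f = new File(dataPath("demo.mid"));
    Sequence music = MidiSystem.getSequence(f);
    player.setSequence(music);
    player.start();
  } catch (Exception e) {
    println(e.getMessage());
  }
}

void draw() {
  background(0);
}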

Intel Realsense colour image in Processing (Windows only)

The testing is based on the Java wrapper of the Intel Realsense SDK (version 2), found in the following GitHub repository.

https://github.com/edwinRNDR/librealsense/tree/master/wrappers/java

It only provides pre-built binaries for Windows. I used it to test with my Intel Realsense D415 camera. The image below is a screenshot of the camera view.

The source code can be found in the GitHub repository of this post.


Movement in Space (version 2) Testing videos

A new version of the Movement in Space project will be exhibited at the end of this year as an installation piece. Here are some testing videos.

The work is rewritten from the original web version into a Processing version. The animation is built with 3 parametric harmonic formulae. The outputs from one formula can be used as inputs for another, in order to simulate an artificial neural network.
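
The post does not give the actual formulae. As an illustration only, a parametric harmonic function in Processing might take a form like the sketch below, where the output of one oscillator feeds the phase of the next; all names and parameter values here are my assumptions.

// A parametric harmonic oscillator: value = amplitude * sin(frequency * t + phase).
class Harmonic {
  float amp, freq, phase;

  Harmonic(float a, float f, float p) {
    amp = a;
    freq = f;
    phase = p;
  }

  float value(float t) {
    return amp * sin(freq * t + phase);
  }
}

Harmonic h1 = new Harmonic(100, 1.0, 0);
Harmonic h2 = new Harmonic(80, 0.5, HALF_PI);
Harmonic h3 = new Harmonic(60, 2.0, 0);

void setup() {
  size(640, 480);
  noStroke();
}

void draw() {
  background(0);
  float t = frameCount * 0.02;
  // Chain the outputs: h1 modulates the phase of h2, and h2 modulates h3.
  float a = h1.value(t);
  float x = h2.value(t + a * 0.01);
  float y = h3.value(t + x * 0.01);
  translate(width / 2, height / 2);
  fill(255);
  ellipse(x, y, 10, 10);
}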

Face landmark detailed information

Referring back to the post on face landmark detection, the command to retrieve face landmark information is

fm.fit(im.getBGR(), faces, shapes);

where im.getBGR() is the Mat of the input image; faces is the MatOfRect (a collection of Rect) obtained from the face detection; shapes is the ArrayList<MatOfPoint2f> that returns the face landmark details for each face detected.

Each face is a MatOfPoint2f value. We can convert it into an array of Point with length 68. Each point in the array corresponds to a face landmark feature point in the face, as shown in the image below.
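
As an illustration, the following loop converts each detected face into its 68 Point entries and marks them on the canvas. The drawing details are my own; shapes is the ArrayList<MatOfPoint2f> returned by the fit() call above, and Point here is org.opencv.core.Point.

for (int i = 0; i < shapes.size(); i++) {
  // Convert the MatOfPoint2f of one face into an array of 68 Points.
  Point[] pts = shapes.get(i).toArray();
  stroke(255, 255, 0);
  noFill();
  for (Point p : pts) {
    ellipse((float) p.x, (float) p.y, 4, 4);
  }
}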

Face swap example in OpenCV with Processing (v.2)

To enhance the face swap from the last post, we can make use of the cloning feature of the Photo module in OpenCV. The function we use is seamlessClone().

Photo.seamlessClone(warp, im2, mask, centre, output, Photo.NORMAL_CLONE);

where warp is the accumulation of all the warped triangles; im2 is the original target image; mask is the mask image of the convex hull of the face contour; centre is a Point variable holding the centre of the target image; output will be the final blended image.
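
As a sketch of how the pieces can fit together: building the mask with fillConvexPoly and centring the clone on the hull's bounding rectangle are common practice in this kind of example, not necessarily the exact code of the post, and hull is assumed to be a MatOfPoint holding the convex hull of the face contour.

// Build a white mask over the convex hull of the face contour in the target image.
Mat mask = Mat.zeros(im2.size(), CvType.CV_8UC3);
Imgproc.fillConvexPoly(mask, hull, new Scalar(255, 255, 255));

// Take the centre of the hull's bounding rectangle as the placement point.
Rect r = Imgproc.boundingRect(hull);
Point centre = new Point(r.x + r.width / 2.0, r.y + r.height / 2.0);

// Blend the warped face into the target image.
Mat output = new Mat();
Photo.seamlessClone(warp, im2, mask, centre, output, Photo.NORMAL_CLONE);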

Complete source code is now in the GitHub repository, ml20180820b.

Face swap example in OpenCV with Processing (v.1)

After the previous 4 exercises, we can start to work on the OpenCV face swap example in Processing. With the two images, we first compute the face landmarks for each of them. We then prepare the Delaunay triangulation for the 2nd image. Based on the triangles in the 2nd image, we find the corresponding vertices in the 1st image. For each triangle pair, we perform a warp affine transform from the 1st image to the 2nd image. This creates the face swap effect.
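
The per-triangle warp is the core of the process. A sketch of one triangle pair with the OpenCV Java binding might look like the following; the variable names and the bounding-rectangle cropping are my assumptions.

// Warp one triangle from the source image (im1) towards the target image (im2).
// tri1 and tri2 are MatOfPoint2f instances holding the 3 corresponding vertices.
Rect r1 = Imgproc.boundingRect(new MatOfPoint(tri1.toArray()));
Rect r2 = Imgproc.boundingRect(new MatOfPoint(tri2.toArray()));

// Express both triangles relative to their bounding rectangles.
Point[] t1 = tri1.toArray();
Point[] t2 = tri2.toArray();
for (int i = 0; i < 3; i++) {
  t1[i] = new Point(t1[i].x - r1.x, t1[i].y - r1.y);
  t2[i] = new Point(t2[i].x - r2.x, t2[i].y - r2.y);
}

// Compute the affine transform between the triangles and warp the cropped patch.
Mat warpMat = Imgproc.getAffineTransform(new MatOfPoint2f(t1), new MatOfPoint2f(t2));
Mat patch = new Mat();
Imgproc.warpAffine(im1.submat(r1), patch, warpMat, r2.size(),
                   Imgproc.INTER_LINEAR, Core.BORDER_REFLECT_101, new Scalar(0));

Each warped patch is then masked with its own triangle and copied into the accumulated warp image.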

Note the skin tone discrepancy in the 3rd image for the face swap.

Full source code is now available at the GitHub repository ml20180820a.