MediaPipe in TouchDesigner 5

This is a continuation of the last post, with slight modifications. Instead of just displaying the face mesh details in a Script TOP, it visualises all the face mesh points in 3D space. Since the facial landmarks returned from MediaPipe contain three-dimensional information, we can enumerate all the points and display them in a Script SOP. We are going to use the appendPoint() function to generate the point cloud and the appendPoly() function to create the face mesh.

The data returned from MediaPipe contains the 468 facial landmarks, based on the Canonical Face Model. The face mesh topology (the triangles), however, is not available from the MediaPipe solution results. Nevertheless, we can obtain it from the metadata of the facial landmarks in the MediaPipe GitHub repository. To simplify the process, I have edited the data into this CSV mesh file. The mesh.csv file is expected to be located in the TouchDesigner project folder, together with the TOE project file. Here are the first few lines of the mesh.csv file,

173,155,133
246,33,7
382,398,362
263,466,249
308,415,324

Each line describes one triangle of the face mesh. The 3 numbers are the indices of its vertices in the 468 facial landmarks. A visualisation of the landmarks is also available in the MediaPipe GitHub repository.

Canonical face model
Image from the Google MediaPipe GitHub

The TouchDesigner project will render the Script SOP with the standard Geometry, Camera, Light and the Render TOP.

I’ll not go through all the code here. The following paragraphs cover some of the essential elements in the Python code. The first one is the initialisation of the face mesh information from the mesh.csv file.

triangles = []
mesh_file = project.folder + "/mesh.csv"
with open(mesh_file, "r") as mf:
    mesh_list = mf.read().split('\n')
for m in mesh_list:
    # skip any empty (e.g. trailing) lines
    if not m:
        continue
    # each line holds the 3 vertex indices of one triangle;
    # convert them to integers for use as point indices later
    triangles.append([int(n) for n in m.split(',')])

The variable triangles is the list of all triangles from the canonical face model. Each entry is a list of 3 indices into the corresponding points of the 468 facial landmarks. The second one is the code to generate the face point cloud and the mesh.

for pt in landmarks:
    p = scriptOp.appendPoint()
    p.x = pt.x
    p.y = pt.y
    p.z = pt.z

for poly in triangles:
    pp = scriptOp.appendPoly(3, closed=True, addPoints=False)
    pp[0].point = scriptOp.points[poly[0]]
    pp[1].point = scriptOp.points[poly[1]]
    pp[2].point = scriptOp.points[poly[2]]

The first for loop creates all the points from the facial landmarks using the appendPoint() function. The second for loop creates all the triangular meshes from information stored in the variable triangles using the appendPoly() function.
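The relationship between the two loops is that of an indexed mesh: the points are stored once, and each triangle only references them by index. As a rough sketch of the same idea in plain Python (outside TouchDesigner, with made-up landmark values, not the project code),

```python
# Indexed mesh: vertices stored once, triangles reference them by index.
# The landmark values below are made up for illustration only.
landmarks = [
    (0.1, 0.2, 0.0),   # point 0
    (0.4, 0.1, 0.0),   # point 1
    (0.3, 0.5, 0.1),   # point 2
    (0.7, 0.6, 0.2),   # point 3
]
triangles = [[0, 1, 2], [1, 3, 2]]

# Resolve a triangle into its actual vertex coordinates,
# mirroring what appendPoly() does with scriptOp.points.
def triangle_vertices(tri, points):
    return [points[i] for i in tri]

for tri in triangles:
    print(triangle_vertices(tri, landmarks))
```

Storing indices rather than copies of the coordinates means each shared vertex exists only once, which is exactly why addPoints=False is passed to appendPoly() above.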

After we draw the 3D face model, we also compute the normals of the model by using another Attribute Create SOP.
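The Attribute Create SOP handles the normals inside TouchDesigner. For reference only, the face normal of a triangle is the normalised cross product of two of its edges; a minimal NumPy sketch of that computation (purely illustrative, not part of the project) might look like this,

```python
import numpy as np

def face_normal(a, b, c):
    """Unit normal of triangle (a, b, c), from the cross product of two edges."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    n = np.cross(b - a, c - a)
    length = np.linalg.norm(n)
    # guard against degenerate (zero-area) triangles
    return n / length if length > 0 else n

# A triangle lying in the XY plane has a normal along the Z axis.
print(face_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # -> [0. 0. 1.]
```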

The final TouchDesigner project is available in the MediaPipeFaceMeshSOP repository.

MediaPipe in TouchDesigner 4

The following example is a simple demonstration of the Face Mesh function from MediaPipe in TouchDesigner. It is very similar to the previous face detection example. Again, we are going to use the Script TOP to integrate with MediaPipe and display the face mesh information together with the live webcam image.

Instead of flipping the image vertically in the Python code, this version will perform the flipping in the TouchDesigner Flip TOP, both vertically and horizontally (mirror image). We also reduce the resolution from the original 1280 x 720 to 640 x 360 for better performance. The Face Mesh information is drawn directly to the output image in the Script TOP.

Here is the Python code in the Script TOP.

# me - this DAT
# scriptOp - the OP which is cooking
import numpy
import cv2
import mediapipe as mp

mp_drawing = mp.solutions.drawing_utils
mp_face_mesh = mp.solutions.face_mesh

point_spec = mp_drawing.DrawingSpec(
    color=(0, 100, 255),
    thickness=1,
    circle_radius=1
)
line_spec = mp_drawing.DrawingSpec(
    color=(255, 200, 0),
    thickness=2,
    circle_radius=1
)
face_mesh = mp_face_mesh.FaceMesh(
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5
)

# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    p = page.appendFloat('Valuea', label='Value A')
    p = page.appendFloat('Valueb', label='Value B')
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    input = scriptOp.inputs[0].numpyArray(delayed=True)
    if input is not None:
        frame = input * 255
        frame = frame.astype('uint8')
        frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2RGB)
        results = face_mesh.process(frame)
        if results.multi_face_landmarks:
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(
                    image=frame,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACE_CONNECTIONS,
                    landmark_drawing_spec=point_spec,
                    connection_drawing_spec=line_spec)

        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2RGBA)
        scriptOp.copyNumpyArray(frame)
    return

Similar to the previous examples, the important code is in the onCook function. The face_mesh instance processes each frame and draws the results onto the frame for final display.

The TouchDesigner project is now available in the MediaPipeFaceMeshTOP folder of the GitHub repository.

MediaPipe in TouchDesigner 3

The last post demonstrated the use of the face detection function in MediaPipe with TouchDesigner. Nevertheless, it only produced an image with the detected results. It is not very useful if we want to manipulate the graphics according to the detected faces. In this example, we switch to the use of Script CHOP to output the detected face data in numeric form.

As mentioned in the last post, MediaPipe face detection expects a vertically flipped image compared with the TouchDesigner texture, so this example flips the image with a TouchDesigner TOP to make the Python code simpler. Instead of showing all the detected faces, the code just picks the largest face and outputs its bounding box and the positions of the left and right eyes.

Since we are working with a Script CHOP, it is not possible to connect the flipped TOP to it directly. In this case, we use the onSetupParameters function to define a Face TOP input in the Custom tab.

def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    topPar = page.appendTOP('Face', label='Image with face')
    return

And in the onCook function, we use the following statement to retrieve the image from the TOP that we dragged into the Face parameter.

topRef = scriptOp.par.Face.eval()

After we find the largest face in the image, we append a number of channels to the Script CHOP so that the TouchDesigner project can use them for custom visualisation. The new channels are,

  • face (number of faces detected)
  • width, height (size of the bounding box)
  • tx, ty (centre of the bounding box)
  • left_eye_x, left_eye_y (position of the left eye)
  • right_eye_x, right_eye_y (position of the right eye)
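The project itself derives these channels from the MediaPipe detection results inside the Script CHOP. As a rough sketch of the selection logic only (plain Python, with hypothetical relative bounding boxes standing in for real MediaPipe detection objects), picking the largest face by bounding-box area could look like this,

```python
# Hypothetical relative bounding boxes (xmin, ymin, width, height),
# in the range 0..1, standing in for MediaPipe detection results.
faces = [
    {'xmin': 0.1, 'ymin': 0.2, 'width': 0.20, 'height': 0.25},
    {'xmin': 0.5, 'ymin': 0.4, 'width': 0.35, 'height': 0.40},
]

def largest_face(detections):
    """Return the detection with the largest bounding-box area, or None."""
    if not detections:
        return None
    return max(detections, key=lambda f: f['width'] * f['height'])

best = largest_face(faces)
# Centre of the bounding box, as the tx / ty channels would report it.
tx = best['xmin'] + best['width'] / 2
ty = best['ymin'] + best['height'] / 2
```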

The complete project file can be downloaded from this GitHub repository.

MediaPipe in TouchDesigner 2

Now we are ready to integrate the MediaPipe functions in TouchDesigner after we learnt the basics of the Script TOP. The first one we are going to do is Face Detection. We just use the Script TOP to display the bounding boxes of the detected faces without sending the face details elsewhere for processing. In the next example after this, we shall send the bounding box details to a Script CHOP.

In order to have the mirror image effect, we use the Flip TOP with a horizontal flip. We also add a Resolution TOP to reduce the original 1280 x 720 to half, i.e. 640 x 360 for better performance. Of course, we can achieve the same result by changing the Output Resolution of the Flip TOP from its Common tab.

# me - this DAT
# scriptOp - the OP which is cooking
import numpy
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

face = mp_face.FaceDetection(
    min_detection_confidence=0.7
)

# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    input = scriptOp.inputs[0].numpyArray(delayed=True)
    if input is not None:
        frame = cv2.cvtColor(input, cv2.COLOR_RGBA2RGB)
        frame = cv2.flip(frame, 0)
        frame *= 255
        frame = frame.astype('uint8')
        results = face.process(frame)
        if results.detections:
            for detection in results.detections:
                mp_drawing.draw_detection(frame, detection)

        frame = cv2.flip(frame, 0)
        scriptOp.copyNumpyArray(frame)
    return

In the first place, we need to import MediaPipe into the Python code. The next step is to define a few variables: mp_face to work with the face detection, mp_drawing for visualisation of the detected faces, and finally face, the face detection class instance with the detection confidence value.

To process the video, we also convert the RGBA frame into RGB. It is found that the image format MediaPipe face detection expects is vertically flipped compared with the TouchDesigner TOP. In the Python code, we first flip the image vertically before sending it to the face detection with face.process(frame). After the mp_drawing utility draws the detection results onto the frame, we flip the image vertically again for output to the Script TOP. The object results.detections contains all the details of the detected faces. Each face is visualised with a bounding box and 6 dots indicating the two ears, the two eyes, the nose tip and the mouth centre.
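The detection results use relative coordinates in the range 0 to 1, so they have to be scaled by the image size before they can be used as pixel positions. A small illustrative sketch of that conversion (plain Python with made-up values, not the project code),

```python
# MediaPipe keypoints are relative (0..1); convert them to pixel
# coordinates by scaling with the image size. Values are made up.
WIDTH, HEIGHT = 640, 360

def to_pixels(rel_x, rel_y, width=WIDTH, height=HEIGHT):
    """Scale a relative (x, y) keypoint to integer pixel coordinates."""
    return int(rel_x * width), int(rel_y * height)

print(to_pixels(0.5, 0.5))    # -> (320, 180)
print(to_pixels(0.25, 0.75))  # -> (160, 270)
```

This is also the scaling mp_drawing.draw_detection performs internally before drawing on the frame.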

The TouchDesigner project file is in this GitHub repository.

Script TOP in TouchDesigner – Canny Edge Detector

After the first introduction of the Script TOP, this example implements the Canny Edge Detector with OpenCV in TouchDesigner as a demonstration. Note that TouchDesigner already includes its own Edge TOP for edge detection and visualisation.

We also implement a slider parameter Threshold in the Script TOP to control the variation of edge detection.

Here is the source code of the Script TOP. Note that we have made a number of changes in the default onSetupParameters function to include a custom parameter, Threshold, as an integer slider. It generates a value between 5 and 60, to be used in the onCook function as the threshold value for the Canny edge detection.

# me - this DAT
# scriptOp - the OP which is cooking
import numpy as np
import cv2
# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    p = page.appendInt('Threshold', label='Threshold')
    t = p[0]
    t.normMin = 5
    t.normMax = 60
    t.default = 10
    t.min = 5
    t.max = 60
    t.clampMin = True
    t.clampMax = True
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    thresh = scriptOp.par.Threshold.eval()
    image = scriptOp.inputs[0].numpyArray(delayed=True, writable=True)
    if image is None:
        return

    image *= 255
    image = image.astype('uint8')
    gray = cv2.cvtColor(image, cv2.COLOR_RGBA2GRAY)
    gray = cv2.blur(gray, (3, 3))
    edges = cv2.Canny(gray, thresh, 3*thresh, 3)
    output = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGBA)
    scriptOp.copyNumpyArray(output)
    return

The first line in the onCook function retrieves the integer value from the Threshold parameter. We also exit the function when there is no valid video image coming in. For the edge detection, we convert the RGBA image into grayscale and then perform a blur. The cv2.Canny function returns the detected edges as a grayscale image, edges. Finally, we convert edges back into a regular RGBA image, output, for subsequent output as before.

The final TouchDesigner project is available in this GitHub repository.

Script TOP in TouchDesigner

Before we start using MediaPipe in TouchDesigner, we need to be familiar with the use of the Script TOP and Script CHOP first. For the Script TOP, we can generate the image (TOP) directly from Python code. In the following example, we are going to pass through the incoming image from Video Device In TOP to the output window with minimal manipulation in Python inside the Script TOP. The OpenCV in TouchDesigner reference page in the Derivative website is a good starting point.

We create a very simple TouchDesigner project, connecting the Video Device In to the Script TOP and then to the Output window. Note that the Script TOP comes with an associated Script Text DAT. We are going to modify the default Python code inside this text area with the name script1_callbacks.

We can directly edit the Python code inside the Text DAT by turning on the Viewer Active button in the bottom right corner. Alternatively, we can click the Edit button in the parameter window to open the code in your default code editor, Xcode in my case.

# me - this DAT
# scriptOp - the OP which is cooking
import numpy as np
# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    p = page.appendFloat('Valuea', label='Value A')
    p = page.appendFloat('Valueb', label='Value B')
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    image = scriptOp.inputs[0].numpyArray(delayed=True, writable=True)
    # skip the frame when no valid input image is available yet
    if image is None:
        return
    image *= 255
    image = image.astype('uint8')
    scriptOp.copyNumpyArray(image)
    return

The code has 3 functions, onSetupParameters, onPulse and onCook. We only use onCook for this example. Cooking is the update of a node, when necessary, every frame. The detailed explanation can be found on the TouchDesigner Cook page. Essentially, we can consider it a frame-by-frame update of the node we are working on. The first function, onSetupParameters, is triggered by a button in the parameter window under the Setup tab. We can consider it the initialisation of the process. The second function, onPulse, will not be used here since we do not have any pulse buttons or pulse parameters defined. We are going to walk through the simple onCook function.

First, scriptOp (the current node) retrieves its first input, index 0 (the Video Device In), and converts the current video frame into a NumPy array. The format of the array is Height x Width x RGBA. Each colour component is a 32-bit floating point number in the range 0 to 1. In our case, the video size is 1280 x 720. The 2 optional parameters, delayed=True and writable=True, are explained in the TOP class reference. In this example, we aim to convert the 32-bit floating point colour format to 8-bit unsigned integers for output.

Next, each colour value is multiplied by 255 to map the colour range to 0 to 255.

After that, the NumPy array is converted into 8-bit unsigned integer format, uint8.

The last line copies the NumPy array back, with the function copyNumpyArray, into the Script TOP texture for output.
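The float-to-uint8 conversion can be tried on its own with NumPy, outside TouchDesigner. A minimal sketch, using a tiny made-up "frame" instead of a real video image,

```python
import numpy as np

# A made-up 2 x 2 RGBA "frame" of 32-bit floats in the range 0..1,
# the same format numpyArray() returns from the Script TOP input.
image = np.array([[[0.0, 0.5, 1.0, 1.0],
                   [0.2, 0.4, 0.6, 1.0]],
                  [[1.0, 1.0, 1.0, 1.0],
                   [0.0, 0.0, 0.0, 1.0]]], dtype=np.float32)

image *= 255                   # scale 0..1 floats to 0..255
image = image.astype('uint8')  # truncate to 8-bit unsigned integers

print(image.dtype)   # -> uint8
print(image[0, 0])   # 0.5 * 255 = 127.5, truncated to 127
```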

The final TouchDesigner project can be downloaded from this GitHub repository.

MediaPipe in TouchDesigner 1

This is part 1 of a series of tutorials introducing the use of the Google MediaPipe machine learning library in TouchDesigner. It assumes basic knowledge of TouchDesigner and fundamental coding skills in Python. The platform I am working on is a MacBook Pro running macOS 11. TouchDesigner has its own integrated Python programming environment. At the moment of writing, its Python version is 3.7. It also comes with a number of pre-installed external libraries, such as NumPy and OpenCV.

The first installation is the Python programming language environment. I would recommend installing the official 3.7 version from the Python download website. Open the dmg file and run the installer to install the proper Python version on the computer.

After we have Python installed, the next step is the external libraries we would like to use in the Python environment. The target one is MediaPipe. We are going to use the pip command from the macOS Terminal. For general usage of the Terminal, we can refer to the Terminal User Guide from Apple. For those who may have multiple Python versions installed, we can use the specific command pip3.7 to install the external libraries, to make sure they are compatible with TouchDesigner. For a brand new Python environment, the libraries it comes with are:

  • pip
  • setuptools
  • wheel
pip list command

Pip is one of the package management systems we can use in the Python environment. To install an extra library such as MediaPipe, we can type the following in the Terminal.

pip3.7 install --upgrade --user mediapipe
Install MediaPipe with pip

The following screenshot lists all the libraries we have after the installation.

The list of libraries after installing MediaPipe

After we have Python and the MediaPipe library ready, we can go back to TouchDesigner and enable it to link to the external libraries that we have installed outside it.

From the TouchDesigner pull down menu, choose Dialogs – Textport and DATs.

Textport and DATs

Inside the Textport, we can try to import OpenCV and list its current version.

OpenCV

The next step is to customise the external libraries location from the Preferences menu. From the pull down menu, choose TouchDesigner – Preferences – General.

Preferences

Click the folder icon next to the description, Python 64-bit Module Path. It will open up the file location dialog panel. Choose the home directory of your current user account. Since the Python libraries are installed inside the hidden Library folder, we need to press the CMD, SHIFT and period "." keys together to display all the hidden folders. Choose the correct folder location as

Library/Python/3.7/lib/python/site-packages

and click Open.

External modules folder

Click the Save button for the Preferences panel.

Save the Preferences

After we save the preferences, we can verify the installation of MediaPipe from the Textport panel by importing the mediapipe module and listing some of its components.

import mediapipe as mp
print(dir(mp.solutions))
Verify the MediaPipe installation

We are now ready to play with the MediaPipe library in TouchDesigner. The first one will be the face detection facility in a Script TOP.