Now that we have learnt the basics of the Script TOP, we are ready to integrate the MediaPipe functions into TouchDesigner. The first example is face detection. We use the Script TOP only to display the bounding boxes of the detected faces, without sending the face details elsewhere for processing. In the next example, we shall send the bounding box details to a Script CHOP.
To create a mirror-image effect, we use a Flip TOP with a horizontal flip. We also add a Resolution TOP to halve the original 1280 x 720 to 640 x 360 for better performance. Alternatively, we can achieve the same result by changing the Output Resolution of the Flip TOP in its Common tab.
# me - this DAT
# scriptOp - the OP which is cooking

import numpy
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils
face = mp_face.FaceDetection(
    min_detection_confidence=0.7
)

# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    input = scriptOp.inputs[0].numpyArray(delayed=True)
    if input is not None:
        # drop the alpha channel: MediaPipe expects an RGB image
        frame = cv2.cvtColor(input, cv2.COLOR_RGBA2RGB)
        # MediaPipe and the TOP use opposite vertical origins
        frame = cv2.flip(frame, 0)
        # scale the 0-1 float pixels to 8-bit values
        frame *= 255
        frame = frame.astype('uint8')
        results = face.process(frame)
        if results.detections:
            for detection in results.detections:
                mp_drawing.draw_detection(frame, detection)
        # flip back before writing to the Script TOP
        frame = cv2.flip(frame, 0)
        scriptOp.copyNumpyArray(frame)
    return
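The frame conversion inside onCook, i.e. dropping the alpha channel, flipping vertically, and scaling the 0-1 floats to 8-bit, can be sketched outside TouchDesigner with plain NumPy. The 4 x 4 test frame and the helper name here are our own stand-ins, not part of TouchDesigner or MediaPipe:

```python
import numpy as np

def prepare_frame(rgba):
    """Convert a float32 RGBA frame (0-1 range, bottom-left origin)
    to the uint8 RGB, top-left-origin layout MediaPipe expects."""
    rgb = rgba[:, :, :3]   # drop the alpha channel
    rgb = np.flipud(rgb)   # flip vertically (origin conventions differ)
    return (rgb * 255).astype('uint8')

# a dummy 4 x 4 RGBA frame, standing in for the Script TOP's pixel array
frame = np.ones((4, 4, 4), dtype='float32') * 0.5
out = prepare_frame(frame)
print(out.shape, out.dtype)  # (4, 4, 3) uint8
```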
First, we import MediaPipe into the Python code. Next, we define a few variables: mp_face for the face detection solution, mp_drawing for visualising the detected faces, and finally face, the face detection class instance, created with a minimum detection confidence value.
To prepare the video for processing, we convert each RGBA frame to RGB. The image orientation MediaPipe face detection expects is vertically flipped compared with the TouchDesigner TOP, so we first flip the image vertically before sending it to face.process(frame). After the mp_drawing utility draws the detection results onto the frame, we flip the image vertically again for output to the Script TOP. The results.detections object contains the details of all the detected faces. Each face is visualised with a bounding box and six key points marking the two ears, two eyes, nose tip and mouth centre.
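Each entry in results.detections carries its bounding box in normalized coordinates (detection.location_data.relative_bounding_box, with xmin, ymin, width and height in the 0-1 range), so converting to pixel coordinates just scales by the frame size. A minimal sketch, where the helper name is ours and not part of MediaPipe:

```python
def bbox_to_pixels(xmin, ymin, width, height, frame_w, frame_h):
    """Scale a normalized MediaPipe bounding box to pixel coordinates."""
    return (int(xmin * frame_w), int(ymin * frame_h),
            int(width * frame_w), int(height * frame_h))

# e.g. a box covering the centre of a 640 x 360 frame
print(bbox_to_pixels(0.25, 0.25, 0.5, 0.5, 640, 360))  # (160, 90, 320, 180)
```

Pixel values like these are what we will send on to a Script CHOP in the next example.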
The TouchDesigner project file is in this GitHub repository.