
Movement in Time, Part 2, Red Flag version (2021)

After the exhibition of the Red Temple version in the USA, I consolidated the display screens and enhanced the artwork by introducing cloth simulation into the rendering of the Chinese calligraphy.

The following videos show the various development phases of this version.

The following video shows the exhibition-ready prototype of Movement in Time, Part 2, Red Flag version.

iFaceDQ (2021)

This artwork is an extended version of the earlier work, Be a Hong Kong Patriot, Part 3 – The Red Scout, adapted for the group exhibition Art Machines – Past/Present, shown at the Indra and Harry Banga Gallery, City University of Hong Kong.

The exhibition leaflet is also available here.

The artwork features a magic mirror that detects the face of the visitor and determines, through machine learning, how likely he/she is to be disqualified from running in the Hong Kong Legislative Council election. Through the mirror, the visitor’s face is also transformed into the face of the closest-matching existing Legislative Council member.

Testing video of iFaceDQ

Two face transformation methods were experimented with. The first is face morphing.

The second method is face swapping.

The face matching algorithm is implemented with the Python scikit-learn library. The following images demonstrate the matching results.
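For readers curious how such a closest match can be computed, here is a minimal sketch using scikit-learn’s nearest-neighbour search. The feature files and their format are assumptions for illustration; the actual software may build its face features differently.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Hypothetical data: one feature vector per Legislative Council member
    # (e.g. flattened facial landmarks) and the visitor's feature vector.
    member_features = np.load('member_faces.npy')   # shape: (n_members, n_dims)
    visitor = np.load('visitor_face.npy')           # shape: (n_dims,)

    # Fit a 1-nearest-neighbour model and query it with the visitor's face.
    matcher = NearestNeighbors(n_neighbors=1).fit(member_features)
    distance, index = matcher.kneighbors(visitor.reshape(1, -1))
    print('Closest member:', index[0, 0], 'at distance', distance[0, 0])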

Finally, the classification and matching details are combined into a single interface.

The artwork also summarises the detection and classification statistics in the form of a pie chart.

National Anthem (2020)

In response to the recent legislative process of the National Anthem Ordinance in Hong Kong, National Anthem is a music-based artwork exploring the use of machine learning in media art. The piece used the Google Magenta library to learn from over 140 national anthems of the world.

It employed the Long Short-Term Memory (LSTM) neural network to perform the training. The artwork is interactive: in the exhibition, the audience can input a sequence of 5 musical notes from a MIDI keyboard, and the custom software generates a new, 15-second version of a national anthem for them.
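As a rough illustration of this generation step, here is a minimal sketch following the Magenta melody RNN API. The bundle file name, seed pitches, and tempo are assumptions; the actual artwork trained its own model on the anthem corpus.

    import note_seq
    from note_seq.protobuf import generator_pb2, music_pb2
    from magenta.models.melody_rnn import melody_rnn_sequence_generator
    from magenta.models.shared import sequence_generator_bundle

    # Load a trained melody RNN bundle ('anthems.mag' is a hypothetical
    # file name standing in for the model trained on the anthem corpus).
    bundle = sequence_generator_bundle.read_bundle_file('anthems.mag')
    generator = melody_rnn_sequence_generator.get_generator_map()['basic_rnn'](
        checkpoint=None, bundle=bundle)
    generator.initialize()

    # Seed sequence: five notes played on the MIDI keyboard (example pitches).
    seed = music_pb2.NoteSequence()
    for i, pitch in enumerate([60, 62, 64, 65, 67]):
        seed.notes.add(pitch=pitch, start_time=i * 0.5,
                       end_time=(i + 1) * 0.5, velocity=80)
    seed.total_time = 2.5
    seed.tempos.add(qpm=120)

    # Continue the melody until the 15-second mark.
    options = generator_pb2.GeneratorOptions()
    options.generate_sections.add(start_time=seed.total_time, end_time=15.0)
    anthem = generator.generate(seed, options)
    note_seq.sequence_proto_to_midi_file(anthem, 'new_anthem.mid')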

Here is a video of the screen capture of the performance.

https://www.youtube.com/watch?v=sZ5Z-JWrdoQ

The image below is a typical melody generated by the software from the initial 5 input notes.

Exhibition

The artwork participated in the group exhibition Castles in the Air, 3 Apr – 30 May 2020, at the Karin Weber Gallery, Hong Kong.

Castles in the Air exhibition (Karin Weber Gallery)

Movement in Time, Part 2, live version (2020)

The live version of Movement in Time, Part 2 was proposed for an exhibition originally scheduled to take place in Shenzhen in mid 2020. Owing to the COVID-19 situation, the exhibition has been postponed with no new date yet.

The work is a modified version of the original Movement in Time, Part 2. Instead of using fighting sequences from existing martial arts films, the live version employs a camera to capture the movement of visitors to generate the cursive-style Chinese calligraphic characters.

The first approach made use of a regular webcam to capture the optical flow of the visitor’s movement.

The second approach made use of a depth camera (PrimeSense) to capture skeleton movement and identify the closest-matching Chinese character.

Movement in Time, Part 2, Red Temple version (2019)

Hong Kong in Poor Images

This is a re-run of the original Movement in Time, Part 2 artwork with two new fighting scenes from two martial arts films sharing the same Chinese title, 火燒紅蓮寺:

  • The Burning of the Red Lotus Temple (1963)
  • Burning Paradise (1994)

It was created for the exhibition Hong Kong in Poor Images (curated by Hong Zeng), shown at the Ely Center of Contemporary Art, New Haven, 12 Jan – 16 Feb 2020.

The Movement in Time series explores the creative use of movement/motion data obtained from found footage of motion pictures. The Part 2 series investigates the motion data from the fighting sequences of martial arts films from Hong Kong, Taiwan and China. Once the motion data is extracted from different scenes, it is matched against a database of cursive-style Chinese calligraphic characters known as the Thousand Character Classic 千字文. The actual matching is based on machine learning algorithms. The matched Chinese characters are shown on the screen as animated writing. This version also gathers all the matched characters into poetic lines. These lines, however, do not read as a sensible poem.

In order to obtain the movement/motion data, the custom software that I developed makes use of the computer vision technique known as optical flow. It tracks the flow of pixels across consecutive frames in time. The visualisation of optical flow resembles a low-resolution image digitised in poor quality. Nevertheless, it is just detailed enough to give the viewer a sense of what the motion is.
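As a concrete illustration of the technique, here is a minimal sketch of dense optical flow using OpenCV’s Farneback algorithm; the clip file name is a placeholder, and the exhibited software visualises the flow in its own way.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture('fight_scene.mp4')   # placeholder clip name
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow: one (dx, dy) vector per pixel between frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # Direction becomes hue, magnitude becomes brightness.
        hsv = np.zeros_like(frame)
        hsv[..., 0] = ang * 180 / np.pi / 2
        hsv[..., 1] = 255
        hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
        cv2.imshow('optical flow', cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR))
        if cv2.waitKey(1) == 27:   # press Esc to quit
            break
        prev_gray = gray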

Post-exhibition development

After the exhibition in New Haven, I reworked the software with the same film clip from The Burning of the Red Lotus Temple, using different approaches. Here are some of the experiments. The final artwork may be shown in an upcoming exhibition in Shenzhen in July 2020.

Be a Hong Kong Patriot, Part 3, The Red Scout (2019)

做愛國港人之三,紅色童子軍

Introduction

Be a Hong Kong Patriot, Part 3 – The Red Scout is a joke played on the participating audience. More than a decade ago, in Part 1 of the series, Love Takes the Victoria Peak, the artist prepared a dildo attached to the Chinese national flag that waved according to the current Hong Kong Hang Seng Stock Index. Part 2, The Fuzzy Wanker, associated the Internet traffic of the listed companies from the local stock market with the tangible flow of small metal particles. Part 3 of the series, The Red Scout, judges a member of the audience based on a portrait photograph, telling whether he/she is patriotic or not.

Be a Hong Kong Patriot, Part 3 – The Red Scout, exhibition at the Lumenvisum

Background

Similar to Part 1 and Part 2, the Chinese title of the artwork, 紅色童子軍, was modeled upon the Chinese revolutionary propaganda opera 紅色娘子軍 (The Red Detachment of Women, 1962). The content of the opera, however, has no relationship with this project.

The Red Detachment of Women, 1962 (image from Wikipedia)

Information about artificial intelligence (AI) and face recognition enjoys ever-growing popularity, especially in relation to surveillance applications in China. Academic journals also carry articles on the use of AI and face recognition to predict political stance and voting preference:

Yilun Wang and Michal Kosinski, “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images,” Journal of Personality and Social Psychology, February 2018, Vol. 114, Issue 2, Pages 246–257.

Jakub Samochowiec, Michaela Wänke and Klaus Fiedler, “Political ideology at face value,” Social Psychological and Personality Science, first published July 19, 2010.

Nicholas O. Rule and Nalini Ambady, “Democrats and Republicans can be differentiated from their faces,” PLoS ONE, first published January 18, 2010.

In the project, custom software was developed to classify a human face and determine whether its owner is patriotic or not, by training an artificial neural network with hundreds of known faces of Hong Kong government officials, councilors, and political celebrities. Nevertheless, there are two known issues:

  1. The size of the training set is relatively small (around 300 portrait photos).
  2. Since the application is a typical supervised learning task, who is going to label the portrait photos for training?

As a result, the project is not a piece of artwork working with machine learning; it is an artwork about machine learning, and about the assumptions and limitations of machine learning in general. Eventually, the artist took up the role of labeling all the photos, based on the public opinions and political stances of the photo owners available in the public domain. The key criterion was whether they demonstrated blind loyalty to the Chinese government.
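For a sense of scale, here is a minimal sketch of such a binary face classifier in Keras. The directory layout, image size, and network shape are assumptions; the actual software and its training setup may differ.

    import tensorflow as tf

    # Assumed layout: 'faces/patriotic' and 'faces/unpatriotic' subfolders,
    # each holding the labeled portrait photos.
    train = tf.keras.utils.image_dataset_from_directory(
        'faces', label_mode='binary', image_size=(128, 128), batch_size=16)

    # A small convolutional network for the patriotic/unpatriotic decision.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.fit(train, epochs=10)

With only around 300 photos, such a network will almost certainly overfit, which is precisely the first of the two issues listed above.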

Experiments

Here are samples of the portrait photos used for training.

Training data set

The first experiment with the data set was an unsupervised clustering into 3 groups.

The clustering is based on the facial landmarks of each photo. The software employed the Python binding of the dlib library to identify the facial landmarks and used scikit-learn to perform the clustering. The following video demonstrates the extraction of facial landmarks from each photo in the data set.
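In code, the clustering pipeline can be sketched roughly as follows; the photo folder name and the bounding-box normalisation are assumptions for illustration.

    import glob
    import dlib
    import numpy as np
    from sklearn.cluster import KMeans

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

    def face_vector(path):
        # Turn the 68 dlib landmarks of one portrait into a feature vector,
        # normalised by the face bounding box so scale does not dominate.
        img = dlib.load_rgb_image(path)
        face = detector(img, 1)[0]          # assumes one face per photo
        shape = predictor(img, face)
        pts = np.array([(p.x - face.left(), p.y - face.top())
                        for p in shape.parts()], dtype=np.float64)
        return (pts / max(face.width(), face.height())).ravel()

    paths = sorted(glob.glob('portraits/*.jpg'))   # hypothetical folder
    X = np.array([face_vector(p) for p in paths])
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)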

The second experiment was to label all the photos in the dataset and train a deep neural network (a convolutional network) for later classification use. The following images are the average faces (eigenfaces) of the patriotic and unpatriotic groups. During the data labeling process, the artist reflected upon the classification task and brought up the following questions:

Who has the authority to classify another person as patriotic or unpatriotic?

Based on what evidence that one can be classified as patriot or not?

Is there any governance of the data labeling process in the AI industry?

These questions are a direct response to a few incidents in Hong Kong around 2016, when a number of Legislative Council candidates were disqualified from participating in the election due to political opinions they had expressed on social media. The Hong Kong government used manual text mining to classify them as unsuitable to run in the election.

Patriotic vs. unpatriotic average face
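A minimal sketch of how such average faces can be computed, assuming the labeled portraits are aligned, of equal size, and stored in two hypothetical folders:

    import glob
    import cv2
    import numpy as np

    def average_face(folder):
        # Pixel-wise mean of all aligned, same-sized portraits in a folder.
        imgs = [cv2.imread(p).astype(np.float64)
                for p in glob.glob(folder + '/*.jpg')]
        return (sum(imgs) / len(imgs)).astype(np.uint8)

    cv2.imwrite('patriotic_avg.jpg', average_face('patriotic'))
    cv2.imwrite('unpatriotic_avg.jpg', average_face('unpatriotic'))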

The third experiment was to explore the use of various face recognition service providers. The project tested the Microsoft Azure face recognition service and the Face++ cognitive services.

The project eventually chose Face++ because of its Chinese background. It is a product of the company MEGVII, which was blacklisted from doing business in the States by the Trump administration.
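Calling the service reduces to a single HTTP request. The sketch below follows the Face++ Detect API as documented; the key, secret, and image file are placeholders.

    import requests

    # Face++ Detect endpoint; api_key/api_secret values are placeholders.
    resp = requests.post(
        'https://api-us.faceplusplus.com/facepp/v3/detect',
        data={'api_key': 'YOUR_KEY',
              'api_secret': 'YOUR_SECRET',
              'return_attributes': 'gender,age,emotion,beauty'},
        files={'image_file': open('visitor.jpg', 'rb')},
    )
    print(resp.json())   # attributes for each detected face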

The fourth experiment was to develop software to match the audience member’s face with the closest face in the database classified as patriotic. It serves as a recommendation to the audience: if he/she wants to be patriotic, the matched face is the closest model that he/she can consider changing into.

The fifth experiment was to develop another piece of software to swap the face of the audience member with the known faces of the patriotic group. The software is an enhanced, live version of the face swapping demonstrated in the official OpenCV documentation.
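A simplified, single-transform version of the idea looks like this; it aligns the source face with one affine transform instead of the piecewise triangle warping used in the OpenCV tutorial, so treat it as a sketch rather than the exhibited software.

    import cv2
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

    def landmarks(img):
        # 68 facial landmark points of the first detected face.
        face = detector(img, 1)[0]
        return np.array([(p.x, p.y) for p in predictor(img, face).parts()],
                        dtype=np.float32)

    def swap_face(src, dst):
        # Warp the source face onto the destination face and blend it in.
        src_pts, dst_pts = landmarks(src), landmarks(dst)
        M, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
        warped = cv2.warpAffine(src, M, (dst.shape[1], dst.shape[0]))
        hull = cv2.convexHull(dst_pts.astype(np.int32))
        mask = np.zeros(dst.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, hull, 255)
        x, y, w, h = cv2.boundingRect(hull)
        center = (x + w // 2, y + h // 2)
        # Poisson blending hides the seam between the two faces.
        return cv2.seamlessClone(warped, dst, mask, center, cv2.NORMAL_CLONE)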

Exhibition

The exhibition offered a space for the audience to experience all-encompassing surveillance devices. The major component is a piece of photo-taking software built around an artificial neural network trained with the facial features of hundreds of local government officials, politicians, and celebrities. It differentiates whether a visitor is patriotic or not by analyzing his/her portrait photograph.

Before the audience entered the main exhibition venue, they were presented with a warning that extensive video surveillance would be in place, as a performative commentary of the artwork.

The exhibition venue turns into a bureaucratic office space where the audience needs to queue up for a patriotic test. The software can also recommend how they can ‘improve’ their faces to be more patriotic. Finally, they have to speak a statement in front of another camera that swaps their faces with members of the patriotic group. The artwork draws visual reference from the scenography of Roy Andersson’s films to create the sense of bureaucracy in modern society. Here is one visual reference from the Telegraph, UK.

Roy Andersson scene from the Telegraph
Exhibition floor plan of Lumenvisum

Waiting Area

Each visitor has to get a ticket from the ticket machine and wait until the officer announces her/his number before going into the Photo-taking Area.

Before the visitor’s turn to enter, she/he can either sit in the Waiting Area or explore a few facial recognition devices, such as the emotion recognition and facial landmark detection devices shown below.

Emotion detection device
Facial features detection device

Along the wall of the gallery, the visitor can also find the eigenfaces and other clustering images from the database as photographic displays.

Photo-taking Area

The Photo-taking Area is the main interaction area where the patriotic test takes place. In addition to the service desk for taking photos, a display monitor on the main wall shows all the sample portrait photographs of the dataset and the brief procedure for extracting the facial features for neural network training. It also reminds the audience that in China, portraits of the party leaders appear in every government office, corporation, and even ordinary household.

In this area, the first photo predicts whether the visitor is patriotic or not according to the trained artificial neural network model.

With the second photo, the system performs a facial recognition test and lists the personal details of the visitor, such as gender, age, emotion, health status, and a beauty index, through the service provided by the Chinese company MEGVII, one of those banned by the Trump administration from conducting business with the States. Apparently, the portrait photos go to servers in China, and we may have little control over how they are used.

With the third photo, the system identifies the member of the patriotic group whose facial features are closest to the visitor’s, and recommends that she/he perform plastic surgery according to this model face if she/he wants to be more patriotic.

The officer will print a hard-copy record of all the face recognition and patriotic test results for the visitor.

Declaration & Confession Area

The officer will then guide the visitor to the Declaration & Confession Area. Depending on the patriotic result, the visitor will be invited to read one of two statements. If the visitor is classified as patriotic, she/he will read a declaration asserting the patriotic status. If the visitor is classified as unpatriotic, she/he will need to read a confession statement and promise to become patriotic in the future. While the statement is being read, the visitor’s face will also be swapped with that of one of the Hong Kong government officials. The ‘performance’ will be broadcast live to a display monitor in the Waiting Area. This section responds to a phenomenon common in China, in which criminal suspects are often required to confess in front of the cameras of Chinese TV news channels in live broadcasts.

Here is a collection of popular confession videos from Chinese TV, shown in the exhibition area when there are no visitors in the venue.

Making confession and declaration with face swapping effect
Live broadcast of the face swapped confession video

After the face swapping performance, the visitor will be asked to store her/his personal record in a file cabinet in the Waiting Area. If she/he agrees, the patriotic test record will be kept in one of the two drawers of the cabinet, depending on the test result. Every visitor in the exhibition can then read the test results of those who were willing to share their secret.

File cabinet storage of the patriotic test results
File the patriotic test record

If any visitor accidentally opens the last drawer of the cabinet (labeled ‘confidential’), she/he will find the live display of a hidden security camera overlooking everything in the exhibition venue. The installation made use of a 360° security camera created by the Chinese technology company Mi, which is notorious for sending users’ information to its Chinese servers without their prior consent.

Hidden security camera display

Finally, the visitor can depart the exhibition through the exit or she/he can stay in the Waiting Area to observe other audience’s performance, through the live display monitor.

The project also has a separate website and a Facebook page for communication with the general public. The source code of the software developed for the project is open source and distributed in a GitHub repository.

Walk through of the exhibition
Interview by Lumenvisum

The exhibition of the project was funded by the Hong Kong Arts Development Council, and the venue was sponsored by Lumenvisum. The following gallery contains photos documented by Lumenvisum during the exhibition opening.

Movement in Space, Part 2 (2018)

Movement in Space, Part 2 is an interactive installation in which up to 4 participants control 4 sets of animated graphics generated from harmonic motion and shown through a hologram display. The animated graphics can influence one another through the connection and disconnection of physical cables.

The artwork was part of the Algorithmic Art: Shuffling Space & Time exhibition at Hong Kong City Hall from 27 Dec 2018 to 10 Jan 2019. The exhibition was one of the public events of Art Machines: International Symposium of Computational Media Art, which took place 4–7 Jan 2019 at the School of Creative Media, City University of Hong Kong. Movement in Space, Part 2 was invited to join the exhibition as one of the local works showcasing the concept of algorithmic art.

Algorithmic Art: Shuffling Space & Time promotion video

Exhibition view at the Hong Kong City Hall

Part 2 is an extension of the original web version of Movement in Space. It is also built on top of a pure-software implementation of the harmonograph, without the hardware details of constructing pendulums.

Harmonograph, image from karlsims.com

The custom software was built in Processing around the idea of concatenating sequences of trigonometric functions such as sine and cosine. The imagery may resemble early computer art and oscilloscope art, such as the works of Ben Laposky.

Oscillon by Ben Laposky, image from http://dada.compart-bremen.de

It starts with the simple parametric formulae that draw an ellipse in the 2-dimensional plane.

x = A · cos(t)
y = B · sin(t)

where A and B are two numbers and t is the time step of the animation. It was then extended to 3-dimensional space with more parameters for more sophisticated control.

x = A · cos(B·t) + C · sin(t + D)
y = E · sin(F·t) + G · cos(t + H)
z = I · cos(J·t) · sin(K·t + L)

where A, B, C, D, E, F, G, H, I, J, K and L are 12 numbers ranging from -1 to 1. By changing the 12 numbers, the software can generate very sophisticated drawings. In addition, the artwork consists of 4 drawing units. The outputs (x, y, z) of one drawing unit can be redirected to the inputs (A – L) of another drawing unit, forming a simplified version of an artificial neural network. In the installation, the 12 numbers are controlled through an iPad interface, while the connection from one unit to another is made by physically plugging in a signal cable.

Connection of various drawing units to form a network

Each of the four units is equipped with physical sockets and cables to connect the outputs from one unit to the inputs of another.
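To make the chaining concrete, here is a minimal Python sketch of two drawing units wired together. The parameter values and the wiring (outputs of unit 1 into the first three inputs of unit 2) are illustrative choices, not the exhibited configuration.

    import numpy as np

    def drawing_unit(params, t):
        # One harmonic drawing unit: 12 parameters in, one 3D point out.
        A, B, C, D, E, F, G, H, I, J, K, L = params
        x = A * np.cos(B * t) + C * np.sin(t + D)
        y = E * np.sin(F * t) + G * np.cos(t + H)
        z = I * np.cos(J * t) * np.sin(K * t + L)
        return x, y, z

    rng = np.random.default_rng(0)
    p1 = rng.uniform(-1, 1, 12)
    p2 = rng.uniform(-1, 1, 12)

    points = []
    for step in range(2000):
        t = step * 0.01
        x1, y1, z1 = drawing_unit(p1, t)
        p2[:3] = (x1, y1, z1)      # the 'signal cable' between two units
        points.append(drawing_unit(p2, t))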

The video below is a simulation of the 4 drawing units viewed from 4 directions (North, West, South and East). Each participant has his/her own color-coded animated drawing.

It is a revolving version of the four animated drawings in 3-dimensional space.

The video below is a software simulation of the hologram display of the four directions.

Video documentation of the artwork exhibited in the City Hall