Be a Hong Kong Patriot, Part 3 – The Red Scout is a joke played with the participating audience. More than a decade ago, in Part 1 of the series – Love Takes the Victoria Peak, the artist prepared a dildo, with the Chinese national flag attached, that waved according to the current Hong Kong Hang Seng Stock Index. Part 2 – The Fuzzy Wanker associated the Internet traffic of companies listed on the local stock market with the tangible flow of small metal particles. Part 3 of the series – The Red Scout judges a member of the audience from a portrait photograph, telling whether he/she is patriotic or not.
Similar to Part 1 and Part 2, the Chinese title of the artwork, 紅色童子軍, was modeled upon the 1962 Chinese revolutionary propaganda opera 紅色娘子軍 (The Red Detachment of Women). Nevertheless, the content of the opera has no relationship with this project.
Information about artificial intelligence (AI) and face recognition enjoys ever-growing popularity, especially in relation to surveillance applications in China. Academic journals also carry articles on the use of AI and face recognition to predict political stance and voting preference.
Yilun Wang and Michal Kosinski, “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images,” Journal of Personality and Social Psychology 114, no. 2 (February 2018): 246–257.
In the project, custom software was developed to classify a human face and determine whether its owner is patriotic or not, by training an artificial neural network with hundreds of known faces of Hong Kong government officials, councilors, and political celebrities. Nevertheless, there are two known issues:
- The size of the training sets is relatively small (around 300 portrait photos).
- Since the application is a typical supervised-learning task, who is going to label the portrait photos for training?
As a result, the project will not be a piece of artwork working with machine learning. It will be an artwork about machine learning, and about the assumptions and limitations of machine learning in general. Eventually, the artist took up the role of labeling all the photos based on the public opinions and political stances of the photo owners that were available in the public domain. The key criterion was whether they demonstrated blind loyalty to the Chinese government.
Here are samples of the portrait photos used for training.
The first experiment with the dataset was an unsupervised clustering into 3 groups.
The clustering is done on the facial landmarks of each photo. The software employed the Python binding of the dlib library to identify the facial landmarks and used scikit-learn to perform the clustering. The following video demonstrates the extraction of facial landmarks from each photo in the dataset.
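The dlib-plus-scikit-learn pipeline described above can be sketched as follows. In the real software, dlib's face detector and 68-point shape predictor would turn each portrait into a vector of landmark coordinates; here, synthetic landmark vectors stand in for that output (the group sizes, normalization step, and number of landmarks are assumptions on my part, not the artist's published settings):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for dlib output: each face yields 68 (x, y) landmark points.
# Three synthetic "face shape" groups of 40 faces each.
centers = rng.normal(size=(3, 68, 2))
faces = np.vstack([c[None] + 0.02 * rng.normal(size=(40, 68, 2))
                   for c in centers])

# Normalize each face: move the landmark centroid to the origin and scale
# to unit norm, so the clustering compares shapes rather than photo sizes.
faces = faces - faces.mean(axis=1, keepdims=True)
flat = faces.reshape(len(faces), -1)
flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)

# Unsupervised clustering into 3 groups, as in the first experiment.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(flat)
print(np.bincount(labels))  # three clusters of 40 faces each
```

With real photos, the `flat` matrix would simply be filled from `dlib.shape_predictor` output instead of random numbers; the clustering call is unchanged.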
The second experiment was to label all the photos in the dataset and train a deep neural network (a convolutional network) for later classification use. The following images are the average faces (Eigenfaces) of the patriotic and unpatriotic groups. During the data-labeling process, the artist reflected upon the classification task and raised the following questions:
Who has the authority to classify another person as patriotic or unpatriotic?
Based on what evidence can one be classified as a patriot or not?
Is there any governance of the data labeling process in the AI industry?
These questions are a direct response to a few incidents in Hong Kong around 2016, when a number of Legislative Council candidates were disqualified from the election due to political opinions they had expressed on social media. The Hong Kong government used manual text mining to classify them as unsuitable to run in the election.
The fourth experiment was to develop software to match the audience member's face with the closest face in the database that is classified as patriotic. It is used as a recommendation to the audience: if he/she wants to be patriotic, the matched face is the closest model he/she can consider changing into.
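The matching step amounts to a nearest-neighbour search over face feature vectors. A sketch with synthetic embeddings standing in for the features of the known patriotic faces (the 128-dimensional embedding size and the Euclidean metric are assumptions; a face-recognition network such as dlib's would supply the real vectors):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in embeddings: one 128-d feature vector per known patriotic face.
patriotic = rng.normal(size=(300, 128))

# The visitor's embedding: here, a near-duplicate of face number 42.
visitor = patriotic[42] + 0.01 * rng.normal(size=128)

# Recommend the closest patriotic face by Euclidean distance.
dists = np.linalg.norm(patriotic - visitor, axis=1)
best = int(np.argmin(dists))
print(best)  # index of the recommended model face
```

The index `best` would then be mapped back to the portrait shown to the visitor as the suggested "model".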
The fifth experiment was to develop another piece of software to swap the face of the audience member with the known faces of the patriotic group. The software is an enhanced live version of the face-swap demonstration in the official OpenCV documentation.
The exhibition offered a space for the audience to experience all-encompassing surveillance devices. The major component is a piece of photo-taking software that comprises an artificial neural network trained with the facial features of hundreds of local government officers, politicians, and celebrities. It determines whether a visitor is patriotic or not by analyzing his/her portrait photograph.
Before the audience entered the main exhibition venue, they were presented with a warning that extensive video surveillance would be in place, as a performative commentary of the artwork.
The exhibition venue turns into a bureaucratic office space where the audience needs to queue up for a patriotic test. The software can also recommend how they can ‘improve’ their faces to be more patriotic. Finally, they have to read a statement in front of another camera that swaps their faces with members of the patriotic group. The artwork drew visual reference from the scenography of Roy Andersson’s films to create the sense of bureaucracy in modern society. Here is one visual reference from the Telegraph, UK.
Each visitor has to take a ticket from the ticket machine and wait until the officer announces her/his number to enter the photo-taking area.
Before the visitor’s turn to enter, she/he either sits in the Waiting Area or explores a few facial-recognition devices, such as the emotion-recognition and facial-landmark-detection devices shown below.
Along the wall of the gallery, the visitor can also find the Eigenfaces and other clustering images from the database as photographic displays.
The Photo-taking Area is the main interaction area, where the patriotic test takes place. In addition to the service desk for photo-taking, a display monitor on the main wall shows all the sample portrait photographs in the dataset and the brief procedure for extracting the facial features for neural-network training. It also reminds the audience that in China, portraits of the party leaders appear in every government office, in corporations, and even in ordinary households.
In this area, the first photo predicts whether the visitor is patriotic or not according to the trained artificial-neural-network model.
For the second photo, the system performs a facial-recognition test and lists the visitor’s personal details, such as gender, age, emotion, health status, and a beauty index, through a service provided by the Chinese company MEGVII. This company is one of those banned from doing business with the United States by the Trump administration. Apparently, the portrait photos go to servers in China, and we may have little control over how they are used.
For the third photo, the system identifies the member of the patriotic group whose facial features are closest to the visitor’s, and recommends that she/he undergo plastic surgery according to the model face if she/he wants to be more patriotic.
The officer will print a hardcopy record of all the face recognition and patriotic test results for the visitor.
Declaration & Confession Area
The officer will then guide the visitor to the Declaration & Confession Area. Depending on the test result, the visitor will be invited to read one of two statements. If the visitor is classified as patriotic, she/he will read a declaration to assert the patriotic status. If the visitor is classified as unpatriotic, she/he will need to read a confession statement and promise to become patriotic in the future. While the statement is being read, the visitor’s face will also be swapped with that of one of the Hong Kong government officials. The ‘performance’ will be broadcast live to a display monitor in the Waiting Area. This section is a response to a phenomenon common in China, in which criminal suspects are often required to confess in front of the cameras of Chinese TV news channels, with live broadcast.
Here is a collection of popular confession videos from Chinese TV, shown in the exhibition area when there are no visitors in the venue.
After the face-swapping performance, the visitor will be asked to store her/his personal record in a file cabinet in the Waiting Area. If she/he agrees, the patriotic-test record will be kept in one of two drawers of the cabinet, depending on the test result. In this way, every visitor to the exhibition can read the test results of the others, in case they are willing to share their secrets.
If any visitor accidentally opens the last drawer of the cabinet (labeled ‘confidential’), she/he will find the live display of a hidden security camera overlooking everything in the exhibition venue. The installation made use of a 360° security camera made by the Chinese technology company Mi. The company is notorious for sending users’ information to its Chinese servers without users’ prior consent.
Finally, the visitor can leave the exhibition through the exit, or she/he can stay in the Waiting Area to observe the performances of other visitors through the live display monitor.
The project also has a separate website and a Facebook page for communication with the general public. The software developed for the project is open source and distributed in a GitHub repository.
The exhibition of the project was funded by the Hong Kong Arts Development Council, and the venue was sponsored by Lumenvisum. The following gallery contains photos documented by Lumenvisum during the exhibition opening.