50 . Shades of Grey (2015)

The artwork won the Grand Prize of the Japan Media Arts Festival 2015, Art Division. Details are available at the festival website.

The artwork 50 . Shades of Grey is part of the Early White Exhibition, curated by Cally Yu at 1A Space Gallery. In this work, I created a very simple graphical pattern of 50 shades of grey with different programming languages I had learnt in the past, all of which are now obsolete and no longer popular. The graphical pattern is extremely simple. I decided to show only the code, and not the image, in the exhibition.

50 . Shades of Grey

The work is both a conceptual and a visual art piece. The visual part is a simple computer graphics pattern displaying 50 shades of grey. At the same time, it documents my training as a computer artist, using programming languages to create imagery on screen. Software tools come and go at increasing speed, echoing the ever-shortening cycle of IT trends. I taught myself all of these programming languages over the last thirty years. Some of them were popular at certain points in creative art/design history. Some have disappeared from the industry. The fear of obsolescence is a haunting theme in the computer business, as well as in the digital arts. In this work, I go back to these old programming languages, which I worked with in different years of my life, to generate the same image: fifty shades of grey ranging from black to white, as reflected in the Chinese title of the work, Half a Hundred, Half White.

The audience can compare the different programming languages as poetic text and their relative places in history. The programming languages I have chosen are Basic, Fortran, Pascal, Lisp, Lingo (Director), and ActionScript (Flash). These languages were once popular and are now obsolete.

Basic was released around 1964. I came across Basic in leisure reading in 1981 and last used it in 1985 in a course project.

for i=1 to shades

Fortran was released around 1958. I learnt computer programming with Fortran in 1981, in the first course of my university study, and last coded in it during an internship in 1983.

The code below uses the gnufor2 interface between GNU Fortran and Gnuplot.

program grey
use gnufor2
implicit none
integer, parameter  :: Nm = 800
integer             :: rgb(3, Nm, Nm)
integer             :: i, j, k
integer             :: shades
integer             :: step
integer             :: c
shades = 50
step = Nm/shades
do i = 1, shades
     c = (i-1)*255/shades
     ! start each band at (i-1)*step+1 so the indices stay within 1..Nm
     do j = (i-1)*step+1, i*step
          do k = 1, Nm
               rgb(1,j,k) = c
               rgb(2,j,k) = c
               rgb(3,j,k) = c
          end do
     end do
end do
call image(rgb, pause=-1.0, persist='no')
end program grey

Pascal was released around 1970. I used Pascal in my second university course in 1982 and last used it in a computer graphics course in 1984.

unit Unit1;

{$mode objfpc}{$H+}

interface

uses
  Classes, SysUtils, FileUtil, Forms, Controls, Graphics, Dialogs, StdCtrls;

type
  { TGrey }
  TGrey = class(TForm)
    procedure FormPaint(Sender: TObject);
  private
    { private declarations }
  public
    { public declarations }
  end;

var
  Grey:      TGrey;
  idx:       integer;
  shades:    integer;
  step:      integer;
  col:       integer;

implementation

{$R *.lfm}

{ TGrey }

procedure TGrey.FormPaint(Sender: TObject);
begin
  shades := 50;
  step := Round(Grey.Width/shades);
  for idx := 0 to (shades-1) do
  begin
    col := 255 - Round(idx*255/shades);
    Canvas.Brush.Color := RGBToColor(col, col, col);
    Canvas.FillRect(0, 0, (shades-idx)*step, Grey.Height);
  end;
end;

end.

Lisp was released around 1959. I first encountered Lisp in a programming language course in 1983, and my last program in Lisp was for the artificial intelligence course in 1984.

(ql:quickload :lispbuilder-sdl)
(defvar *SIZE* 800)
(defvar *SHADES* 50)
(defvar *STEP* (/ *SIZE* *SHADES*))
(defvar *COL* 0)
(defun box (n)
   (cond ((>= n *SHADES*) nil)
         ((< n *SHADES*) (progn
                     ;; floor keeps the colour an integer (/ would give a rational)
                     (setq *COL* (floor (* n 255) *SHADES*))
                     (sdl:draw-box-* (* *STEP* n) 0 *STEP* *SIZE*
                         :color (sdl:color :r *COL* :g *COL* :b *COL*))
                     (box (+ n 1))))))
(sdl:with-init ()
   (sdl:window *SIZE* *SIZE* :title-caption "50 . Shades of Grey")
   (setf (sdl:frame-rate) 60)
   (sdl:with-events ()
      (:quit-event () t)
      (:key-down-event () (sdl:push-quit-event))
      (:idle ()
         (sdl:clear-display sdl:*black*)
         (box 0)
         (sdl:update-display))))

Lingo (Director) was released around 1993. I first wrote Lingo during my master's degree study in 1996 and last coded in Lingo in 2002 for an interactive installation project.

on exitFrame me
   width = the stageRight - the stageLeft
   height = the stageBottom - the stageTop
   shades = 50
   step = width/shades
   objImage = _movie.stage.image
   repeat with i = 1 to shades
      c = integer((i-1)*255/shades)
      objImage.fill(point((i-1)*step,0), point(i*step,height), rgb(c,c,c))
   end repeat
end

ActionScript (Flash) was released around 2000. I first employed ActionScript for teaching computer programming in 2002 and last used ActionScript in 2005 for a location-based game.

import flash.display.Shape;
var square:Shape = new Shape();
var shades:uint = 50;
var step:uint = 256 / shades;
var w:uint = stage.stageWidth / shades;
for (var i:uint = 0; i < shades; i++)
{
	var grey:uint = i * step;
	var col:uint = (grey << 16) | (grey << 8) | grey;
	square.graphics.beginFill(col);
	// drawRect takes (x, y, width, height), so each band is w pixels wide
	square.graphics.drawRect(i*w, 0, w, stage.stageHeight);
	square.graphics.endFill();
}
addChild(square);

Other than the six programming languages shown in the exhibition, I also created the same graphical pattern with other languages/tools I have used more recently.
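For comparison, here is a sketch of the same pattern in one such present-day language, Python (not one of the versions shown in the exhibition). It writes the fifty vertical grey bands into a plain PPM image using only the standard library, with the same colour formula as the older versions.

```python
# A sketch of the 50-shades pattern in Python, writing a plain-text PPM file.
SIZE = 800
SHADES = 50
STEP = SIZE // SHADES

def shade_rows():
    """Return one row of the image as a list of (r, g, b) pixels."""
    row = []
    for i in range(SHADES):
        c = i * 255 // SHADES           # same grey formula as the older versions
        row.extend([(c, c, c)] * STEP)  # one vertical band per shade
    return row

def write_ppm(path):
    """Repeat the row SIZE times to fill a SIZE x SIZE image."""
    row = shade_rows()
    with open(path, "w") as f:
        f.write("P3\n%d %d\n255\n" % (SIZE, SIZE))
        for _ in range(SIZE):
            f.write(" ".join("%d %d %d" % p for p in row) + "\n")

if __name__ == "__main__":
    write_ppm("grey.ppm")
```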

Big Data Small Sound (2015)

The artwork Big Data Small Sound is part of the Living Sound Exhibition at the Academy of Visual Arts during the French May Festival 2015. The artwork is my ongoing research project on big data, together with Mr. Stanley Sin. We manipulated open data from Hong Kong and converted the data into short sound clips for sonification.
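As a hypothetical illustration of the sonification idea (not the project's actual code), the sketch below maps a numeric data series onto sine-tone pitches and writes the result as a short WAV clip with Python's standard library. The 220-880 Hz pitch range and the note length are illustrative assumptions.

```python
# Hypothetical sonification sketch: one short sine tone per data value.
import math
import struct
import wave

RATE = 44100

def sonify(data, path, note_secs=0.1):
    """Map each data value to a pitch and concatenate the tones into a WAV."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1
    frames = bytearray()
    for v in data:
        freq = 220.0 + 660.0 * (v - lo) / span   # map value into 220-880 Hz
        for n in range(int(RATE * note_secs)):
            s = int(32767 * 0.3 * math.sin(2 * math.pi * freq * n / RATE))
            frames += struct.pack("<h", s)        # 16-bit little-endian sample
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(bytes(frames))
    return len(frames) // 2   # number of samples written

if __name__ == "__main__":
    sonify([3, 1, 4, 1, 5, 9, 2, 6], "clip.wav")
```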

Movement in Time, Part 1, interactive version (2014)

This is an interactive version of the former piece, Movement in Time, Part 1. It adopts the same principle of capturing movement from a live webcam and visualizing the flow in an animated painting. Instead of using existing classic Hollywood film sequences, this version captures live motion data from the webcam. Participants can watch the animation respond interactively to their gestural movements.

The artwork was shown in the Haptic Interface Conference Exhibition, 2014 Hong Kong.

Movement in Time, Part 1 (2014)

100 classic Hollywood films version

The work is a generative animation/painting with sources taken from one hundred classic Hollywood film sequences. I based the selection on the most popular film moments in the IGN database. From the digitized film sequences, I investigated whether I could develop a visual signature for each popular film sequence that registered the aesthetic decisions the directors, cinematographers, editors and actors/actresses made in producing it. I wrote the software with OpenCV and openFrameworks. It uses optical flow analysis to visualize the unique motion patterns within each film sequence. The patterns are the creative outputs of the human actions, camera movements and editing efforts. The animations thus generated closely resemble the action paintings in art history.

To illustrate the ideas, I divide the screen into four sections. The top left one is the original footage; I play back all 100 film sequences in chronological order. The top right one is the motion flow at the current moment. The character's movement and the camera's pan, tilt, and zoom are easy to recognize from the optical flow data. The bottom right one is the accumulated flow information, together with the colour picked up from the region where the motion happens. As a result, this corner generates a unique animated painting for each film sequence, identifying it with its colour and motion details. In the remaining bottom left corner, I further visualize the motion details across 20 consecutive frames. The major motion regions are connected across all the frames to highlight them.
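The accumulate-the-flow idea behind the animated painting can be sketched in a much-simplified form. The fragment below is a stand-in, not the actual software: it approximates motion by frame differencing instead of OpenCV's dense optical flow, and sums each frame's motion into a persistent canvas so the "painting" builds up over the whole sequence.

```python
# Simplified stand-in for the optical-flow accumulation in the piece:
# motion is approximated by absolute frame differences, blended into a
# persistent canvas that accumulates across the sequence.

def accumulate_motion(frames, decay=1.0):
    """frames: list of 2-D greyscale frames (lists of lists of 0-255 ints).
    Returns a canvas where each cell sums the inter-frame change there."""
    h, w = len(frames[0]), len(frames[0][0])
    canvas = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                canvas[y][x] = canvas[y][x] * decay + abs(cur[y][x] - prev[y][x])
    return canvas

if __name__ == "__main__":
    # a bright dot moving one pixel right per frame on a 3x3 black frame
    frames = [[[255 if (x == t and y == 1) else 0 for x in range(3)]
               for y in range(3)] for t in range(3)]
    canvas = accumulate_motion(frames)
```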

I also appropriated the ideas of the movement-image and the time-image from Deleuze. Hollywood films are mainly structured through movements. The classic sequences contain signature movements that all audiences can recall. I wanted to investigate whether I could, through this software, convert the movement sequences into a visualisation of time through the continuous flow of painting brush strokes. It is also an attempt to unify a spatial medium, painting, by going back to its process-oriented idea in action painting, with a time-based medium, cinema, by archiving and accumulating the movement information as painting.

For more detailed rationale and description, please check out this post.

Here is the full playlist of the artwork on YouTube. Some of the clips are masked due to copyright concerns.

Each film clip generates a unique signature. The following gallery contains all 100 snapshots of the animation, captured at the end of each clip.

Movement in Void, A Tribute to TV Buddha (2013)

“Movement in Void” is my artistic investigation of brainwave-sensing technology, a reflection of my personal yoga and meditation practices, and a tribute to Nam June Paik’s famous artwork – TV Buddha. Through this artwork, I extend the popular notion of embodiment in interactive art by abandoning all gestural interaction and resorting to purely mental, meditative activity. The interactive installation engages the participants in constant awareness of their mental activities, which may echo the mindfulness practices of Buddhism. The work also accumulates the brainwave data of all visitors to generate collective patterns that manifest through the mechanical movement of the reactive environment.


Project Description
This project is an artistic investigation of brainwave-sensing technology, a reflection of my personal yoga and meditation practices, and a tribute to Nam June Paik’s famous artwork – TV Buddha.

Electroencephalography (EEG) has been used in medical research and applications for a long time, but until the recent popularity of consumer-grade brainwave sensors, its applications were limited to laboratories and research institutes. Companies such as Emotiv, NeuroSky and OCZ have released brainwave-sensing consumer products. Most of them are designed as human-computer interfaces, such as game controllers. Besides detecting eye blinks, the products can measure degrees of relaxation and concentration. Such measurement demands a certain amount of user training. As human-computer interfaces, the products cannot yet compare with the graphical user interfaces we are used to, in terms of precision and effectiveness. As a result, applications have been developed mainly as games. One of the early attempts was Brainball from the Interactive Institute Smart Studio.

Discussions on media arts often mention embodiment as an essential feature of interactivity. It implies the active engagement of the audience's bodily activities, such as physical action and perception, in relation to the artworks. The objects of perception and action, however, usually lie outside the confines of the physical body delimited by our skin. Little has been investigated about what happens within our bodies. EEG offers an affordable way to detect the subtle changes happening inside our bodies when confronted with the sensations aroused by external artworks.

My first encounter with the brainwave devices indicated that successful control might demand a certain level of reflexive and introspective awareness of one's own bodily condition. Using the brainwave interface to replace graphical and gestural interfaces may not be a promising direction: its operation requires training, or at least adaptation. Since I have been learning Yoga and Qi Gong for a short period, I expected that an artwork relating brainwave measurement to those body-mind regulating activities could work well. Instead of the common embodied interaction found in media arts, I encourage the participants to sit still and reflect on the ever-changing, ever-flowing thoughts and emotional states within their minds and bodies, in relation to the piece of work.

The layout of the artwork is a tribute to the famous TV Buddha from Nam June Paik.

A ‘meditating’ participant wearing a brainwave-sensing headset sits in front of a wooden ball hanging from the ceiling. The brainwave device captures the alpha, beta, theta, and gamma waves and interprets the patterns into meaningful indicators of concentration and relaxation. The result triggers the swinging action of the wooden ball, which swings in different directions according to the brainwave signals. As it moves in front of the participant, it may threaten the participant and thus cause an emotional response. The participant may also need to re-align his/her body to avoid being hit by the ball. This forms the first feedback loop between the object and the participant. Although the brainwave signals are detected digitally with great precision, the swinging motion is mechanical and analog, introducing an unpredictable fuzziness into the operation.
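The mapping from brainwave indicators to swing direction could be sketched as follows; the axis assignment, the 0-100 reading range (typical of consumer headsets), and the threshold are illustrative assumptions, not the installation's actual values.

```python
# Hypothetical mapping from headset readings to a 2-D swing direction.
def swing_direction(concentration, relaxation, threshold=50):
    """Return (dx, dy) in {-1, 0, 1}: concentration drives the x axis,
    relaxation the y axis; a reading at the threshold gives no push."""
    dx = 1 if concentration > threshold else (0 if concentration == threshold else -1)
    dy = 1 if relaxation > threshold else (0 if relaxation == threshold else -1)
    return (dx, dy)
```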

The wooden ball embeds a small camera at the front. As the ball swings, the camera picks up motion information and uses it to control the vibration of eight Buddha hand models, each holding a branch of leaves, placed on the floor around the participant. The orchestrated vibration generates a subtle tone in addition to the visible waving of the leaves. The camera moves along with the swinging ball, and the participant may also move, so the motion information obtained by traditional image processing techniques such as optical flow is of little use. Everything is relative. This introduces the second level of fuzziness.

I shall also introduce another level of complexity in the vibration. The brainwave signals of all participants will be recorded, and the accumulated information will generate a self-organizing map (SOM). The self-organizing map will function as a template to produce the hand models' vibration. As a result, the subtle tone produced by the vibrating hand models will be the collective effect of all the participants who have visited the exhibition. All participants are connected to each other in their viewing experience of the artwork.
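A minimal one-dimensional self-organizing map, of the kind such accumulated readings could feed, can be sketched as follows. This is a hypothetical illustration of the SOM technique, not the installation's code; the node count, epochs, and learning rate are illustrative assumptions.

```python
# Minimal 1-D self-organizing map (SOM) on scalar readings in [0, 1].
import math
import random

def train_som(samples, n_nodes=8, epochs=200, lr=0.5, seed=1):
    """Train a 1-D SOM on scalar samples; return the node weights."""
    rng = random.Random(seed)
    nodes = [rng.random() for _ in range(n_nodes)]
    for t in range(epochs):
        frac = t / epochs
        rate = lr * (1 - frac)                        # learning rate decays
        radius = max(1.0, n_nodes / 2 * (1 - frac))   # neighbourhood shrinks
        for x in rng.sample(samples, len(samples)):
            # best-matching unit: the node closest to this sample
            best = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
            for i in range(n_nodes):
                d = abs(i - best)
                if d <= radius:
                    influence = math.exp(-d * d / (2 * radius * radius))
                    nodes[i] += rate * influence * (x - nodes[i])
    return nodes

if __name__ == "__main__":
    # two clusters of "relaxation" readings, around 0.2 and 0.8
    data = [0.2 + 0.01 * i for i in range(5)] + [0.8 + 0.01 * i for i in range(5)]
    weights = train_som(data)
```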

In this piece of work, I plan to create an environment that engages the inner body of the participant. The brainwave sensor is not just an interface like a game controller. The brainwave signals are not just raw material for visualization or sonification. The work aims to integrate the participant’s mental states with a physical space. The inner and outer bodies are connected. The body and the environment are connected. Different individuals are connected. This connectedness is the philosophical tribute to Nam June Paik’s TV Buddha.

Open source

All the hardware information and software source code will be released as open source material. The software source code is in the GitHub repositories.

The host programs are written in C++ with openFrameworks. The master control program uses OpenCV 2.4.6 for the webcam swinging-direction tracking, which is an adaptation of the motion template sample.

The Brain Paint program uses Memo Akten’s MSAFluid fluid simulation addon for openFrameworks, with minor modifications.

Software Art Project (2011)

Software Art – Towards an Aesthetics of Art-oriented Programming and Programming-oriented Art

This is the artwork associated with my DFA studies at RMIT University.

The following galleries contain the bodies of images created during the development of the artwork and thesis.

Algorithmic drawings

Sine wave drawings

Particle system drawings

Pendulum simulation drawings

Image processing drawings

Interactive particle system drawings