Movement in Time, Part 2, live version (2020)

The live version of Movement in Time, Part 2 was proposed for an exhibition originally scheduled to open in Shenzhen in mid-2020. Owing to the COVID-19 pandemic, the exhibition has been postponed with no new schedule yet.

The work is a modified version of the original Movement in Time, Part 2. Instead of using existing fighting sequences from martial arts films, the live version will employ a camera to capture the movement of visitors and generate cursive-style Chinese calligraphic characters.

The first approach made use of a regular webcam to capture the optical flow of the visitors’ movement.

The second approach made use of a depth camera (PrimeSense) to capture skeleton movement and identify the closest matching Chinese character.

Movement in Time, Part 2, Red Temple version (2019)

Hong Kong in Poor Images

This is a re-run of the original Movement in Time, Part 2 artwork with two new fighting scenes from two martial arts films sharing the same Chinese title 火燒紅蓮寺:

  • The Burning of the Red Lotus Temple (1963)
  • Burning Paradise (1994)

It was created for the exhibition Hong Kong in Poor Images (curated by Hong Zeng), shown at the Ely Center of Contemporary Art, New Haven, 12 Jan – 16 Feb 2020.

The Movement in Time series explores the creative use of movement/motion data obtained from found footage of motion pictures. The Part 2 series investigates the motion data from the fighting sequences of martial arts films from Hong Kong, Taiwan and China. Once the motion data is extracted from different scenes, it is matched against a database of cursive-style Chinese calligraphic characters known as the One Thousand Characters Classics 千字文. The actual matching is based on machine learning algorithms. The matched Chinese characters are shown on the screen as animated writings. This version also gathers all the matched characters into poetic lines. These lines, however, do not read as a real poem.

To obtain the movement/motion data, the custom software that I developed makes use of a computer vision technique known as optical flow. It basically tracks the flow of pixels across consecutive picture frames in time. The visualisation of optical flow resembles a low-resolution image digitised in poor quality. Nevertheless, it is just detailed enough to give viewers a sense of what the motion is.
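As an illustration of the optical-flow idea only (not the actual OpenCV implementation the software uses), the sketch below estimates motion by simple block matching between two frames; the function name and frame format are my own.

```python
# A minimal sketch of the optical-flow idea: for each block in frame A,
# find the best-matching block in frame B; the displacement is the motion
# vector. Real dense optical flow (e.g. in OpenCV) is far more
# sophisticated; this is only an illustration.

def block_motion(frame_a, frame_b, block=2, search=2):
    """Estimate one motion vector per block by exhaustive block matching.
    Frames are 2D lists of grey values; returns {(row, col): (dy, dx)}."""
    h, w = len(frame_a), len(frame_a[0])
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
                        continue
                    # Sum of absolute differences between the two blocks
                    sad = sum(abs(frame_a[by + i][bx + j] - frame_b[y0 + i][x0 + j])
                              for i in range(block) for j in range(block))
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors

# A bright 2x2 patch moves one pixel to the right between the two frames.
a = [[0] * 6 for _ in range(6)]
b = [[0] * 6 for _ in range(6)]
for i in (2, 3):
    a[i][2] = a[i][3] = 255
    b[i][3] = b[i][4] = 255
print(block_motion(a, b)[(2, 2)])  # the patch's block reports motion (0, 1)
```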

Post-exhibition development

After the exhibition in New Haven, I reworked the software with the same film clip from The Burning of the Red Lotus Temple, using different approaches. Here are some of the experiments. The final artwork may be shown in an upcoming exhibition in Shenzhen in July 2020.

Movement in Space, Part 2 (2018)

Movement in Space, Part 2 is an interactive installation for up to four participants, each controlling one of four sets of animated graphics generated from harmonic motion and shown through a hologram display. The animated graphics can influence one another through the connection and disconnection of physical cables.

The artwork was part of the Algorithmic Art: Shuffling Space & Time exhibition in Hong Kong City Hall from 27 Dec 2018 to 10 Jan 2019. The exhibition was one of the public events of Art Machines: International Symposium of Computational Media Art, which took place 4–7 Jan 2019 at the School of Creative Media, City University of Hong Kong. Movement in Space, Part 2 was invited to join the exhibition as one of the local participants showcasing the concept of algorithmic art.

Algorithmic Art: Shuffling Space & Time promotion video

Exhibition view at the Hong Kong City Hall

Part 2 is an extension of the original web version of Movement in Space. It is also built on top of a pure-software implementation of the harmonograph, without the hardware details of constructing physical pendulums.

Harmonograph, image from karlsims.com

The custom software was built in Processing around the idea of concatenating sequences of trigonometric functions such as sine and cosine. The imagery may resemble early computer art and oscilloscope art, such as the works of Ben Laposky.

Oscillon by Ben Laposky, image from http://dada.compart-bremen.de

It starts with the simple parametric formulae for drawing an ellipse in the 2-dimensional plane.

x = A × cos(t)
y = B × sin(t)

where A and B are two numbers and t is the time step for animation. It was then extended to 3-dimensional space with more parameters for more sophisticated control.

x = A × cos(B × t) + C × sin(t + D)
y = E × sin(F × t) + G × cos(t + H)
z = I × cos(J × t) × sin(K × t + L)

where A, B, C, D, E, F, G, H, I, J, K and L are 12 numbers ranging from -1 to 1. By changing the 12 numbers, the software can generate very sophisticated drawings. In addition, the artwork consists of 4 drawing units. The outputs (x, y, z) of one drawing unit can be redirected to the inputs (A – L) of another drawing unit, forming a simplified version of an artificial neural network. In the installation, the 12 numbers are controlled by an iPad interface, while the connection from one unit to another is made by physically plugging in a signal cable.
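The three formulae and the unit-to-unit connection can be sketched in a few lines. This is a minimal reconstruction, not the original Processing source; the function names `drawing_unit` and `feed` and the sample parameter values are mine.

```python
import math

# One drawing unit: twelve parameters A..L (a list p[0..11]) drive the
# three parametric formulae above. The output point of one unit can be
# fed into another unit's parameters, forming the simple network described.

def drawing_unit(p, t):
    """Return the (x, y, z) point of one unit at time step t.
    p is a list of 12 numbers (A..L), each in the range -1 to 1."""
    A, B, C, D, E, F, G, H, I, J, K, L = p
    x = A * math.cos(B * t) + C * math.sin(t + D)
    y = E * math.sin(F * t) + G * math.cos(t + H)
    z = I * math.cos(J * t) * math.sin(K * t + L)
    return (x, y, z)

def feed(point, p):
    """Redirect one unit's output into another unit's first three
    parameters: a toy version of plugging in a signal cable."""
    return list(point) + p[3:]

p1 = [0.5, 1.0, 0.3, 0.0, 0.8, 2.0, 0.2, 0.1, 0.6, 3.0, 1.0, 0.0]
p2 = [0.1] * 12
trail = [drawing_unit(p1, t * 0.05) for t in range(200)]   # one unit's curve
p2_linked = feed(trail[-1], p2)                            # cable plugged in
print(len(trail), len(p2_linked))
```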

Connection of various drawing units to form a network

Each of the four units is equipped with physical sockets and cables to connect the outputs from one unit to the inputs of another.

The video below is a simulation of the 4 drawing units viewed from 4 directions (north, west, south and east). Each participant has his/her own colour-coded animated drawing.

It is a revolving version of the four animated drawings in the 3 dimensional space.

The video below is a software simulation of the hologram display of the four directions.

Video documentation of the artwork exhibited in the City Hall

Movement in Time, Part 2 (2016)

Chinese martial arts film fighting sequences and cursive-style calligraphy

Background

The artwork is the second part of the Movement in Time series. Part 1 used 100 popular Hollywood film sequences to generate animated action paintings. Part 2 of the project analysed the fighting sequences in traditional Chinese martial arts films. The results are matched against the brush stroke data from the famous cursive-style Chinese calligraphy text, the One Thousand Characters Classics 千字文. In the end, the fighting sequences automatically generate a piece of unique text from the character database. Part of the project was funded by the Faculty Research Grant from Hong Kong Baptist University.

The Cursive style Chinese calligraphy

In the Tang Dynasty, there is a legendary story about the famous Chinese calligrapher Zhang Xu 張旭. It is said that, while drunk as usual, he watched a sword dance performed by Madam Gongsun 公孫大娘 and was thereafter inspired to create the wild cursive style in Chinese calligraphy. This story was my first motivation to use digital media to connect these two traditional Chinese art forms, calligraphy and martial arts.

Calligraphic work of Zhang Xu (from Wikipedia)

With the help of a former student, Ms. Lisa LAM, I digitised all 1,000 Chinese characters using a drawing tablet with a custom program developed in Processing. Each character has a separate XML file that keeps the brush stroke information. Here is a sample of an XML file.

<character>
  <stroke time="0">
    <point>
      <x>0.21666667</x>
      <y>0.395</y>
      <w>0.032258064</w>
      <t>5</t>
    </point>
    <point>
      <x>0.21666667</x>
      <y>0.39666668</y>
      <w>0.121212125</w>
      <t>33</t>
    </point>
  </stroke>
</character>

Given the XML file, I can recreate each Chinese character either as a still image or as an animation.
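Assuming the tag names shown in the sample above, reading one character file back into stroke data might look like this minimal sketch (the function name and data layout are mine, not the original Processing code):

```python
import xml.etree.ElementTree as ET

# Parse one character's XML into a list of strokes, where each stroke is
# a list of (x, y, width, time) tuples in normalised coordinates.

sample = """
<character>
  <stroke time="0">
    <point><x>0.21666667</x><y>0.395</y><w>0.032258064</w><t>5</t></point>
    <point><x>0.21666667</x><y>0.39666668</y><w>0.121212125</w><t>33</t></point>
  </stroke>
</character>
"""

def load_character(xml_text):
    """Return the strokes of one character from its XML description."""
    root = ET.fromstring(xml_text)
    strokes = []
    for stroke in root.findall("stroke"):
        points = [(float(pt.find("x").text),
                   float(pt.find("y").text),
                   float(pt.find("w").text),
                   int(pt.find("t").text))
                  for pt in stroke.findall("point")]
        strokes.append(points)
    return strokes

strokes = load_character(sample)
print(len(strokes), len(strokes[0]))  # 1 stroke containing 2 points
```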

Still image of the first character from the One Thousand Characters Classics (Sky)
The collection of 1000 characters from the database

Given the touch/pressure sensitivity of the drawing tablet, I can simulate depth information in 3D. This introduces an additional stylistic rendering of the Chinese characters in three-dimensional space. Here are some experiments with 3D rendering of the Chinese characters.

The Chinese martial art films

The second component of the project is a collection of famous Chinese martial arts films. I studied and digitised a number of Chinese martial arts films from directors ranging from the traditional King Hu to the more contemporary Ang Lee.

I developed custom software, in Processing and OpenCV, that uses different methods of motion analysis to extract motion data from the fighting sequences. By experimenting with the motion data as virtual forces, I managed to animate a piece of string (thread) dancing across the screen.

Here are some examples of using OpenCV functions, such as motion history and dense optical flow, to analyse the fighting sequences from the films Hero (2002), Seven Swords (2005) and Crouching Tiger, Hidden Dragon (2000).

Representation of the 1000 characters

Given the 1,000 XML files of the one thousand Chinese characters and some mechanisms to extract fighting data from the martial arts film sequences, I had to find a way to enable matching between them. Here I went back to my experience of learning Chinese calligraphy. In my childhood, like other kids, I learnt Chinese calligraphy by copying the grand masters’ works with the aid of a grid chart. Normally, we used a 3 × 3 grid chart to lay out the brush strokes of each character. In testing, I used different grid sizes such as 3 × 3, 5 × 5 and 9 × 9.

With the 1,000 cursive-style Chinese calligraphic characters, I also explored different ways to represent them in terms of point density, stroke direction, etc. The results lead to different ways of enhancing the matching from an encoded fighting sequence to a unique Chinese character. Here are some test results representing each cursive-style character in a 9 × 9 grid, using point density and stroke direction respectively.

Point density model

Stroke direction model

Character representation models
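A minimal sketch of the point-density model, under the assumption that each cell simply counts the normalised stroke points falling inside it and the counts are normalised into a feature vector (the function name and details are mine):

```python
# Bin the normalised stroke points of a character into an n x n grid;
# the normalised counts become the character's feature vector. A 3 x 3
# grid is used in the demo; the artwork also tested 5 x 5 and 9 x 9.

def point_density(points, n=9):
    """points: (x, y) pairs with coordinates in [0, 1).
    Returns an n*n feature vector of normalised point counts."""
    grid = [0.0] * (n * n)
    for x, y in points:
        col = min(int(x * n), n - 1)
        row = min(int(y * n), n - 1)
        grid[row * n + col] += 1.0
    total = sum(grid)
    return [v / total for v in grid] if total else grid

# Two points in the top-left cell and one in the centre of a 3 x 3 grid.
features = point_density([(0.05, 0.05), (0.1, 0.1), (0.5, 0.5)], n=3)
print(features[0], features[4])  # 2/3 of the mass top-left, 1/3 centre
```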

Matching the fighting sequence and characters

In this phase, I explored the machine learning module of OpenCV and the Weka machine learning library to test the matching between a fighting sequence and a cursive-style character from the database. Before matching with the fighting sequences, I developed another testing program to cross-match a live character-writing exercise against the 1,000 characters in the database, using both the OpenCV machine learning module and the Weka library with k-nearest neighbour matching.
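A toy sketch of the k-nearest-neighbour step, standing in for the OpenCV and Weka implementations; the four-cell "grid" database below is hypothetical:

```python
# k-nearest-neighbour matching: each character in the database is a grid
# feature vector, and the incoming vector (from live writing or a fighting
# scene) is matched to the character whose vector is closest.

def knn_match(query, database, k=1):
    """database: {label: feature vector}. Returns the k nearest labels
    by Euclidean distance to the query vector."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    ranked = sorted(database, key=lambda label: dist2(query, database[label]))
    return ranked[:k]

# A toy database of three characters as hypothetical 4-cell grids.
db = {
    "天": [0.9, 0.1, 0.0, 0.0],
    "地": [0.0, 0.0, 0.9, 0.1],
    "玄": [0.25, 0.25, 0.25, 0.25],
}
print(knn_match([0.8, 0.2, 0.0, 0.0], db, k=1))  # closest is 天
```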

Exhibitions

In November 2016, I was invited to participate in the Japan Media Arts Festival, Hong Kong Special Exhibition at The Annex, Hong Kong. The artwork includes four fighting sequences from the martial arts films Crouching Tiger, Hidden Dragon, Hero and House of Flying Daggers (2004).

It is a computational animation in which custom software written in Processing runs live to extract motion data from the film sequences and match it in real time against the 1,000 characters in the database. The closest match is drawn on the fly on the screen. The display screen consists of four parts. The top left part is the original film sequence rendered as dense optical flow vectors. The bottom left part summarises all the optical flow data, reduced to a 5 × 5 grid indicating the most prominent movement on screen. In each frame of the playback, the motion data summarised in the 5 × 5 grid is matched against the 1,000-character database. The closest matched character is shown in the bottom right part of the screen, animating the character's brush strokes. The top right part indicates the current brush stroke position along with the path it travelled in previous frames.
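The 5 × 5 reduction can be sketched as follows; averaging the flow vectors per cell is my assumption about how the reduction works, not the original Processing code:

```python
# Reduce a dense optical-flow field to an n x n grid by averaging the
# flow vectors that fall in each cell, giving a compact summary of the
# most prominent movement on screen.

def summarise_flow(flow, width, height, n=5):
    """flow: list of (x, y, dx, dy) vectors in pixel coordinates.
    Returns an n x n grid of averaged (dx, dy) per cell."""
    sums = [[(0.0, 0.0, 0) for _ in range(n)] for _ in range(n)]
    for x, y, dx, dy in flow:
        col = min(int(x * n / width), n - 1)
        row = min(int(y * n / height), n - 1)
        sx, sy, c = sums[row][col]
        sums[row][col] = (sx + dx, sy + dy, c + 1)
    return [[(sx / c, sy / c) if c else (0.0, 0.0)
             for sx, sy, c in row] for row in sums]

# Two vectors in the top-left cell of a 100 x 100 frame, moving right.
grid = summarise_flow([(5, 5, 2.0, 0.0), (10, 10, 4.0, 0.0)], 100, 100)
print(grid[0][0])  # averaged motion (3.0, 0.0)
```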

Recently, I enhanced the work to accumulate all the characters matched from the fighting sequence to form a piece of poetic text in the top right corner of the screen. The order of the characters is determined purely by the fighting sequences. Quite obviously, they are not intended to make sense. Nevertheless, as we continue to read along the character stream, it appears like a poem and occasionally seems to carry meanings beyond our imagination.

Source code

The artwork is released as open source material. The source code can be found in the repository Movement in Time – Part 2.

50 . Shades of Grey (2015)

The artwork won the Grand Prize of the Japan Media Arts Festival 2015, Art Division. Details are available at the festival website.

The artwork 50 . Shades of Grey is part of the Early White exhibition curated by Cally Yu at 1A Space Gallery. In this work, I created a very simple graphical pattern of 50 shades of grey tone with different programming languages I learnt in the past, which are now obsolete and no longer popular. The graphical pattern is extremely simple, so I decided to show only the code, and not the image, in the exhibition.

50 . Shades of Grey

The work is both a conceptual and a visual art piece. The visual part is a simple computer graphics pattern displaying 50 shades of grey tone. Nevertheless, it documents my training as a computer artist, using programming languages to create imagery on screen. Software tools come and go at increasing speed, echoing the ever-shortening cycle of IT trends. I taught myself all these programming languages over the last thirty years. Some of them were popular at certain points in creative art/design history. Some of them have disappeared from the industry. The fear of obsolescence is a haunting theme in the computer business, as well as in the digital arts. In this work, I go back to these old programming languages, which I worked with in different years of my life, to generate the same image: fifty shades of grey tone ranging from black to white, as reflected in the Chinese title of the work, Half a Hundred, Half White.

The audience can compare the different programming languages as poetic texts, and their relative positions in history. The programming languages I chose are Basic, Fortran, Pascal, Lisp, Lingo (Director) and ActionScript (Flash). These languages were once popular and are now obsolete.

Basic was released around 1964. I came across Basic in some leisure reading in 1981 and last used it in 1985 for a course project.

width=800:height=800:shades=50
inc=width/shades
setdisplay(width,height,32,1)
paper(rgb(255,255,255))
cls
setautoback(25)
 
for i=1 to shades
	c=i*255/shades
	ink(rgb(c,c,c))
	bar((i-1)*inc,0,i*inc,height)
next
 
waitkey(k_escape)

Fortran was released around 1958. I learnt computer programming with Fortran in 1981, in the first course of my university study, and last coded in it during an internship in 1983.

The code below uses the GnuFor2 interface between GNU Fortran and Gnuplot.

program grey
use gnufor2
implicit none
 
integer, parameter  :: Nm = 800
integer             :: rgb(3, Nm, Nm)
integer             :: i, j, k
integer             :: shades
integer             :: step
integer             :: c
 
shades = 50
step = Nm/shades
 
do i = 1, shades
     c = (i-1)*255/shades
     do j = (i-1)*step + 1, i*step
          do k = 1, Nm
               rgb(1,j,k) = c
               rgb(2,j,k) = c
               rgb(3,j,k) = c
           end do
     end do
end do
 
call image(rgb, pause=-1.0, persist='no')
end program grey

Pascal was released around 1970. I used Pascal in my second university course in 1982 and last used it in a computer graphics course in 1984.

unit Unit1;
{$mode objfpc}{$H+}
interface
uses
  Classes, SysUtils, FileUtil, Forms, Controls, Graphics, Dialogs, StdCtrls;
type
  { TGrey }
  TGrey = class(TForm)
    procedure FormPaint(Sender: TObject);
  private
    { private declarations }
  public
    { public declarations }
  end;
var
  Grey:      TGrey;
  idx:       integer;
  shades:    integer;
  step:      integer;
  col:       integer;
 
implementation
{$R *.lfm}
{ TGrey }
 
procedure TGrey.FormPaint(Sender: TObject);
begin
     shades := 50;
     step := Round(Grey.width/shades);
 
     for idx:= 0 to (shades-1) do
     begin
       col := 255-Round(idx*255/shades);
       canvas.Brush.Color := RGBToColor(col, col, col);
       canvas.FillRect(0, 0, (shades-idx)*step, Grey.height);
     end;
end;
end.

Lisp was released around 1959. I first encountered Lisp in a programming languages course in 1983, and my last Lisp program was for an artificial intelligence course in 1984.

(ql:quickload :lispbuilder-sdl)
(defvar *SIZE* 800)
(defvar *SHADES* 50)
(defvar *STEP* (/ *SIZE* *SHADES*))
(defvar *COL* 0)
 
(defun box(n) 
   (cond ((>= n *SHADES*) nil)
         ((< n *SHADES*) (progn
                     (setq *COL* (/ (* n 255) *SHADES*))
                     (sdl:draw-box-* (* *STEP* n) 0 *STEP* *SIZE* 
                         :color (sdl:color :r *COL* :g *COL* :b *COL*))
                     (box (+ n 1))))))
 
(sdl:with-init()
   (sdl:window *SIZE* *SIZE* :title-caption "50 . Shades of Grey")
   (setf (sdl:frame-rate) 60)
 
   (sdl:with-events ()
      (:quit-event () t)
      (:key-down-event () (sdl:push-quit-event))
      (:idle ()
         (sdl:clear-display sdl:*black*)
         (box 0)
         (sdl:update-display))))

Lingo (Director) was released around 1993. I first wrote Lingo during my master's degree studies in 1996 and last coded in it in 2002 for an interactive installation project.

on exitFrame me
   width = the stageRight - the stageLeft
   height = the stageBottom - the stageTop
   shades = 50
   step = width/shades
   objImage = _movie.stage.image
 
   repeat with i = 1 to shades
      c = integer((i-1)*255/shades)
      objImage.fill(point((i-1)*step,0), point(i*step,height), rgb(c,c,c))
   end repeat
   halt
end

ActionScript (Flash) was released around 2000. I first employed ActionScript to teach computer programming in 2002 and last used it in 2005 for a location-based game.

import flash.display.Shape;
 
var square:Shape = new Shape();
var shades:uint = 50;
var step:uint = 256 / shades;
var w:uint = stage.stageWidth / shades;
 
addChild(square);
 
for (var i:uint=0; i<shades; i++)
{
	var grey:uint = Math.floor(i * step);
	var col:uint = (grey << 16) + (grey << 8) + grey;
	square.graphics.beginFill(col);
	square.graphics.drawRect(i*w, 0, w, stage.stageHeight);
	square.graphics.endFill();
}

In addition to the six programming languages shown in the exhibition, I have also created the same graphical pattern with other languages/tools that I use more recently.