Content

Visualising Performer–Audience Dynamics

2017

Live performances involve complex interactions between a large number of co-present people. Performance has been defined in terms of these performer–audience dynamics (Fischer-Lichte 2014), but little is known about how they manifest. One reason for this is the empirical challenge of capturing the behaviour of performers and massed audiences. Video-based approaches typical of human interaction research elsewhere do not scale, and interest in audience response has led to diverse techniques of instrumentation being explored (e.g. physiological in Silva et al. 2013, continuous report in Stevens et al. 2014). Another reason is the difficulty of interpreting the resulting data. Again, inductive discovery of phenomena as successfully practised with video data (e.g. Bavelas 2016) becomes problematic when starting with numerical data sets – you cannot watch a spreadsheet, after all…

A spoken paper presented at the International Symposium on Performance Science, Reykjavík 2017. The talk is a good way to see what I got up to during my PhD… and hey, there’s no stats and lots of pretty pictures.

Files

Diary entries

comedy lab dataset viewer

a happy few days bringing up a visualiser app for my PhD. integrating the different data sources of my live performance experiment had brought up some quirks that didn’t seem right. i needed to be confident that everything was actually in sync and spatially correct, and, well, it got to the point where i decided to damn well visualise the whole thing.

i hoped to find a nice python framework to do this in, which would neatly extend the python work already doing most of the processing on the raw data. however i didn’t find anything that could easily combine video with a 3D scene. but i do know how to write native mac apps, and there’s a new 3D scene framework there called SceneKit…

so behold Comedy Lab Dataset Viewer. it’s not finished, but it lives!

  • NSDocument-based application, so i can have multiple performances open simultaneously
  • A data importer that reads the motion capture data and constructs the 3D scene and its animation
  • A stack of CoreAnimation layers compositing 3D scene over video
  • 3D scene animation synced to the video playback position (sketched below)
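
a minimal sketch of that last point, because it’s the one that makes the whole thing work. this isn’t the app’s actual code: it assumes an AVPlayer for the video and uses an SCNView where the app really has a CoreAnimation layer stack, and the method name, the 30 Hz sampling interval and where it lives are all made up. the idea is just to push the player’s time into the renderer’s sceneTime, so any animation with usesSceneTimeBase set follows the video.

// minimal sketch: drive SceneKit's scene time from the video player's clock
@import AVFoundation;
@import SceneKit;

- (void)syncSceneView:(SCNView *)sceneView toPlayer:(AVPlayer *)player
{
	// sample the player's time ~30 times a second on the main queue.
	// (the returned observer token should be kept and removed later; ignored here for brevity)
	CMTime interval = CMTimeMakeWithSeconds(1.0 / 30.0, 600);
	[player addPeriodicTimeObserverForInterval:interval
	                                     queue:dispatch_get_main_queue()
	                                usingBlock:^(CMTime time) {
		// any CAAnimation in the scene with usesSceneTimeBase = YES
		// is now evaluated at the video's playback position
		sceneView.sceneTime = CMTimeGetSeconds(time);
	}];
}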

visualising everything

visualising head pose, light state, laugh state, computer vision happiness, breathing belt. and, teh pretty. huzzah.

virtual camera, real camera

of course, aligning the virtual camera of the 3D scene with the real camera’s capture of the actual scene was never going to be straightforward. easy to get to a proof of concept. hard to actually register the two. i ended up rendering a cuboid grid on the seat positions in the 3D scene, drawing by hand (well, mouse) what looked about right on a video still, and trying to match the two sets of lines by nudging coordinates and fields-of-view with some debug-mode hotkeys i hacked in.
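
the hotkeys are long gone, but they were roughly of this shape. a hypothetical reconstruction, not the code as it was: it assumes a virtualCameraNode property on whatever responder handles key events, and the keys and step sizes are invented.

// debug-mode nudging, roughly: arrow keys translate the virtual camera,
// +/- widen or narrow its field of view, and the result is logged so the
// numbers can be pasted back in as constants once the two sets of lines match up.
@import AppKit;
@import SceneKit;

- (void)keyDown:(NSEvent *)event
{
	if (event.charactersIgnoringModifiers.length == 0) { [super keyDown:event]; return; }

	SCNNode *cameraNode = self.virtualCameraNode;  // assumed property
	const CGFloat step = 0.05;                     // invented step size
	SCNVector3 p = cameraNode.position;
	unichar key = [event.charactersIgnoringModifiers characterAtIndex:0];

	switch (key) {
		case NSLeftArrowFunctionKey:  p.x -= step; break;
		case NSRightArrowFunctionKey: p.x += step; break;
		case NSUpArrowFunctionKey:    p.y += step; break;
		case NSDownArrowFunctionKey:  p.y -= step; break;
		case '+': cameraNode.camera.xFov += 0.5; break;
		case '-': cameraNode.camera.xFov -= 0.5; break;
		default:  [super keyDown:event]; return;
	}
	cameraNode.position = p;
	NSLog(@"camera position (%f, %f, %f) xFov %f",
	      p.x, p.y, p.z, cameraNode.camera.xFov);
}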

in hindsight, i would have stuck motion capture markers on the cameras. so it goes.

rotation matrix ambiguities

the head pose arrows look like they’re pointing in the right direction… right? well, of course, it’s not that simple.

the dataset processing script vicon exporter applies an offset to the raw angle-axis fixture pose, to account for the hat not being straight. the quick and dirty way to get these offsets is to say at a certain time everybody is looking directly forward. that might have been ok if i’d thought to make it part of the experiment procedure, but i didn’t, and even if i had i’ve got my doubts. but we have a visualiser! it is interactive! it can be hacked to nudge things around!

except that the visualiser just points an arrow along a gaze vector, and that doesn’t give you a definitive orientation to nudge around: any rotation about the gaze axis itself renders as exactly the same arrow. this opens up a can of worms where everything that could have thwarted it working, did.

“The interpretation of a rotation matrix can be subject to many ambiguities.”
http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities

hard-won code –

DATASET VISUALISER

// Now write MATLAB code to console which will generate correct offsets from this viewer's modelling with SceneKit
for (NSUInteger i = 0; i < [self.subjectNodes count]; ++i)
{
	// Vicon Exporter calculates gaze vector as
	// gaze = [1 0 0] * rm * subjectOffsets{subjectIndex};
	// rm = Rotation matrix from World to Mocap = Rwm
	// subjectOffsets = rotation matrix from Mocap to Offset (ie Gaze) = Rmo

	// In this viewer, we model a hierarchy of
	// Origin Node -> Audience Node -> Mocap Node -> Offset Node, rendered as axes.
	// The Mocap node is rotated with Rmw (ie. rm') to comply with reality.
	// Aha. This is because in this viewer we are rotating the coordinate space not a point as per exporter

	// By manually rotating the offset node so its axes register with the head pose in video, we should be able to export a rotation matrix
	// We need to get Rmo as rotation of point
	// Rmo as rotation of point = Rom as rotation of coordinate space

	// In this viewer, we have
	// Note i. these are rotations of coordinate space
	// Note ii. we're doing this by taking 3x3 rotation matrix out of 4x4 translation matrix
	// [mocapNode worldTransform] = Rwm
	// [offsetNode transform] = Rmo
	// [offsetNode worldTransform] = Rwo

	// We want Rom as rotation of coordinate space
	// Therefore Offset = Rom = Rmo' = [offsetNode transform]'

	// CATransform3D is however transposed from rotation matrix in MATLAB.
	// Therefore Offset = [offsetNode transform]

	SCNNode* node = self.subjectNodes[i][@"node"];
	SCNNode* mocapNode = [node childNodeWithName:@"mocap" recursively:YES];
	SCNNode* offsetNode = [mocapNode childNodeWithName:@"axes" recursively:YES];

	// mocapNode has rotation animation applied to it. Use presentation node to get rendered position.
	mocapNode = [mocapNode presentationNode];

	CATransform3D Rom = [offsetNode transform];

	printf("offsets{%lu} = [%f, %f, %f; %f, %f, %f; %f, %f, %f];\n",
		   (unsigned long)i+1,
		   Rom.m11, Rom.m12, Rom.m13,
		   Rom.m21, Rom.m22, Rom.m23,
		   Rom.m31, Rom.m32, Rom.m33
		   );

	// BUT! For this to actually work, this requires Vicon Exporter to be
	// [1 0 0] * subjectOffsets{subjectIndex} * rm;
	// note matrix multiplication order

	// Isn't 3D maths fun.
	// "The interpretation of a rotation matrix can be subject to many ambiguities."
	// http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities
}

VICON EXPORTER

poseData = [];
for frame=1:stopAt
	poseline = [frameToTime(frame, dataStartTime, dataSampleRate)];
	frameData = reshape(data(frame,:), entriesPerSubject, []);
	for subjectIndex = 1:subjectCount

		%% POSITION
		position = frameData(4:6,subjectIndex)';

		%% ORIENTATION
		% Vicon V-File uses axis-angle represented in three values: the axis is the xyz vector and the angle is the magnitude of that vector
		% [x y z, |xyz| ]
		ax = frameData(1:3,:);
		ax = [ax; sqrt(sum(ax'.^2,2))'];
		rotation = ax(:,subjectIndex)';

		%% ORIENTATION CORRECTED FOR OFF-AXIS ORIENTATION OF MARKER STRUCTURE
		rm = vrrotvec2mat(rotation);

		%% if generating offsets via calcOffset then use this
		% rotation = vrrotmat2vec(rm * offsets{subjectIndex});
		% gazeDirection = subjectForwards{subjectIndex} * rm * offsets{subjectIndex};

		%% if generating offsets via Comedy Lab Dataset Viewer then use this
		% rotation = vrrotmat2vec(offsets{subjectIndex} * rm); %actually, don't do this as it creates some axis-angle with imaginary components.
		gazeDirection = [1 0 0] * offsets{subjectIndex} * rm;

		poseline = [poseline position rotation gazeDirection];
	end
	poseData = [poseData; poseline];
end

through the eyes

with the visualiser established, it was trivial to attach the free-view camera to the head pose node and boom: first-person perspective. to be able to see through the eyes of anyone present is such a big thing.
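
a minimal sketch of the trick, not the viewer’s actual code: the method name is invented, the “mocap” node name follows the exporter comments above, the field of view is a guess, and the quarter-turn assumes the exporter’s +x-is-forward convention (a SceneKit camera looks down its own -z axis).

// sketch: see through a subject's eyes by attaching a camera to their head pose node
@import SceneKit;

- (void)lookThroughEyesOfSubjectNode:(SCNNode *)subjectNode inView:(SCNView *)sceneView
{
	SCNNode *headNode = [subjectNode childNodeWithName:@"mocap" recursively:YES];

	SCNNode *eyeCameraNode = [SCNNode node];
	eyeCameraNode.camera = [SCNCamera camera];
	eyeCameraNode.camera.xFov = 60.0;  // assumed field of view

	// the exporter treats +x as the gaze direction; a SceneKit camera looks
	// along its node's -z axis, so turn the camera to face +x of the head node
	eyeCameraNode.rotation = SCNVector4Make(0, 1, 0, -M_PI_2);

	[headNode addChildNode:eyeCameraNode];
	sceneView.pointOfView = eyeCameraNode;  // the head's animation now drives the view
}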

oriented-to test

need a hit-test for people orienting to others. akin to gaze, but the interest here is what it looks like you’re attending to. but what should that hit-test be? visualisation and parameter tweaking to the rescue…
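
one obvious candidate is a cone around the gaze vector: subject A counts as oriented to B if the angle between A’s gaze direction and the direction from A’s head to B is below some threshold. a sketch of that idea, with invented names, and with the threshold left as a parameter because that is exactly the number the visualisation is there to tune.

// sketch of a cone-style oriented-to test: does the gaze direction point
// within some angular threshold of the line from this head to the target?
@import SceneKit;

static CGFloat angleBetween(SCNVector3 a, SCNVector3 b)
{
	CGFloat dot  = a.x * b.x + a.y * b.y + a.z * b.z;
	CGFloat magA = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
	CGFloat magB = sqrt(b.x * b.x + b.y * b.y + b.z * b.z);
	CGFloat cosine = dot / (magA * magB);
	return acos(fmax(-1.0, fmin(1.0, cosine)));  // clamp against rounding error
}

static BOOL isOrientedTo(SCNVector3 headPosition, SCNVector3 gazeDirection,
                         SCNVector3 targetPosition, CGFloat thresholdDegrees)
{
	SCNVector3 toTarget = SCNVector3Make(targetPosition.x - headPosition.x,
	                                     targetPosition.y - headPosition.y,
	                                     targetPosition.z - headPosition.z);
	return angleBetween(gazeDirection, toTarget) <= thresholdDegrees * M_PI / 180.0;
}

run per frame per subject pair over the exported gaze directions, something like this gives a track that can be rendered in the visualiser and eyeballed while the threshold is tweaked.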

accepted for ISPS2017

‘visualising performer–audience dynamics’ spoken paper accepted at ISPS 2017, the international symposium on performance science. this is doubly good, as i’ve long been keen to visit reykjavík and explore iceland.

hic et nunc

at some point in my PhD, i found out the phrase i started out with – “the here and now of us together” – had a cultural richness i can’t deny taking some pleasure in. in it lies the imperative motto for the satisfaction of desire: “I need it, Here and Now”

so here is a little indulgence for my talk at ISPS2017 in the making. while it amuses me, i have to face up to the fact that wearing a slogan t-shirt in latin is clearly a dick move.

isps » performer-audience dynamics talk

had a lot of fun with my talk ‘visualising performer–audience dynamics’ at ISPS 2017. with a title like that, some play with the ‘spoken paper’ format had to be had.

pleasingly, people were coming up to me to say how much they enjoyed it for the rest of the conference. huzzah!

i recorded it, and have stitched it together with the slides. the video is here, on the project page.