
tagged: phd

forked: video annotation app

a major part of the analysis for comedy lab is manually labelling what is happening when in the recordings. for instance, whether an audience member is laughing or not – for each audience member, throughout the performance. all in all, this adds up to a lot of work.

for this labelling to be accurate, let alone simply to get through it all, the interface of the video annotation software needs to be responsive - you are playing with time, in realtime. i was having such a bad time with elan[1] that the least bad option turned out to be writing my own simple annotator: all it need be is a responsive video player and a bag of keyboard shortcuts that generates a text document of annotations and times. luckily, there was an open-source objective-c / cocoa annotator out there, so instead i forked the code and hacked in the features i needed. never have i been so glad to be able to write native os x applications.
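the core of it is almost embarrassingly simple. a minimal sketch of the idea – not the actual forked code, and assuming an AVFoundation-based player with illustrative vocabulary labels – is a view that turns key presses into timestamped lines of text:

#import <Cocoa/Cocoa.h>
#import <AVFoundation/AVFoundation.h>

// Sketch: a first-responder view maps single key presses onto a controlled
// vocabulary, timestamped against the player's current playback position.
@interface AnnotatorView : NSView
@property (strong) AVPlayer *player;
@property (strong) NSMutableString *log;
@end

@implementation AnnotatorView

- (BOOL)acceptsFirstResponder { return YES; }

- (void)keyDown:(NSEvent *)event
{
	// Controlled vocabulary: one key, one label. (Labels are illustrative.)
	NSDictionary *vocabulary = @{ @"l": @"laughing", @"n": @"not laughing" };
	NSString *label = vocabulary[[event charactersIgnoringModifiers]];
	if (!label) { [super keyDown:event]; return; }

	// Timestamp against the playback position, append to the text document.
	NSTimeInterval seconds = CMTimeGetSeconds([self.player currentTime]);
	[self.log appendFormat:@"%.2f\t%@\n", seconds, label];
}

@end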

if you need features such as annotations drawn from a controlled vocabulary that are continuous, ie. non-overlapping in time, or a workflow where annotation can be done in one pass with one hand on the keyboard and one on a scroll-gesture mouse/trackpad, the application is zipped and attached to this post (tested on 10.8, should work on earlier versions).

if you are a cocoa developer with similar needs, the code is now on github and i can give some pointers if needed.


  1. to be clear, elan is a powerful tool for which the developers deserve respect, and through its import and export it remains the marshalling depot of my data. the underlying issue, i suspect, is the java runtime, as trying alternatives such as anvil didn’t work out either. ↩︎

diary | 20 aug 2013 | tagged: code · mac os · comedy lab · phd · research | downloads: vcodevdata.zip

comedy lab: first results

hacked some lua, got software logging what i needed; learnt python, parsed many text files; forked a cocoa app, classified laugh state for fifteen minutes times 16 audience members times two performances; and so on. eventually, a dataset of audience response for every tenth of a second. and with that: results. statistics. exciting.

a teaser of that is above; peer review needs to run its course before announcements can be made. as a little fun, though, here is the introduction of the paper the first results are published in – at some point before it got re-written to fit house style. this has… more flavour.

Live performance is important. We can talk of it “lifting audiences slightly above the present, into a hopeful feeling of what the world might be like if every moment of our lives were as emotionally voluminous, generous, aesthetically striking, and intersubjectively intense” \cite{Dolan:2006hv}. We can also talk about bums on seats and economic impact — 14,000,000 and £500,000,000 for London theatres alone in recent years \cite{Anonymous:2013us}. Perhaps more importantly, it functions as a laboratory of human experience and exploration of interaction \cite{FischerLichte:2008wo}. As designers of media technologies and interactive systems this is our interest, noting the impact of live performance on technology \cite{Schnadelbach:2008ii, Benford:2013ia, Reeves:2005uw, Sheridan:2007wc, Hook:2013vp} and how technology transforms the cultural status of live performance \cite{Auslander:2008te, Barker:2012iq}. However, as technology transforms the practice of live performance, the experiential impact of this on audiences is surprisingly under-researched. Here, we set out to compare this at its most fundamental: audience responses to live and recorded performance.

diary | 18 sep 2013 | tagged: comedy lab · phd · qmat · research

science photo prize

thanks to this shot, science outreach, and a national competition, i have a new camera. first prize! huzzah!

the full story is here — http://www.qmul.ac.uk/media/news/items/se/126324.html

screenshot above from — http://www.theguardian.com/science/gallery/2014/mar/31/national-science-photography-competition-in-pictures

diary | 31 mar 2014 | tagged: phd · comedy lab · photo · qmat · research

comedy lab dataset viewer

a happy few days bringing up a visualiser app for my PhD. integrating the different data sources of my live performance experiment had brought up some quirks that didn’t seem right. i needed to be confident that everything was actually in sync and spatially correct, and, well, it got to the point where i decided to damn well visualise the whole thing.

i hoped to find a nice python framework to do this in, which would neatly extend the python work already doing most of the processing on the raw data. however, i didn’t find anything that could easily combine video with a 3D scene. but i do know how to write native mac apps, and there’s a new 3D scene framework there called SceneKit…

so behold Comedy Lab Dataset Viewer. it’s not finished, but it lives!

  • NSDocument based application, so i can have multiple performances simultaneously
  • A data importer that reads the motion capture data and constructs the 3D scene and its animation
  • A stack of CoreAnimation layers compositing 3D scene over video
  • 3D scene animation synced to the video playback position (sketched below)
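the compositing and the sync are the interesting bits. a sketch of the approach – not the app's exact code – with video in an AVPlayerLayer, the scene in a transparent SCNLayer above it, and the scene clock slaved to the player's position (this assumes the scene's animations are keyed with usesSceneTimeBase set):

#import <Cocoa/Cocoa.h>
#import <AVFoundation/AVFoundation.h>
#import <SceneKit/SceneKit.h>

// Layer stack: video at the bottom, a transparent SceneKit layer composited
// on top, scrubbed along by the video's playback position.
- (void)setupLayersInView:(NSView *)view player:(AVPlayer *)player scene:(SCNScene *)scene
{
	[view setWantsLayer:YES];

	AVPlayerLayer *videoLayer = [AVPlayerLayer playerLayerWithPlayer:player];
	videoLayer.frame = view.bounds;
	[view.layer addSublayer:videoLayer];

	SCNLayer *sceneLayer = [SCNLayer layer];
	sceneLayer.scene = scene;
	sceneLayer.frame = view.bounds;
	sceneLayer.backgroundColor = [[NSColor clearColor] CGColor]; // let the video show through
	[view.layer addSublayer:sceneLayer];

	// Slave the 3D scene's clock to the video's: poll the playback position
	// and set the scene time to match.
	[player addPeriodicTimeObserverForInterval:CMTimeMake(1, 30)
	                                     queue:dispatch_get_main_queue()
	                                usingBlock:^(CMTime time) {
		sceneLayer.sceneTime = CMTimeGetSeconds(time);
	}];
}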

diary | 16 may 2014 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

comedy lab » alternative comedy memorial society

getting a robot to perform stand-up comedy was a great thing. we were also proud that we could stage the gig at the barbican arts centre. prestigious, yes, but also giving some credibility to it being a “real gig”, rather than an experiment in a lab.

however, it wasn’t as representative of a comedy gig as we’d hoped. while our ‘robot stand-up at the barbican’ promotion did recruit a viably sized audience (huzzah!), the (human) comics said it was a really weird crowd. in short, we got journalists and robo-fetishists, not comedy club punters. which on reflection is not so surprising. but how to fix?

we needed ‘real’ audiences at ‘real’ gigs, without any recruitment prejudiced by there being a robot in the line-up. we needed to go to established comedy nights and be a surprise guest. thanks to a kind loan from oxford brookes university, we were able to load up artie with our software and take him on a three-day tour of london comedy clubs.

and so, the first gig: the alternative comedy memorial society at soho theatre. a comedian’s comedy club, we were told; a knowledgeable audience expecting acts to be pushing the form. well, fair to say we’re doing something like that.

diary | 02 jun 2014 | tagged: comedy lab · phd · qmat · research

comedy lab » gits and shiggles

the second gig of our tour investigating robo-standup in front of ‘real’ audiences: gits and shiggles at the half moon, putney. it’s a regular night there, and we were booked amongst established comedians for their third birthday special. i was very happy to see the headline act was katherine ryan, whose attitude gets me every time.

shown previously was artie on-stage being introduced. he (it, really) has to be on stage throughout, so we needed to cover him up for a surprise reveal. aside from the many serious set-up issues, i’m pleased i managed to fashion the ‘?’ in a spare moment. to my eye, it makes the difference.

artie has to be on stage throughout as we need to position him precisely in advance. that, and he can’t walk. the precise positioning is because we need to be able to point and gesture at audience members: short of having a full kinematic model of artie and the three-dimensional position of each audience member identified, we manually set the articulations required to point and look at every audience seat within view, while noting where each seat appears in the computer vision’s view. that view is actually a superpower we grant to artie: the ability to see from way above his head, and to do it in the dark. we positioned a small near-infrared GigE Vision camera in the venue’s rigging along with a pair of discreet infra-red floodlights. this view is shown above, a frame grabbed during setup that has hung around since.
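to give a flavour of what that look-up amounts to, here's a hypothetical sketch – the struct, its fields and the function are all illustrative, not the rig's actual code:

#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Each audience seat pairs a manually-set articulation (to look and point at
// that seat) with the region where the seat appears in the overhead camera's
// frame. All fields are illustrative.
typedef struct {
	double headYaw, headPitch;	// look-at articulation, degrees
	double armYaw, armPitch;	// point-at articulation, degrees
	CGRect imageRegion;		// seat's region in the camera image
} SeatPose;

// Given e.g. a face detected at a point in the camera frame, find which seat
// it falls in; the stored articulation can then be sent to the robot.
static NSInteger seatForPoint(CGPoint point, const SeatPose *seats, NSInteger count)
{
	for (NSInteger i = 0; i < count; i++) {
		if (CGRectContainsPoint(seats[i].imageRegion, point)) {
			return i;
		}
	}
	return NSNotFound;
}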

diary | 03 jun 2014 | tagged: comedy lab · phd · qmat · research

comedy lab » angel comedy

third gig: angel comedy. again, an established comedy club, and again a different proposition. a nightly, free venue, known to be packed. wednesday was newcomers’ night which, again, was somewhat appropriate.

what i remember most vividly has nothing to do with our role in it, but was rather the compère warming up the crowd after the interval. it was a masterclass in rallying a crowd into an audience (probably particularly warranted given the recruitment message of ‘free’ combined with inexperienced acts). i rue to this day not recording it.

diary | 04 jun 2014 | tagged: comedy lab · phd · qmat · research

visualising everything

visualising head pose, light state, laugh state, computer vision happiness, breathing belt. and, teh pretty. huzzah.

diary | 21 jun 2014 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

writing up

if only writing up the phd was always like this. beautiful room, good friends, excellent facilitation by thinkingwriting.

diary | 12 aug 2014 | tagged: phd · qmat · research

rotation matrix ambiguities

the head pose arrows look like they’re pointing in the right direction… right? well, of course, it’s not that simple.

the dataset processing script vicon exporter applies an offset to the raw angle-axis fixture pose, to account for the hat not being straight. the quick and dirty way to get these offsets is to say that at a certain time everybody is looking directly forward. that might have been ok if i’d thought to make it part of the experiment procedure, but i didn’t, and even if i had i’d still have my doubts. but we have a visualiser! it is interactive! it can be hacked to nudge things around!

except that the visualiser just points an arrow along a gaze vector, and that doesn’t give you a definitive orientation to nudge around. this opens up a can of worms where everything that could have thwarted it working, did.

“The interpretation of a rotation matrix can be subject to many ambiguities.”
http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities
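for the record, the two identities doing all the damage – a matrix that rotates points is the transpose of the one that rotates the coordinate space, and transposing reverses a product – which is why the multiplication order has to flip between the viewer and the exporter below (in the code comments' own MATLAB prime notation):

R_space = R_point'    and    (R1 * R2)' = R2' * R1'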

hard-won code –

DATASET VISUALISER

// Now write MATLAB code to console which will generate correct offsets from this viewer's modelling with SceneKit
for (NSUInteger i = 0; i < [self.subjectNodes count]; ++i)
{
	// Vicon Exporter calculates gaze vector as
	// gaze = [1 0 0] * rm * subjectOffsets{subjectIndex};
	// rm = Rotation matrix from World to Mocap = Rwm
	// subjectOffsets = rotation matrix from Mocap to Offset (ie Gaze) = Rmo

	// In this viewer, we model a hierarchy of
	// Origin Node -> Audience Node -> Mocap Node -> Offset Node, rendered as axes.
	// The Mocap node is rotated with Rmw (ie. rm') to comply with reality.
	// Aha. This is because in this viewer we are rotating the coordinate space not a point as per exporter

	// By manually rotating the offset node so its axes register with the head pose in video, we should be able to export a rotation matrix
	// We need to get Rmo as rotation of point
	// Rmo as rotation of point = Rom as rotation of coordinate space

	// In this viewer, we have
	// Note i. these are rotations of coordinate space
	// Note ii. we're doing this by taking 3x3 rotation matrix out of 4x4 translation matrix
	// [mocapNode worldTransform] = Rwm
	// [offsetNode transform] = Rmo
	// [offsetNode worldTransform] = Rwo

	// We want Rom as rotation of coordinate space
	// Therefore Offset = Rom = Rmo' = [offsetNode transform]'

	// CATransform3D is however transposed from rotation matrix in MATLAB.
	// Therefore Offset = [offsetNode transform]

	SCNNode* node = self.subjectNodes[i][@"node"];
	SCNNode* mocapNode = [node childNodeWithName:@"mocap" recursively:YES];
	SCNNode* offsetNode = [mocapNode childNodeWithName:@"axes" recursively:YES];

	CATransform3D Rom = [offsetNode transform];

	printf("offsets{%lu} = [%f, %f, %f; %f, %f, %f; %f, %f, %f];\n",
		   (unsigned long)i+1,
		   Rom.m11, Rom.m12, Rom.m13,
		   Rom.m21, Rom.m22, Rom.m23,
		   Rom.m31, Rom.m32, Rom.m33
		   );

	// BUT! For this to actually work, this requires Vicon Exporter to be
	// [1 0 0] * subjectOffsets{subjectIndex} * rm;
	// note matrix multiplication order

	// Isn't 3D maths fun.
	// "The interpretation of a rotation matrix can be subject to many ambiguities."
	// http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities
}

VICON EXPORTER

poseData = [];
for frame=1:stopAt
	poseline = [frameToTime(frame, dataStartTime, dataSampleRate)];
	frameData = reshape(data(frame,:), entriesPerSubject, []);
	for subjectIndex = 1:subjectCount

		%% POSITION
		position = frameData(4:6,subjectIndex)';

		%% ORIENTATION
		% Vicon V-File uses axis-angle represented in three datum, the axis is the xyz vector and the angle is the magnitude of the vector
		% [x y z, |xyz| ]
		ax = frameData(1:3,:);
		ax = [ax; sqrt(sum(ax'.^2,2))'];
		rotation = ax(:,subjectIndex)';

		%% ORIENTATION CORRECTED FOR OFF-AXIS ORIENTATION OF MARKER STRUCTURE
		rm = vrrotvec2mat(rotation);

		%% if generating offsets via calcOffset then use this
		% rotation = vrrotmat2vec(rm * offsets{subjectIndex});
		% gazeDirection = subjectForwards{subjectIndex} * rm * offsets{subjectIndex};

		%% if generating offsets via Comedy Lab Dataset Viewer then use this
		% rotation = vrrotmat2vec(offsets{subjectIndex} * rm); %actually, don't do this as it creates some axis-angle with imaginary components.
		gazeDirection = [1 0 0] * offsets{subjectIndex} * rm;

		poseline = [poseline position rotation gazeDirection];
	end
	poseData = [poseData; poseline];
end

diary | 19 aug 2014 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

robot comedy lab: workshop paper

minos gave a seminar on his engineering efforts for robot stand-up, we back-and-forthed on the wider framing of the work, and a bit of that is published here. his write-up.

workshop paper presented at humanoid robots and creativity, a workshop at humanoids 2014.

pdf download

diary | 18 nov 2014 | tagged: comedy lab · phd · qmat · research

through the eyes

with the visualiser established, it was trivial to attach the free-view camera to the head pose node and boom: first-person perspective. to be able to see through the eyes of anyone present is such a big thing.
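in SceneKit terms it really is just re-parenting a camera. a sketch – the “mocap” node name is as per the viewer's scene graph, the rest is illustrative:

#import <SceneKit/SceneKit.h>

// Parent a camera node to a subject's head-pose node and the renderer
// sees what they see.
SCNNode *cameraNode = [SCNNode node];
cameraNode.camera = [SCNCamera camera];

SCNNode *headNode = [scene.rootNode childNodeWithName:@"mocap" recursively:YES];
[headNode addChildNode:cameraNode];

sceneView.pointOfView = cameraNode;	// sceneView: the viewer's SCNView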

diary | 13 jan 2015 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

oriented-to test

need a hit-test for people orienting to others. akin to gaze, but the interest here is what it looks like you’re attending to. but what should that hit-test be? visualisation and parameter tweaking to the rescue…
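one candidate: a cone test, where you’re ‘oriented to’ whatever falls within some angle of your head-pose direction – the cone half-angle being exactly the parameter to tweak against the visualisation. a sketch:

#import <SceneKit/SceneKit.h>
#import <math.h>

// Hit-test sketch: the target counts as "oriented to" when it falls within
// a cone around the head-pose direction.
static BOOL isOrientedTo(SCNVector3 head, SCNVector3 gaze, SCNVector3 target, double coneHalfAngle)
{
	double tx = target.x - head.x, ty = target.y - head.y, tz = target.z - head.z;
	double tNorm = sqrt(tx*tx + ty*ty + tz*tz);
	double gNorm = sqrt(gaze.x*gaze.x + gaze.y*gaze.y + gaze.z*gaze.z);
	if (tNorm == 0.0 || gNorm == 0.0) return NO;

	// Angle between the gaze direction and the direction to the target
	double cosAngle = (gaze.x*tx + gaze.y*ty + gaze.z*tz) / (gNorm * tNorm);
	cosAngle = fmax(-1.0, fmin(1.0, cosAngle));
	return acos(cosAngle) < coneHalfAngle;
}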

diary | 03 feb 2015 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

robot comedy lab: journal paper

the robot stand-up work got a proper write-up. well, part of it got a proper write-up, but so it goes.

This paper demonstrates how humanoid robots can be used to probe the complex social signals that contribute to the experience of live performance. Using qualitative, ethnographic work as a starting point we can generate specific hypotheses about the use of social signals in performance and use a robot to operationalize and test them. […] Moreover, this paper provides insight into the nature of live performance. We showed that audiences have to be treated as heterogeneous, with individual responses differentiated in part by the interaction they are having with the performer. Equally, performances should be further understood in terms of these interactions. Successful performance manages the dynamics of these interactions to the performer’s and audiences’ benefit.

pdf download

diary | 25 aug 2015 | tagged: comedy lab · phd · qmat · research

accepted for ISPS2017

‘visualising performer–audience dynamics’ spoken paper accepted at ISPS 2017, the international symposium on performance science. this is doubly good, as i’ve long been keen to visit reykjavík and explore iceland.

diary | 13 apr 2017 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

submission

…finally.

diary | 09 may 2017 | tagged: phd · research · qmat

hic et nunc

at some point in my PhD, i found out the phrase i started out with – “the here and now of us together” – had some cultural richness i can’t deny some pleasure over. in there is the imperative motto for the satisfaction of desire: “I need it, Here and Now”.

so here is a little indulgence for my talk at ISPS2017 in the making. while it amuses me, i have to face up to the fact that wearing a slogan t-shirt in latin is clearly a dick move.

diary | 28 aug 2017 | tagged: phd · performer–audience dynamics

isps » performer-audience dynamics talk

had a lot of fun with my talk ‘visualising performer–audience dynamics’ at ISPS 2017. with a title like that, some play with the ‘spoken paper’ format had to be had.

pleasingly, people were coming up to me to say how much they enjoyed it for the rest of the conference. huzzah!

i recorded it, and have stitched it together with the slides. the video is here, on the project page.

diary | 01 sep 2017 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research · iceland · talk

viva

the dissertation had done the talking, and the viva was good conversation about it wrapped up with a “dr. harris” handshake. phew, and woah. having been in a death-grip with the dissertation draft for so long, nothing in the whole experience could touch the wholesomeness of simply hearing “i read it all, and it’s good”.

supervisor –

Dear All,

I’m delighted to report that Toby Harris successfully defended his thesis “Liveness: An Interactional Account” this morning.
The external said: “that was a sheer pleasure”. (very) minor corrections.

Pat.


Pat Healey,
Professor of Human Interaction,
Head of the Cognitive Science Research Group,
Queen Mary University of London

external examiner –

This is a richly intriguing study of the processes of interaction between performers, audiences and environments in stand-up comedy – a nice topic to choose since it is one where, even more than in straight theatrical contexts, ‘liveness’ is intuitively felt to be crucial. But as Matthew Harris says, what constitutes ‘liveness’ and how precisely it operates and matters, remains elusive – if pugnaciously clung to!

The conclusions reached and offered – which more than anything insist on the value and necessity of seeing all audience contexts as socially structured situations – both rings right, and seems to be based well in the details of the data presented. And the cautions at the end, about the risks with moving to higher levels of abstraction (wherein ‘the audience’ might become increasingly massified, rather than understood processually) looks good and valuable.

The specific claims made – that the ‘liveness’ of the performer matters little (e.g. by replacing him/her with a robot, or with a recording) – will nicely infuriate those with an over-investment in the concept, and will need careful presentation when this work is published. The subsequent experiment on the role of spotlighting or darkness on the kinds and levels of interaction audiences have with each other, and with the performer are also nicely counter-intuitive.

internal examiner –

I greatly enjoyed reading this thesis. It strikes a good balance between theory and experiment and makes several well-defined contributions. The literature reviews show a keen insight and a good synthesis of ideas, and the motivation for each of the experiments is made clear. The writing is polished and engaging, and the order of ideas in each chapter is easy to follow.

diary | 06 oct 2017 | tagged: phd · research · qmat

renaissance garb means dr*spark

dressed up as a renaissance italian, doffed my hat, and got handed a certificate… that was a placeholder, saying the real one will be in the post. truly a doctor, and yet still one little thing!

best of all is that first-born is no longer my totem of not having got this done; the bigger and better she got, the more egregious the not-being-a-doctor was.

diary | 18 dec 2017 | tagged: phd · research · qmat
