Content

Comedy Lab

2013

Three live performance experiments researching performer–audience–audience interaction. They are the empirical contribution of my PhD on ‘liveness’, and required the ‘visualising performer–audience dynamics’ work.

Comedy Lab: Human vs. Robot

An experiment that tests audience responses to a robot performer’s gaze and gesture. In collaboration with Kleomenis Katevas and part of Hack the Barbican. For my PhD, it provides the first direct evidence of individual performer–audience dynamics within an audience, and establishes the viability of live performance experiments.

Comedy Lab: Live vs. recorded

The experiment contrasts live and recorded performance – directly addressing a topic that animates so much of the debate around ‘liveness’. The data provides good evidence for social dynamics within the audience, but little evidence for performer–audience interaction. While these audiences were indifferent to live vs. mediated performance, the results affirm that events are social-spatial environments with heterogeneous audiences. They also emphasise that both conditions are live events: even though the recorded condition is ostensibly not live, a live audience is present regardless, and it is this that matters.

Comedy Lab: Lit vs. all lit

The experiment contrasts being lit with being in the dark, when those around are lit or not. The data provides strong evidence for social dynamics within the audience, and limited evidence for performer–audience dynamics. Spotlighting individuals reduces their responses, while everyone being lit increases their responses: it is the effect of being picked out, not of being lit per se, that matters. The results affirm that live events are social-spatial environments with heterogeneous audiences.

Files

Diary entries

comedy lab

Come and see some free stand-up comedy, in the name of science!

For my PhD, I’m staging a comedy gig. The comedians are booked; now I need an audience of volunteers. You won’t hear me trying to make jokes out of performance theory or the theatrical wrangling I’ve had to do to pull this together; rather, you’ll get real stand-up from professional comics. Doing their thing will be Tiernan Douieb and Stuart Goldsmith. You’ll have a fun time, and I’ll be able to analyse – in broad strokes – what makes a good performance.

Tuesday 4th June, shows at 3pm and 5pm, at Queen Mary University of London. It’s critical we get the right numbers, so please sign up here. You’ll get a confirmation email with the attendance details.

Again: http://tobyz.net/comedylab

comedy lab'd

it happened! performers performed, audiences audienced, and now i have a lot of data to organise and analyse.

thanks to all who took part, and apologies to all whose hair the motion capture hats might have messed with. can’t show too much of the experiment for various reasons, but pictured is main act stuart goldsmith who, yep, left with hair somewhat flatter than when he arrived.

it’s a strange feeling doing an ambitious experiment like this, partly because so much rides on such a short-lived, one-off thing. more, though, because it doesn’t represent the goal you started with – i.e. a designed, informed instance of a live event that exploits its liveness – but rather aims to make things worse within the existing status quo. there’s noble reasoning in that, for you really only get to see what’s going on when you start prodding with a stick and what once worked nicely starts to break up. doesn’t stop weird feelings lingering for days afterwards though.

comedy lab: human vs robot

Come and see some more stand-up comedy, in the name of science – and this time, there’s a robot headlining!

What makes a good performance? By pitting stand-up comics Tiernan Douieb and Andrew O’Neill against a life-size robot in a battle for laughs, researchers at Queen Mary, University of London hope to find out more – and are inviting you along.
In a collaboration between Queen Mary’s Cognitive Science Research Group, RoboThespian’s creators Engineered Arts, and the open-access spaces of Hack The Barbican, the researchers are staging a stand-up gig where the headline act is a robot – a live experiment into performer–audience interaction.
This research is part of work on audience interaction being pioneered by the Cognitive Science Research Group. It looks at the ways in which performers and audiences interact with each other and how this affects the experience of ‘liveness’. The experiment with RoboThespian tests ideas about how comedians deliver their material to maximise comic effect.

Shows at 6pm, Wednesday 7th and Thursday 8th August, Barbican Centre. Part of Hack the Barbican.

Poster attached. Aside from the science, the designer in me is quite content with how that little task turned out.

comedy lab: tiernan douieb

“good evening ladies and gentlemen, welcome to the barbican centre. ‘comedy lab: human vs robot’ will be starting shortly in level minus one. part of hack the barbican, it is a free stand-up gig with a robot headlining.”

so said i, on the public address system across all the spaces of the barbican centre. didn’t see that coming when i went to find out how to request an announcement.

the gig started, people came – this photo makes it look a bit thin, you can’t see all the seated people – and tiernan did his warm-up thing. and most brilliantly, he didn’t run a mile when we brought up the idea of another comedy lab, and getting a robot to tell jokes.

comedy lab: andrew o'neill

first act proper: andrew o’neill. go watch the opening of this show, it’s perfect: http://www.youtube.com/watch?v=aGjbmywaKMI

highlight of this show had to be turning to the many kids who had appeared at the front, and singing his bumface song. to be clear, the bumface song is far from his funniest gag, not even anything much beyond the antics of a school playground. but what is so interesting is how that content is transformed, in that moment of live performance and audience state, into a bubble of joy. that’s what we’re after. he had lots of techniques for eliciting response from a slightly wary audience.

it’s why we’ve chosen the genre for these live experiments, but it bears repeating: stand-up comedy really is so much more than the jokes.

comedy lab: robothespian

“I never know how to start, which is probably because I run off windows 8” – and there were more laughs than groans!

as part of the media and arts technology phd you spend six months embedded somewhere interesting, working on something interesting. i did a deep dive into web adaptations and the semantic mark-up of stories at the bbc. kleomenis katevas has spent five months at engineered arts working on realtime interaction with their robothespian, and what better test could there be than a re-staging of comedy lab.

beyond tiernan’s script and kleomenis’s programming of the robot, what was most exciting was to see a robot do colombine gardair’s ‘woooo’ gesture, and the audience respond exactly as they do in covent garden. that’s the first time we’ve tried out something we’ve learnt about performance from doing this line of research… and it worked.

robothespian’s first gig was straight delivery of the script and ‘press play’ stagecraft. it went surprisingly well - it really did get laughs and carried the audience to a fair degree. tomorrow, we turn on the interactivity…

comedy lab: instrumenting audiences

getting a robot to tell jokes is no simple feat. programming and polishing a script for the robot to deliver is challenge enough, but trying to get that delivery to be responsive to the audience, to incorporate stagecraft that isn’t simply a linear recording… now that is hard. of course, in the research world, we like hard, so reading the audience and tailoring the delivery appropriately is exactly what we set out to do.

having had robothespian deliver what was essentially a linear script for his first night performance, for his second performance we turned on the interactivity. we had a camera and microphone giving us an audio-visual feed of the audience, and processed this to give us information to make decisions about robothespian’s delivery. a simple example is waiting until any audience audio – laughing, you hope – dies down before proceeding to the next section of the routine.

more interesting to us is what having a humanoid robot allows us to do, as eye contact, body orientation, gesture and so on form so much of co-present human-human interaction. for that you need more than a single audio feed measuring the audience as a whole: you need to know exactly where people are and what they’re doing. in the photo you can see our first iteration of solving this, using the amazingly robust fraunhofer SHORE software, which detects faces and provides a number of metrics for each recognised face, such as male/female, eyes open/closed, and – most usefully for instrumenting a comedy gig – a happiness score, which is effectively a smileometer. from this, robothespian delivered specific parts of the routine to the audience member judged most receptive at that point, was able to interject encouragement and admonitions, gestured scanning across the audience, and so on.
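a rough sketch of that decision loop, in python. the sensor hooks below are stand-ins – the real system was fed by the venue microphone and SHORE’s per-face metrics – and every name, threshold and value here is illustrative rather than our production code.

import time

# Stand-in sensor hooks. In the real setup these were fed by the venue
# microphone and by Fraunhofer SHORE's per-face metrics (notably the
# happiness score); the functions and values here are illustrative only.
def get_audience_audio_level():
    """Current audience audio level, 0.0 (silence) to 1.0 (loud)."""
    return 0.0  # stub

def get_face_happiness():
    """Dict of {face_id: happiness_score} for each detected face."""
    return {}  # stub

def deliver(section, target_face):
    """Send a routine section to the robot, optionally addressed to one person."""
    print("delivering %r to face %s" % (section, target_face))

QUIET_THRESHOLD = 0.2  # illustrative value
POLL_INTERVAL = 0.1    # seconds

def wait_for_laughter_to_die_down(timeout=10.0):
    """Hold the next joke until the room goes quiet (or we give up waiting)."""
    deadline = time.time() + timeout
    while time.time() < deadline and get_audience_audio_level() >= QUIET_THRESHOLD:
        time.sleep(POLL_INTERVAL)

def most_receptive_face():
    """Pick the audience member with the highest 'smileometer' score, if any."""
    happiness = get_face_happiness()
    return max(happiness, key=happiness.get) if happiness else None

def run_routine(sections):
    for section in sections:
        wait_for_laughter_to_die_down()
        deliver(section, most_receptive_face())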

research being hard, it seems turning the interaction on backfired, as the gross effect was to slow down the delivery, taking too long between jokes. but this is a learning process, and tweaking those parameters is something we’ll be working on. and – a big point i’ve learnt about research – you often learn more when things go wrong, or by deliberately breaking things, than when things work or go as expected. so there’ll be lots to pore over in the recordings here, comparing performer-audience-audience interaction between human and robot.

comedy lab: evening standard article

nice article in the london evening standard on comedy lab, link below and photo of it in the paper attached:
http://www.standard.co.uk/news/london/scientists-create-robot-to-take-on-comedians-in-standup-challenge-8753779.html

here’s the q & a behind the article, our answers channeled by pat healey

What does using robots tell us about the science behind stand-up comedy?
Using robots allows us to experiment with the gestures, movements and expressions that stand-up comedians use and test their effects on audience responses.

What’s the aim of the experiment? Is it to design more sophisticated robots and replace humans?
We want to understand what makes live performance exciting, how performers ‘work’ an audience; the delivery vs. the content.

Is this the first time an experiment of this kind has been carried out? How long is the research project?
Robot comedy is an emerging genre. Our performance experiment is the first to focus on how comedians work their audiences.

Tell me more about RoboThespian. Does he just say the comedy script or is he (and how) more sophisticated? Does he walk around the stage/make hand movements/laugh etc?
This research is really about what’s not in the script - we’re looking at the performance; the gestures, gaze, movement and responsiveness that make live comedy so much more than reading out jokes.

How does his software work?
We use computer vision and audio processing to detect how each person in the audience is responding. The robot uses this to tailor who it talks to and how it delivers each joke - making each performance unique.

What have you learned already from the show? Does the robot get more laughs? Does he get heckled? What has been the feedback from the audience afterwards?
I think Robothespian had a great opening night.

Do you see robots performing stand-up in future?
It will take some time to emerge but yes, I think this will come. Interactive technology is used increasingly in all forms of live performance.

comedy lab: new scientist article

“Hello, weak-skinned pathetic perishable humans!” begins the stand-up comic. “I am here with the intent of making you laugh.”
A curiously direct beginning for most comics, but not for Robothespian. This humanoid robot, made by British company Engineered Arts, has the size and basic form of a tall, athletic man but is very obviously a machine: its glossy white face and torso taper into a wiry waist and legs, its eyes are square video screens and its cheeks glow with artificial light.
Robothespian’s first joke plays on its mechanical nature and goes down a storm with the audience at the Barbican Centre in London. “I never really know how to start,” it says in a robotic male voice. “Which is probably because I run off Windows 8.”
The performance last week was the brainchild of Pat Healey and Kleomenis Katevas at Queen Mary University of London, who meant it not only to entertain but also to investigate what makes live events compelling.
As we watched, cameras tracked our facial expressions, gaze and head movements. The researchers will use this information to quantify our reactions to Robothespian’s performance and to compare them with our responses to two seasoned human comics – Andrew O’Neill and Tiernan Douieb – who performed before the robot. […]

full article: http://www.newscientist.com/article/dn24050-robot-comedian-stands-up-well-against-human-rivals.html

bit miffed that the brainchild line has been re-written to sound definitively like it’s pat and minos only, but hey. in the context of that sentence, it should be my name: comedy lab is my programme, prodding what makes performance and the liveness of live events compelling is my phd topic.

forked: video annotation app

a major part of the analysis for comedy lab is manually labelling what is happening when in the recordings. for instance, whether an audience member is laughing or not – for each audience member, throughout the performance. all in all, this adds up to a lot of work.

for this labelling to be accurate, let alone simply to get through it all, the interface of the video annotation software needs to be responsive - you are playing with time, in realtime. i was having such a bad time with elan[1] that the least bad option became writing my own simple annotator: all it need be is a responsive video player and a bag of keyboard shortcuts that generates a text document of annotations and times. luckily, there was an open-source objective-c / cocoa annotator out there, and so instead i’ve forked the code and hacked in the features i needed. never have i been so glad to be able to write native os x applications.

if you need features such as annotations drawn from a controlled vocabulary that are continuous, i.e. non-overlapping over time, or a workflow where annotation can be done in one pass with one hand on the keyboard and one on a scroll-gesture mouse/trackpad, the application is zipped and attached to this post (tested on 10.8, should work on earlier versions).

if you are a cocoa developer with similar needs, the code is now on github and i can give some pointers if needed.


  1. to be clear, elan is a powerful tool for which the developers deserve respect, and through its import and export it is still the marshalling depot of my data. the underlying issue i suspect is the java runtime, as trying alternatives such as anvil didn’t work out either. ↩︎

comedy lab: first results

hacked some lua, got software logging what i needed; learnt python, parsed many text files; forked a cocoa app, classified laugh state for fifteen minutes times 16 audience members times two performances; and so on. eventually, a dataset of audience response measures for every tenth of a second. and with that: results. statistics. exciting.
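for the curious, the resampling step looks something like the sketch below – assuming the annotations have already been parsed into (start, end, label) intervals per audience member, which is an assumption about the export format rather than the actual files.

# Resample interval annotations (e.g. one audience member's laugh state) onto
# a regular 0.1 s grid, the sampling rate used for the response dataset.
def resample_annotations(intervals, duration, step=0.1, default="none"):
    """intervals: list of (start_s, end_s, label); returns one label per step."""
    n = int(round(duration / step))
    samples = [default] * n
    for start, end, label in intervals:
        first = max(0, int(round(start / step)))
        last = min(n, int(round(end / step)))
        for i in range(first, last):
            samples[i] = label
    return samples

# example: laughing from 12.4 s to 15.0 s of a fifteen-minute performance
series = resample_annotations([(12.4, 15.0, "laughing")], duration=900.0)
print(series[124], series[149], series[150])  # laughing laughing none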

a teaser of that is above; peer review needs to run its course before announcements can be made. as a little fun, though, here is the introduction of the paper the first results are published in – at some point before it got re-written to fit house style. this has… more flavour.

Live performance is important. We can talk of it “lifting audiences slightly above the present, into a hopeful feeling of what the world might be like if every moment of our lives were as emotionally voluminous, generous, aesthetically striking, and intersubjectively intense” \cite{Dolan:2006hv}. We can also talk about bums on seats and economic impact — 14,000,000 and £500,000,000 for London theatres alone in recent years \cite{Anonymous:2013us}. Perhaps more importantly, it functions as a laboratory of human experience and exploration of interaction \cite{FischerLichte:2008wo}. As designers of media technologies and interactive systems this is our interest, noting the impact of live performance on technology \cite{Schnadelbach:2008ii, Benford:2013ia, Reeves:2005uw, Sheridan:2007wc, Hook:2013vp} and how technology transforms the cultural status of live performance \cite{Auslander:2008te, Barker:2012iq}. However, as technology transforms the practice of live performance, the experiential impact of this on audiences is surprisingly under-researched. Here, we set out to compare this at its most fundamental: audience responses to live and recorded performance.

science photo prize

thanks to this shot, science outreach, and a national competition, i have a new camera. first prize! huzzah!

the full story is here — http://www.qmul.ac.uk/media/news/items/se/126324.html

screenshot above from — http://www.theguardian.com/science/gallery/2014/mar/31/national-science-photography-competition-in-pictures

comedy lab dataset viewer

happy few days bringing up a visualiser app for my PhD. integrating the different data sources of my live performance experiment had brought up some quirks that didn’t seem right. i needed to be confident that everything was actually in sync and spatially correct, and, well, it got to the point where i decided to damn well visualise the whole thing.

i hoped to find a nice python framework to do this in, which would neatly extend the python work already doing most of the processing on the raw data. however i didn’t find anything that could easily combine video with a 3D scene. but i do know how to write native mac apps, and there’s a new 3D scene framework there called SceneKit…

so behold Comedy Lab Dataset Viewer. it’s not finished, but it lives!

  • NSDocument based application, so i can have multiple performances simultaneously
  • A data importer that reads the motion capture data and constructs the 3D scene and its animation
  • A stack of CoreAnimation layers compositing 3D scene over video
  • 3D scene animation synced to the video playback position

comedy lab » alternative comedy memorial society

getting a robot to perform stand-up comedy was a great thing. we were also proud that we could stage the gig at the barbican arts centre. prestigious, yes, but also giving some credibility to it being a “real gig”, rather than an experiment in a lab.

however, it wasn’t as representative of a comedy gig as we’d hoped. while our ‘robot stand-up at the barbican’ promotion did recruit a viably sized audience (huzzah!), the (human) comics said it was a really weird crowd. in short, we got journalists and robo-fetishists, not comedy club punters. which on reflection is not so surprising. but how to fix?

we needed ‘real’ audiences at ‘real’ gigs, without any recruitment prejudiced by there being a robot in the line-up. we needed to go to established comedy nights and be a surprise guest. thanks to oxford brookes university’s kind loan, we were able to load up artie with our software and take him on a three-day tour of london comedy clubs.

and so, the first gig: the alternative comedy memorial society at soho theatre. a comedian’s comedy club, we were told; a knowledgeable audience expecting acts to be pushing the form. well, fair to say we’re doing something like that.

comedy lab » gits and shiggles

the second gig of our tour investigating robo-standup in front of ‘real’ audiences: gits and shiggles at the half moon, putney. a regular night there, we were booked amongst established comedians for their third birthday special. was very happy to see the headline act was katherine ryan, whose attitude gets me every time.

shown previously was artie on-stage being introduced. he (it, really) has to be on stage throughout, so we needed to cover him up for a surprise reveal. aside from the many serious set-up issues, i’m pleased i managed to fashion the ‘?’ in a spare moment. to my eye, makes the difference.

artie has to be on stage throughout as we need to position him precisely in advance. that, and he can’t walk. the precise positioning is because we need to be able to point and gesture at audience members: short of having a full kinematic model of artie and the three-dimensional position of each audience member identified, we manually set the articulations required to point and look at every audience seat within view, while noting where each audience seat appears in the computer vision’s view. that view is actually a superpower we grant to artie: the ability to see from way above his head, and to do that in the dark. we position a small near-infrared gig-e vision camera in the venue’s rigging along with a pair of discreet infra-red floodlights. this view is shown above, a frame grabbed during setup that has hung around since.
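in code terms the lookup is little more than the sketch below – the data structure, numbers and control call are illustrative, not the measured seat map or artie’s actual control interface.

# Each seat pairs the image region it occupies in the overhead near-infrared
# camera view with the manually set articulation for pointing and looking at
# it. The regions, poses and control call are placeholders, not measurements.
SEATS = [
    {"id": "A1", "region": (120, 300, 80, 60), "pose": {"torso_yaw": -20, "head_pitch": -10}},
    {"id": "A2", "region": (210, 300, 80, 60), "pose": {"torso_yaw": -5, "head_pitch": -10}},
    # ... one entry per audience seat within the robot's view
]

def seat_for_detection(x, y):
    """Map a face detection at image coordinates (x, y) to the seat it falls in."""
    for seat in SEATS:
        rx, ry, rw, rh = seat["region"]
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return seat
    return None

def point_at_detection(x, y, set_articulation):
    """Look up the seat under a detection and apply its stored articulation."""
    seat = seat_for_detection(x, y)
    if seat is not None:
        set_articulation(seat["pose"])  # robot control is abstracted away here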

comedy lab » angel comedy

third gig: angel comedy. again, an established comedy club and again a different proposition. a nightly, free venue, known to be packed. wednesday was newcomers’ night, which, again, was somewhat appropriate.

what i remember most vividly has nothing to do with our role in it, but was rather the compère warming up the crowd after the interval. it was a masterclass in rallying a crowd into an audience (probably particularly warranted given the recruitment message of ‘free’ combined with inexperienced acts). i rue to this day not recording it.

visualising everything

visualising head pose, light state, laugh state, computer vision happiness, breathing belt. and, teh pretty. huzzah.

virtual camera, real camera

of course, aligning the virtual camera of the 3D scene with the real camera’s capture of the actual scene was never going to be straightforward. easy to get to a proof of concept. hard to actually register the two. i ended up rendering a cuboid grid on the seat positions in the 3D scene, drawing by hand (well, mouse) what looked about right on a video still, and trying to match the two sets of lines by nudging coordinates and fields-of-view with some debug-mode hotkeys i hacked in.

in hindsight, i would have stuck motion capture markers on the cameras. so it goes.
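for the record, the registration could also have been solved more directly: with the seat-grid corners known in 3D and hand-marked on the video still, opencv’s solvePnP recovers the camera pose in one call. this is not what the viewer does – it nudges parameters by hand – and the point and intrinsics values below are placeholders.

import numpy as np
import cv2

# Known 3D positions of seat-grid corners (metres, motion-capture frame) and
# where those corners were hand-marked on a video still (pixels).
# All values below are placeholders, not the experiment's measurements.
object_points = np.array([
    [0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0],
    [0.0, 0.5, 0.0], [0.5, 0.5, 0.0], [1.0, 0.5, 0.0],
], dtype=np.float64)
image_points = np.array([
    [412.0, 510.0], [530.0, 505.0], [648.0, 500.0],
    [405.0, 430.0], [522.0, 426.0], [640.0, 421.0],
], dtype=np.float64)

# Rough intrinsics: focal length in pixels, principal point at image centre.
camera_matrix = np.array([[1200.0, 0.0, 960.0],
                          [0.0, 1200.0, 540.0],
                          [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
R, _ = cv2.Rodrigues(rvec)  # rotation matrix taking world points into the camera frame
print("camera rotation:\n", R, "\ncamera translation:\n", tvec)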

rotation matrix ambiguities

the head pose arrows look like they’re pointing in the right direction… right? well, of course, it’s not that simple.

the dataset processing script, vicon exporter, applies an offset to the raw axis-angle fixture pose, to account for the hat not being straight. the quick and dirty way to get these offsets is to say that at a certain time everybody is looking directly forward. that might have been ok if i’d thought to make it part of the experiment procedure, but i didn’t, and even if i had, i’d have my doubts. but we have a visualiser! it is interactive! it can be hacked to nudge things around!

except that the visualiser just points an arrow along a gaze vector, and that doesn’t give you a definitive orientation to nudge around. this opens up a can of worms where everything that could have thwarted it working, did.

“The interpretation of a rotation matrix can be subject to many ambiguities.”
http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities
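one way to see the problem: any rotation about the gaze vector itself leaves the arrow exactly where it is, so matching the arrow against the video can never pin down the full offset. a tiny numpy illustration, not part of the pipeline:

import numpy as np

def rotation_about_axis(axis, angle):
    """Rotation matrix for `angle` radians about unit vector `axis` (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

gaze = np.array([1.0, 0.0, 0.0])               # the arrow the visualiser draws
spin = rotation_about_axis(gaze, np.pi / 3.0)  # any spin about that arrow...
print(np.allclose(spin @ gaze, gaze))          # ...leaves it unchanged: True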

hard-won code –

DATASET VISUALISER

// Now write MATLAB code to console which will generate correct offsets from this viewer's modelling with SceneKit
for (NSUInteger i = 0; i < [self.subjectNodes count]; ++i)
{
	// Vicon Exporter calculates gaze vector as
	// gaze = [1 0 0] * rm * subjectOffsets{subjectIndex};
	// rm = Rotation matrix from World to Mocap = Rwm
	// subjectOffsets = rotation matrix from Mocap to Offset (ie Gaze) = Rmo

	// In this viewer, we model a hierarchy of
	// Origin Node -> Audience Node -> Mocap Node -> Offset Node, rendered as axes.
	// The Mocap node is rotated with Rmw (ie. rm') to comply with reality.
	// Aha. This is because in this viewer we are rotating the coordinate space not a point as per exporter

	// By manually rotating the offset node so it's axes register with the head pose in video, we should be able to export a rotation matrix
	// We need to get Rmo as rotation of point
	// Rmo as rotation of point = Rom as rotation of coordinate space

	// In this viewer, we have
	// Note i. these are rotations of coordinate space
	// Note ii. we're doing this by taking 3x3 rotation matrix out of 4x4 translation matrix
	// [mocapNode worldTransform] = Rwm
	// [offsetNode transform] = Rmo
	// [offsetNode worldTransform] = Rwo

	// We want Rom as rotation of coordinate space
	// Therefore Offset = Rom = Rmo' = [offsetNode transform]'

	// CATransform3D is however transposed from rotation matrix in MATLAB.
	// Therefore Offset = [offsetNode transform]

	SCNNode* node = self.subjectNodes[i][@"node"];
	SCNNode* mocapNode = [node childNodeWithName:@"mocap" recursively:YES];
	SCNNode* offsetNode = [mocapNode childNodeWithName:@"axes" recursively:YES];

	// mocapNode has rotation animation applied to it. Use presentation node to get rendered position.
	mocapNode = [mocapNode presentationNode];

	CATransform3D Rom = [offsetNode transform];

	printf("offsets{%lu} = [%f, %f, %f; %f, %f, %f; %f, %f, %f];\n",
		   (unsigned long)i+1,
		   Rom.m11, Rom.m12, Rom.m13,
		   Rom.m21, Rom.m22, Rom.m23,
		   Rom.m31, Rom.m32, Rom.m33
		   );

	// BUT! For this to actually work, this requires Vicon Exporter to be
	// [1 0 0] * subjectOffsets{subjectIndex} * rm;
	// note matrix multiplication order

	// Isn't 3D maths fun.
	// "The interpretation of a rotation matrix can be subject to many ambiguities."
	// http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities
}

VICON EXPORTER

poseData = [];
for frame=1:stopAt
	poseline = [frameToTime(frame, dataStartTime, dataSampleRate)];
	frameData = reshape(data(frame,:), entriesPerSubject, []);
	for subjectIndex = 1:subjectCount

		%% POSITION
		position = frameData(4:6,subjectIndex)';

		%% ORIENTATION
		% Vicon V-File uses axis-angle represented in three datum, the axis is the xyz vector and the angle is the magnitude of the vector
		% [x y z, |xyz| ]
		ax = frameData(1:3,:);
		ax = [ax; sqrt(sum(ax'.^2,2))'];
		rotation = ax(:,subjectIndex)';

		%% ORIENTATION CORRECTED FOR OFF-AXIS ORIENTATION OF MARKER STRUCTURE
		rm = vrrotvec2mat(rotation);

		%% if generating offsets via calcOffset then use this
		% rotation = vrrotmat2vec(rm * offsets{subjectIndex});
		% gazeDirection = subjectForwards{subjectIndex} * rm * offsets{subjectIndex};

		%% if generating offsets via Comedy Lab Dataset Viewer then use this
		% rotation = vrrotmat2vec(offsets{subjectIndex} * rm); %actually, don't do this as it creates some axis-angle with imaginary components.
		gazeDirection = [1 0 0] * offsets{subjectIndex} * rm;

		poseline = [poseline position rotation gazeDirection];
	end
	poseData = [poseData; poseline];
end

robot comedy lab: workshop paper

minos gave a seminar on his engineering efforts for robot stand-up, we back-and-forthed on the wider framing of the work, and a bit of that is published here. his write-up.

workshop paper presented at humanoid robots and creativity, a workshop at humanoids 2014.

pdf download

through the eyes

with the visualiser established, it was trivial to attach the free-view camera to the head pose node and boom: first-person perspective. to be able to see through the eyes of anyone present is such a big thing.

oriented-to test

need a hit-test for people orienting to others. akin to gaze, but the interest here is what it looks like you’re attending to. but what should that hit-test be? visualisation and parameter tweaking to the rescue…
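one candidate, sketched below: count someone as oriented-to a target if the angle between their head direction and the vector to that target falls inside a cone, and take the nearest such target. the cone half-angle is exactly the parameter that needed tweaking against the visualisation; the example values are made up.

import numpy as np

def oriented_to(head_position, head_direction, targets, cone_half_angle_deg=20.0):
    """Return the id of the nearest target within the orientation cone, or None.

    targets: dict of {target_id: 3D position}. The cone half-angle is the
    parameter to tune against the visualisation.
    """
    direction = np.asarray(head_direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best_id, best_distance = None, np.inf
    for target_id, position in targets.items():
        to_target = np.asarray(position, dtype=float) - head_position
        distance = np.linalg.norm(to_target)
        if distance == 0.0:
            continue
        cosine = np.clip(direction @ (to_target / distance), -1.0, 1.0)
        angle = np.degrees(np.arccos(cosine))
        if angle <= cone_half_angle_deg and distance < best_distance:
            best_id, best_distance = target_id, distance
    return best_id

# example: audience member at the origin looking along +x, performer 3 m away
targets = {"performer": [3.0, 0.2, 0.0], "neighbour": [0.0, 1.0, 0.0]}
print(oriented_to(np.zeros(3), [1.0, 0.0, 0.0], targets))  # performer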

robot comedy lab: journal paper

the robot stand-up work got a proper write-up. well, part of it got a proper write-up, but so it goes.

This paper demonstrates how humanoid robots can be used to probe the complex social signals that contribute to the experience of live performance. Using qualitative, ethnographic work as a starting point we can generate specific hypotheses about the use of social signals in performance and use a robot to operationalize and test them. […] Moreover, this paper provides insight into the nature of live performance. We showed that audiences have to be treated as heterogeneous, with individual responses differentiated in part by the interaction they are having with the performer. Equally, performances should be further understood in terms of these interactions. Successful performance manages the dynamics of these interactions to the performer’s and audiences’ benefit.

pdf download

accepted for ISPS2017

‘visualising performer–audience dynamics’ spoken paper accepted at ISPS 2017, the international symposium on performance science. this is doubly good, as i’ve long been keen to visit reykjavík and explore iceland.

isps » performer-audience dynamics talk

had a lot of fun with my talk ‘visualising performer–audience dynamics’ at ISPS 2017. with a title like that, some play with the ‘spoken paper’ format had to be had.

pleasingly, for the rest of the conference people kept coming up to me to say how much they enjoyed it. huzzah!

i recorded it, and have stitched it together with the slides. the video is here, on the project page.

comedy lab » on tour, unannounced

an email comes in from a performance studies phd candidate asking if they could watch the whole robot routine from comedy lab: human vs. robot. damn right. i’d love to see someone write about that performance as a performance.

but, better than that staging and its weird audiences (given the advertised title, robo-fetishists and journalists?), there is comedy lab #4: on tour, unannounced. the premise: robot stand-up, to unsuspecting audiences, at established comedy nights. it came a year later, with the opportunity to use another robothespian (thanks oxford brookes!). it addressed the ecological validity issues, and should simply be more fun to watch.

for on tour, unannounced we kept the performance the same – or rather, each performance used the same audience-responsive system to tailor the delivery in realtime. there’s a surprising paucity in the literature about how audiences respond differently to the same production; the idea was that this should be interesting data. so i’ve taken the opportunity to extract from the data set the camera footage of the stage from each night of the tour. and now that is public, at the links below.

the alternative comedy memorial society

gits and shiggles

angel comedy

the robot comedy lab experiments form chapter 4 of my phd thesis ‘liveness: an interactional account’

Four: Experimenting with performance

The literature reviewed in chapter three also motivates an experimental programme. Chapter four presents the first, establishing Comedy Lab. A live performance experiment is staged that tests audience responses to a robot performer’s gaze and gesture. This chapter provides the first direct evidence of individual performer–audience dynamics within an audience, and establishes the viability of live performance experiments.

http://tobyz.net/project/phd

there are currently two published papers –

and finally, on ‘there is a surprising paucity…’, i’d recommend starting with gardair’s mention of mervant-roux.