it’s a funny thing, handing in a thesis, submitting corrections and so on, but not being able to link anyone to the work. finally, so long after may, but at least not so long after the viva, here it is. all 169 pages of it.
dressed up as a renaissance italian, doffed my hat, and got handed a certificate… that was a placeholder, saying the real one will be in the post. truly a doctor, and yet still one little thing to come!
the dissertation had done the talking, and the viva was good conversation about it wrapped up with a “dr. harris” handshake. phew, and woah. having been in a death-grip with the dissertation draft for so long, nothing in the whole experience could touch the wholesomeness of simply hearing “i read it all, and it’s good”.
had a lot of fun with my talk ‘visualising performer–audience dynamics’ at ISPS 2017. with a title like that, some play with the ‘spoken paper’ format had to be had.
‘visualising performer–audience dynamics’ spoken paper accepted at ISPS 2017, the international symposium on performance science. this is doubly good, as i’ve long been keen to visit reykjavík and explore iceland.
media and arts technology colleague saul albert put out a call for help for the conversational rollercoaster. happy to help as a last hurrah for our time in the same research group, but more significantly it’s an event conceived to take interaction and audiences seriously.
the robot stand-up work got a proper write-up. well, part of it got a proper write-up, but so it goes.
need a hit-test for people orienting to others. akin to gaze, but the interest here is what it looks like you’re attending to. but what should that hit-test be? visualisation and parameter tweaking to the rescue…
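for the record, the sort of hit-test i mean is simple enough to sketch: does the line from your head to a target fall within some cone around your head direction? a toy version in python, where the cone angle is exactly the parameter to tweak (the 15° default here is illustrative, not a value i settled on):

```python
import math

def hit_test(head_pos, head_dir, target_pos, cone_half_angle_deg=15.0):
    """Return True if target falls within a cone around the head direction.

    head_pos, target_pos: (x, y, z) tuples; head_dir: unit direction vector.
    cone_half_angle_deg is the tweakable parameter: too narrow and glances
    are missed, too wide and everyone is 'attended to' at once.
    """
    to_target = tuple(t - h for t, h in zip(target_pos, head_pos))
    norm = math.sqrt(sum(c * c for c in to_target))
    if norm == 0:
        return False
    dot = sum(d * c / norm for d, c in zip(head_dir, to_target))
    # clamp for numerical safety before acos
    dot = max(-1.0, min(1.0, dot))
    return math.degrees(math.acos(dot)) <= cone_half_angle_deg
```

visualisation earns its keep here: you can render the cone per person, nudge the angle, and see which choices match your intuition of who is attending to whom.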
with the visualiser established, it was trivial to attach the free view camera to the head pose node and boom!: first-person perspective. to be able to see through the eyes of anyone present is such a big thing.
minos gave a seminar on his engineering efforts for robot stand-up, we back-and-forthed on the wider framing of the work, and a bit of that is published here. his write-up.
the head pose arrows look like they’re pointing in the right direction… right? well, of course, it’s not that simple.
of course, aligning the virtual camera of the 3D scene with the real camera’s capture of the actual scene was never going to be straightforward. easy to get to a proof of concept. hard to actually register the two. i ended up rendering a cuboid grid on the seat positions in the 3D scene, drawing by hand (well, mouse) what looked about right on a video still, and trying to match the two sets of lines by nudging coordinates and fields-of-view with some debug-mode hotkeys i hacked in.
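for anyone curious, the nudging boils down to projecting the 3D seat grid through a guessed pinhole camera and eyeballing the error against the hand-drawn lines. a toy python sketch of that projection, assuming camera space with -z forward (the real app did this inside the 3D engine):

```python
import math

def project(point, fov_deg, width, height):
    """Project a 3D point (camera space, -z forward) to pixel coordinates
    with a simple pinhole model. fov_deg is the vertical field of view,
    i.e. one of the numbers being nudged with the debug hotkeys."""
    x, y, z = point
    if z >= 0:
        return None  # behind the camera
    f = (height / 2) / math.tan(math.radians(fov_deg) / 2)
    u = width / 2 + f * x / -z
    v = height / 2 - f * y / -z
    return (u, v)
```

nudge the camera pose and fov until the projected cuboid corners land on the lines drawn over the video still, and the two scenes are registered, near enough.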
visualising head pose, light state, laugh state, computer vision happiness, breathing belt. and, teh pretty. huzzah.
third gig: angel comedy. again, an established comedy club and again a different proposition. a free venue with shows nightly, known to be packed. wednesdays were newcomers night which, again, was somewhat appropriate.
the second gig of our tour investigating robo-standup in front of ‘real’ audiences: gits and shiggles at the half moon, putney. a regular night there, we were booked amongst established comedians for their third birthday special. was very happy to see the headline act was katherine ryan, whose attitude gets me every time.
getting a robot to perform stand-up comedy was a great thing. we were also proud that we could stage the gig at the barbican arts centre. prestigious, yes, but also giving some credibility to it being a “real gig”, rather than an experiment in a lab.
happy few days bringing-up a visualiser app for my PhD. integrating the different data sources of my live performance experiment had brought up some quirks that didn’t seem right. i needed to be confident that everything was actually in sync and spatially correct, and, well, it got to the point where i decided to damn well visualise the whole thing.
thanks to this shot, science outreach, and a national competition, i have a new camera. first prize! huzzah!
hacked some lua, got software logging what i needed; learnt python, parsed many text files; forked a cocoa app, classified laugh state for fifteen minutes times 16 audience members times two performances; and so on. eventually, a dataset of audience response measures for every tenth of a second. and with that: results. statistics. exciting.
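the shape of the munging, if you’re curious: every log boils down to timestamped events, which get resampled onto a common tenth-of-a-second grid by holding the last known state forward. a python sketch (the states here are invented for illustration):

```python
def resample(events, t_start, t_end, step=0.1):
    """events: sorted list of (timestamp, state) pairs.
    Returns one (time, state) pair per step, holding the last
    known state forward until a new event supersedes it."""
    out = []
    state = None
    i = 0
    t = t_start
    while t < t_end:
        # consume every event at or before the current grid time
        while i < len(events) and events[i][0] <= t:
            state = events[i][1]
            i += 1
        out.append((round(t, 1), state))
        t += step
    return out
```

run each source (laugh state, light state, breathing belt…) through the same grid and suddenly everything lines up, column against column.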
“Hello, weak-skinned pathetic perishable humans!” begins the stand-up comic. “I am here with the intent of making you laugh.”
A curiously direct beginning for most comics, but not for Robothespian. This humanoid robot, made by British company Engineered Arts, has the size and basic form of a tall, athletic man but is very obviously a machine: its glossy white face and torso taper into a wiry waist and legs, its eyes are square video screens and its cheeks glow with artificial light.
Robothespian’s first joke plays on its mechanical nature and goes down a storm with the audience at the Barbican Centre in London. “I never really know how to start,” it says in a robotic male voice. “Which is probably because I run off Windows 8.”
The performance last week was the brainchild of Pat Healey and Kleomenis Katevas at Queen Mary University of London, who meant it not only to entertain but also to investigate what makes live events compelling.
As we watched, cameras tracked our facial expressions, gaze and head movements. The researchers will use this information to quantify our reactions to Robothespian’s performance and to compare them with our responses to two seasoned human comics – Andrew O'Neill and Tiernan Douieb – who performed before the robot. […]
nice article in the london evening standard on comedy lab, link below and photo of it in the paper attached:
getting a robot to tell jokes is no simple feat. programming and polishing a script for the robot to deliver is challenge enough, but trying to get that delivery to be responsive to the audience, to incorporate stagecraft that isn’t simply a linear recording… now that is hard. of course, in the research world, we like hard, so reading the audience and tailoring the delivery appropriate to that is exactly what we set out to do.
“I never know how to start, which is probably because I run off windows 8” – and there were more laughs than groans!
“good evening ladies and gentlemen, welcome to the barbican centre. ‘comedy lab: human vs robot’ will be starting shortly in level minus one. part of hack the barbican, it is a free stand-up gig with robot headlining.”
a sense of satisfaction to see someone i’ve been helping get on the research ladder accepted to a workshop and the paper we co-wrote going into the acm archive.
of course, if you’ve just written a sensor logging smartphone app, and you have some bio-sensing data logging kit in the research group, you’re going to use it, right?
the interactive-map-and-then-some app turned out to be a step too far for the organisation we hoped would make it their own, but there still was a festival and a need to determine just what smartphone sensors could tell you about the activity around a festival. and so another app was born, one to harvest any and all sensor data for real-time or subsequent analysis. the interaction, media and communication group i’m part of now being rebranded cognitive science, here is the cogsci crowd app, as it stands.
out of all the people in this world that i half-understand, bruno latour is by far my favourite. ‘visualisation and cognition: drawing things together’ is my favourite academic paper by far, where he explains in no small part the western world by tracing the practice of using bits of paper. ‘aramis, or the love of technology’ is one of my favourite books, which transcends the genre of ‘look at this team of people working to make their dent in the world’ in the most amazing ways that just won’t make sense in summary. at its extremes you go from reading straight technical documents to hearing a train philosophise.
a little bit of a more compelling demo than last shown. development of this app has proved pretty painful, part of which is engaging with openFrameworks and c++ at a level beyond demo, and part of which has been the flakiness of the ofxAddons i’ve tried to use. the 3D model loader ofxAssimpModelLoader turned into the bane of this project; as a core component of the app, the scope of its ill-effects never became clear until the debugging got truly brutal. i also had to ditch ofxTwitter, but at least can contribute my implementation of the search functionality back into the immeasurably better codebase of ofxTwitterStream.
it doesn’t look like much at the moment, but this is the first step into my research group at university doing a study on real festival audiences at real festivals. i’m developing an interactive map / timetable app, which will have quite some interesting features and opt-ins by the time we’re done. the promoters we’ve been talking to already have an interactive map of sorts, i’ve already done some interesting things visualising live events, and of course there’s my phd on audiences and interaction.
to oxford for the ‘Inaugural RCUK Digital Economy Theme CDT Student Research Symposium’, ie. gather the guinea-pigs and see what they’re up to. happy to regain the overview of my research though, and working on a presentation is so much more enjoyable a process for me than writing.
after being one of the forces behind qmedia’s inaugural open studios last year, this year i was playing behind-the-scenes fixer, and with things fixed was able to get a few hours hacking on the dvi mixer before the show wrapped. even better, having established last year the reward of documentation, there was now a film crew looking intrigued and asking me to explain my research…
from the big revision of the presentation to a write-up that took it in a quite different direction to now: a ground-up re-write. a scholarly work that builds an argument and from the theory offers provocations back to the practice. spoiler: smartphone screens. tl;dr: play your audience not your computers. put out there in a spirit of debate: all crit welcome!
lying with statistics is oh so easy, intentionally or otherwise. the graph on the previous poster showed… a correlation so great it instantly raised suspicion. rightly so, and so back to the data to make something not only more rigorous but more expressive. turns out it takes waaay longer to not just do the first thing that comes to mind, but to do it well. ditto for the prose: much butchering of each other’s words, it being a joint effort between saul and me.
the best thing you can be asked to do after spending a year getting to grips with a phd and producing a document of goodness knows how many words is to take that and boil it down to two sides. thanks to newcastle’s culture lab (any surprise?) for cornering me into this by proposing a workshop on liveness at the premier conference on human factors in computing. and best of all: my position paper has been accepted.
an early saturday start to attend the ‘audience through time’ conference organised by the drama department at queen mary. it was a good effort, and my chairing of the ‘technology and liveness’ panel seemed to go down well – phew. i especially enjoyed martin barker‘s talk, which was spot-on topic for me and presented with gusto: motivated by the issue of ‘liveness’ it started by asking how do audiences make sense of and respond to the near-live quality of streamed performances in cinemas, but soon progressed to an empirically backed provocation of a ‘scandal to theory’ that really showed the value of crossing disciplines.
to newcastle to the all-hands meeting of the digital economy programme, aka where my funding comes from. the summer had seen an internship project use the data i had generated in my bbc internship the year before, and from this saul and i got talking about how that could go further. importantly for me, the demonstration proposal for it was the first bit of writing i’ve been happy with.
it might have been finished on the plane out to holiday, but i and it got there.
after nottingham, this time it’s to highwire at lancaster for the yearly gathering of the digital economy doctoral training centres. it was almost demoralising: we find a green field site and a wonderful brand new dedicated building designed and constructed seemingly in the blink of an eye. not the kind of trick that is possible at queen mary, its campus crammed into london’s east end.
thankfully the viva was like a good supervision session rather than a critical demolition. if only i had actually pressed the record button on the dictaphone app like i thought i had. possibly the best insight came right at the end, almost as an afterthought from my drama supervisor: it’s really all about attention.
three things you don’t want together: wedding organisation, alt-wedding organisation, and writing the first-year dry-run of your PhD thesis. all so important in life; all epic on the deadline front, all with just a week between them.
there is a full gallery of prints and photos from the open studios up on flickr. i had camera in hand pretty much the whole time (or rather with this lovely strap) and there’s a number of shots that i think get towards the spirit of what happened. but my favourite is this quiet shot, which for me tells the whole story: kinect, custom code, yet in the context of an ornate frame from an age where mirrors themselves were the cutting edge of technology. for this mirror can play with your image, reconstructing and (re)animating a shadow from skeleton positional data. it’s also serious research: if we can systematically manipulate how your mirror image reacts, we can learn much about human-human interaction and notions like empathy. what illusion is shattered when your technologically mediated shadow decides to pick its nose?
the programme i’m part of at university – media & arts technology – is quite a leap for the department to have made. perhaps it could be characterised as: they realised they had lots of technology research, but nobody around who did things with technology, so they got some artists, hackers and whatnot in to see what would happen. but rather than create an island of ‘cool’, the plan was to embed MAT within the culture of a science and technology department. very laudable, but it has been so frustrating at times: institutional inertia and so on.
having just overhauled the ‘about the live in live cinema’ presentation for the IMAP seminar, i thought it should be quite straightforward to translate this into a 4,000 word essay – my penance for sitting in on a module from QMUL’s excellent drama department last semester. how wrong could i be: the structure to my argument turned out quite differently. all the better for it, however.
Three silhouettes, bodies poised above glowing buttons; piercing beams of light scanning across the void of the image. ‘Rhythms + Visions: Expanded + Live’ says the text. Flipping the flyer over, the venue, the School of Cinematic Arts at the University of Southern California, lends an air of authority, and finally in the body text a definition: ‘a live-cinema event’.
I was there — in fact, I am one of the silhouettes on the flyer — and the ‘live-cinema event’ shall frame the following discourse on liveness, media-based performance, and how the role of performance in a true live cinema needs to be rethought.
Walking into the School of Cinematic Arts, there was no being led into the dark of a cinema theatre; rather this would be an exploration through the outdoor spaces of the complex. Moving image works were aligned onto the architecture, and scrims echoed projections in space. Finally, a stage area. The first act starts: Scott Pagano accompanied by four musicians. For all the unconventional setting thus far, this is a setup all will recognise: there is what could be termed a cinema-grade screen with performers in front, and rows of seating laid out beyond.
The musicians are playing, seemingly consumed by their instruments and keeping in time with each other. Pagano is standing, the only one twisted around to face the screen rather than the audience. In his hands, an iPad upon which — and with — he is furiously gesturing. On the screen an abstract composition unfolds, organic forms built out with photographic elements, a triumph of aesthetic. The music is instrumental and amplified; without naming a genre, it’s accessible to the Los Angeles audience: guitars, keyboards, percussion. The audience seem receptive; there is a pleasing fidelity and sheen to the work.
But what here is live? But what here is cinema? These are the questions in my mind as I watch, and to which we will return.
Next a performance from the collective of which I am part, but not a piece with my direct involvement; I am still in the audience. Endless Cities by D-Fuse. It is a film in the Ruttman and Vertov tradition, a montage of urban scenes from around the world, and is accompanied by a live score: musique concrète performed from laptop with percussion accompaniment. Again it seems accessible, and in the photographic detail there is much to latch onto and be absorbed by.
It’s Live Cinema in the sense that I first heard the term: a musical accompaniment to a silent film. A montage from the kino-eye, it’s easier here to answer ‘what here is cinema’ than for the Pagano piece. But I still wonder, what is really live here, and why bother?
The final performance is in many ways my creation, and so here I cannot report from the audience, but can offer my view as a performer. Which is one of immense frustration. Starting out, I am in a good position: we have an expanded staging that breaks the imagery out of the single frame, a developed aesthetic that abstracts footage in sync to the music, and the impressive shot bank of Endless Cities to pull from. It’s less the dérive and more the impressionism of a late night taxi ride. And we’ve performed it really well before. But that is precisely what is killing me by the end of this particular performance. We have performed it better before, so wouldn’t a recording of that performance have served us better? It’s a recognition that performance in this context translates entirely to the audio-visual output, for our actual performance is opaque to the audience, operating somewhere between obscure symbols on an obscured screen and twitching trackpad fingers. At which point, rather than taking the best performance so far to play back, I ask myself why not just create a master version in the studio and be done with it?
This is the terrain from which I argue. My motivation is not to categorise art or debate concepts, but to get to the heart of what a true live cinema could be.
i also had my media+arts technology comrades in town for the draw of transmediale itself. being in berlin, one thing on our agenda was art+com, who have pulled the trick of a) having produced some of the most inspiring work, period (a major reference of mine: computer vision tracked video mapped scenography at the opera… in 2002) and b) doing so with a productive cohesion of commercial work and academic practice. they truly are the model for ‘industrial outreach’; it’s the direction so many universities want to go in, but it seems to me still somehow industry and academia get viewed as oppositional here.
time on the bbc stories project is up, report is written, academic viva viva’d.
to nottingham for the ‘dtc summer school’, a meet-up of all the doctoral training centres that fall under the digital economy initiative. hosted by horizon, it meant the theme was right up my street: the lifelong contextual footprint, ie. the act of living is generating all this data/media, so what are we going to do with it?
to thatcamp london, an “unconference” ahead of the juggernaut that apparently is digital humanities 2010. loved the fact that for a conference organised on-line and firmly embedded in a world of twitter hashtags and multi-channel ADHD, the actual schedule was organised on the day using a giant chalkboard and people putting up their arms.
as part of the PhD programme i’m on, there is a six month placement in industry. i’m super-stoked that i’ve landed a project with the BBC that is right up my street, and could serve my ideas about live cinema very well. they wrote a great post on their r&d blog about what they’ve done so far, and springboarding from that i’ll be looking at possible futures for that kind of stuff. check out their post: http://www.bbc.co.uk/blogs/researchanddevelopment/2010/03/the-mythology-engine-represent.shtml
and mid-stripdown here are the audience tables, each one microphone’d and speaker’d up. these all turned into a lot of wires, a big multi-channel sound card, and a heroic max patch made by henrik. not a trivial task conceptually, making audience audio feed back - as in, bounce around, echo, etc. - without actually feeding back - as in, screeeeeeeeeeeeeeeeech!
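the principle being wrestled with is easy to sketch, even if henrik’s max patch did far more: a feedback delay line stays stable so long as the loop gain is below one, so each return of the audience audio comes back quieter than the last instead of screeching. a toy python version on a buffer of samples:

```python
def echo(samples, delay, gain):
    """Feed output back into input after `delay` samples, scaled by `gain`.
    With abs(gain) < 1 each repeat decays; at gain >= 1 the loop howls."""
    out = list(samples)
    for n in range(delay, len(out)):
        out[n] += gain * out[n - delay]
    return out
```

in the room the loop also runs through real microphones and speakers, so the acoustic path adds its own gain; the patch has to keep the total round-trip gain under one, which is where the heroics come in.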
as part of the media and arts technology programme, a group of us have been investigating live performance in terms of the audience. it’s an area i have great interest in, believing that the entertainment can be an emergent property of the audience rather than something that has to be received from a singular performance/performer: eg. kinetxt. this project is more subtle than that, instead trying to tease apart what exactly makes an audience an audience and play with that. the first step is to stop thinking of an audience as a single thing. it’s a collection of people who at some point, hopefully, come together and somehow an audience emerges. it’s the interactions between the individuals that create an unstable state we call an audience…?
in reality, all very much a work in progress. but: if you wanted to track a radio control car around a room, and explode it remotely when it crossed its own trail, this is just how you might make the car.
with the augmented human interaction lab at their disposal, some of my media and arts technology partners made invisible ping-pong. tracking the bat position and your head, they piped in binaural (ie. positioned) sound to the headphones so you heard the ball rather than saw it. weird to play, but as atau observed, could be just the thing needed to augment the wiimote type games into the kind of natural feeling reality you need to best play the games, or wield that “bat”.
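one standard back-of-envelope for the ‘positioned sound’ part is the interaural time difference: sound reaches the far ear fractionally later than the near one, and the brain reads that lag as direction. woodworth’s spherical-head approximation gives the flavour (i don’t know what model they actually used):

```python
import math

def itd_seconds(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Interaural time difference for a source at the given azimuth,
    via Woodworth's spherical-head approximation: ITD = (r/c)(theta + sin theta).
    head_radius in metres; 0.0875 m is a typical adult value."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))
```

the lags are well under a millisecond even at 90°, which is why headphone delivery matters: loudspeakers in a room would smear exactly the cue the game depends on.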
i really like dave’s motives for this - touch control not on a flat screen, but rather a really touchy interface.
CDM, the blog about VJing and beyond that seems to have the best signal to noise ratio, has just picked up the live cinema documentary i’ve been deep in the production of since returning, and has said some very nice things:
such seems my life: christmas day means a largely clear day to crack some of the live cinema documentary. not quite the idea!
my phd has this first year of not doing phd stuff. which isn’t all bad: i have to make an “experimental documentary about a contemporary arts practice” to hone my media production skills. i think the brief was “make sure they know how to use a camera”; props to the film department at queen mary for challenging us with something a bit more interesting.
stand-out talk at the outside the box conference by thor magnusson, talking about making creative tools in the digital realm. ostensibly about the nature of digital music instruments, it really dug into how their design is far from a neutral act, and how in use our minds often extend to think through them.
second of two installations i’ve really enjoyed recently, re-rite is a multitrack recording of the philharmonia orchestra performing stravinsky’s the rite of spring, played out across 25 screens and the three floors and many rooms of the wonderfully dilapidated bargehouse. i’d have gone to see such a mediated-live-performance-with-lots-of-screens type thing anyway, and doubly so as the philharmonia partnered with friends yeast to make this (props, pulled it off to a really high standard) but it really got under my skin: the experience was unique, beyond the goal of somehow giving the experience of being inside an orchestra on stage.
a quick retrospective post on ‘untitled piece for 300 speakers, pianola and vacuum cleaner’, the first of two really worth-it installations i’ve been to recently, this one a commission by beaconsfield. there’s lots of good stuff to read on the development blog of the piece (as well as a photo to use for this post). personally, even though it was conceived as a sound piece, the sound made no impression on me, but as a visual, physical, living, breathing thing i was captivated. and it literally did breathe: the pianola hidden at the heart of those reclaimed speakers was fed by an ever-flexing tube from a vacuum cleaner outside.