Content

tagged: research

ORBIT open-sourced

To complement the release of the dataset, here’s the infrastructure used to create it – my code, open-sourced so future dataset-collection projects can build on our work. And as our work shows, machine learning needs more inclusive datasets.

https://github.com/orbit-a11y/ORBIT-Camera
https://github.com/orbit-a11y/orbit_data

diary | 31 mar 2021 | tagged: orbit · research · release

ORBIT Dataset published

Daniela Massiceti, Lida Theodorou, Luisa Zintgraf, Matthew Tobias Harris, Simone Stumpf, Cecily Morrison, Edward Cutrell, Katja Hofmann

Object recognition has made great advances in the last decade, but predominately still relies on many high-quality training examples per object category. In contrast, learning new objects from only a few examples could enable many impactful applications from robotics to user personalization. Most few-shot learning research, however, has been driven by benchmark datasets that lack the high variation that these applications will face when deployed in the real-world. To close this gap, we present the ORBIT dataset, grounded in a real-world application of teachable object recognizers for people who are blind/low vision. The full dataset contains 4,733 videos of 588 objects recorded by 97 people who are blind/low-vision on their mobile phones, and we mark a subset of 3,822 videos of 486 objects collected by 77 collectors as the benchmark dataset. We propose a user-centric evaluation protocol to evaluate machine learning models for a teachable object recognition task on this benchmark dataset. The code for loading the dataset, computing all benchmark metrics, and running the baseline models is available at https://github.com/microsoft/ORBIT-Dataset

Finally, the published dataset: ORBIT Dataset

diary | 31 mar 2021 | tagged: orbit · research

ORBIT data, archived

Data captured, collated, archived, and now sent to the vault. My role, done.

diary | 20 feb 2021 | tagged: orbit · research

mixed reality lab

Title:
The liveness of live events, and how we might design for that

Abstract:
This talk will draw a line through Toby’s practice and research to argue for an interactional account of liveness, and where that might lead as we figure out new relationships between the digital and face-to-face; it’ll have to be a remote talk, after all. Topics will include why I really should have just read my email on stage, instrumenting auditoriums, teaching a humanoid robot stagecraft, visualising performer–audience dynamics, and why the HCI and privacy considerations of data-backed immersive theatre might be the crucible for our near-future everyday.

I’m interested in the MRL’s take on any or all of these topics, and am open to collaboration. More on my website at http://tobyz.net

Biography:
Toby’s practice spans art, design and engineering, “fascinated by the liveness of live events, and how we can design for that” (http://tobyz.net). He performs worldwide as part of the renowned audio-visual collective D-Fuse (http://dfuse.com). He develops and sells hardware, software and design services for events (http://sparklive.net). His practice led to a PhD on liveness and performer–audience–audience interaction. He’s currently focussed on live data.

…and happily, the talk was super-well received. quite the relief, as it’s a storied lab (e.g. long-time blast theory collaborators) with an embarrassment of ‘best of CHI’ papers.

diary | 05 feb 2021 | tagged: research · talk · liveness

ORBIT Phase One data

Data collection phase one comes to an end: test and training imagery for 545 things, in the form of 4568 videos. Having built the system with barely a page of test data, it never gets old seeing the paginator having to truncate itself.

diary | 16 jul 2020 | tagged: orbit · research

expanded performance call

Innovations in technology are changing every part of the performance landscape […] We are interested in the concept of liveness and togetherness in the context of these changes in technology.

https://bristolbathcreative.org/pathfinders/expanded-performance

crikey! i have something to say on the matter. no need to rehash that here, though. instead, see this artefact from the workshop. happily, some participants picked one of my contributions from the 100 questions session, and spent some time on it: “how do we move tech away from spectacle to… liveness?”

it took the calm of a day or two later to really realise what i was looking at. that was the question that, more or less, animated the start of my PhD. i spent years chasing that thread, the tech falling away to explore the underlying phenomena through study of human interaction, audiences and an appropriation of experimental psychology; i have a fine-grained idea of where that question leads. and yet, here is a completely different take, from two completely different minds.

i bring this up because i don’t know how to reconcile the potential of that workshop room with the fixed number of fellowships they are eventually going to fund (and the necessarily constrained research that will result). but that session felt like the start of something good, the above a glimpse of a better way.

diary | 24 feb 2020 | tagged: pervasive media studio · research

drawing, multimodality and interaction analytics

based on the drawing interactions work, we were asked to run a day-long workshop for the national centre for research methods. happily, “drawing, multimodality and interaction analytics” sold out early, and went well. i even got some sketches out of it, courtesy of sophie.

happy punters aside, this spurred a day-or-so sprint of app development. i wanted to show some live demos using the sources and drawing techniques that would be presented during the sessions. and that needed a document-based app. and that needed a way to persist the timestamped drawings, and so on. which it now all does. and with that, the app is no longer just a prototype. it has practical value to others.

diary | 28 nov 2019 | tagged: drawing interactions · research

ORBIT aka Object Recognition for Blind Image Training

if you are blind, a modern phone can be life-changing. apps like taptapsee connect the phone’s camera to a sighted person so someone can tell you what’s in front of you. though there might be a delay matchmaking that person, and you might not feel entirely comfortable sharing your life with a stranger. could the phone recognise things by itself?

the answer is yes and no, and it’s what i’m working on at city, university of london for a while. yes, in that machine learning algorithms have been trained on image datasets and phones running them really can pick things out of the camera feed. the no comes in practice, as the things being picked out don’t seem to be what’s salient to the visually impaired. so, amongst other things, i’m building an iOS app so the blind can show us what’s important to them. a camera app, for the blind.

diary | 21 oct 2019 | tagged: orbit · research

comedy lab » on tour, unannounced

an email comes in from a performance studies phd candidate asking if they could watch the whole robot routine from comedy lab: human vs. robot. damn right. i’d love to see someone write about that performance as a performance.

but, better than that staging and its weird audiences (given the advertised title, robo-fetishists and journalists?) there is comedy lab #4: on tour, unannounced. the premise: robot stand-up, to unsuspecting audiences, at established comedy nights. that came a year later with the opportunity to use another robothespian (thanks oxford brookes!). it addressed the ecological validity issues, and should simply be more fun to watch.

for on tour, unannounced we kept the performance the same – or rather, each performance used the same audience-responsive system to tailor the delivery in realtime. there’s a surprising paucity in the literature about how audiences respond differently to the same production; the idea was this should be interesting data. so i’ve taken the opportunity to extract from the data set the camera footage of the stage from each night of the tour. and now that is public, at the links below.

the alternative comedy memorial society

gits and shiggles

angel comedy

the robot comedy lab experiments form chapter 4 of my phd thesis ‘liveness: an interactional account’

Four: Experimenting with performance

The literature reviewed in chapter three also motivates an experimental programme. Chapter four presents the first, establishing Comedy Lab. A live performance experiment is staged that tests audience responses to a robot performer’s gaze and gesture. This chapter provides the first direct evidence of individual performer–audience dynamics within an audience, and establishes the viability of live performance experiments.

http://tobyz.net/project/phd

there are currently two published papers –

and finally, on ‘there is a surprising paucity…’, i’d recommend starting with gardair’s mention of mervant-roux.

diary | 03 may 2019 | tagged: comedy lab · phd · qmat · research

human-machine co-composition

i had a note squirrelled away that there were human-machine co-composition results still buried in the folkrnn.org write-up stats. and to me, that’s the heart of it.

with positive noises from journal paper reviewers in the air, i wrangled the time to go digging.

the following is the co-composition analysis the webapp’s stats management command now also produces.

Refining the folkrnn.org session data above, we can analyze only those tunes which are in some way a tweak of the one that came before. This iterative process of human-directed tweaks of the machine-generated tunes demonstrates co-composition using the folkrnn.org system. In numbers –
Of the 24657 tunes generated on folkrnn.org, 14088 start from the previous tune’s generation parameters and change one or more of them (57%).
This happened within 4007 ‘iterative’ sequences of, on average, 6 tune generations (mean: 5.9, stddev: 8.7).
The frequency of the generate parameters used now becomes:
key: 0.28, meter: 0.24, model: 0.093, seed locked: 0.34, start abc is excerpt: 0.02, start abc: 0.2, temp: 0.39

One feature now possible to expose is whether the user has identified a salient phrase in the prior tune, and has primed the generation of the new tune with this phrase. This is the strongest metric of co-composition available on folkrnn.org. This is reported above as ‘start_abc is excerpt’, tested for phrases comprising five characters or more (e.g. five notes, or fewer with phrasing), and as per other generation metrics reported here, not counting subsequent generations with that metric unchanged. This happened 283 times (2%).

Further evidence of human-machine co-composition can be seen on themachinefolksession.org, where 239 of the ‘iterative’ folkrnn.org tunes were archived. Using the tune saliency metric used by the themachinefolksession.org homepage, the most noteworthy of these tunes is ‘Green Electrodes’. This was generated in the key C Dorian (//folkrnn.org/tune/5139), and as archived (https://themachinefolksession.org/tune/294) the user has manually added a variation set in the key E Dorian. This shows a limitation of folkrnn.org, that all tunes are generated in a variant of C (a consequence of an optimisation made while training the RNN on the corpus of existing tunes), and shows that the human editing features of themachinefolksession.org have been used to work around such a limitation. Also, while not co-composition per se, the act of the user naming the machine-generated tune shows it has some value to them.

Direct evidence of the user’s intent can be seen in ‘Rounding Derry’ (https://themachinefolksession.org/tune/587). The user generated the tune ‘FolkRNN Tune №24807’ on a fresh load of folkrnn.org, i.e. default parameters, randomised seed. The user played this tune twice, and then selected the musical phrase ‘C2EG ACEG|CGEG FDB,G,’ and set this for the start_abc generation parameter. The user generated the next iteration, played it back, and archived it on themachinefolksession.org with a title of their creation. There, the user writes –
‘Generated from a pleasant 2 measure section of a random sequence, I liked this particularly because of the first 4 bars and then the jump to the 10th interval key center(?) in the second section. Also my first contribution!’
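for the curious, the ‘start_abc is excerpt’ test boils down to a substring check over consecutive tunes in a session – a minimal sketch, with hypothetical data shapes rather than the webapp’s actual models –

# sketch: was the new tune primed with a phrase excerpted from the previous tune?
# hypothetical data shapes; the real stats command works over django models.
MIN_PHRASE_LEN = 5  # five characters or more, e.g. five notes

def is_excerpt_priming(prev_tune_abc, start_abc):
    """true if start_abc is a phrase lifted from the previous tune's ABC"""
    phrase = start_abc.strip()
    return len(phrase) >= MIN_PHRASE_LEN and phrase in prev_tune_abc

def count_excerpt_primings(session_tunes):
    """count excerpt-primed generations, ignoring repeats with the metric unchanged"""
    count = 0
    previous_start_abc = None
    for prev, current in zip(session_tunes, session_tunes[1:]):
        primed = is_excerpt_priming(prev['abc'], current['start_abc'])
        if primed and current['start_abc'] != previous_start_abc:
            count += 1
        previous_start_abc = current['start_abc']
    return count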

diary | 02 may 2019 | tagged: machine folk · research · code

drawing interactions journal paper

Drawing as transcription: how do graphical techniques inform interaction analysis?

Drawing as a form of analytical inscription can provide researchers with highly flexible methods for exploring embodied interaction. Graphical techniques can combine spatial layouts, trajectories of action and anatomical detail, as well as rich descriptions of movement and temporal effects. This paper introduces some of the possibilities and challenges of adapting graphical techniques from life drawing and still life for interaction research. We demonstrate how many of these techniques are used in interaction research by illustrating the postural configurations and movements of participants in a ballet class. We then discuss a prototype software tool that is being developed to support interaction analysis specifically in the context of a collaborative data analysis session.

Albert, S., Heath, C., Skach, S., Harris, M., Miller, M., & Healey, P. (2019). Drawing as transcription: how do graphical techniques inform interaction analysis? Social Interaction. Video-Based Studies of Human Sociality, 2(1). https://doi.org/10.7146/si.v2i1.113145

Open Access / Creative Commons BY-NC-ND

diary | 28 mar 2019 | tagged: drawing interactions · research

swedish model

folkrnn.org can now generate tunes in a swedish folk idiom. bob, having moved to KTH in sweden, had got some new students to create a folkrnn model trained on a corpus of swedish folk tunes. and herein lies a tale of how things are never as simple as they seem.

the tale goes something like this: here we have a model that already works with the command-line version of folkrnn. and the webapp folkrnn.org parses models and automatically configures itself. a simple drop-in, right?

first of all, this model is sufficiently different that the defaults for the meter, key and tempo are no longer appropriate. so a per-model defaults system was needed.

then, those meter, key composition parameters are differently formatted in this corpus, which pretty much broke everything. piecemeal hacks weren’t cutting it, so a sane system was needed: one that standardised on a single format and bridged cleanly to the raw tokens of each model.

after the satisfaction of seeing it working, bob noticed that the generated tunes were of poor quality. when a user of folkrnn.org generates a tune with a previous model, setting the meter and key to the first two tokens is exactly what the model expects, and it can then fill in the rest drawing from the countless examples of that combination found in the corpus. but with this new model, or rather the corpus it was trained on, a new parameter precedes these two. so the mechanics that kick off each tune needed to cope with an extra, optional term.

so expose this value in the composition panel? that seems undesirable, as this parameter is effectively a technical option subsumed by the musical choice of meter. and manually choosing it doesn’t guarantee you’re choosing a combination found in the corpus, so the generated tunes are still mostly of poor quality.

at this point, one might think that exactly what RNNs do is choose appropriate values. but it’s not that simple, as the RNN encounters this preceding value first, before the meter value set by the user. it can choose an appropriate meter from the unit-note-length, but not the other way round. so a thousand runs of the RNN and a resulting frequency table later, folkrnn.org is now wired to generate an appropriate pairing akin to the RNN running backwards. those thousand runs also showed that only a subset of the meters and keys found in the corpus are used to start the tune, so now the compose panel only shows those, which makes for a much less daunting drop-down, and fewer misses for generated tune quality.
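the gist of that wiring, as a hedged sketch – token and function names are made up here, not the actual folkrnn.org code –

import random
from collections import Counter, defaultdict

# sketch: from ~1000 free runs of the model, tally which unit-note-length token
# preceded which meter token, then pick an appropriate unit-note-length for the
# meter the user chose.

def build_pairing_table(sample_runs):
    """sample_runs: list of token lists, each starting e.g. ['L:1/8', 'M:3/4', ...]"""
    table = defaultdict(Counter)
    for tokens in sample_runs:
        unit_note_length, meter = tokens[0], tokens[1]
        table[meter][unit_note_length] += 1
    return table

def pick_unit_note_length(table, meter):
    """sample a unit-note-length in proportion to how often it preceded this meter"""
    choices, weights = zip(*table[meter].items())
    return random.choices(choices, weights=weights)[0]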

unit-note-length is now effectively a hidden variable, which does the right thing… providing you don’t want to iteratively refine a composition, as it may vary from tune generated to tune generated. rather than exposing the parameter after all, and then having to implement pinning as per the seed parameter’s control, a better idea was had: make the initial ABC field also handle this header part of the tune. so rather than just copying-and-pasting-in snippets of a tune, you could paste in the tune from the start, including this unit note length header. this is neat because as well as providing the advanced feature of being able to specify the unit note length value, it makes the UI work better for naïve users: why couldn’t you copy and paste in a whole tune before?

as per the theme there, implementing this wasn’t just a neat few lines of back-end python, as now the interface code that is loaded into the browser needs to be able to parse out and verify these header lines, and so on.
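the parsing itself is simple enough to sketch – here in python for brevity, though the real work is the browser-side code and the back end agreeing on it; the field letters are standard ABC headers –

import re

# sketch: split a pasted tune into the ABC header fields of interest
# (L: unit note length, M: meter, K: key) and the remaining body.
HEADER_RE = re.compile(r'^([LMK]):\s*(\S.*)$')

def split_headers(pasted_abc):
    headers, body_lines = {}, []
    for line in pasted_abc.splitlines():
        match = HEADER_RE.match(line.strip())
        if match and match.group(1) not in headers:
            headers[match.group(1)] = match.group(2).strip()
        else:
            body_lines.append(line)
    return headers, '\n'.join(body_lines)

# e.g. split_headers('L:1/8\nM:4/4\nK:Cmaj\nCDEF GABc|') returns
# ({'L': '1/8', 'M': '4/4', 'K': 'Cmaj'}, 'CDEF GABc|')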

diary | 15 jan 2019 | tagged: machine folk · research · code

stats time

for any given tune, how much activity surrounded it?
for any given session, what happened?
what are the usage trends of folkrnn.org and of themachinefolksession.org?

to answer these kinds of questions, enter stats, a django management command for processing the use data of composer and archiver apps for insight / write-up in academic papers.
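the shape of it, roughly – a sketch only, with stand-in model and field names rather than the actual schema –

from django.core.management.base import BaseCommand
from composer.models import RNNTune   # stand-in: the composer app's generated-tune model
from archiver.models import Tune      # stand-in: the archiver app's tune model

class Command(BaseCommand):
    help = 'Process folkrnn.org / themachinefolksession.org use data for write-up'

    def handle(self, *args, **options):
        generated = RNNTune.objects.order_by('requested')
        self.stdout.write('{} tunes generated on folkrnn.org'.format(generated.count()))
        archived = Tune.objects.exclude(rnn_tune=None)
        self.stdout.write('{} of those archived on themachinefolksession.org'.format(archived.count()))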

i used this to write the following, with input from bob and oded. it will surely be edited further for publication, but this is as it stands right now.

During the first 235 days of activity at folkrnn.org, 24562 tunes were generated by – our heuristics suggest – 5700 users. Activity over the first 18 weeks shows a median of 155 tunes generated weekly. In the subsequent 15 weeks to the time of writing, overall use increased, with a median of 665 tunes generated weekly. This period also features usage spikes. One week, correlating to an interview in Swedish media, shows 2.7x the median tunes generated. The largest, correlating to a mention in German media, shows an 18.4x increase. These results show that making our tool available to users of the web has translated into actual use, and that use is increasing. Further, media attention brings increased use, and this use is similarly engaged, judged by similar patterns of downloading MIDI and archiving tunes to themachinefolksession.org.

Of the fields available for users to influence the generation process on folkrnn.org, the temperature was used more often than the others (key, meter, initial ABC, and random seed). Perhaps this is because changing temperature results in more obviously dramatic changes in the generated material. Increasing the temperature from 1 to 2 will often yield tunes that do not sound traditional at all. If changes were made to the generate parameters, the frequency of the resulting tune being played, downloaded or archived increased from 0.78 to 0.87.

Over the same period since launch, themachinefolksession.org has seen 551 tunes contributed. Of these tunes, 82 have had further iterations contributed in the form of ‘settings’; the site currently hosts 92 settings in total. 69 tunes have live recordings contributed; the site currently hosts 64 recordings in total (a single performance may encompass many tunes). These results show around 100 concrete examples of machine-human co-creation have been documented.

Of the 551 contributed tunes, 406 were generated on, and archived from, folkrnn.org. Of these entirely machine-generated tunes, 32 have had human edits contributed; themachinefolksession.org currently hosts 37 settings of folkrnn generated tunes in total. These examples in particular demonstrate human iteration of, or inspiration by, machine-produced scores.

Further value of machine-produced scores can be seen by the 30 registered users who have selected 136 tunes or settings as being noteworthy enough to add to their tunebooks. Per the algorithm used by the home page of themachinefolksession.org to surface ‘interesting’ tunes, “Why are you and your 5,599,881 parameters so hard to understand?” is the most interesting, with 4 settings and 5 recordings.

While these results are encouraging, most content-affecting activity on themachinefolksession.org has been from the administrators; co-author Sturm accounts for 70% of such activity. To motivate the use of the websites, we are experimenting with e.g. ‘tune of the month’, see above, and have organised a composition competition.

The composition competition was open to everyone but targeted primarily at music students. Submission included both a score for a set ensemble and an accompanying text describing how the composer used a folkrnn model in the composition of the piece. The judging panel – the first author was joined by Profs. Elaine Chew and Sageev Oore – considered the musical quality of the piece as well as the creative use of the model. The winning piece Gwyl Werin by Derri Lewis was performed by the New Music Players at a concert organised in partnership with the 2018 O’Reilly AI Conference in London. Lewis said he didn’t want to be ‘too picky’ about the tunes, rather he selected a tune to work from after only a few trials. He describes using the tune as a tone row and generating harmonic, melodic and motivic material out of it. Though the tune itself, as generated by the system, does not appear directly in the piece.

diary | 11 jan 2019 | tagged: machine folk · research · code

interactive machine-learning for music exhibition

having been selected for the interactive machine-learning for music exhibition at the 19th international society for music information retrieval conference, the time had come. nice to see a photo back from set-up, with the poster (PDF) commanding attention in the room.

diary | 23 sep 2018 | tagged: machine folk · research

machine folk poster

the folk-rnn webapp was selected for the 19th international society for music information retrieval conference, as part of their interactive machine-learning for music exhibition. so poster time! nice to not have to futz around with html+css, instead just draw directly… i’m pretty happy with how it turned out (PDF).

diary | 12 sep 2018 | tagged: machine folk · research

themachinefolksession.org mk.ii

the community site. straight-up django (“the web framework for perfectionists with deadlines”), but there’s a lot going on.

diary | 29 jul 2018 | tagged: machine folk · research · code

machine folk webapp abstract

We demonstrate 1) a web-based implementation of a generative machine learning model trained on transcriptions of folk music from Ireland and the UK (http://folkrnn.org, live since March 2018); 2) an online repository of work created by machines (https://themachinefolksession.org/, live since June 2018). These two websites provide a way for the public to engage with some of the outcomes of our research investigating the application of machine learning to music practice, as well as the evaluation of machine learning applied in such contexts. Our machine learning model is built around a text-based vocabulary, which provides a very compact but expressive representation of melody-focused music. The specific kind of model we use consists of three hidden layers of long short-term memory (LSTM) units. We trained this model on over 23,000 transcriptions crowd-sourced from an online community devoted to these kinds of folk music. Several compositions created with our application have been performed so far, and recorded and posted online. We are also organising a composition competition using our web-based implementation, the winning piece of which will be performed at the 2018 O’Reilly AI conference in London in October.

Matthew Tobias Harris
Queen Mary University of London
London E1 4NS, UK

Bob L. Sturm
Royal Institute of Technology KTH
Lindstedtsvägen 24, SE-100 44 Stockholm, Sweden

Oded Ben-Tal
Kingston University
Kingston Hill, Kingston upon Thames, Surrey KT2 7LB, UK

Submitted to the interactive machine-learning for music (IML4M) exhibition at the 19th international society for music information retrieval conference. PDF

diary | 18 jul 2018 | tagged: machine folk · research

conversational rollercoaster journal paper

The conversational rollercoaster: Conversation analysis and the public science of talk

How does talk work, and can we engage the public in a dialogue about the scientific study of talk? This article presents a history, critical evaluation and empirical illustration of the public science of talk. We chart the public ethos of conversation analysis that treats talk as an inherently public phenomenon and its transcribed recordings as public data. We examine the inherent contradictions that conversation analysis is simultaneously obscure yet highly cited; it studies an object that people understand intuitively, yet routinely produces counter-intuitive findings about talk. We describe a novel methodology for engaging the public in a science exhibition event and show how our ‘conversational rollercoaster’ used live recording, transcription and public-led analysis to address the challenge of demonstrating how talk can become an informative object of scientific research. We conclude by encouraging researchers not only to engage in a public dialogue but also to find ways to actively engage people in taking a scientific approach to talk as a pervasive, structural feature of their everyday lives.

Albert, S., Albury, C., Alexander, M., Harris, M. T., Hofstetter, E., Holmes, E. J. B., & Stokoe, E. (2018). The conversational rollercoaster: Conversation analysis and the public science of talk. Discourse Studies, 20(3), 397–424. https://doi.org/10.1177/1461445618754571

PDF available from Loughborough University Institutional Repository

diary | 16 may 2018 | tagged: conversational rollercoaster · engaging audiences · qmat · research

designer infographics?

the app, as conceived for a prototype, is all about exploratory research. of course, ultimately the insights and the backing evidence need to be distilled for publication.

happily, sylvaine tuncer, barry brown and others have been working on a design-informed exploration of ways to standardise presentation. right up my alley… if we can continue this project, there’s so much that could be done, and i’d love to do it.

diary | 30 mar 2018 | tagged: drawing interactions · research

a room full of ethnographers drawing you

if you ever wondered what it might look like to have a room full of ethnographers learning life drawing with you as the model… well, it’s like this. an unexpected turn of events, to say the least.

i blame sophie, to the left in the photo =]

diary | 30 mar 2018 | tagged: drawing interactions · research

sketching posture

the project takes a fresh approach to (e.g. conversation analytic) transcription, based on long-standing artistic and drafting skills. so, here we are in the workshop, all learning life drawing. while tracing people’s outlines from photo and video source material can get you a long way (something i quickly learnt coming into product design with an engineering background), it can also constrain what can be communicated – or even seen.

cue sophie, whose quick and economical illustrations convey qualities like posture and weight.

diary | 30 mar 2018 | tagged: drawing interactions · research

workshop time

the drawing interactions project is not just the app. and even if it were, what is an app without users? so: a workshop, at ‘new directions in ethnomethodology’.

diary | 30 mar 2018 | tagged: talk · drawing interactions · research

ready for the unveil

after an intense week, the drawing interactions app is ready to be unveiled. the iPad Pro and Apple Pencil turn out to be amazing hardware, and i’ve really enjoyed the deep dive into the kind of precise, expressive, powerful code that’s required for this kind of iOS app. it

  • plays a video
  • lets you navigate around the video by direct manipulation of the video’s filmstrip
  • when paused, you can draw annotations with a stylus
  • these drawings also become ‘snap’ points on the timeline
  • these drawings are also drawn into the timeline, partly to identify the snap points, and partly with the hope they can become compound illustrations in their own right
  • when not paused, you can highlight movements by following them with the stylus

i got there. huzzah! that last feature, drawing-through-time, i’m particularly pleased with. of course, there are bugs and plenty of things it doesn’t do. but it’s demoable, and that’s what we need for tomorrow’s workshop.
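for the record, the ‘snap’ points above are about as simple as they sound – a sketch in python rather than the app’s own code, names made up –

def snapped_time(scrub_time, annotation_times, tolerance=0.25):
    """return the nearest drawing's timestamp if within tolerance (seconds), else the scrub time"""
    if not annotation_times:
        return scrub_time
    nearest = min(annotation_times, key=lambda t: abs(t - scrub_time))
    return nearest if abs(nearest - scrub_time) <= tolerance else scrub_time

# e.g. snapped_time(12.4, [3.0, 12.5, 47.2]) -> 12.5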

diary | 29 mar 2018 | tagged: drawing interactions · research · code · ios

hands-on with time

first challenge for the drawing interactions prototype app is to get ‘hands-on with time’. what does that mean? well, clearly controlling playback is key when analysing video data. but that also frames the problem in an unhelpful way, where playback is what’s desired. rather the analyst’s task is really to see actions-through-time.

pragmatically, when i read or hear discussions of video data analysis, working frame-by-frame comes up time and time again, along with repeatedly studying the same tiny snippets. but if i think of the ‘gold standard’ of controlling video playback – the jog-shuttle controllers of older analogue equipment, or the ‘J-K-L’ of modern editing software – they don’t really address those needs.

so what might? i’ve been thinking about the combination of scrolling filmstrips and touch interfaces for years, promising direct manipulation of video. also, in that documentary the filmstrips are not just user-interface elements for the composer, but displays for the audience. such an interface might get an analyst ‘hands on with time’, and might better serve a watching audience. this is no small point, as the analysis is often done in groups, during ‘data sessions’. others would be able to tap the timeline for control – rather than one person owning the mouse – and all present would have an easy understanding of the flow of time as the app is used.

of course, maybe this is all post-hoc rationalisation. i’ve been wanting to code this kind of filmstrip control up for years, and now i have.

a little postscript: that panasonic jog-shuttle controller was amazing. the physical control had all the right haptics, but there was also something about the physical constraints of the tape playback system. you never lost track of where you were, as the tape came to a stop and then started speeding back. time had momentum. so should this.

diary | 25 mar 2018 | tagged: drawing interactions · research · code · ios

the hack starts

the week-long hack session starts. claude and sophie talk through how their art practice has informed their research on social interaction, and we all discuss how that could inform graphical tools and techniques for the transcription, analysis and presentation of social interaction… a fun day with old friends doing good work.

having thought i might be making all sorts of acetate-and-pens-and-displays hacks, it becomes pretty clear that a tablet+pen app that could support their kind of approach is achievable, and would be a great platform to then experiment from.

diary | 22 mar 2018 | tagged: drawing interactions · research

folkrnn.org: 50x faster, and online

the web-app adaptation of the folk-rnn command line tool is now online, and generating tunes 50x faster – from ~1min to 1-2s. still bare-bones, an ongoing project, but at least playable with.

diary | 29 jan 2018 | tagged: machine folk · research · code

thesis published

it’s a funny thing, handing in a thesis, submitting corrections and so on, but not being able to link anyone to the work. finally, so long after may, but at least not so long after the viva, here it is. all 169 pages of it.

https://qmro.qmul.ac.uk/xmlui/handle/123456789/30624

diary | 20 dec 2017 | tagged: phd · research · qmat

renaissance garb means dr*spark

dressed up as a renaissance italian, doffed my hat, and got handed a certificate… that was a placeholder, saying the real one will be in the post. truly a doctor, yet still one little thing outstanding!

best of all is that first-born is no longer my totem of not having got this done; the bigger and better she got, the more egregious the not-being-a-doctor was.

diary | 18 dec 2017 | tagged: phd · research · qmat

folkrnn composer mk.i

off the digital anvil: a bare-bones web-app adaptation of the folk-rnn command line tool, the first step in making it a tool anyone can use. happily, folk-rnn is written in python – good in and of itself as far as i’m concerned – which makes using the django web framework a no-brainer.

- created a managed ubuntu virtual machine.
- wrangled clashing folk-rnn dependencies.
- refactored the folk-rnn code to expose the tune generation functionality through an API.
- packaged folk-rnn for deployment.
- created a basic django webapp:
	- UI to allow user to change the rnn parameters and hit go.
	- UI to show a generated tune in staff notation.
	- URL scheme that can show previously generated tunes.
	- folk-rnn-task process that polls the database (python2, as per folk-rnn; see the sketch after this list).
	- unit tests.
- functional test with headless chrome test rig.
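the polling process referenced above is essentially this loop – a sketch under assumptions about model and helper names, not the deployed code –

import time
from django.utils import timezone
from composer.models import RNNTune     # assumed model name
from composer.rnn import generate_abc   # assumed wrapper around the refactored folk-rnn API

def run_worker(poll_interval=1.0):
    """poll the database for unstarted tunes and generate them one at a time"""
    while True:
        tune = RNNTune.objects.filter(rnn_started=None).order_by('requested').first()
        if tune is None:
            time.sleep(poll_interval)
            continue
        tune.rnn_started = timezone.now()
        tune.save()
        tune.abc = generate_abc(tune)
        tune.rnn_finished = timezone.now()
        tune.save()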

diary | 15 nov 2017 | tagged: machine folk · research · code

postdoc: machine folk

Folk music is part of a rich cultural context that stretches back into the past, encompassing the real and the mythical, bound to the traditions of the culture in which it arises. Artificial intelligence, on the other hand, has no culture, no traditions. But it has shown great ability: beating grand masters at chess and Go, for example, or demonstrating uncanny wordplay skills when IBM Watson beat human competitors at Jeopardy. Could the power of AI be put to use to create music?

The article, by Bob Sturm and Oded Ben-Tal, goes on to say yes there is precedent, and here’s what we’re doing.

I’m now helping them, on a part-time contract. The idea is it’s part UI design for composing with deep learning, part community engagement (read: website), and part production–reception research.

diary | 18 oct 2017 | tagged: machine folk · research

viva

the dissertation had done the talking, and the viva was good conversation about it wrapped up with a “dr. harris” handshake. phew, and woah. having been in a death-grip with the dissertation draft for so long, nothing in the whole experience could touch the wholesomeness of simply hearing “i read it all, and it’s good”.

supervisor –

Dear All,

I’m delighted to report that Toby Harris successfully defended his thesis "Liveness: An Interactional Account” this morning.
The external said: “that was a sheer pleasure”. (very) minor corrections.

Pat.


Pat Healey,
Professor of Human Interaction,
Head of the Cognitive Science Research Group,
Queen Mary University of London

external examiner –

This is a richly intriguing study of the processes of interaction between performers, audiences and environments in stand-up comedy – a nice topic to choose since it is one where, even more than in straight theatrical contexts, ‘liveness’ is intuitively felt to be crucial. But as Matthew Harris says, what constitutes ‘liveness’ and how precisely it operates and matters, remains elusive – if pugnaciously clung to!

The conclusions reached and offered – which more than anything insist on the value and necessity of seeing all audience contexts as socially structured situations – both rings right, and seems to be based well in the details of the data presented. And the cautions at the end, about the risks with moving to higher levels of abstraction (wherein ‘the audience’ might become increasingly massified, rather than understood processually) looks good and valuable.

The specific claims made – that the ‘liveness’ of the performer matters little (e.g. by replacing him/her with a robot, or with a recording) – will nicely infuriate those with an over-investment in the concept, and will need careful presentation when this work is published. The subsequent experiment on the role of spotlighting or darkness on the kinds and levels of interaction audiences have with each other, and with the performer are also nicely counter-intuitive.

internal examiner –

I greatly enjoyed reading this thesis. It strikes a good balance between theory and experiment and makes several well-defined contributions. The literature reviews show a keen insight and a good synthesis of ideas, and the motivation for each of the experiments is made clear. The writing is polished and engaging, and the order of ideas in each chapter is easy to follow.

diary | 06 oct 2017 | tagged: phd · research · qmat

isps » performer-audience dynamics talk

had a lot of fun with my talk ‘visualising performer–audience dynamics’ at ISPS 2017. with a title like that, some play with the ‘spoken paper’ format had to be had.

pleasingly, people were coming up to me to say how much they enjoyed it for the rest of the conference. huzzah!

i recorded it, and have stitched it together with the slides. the video is here, on the project page.

diary | 01 sep 2017 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research · iceland · talk

submission

…finally.

diary | 09 may 2017 | tagged: phd · research · qmat

accepted for ISPS2017

‘visualising performer–audience dynamics’ spoken paper accepted at ISPS 2017, the international symposium on performance science. this is doubly good, as i’ve long been keen to visit reykjavík and explore iceland.

diary | 13 apr 2017 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

the conversational rollercoaster

media and arts technology colleague saul albert put out a call for help for the conversational rollercoaster. happy to help as a last-hurrah for time in the same research group, but more significantly it’s an event conceived to take interaction and audiences seriously.

cribbed from an email, my quick take after was –
– I could show passers-by a scientific process happening live. A production line, almost.
– With the talkaoke table, not only did we have a source of conversation, but something that passers by had to navigate past.
– Watch enough people come past, you start to spot patterns
– Capture those moments, pore over the detail, and soon you can…
– Build a “theory of passing the talkaoke table without getting pulled in”
– Laws that clearly aren’t like the laws of physics, but for this specific situation do have similar predictive power.
– Why is it that they work?

diary | 23 sep 2016 | tagged: conversational rollercoaster · engaging audiences · qmat · research

robot comedy lab: journal paper

the robot stand-up work got a proper write-up. well, part of it got a proper write-up, but so it goes.

This paper demonstrates how humanoid robots can be used to probe the complex social signals that contribute to the experience of live performance. Using qualitative, ethnographic work as a starting point we can generate specific hypotheses about the use of social signals in performance and use a robot to operationalize and test them. […] Moreover, this paper provides insight into the nature of live performance. We showed that audiences have to be treated as heterogeneous, with individual responses differentiated in part by the interaction they are having with the performer. Equally, performances should be further understood in terms of these interactions. Successful performance manages the dynamics of these interactions to the performer’s and audiences’ benefit.

pdf download

diary | 25 aug 2015 | tagged: comedy lab · phd · qmat · research

oriented-to test

need a hit-test for people orienting to others. akin to gaze, but the interest here is what it looks like you’re attending to. but what should that hit-test be? visualisation and parameter tweaking to the rescue…
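what i ended up tweaking is essentially a cone test: take the head-pose vector, and count a ‘hit’ when a target falls within some angle of it. a sketch, with the cone half-angle as the parameter to tweak –

import numpy as np

def oriented_to(subject_position, gaze_direction, target_position, half_angle_deg=30.0):
    """true if the target lies within a cone of half_angle_deg about the gaze direction"""
    to_target = np.asarray(target_position, dtype=float) - np.asarray(subject_position, dtype=float)
    to_target /= np.linalg.norm(to_target)
    gaze = np.asarray(gaze_direction, dtype=float)
    gaze /= np.linalg.norm(gaze)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze, to_target), -1.0, 1.0)))
    return angle <= half_angle_deg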

diary | 03 feb 2015 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

through the eyes

with the visualiser established, it was trivial to attach the free view camera to the head pose node and boom!: first-person perspective. to be able to see through the eyes of anyone present is such a big thing.

diary | 13 jan 2015 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

robot comedy lab: workshop paper

minos gave a seminar on his engineering efforts for robot stand-up, we back-and-forthed on the wider framing of the work, and a bit of that is published here. his write-up.

workshop paper presented at humanoid robots and creativity, a workshop at humanoids 2014.

pdf download

diary | 18 nov 2014 | tagged: comedy lab · phd · qmat · research

rotation matrix ambiguities

the head pose arrows look like they’re pointing in the right direction… right? well, of course, it’s not that simple.

the dataset processing script vicon exporter applies an offset to the raw angle-axis fixture pose, to account for the hat not being straight. the quick and dirty way to get these offsets is to say at a certain time everybody is looking directly forward. that might have been ok if i’d thought to make it part of the experiment procedure, but i didn’t, and even if i had i’ve got my doubts. but we have a visualiser! it is interactive! it can be hacked to nudge things around!

except that the visualiser just points an arrow at a gaze vector, and that doesn’t give you a definitive orientation to nudge around. this opens up a can of worms where everything that could have thwarted it working, did.

“The interpretation of a rotation matrix can be subject to many ambiguities.”
http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities

hard-won code –

DATASET VISUALISER

// Now write MATLAB code to console which will generate correct offsets from this viewer's modelling with SceneKit
for (NSUInteger i = 0; i < [self.subjectNodes count]; ++i)
{
	// Vicon Exporter calculates gaze vector as
	// gaze = [1 0 0] * rm * subjectOffsets{subjectIndex};
	// rm = Rotation matrix from World to Mocap = Rwm
	// subjectOffsets = rotation matrix from Mocap to Offset (ie Gaze) = Rmo

	// In this viewer, we model a hierarchy of
	// Origin Node -> Audience Node -> Mocap Node -> Offset Node, rendered as axes.
	// The Mocap node is rotated with Rmw (ie. rm') to comply with reality.
	// Aha. This is because in this viewer we are rotating the coordinate space not a point as per exporter

	// By manually rotating the offset node so its axes register with the head pose in video, we should be able to export a rotation matrix
	// We need to get Rmo as rotation of point
	// Rmo as rotation of point = Rom as rotation of coordinate space

	// In this viewer, we have
	// Note i. these are rotations of coordinate space
	// Note ii. we're doing this by taking 3x3 rotation matrix out of 4x4 translation matrix
	// [mocapNode worldTransform] = Rwm
	// [offsetNode transform] = Rmo
	// [offsetNode worldTransform] = Rwo

	// We want Rom as rotation of coordinate space
	// Therefore Offset = Rom = Rmo' = [offsetNode transform]'

	// CATransform3D is however transposed from rotation matrix in MATLAB.
	// Therefore Offset = [offsetNode transform]

	SCNNode* node = self.subjectNodes[i][@"node"];
	SCNNode* mocapNode = [node childNodeWithName:@"mocap" recursively:YES];
	SCNNode* offsetNode = [mocapNode childNodeWithName:@"axes" recursively:YES];

	// mocapNode has rotation animation applied to it. Use presentation node to get rendered position.
	mocapNode = [mocapNode presentationNode];

	CATransform3D Rom = [offsetNode transform];

	printf("offsets{%lu} = [%f, %f, %f; %f, %f, %f; %f, %f, %f];\n",
		   (unsigned long)i+1,
		   Rom.m11, Rom.m12, Rom.m13,
		   Rom.m21, Rom.m22, Rom.m23,
		   Rom.m31, Rom.m32, Rom.m33
		   );

	// BUT! For this to actually work, this requires Vicon Exporter to be
	// [1 0 0] * subjectOffsets{subjectIndex} * rm;
	// note matrix multiplication order

	// Isn't 3D maths fun.
	// "The interpretation of a rotation matrix can be subject to many ambiguities."
	// http://en.wikipedia.org/wiki/Rotation_matrix#Ambiguities
}

VICON EXPORTER

poseData = [];
for frame=1:stopAt
	poseline = [frameToTime(frame, dataStartTime, dataSampleRate)];
	frameData = reshape(data(frame,:), entriesPerSubject, []);
	for subjectIndex = 1:subjectCount

		%% POSITION
		position = frameData(4:6,subjectIndex)';

		%% ORIENTATION
		% Vicon V-File uses axis-angle represented in three datum, the axis is the xyz vector and the angle is the magnitude of the vector
		% [x y z, |xyz| ]
		ax = frameData(1:3,:);
		ax = [ax; sqrt(sum(ax'.^2,2))'];
		rotation = ax(:,subjectIndex)';

		%% ORIENTATION CORRECTED FOR OFF-AXIS ORIENTATION OF MARKER STRUCTURE
		rm = vrrotvec2mat(rotation);

		%% if generating offsets via calcOffset then use this
		% rotation = vrrotmat2vec(rm * offsets{subjectIndex});
		% gazeDirection = subjectForwards{subjectIndex} * rm * offsets{subjectIndex};

		%% if generating offsets via Comedy Lab Dataset Viewer then use this
		% rotation = vrrotmat2vec(offsets{subjectIndex} * rm); %actually, don't do this as it creates some axis-angle with imaginary components.
		gazeDirection = [1 0 0] * offsets{subjectIndex} * rm;

		poseline = [poseline position rotation gazeDirection];
	end
	poseData = [poseData; poseline];
end

diary | 19 aug 2014 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

writing up

if only writing up the phd was always like this. beautiful room, good friends, excellent facilitation by thinkingwriting.

diary | 12 aug 2014 | tagged: phd · qmat · research

virtual camera, real camera

of course, aligning the virtual camera of the 3D scene with the real camera’s capture of the actual scene was never going to be straightforward. easy to get to a proof of concept. hard to actually register the two. i ended up rendering a cuboid grid on the seat positions in the 3D scene, drawing by hand (well, mouse) what looked about right on a video still, and trying to match the two sets of lines by nudging coordinates and fields-of-view with some debug-mode hotkeys i hacked in.
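for anyone repeating this, the registration check amounts to projecting known 3D points with a pinhole model and eyeballing the overlay against the video still – a sketch, not the viewer’s actual SceneKit code, and the conventions (axes, signs) are assumptions –

import numpy as np

def project_points(points_world, camera_position, world_to_camera, fov_deg, image_size):
    """project world points (camera looking down +z) to pixel coordinates"""
    width, height = image_size
    focal = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    pixels = []
    for p in points_world:
        x, y, z = world_to_camera @ (np.asarray(p, dtype=float) - np.asarray(camera_position, dtype=float))
        pixels.append((width / 2.0 + focal * x / z, height / 2.0 - focal * y / z))
    return np.array(pixels)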

in hindsight, i would have stuck motion capture markers on the cameras. so it goes.

diary | 16 jul 2014 | tagged: comedy lab · performer–audience dynamics · qmat · research

visualising everything

visualising head pose, light state, laugh state, computer vision happiness, breathing belt. and, teh pretty. huzzah.

diary | 21 jun 2014 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

comedy lab » angel comedy

third gig: angel comedy. again, an established comedy club and again a different proposition. a nightly, free venue, known to be packed. wednesdays were newcomers night which, again, was somewhat appropriate.

what i remember most vividly has nothing to do with our role in it, but was rather the compère warming up the crowd after the interval. it was a masterclass in rallying a crowd into an audience (probably particularly warranted given the recruitment message of ‘free’ combined with inexperienced acts). i rue to this day not recording it.

diary | 04 jun 2014 | tagged: comedy lab · phd · qmat · research

comedy lab » gits and shiggles

the second gig of our tour investigating robo-standup in front of ‘real’ audiences: gits and shiggles at the half moon, putney. a regular night there, we were booked amongst established comedians for their third birthday special. was very happy to see the headline act was katherine ryan, whose attitude gets me every time.

shown previously was artie on-stage being introduced. he (it, really) has to be on stage throughout, so we needed to cover him up for a surprise reveal. aside from the many serious set-up issues, i’m pleased i managed to fashion the ‘?’ in a spare moment. to my eye, makes the difference.

artie has to be on stage throughout as we need to position him precisely in advance. that, and he can’t walk. the precise positioning is because we need to be able to point and gesture at audience members: short of having a full kinematic model of artie and the three dimensional position of each audience member identified, we manually set the articulations required to point and look at every audience seat within view, while noting where each audience seat appears in the computer vision’s view. the view is actually a superpower we grant to artie, the ability to see from way above his head, and do that in the dark. we position a small near-infrared gig-e vision camera in the venue’s rigging along with a pair of discreet infra-red floodlights. this view is shown above, a frame grabbed during setup that has hung around since.
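the mechanics boil down to two lookup tables keyed by seat – a sketch with made-up structures and numbers, not the show-control code –

SEAT_POSES = {    # seat id -> pre-set articulation for artie to point and look at that seat
    'A1': {'head_yaw': -30, 'head_pitch': -10, 'arm': 'left_point_low'},
    'A2': {'head_yaw': -15, 'head_pitch': -10, 'arm': 'left_point_low'},
}
SEAT_REGIONS = {  # seat id -> bounding box (x, y, w, h) in the overhead camera image
    'A1': (40, 310, 60, 60),
    'A2': (110, 310, 60, 60),
}

def seat_for_face(face_xy):
    """map a detected face position in the overhead camera image back to a seat id"""
    x, y = face_xy
    for seat, (rx, ry, rw, rh) in SEAT_REGIONS.items():
        if rx <= x <= rx + rw and ry <= y <= ry + rh:
            return seat
    return None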

diary | 03 jun 2014 | tagged: comedy lab · phd · qmat · research

comedy lab » alternative comedy memorial society

getting a robot to perform stand-up comedy was a great thing. we were also proud that we could stage the gig at the barbican arts centre. prestigious, yes, but also giving some credibility to it being a “real gig”, rather than an experiment in a lab.

however, it wasn’t as representative of a comedy gig as we’d hoped. while our ‘robot stand-up at the barbican’ promotion did recruit a viably sized audience (huzzah!), the (human) comics said it was a really weird crowd. in short, we got journalists and robo-fetishists, not comedy club punters. which on reflection is not so surprising. but how to fix?

we needed ‘real’ audiences at ‘real’ gigs, without any recruitment prejudiced by there being a robot in the line-up. we needed to go to established comedy nights and be a surprise guest. thanks to oxford brookes university’s kind loan, we were able to load up artie with our software and take him on a three day tour of london comedy clubs.

and so, the first gig: the alternative comedy memorial society at soho theatre. a comedian’s comedy club, we were told; a knowledgeable audience expecting acts to be pushing the form. well, fair to say we’re doing something like that.

diary | 02 jun 2014 | tagged: comedy lab · phd · qmat · research

comedy lab dataset viewer

happy few days bringing-up a visualiser app for my PhD. integrating the different data sources of my live performance experiment had brought up some quirks that didn’t seem right. i needed to be confident that everything was actually in sync and spatially correct, and, well, it got to the point where i decided to damn well visualise the whole thing.

i hoped to find a nice python framework to do this in, which would neatly extend the python work already doing most of the processing on the raw data. however i didn’t find anything that could easily combine video with a 3D scene. but i do know how to write native mac apps, and there’s a new 3D scene framework there called SceneKit…

so behold Comedy Lab Dataset Viewer. it’s not finished, but it lives!

  • NSDocument based application, so i can have multiple performances simultaneously
  • A data importer that reads the motion capture data and constructs the 3D scene and its animation
  • A stack of CoreAnimation layers compositing 3D scene over video
  • 3D scene animation synced to the video playback position

diary | 16 may 2014 | tagged: comedy lab · performer–audience dynamics · phd · qmat · research

science photo prize

thanks to this shot, science outreach, and a national competition, i have a new camera. first prize! huzzah!

the full story is here — http://www.qmul.ac.uk/media/news/items/se/126324.html

screenshot above from — http://www.theguardian.com/science/gallery/2014/mar/31/national-science-photography-competition-in-pictures

diary | 31 mar 2014 | tagged: phd · comedy lab · photo · qmat · research

comedy lab: first results

hacked some lua, got software logging what i needed; learnt python, parsed many text files; forked a cocoa app, classified laugh state for fifteen minutes times 16 audience members times two performances; and so on. eventually, a dataset of audience response measures for every tenth of a second. and with that: results. statistics. exciting.
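the flavour of that python, as a sketch – hypothetical shapes; the real scripts chew through the exported annotation and log files –

import numpy as np

def to_grid(intervals, duration_s, step_s=0.1):
    """intervals: list of (start_s, end_s), e.g. laugh episodes; returns a boolean state per time step"""
    times = np.arange(0.0, duration_s, step_s)
    state = np.zeros(len(times), dtype=bool)
    for start, end in intervals:
        state[(times >= start) & (times < end)] = True
    return times, state

# one such column per audience member per measure, one row per tenth of a second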

a teaser of that is above, peer review needs to go it’s course before announcements can be made. as a little fun, though, here is the introduction of the paper the first results are published in – at some point before it got re-written to fit house style. this has… more flavour.

Live performance is important. We can talk of it “lifting audiences slightly above the present, into a hopeful feeling of what the world might be like if every moment of our lives were as emotionally voluminous, generous, aesthetically striking, and intersubjectively intense” \cite{Dolan:2006hv}. We can also talk about bums on seats and economic impact — 14,000,000 and £500,000,000 for London theatres alone in recent years \cite{Anonymous:2013us}. Perhaps more importantly, it functions as a laboratory of human experience and exploration of interaction \cite{FischerLichte:2008wo}. As designers of media technologies and interactive systems this is our interest, noting the impact of live performance on technology \cite{Schnadelbach:2008ii, Benford:2013ia, Reeves:2005uw, Sheridan:2007wc, Hook:2013vp} and how technology transforms the cultural status of live performance \cite{Auslander:2008te, Barker:2012iq}. However, as technology transforms the practice of live performance, the experiential impact of this on audiences is surprisingly under-researched. Here, we set out to compare this at its most fundamental: audience responses to live and recorded performance.

diary | 18 sep 2013 | tagged: comedy lab · phd · qmat · research

forked: video annotation app

a major part of the analysis for comedy lab is manually labelling what is happening when in the recordings. for instance, whether an audience member is laughing or not – for each audience member, throughout the performance. all in all, this adds up to a lot of work.

for this labelling to be accurate, let alone simply to get through it all, the interface of the video annotation software needs to be responsive – you are playing with time, in realtime. i was having such a bad time with elan[1] that the least bad option turned out to be writing my own simple annotator: all it need be is a responsive video player and a bag of keyboard shortcuts that generates a text document of annotations and times. luckily, there was an open-source objective-c / cocoa annotator out there, and so instead i’ve forked the code and hacked in the features i needed. never have i been so glad to be able to write native os x applications.

if you need features such as annotations that come from a controlled vocabulary and are continuous, i.e. non-overlapping over time, or a workflow where annotation can be done in one pass with one hand on the keyboard and one on a scroll-gesture mouse/trackpad, the application is zipped and attached to this post (tested on 10.8, should work on earlier).

if you are a cocoa developer with similar needs, the code is now on github and i can give some pointers if needed.


  1. to be clear, elan is a powerful tool for which the developers deserve respect, and through its import and export is still the marshalling depot of my data. the underlying issue i suspect is the java runtime, as trying alternatives such as anvil didn’t work out either. ↩︎

diary | 20 aug 2013 | tagged: code · mac os · comedy lab · phd · research | downloads: vcodevdata.zip

comedy lab: new scientist article

“Hello, weak-skinned pathetic perishable humans!” begins the stand-up comic. “I am here with the intent of making you laugh.”
A curiously direct beginning for most comics, but not for Robothespian. This humanoid robot, made by British company Engineered Arts, has the size and basic form of a tall, athletic man but is very obviously a machine: its glossy white face and torso taper into a wiry waist and legs, its eyes are square video screens and its cheeks glow with artificial light.
Robothespian’s first joke plays on its mechanical nature and goes down a storm with the audience at the Barbican Centre in London. “I never really know how to start,” it says in a robotic male voice. “Which is probably because I run off Windows 8.”
The performance last week was the brainchild of Pat Healey and Kleomenis Katevas at Queen Mary University of London, who meant it not only to entertain but also to investigate what makes live events compelling.
As we watched, cameras tracked our facial expressions, gaze and head movements. The researchers will use this information to quantify our reactions to Robothespian’s performance and to compare them with our responses to two seasoned human comics – Andrew O’Neill and Tiernan Douieb – who performed before the robot. […]

full article: http://www.newscientist.com/article/dn24050-robot-comedian-stands-up-well-against-human-rivals.html

bit miffed that the brainchild line has been re-written to sound definitively like it’s pat and minos only, but hey. in the context of that sentence, it should be my name: comedy lab is my programme, prodding what makes performance and the liveness of live events compelling is my phd topic.

diary | 16 aug 2013 | tagged: comedy lab · phd · qmat · research

comedy lab: evening standard article

nice article in the london evening standard on comedy lab, link below and photo of it in the paper attached:
http://www.standard.co.uk/news/london/scientists-create-robot-to-take-on-comedians-in-standup-challenge-8753779.html

here’s the q & a behind the article, our answers channeled by pat healey

What does using robots tell us about the science behind stand-up comedy?
Using robots allows us to experiment with the gestures, movements and expressions that stand-up comedians use and test their effects on audience responses.

What’s the aim of the experiment? Is it to design more sophisticated robots and replace humans?
We want to understand what makes live performance exciting, how performers ‘work’ an audience; the delivery vs. the content.

Is this the first time an experiment of this kind has been carried out? How long is the research project?
Robot comedy is an emerging genre. Our performance experiment is the first to focus on how comedians work their audiences.

Tell me more about RoboThespian. Does he just say the comedy script or is he (and how) more sophisticated? Does he walk around the stage/make hand movements/laugh etc?
This research is really about what’s not in the script - we’re looking at the performance; the gestures, gaze, movement and responsiveness that make live comedy so much more than reading out jokes.

How does his software work?
We use computer vision and audio processing to detect how each person in the audience is responding. The robot uses this to tailor who it talks to and how it delivers each joke - making each performance unique.

What have you learned already from the show? Does the robot get more laughs? Does he get heckled? What has been the feedback from the audience afterwards?
I think Robothespian had a great opening night.

Do you see robots performing stand-up in future?
It will take some time to emerge but yes, I think this will come. Interactive technology is used increasingly in all forms of live performance.

diary | 09 aug 2013 | tagged: comedy lab · phd · qmat · research | downloads: comedylab-eveningstandardprint.jpeg

comedy lab: instrumenting audiences

getting a robot to tell jokes is no simple feat. programming and polishing a script for the robot to deliver is challenge enough, but trying to get that delivery to be responsive to the audience, to incorporate stagecraft that isn’t simply a linear recording… now that is hard. of course, in the research world, we like hard, so reading the audience and tailoring the delivery appropriately is exactly what we set out to do.

having had robothespian deliver what was essentially a linear script for his first night performance, for his second performance we turned on the interactivity. we had a camera and microphone giving us an audio-visual feed of the audience, and processed this to give us information to make decisions about robothespian’s delivery. a simple example is waiting until any audience audio – laughing, you hope – dies down before proceeding to the next section of the routine. more interesting to us is what having a humanoid robot allows us to do, as eye contact, body orientation, gesture and so on form so much of co-present human-human interaction. for that you need more than a single audio feed measuring the audience as a whole: you need to know exactly where people are and what they’re doing. in the photo you can see our first iteration of solving this, using the amazingly robust fraunhofer SHORE software, which detects faces and provides a number of metrics for each recognised face, such as male/female, eyes open/closed, and most usefully for instrumenting a comedy gig: a happiness score, which is effectively a smileometer. from this, robothespian delivered specific parts of the routine to the audience member judged most receptive at that point, was able to interject encouragement and admonitions, gestured scanning across the audience, and so on.
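to give a flavour of the decision logic this enables, here’s a minimal sketch in python – the per-face field names are placeholders standing in for SHORE’s actual output, and this isn’t the code we ran, just the shape of the idea:

  import time

  def most_receptive(faces):
      """pick the face with the highest happiness score, i.e. the widest smile right now.
      faces: list of dicts like {"id": 3, "happiness": 72.0, "eyes_open": True} (invented fields)."""
      return max(faces, key=lambda face: face["happiness"], default=None)

  def wait_for_quiet(get_audience_level, threshold=0.1, hold=1.5, poll=0.1):
      """block until the audience audio level has stayed below threshold for `hold` seconds,
      i.e. don't start the next joke over the top of the laughter (you hope) from the last one."""
      quiet_since = None
      while True:
          if get_audience_level() < threshold:
              quiet_since = quiet_since or time.monotonic()
              if time.monotonic() - quiet_since >= hold:
                  return
          else:
              quiet_since = None
          time.sleep(poll)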

research being hard, it seems turning the interaction on backfired, as the gross effect was to slow down the delivery, taking too long between jokes. but this is a learning process, and tweaking those parameters is something we’ll be working on. and – a big point i’ve learnt about research – you often learn more when things go wrong, or by deliberately breaking things, than when things work or go as expected. so there’ll be lots to pore over in the recordings here, comparing performer-audience-audience interaction between human and robot.

diary | 08 aug 2013 | tagged: comedy lab · phd · qmat · research

comedy lab: robothespian

“I never know how to start, which is probably because I run off windows 8” – and there were more laughs than groans!

as part of the media and arts technology phd you spend six months embedded somewhere interesting, working on something interesting. i did a deep dive into web adaptations and the semantic mark-up of stories at the bbc. kleomenis katevas has spent five months at engineered arts working on realtime interaction with their robothespian, and what better test could there be than a re-staging of comedy lab.

beyond tiernan’s script and kleomenis’s programming of the robot, what was most exciting was to see a robot do colombine gardair’s ‘woooo’ gesture and the audience respond exactly as they do in covent garden. that’s the first time we’ve tried out something we’ve learnt about performance from doing this line of research… and it worked.

robothespian’s first gig was straight delivery of the script and ‘press play’ stagecraft. it went surprisingly well - it really did get laughs and carried the audience to a fair degree. tomorrow, we turn on the interactivity…

diary | 07 aug 2013 | tagged: comedy lab · phd · photo · qmat · research

comedy lab: andrew o'neill

first act proper: andrew o’neill. go watch the opening of this show, it’s perfect: http://www.youtube.com/watch?v=aGjbmywaKMI

highlight of this show had to be turning to the many kids who had appeared at the front, and singing his bumface song. to be clear, the bumface song is far from his funniest gag, not even anything much beyond the antics of a school playground. but what is so interesting is how that content is transformed in that moment of live performance and audience state into a bubble of joy. that’s what we’re after. he had lots of techniques for eliciting response from a slightly wary audience.

it’s why we’ve chosen the genre for these live experiments, but it bears repeating: stand-up comedy really is so much more than the jokes.

diary | 07 aug 2013 | tagged: comedy lab · phd · qmat · research

comedy lab: tiernan douieb

“good evening ladies and gentlemen, welcome to the barbican centre. ‘comedy lab: human vs robot’ will be starting shortly in level minus one. part of hack the barbican, it is a free stand-up gig with a robot headlining.”

so said i, on the public address system across all the spaces of the barbican centre. didn’t see that coming when i went to find out how to request an announcement.

the gig started, people came – this photo makes it look a bit thin, you can’t see all the seated people – and tiernan did his warm-up thing. and most brilliantly, didn’t run a mile when we brought up the idea of another comedy lab, and getting a robot to tell jokes.

diary | 07 aug 2013 | tagged: comedy lab · phd · qmat · research

comedy lab: human vs robot

Come and see some more stand-up comedy, in the name of science – and this time, there’s a robot headlining!

What makes a good performance? By pitting stand-up comics Tiernan Douieb and Andrew O’Neill against a life size robot in a battle for laughs, researchers at Queen Mary, University of London hope to find out more — and are inviting you along.
A collaboration between the labs of Queen Mary’s Cognitive Science Research Group, RoboThespian’s creators Engineered Arts, and the open-access spaces of Hack The Barbican, the researchers are staging a stand-up gig where the headline act is a robot as a live experiment into performer-audience interaction.
This research is part of work on audience interaction being pioneered by the Cognitive Science Group. It is looking at the ways in which performers and audiences interact with each other and how this affects the experience of ‘liveness’. The experiment with Robothespian is testing ideas about how comedians deliver their material to maximize comic effect.

Shows at 6pm, Wednesday 7th and Thursday 8th August, Barbican Centre. Part of Hack the Barbican.

Poster attached. Aside from the science, the designer in me is quite content with how that little task turned out.

diary | 02 aug 2013 | tagged: comedy lab · phd · qmat · research | downloads: comedy_lab_robot.pdf

sensing festivals paper

a sense of satisfaction to see someone i’ve been helping get on the research ladder accepted to a workshop and the paper we co-wrote going into the acm archive.

In order to sense the mood of a city, we propose first looking at festivals. In festivals such as Glastonbury or Burning Man we see temporary cities where the inhabitants are engaged afresh with their environment and each other. Our position is that not only are there direct equivalences between larger festivals and cities, but in festivals the phenomena are often exaggerated, and the driving impulses often exploratory. These characteristics well suit research into sensing and intervening in the urban experience. To this end, we have built a corpus of sensor and social media data around an 18,000-attendee music festival and are developing ways of analysing and communicating it.

“Sensing Festivals as Cities”, a position paper for ‘SenCity: uncovering the hidden pulse of a city’ workshop, accepted for publication in UbiComp '13 conference proceedings.

diary | 30 jun 2013 | tagged: imc at festivals · liveness · research · qmat

comedy lab'd

it happened! performers performed, audiences audienced, and now i have a lot of data to organise and analyse.

thanks to all who took part, and apologies to all whose hair the motion capture hats might have messed with. can’t show too much of the experiment for various reasons, but pictured is main act stuart goldsmith who, yep, left with hair somewhat flatter than when he arrived.

it’s a strange feeling doing an ambitious experiment like this, partly because so much rides on such a short-lived, one-off thing. more though, that it doesn’t represent the goal you started with – ie. a designed, informed instance of a live event that exploits its liveness – but rather aims to make things worse in the existing status-quo. there’s noble reasoning in that, for you really only get to see what’s going on when you start prodding with a stick and what once worked nicely starts to break up. doesn’t stop weird feelings lingering for days afterwards though.

diary | 04 jun 2013 | tagged: liveness · comedy lab · phd · research

comedy lab

Come and see some free stand-up comedy, in the name of science!

For my PhD, I’m staging a comedy gig. The comedian is booked, I need an audience of volunteers. You won’t hear me trying to make jokes out of performance theory or the theatrical wrangling I’ve had to do to pull this together, rather real stand-up from professional comics. Doing their thing will be Tiernan Douieb and Stuart Goldsmith. You’ll have a fun time, I’ll be able to analyse – putting it in broad strokes – what makes a good performance.

Tuesday 4th June, shows at 3pm and 5pm, at Queen Mary University of London. It’s critical we get the right numbers, so please sign up here. You’ll get a confirmation email with the attendance details.

Again: http://tobyz.net/comedylab

diary | 01 jun 2013 | tagged: liveness · phd · comedy lab · research

cogsci crowd app » field day

thanks to the promoters and the media and arts dtc, we had seven people running the crowd app attending the field day music festival in victoria park, london. science! fun!

…the analysis, however, is going to be less fun.

diary | 25 may 2013 | tagged: imc at festivals · liveness · research · qmat

cogsci crowd app » biosensing

of course, if you’ve just written a sensor logging smartphone app, and you have some bio-sensing data logging kit in the research group, you’re going to use it, right?

diary | 25 may 2013 | tagged: imc at festivals · liveness · research · qmat

cogsci crowd app

the interactive-map-and-then-some app turned out to be a step too far for the organisation we hoped would make it their own, but there was still a festival and a need to determine just what smartphone sensors could tell you about the activity around a festival. and so another app was born, one to harvest any and all sensor data for real-time or subsequent analysis. the interaction, media and communication group i’m part of having now been rebranded cognitive science, here is the cogsci crowd app, as it stands.

https://github.com/qmat/IMC-Crowd-App-Android
https://github.com/qmat/IMC-Crowd-Server

the UI presents a ‘crowd node’ toggle button, which corresponds to the app running a data logger and making a connection to a server counterpart. it’s called ‘crowd node’ because we hope this to be the beginning of a network of devices worn amongst the crowd, from which crowd dynamics can be analysed in realtime, and interventions staged. being on android, this crowd node is a service running in the foreground, which means the app can come and go while the service runs. it maintains a notification, and while this is there, the phone won’t sleep until it powers down. the datalogger registers for updates from all the sensors available on that device, and constantly scans for wifi base stations and bluetooth devices. getting some kind of audio fingerprint should be a useful future addition to the sensing. the server connection mints session IDs that keep things anonymous while tracking instances of the app, and receives the 1000-line json-formatted log files either in bulk afterwards or as they’re written. in time, this should be a streaming connection for realtime use, with eg. activity analysis and flocking algorithms running on the incoming data.
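to make that arrangement concrete, here’s a minimal python sketch of the server side – mint an anonymous session id, then accept batches of json log lines against it. the field names and record shapes here are illustrative only; the real code is in the repos linked above.

  import json, uuid
  from pathlib import Path

  LOG_DIR = Path("crowd_logs")
  LOG_DIR.mkdir(exist_ok=True)

  def mint_session():
      """return a fresh opaque id: tracks an app instance without identifying its owner."""
      return uuid.uuid4().hex

  def receive_batch(session_id, json_lines):
      """append a batch of sensor readings (one json object per line) to that session's log."""
      with open(LOG_DIR / (session_id + ".jsonl"), "a") as f:
          for line in json_lines:
              record = json.loads(line)       # e.g. {"t": 1369478400.2, "sensor": "accelerometer", "values": [0.1, 9.7, 0.3]}
              record["session"] = session_id  # tag with the session, never with a user identity
              f.write(json.dumps(record) + "\n")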

diary | 25 may 2013 | tagged: imc at festivals · research · liveness · qmat

latour'd CHI

out of all the people in this world that i half-understand, bruno latour is by far my favourite. ‘visualisation and cognition: drawing things together’ is my favourite academic paper by far, where he explains in no small part the western world by tracing the practice of using bits of paper. ‘aramis, or the love of technology’ is one of my favourite books, which transcends the genre of ‘look at this team of people working to make their dent in the world’ in the most amazing ways that just won’t make sense in summary. at its extremes you go from reading straight technical documents to hearing a train philosophise.

the thread that runs through these is that rather than gesture about society, technology, culture and other abstractions, if you want to do something productive in the world of those terms, start at the manifest phenomena in their tiny instantiations and build up. it’s quite a shift in world view, but i’m signed up - hence looking at the ‘liveness’ of live events through what is exposed by people as they experience the event.

so being able to attend CHI and hear latour give the closing keynote was a gift. not that i’ve entirely wrapped my mind around what he was saying, to put it mildly: WHAT BABOON NOTEBOOKS, MONADS, STATE SURVEILLANCE, AND NETWORK DIAGRAMS HAVE IN COMMON: BRUNO LATOUR AT CHI

diary | 02 may 2013 | tagged: research · qmat

event app as research, shaping up

a little bit of a more compelling demo than last shown. development of this app has proved pretty painful, part of which is engaging with openFrameworks and c++ at a level beyond demo, and part of which has been the flakiness of the ofxAddons i’ve tried to use. the 3D model loader ofxAssimpModelLoader turned into the bane of this project; a core component of the app, the scope of its ill-effects was never clear until the debugging got truly brutal. i also had to ditch ofxTwitter, but at least i can contribute my working search functionality back into the immeasurably better codebase of ofxTwitterStream.

diary | 29 nov 2012 | tagged: open frameworks · imc at festivals · liveness · research · qmat

event app as research

it doesn’t look like much at the moment, but this is the first step into my research group at university doing a study on real festival audiences at real festivals. i’m developing an interactive map / timetable app, which will have quite some interesting features and opt-ins by the time we’re done. the promoters we’ve been talking to already have an interactive map of sorts, i’ve already done some interesting things visualising live events, and of course there’s my phd on audiences and interaction.

diary | 10 aug 2012 | tagged: code · open frameworks · ios · imc at festivals · vj · liveness · research · qmat

twelve minutes on all my phd

to oxford for the ‘Inaugural RCUK Digital Economy Theme CDT Student Research Symposium’, ie. gather the guinea-pigs and see what they’re up to. happy to regain the overview of my research though, and working on a presentation is, for me, so much more enjoyable a process than writing.

given my research is on liveness and lecturing comes into it, there had to be a punchline or some way for the act of presentation to be reflexive of its subject. so the slides ended up looking like tweets, and they sent themselves out hashtagged up as parcels of backchannel fodder. unfortunately i didn’t realise the script i found wasn’t clever enough to parse multiple tweets per slide until afterwards, so all the links and asides that went with each slide didn’t get out, which was kinda the magic i wanted to happen - as if i was talking on two levels with two modalities at once. brushing off my applescript, that is now fixed and available for all.

diary | 03 jul 2012 | tagged: liveness · phd · qmat · research · talk | downloads: tobyharris-livenessresearchpresentation-v01-tweets.pdf · tobyharris-livenessresearchpresentation-v01.pdf

eofa » ends of ends of audience

nic ridout summing up the conference with extemporaneous flair.

diary | 31 may 2012 | tagged: research · qmat · ends of audience

eofa » fair

drinks reception, dinner: of course. but also a fair, with performances, demos, and allsorts.

diary | 30 may 2012 | tagged: research · qmat · ends of audience

eofa » q&a debrief

martin welton drawing threads through the day and hosting an open discussion prompted by the q&a post-its.

diary | 30 may 2012 | tagged: research · qmat · ends of audience

eofa » questions and answers

we made a ‘q&a format for a workshop where ideas should be on the move’. here we have colombine reading questions and responses to her talk ‘teaching audience responses: an ethnography of street performance’. and, crucially, adding responses of her own, building a conversation and noting people to follow up with.

diary | 30 may 2012 | tagged: research · qmat · ends of audience

eofa » day one

the workshop kicks off. of course i recreated the flyer design with a live camera feed of the audience. here, we have the ‘excite-o-matic’ of ‘exposing your still beating heart: introducing biodata to the audience experience’, roughly this paper.

diary | 30 may 2012 | tagged: research · qmat · ends of audience

eofa » programme

hot off the digital anvil: the programme for ends of audience.

Pat Healey, Martin Welton, Michael Schober and Lois Weaver – Introduction

Laurence Payot – Performing Audiences: Experiments in Real Time

Paul Tennent, Sarah Martindale, Stuart Reeves, Joe Marshall, Brendan Walker, Steve Benford – Exposing your still beating heart: introducing biodata to the audience experience

Coffee break

Colombine Gardair, Patrick G.T. Healey, Martin Welton – Teaching Audience Responses: an Ethnography of Street Performance.

Judy Batalion – The Live Comedy Audience

Christian Heath & Paul Luff – Audience, Participation and the Legitimacy of Events: Auctions of Fine Art and Antiques

Lunch

Keynote – Christoph Bregler – The Eye of the Crowd: Capturing, Sourcing, and Playing with Audiences.

Laurissa Tokarchuk, Matthew Purver, Stuart Battersby – Using social data to understand live festival audiences

Eirini Nedelkopoulou – The Phenomenology of Audience Interaction in Mixed-­Media Performance

Coffee break

Mariza Dima – MOBILE Stories: Exploring audience engagement through interactive mobile storytelling

Aneta Mancewicz, Joshua Edelman – Watching the Watching of Shakespeare

Short break

Q&A Debrief

 

Drink Reception & Dinner –
including Elbows on the Table by Valeria Graziano & Valentina Desideri and short talk by Barry Ife

Fair –
Angela Fernandez Orviz – New Models for Audience Engagement
Jon Armstrong – Towards a Psychological Theatre – Magic, Suggestion and The Audience’s Experience of Performance
Rachel Gomme – Mouth to Mouth
Ria Hartley – Play/Pause/Reflect/Submit
Kavin Preethi Narasimhan and Arash Eshghi – Watch it, we’re around

 

Keynote – Louise Blackwell & Kate McGrath – Producing fresh work for adventurous people.

John Sloboda and Helena Gaunt – Understanding audiences: helping creative artists to obtain richer information from their audiences.

Joslin McKinney – Empathy and Exchange: Audience Experience of Scenography

Johanna Linsley and Jan Mertens – The Library of Expectations

Coffee Break and Q&A session

Kim Skjoldager-Nielsen – Risky Interaction – Staged identity in SIGNA’s Salò

Anna Wilson – Ontroerend Goed’s The Audience

Philip Watkinson – A Mirror Staging: A Lacanian performance analysis of Romeo Castellucci’s On the Concept of the Face, Regarding the Son of God and its impact on the audience.

David Wiles – Picturing the historical audience

Lunch

Atau Tanaka – Music One Participates In

Isaac Schankler, Alexandre François and Elaine Chew – Mimi4x and Game Pieces: Creating an Audience of Performers

Coffee Break

Terri Power – Shakespeare’s Audiences

Graham White – Philosophical Theatres: From Descartes to Phenomenology

Orion Maxted – BANANA

Q&A Debrief

Short Break

Keynote – Nic Ridout – The Ends of Ends of Audience

Closing and Goodbyes

diary | 29 may 2012 | tagged: research · qmat · ends of audience | downloads: Ends of Audience Programme.pdf

austin office hours

CHI 2012 conference in full swing. i was attending for the liveness+HCI workshop; posting this has got so delayed that a longer write-up will have to wait for some other catalyst.

diary | 12 may 2012 | tagged: research · qmat

the live in live cinema redux

from the big revision of the presentation to a write-up that took it in a quite different direction to now: a ground-up re-write. a scholarly work that builds an argument and from the theory offers provocations back to the practice. spoiler: smartphone screens. tl;dr: play your audience not your computers. put out there in a spirit of debate: all crit welcome!

Live Cinema is a contemporary performance practice built around audio-visual media. This essay questions the ‘live’ in Live Cinema, asking what Live Cinema events can tell us about liveness, and what liveness can tell us about the practice. Here’s why:

I’m at a Live Cinema event, but I’m troubled. This is an important moment: my collective has been invited to The School of Cinematic Arts, University of Southern California; the event is explicitly labelled a Live Cinema one. To my mind, Hollywood might just as well have said ‘hello Live Cinema’! Watching the opening performances, I see an audience rapt. Coming off stage after our performance, big cheers. But this audience… this audience appreciated the content, the staging, but what here was really live? The fact that I was behind a laptop screen pressing buttons? Moreover, I can’t help but feel the audience would have got a better show if we’d played out a recording of our rehearsal, and checked our email instead. Something is wrong here, and having taken on this label of Live Cinema, we owe ourselves and our audience an investigation.

This personal account from the author highlights that Live Cinema is gaining acceptance and is appreciated, but answering what that appreciation is for may not be straightforward. One thing is for sure: as a performance form whose ‘product’ is media and whose ‘draw’ is liveness, it should be an instructive study, given that an opposition of these two terms has shaped much of the literature on liveness.

diary | 29 apr 2012 | tagged: live in live cinema · research · qmat · live cinema

heckling at ontologies redux

lying with statistics is oh so easy, intentionally or otherwise. the graph on the previous poster showed… a correlation so great it instantly raised suspicion. rightly so, and so back to the data to make something not only more rigorous but more expressive. turns out that doing it well, rather than just the first thing that comes to mind, takes waaay longer. ditto for the prose: much butchering of each other’s drafts, it being a joint effort between saul and me.

diary | 05 apr 2012 | tagged: heckling at ontologies · research · qmat

designing for liveness position paper

the best thing you can be asked to do after spending a year getting to grips with a phd and producing a document of goodness knows how many words is to take that and boil it down to two sides. thanks to newcastle’s culture lab (any surprise?) for cornering me into this by proposing a workshop on liveness at the premier conference on human factors in computing. and best of all: my position paper has been accepted.

In the literature on liveness there is a surprising paucity of studies that look directly at the character of interactions between audience members. Partly as a consequence of this, technological interventions in the live experience have focussed primarily on enhancing the performers’ ability to project aspects of their ’act’ or on enriching the ‘generic’ audience experience. We argue that the dynamics of the interactions amongst audience members is key to the experience of a live event and that if we attend to this directly new opportunities for technological intervention open up.

diary | 16 feb 2012 | tagged: liveness · phd · qmat · research | downloads: tobyharris-livenesshci.pdf

the ends of audience

we’re organising a workshop on live audiences at queen mary. it’s conceived around opening conversations across disciplines. here’s the call –

People in audiences act: they talk, clap, heckle, sigh, inhale, exhale, rustle, twitch, tweet, dance, flirt, laugh, whisper, shuffle, cough… in doing so, they interact. There is a structure and dynamic to these responses which is central to the experience of being in a live audience. This workshop aims to bring together researchers and professionals with interests in performance, interaction and technology who are working on understanding, instrumenting or experimenting with these dynamics, and the shifting ends of audience that they reveal.

We invite proposals for oral presentations, live demonstrations, installations and performance experiments that explore the nature of interaction in audiences. We especially welcome interventions, participatory formats and creative approaches to convening workshop sessions. Topics include but are not restricted to:

  • the dynamics of collective and individual experiences of performance,
  • the communicative organisation of audience-audience interaction,
  • non-verbal interaction and emotional contagion,
  • remote and co-present audience interactions,
  • the phenomenology of audience interaction,
  • changing historical and cultural understanding of the audience,
  • technologies and methods for sensing audience dynamics,
  • technologies and methods for enhancing and manipulating audience engagement.

http://qmedia.qmul.ac.uk/audience

diary | 17 dec 2011 | tagged: research · qmat · ends of audience | downloads: Ends of Audience Flyer.pdf

the audience through time

an early saturday start to attend the ‘audience through time’ conference organised by the drama department at queen mary. it was a good effort, and my chairing of the ‘technology and liveness’ panel seemed to go down well – phew. i especially enjoyed martin barker’s talk, which was spot-on topic for me and presented with gusto: motivated by the issue of ‘liveness’ it started by asking how do audiences make sense of and respond to the near-live quality of streamed performances in cinemas, but soon progressed to an empirically backed provocation of a ‘scandal to theory’ that really showed the value of crossing disciplines.

it’s interesting seeing the different conventions of the disciplines at play, and i still cannot reconcile my love of the debate in the drama seminars i have attended with the seeming pointlessness of reading out densely worded performance theory papers verbatim to a darkened hall (ref. my aside about auslander). something to ponder more, for i am one of the organisers of another conference on audience coming this may.

diary | 03 dec 2011 | tagged: liveness · research · qmat

heckling at ontologies demo

to newcastle for the all-hands meeting of the digital economy programme, aka where my funding comes from. the summer had seen an internship project use the data i had generated in my bbc internship the year before, and from this saul and i got talking about how that could go further. importantly for me, the demonstration proposal for it is the first bit of writing i’ve been happy with.

it also meant i could get back to the visualisation i’d made for the bbc project, and tie the display of the story-world information to the playing video. and my, how browsers have come on: processing.js, fonts and the canvas element are now happy bedfellows, and will happily alpha-blend over smooth playback of a movie. check it on github

Media consumption is increasingly networked, with richer experiences requiring ever-richer metadata to provide context and so link-ability. However creating meaningful metadata for rich media such as TV programming is fraught with practical and philosophical issues, starting with just who has the time to make it anyway. Through two Media & Arts Technology DTC internship projects – with the BBC (2010) and BT (2011) – two very different sets of metadata have been created that, representing the same TV programme, provide an interesting opportunity to investigate these issues. In one set we have a semantic, authorial representation modelling the content and narrative, in the other a free-text aggregation of mediated conversation about the programme by viewers. As the programme plays, we can compare viewers’ utterances with the TV production’s own modelling of the content.

Our demonstration will be an installation that plays the TV programme – an episode of Doctor Who – with corresponding animation juxtaposing the two sets of metadata. Our research agenda centres around the practical benefit of mixing the two approaches in creating metadata and exploring the dissonance between the two representations. In short: how much top down do you need to make the bottom up work (or should that be the other way around?); where do attempts to map one to the other fall back to attempts to find some tractability, and where do those fall back to conclusions that one or the other representation is invalid (and if so, which one – a librarian’s fantasy exposed, or interactions ill-suited to being co-opted)?

We would gladly host a ‘Heckle at Who’ session, where delegates will watch the TV programme and use their mobile device to contribute to the conversation around the programme. We could even turn this into semantic bingo: can we produce meaning from their utterances, derived from the semantic modelling work? This would be well-suited to an evening, social activity.
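boiled down to code, the heart of that comparison could be as simple as this – a python sketch with invented record shapes for both sets of metadata, just to show the idea of time-aligning viewer utterances against the production’s own annotations:

  def active_annotations(annotations, t):
      """annotations: list of {"start": s, "end": e, "label": "..."} in programme seconds; return those covering time t."""
      return [a for a in annotations if a["start"] <= t < a["end"]]

  def match_tweets(tweets, annotations):
      """tweets: list of {"t": seconds_into_programme, "text": "..."}.
      yield (tweet, labels it echoes, labels it misses) for each tweet."""
      for tweet in tweets:
          labels = {a["label"] for a in active_annotations(annotations, tweet["t"])}
          mentioned = {l for l in labels if l.lower() in tweet["text"].lower()}
          yield tweet, mentioned, labels - mentioned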

diary | 15 nov 2011 | tagged: triplestore · processingjs · html5 · talk · heckling at ontologies · research · qmat

one year review » a rounded representation

it might have been finished on the plane out to holiday, but i and it got there.

[Supervisor] Of course, I think you do still need to do significant work to disentangle some of the different threads of reasoning that are now in the introduction. In fact, I recommend a complete re-write in which you try to do some more careful exposition of the different positions people have taken.

…ah, the phd process. just when you’re happy you’ve got somewhere and achieved something, it’s back to square one: if better armed, and more skilled (the writing is getting better).

[removed document, as academic web services kept on trying to attribute to me, which while correct is a mis-representation given later development]

diary | 21 sep 2011 | tagged: liveness · phd · research · qmat

dtc meetup II: docfest

after nottingham, this time it’s to highwire at lancaster for the yearly gathering of the digital economy doctoral training centres. it was almost demoralising: we find a green-field site and a wonderful brand new dedicated building designed and constructed seemingly in the blink of an eye. not the kind of trick that is possible at queen mary, its campus crammed into london’s east end.

diary | 14 jul 2011 | tagged: research · qmat

nine month review » viva

thankfully the viva was like a good supervision session rather than a critical demolition. if only i had actually pressed the record button on the dictaphone app like i thought i had. possibly the best insight came right at the end, almost as an afterthought from my drama supervisor: it’s really all about attention.

in the written feedback:

The committee were impressed by the amount of work done and the quality of the literature review. This draws together some very interesting material and combines it well and shows good critical powers.

yay! ah - but these things always seem to have some kind of “subject to the usual corrections” clause. and, lo, mine does:

The committee requested that a revised submission should be made for a second review. No new reading is required, it is much more about refining the way the research issues are presented and giving a clear, coherent and tractable focus. There’s a lot of good work already done here but it would benefit from being sharpened. Specifically:

  1. Produce a new section that provides a clearer definition of the research
    questions and, in particular, a significant narrowing of the background concept
    of ‘liveness’ to a more conceptually and empirically tractable, and thus more
    focussed, issue (see below).
  2. Provide a new section that explains the methodological approach and, in
    particular how the initial system requirements / design will be motivated.
  3. Provide some discussion of how the work will link coherently - in terms of
    both key concepts and methodological strategies - between potentially diverse
    field environments.

diary | 20 jun 2011 | tagged: liveness · phd · research · qmat

nine month review » a title and 10+k words

three things you don’t want together: wedding organisation, alt-wedding organisation, and writing the first-year dry-run of your PhD thesis. all so important in life; all epic on the deadline front, all with just a week between them.

first to pass: the PhD nine-month review. 10+k words, finally a title i’m happy with, and most importantly, in it a coherent research programme that articulates both the bigger picture of why i got into this in the first place and the concrete detail of what i am going to study. liveness is a nebulous topic, and it has been quite the journey to get to this point.

the abstract is possibly the worst thing to put here, as it was the last thing to be re-written and i was beat by that point, but it gives the flavour. and in archiving this here, when the PhD is further along i can look back and wince…

Liveness: Exploiting the here and now of us together
The concept of liveness is fundamental to our understanding of what makes performance engaging but there is little consensus about what it is. This thesis will explore the issue by focussing on the role of interaction in liveness.
A review of technological interventions in these interactions has shown novel instrumentation, new modalities, and aspirations of immersion in dialog, yet overall the picture is one of clickers and twitter backchannels: little has been informed by any attempt to understand and design for the fine-grained interactional organisation of performer, audience and audience-member.
To address this, a clear and appropriate problem has been identified, against which ideas of amplifying and augmenting interactional signals, behaviours and organisational features will be explored. In short: there is no point in a lecture continuing if the delivery is incomprehensible to the students, so how does the lecturer find out, and how do the students let the lecturer know? Moreover, how do they do this while maintaining the shared focus of attention that is their very reason for being there? Pervasive media will be the means, and an iterative cycle of development, deployment and formative evaluation the process.
Leveraging human-computer interaction, this research shifts the analysis from crowd computing and active spectating to the performer-audience interaction required for informed performance.

diary | 17 jun 2011 | tagged: liveness · phd · research · qmat

qmedia open studios

the programme i’m part of at university – media & arts technology – is quite a leap for the department to have made. perhaps it could be characterised as: they realised they had lots of technology research, but nobody around who did things with technology, so they got some artists, hackers and whatnot in to see what would happen. but rather than create an island of ‘cool’, the plan was to embed MAT within the culture of a science and technology department. very laudable, but it has been so frustrating at times: institutional inertia and so on.

qmedia open studios felt like the turning of the tide. an exhibition of work going on under the umbrella of qmedia, it was a new way of doing things for the department, from minor victories like getting computer science to buy white art plinths and equipping a workshop as a proper hack lab, to helming conceptual shifts in how research can be linked up across the department and communicated to the public. i’m proud to have been a force behind it.

diary | 05 may 2011 | tagged: qmedia open studios · qmat · research

the live in live cinema in 4,000 words

having just overhauled the ‘about the live in live cinema’ presentation for the IMAP seminar, i thought it should be quite straightforward to translate this into a 4,000 word essay – my penance for sitting in on a module from QMUL’s excellent drama department last semester. how wrong i was: the structure of my argument turned out quite differently. all the better for it, however.

Three silhouettes, bodies poised above glowing buttons; piercing beams of light scanning across the void of the image. ‘Rhythms + Visions: Expanded + Live’ says the text. Flipping the flyer over, the venue – the School of Cinematic Arts, University of Southern California – lends an air of authority, and finally in the body text a definition: ‘a live-cinema event’.
I was there — in fact, I am one of the silhouettes on the flyer — and the ‘live-cinema event’ shall frame the following discourse on liveness, media-based performance, and how the role of performance in a true live cinema needs to be rethought.
Walking into the School of Cinematic Arts, there was no being led into the dark of a cinema theatre, rather this would be an exploration through the outdoor spaces of the complex. Moving image works were aligned onto the architecture, and scrims echoed projections in space. Finally, a stage area. The first act starting: Scott Pagano accompanied by four musicians. For the unconventional setting thus far, this is a setup all will recognise: there is what could be termed a cinema-grade screen with performers in front, and rows of seating laid out beyond.
The musicians are playing, seemingly consumed by their instruments and keeping in check with each other. Pagano is standing, the only one twisted around to face the screen rather than the audience. In his hands, an iPad upon which — and with — he is furiously gesturing. On the screen an abstract composition unfolding, organic forms built out with photographic elements, a triumph of aesthetic. The music is instrumental and amplified; without naming a genre, it’s accessible to the Los Angeles audience: guitars, keyboards, percussion. The audience seem receptive; there is a pleasing fidelity and sheen to the work.
But what here is live? But what here is cinema? These are the questions in my mind as I watch, and to which we will return.
Next a performance from the collective of which I am part, but not a piece with my direct involvement; I am still in the audience. Endless Cities by D-Fuse. It is a film in the Ruttman and Vertov tradition, a montage of urban scenes from around the world, and is accompanied by a live score: musique concrète performed from laptop with percussion accompaniment. Again it seems accessible, and in the photographic detail there is much to latch onto and be absorbed by.
It’s Live Cinema in the sense that I first heard the term: a musical accompaniment to a silent film. A montage from the kino-eye, it’s easier here to answer ‘what here is cinema’ than for the Pagano piece. But I still wonder, what is really live here, and why bother?
The final performance is in many ways my creation, and so here I cannot report from the audience, but can offer my view as a performer. Which is one of immense frustration. Starting out, I am in a good position: we have an expanded staging that breaks the imagery out of the single frame, a developed aesthetic that abstracts footage in sync to the music, and the impressive shot bank of Endless Cities to pull from. It’s less the dérive and more the impressionism of a late night taxi ride. And we’ve performed it really well before. But that is precisely what is killing me by the end of this particular performance. We have performed it better before, so wouldn’t a recording of that performance have served us better? It’s a recognition that performance in this context translates entirely to the audio-visual output, for our actual performance is opaque to the audience, operating somewhere between obscure symbols on an obscured screen and twitching trackpad fingers. At which point, rather than taking the best performance so far to play back, I ask myself why not just create a master version in the studio and be done with it?
This is the terrain from which I argue. My motivation is not to categorise art or debate concepts, but to get to the heart of what a true live cinema could be.

The essay needs a revision – given the time constraints it’s really just a first draft – which ‘about the live in live cinema’s next outing should provide, whether that is website, journal, seminar or lecture.

diary | 03 may 2011 | tagged: live in live cinema · research · liveness · qmat · live cinema

la » live in live cinema at imap

a day in a coffee shop making a major revision to the about the live in live cinema talk, and then back to USC to give it as an IMAP seminar. the host was holly willis – former editor of the amazing res magazine amongst other things – and somebody who not only is published on digital cinema and live cinema, but whose definition of live cinema is often embraced: “real-time mixing of images and sound for an audience, where the sounds and images no longer exist in a fixed and finished form but evolve as they occur, and the artist’s role becomes performative…” Holly Willis (Afterimage July/Aug 2009, Vol 37, No 1, p. 11). my talk is not about definitions however, it’s about exploiting the liveness, and what that could mean.

also found the adage that everything can change in two blocks is indeed true: routing from the recommended coffee shop that turned out to be closed to one that was open put me through some streets i’d prefer not to walk down, let alone with a bag packed with my digital life. the no-healthcare-mashed-up-bodies of the homeless seemingly kettled / corralled to then be routed around: it isn’t a phenomenon unique to america, but it’s the one that alienates me the most.

diary | 25 apr 2011 | tagged: live in live cinema · research · talk · teaching · live cinema

vjing research panel » live in live cinema

and onto me

In the 1970’s cinema was expanded; in the 1990’s it met ‘new media’ as soft cinema. In 2010 the technological landscape is ripe to combine these, siting cinema in a live performance context. As such a body of work is built, called ‘live cinema’ by its practitioners and curators, it is worth taking a step back and asking just what the value of the live in live cinema could be?
To examine this closely, we first need to address what live cinema could be, and what current practice is.
To address what live cinema could be, we will extrapolate from the aforementioned expanded cinema as characterised by Gene Youngblood, and soft cinema as characterised by Lev Manovich. We will consider a ‘cinema of the imagination’ as practised by oral storytellers, and hear of directors such as Peter Greenaway and Mike Figgis who have experimented with live performance as well as enjoying Hollywood success.
The current practice of live cinema will be presented through an experimental documentary offering a novel approach to representing this overtly ‘broken out of a pre-determined, linear, framed practice’ in pre-determined, linear and framed video.
In summarising the characteristics that could make cinema live, we will conclude that an analysis purely of production and medium does not provide sufficient differentiation from previous forms of cinema to justify any claim of live cinema to offering what could not be offered before. We shall instead turn to studies of other kinds of live performance and focus on the human interaction and ideas of audience. By identifying some unique qualities of storytelling, we shall arrive at a conclusion of what could truly make live cinema an art form with unique, compelling qualities: where core to the experience is that as well as a story is told, the story world is explored as a group experience.

full presentation: http://vimeo.com/17485334

photo by blanca: http://www.flickr.com/photos/whiteemotion/5189942616

diary | 18 nov 2010 | tagged: research · vj · live in live cinema · talk · live cinema

laika » live cinema talk

to berlin for laika, an event investigating audio-visual culture. helene, its creator, wrote to me saying

my goal for laika is to make people aware in berlin that live cinema is something avantgarde and will be big in the future. i have the impression even with transmediale people in berlin underestimate the importance of the topic. i would like that u make it valuable to them.

quite the tall order, but a cause I’m happy to prod and push at. it also gave me the opportunity to start framing my recent obsession with what the ‘live’ in live cinema can/could mean. having laid out a pitch for why live cinema is something interesting in the (uninterestingly titled - tssk, toby) live cinema documentary, i really feel it is time to investigate what is unexpected or unobvious in the siting of cinema in a live context.

diary | 03 sep 2010 | tagged: vj · live in live cinema · research · live cinema

bbc stories viva'd

time on the bbc stories project is up, report is written, academic viva viva’d.

couldn’t help myself but use hitchhiker’s guide to the galaxy as the example of a property crossing media. there’s a kick-ass letter douglas adams wrote to some film executive when hhg2g was in development hell that starts to open up just how different the approach and result of the different adaptations should be. however quoting it (i’m sure i read it in the salmon of doubt) would have been an indulgence too far for a 10-minute whip-through, so here is me venting that one.

it’s a good presentation, both an interesting topic and a nice slide deck. i hope to get it recorded to go with the pdfs of the report and whatnot i have made, and put it all up as a tobyz.net project page: project/bbcstories.

diary | 31 aug 2010 | tagged: qmat · bbc stories · research

dtc meetup

to nottingham for the ‘dtc summer school’, a meet-up of all the doctoral training centres that fall under the digital economy initiative. hosted by horizon, it meant the theme was right up my street: the lifelong contextual footprint, ie. the act of living is generating all this data/media, so what are we going to do with it?

given it was a meet-up and we’re the first generation, networking and sharing of research interests was built in to the programme. show-and-tell, however, would have just been too conventional, so there we were with t-shirts, pens and 80 sheepish expressions. and yes, that’s my shirt above, excuse the terrible photoshop shlepping of back and front. bonus twitter comment from jeremy morley: “seeing the different dtc student groups rather like the bit in shaun of the dead where shaun’s group meet an alternate group.”

lots of interesting stuff, including cool talks from matt adams of blast theory and aleks krotoski of the guardian and much more. it was a shame not to be able to participate in the “life stories” workshop, which was my interest almost verbatim, but the reason was good: my DTC MAT were hosting their own, and it was a little bit crazy: what if data were the fifth dimension? a thought experiment and some design fiction. there’s a kind of slides-made-through-the-workshop pdf attached, including the diminished-reality-3000™.

the marvels of the internet and the crowd there being what they are, there is already a series of blog posts that outlines the various talks and activities there: props to liz valentine.

diary | 16 jul 2010 | tagged: qmat · research · bbc stories | downloads: DTCSummerSchool-QMUL-SpaceTimeData-Web.pdf

thatcamp

to thatcamp london, an “unconference” ahead of the juggernaut that apparently is digital humanities 2010. loved the fact that for a conference organised on-line and firmly embedded in a world of twitter hashtags and multi-channel ADHD, the actual schedule was organised on the day using a giant chalkboard and people putting up their arms.

i was there for the semantic narrative / semantic web work i’m doing with the BBC, which has a lot of serious implications and opportunities for historians and whatnot, but these tweets are much more fun:

  • dancohen not to adore #thatcamp too much, but what other conference has academics, the BBC “future media” group, and comic book junkies in one room?
  • mbtimney more #doctorwho at #thatcamp! now we’re onto spitfires in space (an apt metaphor for our TEI / comic book / fanfic / narrative discussion)?

diary | 06 jul 2010 | tagged: qmat · research · bbc stories

me at the bbc: mythology engine

as part of the PhD programme i’m on, there is a six month placement in industry. i’m super-stoked that i’ve landed a project with the BBC that is right up my street, and could serve my ideas about live cinema very well. they wrote a great post on their r&d blog about what they’ve done so far, and springboarding from that i’ll be looking at possible futures for that kind of stuff. check out their post: http://www.bbc.co.uk/blogs/researchanddevelopment/2010/03/the-mythology-engine-represent.shtml

diary | 30 mar 2010 | tagged: qmat · research

mat » instrumenting audiences - ...and the magic behind it

and mid-stripdown here are the audience tables, each one microphone’d and speaker’d up. these all turned into a lot of wires, a big multi-channel sound card, and a heroic max patch made by henrik. not a trivial task conceptually, making audience audio feed back - as in, bounce around, echo, etc. - without actually feeding back - as in, screeeeeeeeeeeeeeeeech!
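not henrik’s patch itself, but the arithmetic it has to respect, sketched in python: audio routed from audience mics back out to audience speakers stays stable only while the round-trip loop gain sits below one.

  def safe_send_gain(mic_to_speaker_coupling, headroom_db=6.0):
      """largest mic-to-speaker send gain that keeps the feedback loop stable, with some headroom.
      mic_to_speaker_coupling: measured linear gain from a speaker back into the nearest audience mic."""
      max_linear = 1.0 / mic_to_speaker_coupling   # loop gain = send gain * coupling, and it must stay < 1
      return max_linear * 10 ** (-headroom_db / 20.0)

  # e.g. if a speaker couples back into a mic at 0.25, the send gain must stay under 4.0; with 6dB headroom, ~2.0
  print(safe_send_gain(0.25))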

diary | 26 feb 2010 | tagged: i/o · qmat · research

mat » instrumenting audiences - 4'33"...

as part of the media and arts technology programme, a group of us have been investigating live performance in terms of the audience. it’s an area i have great interest in, believing that the entertainment can be an emergent property of the audience rather than something that has to be received from a singular performance/performer: eg. kinetxt. this project is more subtle than that, instead trying to tease apart what exactly makes an audience an audience and play with that. the first step is to stop thinking of an audience as a single thing. it’s a collection of people who at some point, hopefully, come together and somehow an audience emerges. it’s the interactions between the individuals that create an unstable state we call an audience…?

as our experiment, we created a mini-cabaret event and tried a few things out with our technological twists. here is keir performing john cage’s 4’33", and for once not because we suddenly needed to fill five minutes, but because we had the whole venue decked out as an audience noise feedback matrix, gently developing and becoming more overt through the piece.

diary | 26 feb 2010 | tagged: qmat · research · liveness

thor magnusson: thinking through technology

stand-out talk at the outside the box conference by thor magnusson, talking about making creative tools in the digital realm. ostensibly about the nature of digital music instruments, it really dug into how their design is far from a neutral act, and how in use our minds often extend to think through them.

there’s an academic paper from thor that deals with a lot of this at: http://www.ixi-audio.net/thor/EpistemicTools_OS.pdf

Through the analysis of material epistemologies it is possible to describe the digital instrument as an epistemic tool: a designed tool with such a high degree of symbolic pertinence that it becomes a system of knowledge and thinking in its own terms.

i’d be the first to admit it’s a much nicer way in to have the presentation before chewing on such texts directly, but it’s good stuff.

diary | 16 nov 2009 | tagged: qmat · research