Heckling at ontologies

2011

‘Heckling at ontologies’ is a collaboration between me and Saul Albert that built on our two Media and Arts Technology PhD placements. From my content modelling work at the BBC, I came away asking: is it possible to model the world? Who says my model is right? And who has time to do it all anyway? Joining the dots between Saul’s reception-led work and my production-led work, you can start to answer all of those, and a whole lot more.

This research created authorial descriptions and audience annotations of the same episode of Doctor Who, then compared them as media metadata.
Media is moving online, where richer metadata can enable more flexible and inter-linked production, distribution and reception processes. However, creating TV media metadata that is relevant to authors, broadcasters and viewers alike is a significant practical and conceptual challenge. To investigate the differences between these contexts of media production and use, a formal, script-based media ontology was used to describe an episode of Doctor Who, which was then compared to transcripts of text-chat conversations between viewers of that episode. Results revealed a limited but potentially useful overlap in how authors, broadcasters and viewers describe and use media. Our research suggests directions that could exploit that overlap to create and evaluate media metadata in novel ways.

We’ve adapted the demonstration we staged into a video, and there’s a poster which gives a more rigorous and complete description.

The original demonstration was a visualisation of the two data-sets as the programme plays, set with three posters and our spiel. The video is a cut-down version of two of those things, and PDFs of the posters are attached to this page. The full visualisation should be live at http://heckle.at/ontologies, and all the code and data that runs it is hosted at http://github.com/qmat/ontoheckle. You’ll find an RDF file for each dataset, an HTML page to display the visualisation, and a whole lot of JavaScript to query the triplestore hosting the data and animate the results.
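To give a flavour of the querying side, here is a minimal sketch — not the actual ontoheckle code, and the query variables and URIs are made up — of the sort of helper that flattens the W3C SPARQL 1.1 JSON results a triplestore returns into plain objects ready for an animation loop:

```javascript
// Minimal sketch (hypothetical, not the repo's code): flatten the
// SPARQL 1.1 JSON results format into an array of plain objects.
function flattenBindings(results) {
  const vars = results.head.vars;
  return results.results.bindings.map((binding) => {
    const row = {};
    for (const v of vars) {
      // Each binding cell is { type, value, ... }; keep just the value.
      row[v] = binding[v] ? binding[v].value : null;
    }
    return row;
  });
}

// Example input: one row of results for an invented query over the
// episode dataset (variable names and URIs are illustrative only).
const sample = {
  head: { vars: ["scene", "label"] },
  results: {
    bindings: [
      {
        scene: { type: "uri", value: "http://example.org/scene/1" },
        label: { type: "literal", value: "The Doctor arrives" }
      }
    ]
  }
};

const rows = flattenBindings(sample);
// rows[0] → { scene: "http://example.org/scene/1", label: "The Doctor arrives" }
```

The same shape of result comes back whatever the triplestore, which is what makes this kind of flattening step a natural seam between the query code and the drawing code.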

We’d like to develop the demonstration much further: at present it’s a passive display, and we really should be having live heckle sessions with it. We have some ideas, ontological bingo being my favourite…

Credits

Industrial Partner - BBC Stories

  • British Broadcasting Corporation

Project Collaborators - BBC Stories

  • Rowena Goldman - Strategic Partnerships, BBC R&D
  • Paul Rissen - BBC Future Media & Technology
  • Dr. Michael Jewell - Goldsmiths College University of London
  • Dr. Tassos Tombros - Queen Mary University of London

Industrial Partners - Heckle at Social TV

  • BT / The People Speak

Project Collaborators - Heckle at Social TV

  • Andrew Gower - BT
  • Prof. Pat Healey - Queen Mary University of London

Diary entries

heckling at ontologies demo

to newcastle to the all-hands meeting of the digital economy programme, aka where my funding comes from. the summer had seen an internship project use the data i had generated in my bbc internship the year before, and from this saul and i got talking about how that could go further. importantly for me, the demonstration proposal for it is the first bit of writing i’ve been happy with.

it also meant i could get back to the visualisation i’d made for the bbc project, and tie the display of the story-world information to the playing video. and my, how browsers have come on: processing.js, fonts and the canvas element are now happy bedfellows, and will happily alpha-blend over smooth playback of a movie. check it on github
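tying the story-world display to the playing video boils down to one small piece of logic: given annotations stamped with start and end times, pick the ones that should be on screen for the video’s current playback time. a sketch of that (assumed data shape, not the repo’s code — the labels are made-up sample data):

```javascript
// Sketch (assumed annotation structure, not the actual ontoheckle code):
// given annotations with start/end seconds, return those that should be
// visible at the video's currentTime.
function activeAnnotations(annotations, currentTime) {
  return annotations.filter(
    (a) => a.start <= currentTime && currentTime < a.end
  );
}

// In the browser you would drive this from the video element:
//   video.addEventListener('timeupdate', () =>
//     draw(activeAnnotations(data, video.currentTime)));

// Made-up sample data for illustration.
const data = [
  { start: 0, end: 12, label: "scene one" },
  { start: 10, end: 30, label: "scene two" }
];

const onScreen = activeAnnotations(data, 11);
// → both annotations, since their intervals overlap at t = 11
```

keeping the selection pure like this means the canvas drawing code only ever renders what the function hands it, so the alpha-blending over the movie stays independent of the data wrangling.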

Media consumption is increasingly networked, with richer experiences requiring ever-richer metadata to provide context and thus linkability. However, creating meaningful metadata for rich media such as TV programming is fraught with practical and philosophical issues, starting with just who has the time to make it anyway. Through two Media & Arts Technology DTC internship projects – with the BBC (2010) and BT (2011) – two very different sets of metadata have been created that, representing the same TV programme, provide an interesting opportunity to investigate these issues. In one set we have a semantic, authorial representation modelling the content and narrative; in the other, a free-text aggregation of mediated conversation about the programme by viewers. As the programme plays, we can compare viewers’ utterances with the TV production’s own modelling of the content.
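The simplest possible version of that comparison can be sketched as token overlap — a toy illustration, not the project’s actual method, with invented labels standing in for terms from the authorial model:

```javascript
// Toy sketch (not the project's method): score how much a viewer's
// chat utterance overlaps with labels drawn from the authorial model,
// by counting shared lowercase word tokens.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z']+/g) || [];
}

function overlapScore(utterance, ontologyLabels) {
  const words = new Set(tokenize(utterance));
  const labelWords = new Set(ontologyLabels.flatMap(tokenize));
  let shared = 0;
  for (const w of words) if (labelWords.has(w)) shared++;
  return shared;
}

// Hypothetical example: a chat line against invented episode labels.
const labels = ["The Doctor", "sonic screwdriver", "TARDIS"];
const score = overlapScore(
  "love how the doctor waves that screwdriver",
  labels
);
// shares "the", "doctor", "screwdriver" → 3
```

Even this crude measure makes the dissonance visible: most heckles score zero against the authorial model, which is exactly the gap the demonstration puts on screen.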

Our demonstration will be an installation that plays the TV programme – an episode of Doctor Who – with corresponding animation juxtaposing the two sets of metadata. Our research agenda centres on the practical benefit of mixing the two approaches when creating metadata, and on exploring the dissonance between the two representations. In short: how much top-down do you need to make the bottom-up work (or should that be the other way around)? And where do attempts to map one representation onto the other fall back to attempts to find some tractability, and then to conclusions that one or the other representation is invalid (and if so, which one – a librarian’s fantasy exposed, or interactions ill-suited to being co-opted)?

We would gladly host a ‘Heckle at Who’ session, where delegates watch the TV programme and use their mobile devices to contribute to the conversation around it. We could even turn this into semantic bingo: can we derive meaning from their utterances using the semantic modelling work? This would be well suited to an evening social activity.

heckling at ontologies redux

lying with statistics is oh so easy, intentionally or otherwise. the graph on the previous poster showed… a correlation so great it instantly raised suspicion. rightly so, and so back to the data to make something not only more rigorous but more expressive. turns out doing it well takes waaay longer than just doing the first thing that comes to mind. ditto for the prose: much butchering of each other’s words, it being a joint effort between saul and me.