tagged: code

festival of ideas » channeled through *spark screenrunner

at the heart of the brain was the increasingly inappropriately named *spark titler, collating all the media and channelling it to the screen. it runs the screen, and gives just what you need to be responsive to the moment without breaking the visual illusion. so… *spark screenrunner?

whatever its grown-up name is, it monitored a fileshare for photos incoming from the caption-shot camera and for illustrations and data-vis from ciaran and caroline’s laptops, listened to twitter accounts and hashtags, and, wonderfully, got updates in real-time from convotate, stef’s conversation annotation web-app. a technical shout-out here to pusher, the HTML5 websocket-powered realtime messaging service, and to luke redpath’s objective-c library. and via the venue’s many-input HD vision mixer and a quartz composer patch or two more, we had treated feeds from above ciaran’s illustration pad, from his photoshop screen, and whatnot.
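
the fileshare-watching part is conceptually simple. the screenrunner itself was quartz composer / objective-c, but here’s a minimal python sketch of the idea – poll a watched folder and queue up any new images that appear (the folder path and poll interval here are made up for illustration):

```python
import time
from pathlib import Path

WATCH_DIR = Path("/Volumes/festival-share/photos")  # hypothetical fileshare mount
SEEN = set()

def poll_for_new_images():
    """Return any image files that have appeared since the last poll."""
    new_files = []
    for path in sorted(WATCH_DIR.glob("*.jpg")):
        if path not in SEEN:
            SEEN.add(path)
            new_files.append(path)
    return new_files

while True:
    for photo in poll_for_new_images():
        print(f"queueing {photo.name} for the screen")  # hand off to the renderer here
    time.sleep(1)  # poll interval is illustrative only
```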

it might be that you have to do this kind of job to grok the need, but i really think there’s something in *spark screenrunner, whether it’s just titling and transitioning between two presenters’ powerpoints or this kind of high-end craziness.

diary | 09 feb 2012 | tagged: code · mac os · quartz composer · titler · *spark · vj · festival of ideas · video-out · engaging audiences

dvi mixer - code released

things are building to a crescendo. as promised, the software that runs the controller is open-source, and so here it is, released.

http://mbed.org/users/tobyspark/code/SPK-DVIMXR/

notably -

i’ve also been corralling the OSC code available for mbed into a library: http://mbed.org/users/tobyspark/code/OSC/
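
the OSC wire format itself is simple enough to be worth showing. the mbed library is c++, so take this as an illustrative python encoder of the message layout only, not the library’s API:

```python
import struct

def _pad(data: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of four bytes."""
    return data + b"\0" * (4 - len(data) % 4 if len(data) % 4 else 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message: padded address, padded type-tag string, then big-endian arguments."""
    type_tags = ","
    payload = b""
    for arg in args:
        if isinstance(arg, int):
            type_tags += "i"
            payload += struct.pack(">i", arg)
        elif isinstance(arg, float):
            type_tags += "f"
            payload += struct.pack(">f", arg)
        else:
            type_tags += "s"
            payload += _pad(str(arg).encode())
    return _pad(address.encode()) + _pad(type_tags.encode()) + payload

# e.g. the kind of fader message a controller might send (address is hypothetical)
packet = osc_message("/dvimxr/xfade", 0.5)
```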

for history’s sake, and perhaps to help any hackers, attached is a zip of the arduino code i had before the leap to mbed, v07 through to today’s v18. none of the interface goodness, but it has the fast serial communication technique i arrived at, along with keying etc.

diary | 02 aug 2012 | tagged: code · release · dvi-mixer · vj | downloads: spk_dvimxr_v07_arduino_final.zip

event app as research

it doesn’t look like much at the moment, but this is the first step towards my research group at university doing a study on real festival audiences at real festivals. i’m developing an interactive map / timetable app, which will have some quite interesting features and opt-ins by the time we’re done. the promoters we’ve been talking to already have an interactive map of sorts, i’ve done some interesting things visualising live events before, and of course there’s my phd on audiences and interaction.

diary | 10 aug 2012 | tagged: code · open frameworks · ios · imc at festivals · vj · liveness · research · qmat

forked: video annotation app

a major part of the analysis for comedy lab is manually labelling what is happening when in the recordings. for instance, whether an audience member is laughing or not – for each audience member, throughout the performance. all in all, this adds up to a lot of work.

for this labelling to be accurate, let alone simply to get through it all, the interface of the video annotation software needs to be responsive - you are playing with time, in realtime. i was having such a bad time with elan[1] that the least bad option became writing my own simple annotator: all it needs to be is a responsive video player and a bag of keyboard shortcuts that generates a text document of annotations and times. luckily, there was an open-source objective-c / cocoa annotator out there, and so instead i’ve forked the code and hacked in the features i needed. never have i been so glad to be able to write native os x applications.

if you need features such as annotations drawn from a controlled vocabulary and continuous in time (ie. non-overlapping), or a workflow where annotation can be done in one pass with one hand on the keyboard and the other on a scroll-gesture mouse/trackpad, the application is zipped and attached to this post (tested on 10.8, should work on earlier versions).

if you are a cocoa developer with similar needs, the code is now on github and i can give some pointers if needed.
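
to give a flavour of what ‘a text document of annotations and times’ means here – this isn’t the app’s actual file format, just an illustrative python sketch of continuous, non-overlapping labels from a controlled vocabulary being written out:

```python
import csv

VOCABULARY = {"laughing", "smiling", "neutral"}  # an example controlled vocabulary, per audience member

def write_annotations(annotations, path):
    """annotations: list of (label, start_seconds, end_seconds), assumed sorted by start time."""
    previous_end = 0.0
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["label", "start", "end"])
        for label, start, end in annotations:
            assert label in VOCABULARY, f"{label} not in controlled vocabulary"
            assert start >= previous_end, "annotations must be non-overlapping in time"
            writer.writerow([label, f"{start:.3f}", f"{end:.3f}"])
            previous_end = end

write_annotations([("neutral", 0.0, 12.4), ("laughing", 12.4, 15.1)], "audience-member-01.txt")
```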


  1. to be clear, elan is a powerful tool for which the developers deserve respect, and through its import and export it is still the marshalling depot of my data. the underlying issue, i suspect, is the java runtime, as trying alternatives such as anvil didn’t work out either. ↩︎

diary | 20 aug 2013 | tagged: code · mac os · comedy lab · phd · research | downloads: vcodevdata.zip

hospitality » gig-e vision test

friends who made a cinema camera out of industrial cameras are getting excited about gig-e vision for live video work. as am i.

more on this will come. in the meantime, hospitality at brixton academy was around the corner, a friend was running visuals, and i’d just got our plug-in running at 60fps.

  • soak test with d’n’b frequencies: check.
  • flexibility of just dropping in the tiny camera on its single cable: check.
  • image quality: check, the cheap-ass lens i used was surprisingly good, and i have a high-quality prime lined up.

while i’m here, justin has done a fine job with the hospitality staging - the massive ‘h’ fixture is proper class, and the visuals were perfectly designed for a judicious minimum of LED panels.

diary | 27 sep 2013 | tagged: code · mac os · video-in · quartz composer · gev · vj

SPK-RectPack

a screenrunner client wanted an animating tiled layout. it’s surprisingly non-trivial to code, at least if you want to go beyond hard-coding a few grid layouts. thankfully, the problem is academically interesting too, and lo! there’s a paper on spatial packing of rectangles, complete with a MaxRectsBinPack algorithm and c++ implementation. respect to jukka jylänki.
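
to give an idea of the problem without reproducing jylänki’s c++, here’s a deliberately naive python sketch of packing rectangles into a bin – a simple shelf packer, not the maxrects algorithm the patch actually uses:

```python
def shelf_pack(bin_width, bin_height, sizes):
    """Place (w, h) rectangles left-to-right in horizontal shelves. Returns (w, h, x, y) placements."""
    placements = []
    x = y = shelf_height = 0
    for w, h in sorted(sizes, key=lambda s: s[1], reverse=True):  # tallest first packs tighter
        if x + w > bin_width:            # shelf full: start a new shelf below
            x, y = 0, y + shelf_height
            shelf_height = 0
        if y + h > bin_height:
            raise ValueError("rectangles do not fit in the bin")
        placements.append((w, h, x, y))
        x += w
        shelf_height = max(shelf_height, h)
    return placements

# e.g. tile five media items into a 1920x1080 screen
print(shelf_pack(1920, 1080, [(960, 540), (960, 540), (640, 360), (640, 360), (640, 360)]))
```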

getting this working for me was time worth investing, and i’ve released the results: a quartz composer patch and animation technique. it’s up on github, and is something best seen in action, so check the quick demo video on vimeo.

diary | 18 nov 2013 | tagged: code · mac os · quartz composer · titler · vj

folkrnn composer mk.i

off the digital anvil: a bare-bones web-app adaptation of the folk-rnn command line tool, the first step in making it a tool anyone can use. happily, folk-rnn is written in python – good in and of itself as far as i’m concerned – which makes using the django web framework a no-brainer.

- created a managed ubuntu virtual machine.
- wrangled clashing folk-rnn dependencies.
- refactored the folk-rnn code to expose the tune generation functionality through an API.
- packaged folk-rnn for deployment.
- created a basic django webapp:
	- UI to allow user to change the rnn parameters and hit go.
	- UI to show a generated tune in staff notation.
	- URL scheme that can show previously generated tunes.
	- folk-rnn-task process that polls the database (python2, as per folk-rnn); see the sketch below.
	- unit tests.
- functional test with headless chrome test rig.
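
the polling task in the list above is nothing exotic. model and field names here are hypothetical, as is the wrapper around the folk-rnn API, but the shape of it is roughly this:

```python
import time
import django

django.setup()  # assumes DJANGO_SETTINGS_MODULE points at the webapp's settings

from composer.models import TuneRequest  # hypothetical app and model names


def generate_abc(parameters):
    """Placeholder for the folk-rnn generation API exposed in the refactor above."""
    raise NotImplementedError


def run_worker(poll_interval=1.0):
    """Claim un-generated tunes one at a time and run folk-rnn over them."""
    while True:
        request = TuneRequest.objects.filter(abc="").order_by("requested").first()  # hypothetical fields
        if request is None:
            time.sleep(poll_interval)
            continue
        request.abc = generate_abc(request.parameters())  # hypothetical parameter bundle
        request.save()
```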

diary | 15 nov 2017 | tagged: machine folk · research · code

folkrnn composer mk.ii

the web-app adaptation of the folk-rnn command line tool now has the start of community and research features. you can archive generated tunes you like, tweak them and save the results as settings of the generated tune, and comment. plus dataset export.

diary | 14 dec 2017 | tagged: machine folk · code

folkrnn.org: 50x faster, and online

the web-app adaptation of the folk-rnn command line tool is now online, and generating tunes 50x faster – from ~1min to 1-2s. still bare-bones, an ongoing project, but at least something to play with.

diary | 29 jan 2018 | tagged: machine folk · research · code

folkrnn.org & themachinefolksession.org

now we’re talking: the folk-rnn webapp finally feels like a proper website: it’s styled, the UI gives instant feedback on e.g. invalid ABC input, and the rnn generation appears note-by-note as it’s generated. there’s a definite instant hit of satisfaction in pressing ‘go’ and seeing it stream across the page.

or rather, one of the apps feels like a proper website, as it’s actually two websites now. the composer app runs folkrnn.org, and is focussed entirely on generating tunes. the archiver app runs themachinefolksession.org, and is focussed on the community aspect: archiving, human edits, and so on.

diary | 26 feb 2018 | tagged: machine folk · code | downloads: folkrnn_live_abc.mov

folkrnn.org mk.v

server-side, straight-up django was the quickest way to get something working. but a fluid composition environment required no page refreshes and lots of client-side smarts. so, with much javascripting and websocketing, the composer becomes a single-page app. the tunes can pile up, you can filter out the ones you don’t like, you can copy and paste from a generated tune to prime the next, etc…
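 
the post doesn’t spell out which websocket layer the app uses, so take this as a sketch only – something like a django channels consumer that streams each generated ABC token to the browser as it arrives (the folk-rnn wrapper is a placeholder):

```python
import json

from channels.generic.websocket import WebsocketConsumer  # assumes django channels


def generate_tokens(params):
    """Placeholder standing in for the real folk-rnn generation loop."""
    yield from ["M:4/4", "K:Cmaj", "|:", "G,2", "C2", ":|"]


class ComposerConsumer(WebsocketConsumer):
    """Streams generated ABC token-by-token so the tune appears as it is written."""

    def connect(self):
        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        params = json.loads(text_data)  # e.g. {"temp": 1.0, "meter": "M:4/4", ...}
        for token in generate_tokens(params):
            self.send(text_data=json.dumps({"token": token}))
        self.send(text_data=json.dumps({"done": True}))
```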

diary | 19 mar 2018 | tagged: machine folk · code | downloads: folkrnn_singlepageapp.mov

hands-on with time

first challenge for the drawing interactions prototype app is to get ‘hands-on with time’. what does that mean? well, clearly controlling playback is key when analysing video data. but that also frames the problem in an unhelpful way, where playback is what’s desired. rather, the analyst’s task is really to see actions-through-time.

pragmatically, when i read or hear discussions of video data analysis, working frame-by-frame comes up time and time again, along with repeatedly studying the same tiny snippets. but if i think of the ‘gold standard’ of controlling video playback – the jog-shuttle controllers of older analogue equipment, or the ‘J-K-L’ of modern editing software – they don’t really address those needs.

so what might? i’ve been thinking about the combination of scrolling filmstrips and touch interfaces for years, promising direct manipulation of video. also, in that documentary the filmstrips are not just user-interface elements for the composer, but displays for the audience. such an interface might get an analyst ‘hands on with time’, and might better serve a watching audience. this is no small point, as the analysis is often done in groups, during ‘data sessions’. others would be able to tap the timeline for control – rather than one person owning the mouse – and all present would have an easy understanding of the flow of time as the app is used.

of course, maybe this is all post-hoc rationalisation. i’ve been wanting to code this kind of filmstrip control up for years, and now i have.

a little postscript: that panasonic jog-shuttle controller was amazing. the physical control had all the right haptics, but there was also something about the physical constraints of the tape playback system. you never lost track of where you were as the tape came to a stop and started speeding back. time had momentum. so should this.
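
what ‘time with momentum’ might mean in code is roughly a flick-then-friction model. none of this is the app’s actual implementation (that’s on iOS); just a python sketch of the idea:

```python
def scrub_with_momentum(position, velocity, duration, friction=0.92, dt=1 / 60):
    """Advance the playhead each frame after a flick, decaying velocity like tape spooling down.

    position: current playhead time in seconds
    velocity: seconds of media per second of wall-clock, set by the flick gesture
    """
    while abs(velocity) > 0.01:
        position = min(max(position + velocity * dt, 0.0), duration)
        velocity *= friction          # time has momentum, but it always comes to rest
        yield position

# e.g. a flick that shuttles forward at 4x, starting 10s into a 60s clip
for t in scrub_with_momentum(10.0, 4.0, 60.0):
    pass  # update the filmstrip / video frame for playhead time t
```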

diary | 25 mar 2018 | tagged: drawing interactions · research · code

ready for the unveil

after an intense week, the drawing interactions app is ready to be unveiled. the iPad Pro and Apple Pencil turn out to be amazing hardware, and i’ve really enjoyed the deep dive into the kind of precise, expressive, powerful code that’s required for this kind of iOS app. it

  • plays a video
  • lets you navigate around the video by direct manipulation of the video’s filmstrip.
  • when paused, you can draw annotations with a stylus.
  • these drawings also become ‘snap’ points on the timeline
  • these drawings are also drawn into the timeline, partly to identify the snap points, and partly with the hope they can become compound illustrations in their own right
  • when not paused, you can highlight movements by following them with the stylus

i got there. huzzah! that last feature, drawing-through-time, i’m particularly pleased with. of course, there are bugs and plenty of things it doesn’t do. but it’s demoable, and that’s what we need for tomorrow’s workshop.
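
for the record, drawing-through-time boils down to recording stylus points stamped with the video’s current time, then drawing back the nearby points as the playhead moves. the app itself is native iOS; this is a hypothetical python sketch of the data capture only:

```python
from dataclasses import dataclass

@dataclass
class StrokePoint:
    time: float  # video time in seconds when the point was drawn
    x: float
    y: float

class TimedStroke:
    """A highlight drawn while the video plays: each point is stamped with the playhead time."""

    def __init__(self):
        self.points = []

    def add(self, video_time, x, y):
        self.points.append(StrokePoint(video_time, x, y))

    def visible_at(self, video_time, window=0.5):
        """Points to draw near the current playhead, so the highlight follows the movement."""
        return [p for p in self.points if abs(p.time - video_time) <= window]
```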

diary | 29 mar 2018 | tagged: drawing interactions · research · code

folkrnn.org mk.vi

not just a lick of paint: secure connection, a nifty piece of UX around new vs. deterministic tunes, and a rat-hole of automated backups.

diary | 16 may 2018 | tagged: machine folk · code

themachinefolksession.org mk.ii

the community site. straight-up django (“the web framework for perfectionists with deadlines”), but there’s a lot going on.

diary | 29 jul 2018 | tagged: machine folk · research · code

stats time

for any given tune, how much activity surrounded it?
for any given session, what happened?
what are the usage trends of folkrnn.org and of themachinefolksession.org?

to answer these kinds of questions, enter stats, a django management command that processes the use data of the composer and archiver apps for insight / write-up in academic papers.
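
for anyone unfamiliar, a django management command is just a class that manage.py discovers; the stats command is along these lines, though the model and field names below are illustrative, not the real schema:

```python
# e.g. composer/management/commands/stats.py, run as `python manage.py stats`
from django.core.management.base import BaseCommand

from composer.models import Tune  # hypothetical model


class Command(BaseCommand):
    help = "Summarise folkrnn.org / themachinefolksession.org usage for write-up"

    def handle(self, *args, **options):
        tunes = Tune.objects.all()
        generated_weekly = {}
        for tune in tunes:
            week = tune.requested.isocalendar()[1]  # hypothetical timestamp field
            generated_weekly[week] = generated_weekly.get(week, 0) + 1
        self.stdout.write(f"{tunes.count()} tunes generated")
        for week, count in sorted(generated_weekly.items()):
            self.stdout.write(f"week {week}: {count}")
```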

i used this to write the following, with input from bob and oded. it will surely be edited further for publication, but this is as it stands right now.

During the first 235 days of activity at folkrnn.org, 24562 tunes were generated by – our heuristics suggest – 5700 users. Activity for the first 18 weeks shows a median of 155 tunes generated weekly. In the subsequent 15 weeks to the time of writing, overall use increased, with a median of 665 tunes generated weekly. This period also features usage spikes. One week, correlating with an interview in Swedish media, shows 2.7x the median tunes generated. The largest, correlating with a mention in German media, shows an 18.4x increase. These results show that making our tool available on the web has translated into actual use, and that use is increasing. Further, media attention brings increased use, and this use is similarly engaged, judged by similar patterns of downloading MIDI and archiving tunes to themachinefolksession.org.

Of the fields available for users to influence the generation process on folkrnn.org, the temperature was used more often than the others (key, meter, initial ABC, and random seed). Perhaps this is because changing temperature results in more obviously dramatic changes in the generated material. Increasing the temperature from 1 to 2 will often yield tunes that do not sound traditional at all. If changes were made to the generation parameters, the frequency of the resulting tune being played, downloaded or archived increased from 0.78 to 0.87.

Over the same period since launch, themachinefolksession.org has seen 551 tunes contributed. Of these tunes, 82 have had further iterations contributed in the form of ‘settings’; the site currently hosts 92 settings in total. 69 tunes have live recordings contributed; the site currently hosts 64 recordings in total (a single performance may encompass many tunes). These results show that around 100 concrete examples of machine-human co-creation have been documented.

Of the 551 contributed tunes, 406 were generated on, and archived from, folkrnn.org. Of these entirely machine-generated tunes, 32 have had human edits contributed; themachinefolksession.org currently hosts 37 settings of folkrnn generated tunes in total. These examples in particular demonstrate attributable human iteration of, or inspiration by, machine-produced scores.

Further value of machine-produced scores can be seen in the 30 registered users who have selected 136 tunes or settings as noteworthy enough to add to their tunebooks. Per the algorithm used by the home page of themachinefolksession.org to surface ‘interesting’ tunes, “Why are you and your 5,599,881 parameters so hard to understand?” is the most interesting, with 4 settings and 5 recordings.

While these results are encouraging, most content-affecting activity on themachinefolksession.org has been from the administrators; co-author Sturm accounts for 70% of such activity. To motivate the use of the websites, we are experimenting with e.g. ‘tune of the month’, see above, and have organised a composition competition.

The composition competition was open to everyone but targeted primarily at music students. Submissions included both a score for a set ensemble and an accompanying text describing how the composer used a folkrnn model in the composition of the piece. The judging panel - the first author was joined by Profs. Elaine Chew and Sageev Oore - considered the musical quality of the piece as well as the creative use of the model. The winning piece, Gwyl Werin by Derri Lewis, was performed by the New Music Players at a concert organised in partnership with the 2018 O’Reilly AI Conference in London. Lewis said he didn’t want to be ‘too picky’ about the tunes; rather, he selected a tune to work from after only a few trials. He describes using the tune as a tone row and generating harmonic, melodic and motivic material out of it, though the tune itself, as generated by the system, does not appear directly in the piece.

diary | 11 jan 2019 | tagged: machine folk · research · code

swedish model

folkrnn.org can now generate tunes in a swedish folk idiom. bob, having moved to KTH in sweden, got some new students to create a folkrnn model trained on a corpus of swedish folk tunes. and herein lies a tale of how things are never as simple as they seem.

the tale goes something like this: here we have a model that already works with the command-line version of folkrnn. and the webapp folkrnn.org parses models and automatically configures itself. a simple drop-in, right?

first of all, this model is sufficiently different that the defaults for the meter, key and tempo are no longer appropriate. so a per-model defaults system was needed.

then, those meter and key composition parameters are formatted differently in this corpus, which pretty much broke everything. piecemeal hacks weren’t cutting it, so a sane system was needed: one that standardised on one format and bridged cleanly to the raw tokens of each model.

after the satisfaction of seeing it working, bob noticed that the generated tunes were of poor quality. when a user of folkrnn.org generates a tune with a previous model, setting the meter and key as the first two tokens is exactly what the model expects, and it can then fill in the rest, drawing from the countless examples of that combination found in the corpus. but with this new model, or rather the corpus it was trained on, a new parameter precedes these two. so the mechanics that kick off each tune now needed to cope with an extra, optional term.

so expose this value in the composition panel? that seems undesirable, as this parameter is effectively a technical option subsumed by the musical choice of meter. and manually choosing it doesn’t guarantee you’re choosing a combination found in the corpus, so the generated tunes are still mostly of poor quality.

at this point, one might think that exactly what RNNs do is choose appropriate values. but it’s not that simple, as the RNN encounters this preceding value first, before the meter value set by the user. it can choose an appropriate meter from the unit-note-length, but not the other way round. so a thousand runs of the RNN and a resulting frequency table later, folkrnn.org is now wired to generate an appropriate pairing, akin to the RNN running backwards. those thousand runs also showed that only a subset of the meters and keys found in the corpus are used to start the tune, so now the compose panel only shows those, which makes for a much less daunting drop-down, and fewer misses for generated tune quality.
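
concretely, that ‘running the RNN backwards’ trick amounts to a conditional frequency table built from those thousand sample runs. a hypothetical sketch (token spellings and helper names are made up):

```python
from collections import Counter, defaultdict

def build_meter_table(sample_tunes):
    """sample_tunes: (unit_note_length, meter) pairs pulled from unconstrained RNN runs."""
    table = defaultdict(Counter)
    for unit_note_length, meter in sample_tunes:
        table[meter][unit_note_length] += 1
    return table

def unit_note_length_for(meter, table):
    """Pick the unit note length the corpus most often pairs with the user's chosen meter."""
    return table[meter].most_common(1)[0][0]

samples = [("L:1/8", "M:4/4"), ("L:1/8", "M:4/4"), ("L:1/16", "M:9/8"), ("L:1/8", "M:3/4")]
table = build_meter_table(samples)
print(unit_note_length_for("M:4/4", table))  # -> "L:1/8"
```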

unit-note-length is now effectively a hidden variable, which does the right thing… providing you don’t want to iteratively refine a composition, as it may vary from generated tune to generated tune. rather than exposing the parameter after all, and then having to implement pinning as per the seed parameter’s control, a better idea was had: make the initial ABC field also handle this header part of the tune. so rather than just copying-and-pasting-in snippets of a tune, you can paste in the tune from the start, including this unit note length header. this is neat because, as well as providing the advanced feature of being able to specify the unit note length value, it makes the UI work better for naïve users: why couldn’t you copy and paste in a whole tune before?

as per the theme there, implementing this wasn’t just a neat few lines of back-end python, as now the interface code that is loaded into the browser needs to be able to parse out and verify these header lines, and so on.

diary | 15 jan 2019 | tagged: machine folk · research · code

human-machine co-composition

i had a note squirrelled away that there were human-machine co-composition results still buried in the folkrnn.org write-up stats. and to me, that’s the heart of it.

with positive noises from journal paper reviewers in the air, i wrangled the time to go digging.

the following is the co-composition analysis the webapp’s stats management command now also produces.

Refining the folkrnn.org session data above, we can analyze only those tunes which are in some way a tweak of the one that came before. This iterative process of human-directed tweaks of the machine-generated tunes demonstrates co-composition using the folkrnn.org system. In numbers –
Of the 24657 tunes generated on folkrnn.org, 14088 keep the generation parameters from the previous tune while changing one or more (57%).
This happened within 4007 ‘iterative’ sequences of, on average, 6 tune generations (mean: 5.9, stddev: 8.7).
The frequency of the generate parameters used now becomes:
key: 0.28, meter: 0.24, model: 0.093, seed locked: 0.34, start abc is excerpt: 0.02, start abc: 0.2, temp: 0.39

One feature now possible to expose is whether the user has identified a salient phrase in the prior tune, and has primed the generation of the new tune with this phrase. This is the strongest metric of co-composition available on folkrnn.org. It is reported above as ‘start abc is excerpt’, tested for phrases comprising five characters or more (e.g. five notes, or fewer with phrasing), and, as per other generation metrics reported here, not counting subsequent generations with that metric unchanged. This happened 283 times (2%).
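
the ‘start abc is excerpt’ test itself is simple; a sketch of the heuristic as described (five characters or more lifted from the prior tune’s ABC):

```python
def start_abc_is_excerpt(start_abc, previous_tune_abc, minimum_length=5):
    """True if the user primed generation with a phrase lifted from the tune before."""
    phrase = start_abc.strip()
    return len(phrase) >= minimum_length and phrase in previous_tune_abc

# e.g. the 'Rounding Derry' case below: a two-bar phrase selected from the prior tune
print(start_abc_is_excerpt("C2EG ACEG|CGEG FDB,G,", "...|C2EG ACEG|CGEG FDB,G,|..."))  # True
```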

Further evidence of human-machine co-composition can be seen on themachinefolksession.org, where 239 of the ‘iterative’ folkrnn.org tunes were archived. Using the tune saliency metric used by the themachinefolksession.org homepage, the most noteworthy of these tunes is ‘Green Electrodes’. This was generated in the key C Dorian (//folkrnn.org/tune/5139), and as archived (https://themachinefolksession.org/tune/294) the user has manually added a variation set in the key E Dorian. This shows a limitation of folkrnn.org – that all tunes are generated in a variant of C (a consequence of an optimisation made while training the RNN on the corpus of existing tunes) – and shows that the human editing features of themachinefolksession.org have been used to work around such a limitation. Also, while not co-composition per se, the act of the user naming the machine-generated tune shows it has some value to them.

Direct evidence of the user’s intent can be seen in ‘Rounding Derry’ (https://themachinefolksession.org/tune/587). The user generated tune ‘FolkRNN Tune №24807’ on a fresh load of folkrnn.org, i.e. default parameters, randomised seed. The user played this tune twice, then selected the musical phrase ‘C2EG ACEG|CGEG FDB,G,’ and set this as the start_abc generation parameter. The user generated the next iteration, played it back, and archived it on themachinefolksession.org with a title of their own creation. There, the user writes –
‘Generated from a pleasant 2 measure section of a random sequence, I liked this particularly because of the first 4 bars and then the jump to the 10th interval key center(?) in the second section. Also my first contribution!’

diary | 02 may 2019 | tagged: machine folk · research · code

åste at artful spark

to london for the build of CogX, where i’m installing a touchscreen app i’ve developed for friends at Observatory. but i was going anyway, for this: artful spark: data.

six demos, three speakers and two hours exploring what happens when artists work with data

and in particular, åste amundsen

a pioneer in immersive festival environments and SWCTN Immersion Fellow, brings news from the frontline of how data can augment live interaction between actors and audiences.

she didn’t disappoint. was happy to see this slide too; i’ve got a drawing or two in that vein =]

diary | 05 jun 2019 | tagged: choose your own adventuresque · code

choose your own adventuresque

i’ve uploaded the code to choose your own adventuresque. it’s the generic bones of some commercial work, tidied up and with an own-brand look.

that commercial work started out as a fancy keynote deck; between master slides and the magic move transition, keynote can do wonders. however, introducing video quashed that: pauses on transitions, and so on. no matter, i took the work knowing i could wrangle a fallback of some custom code.

having cut my teeth writing raw openGL to get acceptable framerates, it still feels both wonderful and weird that javascript and a web browser make a perfectly good rendering engine. thanks to pixi.js i really was able to dash the app off.

plus, with ES2018 and e.g. object spreads, javascript finally feels like a language i might choose to use, rather than just have to.

this work also prompted a contribution to PixiJS

diary | 18 jun 2019 | tagged: choose your own adventuresque · code
