if only all issues resolved as simply as this. deficiency identified, fix made, one-line PR contributed, reviewed and accepted.
diary | 23 jul 2024 | tagged: code
gave a lightning talk at dorkbot – we all know that a hack ain’t production-grade; well, here’s me quantifying that.
‘taking stock: what made a maker business’ is far too big a talk to give at dorkbot, but this little excerpt made for a perfect bite.
diary | 11 jun 2024 | tagged: dvi-mixer · code · pervasive media studio · dorkbot
prototype one was feature complete for our base platform to experiment with… except that it slowed to a crawl in its first real-world session. prototype two has the engineering work to maintain a snappy drawing experience and UI. this hasn’t made the drawing itself performant; rather, it has made the engine responsive regardless of the drawing performance. it keeps the act of drawing synchronous along with the rest of the UI, while pushing the updating of the whole canvas into the background.
prototype one looked something like this, the standard fill-in-the-placeholder function to draw into the view –
override func draw(_ rect: CGRect)
the prototype two equivalent looks like this. it’s… a lot more complicated.
// Drawing canvas for the live stroke, which changes during stroking given prediction etc.
private var activeStrokeLayer: CALayer!
private var drawingLayerDelegate: DrawingLayerDelegate!
// Drawing canvas for the completed strokes; async, accumulating, and other sophistry
private var completedStrokesLayer: CALayer!
private var layerDelegate: LayerDelegate!
private var completedStrokesQueue: DispatchQueue!
private var completedStrokesBlocks: [DispatchWorkItem]!
private var completedStrokesContext: CGContext!
// Draw methods to abstract what’s going on behind the scenes
func drawCompletedStrokes(onlyAdd: [Stroke]? = nil)
func drawActiveStroke()
// When receiving a new set of strokes, if any stroke from the old set is missing from the new, we need to completely redraw. Conversely, new strokes can be drawn on top of the existing canvas.
func diff(_ oldElements: [Stroke], _ newElements: [Stroke]) -> diffResult
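to make that concrete, here’s a sketch of how drawCompletedStrokes(onlyAdd:) might be implemented against those declarations – illustrative only, with canvasBounds, strokes and Stroke.draw(in:) assumed rather than taken from the app –

func drawCompletedStrokes(onlyAdd: [Stroke]? = nil) {
    // only the latest redraw matters; cancel any queued-but-not-started work.
    completedStrokesBlocks.forEach { $0.cancel() }
    completedStrokesBlocks.removeAll()
    let block = DispatchWorkItem { [weak self] in
        guard let self = self else { return }
        if let strokes = onlyAdd {
            // new strokes accumulate on top of the existing bitmap…
            for stroke in strokes { stroke.draw(in: self.completedStrokesContext) }
        } else {
            // …whereas anything else forces a full redraw from scratch.
            self.completedStrokesContext.clear(self.canvasBounds)
            for stroke in self.strokes { stroke.draw(in: self.completedStrokesContext) }
        }
        let image = self.completedStrokesContext.makeImage()
        DispatchQueue.main.async {
            // swap the finished bitmap into the layer, back on the main thread.
            self.completedStrokesLayer.contents = image
        }
    }
    completedStrokesBlocks.append(block)
    completedStrokesQueue.async(execute: block)
}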
oh, and the timestamps on the audio timeline are nicely formatted now.
friend camille aubry was having thoughts about how her scribing practice could develop. i got an email that said “digital scribe?”; as she suspected, that’s something i have thoughts about too.
live illustration is something i’ve long played with – e.g. the concept and tooling for kinetxt – and in drawing interactions i have a recent iPad code-base for drawing-and-media. i’d been scheming that something in this vein would be interesting for future engaging-audiences work.
taking camille’s question as a prompt, i started coding. a base prototype was needed. first, to verify the basics: can camille draw with an iPad and its pencil? is the 10.5” size i have too limited? can my drawing code compete with a single marker pen and paper? with that established, we’d then have a platform to iteratively develop and test ideas.
so: spark scribe, an iOS app for illustrators documenting live events.
the core idea i have for the app is to combine the act of drawing with a dictaphone. by timestamping the strokes to the audio recording, the context for what is being drawn can always be returned to. plus, taking the user interface approach of drawing interactions, fluid and artful interactions should be possible.
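a minimal sketch of that timestamping idea – the type and field names here are mine for illustration, not the app’s –

import CoreGraphics
import Foundation

struct TimedStroke: Codable {
    let points: [CGPoint]
    let recordingTime: TimeInterval // seconds into the audio recording when the stroke was made
}

// scrubbing the audio timeline then replays the drawing: show every stroke
// that existed at that moment in the recording.
func strokes(at playbackTime: TimeInterval, in all: [TimedStroke]) -> [TimedStroke] {
    all.filter { $0.recordingTime <= playbackTime }
}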
despite the drawing interactions codebase, this is entirely new.
prototype one was used for the pervasive media studio’s lunchtime talk. it soon became clear that the drawing code needed work, as the app slowed to a crawl. this can’t have made drawing with it easy for camille, on top of it being a new tool. the drawing example code being ‘unoptimised’ refers to a few things, but mostly that it redraws every stroke, on every change. this was fine to verify the look, and tested fine in sessions of a few minutes at my desk, where the total number of strokes stays low. it is now proven not fine for the quantity of strokes that builds up scribing a real talk. i knew this would have to be addressed at some point; this has just brought that engineering task from the near-future to now. plus, thanks to this scribing session, i have real-world data i can load-test the app with.
Features: Prototype One
- Drawing engine
- A3 sized canvas with pan and zoom
- Pen tool with undo/redo
- Audio engine
- Records, plays back talk with timeline
- On playback, scrolling timeline displays illustration as drawn at that point in time
- Persistence
- Persists illustration + audio
- Imports and exports
- UTI `net.sparklive.scribe.archive`
- extension `.sparkscribe`
- format is a package wrapping (see the sketch after this list)
- `strokes.json`
- `audio.m4a`
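writing that package is pleasingly little code; a sketch using Foundation’s FileWrapper, with error handling elided and names assumed –

import Foundation

// bundle strokes.json and audio.m4a into a .sparkscribe package directory.
func writeArchive(strokesJSON: Data, audioURL: URL, to destination: URL) throws {
    let audio = try Data(contentsOf: audioURL)
    let package = FileWrapper(directoryWithFileWrappers: [:])
    _ = package.addRegularFile(withContents: strokesJSON, preferredFilename: "strokes.json")
    _ = package.addRegularFile(withContents: audio, preferredFilename: "audio.m4a")
    try package.write(to: destination, options: .atomic, originalContentsURL: nil)
}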
i’ve uploaded the code to choose your own adventuresque. it’s the generic bones of some commercial work, tidied up and with an own-brand look.
that commercial work started out as a fancy keynote deck; between master slides and the magic move transition, keynote can do wonders. however, introducing video quashed that: pausing on transitions, and so on. no matter, i took the work knowing i could wrangle a fallback of some custom code.
having cut my teeth writing raw openGL to get acceptable framerates, it still feels both wonderful and weird that javascript and a web browser make a perfectly good rendering engine. thanks to pixi.js i really was able to dash the app off.
plus, with ES2018 and e.g. object spreads, javascript finally feels like a language i might choose to use, rather than just have to.
this work also prompted a contribution to PixiJS
to london for the build of CogX where i’m installing a touchscreen app i’ve developed for friends Observatory. but i was going anyway, for this: artful spark: data.
six demos, three speakers and two hours exploring what happens when artists work with data
and in particular, åste amundsen
a pioneer in immersive festival environments and SWCTN Immersion Fellow, brings news from the frontline of how data can augment live interaction between actors and audiences.
she didn’t disappoint. was happy to see this slide too; i’ve got a drawing or two in that vein =]
i had a note squirrelled away that there were human-machine co-composition results still buried in the folkrnn.org write-up stats. and to me, that’s the heart of it.
with positive noises from journal paper reviewers in the air, i wrangled the time to go digging.
the following is the co-composition analysis the webapp’s stats management command now also produces.
Refining the folkrnn.org session data above, we can analyze only those tunes which are in some way a tweak of the one that came before. This iterative process of human-directed tweaks of the machine-generated tunes demonstrates co-composition using the folkrnn.org system. In numbers –
Of the 24657 tunes generated on folkrnn.org, 14088 keep the generation parameters from the previous tune while changing one or more (57%).
This happened within 4007 ‘iterative’ sequences of, on average, 6 tune generations (mean: 5.9, stddev: 8.7).
The frequency of the generate parameters used now becomes:
key: 0.28, meter: 0.24, model: 0.093, seed locked: 0.34, start abc is excerpt: 0.02, start abc: 0.2, temp: 0.39
One feature now possible to expose is whether the user has identified a salient phrase in the prior tune, and has primed the generation of the new tune with this phrase. This is the strongest metric of co-composition available on folkrnn.org. This is reported above as ‘start_abc is excerpt’, tested for phrases comprising five characters or more (e.g. five notes, or fewer with phrasing), and as per other generation metrics reported here, not counting subsequent generations with that metric unchanged. This happened 283 times (2%).
Further evidence of human-machine co-composition can be seen on themachinefolksession.org, where 239 of the ‘iterative’ folkrnn.org tunes were archived. Using the tune saliency metric used by the themachinefolksession.org homepage, the most noteworthy of these tunes is ‘Green Electrodes’. This was generated in the key C Dorian (//folkrnn.org/tune/5139), and as archived (https://themachinefolksession.org/tune/294) the user has manually added a variation set in the key E Dorian. This shows a limitation of folkrnn.org, that all tunes are generated in a variant of C (a consequence of an optimisation made while training the RNN on the corpus of existing tunes), and shows that the human editing features of themachinefolksession.org have been used to work around such a limitation. Also, while not co-composition per se, the act of the user naming the machine-generated tune shows it has some value to them.
Direct evidence of the user’s intent can be seen in ‘Rounding Derry’ (https://themachinefolksession.org/tune/587). The user generated tune ‘FolkRNN Tune №24807’ on a fresh load of folkrnn.org, i.e. default parameters, randomised seed. The user played this tune twice, then selected the musical phrase ‘C2EG ACEG|CGEG FDB,G,’ and set this as the start_abc generation parameter. The user generated the next iteration, played it back, and archived it on themachinefolksession.org with a title of their own creation. There, the user writes –
‘Generated from a pleasant 2 measure section of a random sequence, I liked this particularly because of the first 4 bars and then the jump to the 10th interval key center(?) in the second section. Also my first contribution!’
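for the curious, the ‘start_abc is excerpt’ test above boils down to something like this sketch – names are illustrative, and it’s in swift rather than the webapp’s python –

func startABCIsExcerpt(_ startABC: String, previousTuneABC: String) -> Bool {
    // a phrase of five or more characters, lifted verbatim from the prior tune
    let phrase = startABC.trimmingCharacters(in: .whitespacesAndNewlines)
    return phrase.count >= 5 && previousTuneABC.contains(phrase)
}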
diary | 02 may 2019 | tagged: machine folk · research · code
folkrnn.org can now generate tunes in a swedish folk idiom. bob, having moved to KTH in sweden, had some new students create a folkrnn model trained on a corpus of swedish folk tunes. and herein lies a tale of how things are never as simple as they seem.
the tale goes something like this: here we have a model that already works with the command-line version of folkrnn. and the webapp folkrnn.org parses models and automatically configures itself. a simple drop-in, right?
first of all, this model is sufficiently different that the defaults for the meter, key and tempo are no longer appropriate. so a per-model defaults system was needed.
then, those meter and key composition parameters are differently formatted in this corpus, which pretty much broke everything. piecemeal hacks weren’t cutting it, so a system was needed that standardised on one format and sanely bridged to the raw tokens of each model.
after the satisfaction of seeing it working, bob noticed that the generated tunes were of poor quality. when a user of folkrnn.org generates a tune with a previous model, setting the meter and key to the first two tokens is exactly what the model expects, and it can then fill in the rest, drawing from the countless examples of that combination found in the corpus. but with this new model, or rather the corpus it was trained on, a new parameter precedes these two. so the mechanics that kick off each tune needed to cope with an extra, optional term.
so expose this value in the composition panel? that seems undesirable, as this parameter is effectively a technical option subsumed by the musical choice of meter. and manually choosing it doesn’t guarantee you’re choosing a combination found in the corpus, so the generated tunes are still mostly of poor quality.
at this point, one might think that choosing appropriate values is exactly what RNNs do. but it’s not that simple, as the RNN encounters this preceding value first, before the meter value set by the user. it can choose an appropriate meter from the unit-note-length, but not the other way round. so, a thousand runs of the RNN and a resulting frequency table later, folkrnn.org is now wired to generate an appropriate pairing, akin to the RNN running backwards. those thousand runs also showed that only a subset of the meters and keys found in the corpus are used to start the tune, so now the compose panel only shows those, which makes for a much less daunting drop-down, and fewer misses for generated tune quality.
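in sketch form, that wiring is a weighted pick from the frequency table – the table values here are invented, and it’s swift rather than the webapp’s python –

let unitNoteLengthCounts: [String: [(token: String, count: Int)]] = [
    "M:4/4": [("L:1/8", 712), ("L:1/16", 288)],
    "M:3/4": [("L:1/8", 655), ("L:1/4", 345)],
]

// choose a unit-note-length token for the user's meter, proportionally to how
// often the RNN produced that pairing over the thousand runs.
func unitNoteLength(forMeter meter: String) -> String? {
    guard let counts = unitNoteLengthCounts[meter] else { return nil }
    let total = counts.reduce(0) { $0 + $1.count }
    var pick = Int.random(in: 0..<total)
    for entry in counts {
        pick -= entry.count
        if pick < 0 { return entry.token }
    }
    return counts.last?.token
}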
unit-note-length is now effectively a hidden variable, which does the right thing… providing you don’t want to iteratively refine a composition, as it may vary from tune to generated tune. rather than exposing the parameter after all, and then having to implement pinning as per the seed parameter’s control, a better idea was had: make the initial ABC field also handle this header part of the tune. so rather than just copying-and-pasting-in snippets of a tune, you can paste in the tune from the start, including this unit note length header. this is neat because, as well as providing the advanced feature of being able to specify the unit note length value, it makes the UI work better for naïve users: why couldn’t you copy and paste in a whole tune before?
as per the theme there, implementing this wasn’t just a neat few lines of back-end python, as now the interface code that is loaded into the browser needs to be able to parse out and verify these header lines, and so on.
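the gist of that header handling, sketched in swift rather than the actual browser-side javascript – real ABC parsing is fussier than this –

import Foundation

// split pasted ABC into leading header lines ("L:1/8", "M:4/4", …) and the tune body.
func splitHeaders(from abc: String) -> (headers: [String: String], body: String) {
    var headers: [String: String] = [:]
    var bodyLines: [String] = []
    var inHeader = true
    for line in abc.components(separatedBy: .newlines) {
        let trimmed = line.trimmingCharacters(in: .whitespaces)
        let isHeader = trimmed.count > 2
            && trimmed[trimmed.index(trimmed.startIndex, offsetBy: 1)] == ":"
            && (trimmed.first?.isLetter ?? false)
        if inHeader && isHeader {
            headers[String(trimmed.first!)] = String(trimmed.dropFirst(2))
        } else {
            inHeader = false
            bodyLines.append(line)
        }
    }
    return (headers, bodyLines.joined(separator: "\n"))
}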
Swedish Model
https://github.com/tobyspark/folk-rnn-webapp/pull/133
Feature: composer models have defaults for meter, mode and tempo
https://github.com/tobyspark/folk-rnn-webapp/pull/134
Feature: set unit note length within prime tokens, for models with L tokens
https://github.com/tobyspark/folk-rnn-webapp/pull/136
Feature: Headers in start ABC
https://github.com/tobyspark/folk-rnn-webapp/pull/138
diary | 15 jan 2019 | tagged: machine folk · research · code
for any given tune, how much activity surrounded it?
for any given session, what happened?
what are the usage trends of folkrnn.org and of themachinefolksession.org?
to answer these kinds of questions, enter stats, a django management command for processing the use data of the composer and archiver apps for insight / write-up in academic papers.
i used this to write the following, with input from bob and oded. it will surely be edited further for publication, but this is as it stands right now.
During the first 235 days of activity at folkrnn.org, 24562 tunes were generated by – our heuristics suggest – 5700 users. Activity for the first 18 weeks shows a median of 155 tunes generated weekly. In the subsequent 15 weeks to the time of writing, overall use increased, with a median of 665 tunes generated weekly. This period also features usage spikes. One week, correlating to an interview in Swedish media, shows 2.7x the median tunes generated. The largest, correlating to a mention in German media, shows an 18.4x increase. These results show that making our tool available to users of the web has translated into actual use, and that use is increasing. Further, media attention brings increased use, and this use is similarly engaged, judged by similar patterns of downloading MIDI and archiving tunes to themachinefolksession.org.
Of the fields available for users to influence the generation process on folkrnn.org, the temperature was used more often than the others (key, meter, initial ABC, and random seed). Perhaps this is because changing temperature results in more obviously dramatic changes in the generated material. Increasing the temperature from 1 to 2 will often yield tunes that do not sound traditional at all. If changes were made to the generate parameters, the frequency of the resulting tune being played, downloaded or archived increased from 0.78 to 0.87.
Over the same period since launch, themachinefolksession.org has seen 551 tunes contributed. Of these tunes, 82 have had further iterations contributed in the form of ‘settings’; the site currently hosts 92 settings in total. 69 tunes have live recordings contributed; the site currently hosts 64 recordings in total (a single performance may encompass many tunes). These results show around 100 concrete examples of machine-human co-creation have been documented.
Of the 551 contributed tunes, 406 were generated on, and archived from, folkrnn.org. Of these entirely machine-generated tunes, 32 have had human edits contributed; themachinefolksession.org currently hosts 37 settings of folkrnn generated tunes in total. These examples in particular demonstrate human iteration of, or inspiration by, machine-produced scores.
Further value of machine produced scores can be seen by the 30 registered users who have selected 136 tunes or settings as being noteworthy enough to add to their tunebooks. Per the algorithm used by the home page of themachinefolksession.org to surface ‘interesting’ tunes, “Why are you and your 5,599,881 parameters so hard to understand?” is the most noteworthy, with 4 settings and 5 recordings.
While these results are encouraging, most content-affecting activity on themachinefolksession.org has been from the administrators; co-author Sturm accounts for 70% of such activity. To motivate the use of the websites, we are experimenting with e.g. ‘tune of the month’, see above, and have organised a composition competition.
The composition competition was open to everyone but targeted primarily at music students. Submission included both a score for a set ensemble and an accompanying text describing how the composer used a folkrnn model in the composition of the piece. The judging panel – the first author was joined by Profs. Elaine Chew and Sageev Oore – considered the musical quality of the piece as well as the creative use of the model. The winning piece, Gwyl Werin by Derri Lewis, was performed by the New Music Players at a concert organised in partnership with the 2018 O’Reilly AI Conference in London. Lewis said he didn’t want to be ‘too picky’ about the tunes; rather, he selected a tune to work from after only a few trials. He describes using the tune as a tone row, generating harmonic, melodic and motivic material out of it, though the tune itself, as generated by the system, does not appear directly in the piece.
diary | 11 jan 2019 | tagged: machine folk · research · code
the community site. straight-up django (“the web framework for perfectionists with deadlines”), but there’s a lot going on.
Tweak: Search includes author names
https://github.com/tobyspark/folk-rnn-webapp/pull/109
Feature: Tune of the month
https://github.com/tobyspark/folk-rnn-webapp/pull/92
Tweaks: Tempo, Attributions, Add settings, ABC
https://github.com/tobyspark/folk-rnn-webapp/pull/90
Feature: Tunes page has order by added or popularity
https://github.com/tobyspark/folk-rnn-webapp/pull/85
Tweaks and fixes
https://github.com/tobyspark/folk-rnn-webapp/pull/73
Feature: Surface interesting tunes
https://github.com/tobyspark/folk-rnn-webapp/pull/64
Feature: Search
https://github.com/tobyspark/folk-rnn-webapp/pull/58
Feature: Tunebook
https://github.com/tobyspark/folk-rnn-webapp/pull/57
Feature: User files upload and serve configuration
https://github.com/tobyspark/folk-rnn-webapp/pull/32
Feature: Archiver mk.ii
https://github.com/tobyspark/folk-rnn-webapp/pull/25
https://github.com/tobyspark/folk-rnn-webapp/pull/37
https://github.com/tobyspark/folk-rnn-webapp/pull/50
diary | 29 jul 2018 | tagged: machine folk · research · code
not just a lick of paint: secure connection, a nifty piece of UX around new vs. deterministic tunes, and a rat-hole of automated backups.
Feature: HTTPS
https://github.com/tobyspark/folk-rnn-webapp/pull/24
Feature: Composer styling second pass
https://github.com/tobyspark/folk-rnn-webapp/pull/23
Feature: abcjs 5.0
https://github.com/tobyspark/folk-rnn-webapp/pull/21
Feature: Backup
https://github.com/tobyspark/folk-rnn-webapp/pull/20
Feature: Auto-seed
https://github.com/tobyspark/folk-rnn-webapp/pull/19
diary | 16 may 2018 | tagged: machine folk · code
after an intense week, the drawing interactions app is ready to be unveiled. the iPad Pro and Apple Pencil turn out to be amazing hardware, and i’ve really enjoyed the deep dive into the kind of precise, expressive, powerful code that’s required for this kind of iOS app.
i got there. huzzah! that last feature, drawing-through-time, i’m particularly pleased with. of course, there are bugs and plenty of things it doesn’t do. but it’s demoable, and that’s what we need for tomorrow’s workshop.
diary | 29 mar 2018 | tagged: drawing interactions · research · code · ios
first challenge for the drawing interactions prototype app is to get ‘hands-on with time’. what does that mean? well, clearly controlling playback is key when analysing video data. but that also frames the problem in an unhelpful way, where playback is what’s desired. rather the analyst’s task is really to see actions-through-time.
pragmatically, when i read or hear discussions of video data analysis, working frame-by-frame comes up time and time again, along with repeatedly studying the same tiny snippets. but if i think of the ‘gold standard’ of controlling video playback – the jog-shuttle controllers of older analogue equipment, or the ‘J-K-L’ of modern editing software – they don’t really address those needs.
so what might? i’ve been thinking about the combination of scrolling filmstrips and touch interfaces for years, promising direct manipulation of video. also, in that documentary the filmstrips are not just user-interface elements for the composer, but displays for the audience. such an interface might get an analyst ‘hands on with time’, and might better serve a watching audience. this is no small point, as the analysis is often done in groups, during ‘data sessions’. others would be able to tap the timeline for control – rather than one person owning the mouse – and all present would have an easy understanding of the flow of time as the app is used.
of course, maybe this is all post-hoc rationalisation. i’ve been wanting to code this kind of filmstrip control up for years, and now i have.
a little postscript: that panasonic jog-shuttle controller was amazing. the physical control had all the right haptics, but there was also something about the physical constraints of the tape playback system. you never lost track of where you were, as the tape came to a stop and started speeding back. time had momentum. so should this.
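a sketch of what that could mean in code – a scrubber whose playhead keeps tape-like momentum after the finger lifts; the constants are guesses, to tune by feel –

import Foundation

final class MomentumScrubber {
    private(set) var time: TimeInterval = 0
    private var velocity: Double = 0   // seconds of media per second
    private let friction = 0.92        // per-frame decay

    // while the finger is down: move the playhead directly, remember the rate.
    func scrub(by deltaSeconds: Double) {
        time += deltaSeconds
        velocity = deltaSeconds * 60   // assuming ~60 updates per second
    }

    // once per display frame after the finger lifts: coast to a stop.
    func tick() {
        guard abs(velocity) > 0.01 else { velocity = 0; return }
        time += velocity / 60
        velocity *= friction
    }
}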
diary | 25 mar 2018 | tagged: drawing interactions · research · code · ios
server-side, straight-up django was the quickest way to get something working. but a fluid composition environment required no page refreshes and lots of client-side smarts. so, with much javascripting and websocketing, the composer becomes a single-page app. the tunes can pile up, you can filter out the ones you don’t like, you can copy and paste from a generated tune to prime the next, etc…
Feature: Wildcard in Key, Meter, Initial ABC
https://github.com/tobyspark/folk-rnn-webapp/pull/18
Feature: Logging
https://github.com/tobyspark/folk-rnn-webapp/pull/17
Feature: State and URL management
https://github.com/tobyspark/folk-rnn-webapp/pull/16
Tweaks: ABC validation and UI
https://github.com/tobyspark/folk-rnn-webapp/pull/15
Feature: Single-page app
https://github.com/tobyspark/folk-rnn-webapp/pull/14
diary | 19 mar 2018 | tagged: machine folk · code | downloads: folkrnn_singlepageapp.mov
now we’re talking: the folk-rnn webapp finally feels a proper website: it’s styled, the UI has instant feedback of e.g. invalid ABC being input, and the rnn generation appears note-by-note as it’s generated. there’s a definite instant-hit of satisfaction in pressing ‘go’ and seeing it stream across the page.
or rather, one of the apps feels a proper website, as it’s actually two websites now. the composer app runs folkrnn.org, and is focussed entirely on generating tunes. the archiver app runs themachinefolksession.org, and is focussed on the community aspect: archiving, human edits, and so on.
Fixes, refactors and tweaks for PR 9 and 10
https://github.com/tobyspark/folk-rnn-webapp/pull/12
Feature: ABC generation displayed live
https://github.com/tobyspark/folk-rnn-webapp/pull/10
Feature: Multiple models with compose client side logic
https://github.com/tobyspark/folk-rnn-webapp/pull/9
Feature: Composer styling first pass
https://github.com/tobyspark/folk-rnn-webapp/pull/8
Feature: Archive as app (i.e. themachinefolksession.org)
https://github.com/tobyspark/folk-rnn-webapp/pull/7
diary | 26 feb 2018 | tagged: machine folk · code | downloads: folkrnn_live_abc.mov
the web-app adaptation of the folk-rnn command line tool is now online, and generating tunes 50x faster – from ~1min to 1-2s. still bare-bones, an ongoing project, but at least playable with.
Feature: Deploy
https://github.com/tobyspark/folk-rnn-webapp/pull/6
Feature: Async architecture, with folk-rnn-fast now a worker within the app
https://github.com/tobyspark/folk-rnn-webapp/pull/5
diary | 29 jan 2018 | tagged: machine folk · research · code
the web-app adaptation of the folk-rnn command line tool now has the start of community and research features. you can archive generated tunes you like, tweak them and save the results as settings of the generated tune, and comment. plus dataset export.
Redesign: Tunes with Settings. Feature: ABC validation
https://github.com/tobyspark/folk-rnn-webapp/pull/4
Feature: Dataset Export
https://github.com/tobyspark/folk-rnn-webapp/pull/3
Feature: Publish to Archive, Comment on Archived Tunes.
https://github.com/tobyspark/folk-rnn-webapp/pull/2
Feature: Swap between RNN original composition and your own editable version.
https://github.com/tobyspark/folk-rnn-webapp/pull/1
diary | 14 dec 2017 | tagged: machine folk · code
off the digital anvil: a bare-bones web-app adaptation of the folk-rnn command line tool, the first step in making it a tool anyone can use. happily, folk-rnn is written in python – good in and of itself as far as i’m concerned – which makes using the django web framework a no-brainer.
- created a managed ubuntu virtual machine.
- wrangled clashing folk-rnn dependencies.
- refactored the folk-rnn code to expose the tune generation functionality through an API.
- packaged folk-rnn for deployment.
- created a basic django webapp:
- UI to allow user to change the rnn parameters and hit go.
- UI to show a generated tune in staff notation.
- URL scheme that can show previously generated tunes.
- folk-rnn-task process that polls the database (python2, as per folk-rnn).
- unit tests.
- functional test with headless chrome test rig.
diary | 15 nov 2017 | tagged: machine folk · research · code
a screenrunner client wanted an animating tiled layout. it’s surprisingly non-trivial to code, at least if you want to go beyond hard-coding a few grid layouts. thankfully, the problem is academically interesting too, and lo! there’s a paper on spatial packing of rectangles, complete with MaxRectsBinPack algorithm and c++ implementation. respect to jukka jylänki.
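the core of the algorithm fits in a page; here’s a drastically simplified, guillotine-style sketch in swift – jylänki’s MaxRectsBinPack proper keeps overlapping free rectangles and prunes them, which packs tighter –

import CoreGraphics

struct RectPacker {
    private var freeRects: [CGRect]

    init(binSize: CGSize) {
        freeRects = [CGRect(origin: .zero, size: binSize)]
    }

    // place a rectangle of the given size, or return nil if nothing fits.
    mutating func insert(_ size: CGSize) -> CGRect? {
        // best-area-fit: the free rectangle that wastes the least area.
        guard let index = freeRects.indices
            .filter({ freeRects[$0].width >= size.width && freeRects[$0].height >= size.height })
            .min(by: { freeRects[$0].width * freeRects[$0].height
                     < freeRects[$1].width * freeRects[$1].height })
        else { return nil }
        let free = freeRects.remove(at: index)
        // split the remainder into a strip to the right and a strip above.
        freeRects.append(CGRect(x: free.minX + size.width, y: free.minY,
                                width: free.width - size.width, height: size.height))
        freeRects.append(CGRect(x: free.minX, y: free.minY + size.height,
                                width: free.width, height: free.height - size.height))
        freeRects.removeAll { $0.isEmpty }
        return CGRect(origin: free.origin, size: size)
    }
}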
getting this working for me was time worth investing, and i’ve released the results: a quartz composer patch and animation technique. it’s up on github, and is something best seen in action, so check the quick demo video on vimeo.
diary | 18 nov 2013 | tagged: code · mac os · quartz composer · titler · vj
friends who made a cinema camera out of industrial cameras are getting excited about gig-e vision for live video work. as am i.
more on this will come. in the meantime, hospitality at brixton academy was round the corner, had a friend running visuals, and i’d just got our plug-in running at 60fps.
while i’m here, justin has done a fine job with the hospitality staging - the massive ‘h’ fixture is proper class, and the visuals were perfectly designed for a judicious minimum of LED panels.
diary | 27 sep 2013 | tagged: code · mac os · video-in · quartz composer · gev · vj
a major part of the analysis for comedy lab is manually labelling what is happening when in the recordings. for instance, whether an audience member is laughing or not – for each audience member, throughout the performance. all in all, this adds up to a lot of work.
for this labelling to be accurate, let alone simply to get through it all, the interface of the video annotation software needs to be responsive - you are playing with time, in realtime. i was having such a bad time with elan[1] that the least bad option got to be writing my own simple annotator: all it need be is a responsive video player and a bag of keyboard shortcuts that generates a text document of annotations and times. luckily, there was an open-source objective-c / cocoa annotator out there, and so instead i’ve forked the code and hacked in the features i needed. never have i been so glad to be able to write native os x applications.
if you need features such as annotations that come from a controlled vocabulary and are continuous (i.e. non-overlapping in time), or a workflow where annotation can be done in one pass with one hand on the keyboard and one on a scroll-gesture mouse/trackpad, the application is zipped and attached to this post (tested on 10.8, should work on earlier).
if you are a cocoa developer with similar needs, the code is now on github and i can give some pointers if needed.
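a sketch of the data model those features imply – each key maps to a label from the vocabulary; pressing a key at the playhead closes the previous annotation and opens the next, so annotations are continuous by construction. names are mine, not the forked annotator’s –

import Foundation

struct Annotation {
    let label: String
    let start: TimeInterval
    var end: TimeInterval
}

final class AnnotationTrack {
    private(set) var annotations: [Annotation] = []

    // one-pass labelling: a key press ends the current annotation and starts the next.
    func keyPressed(label: String, at time: TimeInterval) {
        if var last = annotations.last {
            last.end = time
            annotations[annotations.count - 1] = last
        }
        annotations.append(Annotation(label: label, start: time, end: time))
    }

    // the text document of annotations and times described above.
    func export() -> String {
        annotations.map { "\($0.start)\t\($0.end)\t\($0.label)" }.joined(separator: "\n")
    }
}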
diary | 20 aug 2013 | tagged: code · mac os · comedy lab · phd · research | downloads: vcodevdata.zip
it doesn’t look like much at the moment, but this is the first step into my research group at university doing a study on real festival audiences at real festivals. i’m developing an interactive map / timetable app, which will have quite some interesting features and opt-ins by the time we’re done. the promoters we’ve been talking to already have an interactive map of sorts, i’ve already done some interesting things visualising live events, and of course there’s my phd on audiences and interaction.
diary | 10 aug 2012 | tagged: code · open frameworks · ios · imc at festivals · vj · liveness · research · qmat
things are building to a crescendo. as promised the software that runs the controller will be open-source, and so here it is being released.
http://mbed.org/users/tobyspark/code/SPK-DVIMXR/
notably -
i’ve also been corralling the OSC code available for mbed into a library: http://mbed.org/users/tobyspark/code/OSC/
for history’s sake, and perhaps to help any hackers, attached is a zip of the arduino code i had before the leap to mbed was made, v07 to today’s v18. none of the interface goodness, but it has the fast serial communication technique i arrived at, along with keying etc.
diary | 02 aug 2012 | tagged: code · release · dvi-mixer · vj | downloads: spk_dvimxr_v07_arduino_final.zip
at the heart of the brain was the increasingly inappropriately named *spark titler, collating all the media and channelling it to the screen. it runs the screen, and gives just what you need to be responsive to the moment without breaking the visual illusion. so… *spark screenrunner?
whatever its grown-up name is, it monitored a fileshare for photos incoming from the caption-shot camera, illustrations and data-vis from ciaran and caroline’s laptops, listened to twitter accounts and hashtags, and, wonderfully, got updates in real-time from convotate, stef’s conversation annotation web-app. a technical shout-out here to pusher, the HTML5 websocket-powered realtime messaging service, and to luke redpath’s objective-c library. and via the venue’s many-input HD vision mixer and a quartz composer patch or two more, we had treated feeds from above ciaran’s illustration pad, photoshop screen and whatnot.
it might be that you have to do this kind of job to grok the need, but i really think there’s something in *spark screenrunner, whether it’s just titling and transitioning between two presenters’ powerpoints or this kind of high-end craziness.
diary | 09 feb 2012 | tagged: code · mac os · quartz composer · titler · *spark · vj · festival of ideas · video-out · engaging audiences
sane control of the media and scenography needs to be partnered with the animation mechanics to handle it all gracefully. luckily, that’s what i do – and what tools like quartz composer enable – and i had the best materials to work with in the form of made-by’s brand video. it’s great. watch it, and you’ll also see how perfect it was to be remade into a never-ending animation with dynamic content interspersed with the hand-animated elements.
best of all, now i have the interface and back-end largely worked out i can concentrate on creating bespoke animation for future gigs: everybody wins.
diary | 14 oct 2011 | tagged: code · mac os · quartz composer · titler · *spark · vj · MADE-BY
how did joanna run the screen? with *spark titler v3: no longer a now-and-next titler, more the means for a live brand video. into an animation template go tweets, titles and all sorts of media, and the user is presented with a sane way of wrangling that media and controlling the output.
the app as a whole is mac-native in the best of ways, with the behaviours a naive user might expect. i’m especially proud of the interface, which takes the standard elements and extends them where necessary[1], all to be used without fear of killing the output or screwing up the content.
suffice to say i now know a lot more about subclassing cocoa views than i used to: say hello SPKTableView and SPKArrayController ↩︎
diary | 14 oct 2011 | tagged: code · mac os · quartz composer · titler · *spark · vj · MADE-BY
and how did those live graphics make it to the screen? i sat down and took the idea of *spark titler from sheep music and remade it as a fully fledged cocoa+quartz composer application. the idea being it can’t muck up: animation designed to gracefully transfer from state to state, participant names pre-filled in a drop-down menu, no mouse cursors on the output, text fields that commit their edits on pressing take… the little details that make-or-break a live application. oh - and it exports its title animations as quicktimes for integration with playback pro.
diary | 04 sep 2011 | tagged: code · mac os · video-out · quartz composer · titler · *spark · just tell the truth · vj
there is now a mac pro in china running DFD, something that has been consuming my time for a while now. the roadshow d-fuse have been developing is our first big foray into automated dynamic content, lighting and audience interaction, and so without us being there for every gig holding it down with a hacked-up vj setup, we needed something that you could just power on and the show would start. and so d-fuse/dynamic was hatched, a quicktime and quartz composer sequencer which reads in presets and its input/output functionality from a folder we can remotely update, and essentially just presents a “next” button to the on-site crew.
what i think is particularly novel about DFD is that it was designed to output a consistent framerate, rendering slightly ahead of time so the fluctuations in QC and QT frame rendering are buffered out. i’m not sure if the effort/reward of this was worth it, but it will be an interesting code base to come back to and re-evaluate.
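the shape of that render-ahead buffering, sketched generically – the producer renders into a short queue, the display clock drains it, and per-frame jitter disappears into the slack. DFD’s actual code was QC-era objective-c, not this –

import Foundation

final class FrameBuffer<Frame> {
    private var queue: [Frame] = []
    private let depth: Int
    private let lock = NSLock()

    init(depth: Int = 3) { self.depth = depth }

    // render thread: keep the queue topped up, rendering slightly ahead of time.
    func produce(_ frame: Frame) -> Bool {
        lock.lock(); defer { lock.unlock() }
        guard queue.count < depth else { return false } // full – try again next tick
        queue.append(frame)
        return true
    }

    // display clock: take the next frame; nil means we under-ran.
    func nextFrame() -> Frame? {
        lock.lock(); defer { lock.unlock() }
        return queue.isEmpty ? nil : queue.removeFirst()
    }
}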
for the roadshow, it is playing out any number of four sources at 1280x576 – including the generative, audio-reactive core of the show, controlled by iPads in the audience and LED balls on stage – sending the central 1024x576 to the main screen, driving 10 LED 72x1px strips from the remaining 576x128px on either side, and sending DMX back out to the stage lighting and LED balls.
big thanks to vade, luma beamerz, and memo for helping me one way or the other grok anti-aliased framebuffer rendering.
having spent much time i didn’t have trying to get 64bit QTX giving me openGL frames at QuickTime 7 efficiencies, life-saving thanks also to vade and tom for v002 movie player 2.0, for which there is a patch back with them giving it the ability to play the QT audio to a specific output.
lastly, a perennial thanks to kineme, couldn’t have gone this direction without knowing their DMX, Axis Camera, and audio patches were out there.
i’m not sure what to do with the code at the moment. it was made as a generic platform, but its current state is still very much tied to that specific project. or rather, the inevitable last minute hacking as it hit china needs to be straightened out. it has been made and funded as a tool for d-fuse to build on, so that needs to be taken into account too. in short, if anybody has a concrete need for such a thing, get in touch and we’ll see what could be done.
diary | 27 oct 2010 | tagged: quartz composer · vj · video-out · code · mac os · dfuse
happiness is twelve hex bytes, generated by a pocketable custom LED fixture on detecting a bounce, transmitting that via xBee, receiving into the computer via RS232, being parsed correctly, outputting into a QC comp, doing a dance, and commanding back to the fixtures via Artnet via DMX via xBee.
diary | 16 oct 2010 | tagged: quartz composer · vj · code · mac os · i/o · dfuse · embedded
there is a big d-fuse production in the works, where the brief rather wonderfully was emphasising interaction with and within the audience. as briefs often do, things have changed a lot since the heady time of working on and winning the pitch, but the core of it is still generative graphics and punter control from the club floor. and so here, courtesy of dr.mo’s crack team of coders is an in-development iPad app talking over WiFi to a QC plugin, where my two fingers-as-proxies-for-collaborating-audience-members are sketching locally and that is being incorporated on the club’s screens.
diary | 06 oct 2010 | tagged: quartz composer · liveness · dfuse · vj · code · ios · mac os · engaging audiences
tresor backstage, 1am, get to the final compile of here+now for the 3am performance. there is never enough time in this world, and for experimental projects on the side doubly so. the dream of just hanging out at a festival…
diary | 11 jun 2010 | tagged: *spark · quartz composer · vj · code · mac os · video-in · visual berlin · herenow
and if chase & status wasn’t enough to be getting on with, there was also a long awaited project with moving brands, of which neither i nor they can talk about beyond saying i sat behind a mac and xcode for a week.
diary | 22 sep 2009 | tagged: vj · code · mac os · moving brands
…and now having used it in anger, here we have v1.2.
diary | 11 may 2009 | tagged: quartz composer · live illustration · vj · code · mac os · kinetxt · release | downloads: SPK-Calligraphy-v1.2.zip
…and here is the bugfix release.
diary | 07 may 2009 | tagged: *spark · quartz composer · live illustration · vj · code · mac os · release | downloads: SPK-Calligraphy-v1.1.zip
KineTXT has spurred many custom plug-ins, generally either esoteric or usurped by kineme or the next major release of QC. the latest, however, probably deserves to see the wider light of day, and so here is a snapshot of it having just passed a notional ‘v1.0’. it’s two patches designed to capture and render handwriting and doodles from a tablet, but they should be pretty useful to anyone who wishes for some form of digital graffiti in their QC compositions.
if you want anti-aliasing, you’ll need to leave the QC app behind unfortunately, but if you can run without the patching editor window it’s just three lines of code to add to the qc player sample application and voila: this plugin and all 3D geometry become anti-aliased. vade worked it all out and outlines the territory here: http://abstrakt.vade.info/?p=186.
if you want different nibs, pen-on-paper-like textures or suchlike… well i have my needs and ideas, but the source is there. share and share alike!
the plug-in is released under gplv3, and is attached below.
diary | 28 apr 2009 | tagged: *spark · quartz composer · mac os · vj · code · release | downloads: SPK-Calligraphy-v1.0.zip
a little sneak peek of a quartz composer plug-in in development: spk-calligraphy, a set of patches for recording and playing back 2d strokes. the basic patch is equivalent to the kineme GL line structure patch, but draws the line as if it were a chisel nib at 45° and with a flow fade-out. the other two are what is going to enable a big part of the next kinetxt development: handwriting to go alongside the rendered text.
diary | 27 mar 2009 | tagged: *spark · quartz composer · live illustration · vj · code · mac os · kinetxt
and here’s another, in a slightly less formal environment. suffice to say, the tv feature is used a lot more here!
diary | 21 mar 2009 | tagged: quartz composer · code · mac os · mpc screen
i’ve been somewhat remiss in not posting up the finished mpc screen, but thanks to a little cover work there i had the chance to go round and take some photos of it out in the wild. so here is the one i’ll always remember most, as it was the first screen to be deployed.
in its menu it has around thirty 1080P showreels and vfx breakdowns of the various film and commercial work, along with the ability to browse documentation pdfs, the mpc client and public-facing web sites, and 15 channels of tv to flick between. for anything that isn’t suited to that branded, full-screen environment, the last option flips the screen into a mac desktop with finder etc., complete with apple-store-style desktop buttons giving quick access to all the apps that might be wanted in a meeting room. best of all, it will time out back to the full-screen carousel menu, and then time out to its own screensaver, so as you walk around the building the screens are always showing something visually pleasing and on-brand.
it’s been quite a project, and that doesn’t even touch the system administration side.
diary | 21 mar 2009 | tagged: code · mac os · quartz composer · mpc screen
it might not be the most exciting photo, but this desk has seen three weeks coding a big project for a soho production house. “would you like to make something akin to ‘front row’ to aggregate the content and services at this facility, to fit on all the 50” plasmas we’re about to get in the building?” “yes please.”
diary | 10 oct 2008 | tagged: code · mac os · quartz composer · mpc screen
being able to code custom plug-ins is really making quartz composer so much better: not just giving the ability to make different types of ‘teh pretty’, but letting qc’s patching world do what it’s best at – fiddling with views – and leaving the coordination and control aspects to a dedicated lump of code, like a brain sitting in the middle of the patch.
long story short, this kinetxt installation is seeming like a case study in the object-orientated / model-view-controller way.
diary | 25 feb 2008 | tagged: *spark · quartz composer · vj · code · mac os
having worked through the hillegass cocoa book, it’s time to start putting that to good use. and project number one was always going to be one of the big glaring omissions in quartz composer to my mind: a means of animating a string on a per-character basis.
if you want to compete with after-effects, you need to be able to produce the various type animations directors are used to, and you need to do so at a decent framerate. to animate, say, the entry of a character onto the screen, you would create the animation for one character and then iterate that operation along the string. the problem is, rendering each glyph inside the iterator is both massively expensive and massively redundant, but that’s the only approach qc allows, hacks on the back of hacks apart. a much better approach would be to have a single patch that takes a string and produces a data glob of rendered characters and their sizing and spacing information, firing off just once at the beginning and feeding the result to the animation iterator: at which point you’re just moving sprites around and the gpu barely notices.
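in modern swift, the render-once idea looks something like this sketch – the original patch was objective-c against QC’s plugin API, and a real version needs kerning and ligature handling –

import UIKit

struct GlyphSprite {
    let image: UIImage
    let advance: CGFloat
}

// rasterise each character and record its advance exactly once; per-frame
// animation then just repositions the cached sprites.
func renderGlyphs(for string: String, font: UIFont) -> [GlyphSprite] {
    string.map { character in
        let s = String(character) as NSString
        let attributes: [NSAttributedString.Key: Any] = [.font: font]
        let size = s.size(withAttributes: attributes)
        let image = UIGraphicsImageRenderer(size: size).image { _ in
            s.draw(at: .zero, withAttributes: attributes)
        }
        return GlyphSprite(image: image, advance: size.width)
    }
}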
the patch is released under gplv3, and is attached below.
a massive shout to the kineme duo for leading the way with their custom plug-ins and general all-round heroic qualities. in particular their ‘structure tools’ patches were the enabler for those early text sequencing experiments.
diary | 15 feb 2008 | tagged: *spark · quartz composer · vj · code · mac os · release | downloads: SPK-StringToImageStructure-1.0.zip
as shown in the ‘pun me this’ entry, the *spark titler was used in nascent form at sheep music, and the promise to tidy-up and release as open-source software has been followed through. so, please find attached: sparktitler-v1.1.zip.
the titler’s interface allows you to take between two sets of title/subtitle, with the choice of four backgrounds: black / green / a quicktime movie or a folder of images. the output window will automatically go full-screen on the second monitor if it detects one is available at launch, otherwise it will remain a resizable conventional window.
it is released with the intention that it can be reused for other events without changing a single line of code: you can design the animation and incorporate quicktime movies in the design by editing the ‘GFX’ macro in the quartz composer patch, and it’s a matter of drag-and-drop to replace the logo in the interface.
for those who wish to dig deeper and improve the whole package, the source is released under GPL. the xcode project provides an adequate shell for the patch, implemented with just two cocoa classes and a nib file, complete with bindings between the qc patch and the interface window. the classes are required to tell the quartz composer patch where to find the resource directory of the application’s bundle (necessary for any ‘image with movie’ nodes), and to subclass the output window so it is sent borderless to the second display if appropriate. features apart, there is certainly room for improvement: an ‘open file’ dialog instead of the raw text fields would be good, likewise solving the text field update issue.
if you do use it, let us know: operator@tobyz.net
diary | 30 jul 2007 | tagged: titler · release · *spark · vj · code · mac os · quartz composer | downloads: sparktitler-v1.1.zip
the latest visuals technology development to come off the *spark anvil is a mac-native titler application, made by wrapping a quartz composer patch with some fullscreen code and interface builder bindings. props to roger bolton of quartonian for the guts of the fullscreen xcode project, shared under gpl so expect to see the titler soon once it’s been tidied up.
truly came into its own during the ups and downs of the first day of the festival, where a huge rainfall threatened to wash away half the site. we could upload videos and images taken moments before, and pun the titles out till they got beyond baaaaaaaad. ‘shave yourself’ still my favourite.
diary | 20 jul 2007 | tagged: quartz composer · titler · vj · sheep music · code · mac os
it’s a strange day when you look at what you and your lady are reading for fun… and find they’re both developer books. suffice to say, programming in objective-c isn’t quite as much fun as the developer’s bible, but then again, when the summer has included bulgakov’s the master and margarita, nothing can quite compare… satan kicking up a ruckus in atheist 1930s moscow. with a talking, cigar-smoking black cat. and historical passages involving pontius pilate.
diary | 25 jul 2006 | tagged: errata of life · code