my favourite bit of tekton is of course what nobody sees, or for that matter should see. this is the view from behind, and while yes, it’s the maker’s mark, what gets me is the rainbow cable against soft yet rectilinear aluminium. something quite pure about it.
made gallery opening night. just. the video the gallery commissioned is great: watch it.
image credit: still from joey kennedy’s video, linked above.
it’s on. much gazing.
d-fuse in conference. it looks done, right?
photo credit: mandy fiernans
PSUs installed. belt installed. motor and driver installed. ribbon hung and wired. controller electronics installed. LED bars mounted on and hooked up.
power on. ssh in over wifi. run test programs. no explosions.
so, with hours rather than days to go, finally get to sit in front of the machine and start to think of it as art. although this is not finessing: the motors are still not behaving, not all eight led strips are working, i’m still working with test sequences. soldering iron and ssh connections in hand, time to pull the rabbit out of the hat.
the upright bars should have been in place on my arrival; that was the pre-requisite for me being able to build the installation. as things were, the now partially pre-assembled units were only going up two days later, with only two days to go. this wasn’t good, but it did at least force some pleasant down-time where i got to do some things i’d wanted to do for ages, but that motor issues had kept me from. so here is the oled finally looking nice, and not impairing the machine running at 60fps. drawing any text using python’s imaging library was way too slow, but with a little inspection of the adafruit driver library, i saw i could swap out the image buffer and flush that to the screen, which would be near-instantaneous. so with a little pre-rendering, the oled could now update while the controller waits for the next frame’s start time. controller code, now at its ~9th revision.
pre-render before starting the animation sequence, capturing each rendered buffer instead of sending the buffer:
def render_buffer_cache(prefix):
    for i in range(10):
        draw.rectangle((0, 0, oled.width, oled.height), outline=0, fill=0)
        draw.text((0, 0), prefix + str(i), font=font, fill=255)
        if host_upsidedown:
            oled.image(image.rotate(180))
        else:
            oled.image(image)
        buffer_cache[i] = list(oled._buffer)
for each frame of animation, point the buffer at the next pre-rendered one and send the contents to the display:
oled._buffer = buffer_cache[frame_index % 10]
oled.display()
finally, two uprights and the two slide assemblies installed. i am about to spend a lot of time up that ladder.
into the gallery at the crack of dawn; out, blinking, some time later. look behind us: waa! that massive billboard wasn’t there earlier. shame the print has over-saturated the beauty out of paul’s hero image, but so it goes; out of our control.
in wood st. galleries. inevitable delays. machine yet to be proven. hours ticking. and so it starts.
hotel room view for the d-fuse show install. welcome to pittsburgh, quite the view.
of course, there was a whole lot of pain before it so neatly packed up…
the day before, one of the new motors arrives. not the best timing, i’ll admit. this would have happened at least a week earlier, but it was hard to verify the suitability of the motor while the motor control wasn’t reliable. we seemed to be at the edge of the 3nm motor’s ability: it could do the sinusoidal cycle video’d above, but that took a few iterations of settings, and was slow and only travelling half the range. the 3nm choice was made by calculation and looking at the torque curves, reassured by it having the same shaft size as the pulley it mates with… but this was out, an 8nm was in.
and with a new motor, new mounting issues. things are painful for a while. here’s the motor mating plate that never happened, as after various false starts guaranteed-good bested over-engineered and we used the item 50nm coupling and assembly, with some machining from gary at zapp automation.
nonetheless, the machine needs to ship to the gallery in the us, and so it’s disassembled and packs down into two 2.5m long tubes and two peli cases. pretty compact, and standard sizes for ups / fedex / etc.
thanks to trotec for the laser cutter at south london makerspace: that perfectly sized and zip-tie slotted clear piece of perspex replaces a bit of wood i had to hand when i first mounted the ribbon cable. in-situ, it completes my favourite bit of the whole machine.
aside from the ongoing motor issues, the last thing to be done in london was upgrading the test arduino used to drive the motors to two production units, complete with 12v boost circuitry to run the induction sensor used to detect the half-way point of the belt. thanks to artists & engineers who made these white custom arduinos available to us.
arduino loops timed to the microsecond for smooth motion weren’t the only performance-critical bit of code required. the animation sequence was playing far too slowly – nowhere near the 60fps the animations were authored for – and now we could see it moving, it was clear it needed to be that fast and smooth to play with your perception as you looked on. updating the oled display and addressing the led pixels was taking too long. i knew you could drive this kind of oled crazy-fast as i’d written my own library from scratch for the mixer hardware, and i knew there was a lower-level way of driving the led strips. so: abandoned updating the oled for the time being, and advanced the controller python code to construct a bytearray per strip rather than set each led individually… and voila! naked-eye illusion. code. tho’, somewhat by definition, the video doesn’t quite capture that (tho’ watch the leds as the bar accelerates up…).
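for illustration, a minimal sketch of what “a bytearray per strip” can look like, assuming apa102-style framing (start frame, per-led brightness byte, then colour); the 0xE0 header and the `spi` name are assumptions, not the project’s actual code:

```python
# minimal sketch of "a bytearray per strip": build one apa102-style
# frame for a whole strip, then push it out in a single bus write.
# the 0xE0 brightness framing and the `spi` name are assumptions,
# not the project's actual code.

def frame_for_strip(pixels, brightness=31):
    """pixels: list of (r, g, b) tuples for one strip."""
    buf = bytearray(4)                          # start frame: four zero bytes
    for r, g, b in pixels:
        buf.append(0xE0 | (brightness & 0x1F))  # led header + 5-bit brightness
        buf += bytes((b, g, r))                 # apa102 wants b, g, r order
    buf += b"\xff" * ((len(pixels) + 15) // 16 or 1)  # end-frame clock bytes
    return buf

# one frame build and one write per strip, instead of a call per led:
# spi.write(frame_for_strip(strip_pixels))
```

the win is simply moving the per-led work out of python call overhead and into one contiguous buffer handed to the bus in a single write.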
what the above skips over is that if an arduino is now going to sit by the stepper and drive it, the main raspberry pi controller now needs to control the arduino, not the stepper drive’s pulse pins. and so, perhaps inevitably, the custom board i made to extend the pi gets a hack to disconnect two direct control lines and rewire them to the pi’s hardware serial pins. at least the pcb design was good, it’s just that the spec changed!
to get this working, the arduino just interpreted the last byte received as a 0-255 value and scaled this up to the step range, around 20,000. that was pretty coarse and certainly less than the precision of the float value supplied in the animation sequence, and so while the machine was in transit i developed a protocol to pack an integer that could represent that kind of number into a series of bytes, keeping things as efficient as possible.
in python, on the controller: from a single byte, coarse and with no messaging capability –
to three bytes each with two bit command type and six bit payload –
# command 01 = sequence run vpos start
vpos_steps = int(meta * drive_max_steps)
arduinoserial.write(chr(0x40 + ((vpos_steps >> 12) & 0x3f))
                    + chr((vpos_steps >> 6) & 0x3f)
                    + chr(vpos_steps & 0x3f))
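to make the framing concrete, a round-trip sketch in python 3 bytes rather than the python 2 chr() of the original; the names are illustrative:

```python
# round-trip sketch of the 2-bit-command / 6-bit-payload framing above,
# in python 3 bytes rather than the python 2 chr() of the original.

CMD_SEQUENCE_RUN = 0x40   # command 01 in the top two bits

def pack_vpos(vpos_steps):
    """an 18-bit step position into three bytes, command bits on the first."""
    return bytes((CMD_SEQUENCE_RUN | ((vpos_steps >> 12) & 0x3F),
                  (vpos_steps >> 6) & 0x3F,
                  vpos_steps & 0x3F))

def unpack_vpos(three_bytes):
    """the arduino-side decode: strip the command bits, reassemble."""
    hi, mid, lo = three_bytes
    return ((hi & 0x3F) << 12) | ((mid & 0x3F) << 6) | (lo & 0x3F)

# a ~20,000-step range fits comfortably in the 18-bit payload:
# unpack_vpos(pack_vpos(19999)) == 19999
```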
oh, did i mention there’s two of everything? the model is actually two floor-ceiling bars, each with the rails of a slide bar, a carriage running on them, and an led bar cantilevered from the carriage. being way over schedule and needing to ship the model to the gallery in the us, here is the excellent david meckin helping to wire up led bar #2.
in the midst of those debug wires was a logic analyser that hooks up to a desktop app. best thing ever – thanks again to ninja friend Arron Smith. using the logic probe to examine the control signals, i stripped down the control script to create a constant turn and the result is not good. stepper pulses, top: looks even. stepper pulses, middle: hell no. that’s just a little later in the same sample.
no wonder the motor doesn’t have smooth motion, with the shuddering action making horrible noises and vibrations through the structure. realisation one is that it’s not possible to get the timing precision needed to reach the pulse frequency that drives the motor at a decent speed, and realisation two is that the timing accuracy is just not there. running this machine from a python script on a raspberry pi with its stock linux installation was an experiment, and this is where “let’s try the future” fell down. simply, it’s not a realtime environment. however, a bare-bones sketch on an arduino, flipping a digital out between delayMicroseconds() calls of 5µs and up, did work beautifully. somehow a middle-ground had to be reached.
the first stage of that was to simulate the 60fps commands of the driving animation while being a stand-alone program. a sin calculation also takes ~120µs on an arduino, considerably longer than the minimum pulse time of around 5-10µs. the beautiful step signal shown bottom is the output of calculating sin(time) every 1/60s, and for every loop in-between, calculating a linear interpolation between the last two sin points, comparing that with the current position, and issuing a step and/or direction change if needed.
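the loop described above can be simulated in python: sin sampled at 60fps, a linear interpolation between the last two samples deciding when each step fires in between. values like steps-per-unit and loop time here are illustrative, not measured from the machine:

```python
import math

# simulation of the arduino loop described above: sin(time) sampled at
# 60fps, with a linear interpolation between the last two samples
# deciding when to issue each step pulse. steps_per_unit and loop_dt
# are illustrative stand-ins, not the machine's real numbers.

def step_events(duration_s, steps_per_unit=1000, fps=60, loop_dt=20e-6):
    """return (time, direction) for each step pulse the loop would emit."""
    prev_s, next_s = 0.0, math.sin(1.0 / fps)
    frame, frame_start = 0, 0.0
    position = 0                  # current position in steps
    events, t = [], 0.0
    while t < duration_s:
        if t - frame_start >= 1.0 / fps:    # 60fps: take a new sin sample
            frame += 1
            frame_start = frame / fps
            prev_s, next_s = next_s, math.sin((frame + 1) / fps)
        alpha = (t - frame_start) * fps     # 0..1 through the current frame
        target = round((prev_s + alpha * (next_s - prev_s)) * steps_per_unit)
        if target != position:              # issue a step toward the target
            direction = 1 if target > position else -1
            position += direction
            events.append((t, direction))
        t += loop_dt
    return events
```

each trip through the loop is cheap (compare and maybe step), so the expensive sin only has to happen once per frame, just as the text describes.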
later, in the down-time during shipping and the gallery install, i spent time working through options to drive the motor better. the machine would run smoothly – purr, even – if i drove it with something known-smooth like a sine wave, but whenever i switched away from this a problem appeared. the intention had always been to add some kind of inertial smoothing to the control, so that whatever input the controller took would translate into something that was mechanically viable for the machine. accelerating the led bar puts torque on the linear bearing, and the weight of the whole assembly will simply stall a stepper motor pulsed from still to a constant turn. the answer should be to employ a physical model in the code, which, a little research showed, the accelstepper library effectively does. you can command a step position, and it will take the motor there within a defined envelope of speed and acceleration. every stepper project should use this… except it turns out the maths is too computationally expensive to achieve anything close to stepping fast enough for this project on an 8mhz arduino.
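to sketch the idea (not accelstepper’s actual per-step timing maths), a toy loop that chases a commanded position while staying inside a speed/acceleration envelope; all numbers are illustrative:

```python
import math

# toy version of the accelstepper idea: chase a commanded step position
# while staying inside a speed/acceleration envelope. this is a per-tick
# simulation, not accelstepper's per-step timing maths; all numbers are
# illustrative.

def move_within_envelope(target, max_speed=2000.0, accel=4000.0,
                         dt=0.001, max_ticks=10000):
    """return the list of positions visited on the way to `target`."""
    pos, vel, trace = 0.0, 0.0, []
    for _ in range(max_ticks):
        to_go = target - pos
        if abs(to_go) < 1.0 and abs(vel) < 10.0:
            break                                  # close enough, stopped
        # fastest speed we could still stop from in the remaining distance:
        stop_speed = math.sqrt(2.0 * accel * abs(to_go))
        desired = math.copysign(min(max_speed, stop_speed), to_go)
        # slew velocity toward desired, limited by the acceleration:
        dv = max(-accel * dt, min(accel * dt, desired - vel))
        vel += dv
        pos += vel * dt
        trace.append(pos)
    return trace
```

the stall problem above is exactly what the `stop_speed` term avoids: the commanded position can jump, but the velocity ramps within the envelope rather than going from still to full speed in one step.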
problem was, any other strategy i tried in my own stepper driving code proved insufficient – i had high hopes for a simple median filter. this may have been partly due to the other side of driving the stepper smoothly: the microsecond consistency of pulse timings (as discovered originally, trying to drive direct from the pi). i suspect the serial read of commanded step position from the pi was interfering with this and/or disrupting the exact regularity between received commands that the linear interpolation relied upon.
in hindsight, two things stand out. one is that the raspberry pi is incredibly fast and powerful compared to the arduino, so running such a library fast enough would present no issue if it ran a realtime operating system akin to the arduino’s. this required more linux-fu than i had headspace for at the time, but is surely the correct way to go. the other is a little more embarrassing, in that research since has shown the stepper driver – the hardware box that takes the logic-level step pulses and makes it so – has an optional smoothing filter that is configurable by software. i never got to this, as mentally the driver was sold as not needing software configuration – the very point being it was pre-tuned for this series of motors – and physically, the interface cable was bespoke and i didn’t buy it. just perhaps breaking that mental mould to make up that cable could have saved me nights’ worth of grief.
to cut back to sitting in front of the machine with the opening in hours not days, anything that wasn’t that sinusoidal self-driving arduino sketch was causing horrible, horrible sounds (something like a resonant frequency in the half-coupling and splined shaft that slotted into the pulley), and so running sinusoidal was what it was going to be.
and to underline just how crazy the whole motor episode was, simply adding a debug line to send the current position of the motor to the serial console when the belt mid-point marker passes the sensor caused mechanical vibrations that made you want to turn the whole thing off. that’s how sensitive the pulse timings are. throughout, it has been the worst case of observing changing the observed.
of course, things are never straightforward and the particular nightmare of this project starts: motion control. every single aspect has problems. start the analysis, isolate issues…
LED baton mounted on the vertical slide mounted on the floor-ceiling bar, power supplies installed and wired up, first pass software to run the LED strips… let there be light!
to this point, everything has been to plan and as per researched data sheets. the motors do not prove that way, in any way. mechanically mating them to the pulley end-unit proves to be fraught with issues; likewise for their control.
the particular motor chosen has a shaft that matches the diameter of the bore in the pulley; this i had checked. what i had overlooked is that, unique to this particular motor in the range, the shaft is much shorter than the others’. so short that engaging it into the pulley is an issue. this required making a motor mating plate as thin as possible, and some horrible workarounds with offset grub screws, taps and holes where they shouldn’t be, and access holes where they shouldn’t be just to be able to tighten the result up. eventually it was done, but it was a bruising night.
having got a mechanical fit of motor to frame and pulley, then onto control. a quick implementation in the python script running the controller, and the vertical position is being read from the text file and translated into stepper pulses to sync the mechanical position. code
so… it’s alive! it works….
…but actually it doesn’t. the control isn’t giving smooth operation. it hurts just thinking about the jittery movement and how the cantilevered led bar amplifies that.
carriage blocks machined, pcb assembled, parts mounted and LED bars have LED strips fitted and on their way to being wired. damn fiddly, not without its trials and certainly time consuming, but looking damn sharp.
fitting the ‘dotstar’ LED strips in particular is both wonderful and frustrating. such a nice fit when fitted, but all sorts of trouble getting them in and out, an action liberally aided by silicone lubricant. lots of silicone lubricant. but many gotchas: the rubber sheath stretches as you pull the assembly through, so woe betide you if you need to adjust afterwards or pull out again (which, this being both prototype and first assembly, you’ll end up doing a few times); and the dotstars are not continuous strips – the 60/m low-density strips are sold as 5m rolls but are actually factory-soldered 0.5m sections, so you need to de-solder the appropriate joint, while the 1m sections of 144/m high-density strips need the copious silicone and whatnot completely removed before soldering them together. just doing that on the bench could be unreliable, and when sheathed and pulled into the groove… yeah.
blocks of metal, calipers, drill presses… haven’t done this since mechanical engineering at uni in the nineties.
top photo is me verifying the modifications required for the carriage to fit the baton, the belt bar, and the controller. 11 holes from m3 to m6, some with recessed heads, some tapped.
i spent a long time figuring out what was best to run the led bar up and down. roller wheels on a solid bar came out best, as roller wheels are fast and, being at the corners, good for torque; and if what they’re running on is incompressible, the carriage should hold square. after asking their staff some specific questions about the various item linear bearing parts, i got a duff answer, and what came out of the box had ball bearing races. this part was chosen as being aesthetically the neatest, so there’s still good in having them, but there’s more friction than i’d like, and the ball races have turned out to be fragile. if there was to be a tekton one three mkii, this is something i’d change.
this is a daughter-board for the raspberry pis that will run each arm of tekton one three. in theory, it was trivial: bridging the pi’s header pins to connectors for the specific cable runs that needed to be made, via 3.3v-5v driver chips. in reality, it was a painstaking hand layout: as well as the data lines, this was the power distribution board for a whole lot of LEDs. double-digit amps; most of the ribbon cable between moving carriage and fixed upright is simply power.
to glastonbury to exhibit small global: extreme energy. greenpeace asked d-fuse to feature their work on fracking with the school of advanced study, university of london as the centrepiece of their ‘clIMAX’ dome. happy to oblige, on all fronts.
watch, it’s really quite good. an online video is not the same as an installation, but put it full-screen, keep with it and you’ll get a real sense of the mounting intensity and serious subject.
b-seite charge two: a performance. it’s good to do things away from the stresses of big theatres, to be able to experiment amongst your peers. and wonderfully, there was mesh, smoke machines and the very capable tim vis. there’s a one minute video on vimeo.
jupiter, through the lens of d-fuse - and, less desirably, the lens of my phone.
to the opening concert of belfast festival: d-fuse have been commissioned to provide a visual score for holst’s ‘the planets’. here, the inevitable shot of test card on gauze screen.
one flaw in this line of expression is that you never have the setup to fully explore the form, or most of the time even rehearse. history repeated itself here, with much time lost to flying a succession of screens and battens to get an acceptable framing, and the belfast philharmonic providing a coda to the light-level negotiation that was beyond parody… far, far from the professionalism you’d expect, with belfast festival the loser.
nonetheless, amongst all that, we had that magic moment where it all comes together: alone in the hall with lights down, imagery commanding a vision in space… and, simply put, that music.
warmed-up from the apple store gig, it’s down into a concrete bunker for two d-fuse sets at the redsonic festival. small, intimate and stuffed with 50 speakers in a surround-sound dream, it’s quite the gig. matthias rocks out, his musique concrète scores now chasms of sound; latitude is the immersive drift that totally captured the audience; the trip of particle finally is (how else to describe it?) dancing in abstraction to the music, and mixes to a finale of new work by paul that, in drawing the audience back into a breathing-like minimal simplicity, had them in the palm of his hand. gasps and whoops: nice!
with mike sharpening his critical muscles doing an information environments mres, there’s not been much d-fuse action since los angeles earlier in the year. but here we are, in an apple store for latitude – not quite the theatrical staging, but oh my: the slightly surreal setting disappears upon using the new generation of laptops in anger for the first time. vdmx b8 + quad core i7 + latest radeon + ssd = finally, flawless performance for us. the ghost of perfectly smooth hardware playback that had hung over my head since moving us to a software setup is banished in a blaze of compositing and audio-reactive tweaking. happy days.
last two days in LA spent with the excellent morgan bernard, though again largely behind a laptop screen: trying to pull the PhD and whatnot back into focus. a welcome break came by way of scenic diversion, coupled with morgan’s realisation there was a GH1 camera in an office in town… cue hours of amazing filming around san pedro docks. perfect blue gradated sky with industrial everything scrolling by against it. and we even found the same container and car trains that had been snaking through the windfarmed desert at the beginning of the trip.
straight from the rhythms and visions workshop to “levi’s film workshop” for a performance of latitude. the place is a bit like santa’s grotto for visualists, all sorts of equipment there to be borrowed and used. it’s also in the same building as moca’s street art exhibition, an embarrassment of riches with some standout shepard fairey pieces (yes, it’s his constructivism cliché, but the textural qualities of the physical pieces were amazing).
as part of the rhythms and visions event, a day of workshops was organised. los angeles visual artists – lava – had the morning, and covered the past and present of ‘visual music’ works. mike, matthias and i had the afternoon, and we filled the four hours precisely with a tour through the d-fuse oeuvre and a journey through our particle production process. the latter was my main contribution, and it’s a tricky balance to give: lots of really cool stuff – shooting, taking crops, building abstracting effects – but with what can reduce down to a sea of noodles and buttons. pretty happy though, good feedback that the thread was there and it all tied up: people got it.
endless cities over, the scrim is dropped and it’s time for particle. the last performance of particle – cynetart – had the sense of finally getting to a definitive, rounded piece. this, then, with the new season upon us, should have been the start of particle phase two, starting with much-upped audio-visual linkage and a higher-res two-up 16:9 format. the world has a way of conspiring sometimes, and instead for me it was one step forward and three back, as the challenges of the format re-working and no time outside this trip ate any creative or rehearsal time, leaving just enough to get a functioning show. which isn’t to say it wasn’t still quite an experience for the audience. as the photo shows, it was not the average film school evening!
first d-fuse performance: a new cut of endless cities with live score from matthias and guest brian lebarton. mike and i have to stay on stage even though it’s press-play on the visual side, so i spend 45 minutes in front of my laptop trying to not look idle while not touching it for fear of disturbing the playback.
update: footage of matthias interviewed on the radio has been uploaded to youtube.
onto the reason behind the trip: d-fuse are to headline the rhythms and visions: expanded and live event put on as part of the visions and voices initiative at the university of southern california. arriving there, i do like seeing that magic photo printed up nicely and put around.
the tech check turns surreal as the campus is invaded by frat house types in swimming gear attempting to dip in all the fountains… except that the cinematic arts one is barricaded off as we set up around it. cue scenes of zombies at the gates.
and those wind turbines were placed there for a reason: keeping the tripods still wasn’t trivial.
here we are outside LA nearing palm springs, wind turbines and freeway, and rail track just out of this shot seemingly transporting mile-long trains of new cars. the turbines, somewhat ironically, power las vegas. might be a cut too far for endless cities, but i couldn’t help linking the sight of pensioners acting out their retirement program, trundling around on golf carts on courses terraformed out of the desert, with thoughts of america’s by-gone obsession with space… infinite expansion and resources on the new, intergalactic frontier.
live audio-visual projects are hard to take a measure of. it’s something that only exists in the moment, but in that moment you, as the performer, are as far removed from a position to judge the work as possible: you’re not in the audience experiencing the whole effect, but on-stage in the middle of those screens, locked in a specific mindset… if you’re lucky you get to look up from the laptop screen to the monitors, which themselves are no real representation of the final projected whole.
so it’s all very subjective. the first gig of particle was a triumph – or felt that way – and i think that was a combination of the release of seeing something you’ve worked on turn out good as it properly came into being for the first time, along with some more-than-we-realised luck with the staging, and a lot of simply mentally editing out the bad bits. at least here i can qualify it somewhat with comments on the [quick edit i made] from our recording of the gig, pretty much all of which are great.
since then, lots of ups and downs; starting to concentrate on the weaker sections, lots of juggling the content around, spending too much time trying to get the midi-sync working (particle is jinxed there, i know not why), and finding out that the particular geometry of stage, auditorium, projector and screen is much more critical and subtle than at first it seemed.
so it was really good to finally have a performance where it felt like we’d really smashed it, a true high. the above image is frames from a ten-second excerpt of our recording that in itself forms such a satisfying loop. we don’t have a full recording, but we certainly have the footage to make a proper promo edit with that as the base. huzzah.
comedy bonus: there’s a video podcast from the festival with us in it. the footage is a rum selection, but yes, it really happened.
straight on the heels of the ‘holotronica’ onedotzero d-fuse performance, to dresden for ‘cynetart: international festival of computer based arts’. after the experiment of holotronica - more diversion and distraction in the end - this was the return to the real thing, and the chance to up its game with a refined set of footage.
the festspielhaus turns out to be an immense room, but more so, their technical production turns out to be impeccable. setting the staging for performances like particle is always tricky, as the immersive effect we are after for the audience is so dependent on the geometry of the space, and at cynetart it all came together. the projection was able to fill the front screens fully while bathing the whole seating area in the spill-through projection, yet without the audience being blinded by looking down the lens of that projection’s source. rigged for a ‘trip’, rather than as cinema.
check the attachment too, the whole space was quite special, not to mention cycloid-e behind us.
as the musion setup is a transparent (pepper’s ghost) screen in front of a ruffled curtain to set a depth cue for the audience, and the d-fuse piece ‘particle’ we were to perform is an explicitly multi-layered projection, we needed to replace the rear curtain with a screen. at kinetica, the effect wasn’t what we hoped for, so we went to musion’s very old-money hq in the week beforehand to experiment. after testing some combinations of screens and projector placement, the real breakthrough was just how transformative a performer’s presence was between the screens. having a body on stage, even if stuck behind a laptop, transformed the imagery from something easily flattened into a composite to something that had depth and mystery, as the straight lines typically projecting across the stage to screens behind would wrap around the organic form, dipping in and out of the void. that was partly to be expected, but i think there was something more subtle too, where it grounded the scale of the piece, adding back something that was subtly set by the texture of the curtain in a straight musion setup.
of course, we arrived at the bfi and were shown to our performance space… front of house. it was a compromised gig in many ways, testing out a dual 1024x768 setup and content redux instead of our well developed triple 640x480 setup and media bank, but that really killed it for me. c'est la vie.
there was some consolation in that we had a top-spec mac pro waiting to be returned off the back of a commercial project, and so i got to test how particle runs on a machine that isn’t my four-year-old laptop. you can guess the answer, mmm! first, however, to get the solid-state drive in there… courtesy of a lollipop stick and some tape, which wasn’t much short of a wing and a prayer. and with the lack of a musion hq test shot or crowd-sourced photo of the gig, that’s what sets this entry.
there is now a mac pro in china running DFD, something that has been consuming my time for a while now. the roadshow d-fuse have been developing is our first big foray into automated dynamic content, lighting and audience interaction, and so without us being there for every gig holding it down with a hacked-up vj setup we needed something that you could just power on and the show would start. and so d-fuse/dynamic was hatched, a quicktime and quartz composer sequencer which reads in presets and its input/output functionality from a folder we can remotely update, and essentially just presents a “next” button to the on-site crew.
what i think is particularly novel about DFD is that it was designed to output a consistent framerate, rendering slightly ahead of time so the fluctuations in QC and QT frame rendering are buffered out. i’m not sure if the effort/reward of this was worth it, but it will be an interesting code base to come back to and re-evaluate.
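the render-ahead idea can be sketched like this – a hypothetical simulation that elides the real buffer’s back-pressure, just to show how per-frame jitter gets absorbed as long as the average render cost stays under budget:

```python
# sketch of the render-ahead idea described above: frames are rendered
# ahead of their presentation deadline, so jitter in per-frame render
# cost never reaches the output clock. `depth` is the pre-roll in
# frames; back-pressure on the real buffer is elided. all names and
# numbers here are illustrative, not DFD's actual code.

def simulate_playout(render_times, fps=60.0, depth=3):
    """return the presentation time of each frame, given per-frame
    render costs; output is clocked at a fixed fps after pre-roll."""
    frame_dt = 1.0 / fps
    ready, clock = [], 0.0
    for cost in render_times:        # renderer runs ahead, flat-out
        clock += cost
        ready.append(clock)
    presented = []
    for i, r in enumerate(ready):
        scheduled = (depth + i) * frame_dt   # fixed output clock
        presented.append(max(scheduled, r))  # a late frame slips; else on time
    return presented
```

feed it alternating cheap and expensive frames that average under the 1/60s budget, and every presentation lands exactly on the output clock.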
for the roadshow, it is playing out up to four sources at 1280x576 – including the generative, audio-reactive core of the show, controlled by iPads in the audience and LED balls on stage – sending the central 1024x576 to the main screen, driving ten 72x1px LED strips from the remaining 576x128px on either side, and sending DMX back out to the stage lighting and LED balls.
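a guess at how that canvas carve-up might look in code – the central crop is as described, but the mapping of each side band onto the ten 72x1 strips (five per side, sampled evenly) is my assumption, not the production mapping:

```python
# illustrative carve-up of the 1280x576 canvas described above: the
# central 1024x576 for the main screen, a 128px band each side for the
# led strips. five strips per side, sampled at evenly spaced columns
# and rows, is an assumption.

def split_canvas(canvas):
    """canvas: 576 rows x 1280 columns of pixels (any indexable values)."""
    main = [row[128:1152] for row in canvas]      # central 1024 columns
    strips = []
    for x0 in (0, 1152):                          # left band, right band
        for s in range(5):                        # assumed: five strips/side
            col = x0 + 12 + s * 26                # spread across the 128px
            rows = [int(i * 575 / 71) for i in range(72)]  # 72 leds/strip
            strips.append([canvas[y][col] for y in rows])
    return main, strips
```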
big thanks to vade, luma beamerz, and memo for helping me one way or the other grok anti-aliased framebuffer rendering.
having spent much time i didn’t have trying to get 64-bit QTX giving me openGL frames at QuickTime 7 efficiencies, life-saving thanks also to vade and tom for v002 movie player 2.0, for which there is a patch back with them giving it the ability to play the QT audio to a specific output.
i’m not sure what to do with the code at the moment. it was made as a generic platform, but its current state is still very much tied to that specific project. or rather, the inevitable last minute hacking as it hit china needs to be straightened out. it has been made and funded as a tool for d-fuse to build on, so that needs to be taken into account too. in short, if anybody has a concrete need for such a thing, get in touch and we’ll see what could be done.
happiness is twelve hex bytes, generated by a pocketable custom LED fixture on detecting a bounce, transmitting that via xBee, receiving into the computer via RS232, being parsed correctly, outputting into a QC comp, doing a dance, and commanding back to the fixtures via Artnet via DMX via xBee.
- What is AV:in — Introduction To Audiovisual Arts
Audiovisual Culture is rapidly gaining momentum with new technology and information resources quickly aiding the drive. In 2010 more and more people are seeking and acquiring the skills required for audiovisual production and interactive creation. This global movement is spreading into screens, phones and other commercial applications as well as providing a rich source of culture for digital communities inspiring new trends in design and art.
AV:in is a new media course for this market. We have designed AV:in as a comprehensive online educational program for audiovisual studies. It follows the trend of regular workshops, lectures and professional training on audiovisual production and performance, which have continued to grow in popularity all around the world.
the iPad-screens interaction isn’t even half of it, though. the project is to be a roadshow travelling through china, a modular setup of stage and staging reaching out into the club. two ‘hero’ videos with dance choreography and interactive props (hello LED balls), a video feedback piece, a live drawing piece, and the backbone of the evening being the iPad interaction and audio-reactive graphics, all feeding out onto the main screen, duplicated onto any in-house video system each venue might have, with LED sticks extending the video canvas out from the stage, and video-controlled stage lighting effects.
…it makes for a nice diagram.
there is a big d-fuse production in the works, where the brief rather wonderfully emphasised interaction with and within the audience. as briefs often do, things have changed a lot since the heady time of working on and winning the pitch, but the core of it is still generative graphics and punter control from the club floor. and so here, courtesy of dr.mo‘s crack team of coders, is an in-development iPad app talking over WiFi to a QC plugin, where my two fingers-as-proxies-for-collaborating-audience-members are sketching locally and that is being incorporated on the club’s screens.
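i can’t publish dr.mo’s actual protocol, but as a sketch of the shape of the thing, assume normalised touch points going out as JSON over UDP (both the message format and the transport here are my assumptions):

```python
# hypothetical phone-to-plugin link: normalised (0..1) touch points
# packed as JSON for a UDP datagram. the real protocol used by the
# iPad app isn't documented here -- this just shows the idea.
import json

def encode_touches(touches: list[tuple[float, float]]) -> bytes:
    """Pack normalised touch coordinates into a datagram payload."""
    return json.dumps(
        {"touches": [{"x": x, "y": y} for x, y in touches]}
    ).encode("utf-8")

def decode_touches(payload: bytes) -> list[tuple[float, float]]:
    """Unpack a payload back into (x, y) tuples on the receiving side."""
    data = json.loads(payload.decode("utf-8"))
    return [(t["x"], t["y"]) for t in data["touches"]]
```

the sending side would push each payload with `socket.sendto(...)` to the laptop’s address; on the QC side a plugin listening on the same port would decode and feed the points into the composition.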
particle’s air conditioning units finally get an airing, midi sync or no. it turned out to be a really good gig for trying new things; i was really happy with the ‘noodle’ i got going.
with the musion to play with once more, to the shunt vaults for an experimental performance of particle.
may wouldn’t be the same without the annual pilgrimage to the excellent mapping festival. fourth time running now, having played as part of narrative lab performances and workshops, then journalism, and last year doing a workshopped kinetxt performance. this time, it’s as part of d-fuse, performing particle.
as i write this, i still haven’t got the shots of mapping from d-fuse hq, but just found this one on the internet. it’s by mapping’s genius photographer ork, check out his portfolio of mappings past and present.
aaah, a proper theatre. well resourced, good tech crew, and a lovely auditorium for the audience.
there’s no denying it’s like a flight deck of buttons, but props to vdmx’s configurability and plug-in friendliness. full playback control of both single screens and the dualhead spanning as quicktime sources, and a hardcore quartz composer patch wrapping a lot of custom openGL code, fronted by a UI panel laid out in interface builder.
d-fuse were asked to do something with the musion screen at kinetica art fair. particle was designed for a transparent screen layered in front of the cinema screen behind, so mike got to work on a 15 minute redux. playback was just hitting play on two laptops simultaneously, and even for this oh how i was reminded that i should write that basic show playback app…
this friday i will be doing my vj thing together with mike, matthias and sarah of d-fuse, paul of labmeta, and mo of electrovision. we have the run of the london transport museum, who are opening late with a bar. it’s the first time we’ll have done something there, so it will be baby steps, but i think there’s real potential: the victoriana building and the spatial feel the scale of its exhibits gives, the covent garden location, and the real gem of their proper screening theatre downstairs.
- Sounds of the Suburbs, Friday 6 November 2009 Tune in to a multi-sensory journey of light, image, colour and sound with D-Fuse, Labmeta, *spark and Electrovision and their collaborative set - Urban vs suburban with VJing, audio visual performance and screenings complete with silent disco. Check out Designated Area artist Andy Morgan’s live illustration of a classic cityscape with a suburban twist.
a sunset to end, aaah. lucky with the weather for our filming day!
and here we go, leading into the bridges
coming into newcastle, but can’t shake the oil rigs yet, just too good.
two screengrabs of output featuring footage andrew shot at tynemouth. oil rigs, beaches and a bleaching sun.
the title should really be ‘live at sea’ by the looks of this clip. not to mention the setup looked a bit like the bow of a ship.
the performance had some really beautiful moments, and clips made to pan across the full triplehead work so well in that triangular setup, making lighthouse beams in the space. in my opinion we let ourselves down by not having time for a full run-through beforehand: the to-and-fro between ‘particle-as-is plus a newcastle bit’ and ‘newcastle-in-the-style-of-particle’ resolved itself live, and as a factor of SSD drive capacities.
the performance was to be in the newly transformed great north museum‘s event space, a big, white, and empty room. with the standard two-layer 'proscenium arch’ presentation not so suitable, mike came up with a triangular staging.
up to newcastle with d-fuse for a performance of particle as part of reinventing the city. we collaborated with locals novak for a day of filming and a day of performance/prep. nice to be out with them with a camera on a glorious day rather than worrying about wiimotes and audience interaction.
geordie hospitality really can be great. andrew and i were just scoping out an alleyway in byker that got the sunset over the city centre, and literally at the moment we turned around and saw a balcony above a house’s garage perfectly positioned for the shot, the owner rocked up and offered to take us up there. cups of tea and all.
(and excuse the contrast pump on the shot, couldn’t help myself)
of course, you’re missing six of the satellite screens: two facing in just out of sight, offset at the far end, and the four mirroring that end behind the camera’s position. but that’s the nature of the beast: you can’t watch an 11,000px quicktime, you can’t watch a 3200x1080 crop alongside four 768x768 crops, you can’t get everything in with the viewing angle of the unaided eye even in the stand. that’s what immersive, or certainly surround, means.
i hope we can pitch again next year, impressive as it all was i really want to break away from these big white walls…
having spent a day documenting our work, i can say this steadicam rig may look the part but is the most awful thing to actually try and use. makes you realise why the film-grade ones are basically terminator suits.
a tribute to fellow frequent d-fuser paul mumford: it’s pretty tough: you’ve been holed up for weeks on end with the impossible brief of this video environment; master of after effects or no, an 11,000px canvas is going to hurt, and you’re finally seeing the results… and you can’t even get a photo of it all, ‘cos it’s wrapped around walls and towers surrounding you!
from pitch and the day-long meetings to kick it off, it’s now three months later and i’m actually in the sony stand at ifa 2009. a non-disclosure agreement, an 11,000x1080px canvas in after effects, 15 full-bore mac pros compositing their slice of the 400GB drive image full of particle layers and product prisms, eight screens pixelmapped in surround across the stand’s towers… it’s been one of those projects of scary statistics, corporate reputations riding on the line and knowing there is no preview, the not-yet-there architecture is the output monitor.
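for a feel of the slicing, a back-of-envelope sketch of dividing a canvas between render machines. whether the real pipeline split evenly like this is my assumption; 11,000 doesn’t divide cleanly by 15, so the leftover pixels have to land somewhere:

```python
# hypothetical slicing of a wide canvas across n render machines:
# give each machine a contiguous (x_offset, width) slab, spreading
# the remainder pixels over the first few machines.
def render_slices(canvas_width: int, machines: int) -> list[tuple[int, int]]:
    """Return (x_offset, width) for each machine's slice of the canvas."""
    base, remainder = divmod(canvas_width, machines)
    slices, x = [], 0
    for i in range(machines):
        width = base + (1 if i < remainder else 0)  # absorb leftover pixels
        slices.append((x, width))
        x += width
    return slices
```

with `render_slices(11000, 15)` the first five machines get 734px-wide slabs and the rest get 733px, summing exactly to the full canvas.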
to take a step back, d-fuse won the commission to create the video environment that forms the centrepiece of the stand. having contributed to what in effect were a series of brand films last year, this year was the real thing: together with andy visser’s sound design, creating an audio-visual environment to give a… well, how to describe: something between an emotional journey and a feeling of space to the stand. i like freestate‘s vision for the stand, almost as parkland with scattered mini-booths of products, rather than the products! products! products! you find elsewhere.
so it’s a nice moment to have mike, mr d-fuse, and adam, mr freestate, in-situ and discussing the finer points of trade-show land with smiles, looking at a stand that pretty much nailed what was discussed those months ago.
…but all that fades away against the rift of paraisópolis and morumbi. o m g. to see this was why we’d hired the helicopter. just unreal.
urban conditions…? i would just stick this on a cinema screen for an hour and let people deal with thinking it through.
lots of urban visual goodness from the air in são paulo… great concrete vistas, modernism old and new…
and then there we were. above the city, doors removed, pointing cameras wherever we could.
for the record, this is one of the more stressful things i’ve done: knowing the cost per minute, not having a tripod fitting let alone the gyro-stabilised camera of the newscopters, being blown around by the tremendous headwind, barely being able to see the screen in the glare, everything going too quick, and only having a car seatbelt type affair holding me in as i leant out. but it was amazing. just need to raise the money to hire a full-HD-filming-equipped ‘copter: having experienced the diy version and then looked at the real deal in its hangar, i’d say worth every penny. you just need something to justify it…
são paulo is an interesting city for a number of reasons. statistically, it’s a big hitter in the d-fuse world of urban conditions: the first or second most populous metropolitan area in the americas with a cool 21 million, and with the rich escaping the traffic via the largest helicopter fleet in the world. and so, mulling on that, we found ourselves at the heliport…
with the standard in the bank, it was onto the experimental performance, the world premiere of ‘particle’.
Particle explores urban conditions on an abstracted level. While projects like Undercurrent, Latitude and Surface look at city life in its social and psychogeographical dimensions, Particle zooms in on details of the urban fabric and reveals a web of rhythms, patterns and textures.
particle is also in many ways a rite of passage for me; it’s not often you get the chance to take an HD film and transform it into the next-generation ‘we wrote the book on vjing’ d-fuse performance. there’s a lot more work to do, especially in creating an audio-visual synchronicity in collaboration with particle’s musician matthias kispert, but we rocked it and got such a positive response. there is a video showing some excerpts of the performance here: http://vimeo.com/5787905
i’m really happy with
thanks to itaú cultural for the photo
produced for twin dvds and a bit of optional laptop noodle, here at on_off is its first outing as an entirely laptop based performance. the change is to bring it in-line with the new HD-savvy live setup i’ve been developing, where adding ever more SD dvd streams won’t cut it. the benefit is we should get better quality content with twin 800x600 progressive outputs rather than twin PAL/NTSC, and we have far greater creative control now the performance sits in our vdmx setup. the try-it-for-the-first-time surprise was that we weren’t getting 100.00% smooth playback for the straight sequences, which on a cinema screen for a theatrical audience becomes an issue. so with some juggling with quicktime player, we had the best of both worlds and rocked it.
of course, now that we’re dvi/vga+progressive, and have spent the past year dealing with HD, we need to go back to the original sources and remaster it all!
thanks to itaú cultural for the photo
abertura was the first time i really got to throw myself into a ‘particle-esque’ performance, using the d-fuse content with the live setup i’d created. as a warm-up for são paulo, it was a great one: the music and visuals really came together to give an intense show in the relatively small space of abertura’s hall, and it really gave me a confidence boost.
i captured five minutes’ worth from abertura’s documentary footage, and it’s on vimeo here: http://vimeo.com/5731407
as a warm-up to d-fuse‘s trip to brazil, we performed a test of the new live piece “particle” at electrovision. it definitely felt like a test, as things were plugged in and loaded up to be used in anger for the first time, and while i wasn’t so happy creatively with the form this first rendition took, that is secondary to what happened there: the dvi mixer i’ve been dreaming of for years - and that my work transforming d-fuse’s live shows has been predicated on - worked as simply and unobtrusively as it should. all the sophistication and craziness lives in the laptop, where we have creative control as far as we wish to configure and code, and we have hardware reliability to ensure we can a) guarantee a solid output signal to the projectors no matter what is going on with the laptops and b) mix together and tag-team the performance between two visuals laptops.
i am working on getting this out to the vj world for a limited run, and a full announcement will follow. it will cost somewhere in the range of $1000 - £1000, take input and output over VGA and DVI, and allow you to do what nothing else will: dualhead at 1600x600, triplehead at 1920x480, HD at 1920x1080@60Hz. it is based on a conference bit of kit that i have developed a controller for, and that i hope to get some extra vj-love put into its firmware. the trick to both those goals is to aggregate our demand into one order big enough for a production run of the controller to be made and for it to be worthwhile for the developers of the conference kit to spend some time enhancing it for our uses. so expect the announcement once everything is locked down and orders can be taken.
beyond visible things like keane3D, lots of work has been going on in the background with d-fuse this past year or so. some of it pitching, some of it pushing along internal projects, and at the moment a massive commercial job under NDA: all things which don’t really get to this diary. but i’m glad to announce a little preview of something i’ve been working on a while, which is transforming d-fuse live.
as part of electrovision on saturday the 18th july at roxy bar and screen, d-fuse will be performing for the first time with their high-def laptop live setup. it will be something between a test of the setup and a preview of the performance that will become ‘particle’, so all are welcome and beta-tester feedback appreciated!
there’s much more to say about what has been developed for this setup, but in the meantime here is the blurb i wrote for electrovision:
D-Fuse present a work-in-progress viewing of their new live performance, an experimental audio-visual triptych exploring urban conditions. Having mastered an HD production process for films such as Brilliant City and Surface, they have challenged themselves to bring this back into the live arena with the graphical sensibility they are noted for.
pictured: vdmx work-in-progress setup, with amongst other things custom quartz composer 4x3 into 12x3 layer, backed by unreleased open-gl based qc plugins, and fronted by a native vidvox control layout.
after much shenanigans, keane3D did happen and d-fuse were part of it. no mapping-on-keane-artwork-triangles-with-antivj, no live layered-in-space reactive projections, either of which should have worked amazingly with the multi-3Dcamera filming setup, instead the program graphics. this meant on the day i was in charge of getting the content into the more-than-impressive and bespoke for the occasion outside broadcast rig - always be prepared to reencode everything squatting round the back of a rack of equipment - so always a nice moment when you actually see it going out live from the gallery.
i started writing an essay about this gig - the light surgeons performed “true fictions” and d-fuse presented “surface”, both huge and important works shown to a sell-out audience on the biggest screen in europe - but the window of opportunity to get it coherent, let alone finished, has passed for now, so here is a placeholder at least to say it happened.
thanks to laura fiorio for the photo
this is during the ‘straight’ performance of latitude, essentially an edit. as i hoped, there was call for an encore and i had vdmx ready for a much more layered, graphical remix. it’s so gratifying when an improvisation like that goes really well, especially when the time you’d planned to rehearse and shape it was taken up with dealing with after effects consistently crashing halfway through the updated led renders.
thanks to multiplicidade for the photo.
weird to see the clarity of this photo against my blurred impression from within the performance. having been told there was upward of a thousand people out there waiting to see us, they were of course in pitch black, it being a theatre/cinema, and i couldn’t see anything… until the moment came to release the secret weapon, bathing the place in light when we got to the buddha-like moment before knocking it down to highlights of the imagery.
thanks to multiplicidade for the photo.
ok so actually this is just being put up in the hour before the gig, but that didn’t stop 900 people coming. let’s just repeat that: 900 people attended a live cinema gig. on a tuesday. rio and multiplicidade rock.
…and the laptop looking positively dwarfed by nasa* mission control, aka d-fuse live. setup finished the day before the gig, soundcheck and all: definitely the way forward. especially when your 45 minute render for the led panels crashes after an hour and a half right near the end.
…with the led panels on the side walls firing with my pixelmapped dfuse test sequence…
thats two big projectors, a lot of seats and the beginnings of the live setup…
hello batman, director of multiplicidade, rio de janeiro’s festival of ‘unusual image and sound’. and quite something too, a marathon through spring, summer and autumn producing a new event every 15 days. and on tuesday, d-fuse will be presenting ‘latitude’ as the opening event of multiplicidade’s new strand, presenting works in a theatre that takes their capacity from 100 to 1000. so between now and then, i’d better work on making assets to fit the led panels at the side of the theatre space and on working out an encore with musician si-cut-db, as for this trip, d-fuse is me.
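as a footnote on what ‘making assets to fit the led panels’ involves, here’s a minimal pixelmapping sketch: sampling a high-res frame down to the coarse grid of a panel by averaging each cell. the panel dimensions and the averaging approach are my assumptions, not the venue’s actual rig:

```python
# hypothetical pixelmap: downsample a 2D grid of brightness values
# (one int per pixel) to a panel_w x panel_h LED grid by averaging
# each source cell. assumes the frame divides evenly into cells.
def pixelmap(frame: list[list[int]], panel_w: int, panel_h: int) -> list[list[int]]:
    """Return a panel_h x panel_w grid of averaged brightness values."""
    src_h, src_w = len(frame), len(frame[0])
    cell_h, cell_w = src_h // panel_h, src_w // panel_w
    out = []
    for py in range(panel_h):
        row = []
        for px in range(panel_w):
            cell = [frame[py * cell_h + y][px * cell_w + x]
                    for y in range(cell_h) for x in range(cell_w)]
            row.append(sum(cell) // len(cell))  # average the cell
        out.append(row)
    return out
```

the resulting low-res grid is what would get packed into the panels’ control protocol; in a real rig you’d do this per colour channel and on the GPU rather than in nested python loops.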