
field

retrospective post, now that there’s a public page and credit to link to – https://field.io/work/ibm-watsonx

i was tasked with porting the high-end interactive experience shown at events so it could run on low-end laptops, wherever they may go. not glamorous, but in many ways an ideal freelance gig: a green-field implementation of a good design, for a quality studio.

diary | 23 jul 2024 | tagged: code

tauri contributor

if only all issues resolved as simply as this. deficiency identified, fix made, one-line PR contributed, reviewed and accepted.

diary | 23 jul 2024 | tagged: code

dorkbot / quantifying progress

gave a lightning talk at dorkbot – we all know that a hack ain’t production-grade, well here’s me quantifying that.

‘taking stock: what made a maker business’ is far too big a talk to give at dorkbot, but this little excerpt made for a perfect bite.

diary | 11 jun 2024 | tagged: dvi-mixer · code · pervasive media studio · dorkbot

audience / 2

artists can not just offer control to their audiences, they can charge for it. this is a development shot of the first test – it had to be the archetype volta element, the pulse ring. 10 bits to trigger.

these may be baby steps, but they’re crucial: conceptually, audience interaction is what differentiates a live stream from playback of one. economically, micro-transactions from fans could sustain the artist (cf. 1,000 true fans).
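
for flavour, a sketch of what the viewer side of that can look like. the twitch extension helper’s bits api (useBits, onTransactionComplete) is real; the sku, element and relay endpoint below are made up for illustration, not the actual extension code.

```typescript
// Viewer-side sketch of a "bits to trigger" button. The Twitch Extension
// Helper (window.Twitch.ext) and its bits API are real; the SKU, element id
// and relay endpoint are illustrative.
declare const Twitch: any; // provided by the extension helper <script>

let viewerToken: string | undefined;

Twitch.ext.onAuthorized((auth: { token: string; channelId: string }) => {
  viewerToken = auth.token; // JWT, verifiable by the extension backend
});

// Tapping the pulse ring button starts a Bits transaction for the product
// configured in the Twitch dev console (10 Bits, in this case).
document.getElementById("pulse-ring")?.addEventListener("click", () => {
  Twitch.ext.bits.useBits("pulse-ring-trigger"); // hypothetical SKU
});

// Only trigger the element once Twitch confirms the transaction completed.
Twitch.ext.bits.onTransactionComplete((tx: { product: { sku: string } }) => {
  if (tx.product.sku === "pulse-ring-trigger" && viewerToken) {
    // hypothetical extension backend that forwards the trigger into volta
    fetch("https://example.com/trigger", {
      method: "POST",
      headers: { Authorization: `Bearer ${viewerToken}`, "Content-Type": "application/json" },
      body: JSON.stringify({ sku: tx.product.sku }),
    });
  }
});
```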

diary | 28 mar 2023 | tagged: volta · code

audience / 1

been working on audience interaction for volta, and this is a happy moment: our twitch extension in production, publicly available, working not just on desktop but mobile too.

just as an artist can link up a midi controller to control their volta world, they can link up audience control. i.e. if you have a button in volta, you can give that button to your audience.
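
and a sketch of that ‘give the button to your audience’ idea from the viewer side – the extension helper’s onAuthorized and configuration service are real; the config shape, endpoint and ids are made up.

```typescript
// Viewer-side sketch: the artist's configuration names which volta button is
// exposed, and viewer presses are relayed onwards to a backend.
declare const Twitch: any; // window.Twitch.ext, from the extension helper <script>

let viewerToken: string | undefined;
let exposedButtonId: string | undefined;

Twitch.ext.onAuthorized((auth: { token: string }) => {
  viewerToken = auth.token;
});

// The artist's choice of button arrives via the broadcaster configuration segment.
Twitch.ext.configuration.onChanged(() => {
  const raw = Twitch.ext.configuration.broadcaster?.content;
  if (raw) exposedButtonId = JSON.parse(raw).buttonId; // hypothetical config shape
});

// A press in the panel becomes a press of that button in the artist's volta world.
document.getElementById("audience-button")?.addEventListener("click", () => {
  if (!viewerToken || !exposedButtonId) return;
  fetch("https://example.com/press", { // hypothetical extension backend
    method: "POST",
    headers: { Authorization: `Bearer ${viewerToken}`, "Content-Type": "application/json" },
    body: JSON.stringify({ buttonId: exposedButtonId }),
  });
});
```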

diary | 21 mar 2023 | tagged: volta · engaging audiences · code

voltaservices

long term, volta will need a web platform: artists might have pages listing performances, for example, or there might be some kind of content marketplace. and that web platform should scale for the mass market volta is aiming for, and be able to cope with the inherently spiky demand of something based around events or performances.

so i’ve been building that, or rather building the things we need in the short term with a view to that: thinking more in terms of a platform than the expedient one-off services volta had made before i joined. i’d landed on sst as the best way to do this – effectively a developer-experience do-over of writing serverless apps on amazon web services.

the work had started on sst zero-point-something, and since then sst had launched v1. upgrading to that turned into a near-complete re-write, but it was time well spent. javascript → typescript, raw dynamodb calls → electrodb, integration tests… to me, serverless development finally felt coherent and sane.
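
for a flavour of that shape of things, a minimal sketch in sst v1 plus electrodb – stack, table, entity and route names all made up for illustration, not voltaservices itself.

```typescript
// stacks/VoltaStack.ts – an SST v1 stack: one single-table DynamoDB table and
// one API route. Names are illustrative.
import { StackContext, Api, Table } from "@serverless-stack/resources";

export function VoltaStack({ stack }: StackContext) {
  const table = new Table(stack, "Data", {
    fields: { pk: "string", sk: "string" },
    primaryIndex: { partitionKey: "pk", sortKey: "sk" },
  });

  const api = new Api(stack, "Api", {
    defaults: { function: { environment: { TABLE_NAME: table.tableName } } },
    routes: { "GET /artists/{artistId}/performances": "functions/listPerformances.handler" },
  });
  api.attachPermissions([table]);

  stack.addOutputs({ ApiEndpoint: api.url });
}
```

```typescript
// functions/listPerformances.ts – an ElectroDB entity over that table, in
// place of raw DynamoDB calls. Attributes and access pattern are again made up.
import { Entity } from "electrodb";
import { DynamoDB } from "aws-sdk";

const Performance = new Entity(
  {
    model: { entity: "performance", version: "1", service: "volta" },
    attributes: {
      artistId: { type: "string", required: true },
      performanceId: { type: "string", required: true },
      startsAt: { type: "string" },
    },
    indexes: {
      byArtist: {
        pk: { field: "pk", composite: ["artistId"] },
        sk: { field: "sk", composite: ["performanceId"] },
      },
    },
  },
  { table: process.env.TABLE_NAME!, client: new DynamoDB.DocumentClient() }
);

export const handler = async (event: any) => {
  const results = await Performance.query
    .byArtist({ artistId: event.pathParameters.artistId })
    .go();
  return { statusCode: 200, body: JSON.stringify(results) };
};
```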

diary | 14 nov 2022 | tagged: volta · code

stream engine

it’s upside-down and the colour ain’t right, but it’s the proof of my proof-of-concept: we can stream ourselves from within unity, no need to rely on third party solutions with different priorities and unresponsive support. using a c# wrapper around the libav ffmpeg libraries, volta stream engine lives!

diary | 14 jun 2022 | tagged: volta · code

2D→3D demo, volumetric fx

…continued from last post; 4 of 4

This is me, this visual source down here, and you can see that this is another take on the dot cloud idea. This is an element that early adopters will know as the chroma key billboard and more recent adopters will know as the Volumetric FX. This is us doing volumetric effects on what would be a flat 2D surface.
One of the problems we’ve had with this before was that you couldn’t have it cut out, you couldn’t have it so that the dots were just on you.
Another problem we’ve had with it was that it was introducing different colours. So now the colours are locked to the colours of the source that you give it. But if you want, you can give it another visual source and it will take highlight colours from that – and we can get rid of that just by taking it back.
You’ve got all of these nice things to play with, like bubbles. So hello again – let’s make his bubbles go up, up, up, up.
Another thing here is that with these image masks… let’s set the volumetric effect surface back, maybe just to that placeholder image we’ve seen so much now. I was masking with a still image before, but of course we’re masking with visual sources, so we can do anything. There’s a Volta logo bouncing along. Let’s add that as an image key. There it is. And so now you can see we’re starting to get some really nice 3D things going on there, with the trails and the particles.
So that was a quick tour. Volta Create 0.12. You can do a lot more with your 2D sources.

diary | 19 mar 2022 | tagged: volta · code

2D→3D demo, dot cloud

…continued from last post; 3 of 4

Everything so far, we’ve been taking 2D images and putting them in 3D, but we’ve kind of been keeping them as those flat images in some way. They’re not kind of natively 3D.
If you look at this, you can see that this is a bit different. This is that same webcam placeholder image, but you can see that this is somehow more in 3D. We turn the pixels into dots.
If I go to dot Cloud, which is what this new element is called, you can see I can change those dot sizes from something really small up and up. But the real thing here is that we can play with a displacement slider.
The way this is working at the moment, it’s taking the displacement from the brightness of each pixel. You can imagine there are ways that we’re gonna take this in the future. But yeah, let’s just explore how we can use this artistically for a moment.
You’ve seen this with the webcam placeholder image. But I commute in – I mean, I work remotely, but I commute into the office every now and then. So I have these early morning, late night trains, and I like looking out the train window. And so here you can see this is just that: sideways scrolling out of the train window, passing through stations. But look, we can really ramp up the displacement and you can start to be in it.
Also if we just bring that back, you can see we’ve still got all these black pixels here – this rectangular frame hanging in the space. We don’t want those pixels; we don’t want that black frame. This would be a good time for a luminance key, a luma key. So I need to add that to the source.
Here’s the visual source. I’ve got a luma key on it and I’m going to turn up that threshold for keying out. It’s going to take out anything that’s almost black; it’s going to take out total black.
Now all of a sudden we can see that we’re kind of creating abstract 3D art, truly 3D in that 3D space.
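
A rough sketch of the mapping just described – brightness to displacement, with a luma key dropping near-black pixels. The real dot cloud presumably does this on the GPU inside Unity; this TypeScript is only the idea, with illustrative names.

```typescript
// Each pixel becomes a dot whose z displacement comes from its brightness,
// with a luma key dropping near-black pixels.
interface Dot { x: number; y: number; z: number; r: number; g: number; b: number }

function dotCloud(
  pixels: Uint8ClampedArray, // RGBA, e.g. from CanvasRenderingContext2D.getImageData().data
  width: number,
  height: number,
  displacement: number,      // the "displacement" slider
  lumaThreshold: number      // the luma key threshold, 0..1
): Dot[] {
  const dots: Dot[] = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const [r, g, b] = [pixels[i], pixels[i + 1], pixels[i + 2]];
      // Rec. 709 luma, normalised to 0..1
      const luma = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255;
      if (luma <= lumaThreshold) continue; // key out near-black pixels
      dots.push({ x, y, z: luma * displacement, r, g, b });
    }
  }
  return dots;
}
```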

…continued in next post

diary | 19 mar 2022 | tagged: volta · code

2D→3D demo, keys

…continued from last post; 2 of 4

We’ve been working with DJ Yoda, and here he is mixing, and so we need to key him out. He wants to appear in front of the visual output that he’s scratching. So how do we do that?
We have got a human segmentation thing. This will, using machine learning, find the person in the visual source and key them out.
So there we are, we can see it’s pretty good. And actually, if you look down in the corner, there we are – that’s me; that’s a webcam running with human segmentation on it too.
But machine learning, look, it can’t be perfect in these things. And as you might have noticed Duncan actually has got a little green screen behind him. So let’s see if we can use that.
We’re going to add a behaviour.
Now we’ve got a chroma key. The hue will find the colour and the tolerance is the smoothing around that. Slide the hue slider around till it just homes in on the colour you want to take out – there we are, we’ve got it; do that with the tolerance down at zero.
Then you bring the tolerance up just until those crunchy green edges go away, but don’t do it too far ‘cos otherwise things will start going semi-transparent. So it’s just about there.
But you can see that it’s a very nice key. Problem is of course, we can still see his room, but we know how to fix that because we can add an image key to that. So we can just mask that out there.
It was already set to this visual source here, which is just a manual mask I made as a black and white image in my image editor. So there we are.
So now we can put Duncan into a 3D stage set, into a 3D world around him. We could have audio-reactive generator elements, we could have his big screen behind him. So that’s good.
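
For the curious, a simplified sketch of the hue + tolerance keying described above. The real element is presumably a shader, and the soft falloff here is an assumption about how the tolerance behaves.

```typescript
// Simplified per-pixel chroma key: pixels whose hue is close to the key hue go
// transparent, with the tolerance giving a soft edge (push it too far and
// everything starts going semi-transparent).
function chromaKeyAlpha(
  r: number, g: number, b: number, // 0..255
  keyHue: number,                  // degrees, e.g. ~120 for green
  tolerance: number                // degrees of soft falloff around the key hue
): number {
  const hue = rgbToHue(r, g, b);
  // shortest angular distance between the pixel's hue and the key hue
  const dist = Math.min(Math.abs(hue - keyHue), 360 - Math.abs(hue - keyHue));
  // fully transparent at the key hue, fully opaque beyond the tolerance
  return Math.min(1, Math.max(0, dist / Math.max(tolerance, 1e-6)));
}

function rgbToHue(r: number, g: number, b: number): number {
  const [rn, gn, bn] = [r / 255, g / 255, b / 255];
  const max = Math.max(rn, gn, bn);
  const min = Math.min(rn, gn, bn);
  const d = max - min;
  if (d === 0) return 0; // grey: no hue
  let h: number;
  if (max === rn) h = ((gn - bn) / d) % 6;
  else if (max === gn) h = (bn - rn) / d + 2;
  else h = (rn - gn) / d + 4;
  return (h * 60 + 360) % 360;
}
```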

…continued in next post

diary | 19 mar 2022 | tagged: volta · code

2D→3D demo, surface element

volta is a 3D app, coming from immersive VR. i find that so interesting. it’s not what i’m used to: VJ software like vdmx and resolume is 2D, like photoshop. so how to leverage what i know into this new world?

this isn’t just a me-thing. 3D is hard, and what people bring to volta is 2D – say a logo or a folder of videos. it seemed to me there’s a real product challenge here, in getting people to do cool stuff in 3D with a 2D skill-set. the latest release has a lot of work from me on that. and so, with the release, a ten-minute demo from me. here in the diary, as a transcript and four stills.

Volta Create 0.12

One of the focuses of this release is doing more with 2D imagery. Getting you, with your webcam, your images and your videos, into the 3D world.
Here I’ve built a little stage set out of three images. In this case it’s a webcam; you’re seeing the webcam placeholder.
The first new thing here is that we have an element that’s suited for getting your stuff into the world. It’s called the surface.
And so the first thing to say about the surface is that it respects transparency. So I’m going to take that one at the back, copy it, paste it, and now I’m going to bring it to the front.
I’m using the Gizmo feature that’s also new in 0.12.
And now this surface we can set to a PNG that I’ve got in here, one that is transparent. Already that’s looking nice in 3D.
That’s just a transparent PNG. You can make those in your image editing program, to make frames and all sorts.
But we can do more than that…
I’ve got these two surfaces on the side, and I’m going to set these to a different visual source. It’s that webcam again, but on these you can see that there’s some transparency too.
So in 0.12 we have a whole new image keying and masking thing going on.
On this visual source, it’s just a webcam again. But here I’ve got an image key on it. I’ve added this image key and I’m using another visual source as the mask. That’s just a black and white image that’s masking it up.
And so all of a sudden, we’ve got this whole kind of thing that really truly exists in 3D, but we’ve built it just out of simple flat images.

…continued in next post

diary | 19 mar 2022 | tagged: volta · code

nine earths » end to end

d-fuse won a british council call to create an artwork to be part of the cop26 climate change conference. the idea was to create a piece on consumption built out of user-generated content from around the world. working with partners around the world, we’d take a cultural probe approach, and frame it all through the idea of consumption rates vs. what the earth can sustain, as exemplified by earth overshoot day. in a time of covid, this could all be done remotely, and it would be presented online.

and so: a first draft of what that might take. a server app to ingest the media, and a client web app that can animate the title card, go through an introduction, and then display a mosaic of the submitted content, indefinitely. far from a refined design or the full content in mike’s increasingly elaborate after-effects pre-vis / guide, but an end-to-end something.
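
as a sketch of what the ingest side of such a first draft can look like – assuming an express + multer setup, with illustrative routes and field names rather than the actual nine earths server:

```typescript
// Minimal ingest sketch: accept media submissions, list them for the client
// web app to build its mosaic from. Routes and storage are illustrative.
import express from "express";
import multer from "multer";
import { promises as fs } from "fs";

const app = express();
const upload = multer({ dest: "media/" }); // store submissions on disk

// Partners and participants POST a media file plus a little metadata.
app.post("/submissions", upload.single("media"), (req, res) => {
  const { contributor, location } = req.body; // hypothetical metadata fields
  console.log(`received ${req.file?.originalname} from ${contributor} (${location})`);
  res.status(201).json({ id: req.file?.filename });
});

// The client web app polls this to keep its mosaic fed, indefinitely.
app.get("/submissions", async (_req, res) => {
  res.json(await fs.readdir("media/"));
});

app.listen(3000);
```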

diary | 26 aug 2021 | tagged: dfuse · nine earths · code

ORBIT open-sourced

To complement the release of the dataset, here’s the infrastructure used to create it – my code open sourced for future dataset collection projects to build on our work. And as our work shows, machine learning needs more inclusive datasets.

https://github.com/orbit-a11y/ORBIT-Camera
https://github.com/orbit-a11y/orbit_data

diary | 31 mar 2021 | tagged: orbit · research · release · code

ORBIT Camera v2

I was brought back to do some tweaks to the app for the second phase of data collection. The big-ticket item, motivated by simplifying the procedure required of the participants, was to lose the open-ended collection in favour of a fixed number of things, with a fixed number of videos each.

I didn’t quite know whether to laugh or cry: this fixed-slot paradigm is what I’d proposed in the first place. Seemed pretty clear to me. Perhaps I could have argued for it better. Anyway, oh my golly, not only did it simplify their procedure, it was also a great-and-good simplification of the sighted and accessible UX of the app.

diary | 03 nov 2020 | tagged: orbit · code

ORBIT Phase One data

Data collection phase one comes to an end: test and training imagery for 545 things, in the form of 4568 videos. Having built the system with barely a page of test data, it never gets old seeing the paginator having to truncate itself.

diary | 16 jul 2020 | tagged: orbit · research · code

hand daubed dots

another thing browsers didn’t use to have: blend modes. so highlighting a dot goes from stamping a red circle on top to applying an additive red. gets the hand-drawn feel.
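
the post doesn’t say how the dots are drawn; assuming a 2d canvas, the change amounts to something like this – additive blending via globalCompositeOperation instead of stamping an opaque circle:

```typescript
// Highlight a dot with an additive red, so the red adds to the ink underneath
// and the hand-drawn texture shows through, rather than stamping over it.
function highlightDot(ctx: CanvasRenderingContext2D, x: number, y: number, radius: number) {
  ctx.save();
  ctx.globalCompositeOperation = "lighter"; // additive blend, instead of the default "source-over" stamp
  ctx.fillStyle = "rgb(180, 0, 0)";
  ctx.beginPath();
  ctx.arc(x, y, radius, 0, Math.PI * 2);
  ctx.fill();
  ctx.restore();
}
```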

diary | 13 may 2020 | tagged: each egg a world · code

hand drawn speech bubbles

hand-drawn speech bubbles, in CSS. on the one hand, it’s kinda amazing you can – shape-outside: polygon() is a recent thing. on the other hand, what a faff. it’s shape-outside, not shape-inside, so let’s go write a script to subtract the traced outline from a rect, and split it into two halves…
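
that script, roughly – a sketch that assumes the traced outline has already been split into left and right boundaries, with illustrative names rather than the actual each egg a world code:

```typescript
// Build the two shape-outside: polygon() values, one for a left float and one
// for a right float, so text wraps inside the bubble. Each boundary is the
// bubble's edge on that side, ordered top to bottom.
type Point = { x: number; y: number };

function polygonCSS(points: Point[]): string {
  return `polygon(${points.map(p => `${p.x}px ${p.y}px`).join(", ")})`;
}

// The left float covers everything between the container's left edge and the
// bubble's left boundary; text then flows to the right of it, i.e. into the bubble.
function leftFloatShape(leftBoundary: Point[], height: number): string {
  return polygonCSS([{ x: 0, y: 0 }, ...leftBoundary, { x: 0, y: height }]);
}

// Mirror image for the right float, against the container's right edge.
function rightFloatShape(rightBoundary: Point[], width: number, height: number): string {
  return polygonCSS([{ x: width, y: 0 }, ...rightBoundary, { x: width, y: height }]);
}

// Usage: assign the results to each float's style.shapeOutside (and clip-path,
// if you want to see the shapes while debugging).
```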

diary | 08 may 2020 | tagged: each egg a world · code

ORBIT Camera → App Store

ORBIT Camera is available on the UK App Store, for iPhone and iPad. And with that, phase one data collection starts. Huzzah!

The app used by blind and low-vision people to collect videos for the ORBIT dataset project – Object Recognition for Blind Image Training.

If you are blind or have low-vision, read on! We are collecting a dataset to help develop AI recognition apps that will use your mobile phone’s camera to find and identify the things that are important to you. For example, has someone moved your house keys? Do you regularly need to identify specific items while out shopping? What about your guide cane – have you forgotten where you put it, or gotten it confused with someone else’s? Maybe you want to recognise the door of a friend’s house? Imagine if you did not have to know exactly where your things were in order to find or identify them again.

To build these recognition apps, a large dataset of videos taken by blind and visually impaired users is needed. As part of the ORBIT dataset project, you will be asked to take multiple videos of at least ten things that are meaningful to you or that you regularly need to find or identify. We will combine these videos with submissions from other ORBIT contributors to form a large dataset of different objects. This dataset can then be used to develop new AI algorithms to help build apps that will work for blind and visually impaired users all over the world.

Not that it was without drama…

Thank you for contacting App Store Review to request an expedited review. We have made a one-time exception and will proceed with an expedited review of ORBIT Camera.

diary | 07 may 2020 | tagged: orbit · release · code

ORBIT Camera, the app

Hot off the digital anvil, an iOS app for blind and low-vision people to collect videos for the ORBIT dataset project. The client for the ORBIT Data server. Written in Swift, using UIKit, UIAccessibility, AVFoundation, GRDB[1], and Ink.

First-run

On first-run, the user gives research project consent. This is a two-page modal screen, first with the ethics committee approved participant information, then the consent form. The participant information page has a share button ①, which will produce an HTML file – an appropriate and accessible equivalent to a physical copy of a participant info hand-out.

On providing consent, the server creates the participant record and supplies credentials that the app then uses to access the API endpoints. The app does not keep any record of the personally identifiable information given as part of consent, or of which participant ID it represents; it only has the set of credentials.

This two-screen, step-through, first-run is simpler than my UX pitch. That had the idea of being able to test the app before consent, in order to ’show not tell’. I was in a ‘selling it’ mindset, whereas our participants would already have been recruited to some degree, so the onus was on getting them collecting data as quickly as possible, with the UX quality to keep them there.

Things list screen

The app follows the Master-Detail pattern. The Master screen lists all the Things the user has added. A thing is something that is important to the user, that they would like a future AI to be able to recognise. Things are created with a user-supplied label ③.

Plus an app/project information modal screen, accessed via ②.

A glaring omission from my UX pitch, fixed here in ④, is tallies for video counts.

Thing record and review screen

This is the detail screen of the thing-centric Master-Detail pattern. The aim of the app is to capture imagery of a thing, following a certain filming procedure. Visually, this screen presents a carousel of videos organised into the different categories of procedure. For each category, there are the videos already taken, plus a live-camera view as the last entry to add a new video. Information and options relating to the video selected in the carousel appear below, with a camera control overlay when appropriate.

The big change since my UX pitch is losing the fixed slot paradigm. There was a desire not to ‘bake in’ the data collection procedure, so many of the conceptual or UX simplifications have been lost in favour of open-ended collection.

The VoiceOver experience has a different structure. Here, the user first selects the procedure category Ⓐ, within which they can add a new video to that category Ⓑ or review existing videos for that category Ⓒ.

Plus a filming instructions screen, accessed via ⑤.

UI Notes

For VoiceOver, the touch targets of the UI elements were often inadequate. The clearest example of this is the close button of the first-run / app info / help screen. Swiping left and right through the long-form text to get to this control was impractical. Plus the button was hard to find, partly because it’s small and top-right, and partly because it’s swamped by this content. So the accessible experience was re-jigged to have this close control be a strip along the right-hand-side edge. Another clear example is the camera’s start/stop button, whose accessible touch target extends to the screen edges ⑦. This means that most screens actually have an entirely bespoke accessibility layer.

More gratuitously, the slick UX centred around the carousel took some coding. The pager does a lot more work, being split into filming procedures with an ‘add new’ tailing element for each procedure section ⑥; it got super-positive feedback from testers used to the first iteration, which had just a plain pager and a selection of filming type for every recording. The carousel itself features multiple camera viewfinders, which meant a lower-level approach than AVCaptureVideoPreviewLayer was required.


  1. “because one sometimes enjoys a sharp tool” ↩︎

diary | 03 may 2020 | tagged: orbit · code

ORBIT Data, the server

Also hot off the digital anvil, the data back-end to partner the collection app. Hence ORBIT Data, a web-app to collate and administer the ORBIT dataset. It comprises a REST API for consent and data upload and status from the iOS app, plus admin functionality for verification and export of the data. Built with Python using the Django framework.

diary | 03 may 2020 | tagged: orbit · code
