I made a bot a while ago called Song Rememberer, which asks if you remember a song. It’s continually remembering songs but can’t quite name them – really just an extension of the sound system in my own head.
I am one of the only people who follows it, but every day I get to remember an imagined song or two. Occasionally someone replies with an attempt to name the song. You can follow it too at @songrememberer.
The code was made with Kate Compton’s amazing Tracery, and hosted via Cheap Bots Done Quick!. The source code is available here, if you want to make something similar.
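Tracery works by recursively expanding `#symbol#` placeholders from a grammar of rules. Below is a minimal sketch of that mechanism in Python – the grammar here is invented for illustration, not the actual Song Rememberer rules:

```python
import random

# A hypothetical Tracery-style grammar -- illustrative only, not the
# real Song Rememberer rules.
grammar = {
    "origin": ["Do you remember that song? The one that goes #sound#, #sound#?"],
    "sound": ["doo doo doo", "da da daaa", "bum bum", "la la laaa"],
}

def expand(symbol, rules):
    """Recursively expand a #symbol# using Tracery-style replacement."""
    text = random.choice(rules[symbol])
    while "#" in text:
        start = text.index("#")
        end = text.index("#", start + 1)
        inner = text[start + 1:end]
        text = text[:start] + expand(inner, rules) + text[end + 1:]
    return text

print(expand("origin", grammar))
```

Each run picks a random expansion for every `#sound#` placeholder, which is all a bot like this needs.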
It doesn’t do a lot – just presents and reads aloud each letter of the alphabet in order. You can switch between uppercase and lowercase by clicking on the letters, and toggle auto-playing the alphabet. The colour changes with each letter thanks to randomColor by David Merfield, and the font is Manrope by Mikhail Sharanda. The source code is available on Github. I’m sure anyone with a little coding knowledge could improve it.
The audio on the site is the stock Mac voice Fiona, who has a Scottish accent. My daughter now copies Fiona’s pronunciation whenever she reads the alphabet from the website (despite having a fairly English accent the rest of the time).
If there is a small person in your life, here is a site you can safely leave them with. Just go to abc.olliepalmer.com, put your phone/tablet in locked mode, and let them click away.
I’ve just been digging around some old hard drives and came across this screenshot from a project I did for Kreider O’Leary back in 2012. It was an experimental camera that moved back and forth along an aluminium track, writing small changes in the space over the top of its existing images. Unfortunately the prototype suffered an electrical malfunction when I installed it at Tate Britain (entirely my fault), and so it never got a chance to take slow pictures of people moving around the space.
It’s funny how ideas ricochet around inside one’s own head, morphing over time and through practice – nine years later, I’m mid-way through a project that collates audio in a similar way, with an almost identical tendency to fail at the critical moment.
Through the use of a special headset, a member of the public is transported into a parallel cinematic world, where the familiar urban landscape, people and landmarks still seem to be there, but are now part of an immersive film plot. The player becomes a central figure in a dramatic story – but what is real and what is not? And who is pulling the strings?
Three parallel filmic worlds exist simultaneously.
Immersive reality theatrical experience; live. Project in development.
I decided to participate in NaNoGenMo (National Novel Generation Month) this year with a project called Directory Directory – an online directory of fictional companies, all located within the Alphaville-Zulutown region. It’s organised like an old phone book, by service type, and each company has a name, slogan, address, and phone number.
Some day in the future I’ll update the directory to include more information, use a more advanced grammar, and maybe even make it printable. But the project was a nice excuse to learn some new things (the Tracery library for python is fun to play with; it’s also the first time I’ve built a workflow to generate a whole website).
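The workflow behind a generative site like this can be surprisingly small: compose listings from word lists, then render them into static HTML. Here is a rough sketch of the idea – the word lists, template, and phone format are all placeholders, not the real Directory Directory grammar:

```python
import random

# Illustrative word lists -- stand-ins for the real Tracery grammar.
ADJECTIVES = ["Reliable", "Golden", "Parallel", "Nocturnal"]
NOUNS = ["Plumbing", "Telegraphy", "Balloons", "Archives"]
STREETS = ["Alphaville High Street", "Zulutown Parade"]

def make_company():
    """Generate one fictional listing: name, address, phone number."""
    return {
        "name": f"{random.choice(ADJECTIVES)} {random.choice(NOUNS)} Ltd.",
        "address": f"{random.randint(1, 99)} {random.choice(STREETS)}",
        "phone": f"0{random.randint(100, 999)} {random.randint(1000, 9999)}",
    }

def render_page(companies):
    """Render listings as a minimal static HTML page."""
    rows = "\n".join(
        f"<li>{c['name']}, {c['address']}, {c['phone']}</li>" for c in companies
    )
    return f"<html><body><ul>\n{rows}\n</ul></body></html>"

page = render_page([make_company() for _ in range(5)])
print(page)
```

In practice you would loop this over every service category and write each rendered page to a file, which is all a static generative website needs.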
This week, I made a silly Twitter bot. It was mostly an attempt to make a tutorial about making Twitter bots using Dreamhost servers, but ended up being a bot that periodically tweets lines from Des’ree’s 1998 hit Life.
The bot itself is inspired by the africa by toto bot, which simply tweets a random line from the song every few minutes. It is actually so irritating that I’ve stopped following it myself, as I found my days permeated by twee earworms about preferring toast to ghosts, or the desire to fly around the world in a beautiful balloon.
The codebase is on Github – you can use it to build bots yourself if you use Dreamhost, or adapt the code slightly if you use another host (or have your own server).
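The core of a bot like this is tiny: pick one random line per run, and let a scheduled job (e.g. cron on the server) do the rest. A minimal sketch – the lyric list here is abbreviated, and the actual posting step via the Twitter API is omitted:

```python
import random

# A few lines from Des'ree's "Life" -- the real bot uses the full lyrics.
LYRICS = [
    "I don't want to see a ghost",
    "I'd rather have a piece of toast",
    "Watch the evening news",
]

def next_tweet(lyrics=LYRICS):
    """Choose one random line, as a scheduled job would on each run."""
    return random.choice(lyrics)

print(next_tweet())
```

A cron entry then runs the script every few hours; the only host-specific part is how the schedule is configured.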
I am sure it’s been done before, possibly hundreds of billions of times, but as a small coding exercise whilst writing my PhD I wrote a little piece of code which renders random iterations of Raymond Queneau’s Hundred Thousand Billion Poems on a web page. I re-found it whilst working on another project. Here it is:
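Queneau’s book consists of ten sonnets of fourteen lines each; a poem is made by taking, for each of the fourteen line positions, that line from any one of the ten sonnets, giving 10^14 (a hundred thousand billion) possible poems. A sketch of the mechanism, with placeholder text standing in for Queneau’s (copyrighted) lines:

```python
import random

# Ten source "sonnets" of fourteen lines each; placeholder strings stand
# in for Queneau's actual text.
SONNETS = [[f"sonnet {s}, line {l}" for l in range(1, 15)] for s in range(1, 11)]

def random_poem(sonnets=SONNETS):
    """For each of the 14 line positions, pick that line from a random sonnet."""
    return [random.choice([s[i] for s in sonnets]) for i in range(14)]

poem = random_poem()
print("\n".join(poem))
```

Each position is an independent choice among ten lines, hence the 10^14 combinations of the title.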
A couple attempt to communicate from afar using an interface which translates their movements into words.
Structured across three micro-acts, Scriptych takes precision in choreography to an extreme, embedding sensors on dancers which measure their movements and control both the music and the words spoken aloud, in real time. The couple’s communication becomes increasingly fragmented as the piece develops, posing questions about the location of meaning in messages and movements, and the impossibility of communicating true intent.
3 x 3-minute choreographed sequences for 2 dancers. Custom computer interface with machine-learnt three-dimensional word database.
INA, the French National Audiovisual Institute, made a video about the collaboration between myself and Simon Valastro below. More information about this project can be found in Chapter 2 of my PhD thesis.
A limited number of signed prints of this performance are available for purchase. Please get in touch for details.
La Rumeur des Naufrages, Opera Garnier, Paris, 18 June 2016
Arctic Moving Image and Film Festival, Harstad, Norway, October 2017
Architecture Film Festival London, Institute of Contemporary Arts / Oxo Bargehouse, June 2017
Film | Making | Space, Royal Academy, London, February 2017
Thanks to the Opera National de Paris
Director Stéphane Lissner
Dance director Benjamin Millepied
Project realised under the Pavillon Neuflize OBC programme 2015-16 (research lab of the Palais de Tokyo), during its collaboration with the Opera National de Paris, the Institut national de l’audiovisuel and the Groupe de recherches musicales (INA – GRM).
A performance visually remixing and reinterpreting Alfred Hitchcock’s classic Psycho (1960).
Working with footage from the Institut national de l’audiovisuel (France), the Prelinger Archives (USA) and my own material, I have built software to analyse the visual and audio content of each frame in Psycho. The frames are then compared to a database of archival footage, and replaced with ‘matching’ stills and video clips.
The rate of frame replacement varies according to the volume of the film’s iconic soundtrack, so that the frenetic energy of the audio is reflected on the screen. The result is a mesmerising, chaotic experience, and a reworking of a highly memorable film.
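One simple way to realise a volume-driven replacement rule is to treat each frame’s loudness as a probability of swapping it for an archival match. This is an illustrative simplification of the mechanism described above, not the actual software – the per-frame volumes here are made-up values normalised to 0..1:

```python
import random

def replace_frames(frames, volumes, rng=random.random):
    """Swap each frame for an archival match with probability equal to
    its (0..1) soundtrack volume: louder moments, more replacements."""
    output = []
    for frame, volume in zip(frames, volumes):
        if rng() < volume:
            output.append(f"archival match for {frame}")
        else:
            output.append(frame)
    return output

frames = [f"frame_{i}" for i in range(5)]
volumes = [0.0, 0.2, 0.5, 0.9, 1.0]  # hypothetical loudness values
print(replace_frames(frames, volumes))
```

Silent passages pass through untouched, while the shower-scene strings would be replaced almost frame by frame.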
This is part of an ongoing body of work examining the technology of cinema.
A real-time film composed of images that show up in a Google Image Search for the exact time at that moment (e.g. 11:41:14). The film plays in real-time, and takes a full day to watch.
The images do not necessarily bear any relationship to each other beyond a shared metadata tag. Thus, it is the audience who read meaning into the assemblage of images, creating stories and hypotheses about them.
The images were gathered using the Google Image Search API, using masked IP addresses so that a search would appear to be from a random global location. As an unconnected string of images, the film forms a visceral snapshot of the US-indexed internet in late 2015.
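The searchable unit is just the clock time formatted as HH:MM:SS, one query per second of the day. A sketch of that enumeration – the actual image fetching used the Google Image Search API (since deprecated) and the masked-IP requests, both omitted here:

```python
import datetime

def query_for(moment):
    """Format a moment as the HH:MM:SS string used as the search query."""
    return moment.strftime("%H:%M:%S")

def queries_for_day():
    """All 86,400 one-per-second queries covering a full day."""
    start = datetime.datetime(2015, 1, 1)  # arbitrary date; only the time matters
    return [
        query_for(start + datetime.timedelta(seconds=s)) for s in range(86_400)
    ]

qs = queries_for_day()
print(qs[0], qs[-1], len(qs))
```

Playing back one query’s results per second is what makes the film run in real time and last exactly a day.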