I’ve been thinking about visualisers after a weekend discussion about the psychedelic visualisations that PC apps like Winamp and iTunes generate during music playback.
I contributed a bit of history to the discussion around audio sync in VJing, which I used to do with friends, including one who was based in Finland for a while and helped develop a custom VJ app.
Around the turn of the century we were VJing our 3D animations at some quite large shows, including Roni Size gigs, using this prototype VJ software to loop and apply effects to rendered video footage. We had BPM sync but no audio input. Most systems that generate patterns, fractals, lines and so on sync to the music using a simple volume level. That’s pretty rudimentary, and it has been the common approach since the 90s.
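To make that concrete, here’s a minimal sketch of volume-level sync in Python, assuming mono samples normalised to [-1, 1]; `draw_pattern` is just a stand-in for whatever the visualiser actually draws:

```python
import numpy as np

def volume_level(samples: np.ndarray, frame_size: int = 1024):
    """Yield a 0..1 'volume' per frame: the RMS amplitude of the samples.

    Assumes `samples` is mono audio normalised to the range [-1, 1].
    """
    for start in range(0, len(samples) - frame_size, frame_size):
        frame = samples[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        yield min(rms * 2.0, 1.0)  # crude gain so typical music fills the range

# A visualiser of this era would map each value straight onto brightness or scale:
# for level in volume_level(samples):
#     draw_pattern(scale=level)   # draw_pattern is hypothetical
```

Everything the visuals do is driven by that single number, which is why loud hi-hats and loud bass drums look identical to it.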
It wasn’t until apps started filtering the frequency spectrum, triggering or generating visuals from specific types of sounds, that things got more interesting.
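A rough sketch of that idea, splitting one audio frame into bass, mid and treble energies via an FFT; the band edges and `trigger_kick_visual` are illustrative, not taken from any particular app:

```python
import numpy as np

def band_energies(frame: np.ndarray, sample_rate: int = 44100):
    """Split one audio frame into rough bass / mid / treble energies via FFT."""
    # Window the frame to reduce spectral leakage, then take magnitudes
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    bands = {"bass": (20, 250), "mid": (250, 4000), "treble": (4000, 16000)}
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# e.g. fire a kick-drum visual whenever the bass energy jumps past a threshold:
# if band_energies(frame)["bass"] > KICK_THRESHOLD:
#     trigger_kick_visual()   # hypothetical
```

Now a kick drum and a hi-hat can drive entirely different visual elements, even at the same volume.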
Rendering the actual vibrations of the sound is a whole different paradigm, and it requires a lot more processing power, especially in 3D. This is what I’m currently experimenting with.
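Here’s a toy example of what I mean, mapping every sample of a frame onto a ring of 3D vertices so the waveform itself becomes geometry; the names and proportions are mine, purely for illustration:

```python
import numpy as np

def displace_ring(frame: np.ndarray, radius: float = 1.0, height: float = 0.3):
    """Map one audio frame onto a ring of 3D vertices displaced by the waveform.

    Each sample pushes its own vertex up or down: rendering the vibration
    itself, rather than a derived statistic like volume or band energy.
    """
    n = len(frame)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x = radius * np.cos(angles)
    y = radius * np.sin(angles)
    z = height * frame              # raw sample values become vertical displacement
    return np.stack([x, y, z], axis=1)  # (n, 3) vertex positions for this frame

# Regenerating geometry like this every frame, for thousands of vertices,
# plus normals and lighting, is where the extra processing cost comes in.
```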
The first video whose sync really blew me away came in 2002, from long-time innovators Autechre.
The animation was not automated: it was hand-synced frame by frame!
Alex Rutterford on the creation of the Gantz Graf video