In my spare time over the past few months, inspired by the work I was doing building interactive installations for the Love Music Festival, I’ve been exploring other ways of using computer vision to generate sound and visuals. This is relatively new for me, but there’s obviously enormous potential (from creating a synthesized fly buzz to gorgeous interactive face recognition installations).
I also tend to spend a lot of time on trains, which I love. I often have my camera with me and I have loads of footage taken from trains, particularly of the electricity lines passing overhead.
I recently combined these things to make software that analyzes the video footage in real time and looks for lines. When it recognizes a line, it draws it on the screen. By pushing various parameters, the software begins to see lines that we don’t, and to make connections between them. I automated those parameters and applied them to some of my train footage. It’s a very basic idea that uses some of the most standard features of the cv.jit library, but I found the result very moving. I was originally going to have it generate sound, but I have come to really prefer it silent. I will probably improve it slowly, eventually adding music and better video footage, but for now I’m very happy with it.
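The actual piece was built with cv.jit in Max/MSP, but the core idea (a line detector whose sensitivity you can push until it hallucinates) can be sketched in plain Python. Below is a minimal, hypothetical Hough-transform vote: edge points vote for candidate lines in (theta, rho) space, and lowering the vote threshold makes the detector report lines that aren’t really in the image. All names and parameters here are my own illustration, not the original patch.

```python
import math

def hough_lines(points, n_theta=90, threshold=5):
    """Accumulate votes in (theta, rho) space for a set of edge points.

    Each point votes once per candidate angle; a bin that collects at
    least `threshold` votes is reported as a detected line (theta, rho).
    """
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return [(math.pi * t / n_theta, rho)
            for (t, rho), votes in acc.items() if votes >= threshold]

# Ten edge points lying on the horizontal line y = 3
pts = [(x, 3) for x in range(10)]

strict = hough_lines(pts, threshold=8)  # reports only bins near the true line
loose = hough_lines(pts, threshold=2)   # "sees" many lines that aren't there
```

Pushing `threshold` down is the toy analogue of the automated parameter sweeps in the video: the strict setting finds the line that is actually in the frame, while the loose setting surfaces spurious connections between points.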
This video, and indeed the ongoing project, is dedicated to my sister Gwen.