Memory Cloud

The Memory Cloud is a granular synthesizer that uses your own cloud media library as its sound source. It is an intensely personal and radically small-scale instrument that repositions our personal media as something to be played with. In doing so it challenges the dominant paradigms of growth and technological “innovation” in musical instrument design.

I’ve written a paper about this instrument for NIME; you can have a read for a lot more detail.

If you’d like updates on this project, please join the mailing list! I’ll send a message when I need people to test, and when any devices (or kits) are available for sale.

At its core it is a fairly standard granular synthesizer. The defining characteristic that makes it the Memory Cloud is that the instrument is loaded with all of the sounds from a single person’s cloud media library. One sound at a time can be loaded into the granular engine to be played with; pressing a button on the instrument selects a random sound from your library. A sound will not be loaded a second time until every other sound in the library has been loaded, so pressing the button means leaving your current sound behind.
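That no-repeats selection is what game developers often call a “shuffle bag”. The instrument itself does this inside Pure Data, but a minimal Python sketch of the idea (the file names are made up) could look like this:

```python
import random

class ShuffleBag:
    """Pick sounds in random order, repeating none until all have been drawn."""

    def __init__(self, sounds):
        self._sounds = list(sounds)
        self._remaining = []

    def next(self):
        # Refill and reshuffle only once every sound has been drawn,
        # so you can't hear the same memory twice in one pass.
        if not self._remaining:
            self._remaining = self._sounds[:]
            random.shuffle(self._remaining)
        return self._remaining.pop()

bag = ShuffleBag(["park.wav", "birthday.wav", "rehearsal.wav"])
first_cycle = {bag.next() for _ in range(3)}  # every sound exactly once
```

Each full cycle through the library is a fresh shuffle, so the order is always unpredictable but nothing is skipped or repeated within a cycle.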

To make the sound library, all of the videos in the cloud library are exported and the audio is stripped from each file. The files are then converted into a single common format and loaded into the instrument. This is quite an involved multi-step process that takes longer than you’d think, especially when doing it for someone else. For my own Memory Cloud this resulted in about 1200 sounds taken from videos dating back about 12 years. They represent an odd snapshot of my life: a mixture of rehearsals, playing in the park, happy birthday videos, documentation of projects, and more.

The instrument is essentially a physical build of the ys.granular synthesis system I have used for a number of different projects, and the code is available there along with a rough manual, the PCB files, and the faceplate design.


Some frequently asked questions:

How is it built?

Inside the instrument is a Raspberry Pi 4 running the Pure Data code. The knobs and the rest of the interface are connected to a Teensy microcontroller, which sends the control data to the Pi via the [comport] serial object in Pure Data. The audio comes out of the Raspberry Pi. I will probably change this system over to something else in the next version, for various reasons, but it works well for now.
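The actual wire format between the Teensy and [comport] isn’t documented here, so purely as an illustration: assuming the Teensy sends each knob reading as an ASCII “id value” line, with values from its 10-bit ADC, the Pi-side decoding could look like this sketch:

```python
def parse_control_line(line):
    """Decode one 'id value' serial message into (controller_id, level).

    The ASCII wire format and the 0-1023 (10-bit ADC) range are
    assumptions for illustration, not the instrument's actual protocol.
    """
    ctl, raw = line.split()
    return int(ctl), int(raw) / 1023.0  # normalise to 0.0-1.0
```

In the real instrument this decoding happens inside the Pure Data patch, where [comport] delivers the raw serial bytes; the point here is just that the Teensy handles the physical interface while the Pi only ever sees a stream of small control messages.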

Can I have one?

I have made a Memory Cloud for someone else before (you can read about it in the paper linked above). Because of the work involved in preparing the sound library this is a fairly demanding process, but also a very rewarding one. If you’d like one, please get in touch by emailing me (yann at yannseznec dot com). My main concern is that I only want to make one for someone who is really willing to embrace the limitations and the concept. I’m not going to make one that just has a bunch of random sounds in it, for example; for the instrument to work it really needs to be the whole thing.

Can I build one myself?

Yes, please do! I think I have put everything you need in the GitHub project, but get in touch if you have any questions. I’d love for you to make your own version of it.

Shouldn’t you make a version that is actually connected to the cloud?

This is a “cloud library synthesizer” in the sense that I have taken all of the sounds from videos in my cloud library; it is not actually connected to the cloud. That means that if I add a new video to my cloud library, it will not automatically be loaded into the instrument. If I want to update the library in the instrument I have to “manually” export the videos from my cloud library, convert the audio locally, and load it onto the instrument’s SD card.

Does that compromise the concept? You decide! For me, though, this is a necessary compromise. I don’t want the instrument to be connected to the internet, and for the sound buffering to be acceptable the audio needs to be stored locally anyway. On top of all that, the level of expertise necessary to use cloud library APIs to extract the audio from video files and load it into Pure Data is far beyond me. I’d also argue that the process of exporting the files is a fantastic way of engaging with the overwhelming mass of media that we tend to collect.