Application: interfacing sound with visuals.

We have not yet reached the point of understanding exactly how the chip communicates with an external device.

However, we did realize that the generator produces some kind of output signal/sound. This output is a frequency in the audible spectrum, and as long as we interface it with something that can receive this signal, it can work as a TRIGGER FOR VIDEO manipulation and arrays.

For live video arrays we will use a program (a video synthesizer of sorts) called Keystroke (FREE download for beta testers!!). This program works as a live patcher for video and, among other things, it enables audio follow, i.e. video commands triggered by sound frequencies. This means that we can generate sound in real time that triggers video through changes in the sound signal's frequency. Luxz is pretty familiar with this program, and claims that its audio-follow capabilities are a bit limiting, but as a prototype for further development on a platform that perhaps would work better for these means it will do.

So after experimenting with this prototype for a week and a half and having presented it in class, there are, of course, some things to say about our beautiful sound-generating chip.

First, in response to the previous experimentation with the various channels or, more properly put, freqOut loop commands, we did confirm that the chip indeed only outputs one command at a time. What happens is that if both switches are on, telling it to play both loop commands, it will alternate between the two and thus cut the time intervals in half. This was quite interesting and, if used well, can sound great. The actual sound output of this chip is quite particular, and its quality definitely justifies the name of the command: freqOut! (Please HEAR a couple of demo samples made by Luxz and donated to the site for communal freqOut experiences!)
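To make the halving concrete, here is a little sketch in Python (not chip code; the loop timings and function names are made up for illustration) of why alternating between two loop commands, one at a time, cuts the time between audible events in half:

```python
# Sketch: two freqOut-style loops, each scheduled to emit a tone every
# `interval` seconds. The chip can only output one command at a time, so
# with both switches on it alternates between the loops. Names and
# timings here are illustrative, not the chip's actual firmware.

def tone_times(interval, count, start=0.0):
    """Times (in seconds) at which a single loop would emit its tone."""
    return [start + i * interval for i in range(count)]

def alternate(loop_a, loop_b):
    """Interleave two loops' tones one at a time, as the chip does."""
    merged = []
    for a, b in zip(loop_a, loop_b):
        merged.append(a)
        merged.append(b)
    return merged

# One loop alone: a tone every 1.0 s.
solo = tone_times(1.0, 4)

# Both switches on: the second loop's tones land halfway between the
# first loop's, so audible events now arrive every 0.5 s.
both = alternate(tone_times(1.0, 4), tone_times(1.0, 4, start=0.5))

gaps = [b - a for a, b in zip(both, both[1:])]
print(gaps)  # every gap is 0.5 s: the intervals are cut in half
```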

Freak-out sound samples: config.mp3 (400K) config2.mp3 (388K)
Video Application by Luxz

In terms of the video application, in theory it worked quite well. As a matter of fact, while I was experimenting with the Keystroke patch made for the project, it was responding pretty well to sound changes.

In the presentation, however, the image was freqing out with the audio but was not responding very well to audio CHANGE. Let me explain the patch I created so that this is clearer: the patch took an external camera source in via FireWire. Its output was routed to a buffer and a mirror effect (in Keystroke).

The audio input was patched to an envelope follower, which has only two outputs: peak and frequency. Neither of these has tweakable parameters, so we were working with a pretty stiff command. The peak was routed to the buffer-size and record on/off inputs. The frequency was routed to an oscillator's rate, which was, in turn, patched to the input for the mirror modes (vertical, horizontal, first half, both, second half, etc.).
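For anyone who hasn't used one: an envelope follower just reduces incoming audio to those two numbers. Here is a rough sketch in Python of the idea (the exponential-decay peak detector and zero-crossing pitch estimate are my own assumptions for illustration; Keystroke's internals are not public):

```python
import math

def envelope_follower(samples, sample_rate, decay=0.99):
    """Return (peak, frequency) for a block of audio samples.

    peak:      a held maximum of the signal's absolute amplitude,
               with exponential decay between peaks.
    frequency: a crude pitch estimate from counting zero crossings.
    Both methods are illustrative guesses at what such a module does.
    """
    peak = 0.0
    crossings = 0
    prev = samples[0]
    for s in samples:
        peak = max(abs(s), peak * decay)   # hold peaks, let them decay
        if (prev < 0) != (s < 0):          # sign change = zero crossing
            crossings += 1
        prev = s
    duration = len(samples) / sample_rate
    frequency = crossings / (2 * duration)  # two crossings per cycle
    return peak, frequency

# A 440 Hz sine at half amplitude, one second at 8 kHz.
rate = 8000
sine = [0.5 * math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]
peak, freq = envelope_follower(sine, rate)
print(peak, freq)  # roughly 0.5 and roughly 440
```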

So basically the mirror modes and buffer sizes would change according to the output of the chip, generating loops of small snippets of the live video input and then abstracting the image by mirroring it every so often.
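The routing described above can be sketched in plain Python rather than a Keystroke patch. Only the routing (peak into the buffer, frequency stepping through the mirror modes) follows the patch; the mode list comes from the modes named above, and the scaling factor is made up:

```python
# Illustrative sketch of the patch's routing, not Keystroke's actual code.

MIRROR_MODES = ["vertical", "horizontal", "first half", "both", "second half"]

def route(peak, frequency, max_buffer_frames=30):
    """Map the envelope follower's two outputs onto video parameters."""
    # Peak drives how many frames of live video get looped in the buffer
    # (a peak of 0.0..1.0 scaled to a hypothetical 30-frame maximum).
    buffer_size = max(1, int(peak * max_buffer_frames))
    # Frequency drives an oscillator whose rate steps through the modes;
    # here that is collapsed into a simple modulo lookup.
    mode = MIRROR_MODES[int(frequency) % len(MIRROR_MODES)]
    return buffer_size, mode

print(route(0.5, 442))  # a mid-level peak at roughly 440 Hz
```

As the chip's output sweeps through frequencies, the selected mirror mode keeps changing, which is exactly the "abstracting every so often" effect described above.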

I think that Keystroke's audio capabilities are not so good. I have tried to use them before, but not with such an elaborate application. It seems quite unstable: it had worked quite effectively in experimentation mode, but then would not respond as well in performance. And even when it did work, I find the fact that you cannot specify what frequency or peak to respond to, or manipulate this information, annoying. Anyway, I will go on to try this in Jitter.

I am sure it will work better on a dual audio/video platform!