Thursday, November 12, 2009

pitch tracks

So playing back a score in a non-tempered scale was resulting in zippy noises. A look at the midi stream revealed the culprit quickly enough: notes are retuned with pitch bend, and since none of them were overlapping they were all being allocated to the same midi channel. However, they actually are overlapping, since they have a bit of decay time after note off. The tails of the previous notes being bent to prepare for the next note were causing the zippy noises.
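To make that concrete, here's roughly what the offending stream looks like, sketched in Haskell with an event type and numbers I'm making up for illustration: two adjacent notes land on channel 0 because their note-on..note-off ranges don't touch, so the pitch bend that retunes the second note also bends the first note's still-sounding tail.

    -- Invented event type and numbers, just for illustration.
    data Midi = PitchBend Int Int | NoteOn Int Int | NoteOff Int Int
        deriving (Show)

    zippy :: [(Double, Midi)]
    zippy =
        [ (0.0, PitchBend 0 0)   -- retune channel 0 for the first note
        , (0.0, NoteOn 0 60)
        , (1.0, NoteOff 0 60)    -- the decay tail keeps sounding, but...
        , (1.0, PitchBend 0 30)  -- ...this retune for note two bends it
        , (1.0, NoteOn 0 62)
        ]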

So clearly the thing to do was take advantage of a feature I had in place but hadn't plugged in yet: each instrument can have a decay time, which is used to extend a note past its note off time. This way the next note will get a new channel, to avoid overlapping with the decay of the previous one.
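A minimal sketch of that overlap test, with names I'm inventing here: add the instrument's decay time to the note off time before checking whether two notes collide on a channel.

    type Time = Double

    data Note = Note { noteStart :: Time, noteEnd :: Time, noteDecay :: Time }

    -- The note effectively occupies its channel until the decay dies out.
    effectiveEnd :: Note -> Time
    effectiveEnd note = noteEnd note + noteDecay note

    -- Notes that overlap, decay included, must get different channels.
    overlaps :: Note -> Note -> Bool
    overlaps a b = noteStart a < effectiveEnd b && noteStart b < effectiveEnd a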

However, this led to another difficulty. Notes are assigned a pitch signal from their pitch track. Each note conceptually has its own independent pitch signal, but since the notes from one track don't overlap I can assign the same single pitch signal to all of them, and they simply ignore the bits that are before the beginning or after the end of the note. This makes me happy because there are going to be lots of notes with lots of pitches, and having them all share the same signal seems like a big win. However, with the new decay fudge factor they do overlap, which means a note can pick up the pitch change from the beginning of the following note. That makes it look like the note's tail is bending to match the beginning of the next note, which in turn makes it look like the two notes can share a MIDI channel no matter what.
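The sharing scheme, sketched with invented names: every note in a track points at the same signal and just drops the samples outside its own extent. Stretch that extent by the decay time and the window reaches into the next note's retuning.

    type Time = Double
    type Signal = [(Time, Double)]  -- ascending (x, y) samples

    -- Hypothetical helper: the slice of the shared signal a note uses.
    -- With `end` pushed out by the decay time, this now picks up the
    -- next note's initial pitch change too.
    clipToNote :: Time -> Time -> Signal -> Signal
    clipToNote start end =
        takeWhile ((< end) . fst) . dropWhile ((< start) . fst)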

This means I need to clip the pitch signal for each note to end when the next one begins; logically the pitch signal only applies to one note at a time. Fortunately, the underlying array that implements the signals supports efficient slices by simply adjusting offset and length values, so trimming the signal per note won't make me do a lot of splitting and copying to give each note its own signal. However, this brings up yet another problem.
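The trimming idea, sketched with Data.Vector standing in for the real array (its take likewise just adjusts an offset and a length): the clipped signal still shares storage with the original.

    import Data.Maybe (fromMaybe)
    import qualified Data.Vector as Vector

    type Sample = (Double, Double)  -- (x, y)
    newtype Signal = Signal (Vector.Vector Sample)

    -- Clip a signal to end where the next note begins.  The take is an
    -- O(1) slice: no samples are copied.
    clipBefore :: Double -> Signal -> Signal
    clipBefore x (Signal v) = Signal (Vector.take n v)
        where n = fromMaybe (Vector.length v) (Vector.findIndex ((>= x) . fst) v)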

My convention for a signal is that past the last sample it simply holds the final value forever. Rather than making sure every signal processing function knew what to do when it hit the end, I thought it would be simpler to have the constructor tack a sentinel on the end at the highest possible x value. However, trimming a signal will slice that sentinel off, and appending it back on will force a copy and defeat my sharing! And it seems that once I start having to do tricks to maintain the sentinel, any gains in simplicity are compromised anyway. So to support easy trimming I have to kill the sentinel, which means that the resample function has to behave without one.
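Without the sentinel, "hold the final value forever" has to live in the lookup code instead of in an extra sample. A sketch of that convention, names invented:

    type Sample = (Double, Double)

    -- Value of a signal at x: the y of the last sample at or before x,
    -- held forever past the end.  (Going to 0 before the first sample
    -- is a choice I'm making here, not necessarily the project's.)
    at :: [Sample] -> Double -> Double
    at samples x = go 0 samples
        where
        go y [] = y
        go y ((sx, sy) : rest)
            | sx <= x = go sy rest
            | otherwise = y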

Signals are implemented as arrays of (x, y) pairs, which is effectively a variable sampling rate. This has various pros and cons, and one of the cons is that comparing two signals pointwise (say, to decide whether two notes can share a channel) means they have to be resampled to have coincident x's. Every once in a while I spend a whole day banging my head against a problem which seems trivial, and wind up with an ugly incomprehensible mass of code that appears to work. The day I wrote resample was one of those days. A signal without a sentinel goes to 0 after the last sample, which is not correct, but I don't have the fortitude to go in there and try to figure out what to change.

Instead I'll just rewrite it. It should go a lot faster this time. Since signals are not lazy yet, it can emit a list instead of a pair of resampled signals; that way, as soon as the caller establishes that the signals are not equal, it can stop resampling. Of course later there will certainly be callers that want signals back out again, but I should have lazy signals by then anyway.
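Here's one way that rewrite could go, as a sketch with invented names: merge the two sample lists at the union of their x's, with each signal holding its previous value across the other's sample points and its final value past its own end, no sentinel required. Since the output is a lazy list, an equality check stops resampling at the first mismatch.

    type Sample = (Double, Double)

    -- Resample two signals onto the union of their x's.
    resample :: [Sample] -> [Sample] -> [(Double, Double, Double)]
    resample = go 0 0
        where
        go _ _ [] [] = []
        go y1 _ [] ((x, y2) : bs) = (x, y1, y2) : go y1 y2 [] bs
        go _ y2 ((x, y1) : as) [] = (x, y1, y2) : go y1 y2 as []
        go y1 y2 as@((xa, ya) : as') bs@((xb, yb) : bs')
            | xa < xb = (xa, ya, y2) : go ya y2 as' bs
            | xb < xa = (xb, y1, yb) : go y1 yb as bs'
            | otherwise = (xa, ya, yb) : go ya yb as' bs'

    -- Pointwise comparison that bails at the first differing sample:
    signalsEqual :: [Sample] -> [Sample] -> Bool
    signalsEqual s1 s2 = all (\(_, y1, y2) -> y1 == y2) (resample s1 s2)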