Thursday, November 12, 2009

pitch tracks

So playing back a score in a non-tempered scale was resulting in zippy noises. A look at the MIDI stream revealed the culprit quickly enough: notes are retuned with pitch bend, and since none of them were overlapping they were all being allocated to the same MIDI channel. However, they actually are overlapping, since they have a bit of decay time after note off. The tails of the previous notes being bent to prepare for the next note was causing the zippy noises.

So clearly the thing to do was take advantage of a feature I had in place but hadn't plugged in yet: each instrument can have a decay time, which is used to extend a note past its note off time. This way the next note will get a new channel, to avoid overlapping with the decay of the previous one.
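
The overlap test itself is simple enough. Something along these lines, though the Note type and field names here are made up for illustration, not the real ones:

    data Note = Note { noteStart :: Double, noteDur :: Double }

    -- Two notes have to go on different channels if the first, extended
    -- by the instrument's decay time, is still sounding when the second
    -- starts.  Without the decay term, back to back notes look disjoint
    -- and wind up sharing a channel.
    overlapsWithDecay :: Double -> Note -> Note -> Bool
    overlapsWithDecay decay note1 note2 =
        noteStart note1 + noteDur note1 + decay > noteStart note2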

However, this led to another difficulty. Notes are assigned a pitch signal from their pitch track. Each note conceptually has its own independent pitch signal, but since the notes from one track don't overlap I can assign the same single pitch signal to them all, and they simply ignore the bits that are before the beginning or after the end of the note. This makes me happy because there are going to be lots of notes with lots of pitches, and having them all share the same signal seems like a big win. However, with the new decay fudge factor they do overlap, which means a note can pick up the pitch change from the beginning of the note after it. That makes it look like the note's tail is bending to match the start of the next note, which in turn makes it look like they can share MIDI channels no matter what.

This means I need to clip the pitch signal for each note to end when the next one begins; logically, the pitch signal only applies to one note at a time. Fortunately, the underlying array that implements the signals supports efficient slices by simply changing offset and length values, so trimming the signal per note won't make me do a lot of splitting and copying to give each note its own signal. However, this brings up yet another problem.
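
To make the trimming concrete, here's roughly what the no-copy slice looks like. This is only a sketch: the names are invented and the real signal type is more involved, but the idea is that trimming is just arithmetic on the offset and length fields.

    import qualified Data.Vector as V

    -- A signal is a view into a shared array of (x, y) samples.
    data Signal = Signal
        { sigSamples :: V.Vector (Double, Double)
        , sigOffset :: Int
        , sigLength :: Int
        }

    sigView :: Signal -> V.Vector (Double, Double)
    sigView sig = V.slice (sigOffset sig) (sigLength sig) (sigSamples sig)

    -- Trim a note's pitch signal to end where the next note begins.  Only
    -- the length field changes, so the underlying array is still shared.
    trimEnd :: Double -> Signal -> Signal
    trimEnd end sig = case V.findIndex ((>= end) . fst) (sigView sig) of
        Nothing -> sig
        Just i -> sig { sigLength = i }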

My convention for a signal is that past the last sample it simply holds the final value forever. Rather than making sure every signal processing function knew what to do when it hit the end, I thought it would be simpler to have the constructor tack a sentinel on the end at the highest possible x value. However, trimming a signal will slice that sentinel off, and appending it back on will force a copy and defeat my sharing! And it seems anyway that once I start having to do tricks to maintain the sentinel any gains in simplicity are compromised. So to support easy trimming I have to kill the sentinel, which means that the resample function has to behave without one.
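
The semantics I want without the sentinel is at least simple to state, even if resample isn't: a lookup past the end just holds the final sample. A sketch on a bare sample vector, with invented names and an assumed value of 0 for the empty signal:

    import qualified Data.Vector as V

    -- The value at 'x' is the y of the last sample at or before it.
    -- Past the final sample the signal holds that value forever; no
    -- sentinel at a huge x value required.
    valueAt :: Double -> V.Vector (Double, Double) -> Double
    valueAt x samples = case V.findIndex ((> x) . fst) samples of
        Just 0 -> snd (V.head samples)            -- before the first sample
        Just i -> snd (samples V.! (i - 1))
        Nothing
            | V.null samples -> 0                 -- empty signal
            | otherwise -> snd (V.last samples)   -- past the end: hold it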

Signals are implemented as arrays of (x, y) pairs, which is effectively a variable sampling rate. This has various pros and cons, and one of the cons is that comparing two signals pointwise (say, to see if two notes share a pitch signal) means they have to be resampled to have coincident x's. Every once in a while I spend a whole day banging my head against a problem which seems trivial, and wind up with an ugly, incomprehensible mass of code that appears to work. The day I wrote resample was one of those days. A signal without a sentinel goes to 0 after the last sample, which is not correct, but I don't have the fortitude to go in there and try to figure out what to change.

Instead I'll just rewrite it. It should be a lot faster this time. Since signals are not lazy yet, it can emit a list instead of a pair of resampled signals; that way, as soon as the caller establishes that the signals are not equal it can stop resampling. Of course later there will certainly be callers that want signals back out again, but I should have lazy signals by then anyway.
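
Sketched on plain lists for clarity (the real thing works on the sample arrays, and these names are invented): merge the two signals' x's into one lazily produced list of coincident samples, each side holding its previous value between its own samples.

    -- Coincident samples of two variable-rate signals: (x, y1, y2).
    -- Before its first sample a signal is assumed to be 0 here.
    resample :: [(Double, Double)] -> [(Double, Double)]
        -> [(Double, Double, Double)]
    resample = go 0 0
        where
        go _ _ [] [] = []
        go prev1 _ [] ((x2, y2) : rest2) = (x2, prev1, y2) : go prev1 y2 [] rest2
        go _ prev2 ((x1, y1) : rest1) [] = (x1, y1, prev2) : go y1 prev2 rest1 []
        go prev1 prev2 sig1@((x1, y1) : rest1) sig2@((x2, y2) : rest2)
            | x1 < x2 = (x1, y1, prev2) : go y1 prev2 rest1 sig2
            | x2 < x1 = (x2, prev1, y2) : go prev1 y2 sig1 rest2
            | otherwise = (x1, y1, y2) : go y1 y2 rest1 rest2

    -- Since the list is lazy, this stops at the first differing sample.
    signalsEqual :: [(Double, Double)] -> [(Double, Double)] -> Bool
    signalsEqual sig1 sig2 = all (\(_, y1, y2) -> y1 == y2) (resample sig1 sig2)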

Tuesday, October 13, 2009

event selection

Things are always more complicated than you first think.

That was the theme for today. I was fixing a pair of functions for inserting or deleting time. They just either nudge events forward or pull them back. They nudge everything after the selection by the size of the selection, and for convenience, if the selection is a point, they nudge by the current time step.

Ok, but what if the selection is overlapped by an event? Well, I should shorten or lengthen that event too, since I'm inserting or deleting time in its middle. But wait, if I'm shortening the event, I can't shorten it by more than its total duration; it should at most shorten down to the point where I'm nudging. But wait again, what if the time I'm deleting overlaps with the beginning of an event? Well, I shouldn't delete the whole event, I should only clip off the beginning before shifting it back. Of course, if that means clipping the whole note, then it gets deleted after all. So "simple" event nudging is not so simple after all.
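
Spelled out, the delete-time rules come to something like this. It's a hedged sketch; Event and its fields are hypothetical stand-ins, not the real types in Ui.State:

    import Data.Maybe (mapMaybe)

    data Event = Event { evtStart :: Double, evtDur :: Double } deriving (Show)

    evtEnd :: Event -> Double
    evtEnd evt = evtStart evt + evtDur evt

    -- Delete the time in [start, end): later events shift back, an event
    -- overlapping the range gets shortened or its beginning clipped, and
    -- an event entirely inside the range disappears.
    deleteTime :: Double -> Double -> [Event] -> [Event]
    deleteTime start end = mapMaybe adjust
        where
        shift = end - start
        adjust evt
            | evtEnd evt <= start = Just evt
            | evtStart evt >= end =
                Just (evt { evtStart = evtStart evt - shift })
            | evtStart evt < start =
                -- deleting time from its middle: shorten it, but never
                -- to before the start of the deleted range
                Just (evt { evtDur = max start (evtEnd evt - shift) - evtStart evt })
            | evtEnd evt > end =
                -- the deleted range covers its beginning: clip that off
                Just (Event start (evtEnd evt - end))
            | otherwise = Nothing  -- entirely within the range: delete it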

All of this slicing and shifting exposes a weakness in the interface for modifying events in Ui.State, so I wound up rethinking that. The problem is that the traditional definition of a range is half-open, which means it's everything greater than or equal to the start but less than the end. However, the nudge commands, along with a fair number of others, should affect an event directly on the selection even when the selection is a point, which in a traditional half-open range will never select anything. So I wind up with a function for a range, a function for a point, and a lot of code that checks start==end and tries one or the other. This seemed error-prone, so I wanted the range functions to handle that themselves, but baking a nonstandard exception like that into a primitive function seemed like a bad idea, so I wind up with three versions of each function: one for points, one for ranges, and one point/range hybrid.
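
To make the three flavours concrete, here's roughly what I mean, reusing the made-up Event type from the sketch above and one plausible reading of "directly on the selection":

    -- Half-open range: everything starting at or after 'start' and
    -- strictly before 'end'.  A point selection (start == end) selects
    -- nothing here, by definition.
    eventsInRange :: Double -> Double -> [Event] -> [Event]
    eventsInRange start end =
        filter (\evt -> start <= evtStart evt && evtStart evt < end)

    -- Point: the event starting exactly at the position.
    eventsAtPoint :: Double -> [Event] -> [Event]
    eventsAtPoint pos = filter ((== pos) . evtStart)

    -- Hybrid: a range that degrades to the point version when empty, so
    -- callers don't all have to do the start == end check themselves.
    eventsInRangeOrPoint :: Double -> Double -> [Event] -> [Event]
    eventsInRangeOrPoint start end
        | start == end = eventsAtPoint start
        | otherwise = eventsInRange start end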

So even deleting a range of events is not so simple after all.

Hopefully I can put these somewhat complicated but convenient behaviours into standard utilities. Then commands will behave more uniformly and it will be easier to implement them. So while three versions of each ranged function sounds excessive, I think it's probably best in the long run.

Sunday, October 11, 2009

merged tracks

Today I finally completed the merged tracks implementation. Since the merged track is hidden, I had to implement hidden tracks too, and it was more complicated than I thought. Simply pretending to the UI that the track was removed means that tracknums coming from the UI are wrong. I could insert a layer to correct incoming updates with tracknums based on the current hidden tracks, but it just seemed too complicated, so I decided to implement track collapse in the c++ layer. I actually implemented it as collapsing to a divider, since I was never happy with there being both hidden and collapsed tracks.

It meant a bit of hackery because c++ has to remember the state of the collapsed track so it can restore it. Keeping state in c++ is a bit sketchy because it's duplicated information and because it's modified by internal operations instead of going through the normal diff -> update -> sync avenue. For example, what happens if a collapsed track is resized in haskell? When it's expanded, it won't get the new size. Maybe the tracknum translation would have been cleaner? Oh well, I suppose I can switch back if need be.

Along the way I fixed a few long-standing bugs in how tracks are resized.