So here's what I put in my TODO file that finally got me to start the blog. This is what I've been working on for the last week or so:
Adding a simple feature: play back the score step by step instead of in realtime. It's useful to hear how each note gets added to the mix, and to listen for as long as I like, so I want a feature where I can press a key to advance from note to note.
Simple, right? Well, first I need to step note by note, which means adding "event edge" to the time step system. No wait, what I actually want is note beginnings, and that triggers a rewrite of time steps to support merged steps and multiple steps (e.g. I want to start a little while before the insertion point, and "start 4 notes back" seems like a useful thing to be able to say). And then it needs an additional mode that stops as soon as it can no longer step, instead of aborting, because if you say "start 4 notes back" and only 3 notes exist, it's better to start 3 notes back than to fail.
Ok, then I need an efficient way to seek around in the note stream. So add a simple kind of zipper ([reverse a], [a]) and some tricky functions to zip forward and backward. All of this prompts some additions to, and then a reorganization of, the State record. I've found that it's nice to divide state up along its access pattern: a constant bit, a monoidal "collecting" bit, and a truly stateful bit that must be threaded, since later computations may rely on modifications made by earlier ones. That division makes it easier to understand what should be reverted and what retained after a computation aborts, and makes it clear where the troublesome bits are likely to be (the truly stateful part).
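A zipper of that shape can be sketched like so (the names and the "at most n" mode here are my own illustration, not the project's actual code):

```haskell
-- Zipper over a note stream: the prefix is kept reversed, so both the
-- previous and the next element sit at the head of a list.
type Zipper a = ([a], [a])  -- (reversed prefix, suffix)

fromList :: [a] -> Zipper a
fromList xs = ([], xs)

-- Move the focus one element forward, or Nothing at the end.
forward :: Zipper a -> Maybe (Zipper a)
forward (prev, x : next) = Just (x : prev, next)
forward (_, []) = Nothing

-- Move the focus one element backward, or Nothing at the beginning.
backward :: Zipper a -> Maybe (Zipper a)
backward (x : prev, next) = Just (prev, x : next)
backward ([], _) = Nothing

-- The "stop instead of aborting" mode: step back at most n times, going
-- only as far as the stream allows.
backwardAtMost :: Int -> Zipper a -> Zipper a
backwardAtMost n z
    | n <= 0 = z
    | otherwise = maybe z (backwardAtMost (n - 1)) (backward z)
```

So "start 4 notes back" with only 3 notes behind the focus lands on the first note instead of failing, which is the behavior the extra mode is for.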
Now, how to display the current position of the step play? Well, another selection like the realtime playback selection would be logical, but then I run into a problem: a user selection is a single contiguous block, while playback can be at different times simultaneously due to multiple tempi. The realtime playback gets around that by going directly to the GUI instead of being stored in the song state... which makes sense, since you don't want to save the realtime playback position with the piece. So now I need either a separate non-contiguous selection concept, or a way to extend the realtime playback hack to work with step time playback. This involves somewhat deep changes, so I punt for the moment on correctly displaying multiple tempi.
Now the actual playing of the notes... forward works fine, but what about backward? MIDI is extremely stateful, so you can't just play it backward. So: record the state at each point on the way forward, and then, when going backward, emit the messages that take the synthesizer from the current state to the previous one. Fortunately I already have a "synthesizer simulator" used for tests that I can repurpose for this. I punt again and only play forward; I'll do backward later.
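The record-and-diff idea can be sketched with a toy model. Everything here is invented for illustration; the real synthesizer state also tracks held notes, pitch bend, program changes, and so on:

```haskell
import qualified Data.Map as Map

-- Toy synthesizer state: just controller values per (channel, controller).
type Channel = Int
type Control = Int
type SynthState = Map.Map (Channel, Control) Int

data Msg = ControlChange Channel Control Int
    deriving (Eq, Show)

-- The messages that take the synthesizer from state `from` to state `to`:
-- a control change for every value that differs or is newly present.
-- Emitted after a backward step, these restore the previous state.
diffState :: SynthState -> SynthState -> [Msg]
diffState from to =
    [ ControlChange chan ctl val
    | ((chan, ctl), val) <- Map.toList to
    , Map.lookup (chan, ctl) from /= Just val
    ]
```

Controllers diff cleanly like this; held notes are harder, since a NoteOn can't be "un-played", only retriggered, which is part of why backward is worth punting on.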
Ok, and then for the tests... the existing test framework is based around running a single command and examining the results, so I need to extend it to support chaining several commands and inspecting the state in between. After a couple of false starts, along with a general refactoring, it's good enough to do what I want. It turns up a few bugs, but there are still more that only come up when testing by hand.
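A minimal sketch of what command chaining might look like (hypothetical types; the real framework is surely more involved): thread the state through a list of commands and keep every intermediate state, so a test can make assertions in between, stopping at the first failure.

```haskell
-- Run commands in sequence, threading the state through and collecting
-- every intermediate state (including the initial one). Stops at the
-- first command that fails.
runChain :: state -> [state -> Either String state] -> ([state], Maybe String)
runChain st [] = ([st], Nothing)
runChain st (cmd : cmds) = case cmd st of
    Left err -> ([st], Just err)
    Right st2 ->
        let (states, merr) = runChain st2 cmds
        in (st : states, merr)
```

A test then gets the whole list of states back and can inspect any point in the chain, not just the final result.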
One of these bugs leads to another bug-fixing session in the time step code, for some cases that hadn't been tested. Another bug reveals that converting from RealTime to ScoreTime and back to RealTime isn't the identity: there's a tiny bit of error that I never noticed before, because nothing else does a round trip. (Step play does, because I need to step forward by a certain amount relative to a certain track, find the real time difference to play the events, and then find the resulting score position on *all* tracks.) On further thought, it's apparent why: RealTime is fixed point and ScoreTime is a Double, so of course you can't convert between them without losing precision. RealTime is fixed point so that floating-point imprecision doesn't knock things out of alignment at performance time (you can't hear the difference, but if a pitch doesn't line up with its note, it screws up channel allocation).
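The precision mismatch is easy to demonstrate with invented representations: say RealTime is a count of integral microseconds and ScoreTime is a Double of seconds (the actual types surely differ in detail):

```haskell
-- Invented for illustration: RealTime as integral microseconds,
-- ScoreTime as a plain Double of seconds.
newtype RealTime = RealTime Integer
    deriving (Eq, Show)

toScore :: RealTime -> Double
toScore (RealTime us) = fromIntegral us / 1e6

fromScore :: Double -> RealTime
fromScore secs = RealTime (round (secs * 1e6))
```

Anything finer than a microsecond is rounded away on the way into RealTime, so round trips through the pair of conversions cannot be the identity in general: with these toy definitions, `fromScore 0.1234567` is `RealTime 123457`, and converting back gives 0.123457.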
So... either switch ScoreTime to fixed point as well, or switch RealTime to Double and have a separate fixed-point Timestamp? My knee-jerk reaction is to want to switch ScoreTime to fixed point, but that's the reaction that turns these simple feature additions into epic polar expeditions. Maybe I don't need to solve the problem, and can fake it by not bothering to display the position correctly in the presence of multiple tempi.
I always had a bad feeling about using Double for time, all the way from the beginning. In fact, originally the time type was integral, but I switched it to floating point since it seemed more convenient to work with, and of such high precision that any built-up inaccuracy should stay well below the 5ms or so I require to be able to hear the difference. It's convenient to be able to work with arbitrarily divisible units, especially since the convention was that all time is normalized to 0--1 and then stretched to its final size. But I had a bad feeling about it even back then. Well, those chickens may be coming home to roost, I guess. Unfortunately it's a hassle of a change, since it goes all the way down to the C++ layer.
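The worry about accumulation is the standard floating-point one, nothing specific to this codebase: repeated arithmetic on values with no exact binary representation drifts, even if the drift stays far below audibility.

```haskell
-- Ten steps of 0.1 don't sum to exactly 1.0 in a Double, because 0.1
-- has no exact binary floating-point representation.
drift :: Double
drift = sum (replicate 10 0.1)
-- drift /= 1.0, though the error is many orders of magnitude below 5ms
```

The bet back then was that this error stays inaudible, which it does; the round-trip identity problem above is a different, sneakier consequence of mixing representations.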
Anyway, the main point was that every seemingly simple little thing gets complicated in a hurry. I keep thinking some day I should have enough bugs worked out and enough test utilities and enough support in place that this won't happen any more. But now I'm thinking... maybe that never happens. Maybe it's unavoidable that every 5 minutes of adding something new is preceded by 2 hours of fiddling behind the scenes.