A common theme of a lot of my posts is me noting how something that seems simple is actually really complicated, followed by a long exposition of how complicated it really is. I guess that's because I'm continually surprised by how much work it is to do anything. Even in Haskell, a language which is supposed to be conducive to expressing complicated things.
By writing these expositions I'm trying to answer a question that is always on my mind while programming: is music just inherently complicated, or am I failing to see some important abstraction or generalization or even specialization that would simplify things? When I struggle through some never-ending feature-add or bug-fix session, I wonder what I did wrong, and how I could avoid it next time.
Of course this is one of those perennial questions about programming, about inherent versus incidental complexity. People who are good at what they do, and good languages, are supposed to reduce the latter. I'd like to do that, to become good at what I do, and ultimately save time.
Today I wanted to finish a feature to allow piano and ASCII keyboard input for scales with varying numbers of degrees, then slice up and create a sampler patch for the recorded kendang samples from last week, practice mridangam, and then practice kendang (using the new patch). The topeng dancer is coming to tomorrow's rehearsal, and I'd like to practice the cedugan part so as not to embarrass myself too badly.
It's looking like the feature add is not going to get done in time, so I'm putting it on hold while I go to practice. I've actually been working on it for a full week, on and off. It's so hard that it's discouraging, and that slows me down. And it sounds so simple!
The basic problem is that I have input, coming from either a piano-style MIDI keyboard, or the ASCII keyboard mapped to an organ-like layout, and I need to figure out what note in the current scale should come out.
Previously I had a (somewhat) straightforward way. The input key was a Pitch.InputKey, which was just a newtype around an integer: the number of semitones from MIDI note 0, otherwise known as the Midi.Key. Then it was up to the scale to map from InputKey to a Pitch.Note.
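A minimal sketch of what that old scheme could have looked like. The names InputKey and Note come from the post; everything else here (field contents, the example scale) is my own illustration, not the actual code:

```haskell
-- The old scheme: an input key is just an absolute semitone count,
-- the same numbering as a MIDI key.
newtype InputKey = InputKey Int deriving (Eq, Ord, Show)

-- A symbolic pitch name within some scale.
newtype Note = Note String deriving (Eq, Show)

-- Each scale supplies its own mapping from absolute key to symbolic note.
-- E.g. a C major scale, where only the white keys map to scale degrees:
cMajor :: InputKey -> Maybe Note
cMajor (InputKey key) = fmap toNote (lookup pc degrees)
    where
    toNote name = Note (name ++ show octave)
    (octave, pc) = key `divMod` 12
    -- semitone offsets of the white keys within an octave
    degrees = zip [0, 2, 4, 5, 7, 9, 11] ["c", "d", "e", "f", "g", "a", "b"]
```

Keys that don't land on a scale degree come back as Nothing, which is exactly the "some notes unplayable" problem described below when a scale's layout doesn't line up with the piano's.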
This worked well enough at first, but when scales got more complicated it started to have problems.
It started with relative scales, e.g. sa ri ga style. I wanted sa to always be at a constant place, so I had C emit sa. That works fine diatonically but runs into trouble when you start using accidentals. If D major is being played starting at C, the black keys are in the wrong place, and you wind up with some notes unplayable. It got worse when I tried to get Bohlen-Pierce to map right. It has 9 diatonic and 13 chromatic steps per "octave" (tritave, actually), so its layout is nothing like a piano keyboard's.
It seemed like I should treat the ASCII and piano keyboards differently: the ASCII keyboard could always be relative, while the piano keyboard would remain absolute as usual. So the InputKey would get its own structure, with octave, pitch class, and accidentals. This involved updating Pitch.InputKey (and renaming it to Pitch.Input while I was at it), and then redoing how all the scales converted input to symbolic pitches, and that involved a bunch of refactoring for scales, and then fussing around with octave offsets to get them roughly consistent across scales.
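A sketch of what the new structured input might look like. The post only says it carries octave, pitch class, and accidentals and distinguishes the two keyboards, so the constructor and field names here are my guesses:

```haskell
-- The new scheme: instead of an absolute semitone count, the input
-- carries symbolic structure, plus which keyboard it came from, so the
-- ASCII keyboard can be interpreted relative to the scale while the
-- piano keyboard stays absolute.
data KbdType = PianoKbd | AsciiKbd deriving (Eq, Show)

data Input = Input {
    input_kbd :: KbdType
    , input_octave :: Int
    , input_pc :: Int          -- ^ pitch class on the source keyboard
    , input_accidentals :: Int -- ^ sharps (positive) or flats (negative)
    } deriving (Eq, Show)
```

With the keyboard type in the input, each scale can decide how to interpret the pitch class, rather than every scale reverse-engineering a semitone number.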
The actual conversion is a bit of head-hurting modular arithmetic, to convert from a 7- or 10-key keyboard to an n-key scale, wrapping and aligning octaves appropriately. In theory it's straightforward, but finicky and hard for me to keep straight in my head.
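The core of the arithmetic might look something like this. This is my own reconstruction of the kind of wrapping involved, not the actual code: a pitch class from a keyboard row is reinterpreted in a scale with some number of degrees per octave, with keys past the end of the scale's octave spilling into the next octave:

```haskell
-- Map an (octave, pitch class) pair from an input keyboard row onto a
-- scale with per_octave pitch classes.  E.g. the 10th key of an ASCII
-- row (pc 9) in a 9-degree scale like Bohlen-Pierce wraps to degree 0
-- of the next octave.
relativePc :: Int  -- ^ pitch classes per octave in the scale
    -> (Int, Int)  -- ^ (input octave, input pitch class)
    -> (Int, Int)  -- ^ (scale octave, scale pitch class)
relativePc per_octave (octave, pc) =
    (octave * per_octave + pc) `divMod` per_octave
```

The trick is that `divMod` does both the wrapping and the octave carry at once, which is also exactly where it's easy to get the alignment off by an octave for one scale but not another.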
But now it all seems to work, and I can get rid of an awful hack that hardcoded Bohlen-Pierce in favor of all scales adapting based on the number of pitch classes.
Worth it? Maybe? Pitch input is what I spend the most time on when actually writing music, so anything that makes that smoother seems worthwhile.
What else? Slow progress lately, since I've been distracted by travel and sampling. Speaking of which, I recently finished the patches for the gender wayang. That's another thing that sort of boggles the mind. Pemade and kantilan, with umbang and isep variations, ten keys on each, with four dynamics and from three to eight variations of each, played with gender panggul and calung panggul, and muted kebyar style and loose gender style. That's 4,313 samples recorded, edited, organized, and mapped to the right key, velocity range, round robin configuration, and envelope. 2.6 GB worth, and it took two weekends to record and several weeks on and off to do the editing. Even with lots of automation, it's a lot of work, and you need a well-defined process and lots of notes to not make mistakes. There are still tweaks to be made, with jagged dynamics to be smoothed and missing articulations to be rerecorded. I guess that's another thing that's complicated about music.
And I'm not done yet, I still need to do the gender rambat and reyong / trompong, both of which have more keys.