Wednesday, July 8, 2020

another 4 years

It's been so long I'm not even sure what the major changes are since last time. A no-longer-so-new job has kept me much busier for the last 2-3 years, so things have slowed down a lot.

Looking at the highlights from last time:

Im, aka 音, aka the synth backend, has gotten fairly usable, though it hasn't yet been proven to really scale. The main backends are still the sampler and a faust backend.

Solkattu is pretty mature and I use it for all my lessons. Of course there are still bugs and features I could add, but since I've pretty much just added new scores for the last year or so, maybe it could be considered "done enough." I've even retreated on some features: I added a built-in audio realization, but I hardly ever use it, and it adds so many dependencies to the ghci session that I don't want it imported by default.

There is a new text-only score, which is actually its own language with its own syntax. I did a few pieces with it and I think it has some promise, but it's hard to say where its place is. I got tangled up detecting renames and moves when integrating back into karya, and left it there.

Progress has been especially slow over the last 3 months since lockdown, due to being even busier with work, and probably a bit of exhaustion. But over the last few weeks I started getting some things done again, so here's the latest:


I got sidetracked looking into GUI scrolling efficiency and wound up experimenting with the `druid` rust toolkit. The conclusion was that it's too young, and probably has the same performance problem anyway, so I tried another approach: cache the whole window in an image. That makes scrolling quick, but due to being "eagerly evaluated" it makes zooming slow. I'll probably have to go to some tile-oriented scheme, but that's a whole new level of complicated. Drawing a 2D GUI with text is still an unsolved problem. Anyway, that was a bunch of rust and C++.
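The scroll-fast, zoom-slow tradeoff can be sketched abstractly. This is a toy model I'm making up to illustrate the idea, not the actual karya code: scrolling only changes the offset into a cached image, while a zoom change invalidates the cache and forces a full redraw.

```haskell
-- Toy model of the whole-window image cache.  All names are made up.
type Image = [String]  -- stand-in for a real pixel buffer, one row per line

data Cache = Cache { cachedZoom :: Double, cachedImage :: Image }

-- | Get the visible rows, re-rendering the cache only when zoom changes.
draw :: (Double -> Image) -> Double -> Int -> Int -> Cache
    -> (Image, Cache)
draw render zoom scroll height cache
    | cachedZoom cache == zoom =
        (visible (cachedImage cache), cache)  -- scroll: cheap blit from cache
    | otherwise =
        let img = render zoom  -- zoom: eagerly rerender the whole window
        in (visible img, Cache zoom img)
    where visible = take height . drop scroll
```

A tile-oriented version would replace the single `Image` with a map of tiles, invalidating only the ones whose contents changed, which is where the complexity comes in.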

I decided I'd remake a piece I did back in high school. That piece had a sampled breakbeat section, so I decided to start with notation support for breakbeats. That means coming up with sample start offsets for interesting parts of the sample, and a scheme to semi-automatically name those offsets based on measure position. Then I can interpolate to make a general notation for addressing an offset by either position or name (e.g. `sn1-2` for a snare falling on measure 1 beat 2, or `n 1.2` for the same... an abuse of decimal notation, the most confusing part of which is that measure.beat is 1-based, not 0-based). Then there was a fair amount of faffing about to figure out how to map those names to the keyboard such that "octaves" correspond to an integral number of measures, where how many measures fit depends on beats per measure and time step increment.
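To make the measure.beat arithmetic concrete, here's a sketch assuming a constant tempo; `offsetFrames` is a made-up name for illustration, not karya's actual function:

```haskell
-- | Sample frame offset of a 1-based (measure, beat) position, e.g.
-- (1, 2) for `sn1-2`, the snare on measure 1 beat 2.  Since positions
-- are 1-based, (1, 1) is frame 0.
offsetFrames :: Int -> Double -> Int -> (Int, Int) -> Int
offsetFrames beatsPerMeasure bpm srate (measure, beat) =
    round (beatsElapsed * 60 / bpm * fromIntegral srate)
    where
    beatsElapsed = fromIntegral $
        (measure - 1) * beatsPerMeasure + (beat - 1)
```

So at 120 BPM and 44.1kHz in 4/4, measure 1 beat 1 is frame 0 and measure 2 beat 1 is two seconds in.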

I somehow managed to find the CD I had sampled from, um, 25 or so years ago, and the exact break showed up in the Amazon track preview. Last time it was plugging the CD player into some outboard ADC on the Amiga for some nice 8 bit 8kHz samples; this time it's download the whole track in a second, open it in a DAW instantly, and just clip out the right bit.

Now of course, since I have measure positions, I can estimate BPM with some of my stone-age statistics (discard quartiles 1 and 4, take the mean of 2 and 3... I have no idea if this is a real technique, but the idea was to not get thrown off by outliers), and can adjust BPM by adjusting the resample ratio.

That in turn rekindled a desire to bind to the `rubberband` library to get time stretching and pitch shifting, and let's use `c2hs` this time instead of `hsc2hs`. Fortunately I had already set up `c2hs` support for the `libsamplerate` binding, so it just meant rereading its minimal documentation, with some reference to its source code and various blog posts. `rubberband` has a pretty straightforward C API and was really easy to bind compared to the ordeal that was `libsamplerate`, though it helped quite a bit that I used the offline non-streaming version. It might be possible to save and restore `rubberband` state by just serializing its struct, but I don't have the energy for that, so let's live with loading and converting the whole sample at once until it proves to be a problem. That let me hook up the breakbeat notation to BPM control via either pitch or time stretching, whereupon I discovered that even with "percussive" settings, `rubberband` is not able to do this well: attacks tend to get severely mangled. Ok then, never mind about rubberband for now. Maybe it'll be good for special effects.
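The quartile-trimmed averaging can be sketched like this (the names are mine, and the resample ratio comment assumes libsamplerate's output/input convention):

```haskell
import Data.List (sort)

-- | Estimate BPM from a list of beat durations (in seconds): sort,
-- discard the bottom and top quartiles, and average the middle half,
-- so a few outliers don't throw off the estimate.
estimateBpm :: [Double] -> Double
estimateBpm durations = 60 / mean middle
    where
    sorted = sort durations
    q = length sorted `div` 4
    middle = take (length sorted - 2 * q) (drop q sorted)
    mean xs = sum xs / fromIntegral (length xs)

-- | Ratio (output length / input length, as libsamplerate counts it)
-- to get from the estimated BPM to a target BPM: speeding up means
-- fewer output frames, hence a ratio below 1.
resampleRatio :: Double -> Double -> Double
resampleRatio sourceBpm targetBpm = sourceBpm / targetBpm
```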

"Special effects" made me think of the "pitch via comb filter" thing I did for pretty much the first "real" piece I was able to do with karya. At the time, I had to hook up Reaktor to the output of a MIDI VST and fiddle with MIDI routing (which is a simple task that Reaper somehow makes ridiculously complicated... and it still had weird restrictions and broken things with MIDI routing that I forget now). But now that I have my own sampler and faust, I figured it was time to put them together so I can augment samples with effects. I had been intending from the beginning to merge the `faust-im` and `sampler-im` synthesizers, and failed each time due to their fundamentally different attitudes toward overlapping notes. Now it's looking like I'll incorporate faust into the sampler for effects processing, but they'll otherwise both remain independent.
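For reference, the "pitch via comb filter" trick: a feedback comb filter whose delay is sampleRate / frequency has resonant peaks at that frequency and its harmonics, which imposes a pitch on whatever passes through it. A minimal list-based sketch (real code would use arrays, of course):

```haskell
-- | Feedback comb filter: y[n] = x[n] + g * y[n - delay].
comb :: Double -> Int -> [Double] -> [Double]
comb g delay xs = ys
    where
    -- Lazy knot-tying: each output sample depends only on earlier ones.
    ys = zipWith (+) xs (replicate delay 0 ++ map (* g) ys)

-- | Delay in frames that tunes the comb to a given pitch.
tunedDelay :: Double -> Double -> Int
tunedDelay srate hz = round (srate / hz)
```

Feeding an impulse through `comb 0.5 2` gives a decaying echo every 2 frames, which is the resonance that reads as pitch at audio rates.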

Which I managed to do with relatively little hassle, since I already had all the pieces in place for compiling and binding to faust processors, and for saving and restoring state. "Relatively little" hassle is still quite a bit of hassle... dealing with sample streams, note hashes, and processor state is still one of the trickiest parts: hard to think about, hard to debug, hard to write tests for.

I initially intended to add effects per-sample, so each sample could have independent effects in the same way as rubberband, but it seemed easier and more generally useful to have one effect per instrument. There's actually a place for both, but so far only per-instrument is implemented. Other than that, I don't need any routing: each instrument has 0 or 1 effects, because if I want multiple effects I can combine them in faust. Of course I can't precompile every permutation, so adding an effect could mean an awkward wait for a recompile, but that's a general problem with karya. Faust does have an LLVM backend, so it should be possible to do a quick recompile and reload the processor dynamically.
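The "0 or 1 effects" policy amounts to a `Maybe` in the instrument definition. These types are purely illustrative, not karya's actual ones:

```haskell
-- Illustrative sketch: each instrument carries at most one effect,
-- and combining effects happens in faust source, not in a routing
-- layer here.  An effect is modeled as a plain audio transform.
data Instrument = Instrument
    { instName :: String
    , instEffect :: Maybe Effect
    }

newtype Effect = Effect { effectProcess :: [Double] -> [Double] }

-- | Rendering is then just an optional post-processing pass.
renderInstrument :: Instrument -> [Double] -> [Double]
renderInstrument inst = maybe id effectProcess (instEffect inst)
```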

Messing with the breakbeat notation and BPM tuning, I thought I should try it out on a trikalam. This is a simple Carnatic form where you play the same material in three speeds, usually chatusram, tisram, and then melkalam chatusram. I've never heard it done with a breakbeat before, but why not? Just to use all the new things, I also applied a tuned comb filter to give it a sort of melody and emphasize the rhythmic contour. The result is pretty successful at being weird: https://drive.google.com/file/d/1PA_Dub6NJ-zXDqfx7ZmIRR5RX1YjeTVv/view?usp=sharing
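Structurally, a trikalam is just the same phrase rendered three times at different speeds. Here's a toy model, with notes as (start, duration) pairs and duration factors of 1, 4/3, and 1/2 standing in for chatusram, tisram, and melkalam chatusram; that's my simplification of what is really a change of subdivision, so treat the factors as an assumption:

```haskell
type Note = (Double, Double)  -- (start, duration) in beats

-- | Play the phrase in each speed, one after another.  phraseDur is
-- the phrase's total length in beats at the original speed.
trikalam :: Double -> [Note] -> [Note]
trikalam phraseDur phrase = concat
    [ shift start (stretch factor phrase)
    | (start, factor) <- zip starts factors
    ]
    where
    factors = [1, 4/3, 1/2]  -- assumed duration ratios for the 3 speeds
    starts = scanl (+) 0 (map (* phraseDur) factors)
    stretch n = map (\(s, d) -> (s * n, d * n))
    shift t = map (\(s, d) -> (s + t, d))
```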

So in the end I spent about 2 weeks of implementation for the sake of one part of one section of one piece... that's what distraction looks like! I don't feel too bad about it though, because faust effects are generally useful and I had been planning to add them for a long time.

I'll probably use breakbeats more; for some reason I like them, even though I've never really listened to music that uses them. I probably should!