
19.11.2010 SpectMorph: it’s fast now, too – and has many sound examples

I’ve finally managed to make a new release of SpectMorph, a C++ based project for creating and morphing sound models from samples; it still doesn’t have the morphing part, but at least it’s fast now, too. Depending on the CPU used, 100 to 300 simultaneous voices are realistic, which should be enough for almost any composition.

Since SpectMorph can now import SoundFont files, I used this to build many, many sound examples that compare how the SpectMorph models sound with how the original SoundFont sounds. Ideally they would be identical. After listening to quite a few of these files, I’d say that the SpectMorph approach in principle works for a wide variety of sounds, BUT that the encoding algorithm will produce more or less audible artefacts for some sounds, which hopefully can be fixed by improving the encoder.

The SpectMorph Homepage has all the samples (flac, ogg and mp3), so I’m just going to link one example here, to give you an idea of how well SpectMorph and the original can match: Bach on a Church Organ using SpectMorph and using the Original Samples

05.11.2010 Profiling in the new millennium

Over the last few weeks, my top priority for SpectMorph development was performance optimization. I wanted to make sure that musicians will be able to use SpectMorph in highly polyphonic music, so the main goal was to get the per-voice CPU usage down to something close to the theoretical minimum.

In this blog post I’d like to describe the tools I used, because any developer will need to do performance optimization every once in a while. There are three basic techniques that I found of great use. In fact, I believe using these three techniques is sufficient, and that other tools (like gprof) may have been useful in the past but are no longer ideal these days.

Direct time measurement is the perfect, undistorted view of what really happens. If a function is slow, running that function in a loop 100000 times (or so) and calling gettimeofday() before and after the loop will give you an idea of how much time is spent within that function per invocation. This is the obvious way to measure performance, and since the measurement itself barely distorts what it measures, it’s the only “true” performance measure. There is no way around this technique, because anything else you do to measure performance will always affect execution time. So even if your profiling tool (discussed below) tells you that you have made some progress with a change in the code: do not believe it unless the actual execution time goes down even without the profiling tool. I’ve had my share of “good advice” given by some profiling tool which turned out to have absolutely no effect (or to make things slower) when applied to the real-world code.
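To make this concrete, here is a minimal sketch of such a measurement loop; the function being measured is just a made-up placeholder, and the number of runs may need adjusting so that the whole loop takes at least a second or so:

    #include <sys/time.h>
    #include <stdio.h>

    // placeholder: stands in for the real function you want to measure
    static double
    process_block()
    {
      double sum = 0;
      for (int i = 0; i < 64; i++)
        sum += i * 0.5;
      return sum;
    }

    static double
    gettime()
    {
      timeval tv;
      gettimeofday (&tv, 0);
      return tv.tv_sec + tv.tv_usec / 1000000.0;
    }

    int
    main()
    {
      const int RUNS = 100000;
      volatile double dummy = 0;   // keeps the compiler from optimizing the loop away

      double start = gettime();
      for (int i = 0; i < RUNS; i++)
        dummy += process_block();
      double end = gettime();

      printf ("%f microseconds per invocation\n", (end - start) / RUNS * 1000000);
      return 0;
    }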

Valgrind Callgrind + KCachegrind is an easy to use combination to get an overview of which parts of a program are expensive. Using valgrind virtualization (--tool=callgrind), you can run your program and count the number of instructions that are executed in each function. This may be the time to talk about the “new millennium”. Back in 199x, optimization was a fairly easy task. Each processor instruction would take a more or less well defined number of cycles, so minimizing the running time of a function was the same as minimizing the sum of the cycles of its instructions. You could know beforehand that replacing two instructions taking 3 cycles each with one instruction taking 4 cycles would result in a performance gain. If valgrind had been written back then, it would probably have a table of instruction costs built into it, so it could tell you the precise running time of each function (instead of the number of instructions).
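For reference, the whole workflow boils down to two commands (the program name is of course just a placeholder; callgrind writes its results to a file named after the process id):

    valgrind --tool=callgrind ./yourprogram
    kcachegrind callgrind.out.<pid>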

However, processors have changed. There are multiple units for integer and float computations, so sometimes two additions can be executed in parallel, which means that replacing two additions with one will not always speed anything up. There is branch prediction, which means that a correctly predicted jump may take a lot less time than an incorrectly predicted one. Modern processors use pipelining, so they are effectively not executing one instruction at a time, but many in parallel. Data dependencies between instructions will cause this process to be slower in some cases and faster in others. Caches further complicate the issue, because if something is not in the cache, one instruction might take forever, although it’s something simple (like an integer addition).
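A small sketch of the data dependency point (the function names and the assumption that the length is a multiple of four are just for illustration): both loops perform the same number of additions, but in the first one every addition has to wait for the previous result, while the second one keeps four independent chains the processor can work on in parallel. Note that for floats this also changes the order of the additions, so the result may differ in the last bits.

    // one long dependency chain: each addition waits for the previous one
    float
    sum_chained (const float *data, int n)
    {
      float sum = 0;
      for (int i = 0; i < n; i++)
        sum += data[i];
      return sum;
    }

    // four independent partial sums: the additions can overlap in the pipeline
    float
    sum_unchained (const float *data, int n)
    {
      float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
      for (int i = 0; i < n; i += 4)
        {
          s0 += data[i];
          s1 += data[i + 1];
          s2 += data[i + 2];
          s3 += data[i + 3];
        }
      return s0 + s1 + s2 + s3;
    }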

Why bother counting instructions with valgrind, then? Because the number of instructions is still often a good first step to get an idea of what is going on. Rewriting a function that used to take 100 instructions so that it only takes 20 instructions is not guaranteed to speed things up (do a direct time measurement to find out), but it is likely to. Still, you have to be careful when interpreting valgrind’s results, because for instance the FSIN instruction takes forever (90 cycles or so), so replacing one FSIN with ten other instructions will usually speed things up, although valgrind will tell you that the function now takes a bigger share of the total instruction count.
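To make the FSIN example a bit more concrete, this is the kind of replacement I mean: a lookup table with linear interpolation, sketched under the assumption that the input is already in the range [0, 2*pi) and that the reduced precision is acceptable. It uses a handful of cheap instructions per call instead of one very expensive one:

    #include <math.h>

    static const int TABLE_SIZE = 4096;
    static float     sin_table[TABLE_SIZE + 1];

    void
    init_sin_table()
    {
      for (int i = 0; i <= TABLE_SIZE; i++)
        sin_table[i] = sin (i * 2 * M_PI / TABLE_SIZE);
    }

    // cheap approximation of sin (x) for x in [0, 2*pi):
    // one table lookup plus linear interpolation instead of FSIN
    float
    fast_sin (float x)
    {
      float pos  = x * (TABLE_SIZE / (2 * M_PI));
      int   i    = (int) pos;
      float frac = pos - i;
      return sin_table[i] + frac * (sin_table[i + 1] - sin_table[i]);
    }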

Finally, one last remark about why I like valgrind’s callgrind tool so much when visualized with kcachegrind: it gives you a good visualization of which functions call which functions, and how many instructions (or what percentage) are spent where. You do not need to recompile anything (like with gprof) to do this. You can inspect the assembler code along with the source code, to see exactly which instructions a line of source code produces. And it’s not a statistical tool: it counts exactly what would happen if the program were running without valgrind.

OProfile + KCachegrind is the most sophisticated combination I found to approximate what really happens while the program is executed. As I’ve described above, there are a lot of factors that influence how long something /really/ takes. The number of instructions is an approximation, but not the truth. There is no table you can use to find out how long something will take (like in 199x). Instead, OProfile simply uses statistics it collects while the program is running (on the real CPU, not a virtualized one like valgrind), and then assigns the costs to the source/assembler code. There is a nice tool called op2calltree (Debian has it in kcachegrind-converters) that allows browsing the source/assembler code of each function while displaying the OProfile costs.

It’s basically what you want, and I found it really useful. Just one word of warning: I’ve briefly described that in reality a processor does not execute one instruction after another; there are complex, parallel processes that really execute many instructions at a time, on many units (like executing more than one integer multiplication simultaneously, maybe even in combination with FPU instructions). So any tool that assigns costs to individual instructions (and that’s what OProfile does) can never represent what is really happening. Even with this tool, you cannot see what truly happens when the program runs, but only get a simplified view of it.

One example: I’ve seen cases where a good share of the cost of a for() loop was assigned to the loop overhead instructions (jump, add, …). This made me believe that if I unrolled the loop, the function would be faster. I unrolled the loop, and the function was not faster. Why? Probably because before unrolling, the instructions within the loop’s body were executed in parallel with the loop overhead instructions (jump, add, …), and after the unrolling there was no parallel execution left to gain, so in both cases the factor that determined how long the for() loop took was the instructions within the loop, and not the jump, add, … .
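For illustration, this is the kind of transformation I am talking about (the loop body is made up; the unrolled version assumes the element count is a multiple of four to keep the sketch short):

    // original loop: the profiler charges a visible share of the cost
    // to the loop overhead (compare, jump, index increment)
    void
    scale (float *samples, int n, float gain)
    {
      for (int i = 0; i < n; i++)
        samples[i] *= gain;
    }

    // manually unrolled by four: fewer compares and jumps per element,
    // but not necessarily faster, because that overhead was already
    // executing in parallel with the multiplications
    void
    scale_unrolled (float *samples, int n, float gain)
    {
      for (int i = 0; i < n; i += 4)
        {
          samples[i]     *= gain;
          samples[i + 1] *= gain;
          samples[i + 2] *= gain;
          samples[i + 3] *= gain;
        }
    }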

To sum it up: I think these days, understanding why something is slow or how to make it faster is a little more tricky than it used to be in 199x, but by combining the three methods I described, you can get the information you need to optimize the performance of your code.

This blog entry is already longer than what most people would like to read, and I’ve spent many weeks doing nothing but performance optimization, but if you managed to read this far, I’ll just give you a few bullet points to give you an idea of the other issues that may be worth keeping in mind, but which would make this post much, much longer if described in detail:

* optimize only what you know takes a lot of time – if something is not really an issue, there is no need to spend time making it faster

* your intuition about why something is slow will often be wrong – you need to understand the real reason something is slow if you want to make it faster

* branch prediction is important for fast code – a branch the processor cannot predict will make your code much slower than one it can predict

* SSE instructions are your friend when optimizing FPU code; if you know that some part of your algorithm is using lots of FPU instructions, then SSEifying it may make it 2-4 times faster (there is a small sketch of this below the list)

* changing algorithms is often the most powerful optimization – if you can replace an O(N log N) algorithm with an O(N) algorithm, this may reduce the time this step takes far more dramatically than squeezing instructions out of your original approach

* what is optimal depends on the machine that executes your code – for instance, naive float->int conversion is notoriously slow on x86, but not so slow on AMD64; if you want good performance across both CPUs, you need to measure/profile on both targets (there is a small example of this below, too)
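To illustrate the SSE bullet point, here is what SSEifying the simple gain loop from the unrolling example above looks like, using compiler intrinsics. It is only a sketch, assuming the element count is a multiple of four and using unaligned loads/stores, so the alignment and tail handling a real implementation needs is left out:

    #include <xmmintrin.h>

    // processes four floats per multiplication instead of one
    void
    scale_sse (float *samples, int n, float gain)
    {
      __m128 g = _mm_set1_ps (gain);
      for (int i = 0; i < n; i += 4)
        {
          __m128 s = _mm_loadu_ps (samples + i);
          _mm_storeu_ps (samples + i, _mm_mul_ps (s, g));
        }
    }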
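And to illustrate the float->int bullet point: a plain C cast truncates, which with the traditional x86 FPU forces a change of the rounding mode and is therefore surprisingly expensive, while lrintf() from <math.h> converts using the current rounding mode and is usually much cheaper. Note that the two do not compute the same result for non-integral values (rounding versus truncation), and as the bullet point says, the only way to know what actually wins on a given machine is to measure there.

    #include <math.h>

    // notoriously slow with the x86 FPU: the truncating cast forces
    // a rounding mode change for every conversion
    int
    convert_cast (float f)
    {
      return (int) f;
    }

    // usually much cheaper: converts using the current rounding mode
    // (round-to-nearest by default), so it rounds instead of truncating
    int
    convert_lrintf (float f)
    {
      return lrintf (f);
    }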