The open source project I’ve been working on for quite some time is called SpectMorph. So far, it is a partial answer to the question of what makes a piano sound like a piano. The initial idea was that by analyzing single notes of a piano, it would be possible to decompose the samples into a representation that makes it possible to reproduce the original sound, while also allowing advanced operations (in a better way than operating on the wave files directly).
If the SpectMorph model of a piano could capture the piano-like aspects of piano samples, and if the SpectMorph model of a trumpet could capture the trumpet-like aspects of trumpet samples, then a morphing algorithm could blend these two models into a sound somewhere between a piano and a trumpet. That’s the goal of SpectMorph in general. The model SpectMorph uses decomposes the original wave files into a sum of pure sine waves, with time-varying magnitude and phase information. This is a much better representation for morphing samples – at least I think so – than the original wave files. Note that I haven’t invented anything new here; the basic technique is known as Spectral Modelling Synthesis.
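To illustrate the idea, here is a minimal sketch in Python, not SpectMorph’s actual C++ code: each partial is a sine wave with a fixed frequency, a time-varying amplitude envelope, and a start phase; morphing then just interpolates the per-partial parameters between two models. The data layout and function names are assumptions for this illustration only.

```python
import numpy as np

def synthesize(partials, sample_rate=48000, duration=1.0):
    """Resynthesize a sound from a list of partials.

    Each partial is a tuple (freq_hz, amp_envelope, phase), where
    amp_envelope holds amplitude values spread evenly over the note.
    """
    t = np.arange(int(sample_rate * duration)) / sample_rate
    out = np.zeros_like(t)
    for freq, amps, phase in partials:
        # stretch the coarse amplitude envelope over the full duration
        env = np.interp(t, np.linspace(0, duration, len(amps)), amps)
        out += env * np.sin(2 * np.pi * freq * t + phase)
    return out

def morph(partials_a, partials_b, x):
    """Blend two models: interpolate per-partial frequency, amplitude
    envelope and phase, with x = 0 giving model A and x = 1 model B."""
    morphed = []
    for (fa, aa, pa), (fb, ab, pb) in zip(partials_a, partials_b):
        freq = (1 - x) * fa + x * fb
        amps = (1 - x) * np.asarray(aa, float) + x * np.asarray(ab, float)
        phase = (1 - x) * pa + x * pb
        morphed.append((freq, amps, phase))
    return morphed
```

A real implementation has to solve the hard part this sketch skips: matching up which partial of sound A corresponds to which partial of sound B before interpolating.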
Yesterday, I released spectmorph-0.0.2, which can analyze piano samples (and probably others, like trumpet or saxophone samples) and produce a fairly convincing imitation based on the models alone. The morphing remains to be implemented, and other problems that become apparent when listening to the original and SpectMorph versions of the sounds – I created a few listening examples – remain to be fixed; most notably, the attack of the original piano samples sounds harder than in the SpectMorph reproduction.
So this is not end-user-digestible software yet, but if you’re interested in the current state of the code, you can find it here.