Entries Tagged 'General'

09.04.2012 bfsync-0.3.0: now scalable enough to do backups

Traditionally, bfsync is used to get something like “Dropbox”: a shared set of files that is available on every computer you use. And since you can use it with your own server, you don’t need to pay GB/month fees.

Before this release, a bfsync repository could not hold very many files. However, after a few months of porting the code from SQLite to Berkeley DB and optimizing other aspects of the code, it can finally deal with huge numbers of files.

So as a “Dropbox” replacement, this release is as good as the last one. But with this release you can use the bfsync FUSE filesystem to store your backups. Since each file’s contents are stored only once, the repository size shouldn’t grow too fast after the initial backup.
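
To illustrate why the backups stay small, here is a minimal sketch of content-addressed storage in Python: a file’s contents are stored under their hash, so identical contents are kept only once. This only shows the general idea; the directory name and layout are made up and not bfsync’s actual on-disk format.

```python
# Minimal sketch of content-addressed storage: a file's contents are stored
# under their SHA-1 hash, so identical contents are only kept once.
# Illustration only; bfsync's real on-disk layout differs.
import hashlib
import os
import shutil

STORE = "objects"   # hypothetical object store directory

def store_file(path):
    """Copy `path` into the store, keyed by the SHA-1 of its contents."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    digest = h.hexdigest()
    dest = os.path.join(STORE, digest)
    if not os.path.exists(dest):          # already stored -> the backup doesn't grow
        os.makedirs(STORE, exist_ok=True)
        shutil.copyfile(path, dest)
    return digest                         # the metadata only needs to record this hash
```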

Here is the release of bfsync.

01.02.2012 bfsync: the journey from SQLite to Berkeley DB

My software for keeping a collection of files synchronized on many computers, bfsync, works well in the current stable release as long as the number of files in the collection is small (at most a few hundred thousand files). But since I’ve always wanted to use it for backups, too, this is not enough. I blogged about my scalability issues before, and the recommendation was: use Berkeley DB instead of SQLite for the database.

And so I started porting my code to Berkeley DB. I can now report that it’s quite a journey to take, but it seems that it really solved my problem.

* Berkeley DB is not an SQL database: I first tried the Berkeley DB SQL frontend, but this was much slower than using the “native” Berkeley DB methods, so I got rid of all SQL code. This turned out to be a particular challenge because the Python part of my application used to use the SQLite Python binding, so I had to write my own binding for accessing the Berkeley DB functions.

* Berkeley DB does not have a “table data format”: all it can do is store key=value pairs, where key and value need to be passed to the database as binary data blocks. So I wrote my own code for reading/writing data types from/to binary data blocks, which I then used as keys and values.

* Database schema optimization: while porting the code to Berkeley DB, I changed a few things in the database schema. For instance, the old SQLite based code would store a file’s metadata under the file’s ID, and the ID was randomly generated. So if you ran a backup and 100 files had been added to /usr/local, the metadata for those files would be stored “somewhere” in the database, that is, the database would be accessed at 100 random positions. As long as the database is small, that’s not a problem. But if the database is larger than the available cache memory, this causes seeks and is therefore slow. The new (Berkeley DB) database schema generates a prefix for each file ID based on its path. For our example this means that all 100 files added to /usr/local share the same path prefix, which in turn means that the new data is stored next to each other in the database file. This results in much better performance; a rough sketch of both ideas follows after this list.
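
To make the last two points more concrete, here is a minimal sketch in Python. It uses the bsddb3 binding instead of bfsync’s own Berkeley DB binding, and the metadata layout and prefix length are just assumptions for illustration, not bfsync’s actual format.

```python
# Rough sketch of the two ideas above, not bfsync's actual code:
#  - file metadata is packed into a plain byte string (Berkeley DB only
#    stores key=value byte blocks, there is no table format)
#  - the file ID starts with a prefix derived from the directory path, so
#    files from the same directory end up next to each other in the B-tree
import hashlib
import os
import struct

from bsddb3 import db   # generic Python binding; bfsync uses its own binding

def make_file_id(dir_path):
    # 4-byte prefix derived from the directory, 16 random bytes for uniqueness
    prefix = hashlib.sha1(dir_path.encode()).digest()[:4]
    return prefix + os.urandom(16)

def pack_metadata(size, mtime, sha1_hex):
    # hypothetical layout: 64-bit size, 64-bit mtime, 20-byte SHA-1
    return struct.pack(">QQ20s", size, mtime, bytes.fromhex(sha1_hex))

meta_db = db.DB()
meta_db.open("meta.db", dbtype=db.DB_BTREE, flags=db.DB_CREATE)

file_id = make_file_id("/usr/local/bin")
meta_db.put(file_id, pack_metadata(4096, 1328054400,
                                   "da39a3ee5e6b4b0d3255bfef95601890afd80709"))
meta_db.close()
```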

I’ve designed a test which shows how much better the new code is. The test adds 100000 files to the repository and commits; it repeats this over and over again. You’ll see that with the old SQLite code, the time it takes for one round of the test to complete grows pretty quickly. With the Berkeley DB version you can see that more and more files can be added without any difference in performance: adding 100000 files takes an almost constant amount of time, regardless of whether the repository already contains zero or 20 million files.
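
Roughly, the benchmark loop looks like this; the mount point and the exact bfsync invocation are assumptions here, not the actual test script.

```python
# Sketch of the benchmark: repeatedly create 100000 small files inside a
# mounted bfsync checkout and time each round. The "mnt" mount point and
# the exact "bfsync commit" call are assumptions for illustration.
import os
import subprocess
import time

CHECKOUT = "mnt"            # hypothetical mount point of the bfsync FUSE filesystem
FILES_PER_ROUND = 100000

for rnd in range(200):      # 200 rounds -> 20 million files in total
    round_dir = os.path.join(CHECKOUT, "round%03d" % rnd)
    os.makedirs(round_dir)
    start = time.time()
    for i in range(FILES_PER_ROUND):
        with open(os.path.join(round_dir, "file%06d" % i), "w") as f:
            f.write("contents of file %d in round %d\n" % (i, rnd))
    subprocess.run(["bfsync", "commit"], cwd=CHECKOUT, check=True)
    print("round %d took %.1f seconds" % (rnd, time.time() - start))
```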


It will still take some time before the Berkeley DB version of bfsync is stable enough to make a release. The code is available in the bdb-port branch of the repository, but some things remain to be done before it can be used by end users.

23.12.2011 bfsync-0.2.0 OR keeping music/photos/videos on many computers

Many users use more than one computer on a regular basis. For me, using git means that I can work on the same projects no matter whether it’s on my home PC, work PC or laptop. Git allows me to keep the data in sync.

However, for music/photos/videos this doesn’t work, so I wrote a tool for big file synchronization. This new release is almost a complete rewrite: the old code still used git to store the history/metadata, whereas bfsync-0.2.0 uses an SQLite database for that job. This means that where merge conflicts could previously be unintuitive to resolve, bfsync-0.2.0 now asks the user in a better way.

I also added a FUSE filesystem, which means that you no longer need special commands to add, move or remove data. You can use a file manager / rsync / a shell to edit the repository, and bfsync will automatically know what changed on commit. So: if you have big files that you want to have on each computer you work with, try bfsync.

Here is the release of bfsync.

22.11.2011 Dear lazyweb: what to do if SQLite is too slow?

I am working on backup software, and the idea was to store all information about each file in an SQLite database. So there basically is a “file” table that contains things like file size and sha1 hash (and other information). This works nicely as long as there are around 1 million files: insert speed is acceptable, and queries are reasonably fast.
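
For illustration, here is a minimal sketch of such a “file” table using Python’s sqlite3 module; the column names are just an example, the real table has more fields.

```python
# Minimal sketch of the kind of "file" table described above; the column
# names are illustrative, the actual table contains more information.
import hashlib
import sqlite3

conn = sqlite3.connect("files.db")
conn.execute("""CREATE TABLE IF NOT EXISTS file (
                    path TEXT PRIMARY KEY,
                    size INTEGER,
                    sha1 TEXT
                )""")

def add_file(path, data):
    conn.execute("INSERT OR REPLACE INTO file (path, size, sha1) VALUES (?, ?, ?)",
                 (path, len(data), hashlib.sha1(data).hexdigest()))

add_file("/usr/local/bin/example", b"file contents")
conn.commit()

# typical query: look up a file's hash by path
print(conn.execute("SELECT sha1 FROM file WHERE path = ?",
                   ("/usr/local/bin/example",)).fetchone())
```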

However, if I assume that someone will back up the contents of a 2 TB disk, and each file is about 20 kB, there would be over 100 million entries in the file table. I did some tests, and the problem is that SQLite gets slower and slower as the number of files (table entries) grows, to the point where it’s absolutely unusable. Of course, if we’re conservative and assume that each table entry is 100 bytes including all index and internal information, we’re talking about a 10 GB database (100 million entries × 100 bytes), which on most systems will neither be cached completely by the kernel nor fit into the available memory.

Is there any open source alternative (I’d prefer a serverless solution, like SQLite, because it’s easier to set up) that can handle huge tables like the one I need? Preferably with C or C++ and Python language bindings. I’d also use a non-SQL solution; I don’t use many SQL features anyway.

23.08.2011 bfsync-0.1.0 OR managing big files with git home

I’ve been using git to manage my home dir for a long time now, and I’m very happy with that. My desktop computer, laptop, server,… all share the same files. This solution is great for source code, web pages, contacts, scripts, configuration files and everything else that can be represented as text inside the git repo.

However, when it comes to big files, git is a poor solution, and by big files I mean big, like video/music collections, ISO images and so on. I tried git-annex, which seems to address the problem, but I never was happy with it.

So I decided to write my own (open source) big files synchronization program called bfsync, and today I made the first public release. It keeps only hashes of the data in a git repository (so the directory is managed by git), and is able to transfer files between checkout copies on different computers as needed.

You can find the first public release of bfsync here.

30.07.2011 gst123-0.2.1

A new release of gst123, my commandline media player based on GStreamer, is available. If you’re using 0.2.0 and see annoying warnings about option parsing on startup, you probably want to upgrade. There are no other user-visible changes compared to 0.2.0.

21.07.2011 SpectMorph: new release, new sounds

SpectMorph is a project that analyzes instrument sounds so that they can be combined to create new sounds (morphing). It took me a while to get the morphing part implemented so that it sounds reasonable, but here is the result of more than half a year of development time: SpectMorph 0.2.0.

Now, how does it sound to play some chords with an instrument that slowly changes between a trumpet and a male singer?

Trumpet/Ah Example

The release, instruments, and many other examples (ogg/mp3/flac) are available at www.spectmorph.org

11.04.2010 Dear lazyweb: Screencasting under Linux

I’d like to produce screencasts of BEAST, a sequencer/synthesizer under Linux. So ideally, I’d like to record my voice via headset, the screen with the mouse, and the output from BEAST. I tried to install Istanbul, mainly because it’s already packaged for Debian, but it didn’t work.

So if anyone has done screencasting of audio apps (or even normal apps) under Linux and can recommend something that just-works-out-of-the-box, any suggestion is welcome.

19.12.2010 Switching from x2x to synergy

For some time now, I’ve been using x2x to combine two X servers into one virtual workspace. It’s nice, because you can move your mouse off the display (say, your desktop machine) to the X server of another computer (in my use case: my laptop), and continue typing and clicking with your main mouse/keyboard. However, x2x gave me a few headaches, since copy and paste (middle mouse button) would not always work correctly across the two-monitor setup you get.

I found out that synergy does the right thing when it comes to copy and paste, so I switched from x2x to synergy. If you’re seeing the same problems with x2x, I can only recommend trying synergy. Most likely it should “just work” for you, too.

19.11.2010 SpectMorph: it’s fast now, too – and has many sound examples

I’ve finally managed to make a new release of SpectMorph, a C++ based project for creating and morphing sound models from samples; it still doesn’t have the morphing part, but at least it’s fast now, too. Depending on the CPU used, 100 to 300 simultaneous voices are realistic, which should be enough for almost any composition.

Since SpectMorph can now import SoundFont files, I used this to build many many sound examples to compare how the SpectMorph models sound, and how the SoundFont sounds. Ideally they would be identical. After listening to quite a few of these files, I’d say that the SpectMorph approach in principle works for a wide variety of sounds, BUT that the encoding algorithm will produce more or less audible artefacts for some sounds, which hopefully can be fixed by improving the encoder.

The SpectMorph Homepage has all the samples (flac, ogg and mp3), so I’m just going to link one example here, to give you an idea of how well SpectMorph and the original can match: Bach on a Church Organ using SpectMorph and using the Original Samples