Yes, dear reader, you read that right: Hormiga now handles collections too! That means you can now manipulate SPARQL collections (multi-valued predicates) as you’d manipulate a normal Gee collection.

A simple example

Say you have loaded a list of resources, as we did in my last post. So you have a list of Photo objects, which map nmm:Photo resources. Now you want to create a tag and assign it to a photo. This is as simple as:

var my_tag = Tag.create ();
my_tag.label = "My first tag";
my_tag.save ();
photo.tags.add (my_tag);
photo.save ();

Now there are a couple of new functions used here:

  • create () is used to create a new resource and assign it a random urn. The resource is saved into the database immediately. It is different from load (), which takes an urn and can return a proxy for a resource that does not exist in the database.
  • save () saves the modifications made on a proxy to the database. You can also use rollback () if you decide you want to revert the proxy to its original state (see the quick sketch below).
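Here is a minimal sketch of how save () and rollback () interact, using the same generated Tag proxy as above:

var tag = Tag.create ();           // resource already exists in the database, with a random urn
tag.label = "Work in progress";
tag.rollback ();                   // discard the label change, back to the original state
tag.label = "Final label";
tag.save ();                       // persist the label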

If you look at the generated Vala code, you’ll note that GObject properties mapping multi-valued predicates are read-only. This is because you have to work with the collection directly; you cannot do this:

photo.tags = new HashSet<Tag> ();

This is because, under the hood, custom classes are used, and you cannot replace them with a “simple” HashSet.
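The collection itself stays writable, of course. A quick sketch of removing a tag, with the same Photo and Tag proxies as above:

photo.tags.remove (my_tag);   // work on the live collection...
photo.save ();                // ...then persist the change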

The complete example is available as usual on my git forge.

OMGWTFBBQ, but how does it work?

There are two key classes in the way I handle collections: PersistentSet and ValueSet.

PersistentSet is a sort of versioned set: it is initialized with the values loaded from the database, and can then be modified by the user. When saving, it lets you get a diff against the original values, and therefore save in a smarter way than erase-all-then-save-all.
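The real PersistentSet lives in Hormiga’s source; the following is only a toy illustration of the diffing idea, with hypothetical method names:

public class ToyPersistentSet<G> : Gee.HashSet<G> {
    private Gee.Set<G> original = new Gee.HashSet<G> ();

    // Record a value as it was loaded from the database.
    public void init_from_db (G item) {
        original.add (item);
        add (item);
    }

    // Values added since loading: these are the ones to INSERT when saving.
    public Gee.Set<G> added () {
        var result = new Gee.HashSet<G> ();
        foreach (var item in this) {
            if (!(item in original)) {
                result.add (item);
            }
        }
        return result;
    }

    // Values removed since loading: these are the ones to DELETE when saving.
    public Gee.Set<G> removed () {
        var result = new Gee.HashSet<G> ();
        foreach (var item in original) {
            if (!(item in this)) {
                result.add (item);
            }
        }
        return result;
    }
}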

ValueSet is an abstract class, and has a subclass for every handled data type (ValueSetInt, ValueSetString etc.). It is a thin layer above a PersistentSet of GValues that allows you to manipulate the values held inside each GValue directly. For example, ValueSetInt sits above a PersistentSet of GValues of type int64, and behaves as a Gee collection of int64. Modifications to the ValueSet are replicated in the underlying PersistentSet (which will be used when saving).
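To make that idea concrete, here is a rough sketch (not Hormiga’s actual code) of what unwrapping GValues on the fly looks like:

var raw = new Gee.ArrayList<Value?> ();   // roughly what the PersistentSet holds
var v = Value (typeof (int64));
v.set_int64 (42);
raw.add (v);

// A ValueSetInt-style view exposes the stored int64 values directly:
foreach (Value? gv in raw) {
    int64 n = gv.get_int64 ();
    message ("stored value: %s", n.to_string ());
}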

Test it! Crash it!

As always, I’m interested in your feedback, be it a remark on the API, the design, or a crash report. You can check out the source at git://git.mymadcat.com/hormiga.git, or browse it online. A short reference for mapping files and various examples are available.

Just a quick followup to yesterday’s post about Hormiga getting its first interesting features. The following code now works:

public class HormigaTest {
    public HormigaTest () {
    }
    public void run () {
        try {
            foreach (Photo p in Photo.loadAll ()) {
                if (p.tags.size != 0) {
                    foreach (Tag t in p.tags) {
                        message ("Photo %s has tag %s", p.title, t.label);
                    }
                }
            }
        } catch (Error e) {
            warning ("Couldn't load tags: %s", e.message);
        }
    }
}

public static int main (string[] args) {
    var t = new HormigaTest ();
    t.run ();
    return 0;
}

What does that mean? We now have support for predicates pointing to other resources (here, the nao:hasTag predicate), and basic support for collections, i.e. multi-valued predicates.

For the record, the mapping files are included below:

Photo.map

{
    "Class": "nmm:Photo",
    "Name": "Photo",
    "Properties": [
        {
            "Property": "dc:title",
            "Name": "title"
        },
        {
            "Property": "nao:hasTag",
            "Range": "nao:Tag",
            "Name": "tags"
        }
    ]
}

Tag.map

{
    "Class": "nao:Tag",
    "Name": "Tag",
    "Properties": [
        {
            "Property": "nao:prefLabel",
            "Name": "label"
        }
    ]
}

I added a Range keyword to the Property entries in the mapping, because sometimes you want to override the range specified in the ontology. In this case, nao:hasTag has a range of rdfs:Resource, and we want to retrieve nao:Tag objects.

PS. Dear GNOME admins, could we have some antispam mechanism on blogs.gnome.org? I’m flooded with spam for ant killing products…

Ants never die

04/07/2010

Except if you squash them1

I’ve been slowly but steadily making progress on Hormiga, the Tracker based ORM. I wish I had more time to dedicate to this project, but my work on Tracker and Buxus, the company I’m founding with a Chilean friend, has been keeping me rather busy.

Still, I finally reached a point where I can write a mapping file, run hormiga on it, and get a Vala proxy which I can use to access data in Tracker. Here is a quick demo of how it works:

Writing a mapping file

Mapping files are JSON documents. That means they’re easy to write, and they could even be generated by a graphical frontend in the future (or automatically from ontology files, whatever). Here’s a simple mapping file for the nao:Tag class:

{
	"Class": "nao:Tag",
	"Name": "Tag",

	"Properties": [
		{
			"Property": "nao:prefLabel",
			"Name": "label"
		}
	]
}

Here we say we want to generate a proxy to access objects of class nao:Tag, and we want to bind the property nao:prefLabel (the label of the tag) as the “label” property of our proxy. The data type (here, a string) will be automatically deduced from the ontology.

Generating the proxy

Generating the proxy should ideally be part of your build process, and is no more complex than running

hormiga -O /usr/share/tracker/ontologies Tag.map

This command generates a Tag.vala file, which you can compile with the rest of your project. The -O option tells Hormiga where the ontology files are.
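From there, building is just a matter of feeding the generated file to valac along with the rest of your sources. Something along these lines should do, though the exact --pkg list depends on what the generated proxy uses, so treat it as indicative:

valac --pkg gee-1.0 Tag.vala main.vala -o tag-demo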

Using the generated proxy

If you look at the generated code, you’ll notice the constructor is private. Instead, you have to load the proxy using one of the dedicated functions. Well, so far there’s only one, which loads a proxy for an existing resource. This is the first point I’ll improve next week. So, say you have a tag in Tracker with the URI urn:my-tag. You can use code like this:

public class HormigaTest {
	public HormigaTest () {
	}
 
	public void run () {
		try {
			var tag = Tag.load ("<urn:my-tag>");
 
			if (!tag.exists) {
				warning ("Resource %s does not exist", tag.urn);
			}
 
			message ("label: %s", tag.label);
		} catch (Error e) {
			warning ("Couldn't load tag: %s", e.message);
		}
	}
}
 
public static int main (string[] args) {
	var t = new HormigaTest ();
	t.run ();
 
	return 0;
}

Where we are now, and where we’re going

This small demo shows around 100% of the current features implemented in Hormiga. You could say it’s useless, and you wouldn’t be totally wrong :). The interesting part is what comes now that I have my architecture set up.

Next week, I’ll work on loading functions: you’ll probably want to load all tags, or only those matching a certain property. I also intend to let developers use SPARQL to load objects; I don’t think abstracting all of SPARQL behind an API is a good idea.
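To give an idea of the direction, here is a purely speculative sketch: only load () exists today, and the other calls and their names are not final at all.

var one_tag  = Tag.load ("<urn:my-tag>");
var all_tags = Tag.loadAll ();                                          // hypothetical
var matching = Tag.loadFromSparql ("SELECT ?t WHERE { ?t a nao:Tag }"); // hypothetical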

I also have to work on data type mapping: currently, Hormiga deals with strings, integers and dates. It does not yet handle “pointers” to other objects (basically, all predicates pointing to a resource), which is a rather serious limitation. But again, one piece at a time… I don’t want to rush now and have to do a rewrite in three months because of a bad design!

I’ll of course appreciate any type of contribution (feedback, patches), but I’m aware that at this stage the project is not very attractive :). Still, if you feel motivated, just go for it!


1No animals were hurt during the writing of this blog article. And the title is because “hormiga” means “ant” in Spanish.

And here go my five answers to the five questions:

  1. Who are you and what do you do?

    I’m an almost freshly graduated French CS student. I’m currently living in Chile (and have been for one year), but I’m coming back to France very soon.

    As a programmer, I contribute to the Tracker project, where I’m working on building modules that synchronize your online data with your Tracker database. I’ve also been doing some IPC optimization work, which will hopefully soon be merged into master. If you come to GUADEC (and you should, if you can), don’t miss my talk about Tracker and the integration of online services!

  2. How did you get into GNOME?

    I began using Linux with Red Hat 7.2, and I don’t remember what GNOME version it was using. We’ve come a long way since then, and I’m still as excited as a kid when a new feature appears 🙂

  3. Why are you coming to GUADEC?

    I’m coming to GUADEC first for the talks, and second to get a chance to meet people I’ve been working with for some time now, but whom I’ve never met in real life. I really expect to hear/see interesting things at GUADEC!

  4. In 1 sentence, describe what your most favorite recent GNOME project has been. (Doesn’t have to be yours!)

    Ha ha, as a Tracker developer I’m obviously biased… Well, obviously Tracker is awesome, but I’m really excited by what GNOME Shell brings.

  5. Will this be your first time visiting the Netherlands?

    Nope, I’ve been to Amsterdam before… But I’ve never been to Den Haag, so still a discovery for me 🙂



Hello Planet! Here’s my weekly GSOC report for Hormiga, the Tracker based ORM.

While last week was dedicated to coding the basic blocks of the ORM (ontology lexer and parser), this week I set up the first blocks of the ORM itself. What I did:

  1. Define the mapping format: it will be JSON based, and look like this:
    {
        "RdfClass": "nao:Tag",
        "Name": "Tag",
    
        "Properties": [
            {
                "RdfName": "rdfs:label",
                "Name": "label"
            }
        ]
    }

    The good thing is that JSON makes it extensible (the parser will just ignore any unknown element), so that we can add missing bits later.

  2. I wrote a parser for this format
  3. I wrote an ontology parser (what I had before was a Turtle parser, producing statements but not “interpreting” them). It’s pretty basic ATM, only enumerating classes and properties.
  4. I wrote the first steps of the mapping, that is: load the mapping file, check it, load the ontology files, check them, and verify that the classes and properties used in the mapping are defined in the loaded ontologies.

I thought I would be able to do a bit more, but I got more work than I expected (blame my internship tutor, who gave me a lot of interesting things to do). So far I’m not late, but just on time.

What can we expect next week? Well, more thorough checking of the mapping file, and the specification of how code generation will be done. I’m still not totally clear on how I’ll do that, but I’m pretty sure my mentor will have interesting ideas, since he’s the author of Vala.

Hello there!

Here comes my first weekly report. For those who don’t know/remember, I am working this summer on making a Tracker based ORM.

This week, I began to work on the proxy generator. Basically, the generator produces basic proxy classes for the RDF classes you feed it, and those proxies handle all the plumbing with Tracker (you just manipulate the objects as normal objects, and changes are reflected in the DB). All the class methods are virtual, so they can be overridden to implement custom behaviours.
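As a purely illustrative sketch of the kind of override this should enable (the generated Tag proxy and its virtual save () are assumptions at this point, nothing is generated yet):

public class LoggingTag : Tag {
    public override void save () {
        message ("saving tag %s", label);   // custom behaviour first...
        base.save ();                       // ...then the generated plumbing
    }
}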

So far, I’ve got a working RDF ontology parser, and have begun working on the mapper itself. The mapper will load map files that tell it which RDF classes to map, and how to map them. The mapping file format will be JSON based, and I still have to define it formally.

So next week will be dedicated to defining the mapping format, and building a parser for that format. Once that is done, I will implement code generation. I’m still not totally sure about how code generation will work. I know I want the ORM to support more than one language, but I’m not yet fixed on how I’ll implement it.

I haven’t been progressing very quickly so far, because other work is keeping me pretty busy. However, I’m not late (yet), and I hope to deliver a very basic version quickly, so stay tuned 🙂

I just received a mail from the GNOME Travel Committee, who say they approved my request and will help me get to GUADEC. I’m really (really) happy, and would like to give a big thank you to the GNOME Foundation, and to all those who make that funding possible.

So I can now proudly wear that badge: I'm going to GUADEC!

And this one too: Sponsored by GNOME Foundation

Last week, I published a report comparing various IPC methods when transferring a lot of data between two programs. In this report, I compared how DBus performed against a custom IPC system using UNIX sockets. The test case was transferring the result set of a large query run against an SQLite database (300 000 rows, around 220MB to pass from server to client). As expected, the UNIX socket system was much faster and cheaper than DBus, being only around 20% slower than directly accessing the database.

We got quite a lot of useful feedback about the report. In particular, Alexander Larsson suggested that we look at a feature present in DBus 1.3 that allows passing UNIX file descriptors over the bus. The idea was to use DBus for “high level” stuff (client/server discovery, preparing the query, etc.) and a pipe for passing results from the server to the client. He was also kind enough to help me when I was trying to use vmsplice in the server, so he deserves a beer at the next GUADEC 🙂

So I compiled a recent DBus, and started experimenting with FD passing. It turns out it is remarkably easy to do: FDs are passed in messages just like any other DBus parameter. So what happens is that the server creates a pipe when asked to prepare a request. The client then calls Fetch, and the server sends the results over the pipe.
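For reference, here is roughly what that interface looks like written as a Vala/GDBus definition. The names are mine and this is only a sketch; the real test programs live in the git repository mentioned below. GDBus ships the pipe’s file descriptor alongside the message.

[DBus (name = "org.example.SqliteIpc")]
public interface SqliteIpc : GLib.Object {
    // Prepare the query; the returned stream wraps the read end of the
    // pipe created by the server (needs gio-unix-2.0 for UnixInputStream).
    public abstract GLib.UnixInputStream prepare (string query) throws GLib.Error;

    // Ask the server to start streaming result rows into the pipe.
    public abstract void fetch () throws GLib.Error;
}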

In our benchmarks, this solution scored even better than the UNIX socket based IPC. I also tried to experiment with vmsplice to save one memory copy, but didn’t succeed because some kernel bits are missing.

The updated report, along with a spreadsheet containing the results of my benchmarks, is available in the same git repository that also holds all the test programs and measurement scripts.

Also, thanks to the Planet GNOME friends who added me!


Exposing a SQLite database remotely: comparison of various IPC methods

Adrien Bustany
Computer Science student
National Superior School of Informatics and Applied Mathematics of Grenoble (ENSIMAG)

This report was originally published on May 31, 2010 and was updated on May 19, 2010

This study aims at comparing the overhead of an IPC layer when accessing a SQLite database. The two IPC methods included in this comparison are DBus, a generic message passing system, and a custom IPC method using UNIX sockets. As a reference, we also include in the results the performance of a client directly accessing the SQLite database, without involving any IPC layer. Newer versions1 of DBus can pass a file descriptor in messages, allowing the creation of a direct pipe between the client and server. This hybrid approach (between bare socket and DBus) is also evaluated in this report.

Comparison methodology

In this section, we detail what the client and server are supposed to do during the test, regardless of the IPC method used.

The server has to:

  1. Open the SQLite database and listen to the client requests
  2. Prepare a query at the client’s request
  3. Send the resulting rows at the client’s request

Queries are only “SELECT” queries; no modification is performed on the database. This restriction is not enforced on the server side, though.

The client has to:

  1. Connect to the server
  2. Prepare a “SELECT” query
  3. Fetch all the results
  4. Copy the results in memory (not just fetch and forget them), so that memory pages are really used

Test dataset

For testing, we use a SQLite database containing only one table. This table has 31 columns: the first one is the identifier, and the 30 others are of type TEXT. The table is filled with 300 000 rows, with randomly generated strings of 20 ASCII lowercase characters.

Implementation details

In this section, we explain how the server and client for both IPC methods were implemented.

Custom IPC (UNIX socket based)

In this case, we use a standard UNIX socket to communicate between the client and the server. The socket protocol is a binary protocol, detailed below. It has been designed to minimize CPU usage (there is no marshalling/demarshalling of strings, nor intensive computation to decode the messages). It is fast over a local socket, but not suitable for other types of sockets, like TCP sockets.

Message types

There are two types of operations, corresponding to the two operations of the test: prepare a query, and fetch results.

Message format

All numbers are encoded in little endian form.

Prepare

Client sends:

Size          Contents
4 bytes (int) Prepare opcode (0x50)
4 bytes (int) Size of the query (without trailing \0)
variable      Query, in ASCII

Server answers:

Size          Contents
4 bytes (int) Return code of the sqlite3_prepare_v2 call

Fetch

Client sends:

Size          Contents
4 bytes (int) Fetch opcode (0x46)

Server sends rows grouped in fixed size buffers. Each buffer contains a variable number of rows. Each row is complete. If some padding is needed (when a row doesn’t fit in a buffer, but there is still space left in the buffer), the server adds an “End of Page” marker. The “End of page” marker is the byte 0xFF. Rows that are larger than the buffer size are not supported.

Each row in a buffer has the following format:

Size          Contents
4 bytes (int) SQLite return code. This is generally SQLITE_ROW (there is a row to read), or SQLITE_DONE (there are no more rows to read). When the return code is not SQLITE_ROW, the rest of the message must be ignored.
4 bytes (int) Number of columns in the row
4 bytes (int) Index of the trailing \0 of the first column (index 0 is right after the “number of columns” integer, that is, 8 bytes after the message begins)
4 bytes (int) Index of the trailing \0 of the second column (and so on, one index per column)
4 bytes (int) Index of the trailing \0 of the last column
variable      Row data. All columns are concatenated together, and separated by \0

For the sake of clarity, here is an example row:

100 4 1 7 13 19 1\0aaaaa\0bbbbb\0ccccc\0

The first number, 100, is the return code, in this case SQLITE_ROW. This row has 4 columns. The 4 following numbers are the offsets of the \0 terminating each column in the row data. Finally comes the row data itself.

Memory usage

We try to minimize the calls to malloc and memcpy in the client and server. As we know the size of a buffer, we allocate the memory only once, and then use memcpy to write the results to it.

DBus (without file descriptor passing)

The DBus server exposes two methods, Prepare and Fetch.

Prepare

The Prepare method accepts a query string as a parameter, and returns nothing. If the query preparation fails, an error message is returned.

Fetch

Ideally, we should be able to send all the rows in one batch. DBus, however, puts a limit on the message size. In our case, the complete data to pass over the IPC is around 220MB, which is more than the maximum size allowed by DBus (moreover, DBus marshals data, which increases the message size a little). We are therefore obliged to split the result set.

The Fetch method accepts an integer parameter, which is the number of rows to fetch, and returns an array of rows, where each row is itself an array of columns. Note that the server can return fewer rows than asked for. When there are no more rows to return, an empty array is returned.

DBus (with file descriptor passing)

In this case, we use DBus for high level communication between the server and client, but use a dedicated pipe for transferring the data itself. This allows very fast transmission rates, while keeping the comfort of using DBus for remote method calls.

We keep the same methods, Prepare and Fetch, although their prototypes change a bit.

Prepare

The Prepare method accepts a query, and returns a file descriptor from which to read the results. If the preparation fails, an error message is returned. Internally, the server creates a pipe and returns its descriptor to the client.

Fetch

When the Fetch method is called, the server will write the results on the previously returned file descriptor. The protocol used is the same as with UNIX sockets.

Results

All tests are run against the dataset described above, on a warm disk cache (the database is accessed several times before every run, to be sure the entire database is in the disk cache). We use SQLite 3.6.22, on a 64 bit Linux system (kernel 2.6.33.3). All tests are run 5 times, and we use the average of the 5 intermediate results as the final number.

For both the custom IPC and the DBus method with file descriptor passing, we test with buffer sizes varying from 1 to 256 kilobytes. For simple DBus, we fetch 75000 rows with every Fetch call, which is close to the maximum we can fetch with each call (see the paragraph on the DBus message size limitation).

The first tests were to determine the optimal buffer size when transmitting data over a socket or a pipe. The following graph describes the time needed to fetch all rows, depending on the buffer size:

The graph shows that the buffer size leading to the best throughput is 64 kB. These results depend on the type of system used, and might have to be tuned for different platforms. On Linux, a memory page is (generally) 4096 bytes; as a consequence, buffers smaller than 4 kB will still use a full memory page when sent over the socket, wasting memory bandwidth.
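As a rough sanity check on those numbers (my own back-of-the-envelope figure, not a measurement): 220 MB spread over 300 000 rows is about 730 bytes per row on the wire, so a 64 kB buffer carries on the order of 85 to 90 complete rows per write.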

After determining the best buffer size, we run tests for speed and memory usage, using a buffer size of 64 kB for both the UNIX socket based method and the DBus method with file descriptor passing.

Speed

We measure the time it takes for the various methods to fetch a result set. Unsurprisingly, the time needed to fetch the results grows linearly with the number of rows to fetch.

IPC method            Best time
None (direct access)  2910 ms
UNIX socket           3470 ms
DBus                  12300 ms
DBus with FD passing  3329 ms

Memory usage

Memory usage varies greatly between IPC methods (so much, actually, that we had to use a log scale on the graph). The memory usage of DBus without file descriptor passing is explained by the fact that we fetch 75 000 rows at a time, and that DBus has to allocate the whole message before sending it, while the socket/pipe based IPC methods use 64 kB buffers.

A note about vmsplice

While a good share of the CPU time is obviously spent in SQLite itself, memory copies are also pretty expensive. When experimenting with DBus file descriptors and pipes, I tried to use the vmsplice function of the Linux kernel. vmsplice, used with its SPLICE_F_GIFT flag, allows us to map some user data directly into a pipe, saving a memory copy. Since the memory is gifted to the kernel, it must not be modified while it is still being read by anyone (otherwise data would be read and modified at the same time). In our case, the server uses a ring buffer (allocated once at the start of the program), which can’t be reused while the client is still reading data. The trick to make sure the mapped memory is no longer in use is to map into the pipe a buffer twice as large as the maximum pipe buffer size: when vmsplice returns, we know that at least half of our buffer has been “consumed”, so we can write to its first half again (its second half might still be under IO). However, the system call to get the maximum pipe buffer size, F_GETPSZ, is not implemented as of today. I was therefore unable to produce a vmsplice based version that was robust against memory corruption. In any case, the performance gains observed the few times the vmsplice test program worked were not significant.

Alexander Larsson deserves a huge thanks for his help in trying to get things right with vmsplice.

Conclusions

The results clearly show that in such a specialized case, designing a custom IPC system can greatly reduce the IPC overhead. The overhead of a UNIX socket based IPC is around 19%, while the overhead of DBus is 322%. However, it is important to take into account the fact that DBus is a much more flexible system, offering far more features than our socket protocol. Comparing DBus and our custom UNIX socket based IPC is like comparing an axe with a Swiss Army knife: it’s much harder to cut down a tree with the Swiss Army knife, but it also includes a tin can opener, a ballpoint pen and a compass (nowadays some of them even include USB keys).

Actually, the new file descriptor passing feature of DBus (added by Lennart Poettering) brings us the best of both worlds: high level, flexible IPC for method calls, and fast data transfer using a pipe for heavy transfers. On the benchmarks, pipes are even a bit faster than sockets.

The source code used to obtain these results, as well as the numbers and graphs used in this document, can be checked out from the following git repository: git://git.mymadcat.com/ipc-performance. Please check the various README files to see how to reproduce the results and/or tune the parameters.

1DBus 1.3, not released as “stable” as of today

Tracker ORM: Step 0

02/05/2010

So I’ve been accepted for the second year as a Google Summer of Code student. I want to thank all the people who supported my proposals (yes, there were two of them, obviously only one got accepted), and of course Google for sponsoring Open Source work.

For this summer, my job will be to build an ORM using Tracker. Those who looked at other articles in this blog know that I’ve been working on Tracker for quite some time now, so it’s no surprise I’ll continue working on the same project 🙂

Introducing Hormiga

So far, I’m in the early planning of the project, mainly looking at how others do it, both in the SPARQL and the SQL worlds. What I know for sure is that this project will be called “Hormiga”: “Hormiga” has “ORM” inside it, and means “ant” in Spanish. An ant is lightweight but strong, two qualities that would be nice to have in the final product.

What is it for ?

Hormiga aims at lowering the bar when it comes to working with SPARQL and RDF databases. Some operations are complex or tedious; Hormiga aims at making them easier. However, I also intend to keep the possibility of making direct SPARQL requests, for those who know what they’re doing.

How will it work ?

I’m not totally sure yet of how Hormiga will be used. Most ORM solutions I saw for RDF use reflection in the language they target (Java, Python, Ruby). I don’t want to restrict Hormiga so much, so reflection is ruled out for now. I’m thinking more of a workflow along the lines of Apache Cayenne, which generates a simple proxy that can be subclassed to implement custom behaviors.

With such an approach, the workflow would be:

  1. Write a mapping file, possibly in JSON or XML. The mapping file(s) define the (RDF) classes and properties to map, and how to map them.
  2. Run the ORM generator on the mapping file to produce proxies for each mapped class (I’m thinking about targeting Vala at first, which gives us C for free, but Python or JavaScript backends could be added in the future).
  3. Subclass (if needed) the produced proxies. The generated proxies should not be modified, in fact generation should be part of the build process.
  4. Use the produced classes as if they were normal objects.

We will probably have an entity manager, as most ORMs do, to do queries and update objects (from/to the DB).

Things I’d like to get right from the beginning

  • Lazy loading: the more we can defer the actual work, the snappier the application will be
  • Doing the maximum on the SPARQL side: do as much as possible of the filtering on Tracker’s side, and only load what’s needed
  • Stay flexible: always allow the user to fall back to SPARQL if the API is not complete enough

I know there’s not a lot of concrete things in this blog post, but it should get better in the weeks to come 🙂 Meanwhile, you can always tell me your thoughts (if you have some about Hormiga) in the comments!

Well, Philip has already blogged about it. But it’s now public! And it has a name!

What the heck are you talking about?

About Otto, a small application built on the MeegoTouch (formerly called Dui) framework. MeegoTouch will be the UI framework powering Harmattan, the next Maemo version.

Ok, and what is it supposed to do?

Otto uses Tracker (which will be deeply integrated into Harmattan) to look for untagged music tracks (but you can also search for a specific track to tag). For a given track, it can search for tags online automatically, by computing an audio fingerprint and matching it against the MusicDNS service. Tags, if available, are retrieved from MusicBrainz, a community driven metadata database. All that in one click (or touch, if you’re using a touchscreen). Because tagging tracks is boring, it should be as automatic as it can be.

Interesting, where do I get it?

I put the code on Gitorious. To compile it, you need a recent enough version of MeegoTouch, and libofa. The former you’ll probably have to compile from source; the latter is packaged in most distributions out there.

This thing totally doesn’t work! Help!

Well, it works well enough for me, but of course I don’t expect the code to be bug free. I’ll gladly answer any question in the comments. Also, merge requests are enabled on Gitorious, so send your patches!