Today I finally added support in Hormiga (your favourite Tracker ORM ((It is the only one. So it’s necessarily your favourite.))) for loading proxies by the value of one of their properties. In short, this means that you can now do

var my_proxies = MyProxyClass.load_from_my_property ("my_value");

Following the previous example, you could now use the following code to load all the photos with tag “foobar”:

var tag_foobar = Tag.load_from_label ("foobar");
var photos = Photo.load_from_tag (tag_foobar);

The documentation was updated too.

The next step will be to allow direct SPARQL queries to load proxies, as I don’t intend to hide every SPARQL feature under the API (yes, I do think developers can learn SPARQL, come on, it’s easy).
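As a sketch of what that might look like (load_from_query and its parameter are hypothetical names, this API does not exist yet):

```vala
// Hypothetical future API: load proxies straight from a SPARQL query.
// The ?urn variable would be bound to the resources to wrap as proxies.
var tagged_photos = Photo.load_from_query ("""
    SELECT ?urn WHERE {
        ?urn a nmm:Photo ;
             nao:hasTag ?t .
        ?t nao:prefLabel "foobar" .
    }
""");
```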

Yes, dear reader, you read it well: Hormiga now handles collections too! That means you can now manipulate SPARQL collections (multi-valued predicates) as you’d manipulate a normal Gee collection.

A simple example

Say you have loaded a list of resources, as we did in my last post. So you have a list of Photo objects, which map nmm:Photo resources. Now you want to create a tag and assign it to a photo. This is as simple as:

var my_tag = Tag.create ();
my_tag.label = "My first tag";
my_tag.save ();
photo.tags.add (my_tag);
photo.save ();

Now there are a couple of new functions used here:

  • create () is used to create a new resource and assign it a random urn. The resource is saved into the database immediately. It differs from load (), which takes an urn and can return a proxy to a resource that does not exist in the database.
  • save () saves the modifications made to a proxy into the database. You can also use rollback () if you decide you want to revert the proxy to its original state.
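For instance, rollback () makes it cheap to abandon a half-done edit (a minimal sketch using the API above):

```vala
var tag = Tag.load ("<urn:my-tag>");
tag.label = "Renamed by mistake";
// Changed our mind: revert the proxy to the state loaded from the database.
tag.rollback ();
// tag.label is back to its original value; nothing was written to Tracker.
```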

If you look at the generated Vala code, you’ll note that GObject properties mapping multi-valued predicates are read only. This is because you have to work with the collection directly; you cannot do this:

photo.tags = new HashSet<Tag> ();

This is because under the hood custom classes are used, and you cannot replace them with a “simple” HashSet.

The complete example is available as usual on my git forge.

OMGWTFBBQ, but how does it work?

There are two key classes in the way I handle collections: PersistentSet and ValueSet.

PersistentSet is a sort of versioned set: it is initialized with the values loaded from the database, and can then be modified by the user. When saving, it lets you get a diff against the original values, and therefore save in a smarter way than erase-all-then-save-all.
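The idea behind it can be sketched like this (the names and code are illustrative, not the actual Hormiga internals):

```vala
// Illustrative sketch of a versioned set: remember the values loaded from
// the database, and compute a diff against them at save time.
public class VersionedSet<T> {
    private Gee.HashSet<T> original = new Gee.HashSet<T> ();
    private Gee.HashSet<T> current = new Gee.HashSet<T> ();

    public void add (T item) { current.add (item); }
    public void remove (T item) { current.remove (item); }

    // Values added since load: present now, absent originally.
    public Gee.Set<T> added () {
        var result = new Gee.HashSet<T> ();
        foreach (var item in current) {
            if (!original.contains (item)) result.add (item);
        }
        return result;
    }

    // Values removed since load: present originally, absent now.
    public Gee.Set<T> removed () {
        var result = new Gee.HashSet<T> ();
        foreach (var item in original) {
            if (!current.contains (item)) result.add (item);
        }
        return result;
    }
}
```

At save time, only the elements in added () need INSERT statements and only those in removed () need DELETE statements, instead of erasing and rewriting the whole multi-valued predicate.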

ValueSet is an abstract class, and has a subclass for every handled data type (ValueSetInt, ValueSetString etc.). It is a thin layer above a PersistentSet of GValues that lets you manipulate the values held inside each GValue directly. For example, ValueSetInt sits above a PersistentSet of GValues of type int64, and behaves as a Gee collection of int64. Modifications to the ValueSet are replicated in the underlying PersistentSet (which will be used when saving).
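Conceptually, the typed subclasses just box and unbox on the way through (a simplified illustration, not the real implementation):

```vala
// Simplified idea behind ValueSetInt: the caller sees int64, while the
// underlying set only ever sees GValues.
void add_int (Gee.Set<GLib.Value?> backing, int64 i) {
    var v = GLib.Value (typeof (int64));
    v.set_int64 (i);
    backing.add (v); // replicated into the set used at save time
}
```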

Test it! Crash it!

As always, I’m interested in your feedback, be it a remark on the API, the design, or a crash report. You can check out the source at git://git.mymadcat.com/hormiga.git , or browse it online. A short reference for mapping files and various examples are available.

Just a quick followup to yesterday’s post about Hormiga having its first interesting features: the following code now works:

public class HormigaTest {
    public HormigaTest () {
    }
    public void run () {
        try {
            foreach (Photo p in Photo.loadAll ()) {
                if (p.tags.size != 0) {
                    foreach (Tag t in p.tags) {
                        message ("Photo %s has tag %s", p.title, t.label);
                    }
                }
            }
        } catch (Error e) {
            warning ("Couldn't load tags: %s", e.message);
        }
    }
}

public static int main (string[] args) {
    var t = new HormigaTest ();
    t.run ();
    return 0;
}

What does that mean? We now have support for predicates pointing to other resources (here, the nao:hasTag predicate), and basic support for collections, a.k.a. multi-valued predicates.

For the record, the mapping files are included below:

Photo.map

{
    "Class": "nmm:Photo",
    "Name": "Photo",
    "Properties": [
        {
            "Property": "dc:title",
            "Name": "title"
        },
        {
            "Property": "nao:hasTag",
            "Range": "nao:Tag",
            "Name": "tags"
        }
    ]
}

Tag.map

{
    "Class": "nao:Tag",
    "Name": "Tag",
    "Properties": [
        {
            "Property": "nao:prefLabel",
            "Name": "label"
        }
    ]
}

I added a Range keyword to the Property definitions in the mapping, because sometimes you want to override the range specified in the ontology. In this case, nao:hasTag has a range of rdfs:Resource, and we want to retrieve nao:Tag objects.
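Concretely, the override changes the type of the generated collection property (an illustrative sketch, not the exact generated code):

```vala
// With "Range": "nao:Tag", the objects of nao:hasTag are wrapped in Tag
// proxies, so the Photo proxy can expose a typed collection:
//
//     public Gee.Set<Tag> tags { get; }
//
// Without the override, the declared range (rdfs:Resource) would only
// justify something untyped, such as a set of bare resource urns:
//
//     public Gee.Set<string> tags { get; }
```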

PS. Dear GNOME admins, could we have some antispam mechanism on blogs.gnome.org? I’m flooded with spam for ant killing products…

Ants never die

04/07/2010

Except if you squash them1

I’ve been slowly but steadily making progress on Hormiga, the Tracker based ORM. I wish I had more time to dedicate to this project, but my work on Tracker and Buxus, the company I’m founding with a Chilean friend, has been keeping me rather busy.

Still, I finally reached a point where I can generate a mapping file, run hormiga on it, and get a vala proxy which I can use to access data in Tracker. Here is a quick demo of how it works:

Writing a mapping file

Mapping files are JSON documents. That means they’re easy to write, and they could even be generated by a graphical frontend in the future (or automatically from ontology files, whatever). Here’s a simple mapping file for the nao:Tag class:

{
	"Class": "nao:Tag",
	"Name": "Tag",

	"Properties": [
		{
			"Property": "nao:prefLabel",
			"Name": "label"
		}
	]
}

Here we say we want to generate a proxy to access objects of class nao:Tag, and we want to bind the property nao:prefLabel (the label of the tag) as the “label” property of our proxy. The data type (here, a string) will be automatically deduced from the ontology.
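The generated proxy then exposes this as an ordinary GObject property (a rough sketch of the kind of code Hormiga emits; the real generated file also contains the Tracker plumbing):

```vala
public class Tag : GLib.Object {
    // Maps nao:prefLabel; the string type was deduced from the ontology.
    public string label { get; set; }

    // The constructor is private: proxies are obtained through loaders.
    private Tag () {
    }
}
```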

Generating the proxy

Generating the proxy should ideally be part of your build process, and is no more complex than running

hormiga -O /usr/share/tracker/ontologies Tag.map

This command generates a Tag.vala file, which you can compile with the rest of your project. The -O option tells Hormiga where the ontology files are.

Using the generated proxy

If you look at the generated code, you’ll notice the constructor is private. Instead, you have to load the proxy using one of the dedicated functions. Well, so far there’s only one, which loads a proxy for an existing resource. This is the first point I’ll improve next week. So, say you have a tag in Tracker with the URI urn:my-tag. You can use code like

public class HormigaTest {
	public HormigaTest () {
	}
 
	public void run () {
		try {
			var tag = Tag.load ("<urn:my-tag>");
 
			if (!tag.exists) {
				warning ("Resource %s does not exist", tag.urn);
			}
 
			message ("label: %s", tag.label);
		} catch (Error e) {
			warning ("Couldn't load tag: %s", e.message);
		}
	}
}
 
public static int main (string[] args) {
	var t = new HormigaTest ();
	t.run ();
 
	return 0;
}

Where we are now, and where we’re going

This small demo shows around 100% of the current features implemented in Hormiga. You could say it’s useless, and you wouldn’t be totally wrong :). The interesting part is what comes now that I have my architecture set up.

Next week, I’ll work on loading functions: you’ll probably want to load all tags, or only those matching a certain property. I also intend to let developers use SPARQL to load objects, as I don’t think abstracting all SPARQL under an API is a good idea.

I also have to work on data type mapping: currently, Hormiga deals with strings, integers and dates. It does not yet handle “pointers” to other objects (basically, all predicates pointing to a resource), which is a rather serious limitation. But again, one piece at a time… I don’t want to rush now and have to do a rewrite in three months because of a bad design!

I’ll of course appreciate any type of contribution (feedback, patches), but I’m aware that at this stage the project is not very attractive :). Still, if you feel motivated, just go for it!


1No animals were hurt during the redaction of this blog article. And the title is because “hormiga” means “ant” in Spanish.

Hello Planet! Here’s my weekly GSOC report for Hormiga, the Tracker based ORM.

While last week was dedicated to coding the basic blocks of the ORM (ontology lexer and parser), this week I set up the first blocks of the ORM itself. What I did:

  1. Define the mapping format: it will be JSON based, and look like this:
    {
        "RdfClass": "nao:Tag",
        "Name": "Tag",

        "Properties": [
            {
                "RdfName": "rdfs:label",
                "Name": "label"
            }
        ]
    }

    The good thing is that JSON makes it extensible (the parser will just ignore any unknown element), so we can add missing bits later.

  2. I wrote a parser for this format.
  3. I wrote an ontology parser (what I had before was a turtle parser, producing statements but not “interpreting” them). It’s pretty basic ATM, only enumerating classes and properties.
  4. I wrote the first steps of the mapping, that is: load the mapping file, check it, load the ontology files, check them, and check that the classes and properties used in the mapping are defined in the loaded ontologies.

I thought I would be able to do a bit more, but I got more work than I expected (blame my internship tutor, who gave me a lot of interesting things to do). So far, I’m not late, but just on time.

What can we expect next week? Well, more thorough checking of the mapping file, and the specification of how code generation will be done. I’m still not totally clear on how I’ll do that, but I’m pretty sure my mentor will have interesting ideas, since he’s the author of Vala.

Hello there!

Here comes my first weekly report. For those who don’t know/remember, I am working this summer on making a Tracker based ORM.

This week, I began to work on the proxy generator. Basically, the generator generates basic proxy classes for the RDF classes you feed it, and those proxies handle all the plumbing with Tracker (you just manipulate the objects as normal objects, and changes are reflected in the DB). All the class methods are virtual, so they can be overridden to implement custom behaviours.
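For example, since the generated methods are virtual, custom behaviour is just a subclass away (a sketch against a hypothetical generated Tag proxy with a virtual save ()):

```vala
public class LoggingTag : Tag {
    // Override the generated save () to log every write to Tracker.
    public override void save () {
        message ("Saving tag %s", label);
        base.save ();
    }
}
```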

So far, I’ve got a working RDF ontology parser, and have begun working on the mapper itself. The mapper will load map files that tell it which RDF classes to map, and how to map them. The format of the mapping files will be JSON based, and I still have to define it formally.

So next week will be dedicated to defining the mapping format, and building a parser for that format. Once that is done, I will implement code generation. I’m still not totally sure about how code generation will work. I know I want the ORM to support more than one language, but I haven’t yet decided how I’ll implement it.

I haven’t progressed very quickly so far, because other work is keeping me pretty busy. However, I’m not late (yet), and I hope to deliver a very basic version quickly, so stay tuned :)