Profiling GMarkup

Since I’ve been rewriting the XBEL parser to use GMarkup in place of libxml2, I’ve also felt the need to get some profiling going.

I’ve always felt that one of the biggest bottlenecks in the recent-files code was its use of GMarkup for the recent files storage; it appears that I was wrong – at least as far as GMarkup is concerned.

I’ve added some profiling code in my local development tree (I’ve yet to upload it to libegg, since it’s not finished or polished enough), and here are some results:

P: [ load_from_file:000.000000] - loading from file 'test-file.xbel'
P:  [ load_from_data:000.000000] - begin loading from data
P:   [     parse_data:000.000000] - beginning parse_data
P:  [     parse_data:000.163825] - finished parsing
P: [ load_from_data:000.164151] - finished loading data
P:[ load_from_file:000.167050] - finished loading from file 'test-file.xbel'
782 bookmarks loaded successfully.

(Numbers are seconds and microseconds timestamps, as returned by gettimeofday(3), launching from what was supposed to be a cold cache state – the cache wasn’t too cold, as later profiling showed; with the file and libraries fully in cache the times are pretty much halved.) As you can see, test-file.xbel is an XBEL-compliant stream containing 782 bookmarks. I know this is Poor Man’s Profiling(tm), but it’s better than nothing anyway.
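
The profiling code itself is nothing fancy – just gettimeofday(3) samples and a fprintf(); a minimal sketch of the idea (the helper and the exact output format here are illustrative, not necessarily what sits in my tree) could look like this:

#include <stdio.h>
#include <sys/time.h>

/* elapsed time between two gettimeofday(3) samples, in seconds */
static double
elapsed (const struct timeval *start,
         const struct timeval *end)
{
  return (double) (end->tv_sec - start->tv_sec)
       + (double) (end->tv_usec - start->tv_usec) / 1000000.0;
}

/* hypothetical usage inside the parser:
 *
 *   struct timeval t0, t1;
 *
 *   gettimeofday (&t0, NULL);
 *   ... do the actual parsing ...
 *   gettimeofday (&t1, NULL);
 *
 *   fprintf (stderr, "P: [ parse_data:%010.6f] - finished parsing\n",
 *            elapsed (&t0, &t1));
 */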

As you can see, loading the file with GMarkup takes up ~0.16 seconds, mostly due to the parsing itself (obviously); I’ve implemented the loading from a file using a buffered read (4KB at a time, which means that the read cycle does ~400 iterations in order to pull the whole test-file.xbel file into memory); then I feed the buffer to a function that parses everything from a string in memory.
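
In code, the load path looks roughly like this (a simplified sketch along the lines described above, not the actual libegg code; the XBEL callbacks and proper GError reporting are elided):

#include <glib.h>
#include <stdio.h>

static gboolean
load_from_file (const gchar         *filename,
                const GMarkupParser *parser,
                gpointer             user_data,
                GError             **error)
{
  GMarkupParseContext *context;
  GString *buffer;
  gchar chunk[4096];
  gsize len;
  gboolean res;
  FILE *fp;

  fp = fopen (filename, "r");
  if (fp == NULL)
    return FALSE; /* the real code would set @error here */

  /* buffered read, 4KB at a time (~400 iterations for test-file.xbel) */
  buffer = g_string_new (NULL);
  while ((len = fread (chunk, 1, sizeof (chunk), fp)) > 0)
    g_string_append_len (buffer, chunk, len);
  fclose (fp);

  /* then feed the whole in-memory buffer to GMarkup in one go */
  context = g_markup_parse_context_new (parser, 0, user_data, NULL);
  res = g_markup_parse_context_parse (context, buffer->str, buffer->len, error)
     && g_markup_parse_context_end_parse (context, error);

  g_markup_parse_context_free (context);
  g_string_free (buffer, TRUE);

  return res;
}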

For the next run, I broke down the parse_data section into the per-element parsing for the bookmark, mime-type and application elements, which should be the most time-consuming, parsing-wise:

P: [ load_from_file:000.000000] - loading from file 'test-file.xbel'
P:  [ load_from_data:000.000000] - begin loading from data
P:   [     parse_data:000.000000] - beginning parse_data
P:    [ parse_bookmark:000.000000] - begin bookmark element
P:   [ parse_bookmark:000.000486] - end bookmark element
P:    [     parse_mime:000.000000] - begin mime element
P:   [     parse_mime:000.000094] - end mime element
P:    [parse_application:000.000000] - begin application element
P:   [parse_application:000.000124] - end application element
...

As you can see, the bookmark element is the biggest offender here; this is because it has four attributes (href, added, modified and visited), three of which hold a timestamp stored in the ISO 8601 format (“YYYYMMDDTHH:MM:SSZ”, where T and Z are fixed characters) – which must be parsed too. The application element holds another four attributes (name, exec, count and timestamp), so it’s the second place where the parser spends most of its time.

GMarkup passes the attribute names and values to the callbacks as two string arrays, so scanning them is an O(n) operation; it would be better to access them with something like attribute_value = g_markup_parse_context_get_attribute (context, "attribute_name", &error), which would internally use a hash table: this would make the attribute lookup O(1) – but it would also break the API; we could provide a flag for activating a new callback signature:

void (*start_element) (GMarkupParseContext  *context,
		       const gchar          *element_name,
		       GError              **error);
void (*end_element)   (GMarkupParseContext  *context,
		       const gchar          *element_name,
		       GError              **error);

...

G_CONST_RETURN gchar *
g_markup_parse_context_get_attribute (GMarkupParseContext  *context,
				      const gchar          *attribute_name,
				      GError              **error);

This would be neat, even though, given the numbers above, it could be seen as a micro-optimization.
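
For comparison, this is roughly what the attribute handling looks like with the current callbacks (a simplified sketch, not the actual XBEL parser): every attribute we care about costs a linear scan of the two arrays.

#include <glib.h>
#include <string.h>

static void
start_element_cb (GMarkupParseContext  *context,
                  const gchar          *element_name,
                  const gchar         **attribute_names,
                  const gchar         **attribute_values,
                  gpointer              user_data,
                  GError              **error)
{
  const gchar *href = NULL;
  gint i;

  if (strcmp (element_name, "bookmark") != 0)
    return;

  /* O(n) scan of the attribute arrays; the same dance is repeated
   * for "added", "modified" and "visited" */
  for (i = 0; attribute_names[i] != NULL; i++)
    {
      if (strcmp (attribute_names[i], "href") == 0)
        href = attribute_values[i];
    }

  /* ... store href, parse the timestamps, and so on ... */
}

With the hash-based lookup sketched above, that loop would simply go away, traded for the cost of building a small per-element table inside GMarkup.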

Consideration: most of the time, as shown by cold-cache runs, is spent in I/O; thus, GMarkup shows virtually no bottleneck, even with parsing a bit more intensive than plain dumb 0 == strcmp (element_name, element) checking.