AM_MAINTAINER_MODE is *not* cool

if you have a configure.ac script in your project, and it contains a line that says “AM_MAINTAINER_MODE”, you’re doing it wrong. period.

what this macro means is that changes to your Makefile.am will not automatically result in the Makefile being regenerated unless --enable-maintainer-mode is given to ./configure. i’m almost sure that this isn’t what you want. it also breaks jhbuild whenever you add a new source file (since people pull the updated version and end up trying to build it with the old Makefile).

*not* using “AM_MAINTAINER_MODE” means that your makefiles will always be updated in response to changes to Makefile.am.

“AM_MAINTAINER_MODE([enable])” is acceptable. this means that Makefile updates are enabled by default but you have the option to pass --disable-maintainer-mode to ./configure to disable them. i personally think that this is stupid, but i understand that some distributions find it kinda useful for some strange reason. since it’s useful to them and causes no harm to me, this is actually what i recommend.
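
for the record, the recommended incantation looks like this in context. this is a minimal, made-up configure.ac (the [enable] argument needs a reasonably recent automake):

    AC_INIT([example], [1.0])
    AM_INIT_AUTOMAKE([foreign])

    dnl rebuild rules are enabled by default; packagers who want them off
    dnl can pass --disable-maintainer-mode to ./configure
    AM_MAINTAINER_MODE([enable])

    AC_PROG_CC
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT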

fredp made a report page for packages using AM_MAINTAINER_MODE. green means no “AM_MAINTAINER_MODE” at all (good). yellow “low” means “AM_MAINTAINER_MODE([enable])”, which is also fine (perhaps better than green, in fact). orange “average” means that your package is currently broken and needs to be fixed.

note: many packages attempt to work around this issue by passing --enable-maintainer-mode to ./configure at the bottom of their autogen.sh. i do not consider this to be a particularly good solution.

boston summit is in montréal

we just announced that the boston summit is being held in montréal this year. please read the announcement here:

https://mail.gnome.org/archives/foundation-list/2011-September/msg00001.html

quick network-manager note

a lot of people have noticed that the new network-manager appears to be quite a lot slower to connect to… just about anything, really.

the reason for this (as confirmed on #gnome-hackers after being suggested by ray) is ipv6 support: when network-manager connects to a network, it has to assume that there might be an ipv6 router advertisement. it waits for this (and usually doesn’t hear it). that’s the delay.

if you don’t care about ipv6, the fix is quite simple: in network settings, under wireless, click “options”, go to the ipv6 tab and select “ignore” from the dropdown.

that fixes it up for one connection. i’m not sure if there is a way to do it globally.

desktop summit talks

we just sent out the notification letters for talk proposals for desktop summit 2011. everyone who proposed a talk should have heard from us by now.

if you haven’t heard from us then please email us to let us know that something got lost along the way.

we will be opening proposals for BoF sessions soon. if your talk was rejected for the main programme, it might still be appropriate to propose a BoF. stay tuned.

one more thing: now is the right time to send applications to the travel committees to request sponsorship.

“i support the release team”

i read the news today, oh boy…

somewhat annoyed to discover that the “GNOME 3” release hackfest that i traveled to india for will not actually result in any sort of a release. it’s true that things have been pretty crazy, but it really seemed like it was going to happen this time around.

i’m also not sure i believe that the “doubts” being discussed about gnome shell are particularly constructive, but i digress…

to all of the (likely many) people who will be upset by today’s announcement, i’d just like to mention one thing: these guys have been working like crazy the past week. this is not being caused by any form of lazy behaviour and i doubt that they’d delay the release unless it was absolutely necessary.

everyone’s stickers are probably looking a bit beat up by now…

pre-release craziness

i’m in bangalore this week, hanging out with the release team and marketing team guys for the GNOME 3.0 release hackfest.

i’ve never witnessed a release process this up-close-and-personal before, and i have to say that it’s totally insane. i have no idea how we ever get a release out the door. the amount of work being done by the release team is crazy. vincent and frederic have barely slept at all. andre says he had about 2 hours last night.

if you want proof, look for yourself: this month on release-team@. the gzipped archive is over 20 times the size of the month before. seems that “hard code freeze” really means “okay! go really fast now!”…

i’d feel guilty about my past transgression in this regard except for the fact that (release team member) vincent untz is busy landing a 15000 line set of changes to the panel at the moment.

the three of them disappeared today along with andreas and brian for a couple of hours. i’m not sure where they went but i think it has something to do with ingesting large amounts of crystal meth.

in other news, the location is pretty interesting. we got a chance to hang out at the intel offices and now we’re at a local university. the weather has been cooler than i expected (which is nice) and the traffic is really fantastic. we’ve come close to being dead in a terrible accident only 175 times or so. i still have to write my slides for my talk….

docs hackfest in Toronto

as i’m writing, the last of the docs hackers are leaving toronto. from what i can tell, they had an extremely productive week. from my perspective it was sort of fun to act as the on-the-ground guy for a change. i also managed to write a patch or two against yelp, so my attendance at the hackfest wasn’t entirely symbolic…

the hackfest was hosted by CDOT (the centre for development of open technology) at seneca college in (very) north toronto. these guys are really cool — a bunch of fedora and mozilla hackers as an official department of a college. a big thanks to anyone there who is reading this post; it was really awesome to hold a hackfest in such a cool place.

a step back

(standard junk: this is my personal opinion and i’m possibly ethically compromised because i’m currently on contract working for canonical, etc. etc. blah)

canonical does a lot of things that i would classify as pretty boneheaded in terms of their relationship to various free software communities. they have an interesting and colourful history with quite a lot of projects and our project is pretty close to the top of that list.

it’s my opinion that canonical takes a more pragmatic approach than most free software projects do. they have a bit more of a “…and damn the consequences” attitude. they’ve made a lot of decisions that have put them at odds with a lot of people. i’ve found myself on both sides: defending their choices when i agree and calling them out when i don’t.

binary drivers so that it “just works”? win. copyright assignment? not such a win. this mess with banshee? ya. that’s pretty lame.

i’m sure everyone can think of a few more “situations” off of the top of their heads.

canonical gets a lot of flack around these parts, and rightly so. they often make decisions that leave a lot of us scratching our heads and wondering why. they need to be called out. i’m glad to see it happening. i’d do more of it myself, but actually i hate writing blog posts.

to some casual readers of planet lately, it might seem as though the opinion of canonical in the gnome community is quite negative. i think that even those deeply involved in our community, in heated moments, get pretty pissed off.

taking a step back though, i think that it’s clear that just about every poster here would agree on one thing: that canonical is a net positive to the world of free software and that they are helping us achieve our most important goals.

as a fun thought experiment, imagine if even the worst dreams came true: next year canonical takes their copyright ownership of their qt rewrite of unity and makes it closed source to make lots of money selling it on embedded devices. imagine one of these devices is actually awesome and reasonably priced. i can tell you one thing about that: i’d be the first in line to get my hands on this device. i’d be excited as hell about it. a few closed components on an otherwise totally open os sounds pretty good to me. better than, say… android (that everyone seems to love so much). pretty comparable to meego (which everyone loved even more until quite recently).

now snap back to reality and remember that these worst dreams are just dreams and that canonical is actually even better than that. there’s no closed source component at all. they’re taking a pretty high road with just about everything that they’ve done so far.

when you stack the amount of slack we’ve cut companies like google and nokia up against the amount of condemnation we’ve seen hurled at canonical, it becomes easy to forget one thing that i think most of us would agree on: canonical is a very close friend.

the paradox is true: you save your strongest criticisms for those you love most.

more on dconf performance, btrfs and fsync

dconf performance

i’ve been working on dconf recently. one of the things i’ve been up to is writing some proper test cases. of course, testing is a good chance to check performance…

the test case is written against the dconf gsettings backend. it generates 1000 random changesets and writes them into an empty dconf database while tracking the changes in two hash tables. one hash table is managed explicitly by adding all changes to it and the other is managed implicitly by connecting to change notification signals and updating itself when they are received. after each of the 1000 changesets is written a full three-way comparison is performed to ensure that the tables match each other and the contents of the dconf database.
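
to give a rough idea of the shape of the implicitly-managed side of that test, here’s a minimal sketch in C (not the actual test case). the schema id “org.example.test” and its string key “name” are made up; you’d substitute a schema that is actually installed.

    /* mirror gsettings changes into a GHashTable via the "changed" signal.
     * build roughly like: gcc sketch.c `pkg-config --cflags --libs gio-2.0`
     */
    #include <gio/gio.h>

    static void
    changed_cb (GSettings *settings, const gchar *key, gpointer user_data)
    {
      GHashTable *mirror = user_data;

      /* record the new value of whichever key just changed */
      g_hash_table_replace (mirror, g_strdup (key),
                            g_settings_get_value (settings, key));
    }

    int
    main (void)
    {
      GSettings *settings = g_settings_new ("org.example.test");
      GHashTable *mirror = g_hash_table_new_full (g_str_hash, g_str_equal, g_free,
                                                  (GDestroyNotify) g_variant_unref);
      GVariant *seen;

      g_signal_connect (settings, "changed", G_CALLBACK (changed_cb), mirror);

      /* read the key once up front: some glib versions only deliver "changed"
       * for keys that have been read while a handler was connected */
      g_variant_unref (g_settings_get_value (settings, "name"));

      /* a write that the mirror should pick up via change notification */
      g_settings_set_string (settings, "name", "hello");

      /* let any queued notifications dispatch */
      while (g_main_context_iteration (NULL, FALSE))
        ;

      seen = g_hash_table_lookup (mirror, "name");
      g_print ("mirror %s the write\n", seen != NULL ? "saw" : "did not see");

      g_hash_table_unref (mirror);
      g_object_unref (settings);

      return 0;
    }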

the test system is a quad core i7 at 2.67GHz.

while the 1000 writes are going on, the performance is quite awful. each dconf lookup takes approximately 30µs (compared to approximately 0.16µs for a GHashTable lookup). that’s about 200 times slower. this can be almost entirely blamed on the fact that the gsettings backend keeps a local list of outstanding changes (those that have not made it to disk yet) and scans that list linearly on each read. you won’t run into this performance case unless your app is doing a *lot* of writes all at once.

the test then waits for the writes to make it to disk and runs the three-way comparison 1000 times over again. the results of this test are a more fair indication of dconf read performance under “normal” conditions. it takes approximately 1µs to do a lookup using the dconf gsettings backend (which is approximately 7 times as long as it takes to do a GHashTable lookup). for some time i’ve been telling people that “dconf lookups should be within an order of magnitude of GHashTable performance” and now i have hard numbers for that. this also gives a lower bound for GVDB performance (since dconf uses it).

tl;dr: a dconf read takes 1µs (which is approximately 7 times GHashTable).
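
if you want ballpark numbers for your own machine, a rough micro-benchmark along these lines will do. again, this is just a sketch (not the real test case) and “org.example.test” / “name” are hypothetical:

    /* time gsettings/dconf reads against GHashTable lookups.
     * build roughly like: gcc bench.c `pkg-config --cflags --libs gio-2.0`
     */
    #include <gio/gio.h>

    #define ITERATIONS 100000

    int
    main (void)
    {
      GSettings *settings = g_settings_new ("org.example.test");
      GHashTable *table = g_hash_table_new (g_str_hash, g_str_equal);
      gint64 start, elapsed;
      gint i;

      g_hash_table_insert (table, "name", "hello");

      start = g_get_monotonic_time ();
      for (i = 0; i < ITERATIONS; i++)
        {
          GVariant *value = g_settings_get_value (settings, "name");
          g_variant_unref (value);
        }
      elapsed = g_get_monotonic_time () - start;
      g_print ("gsettings: %.3f us per read\n", (double) elapsed / ITERATIONS);

      start = g_get_monotonic_time ();
      for (i = 0; i < ITERATIONS; i++)
        g_hash_table_lookup (table, "name");
      elapsed = g_get_monotonic_time () - start;
      g_print ("ghashtable: %.3f us per lookup\n", (double) elapsed / ITERATIONS);

      g_hash_table_unref (table);
      g_object_unref (settings);

      return 0;
    }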

btrfs and fsync

when doing this testing i noticed that dconf writes are really slow. it takes 64ms for the dconf-service to write each change to the disk. the reason for this is that it does an fsync() after each and every change, and i have spinning disks. i sort of expected this and i designed around it; that’s why the gsettings backend for dconf keeps the local list of changes in flight: your app doesn’t wait for this 64 milliseconds (and is actually completely unaware of it). it starts to look bad, though, when you’re doing 1000 writes.

i did a bit of reading and i discovered something very cool: btrfs guarantees that overwrite-by-rename is atomic. no need to call fsync() between writing the data to the temp file and renaming it over the old file. i have a patch to glib in #637544 to detect when g_file_set_contents() is being used on btrfs and skip the fsync() in that case. this reduces the amount of time taken to write a change from 64ms to 3.5ms.

tl;dr: btrfs is cool.
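
for the curious, the write-new-file-then-rename pattern in question looks roughly like this. it’s a hand-rolled sketch of the general technique, not glib’s actual g_file_set_contents() code, and the skip_fsync flag is only there to illustrate the btrfs shortcut:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>

    static int
    save_file (const char *filename, const char *contents, size_t length,
               int skip_fsync)
    {
      char tmp[4096];
      int fd;

      snprintf (tmp, sizeof tmp, "%s.tmp", filename);

      fd = open (tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0)
        return -1;

      if (write (fd, contents, length) != (ssize_t) length)
        {
          close (fd);
          unlink (tmp);
          return -1;
        }

      /* on most filesystems the data needs to be on disk before the rename,
       * otherwise a crash can leave an empty file under the old name.  on
       * btrfs, where overwrite-by-rename is atomic (as described above), the
       * fsync() and its cost on spinning disks can be skipped. */
      if (!skip_fsync && fsync (fd) != 0)
        {
          close (fd);
          unlink (tmp);
          return -1;
        }

      close (fd);

      /* atomically replace the old file with the new contents */
      return rename (tmp, filename);
    }

    int
    main (void)
    {
      const char *text = "hello\n";

      return save_file ("example.txt", text, strlen (text), 0);
    }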

edit: another interesting dconf statistic: the dconf database generated by one instance of the test case is 475680 bytes on disk while containing 4455 entries. that’s about 107 bytes per entry.

edit2: after applying the glib btrfs no-fsync patch the 30µs dconf lookup time in the first phase of the test drops to about 4µs per lookup. this can be attributed to the fact that the service is having a much easier time keeping up with the write requests and therefore the list of outstanding changes (which is linearly searched) is kept much smaller.

gsettings is fast

sometimes people will come up to me at a conference and one way or another mention that they are avoiding using gsettings because they need their app to start “really fast”. at uds for example, someone asked me “i should be using a keyfile for this, right?”.

gsettings has dconf as its backend. there are a couple of things that i assumed were common knowledge about dconf but that surprised people when i told them. the two main things to note are that the on-disk dconf database is a hashtable that gets mmapped into your process and that reads (after the first read) typically involve zero system calls: just direct access to the hash table.
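
to make that concrete, this is all a typical read looks like from the application’s point of view (the schema id “org.example.app” and key “greeting” are made up for the example):

    /* after the dconf database has been mmapped (on the first read), a call
     * like this is just a lookup in that mapping: no system calls, no round
     * trips to any service.
     * build roughly like: gcc read.c `pkg-config --cflags --libs gio-2.0`
     */
    #include <gio/gio.h>

    int
    main (void)
    {
      GSettings *settings = g_settings_new ("org.example.app");
      gchar *greeting = g_settings_get_string (settings, "greeting");

      g_print ("%s\n", greeting);

      g_free (greeting);
      g_object_unref (settings);

      return 0;
    }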

i still decided that it would be helpful to get hold of some actual numbers here, though. i did some testing (nothing serious, but it gives a good idea of the ballpark figures involved).

my methodology for measuring how long it takes to do something is this:

time (for i in `seq 1000`; do ./something; done > /dev/null) and dividing the ‘real’ time by 1000

running ‘/bin/true’: about 1.1ms
running a do-nothing program linked against libgio and calling g_type_init(): about 2.2ms

when i went to benchmark gsettings i noticed that it was a bit slower than i expected: about 9ms to run the gsettings command line tool to “get” a setting (for comparison, initialising gtk takes about 30ms and qt about 350ms). still, i was wondering why it was so slow. it turns out that the largest part of that was that i was blocking on gdbus initialisation, which (due to the chatty nature of the dbus protocol initialisation and the fact that dbus-daemon is a slow talker) takes quite a long time. gdbus needs to be initialised along with gsettings in order to add match rules for change notification.

i’ve fixed the backend so that we don’t block on gdbus initialisation anymore — it happens asynchronously and in another thread. those changes will land on master today. after the changes, running the gsettings commandline tool in ‘get’ mode takes about 4.2ms.

so for the record: the cost of initialising gsettings and reading a value out of dconf adds about 2ms to your program startup — less than 1/10th of the time it takes to initialise gtk and on the same order as the length of time it takes to spawn the process, load the shared libraries and call g_type_init().