Wayland vs /usr/share/xsessions

Past

A long time ago, every display manager had its own way of determining the available sessions. Eventually this was standardized via freedesktop.org using /usr/share/xsessions: display managers discover the available sessions from .desktop files in that directory.
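For illustration, such a session .desktop file looks roughly like this (a minimal sketch, loosely modelled on GNOME's own session file):

```ini
[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session
Type=Application
```

A display manager only needs to list the Name of each file and run its Exec line.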

Present

With Wayland coming along, you may want to know which of those sessions should run under Wayland and which under X. A specific header was added to the gnome-wayland session file: X-GDM-NeedsVT=true.

Adding a new header is problematic: it means breaking compatibility. But maybe we’ll just ignore any compatibility problems. Before worrying about this, I noticed that in Mageia /usr/share/xsessions is auto-generated. Any file you place there will be overwritten on reboot! Meaning no such header will appear in Mageia. No GNOME Wayland.

I asked around to find out why these files are still being overwritten. Apparently in the past we used some other method, which on Mageia is converted into /usr/share/xsessions. Anyway, this is clearly legacy and it is time to get rid of it. I quickly looked into what Debian does. They go from xsessions to the old way of doing things, still. Debian being Debian.

Imagine my surprise in discovering that we cannot just kill this code. XDM (the fallback display manager on Mageia) only supports the old way! So even though it has been 10+ years (in my mind at least), we continue to live with two ways of doing this. And in bits that we cannot just ignore.

Basic functionality

Showing sessions seems like rather basic functionality, solved ages ago. Anyone would expect display sessions to show up in every display manager you might install. Reality is a mess. There is choice in display managers, but there is a lot of complexity in supporting this, and the way of doing it differs per distribution. Although we have Linux Plumbers conferences and freedesktop.org (which is not specific to Linux), this was never simplified.

Simplifying

Why simplify code? For that I would rather point to something known to developers: code refactoring. In general, simplifying is done to ease maintenance and/or make things extensible. A clear example is above: there are outright bugs in various distributions triggered by this, and the maintenance cost is higher than it should be. And all this for something really basic: ensuring that the sessions are the same no matter which display manager you use.

Another way of thinking: the CEO of a very large non-technical company sometimes talks about technical legacy and the need to simplify. Isn’t it time to acknowledge this in free software? It is pretty safe to assume that the free software community is way more technical than this CEO or the average person in that company.

Another layer of abstraction

Now, in my previous blogpost I talked about logind and systemd. An argument raised there is that “power management” is something anyone should be able to expect in 2013. This seems very similar to the expectation that every session shows up in any display manager.

To solve this, let’s not keep abstraction layers around for another 10+ years for something as basic as power management. The different solutions should define one API and stick with it, whatever that is. Let’s not push such complexity into desktop environments; that would lead to differences similar to the session support in Debian vs Mageia. Want to offer choice in power management or another display manager? Go for it! Using one API (/usr/share/xsessions rings a bell :P). Not loads. Not anything which requires an abstraction layer. How to get there? Who cares! Talk about it at Linux Plumbers conferences, on freedesktop.org, by email, by implementing the same API as systemd, or whatever you think is best. But let’s not pretend to go for choice while going for complexity.

It’s 2013, let’s fix things in the right place!

Wayland vs /usr/share/xsessions?

For those wondering about the original topic of this blogpost, this is how things were solved: Wayland-specific sessions are placed in /usr/share/wayland-sessions. This avoids breaking compatibility in non-GDM display managers and avoids breakage in Mageia. The right thing to do, because breaking display managers is bad. That said, having all display managers support /usr/share/xsessions properly is long overdue.

GNOME and logind+systemd thoughts

Distributions usage

About 3 weeks ago I noticed an already month-old thread on gentoo-dev discussing that GNOME 3.8 has a dependency on systemd. At most this should be about logind, even though logind is optional. Gentoo’s assertion is different from what we communicated. For one, in the last stages of GNOME 3.8.0 we as release team specifically approved some patches to allow Canonical to run logind without systemd. Secondly, the last official statement still stands: no hard compile-time dependency on systemd for “basic functionality”. This is a bit vague, but think of session tracking as basic functionality.

Figuring out why Gentoo really believes systemd is a requirement took a while. For one, gentoo-dev is unfortunately like a lot of mailing lists: loads and loads of noise. Out of the 190+ messages, only one or two have a pointer to more information. One was Bugzilla, another was that logind now requires systemd. Apparently our (=GNOME) assumption that logind was independent from systemd stopped holding with systemd v205, due to the cgroups kernel change. This is really unfortunate, but GNOME 3.8 does not require logind. I discussed the non-dependency of logind+systemd on #gentoo-desktop and why they thought differently. Apparently GDM 3.8 assumes that the init system will also clean up any processes it started. This is what systemd does, but OpenRC didn’t support that. This means that GDM under OpenRC would leave lingering processes around, making it impossible to restart/shut down GDM properly. The Gentoo GNOME packagers had to add this ability to OpenRC themselves. Then there were various other small bugs, details which I have already forgotten and cannot be bothered to dig out of the IRC logs. 😛

Due to 1) logind now requiring systemd, 2) not having time to develop the missing functionality in OpenRC, and 3) supporting non-systemd + systemd at the same time likely resulting in bugs and a lot of support time, they decided it is much easier to just require systemd/logind. This also gets them the features that systemd and logind offer and avoids any weird bugs (as most GNOME developers seem to use systemd).

Debian GNOME packagers are planning the same AFAIK; they would rather just rely on systemd (as init system, not just some dependencies). In the end, the number of distributions not having systemd decreases. This despite us clarifying that GNOME really does not need systemd, nor logind, and trying to help out with issues (though GNOME is not going to maintain distribution-specific choices).

Wayland

GNOME 3.10 has Wayland as a technological preview. The Wayland support in Mutter is being tracked in a special branch and tarballs are released as mutter-wayland. The Wayland support in GNOME will rely on logind to function (to be clear: Wayland in GNOME, not Wayland in general). If you have read my entire blog, you’ll notice the catch: though we knew about logind running on Ubuntu, as of version 205 logind is tied to systemd.

GNOME session

For GNOME 3.12, a feature has been proposed called systemd for the user session. This feature is explained as follows:

When booted on systemd systems, we can use systemd to also manage parts of the user session. There are a number of benefits to this, but the primary one is to place each application in its own kernel cgroup. This allows gnome-shell to do application matching more reliably, and one can use resource controls to (for example) say Epiphany only gets 20% of system RAM.

Furthermore, this lays some fundamental groundwork for application sandboxing.

It’s important to note that with these patches, we still support non-systemd systems (as well as older systemd). How far into the future we do so is an open question, but it should not be too difficult to leave non-systemd systems with the previous model over the next few cycles.
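The per-application resource control mentioned in the quote could look roughly like this in systemd unit syntax; the unit name and the exact limit are made up for illustration (MemoryLimit= takes a byte value, so "20% of system RAM" on an 8 GiB machine is roughly 1.6 GiB):

```ini
# hypothetical drop-in for a per-application unit running Epiphany
[Service]
MemoryLimit=1638M
```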

Upstart has something similar, called Session Init. I am not sure if what Upstart does is the same as systemd; they just seem similar. In Ubuntu/Unity this is already used (though I am not sure to what extent); the reasoning is described here (recommended read).

In the short term, making use of systemd just provides some benefits and allows us to eventually support application sandboxing. Long term, hopefully gnome-session can die and such code can live in systemd. There it could possibly be reused by other desktop environments (I am only aware of Enlightenment).

ConsoleKit

We’ve been relying on ConsoleKit for a long time. If you look at the git history, you’ll note that it was first written by a GNOME developer, and my impression is that he wrote the majority of the code. Since the switch of preference to logind, ConsoleKit development has as good as completely stopped: no development in 1.5 years.

Upstream vs downstream

I remember the days when we had a program which tried to change some “OS” settings, e.g. the timezone. IIRC this was handled by a Perl backend which would try to determine the OS/distribution and then do whatever it needed to do. A complication was that things might change between versions of the OS/distribution, so the version also needed to be tracked. As a result, this program would sometimes ask you if your distribution was the same as or similar to one of the distributions it knew about.

Only fairly recently have we started to rely on fancy new things like D-Bus and the D-Bus specification described at http://www.freedesktop.org/wiki/Software/systemd/timedated/. Since the GNOME 1.x days, we’ve gone from trying to support all the differences out there to promoting standardization (across desktop environments as well as OS/distributions). And in some cases, like this timedated D-Bus specification, either the function is provided or it won’t work. It is up to the distribution/OS to ensure that the function is available.

Future

Personally speaking, it seems that there is little going on to change the direction in which GNOME is heading. GNOME is getting rid of more and more code which overlaps with other code: fallback mode, ConsoleKit, its own power management vs systemd handling this, etc. For new functionality, GNOME is also relying on new things: think of Wayland, timedated, localed, application sandboxing, etc.

At the same time, I don’t see people working on ConsoleKit, or ensuring that there is either a replacement for logind or the ability to run logind without systemd. Development of any init system other than Upstart (the user session is cool) seems slow and in need of extra help.

Having GNOME run on non-Linux operating systems (*BSD) and on distributions not willing to switch to systemd for whatever reason is great. But it seems distributions would rather make GNOME depend on systemd than maintain things themselves. That leaves out *BSD, GNU/Hurd and Ubuntu. 🙁

Wayland

This is obviously my personal opinion. Also, I don’t work for an open source / free software company.

The conscious split

On Nov 4 2010, Mark Shuttleworth announced:

The next major transition for Unity will be to deliver it on Wayland, the OpenGL-based display management system.

To me such an announcement implies a commitment of resources.

They also considered writing their own solution, but thought it was a bad idea:

We considered and spoke with several proprietary options, on the basis that they might be persuaded to open source their work for a new push, and we evaluated the cost of building a new display manager, informed by the lessons learned in Wayland. We came to the conclusion that any such effort would only create a hard split in the world which wasn’t worth the cost of having done it. There are issues with Wayland, but they seem to be solvable, we’d rather be part of solving them than chasing a better alternative. So Wayland it is.

About 6 to 9 months ago, Canonical moved from the idea that Unity would at some point magically run on Wayland to their own solution. Doing your own thing is perfectly fine by me. What I heavily dislike is keeping that complete change of direction a secret. There is no law against it, but hiding such things for a very long time makes me assume that I’ll never hear anything timely at all. I still have not seen evidence that this decision was taken for technical reasons, and it has removed the trust I had in Canonical.

Now, there is an amusing call for GNOME and KDE to join Mir. Doing that would, to me, mean:

  • To give away your copyright, something which the GNOME project is against
  • To know that you will not be consulted in decisions and that big decisions will be made known 6-9 months after the fact
  • To write and maintain yet another abstraction layer to make Wayland, X and Mir work
  • To (seemingly) rely on LightDM (no GDM!)
  • To likely switch distributions, as upstreaming code is not a strong suit of Canonical. Maybe Mir will work, but I expect loads of patches to Qt/Gtk+ for a Mir backend, as well as to other components (accessibility, etc). I think this because of the number of patches Unity required and the sudden code dumps that GNOME sometimes got.
  • To work with someone who is consciously OK with creating a ‘hard split in the world’

Mir seems totally out of the question. After Canonical’s very public announcement, I was expecting them to have invested resources into making what they publicly promised happen. Instead, that is not happening, so that slack has to be picked up.

Development speed of Wayland

Some people (not me!) spent a few days investigating the current status of Wayland and what is still left to do. This because only 6-9 months after the fact did we learn we had to do this ourselves.

Unfortunately, the various blogposts about the 6-9 months of hidden Mir work, plus the incorrect assumptions and statements made about Wayland, have resulted in various incorrect impressions that I see often repeated. To correct a few:

  • There was already a lot of work done for Wayland
  • Speed was not slow; there was just no timeline for completing the rest
    It seems quite logical to at least wait for a 1.0 release and some adoption by distributions, but oh well
  • Supporting X, Mir and Wayland implies way more than just a “gdk backend” patch to Gtk+ and something similar for Qt
  • Wayland does not do everything that X does, but Mir is lacking that and much more
    Yet another abstraction layer is not really the preferred way of working in GNOME. It adds to the things that have to be maintained, plus you can only use something as much as your abstraction layer allows
  • Competition on this level is not good
    See: yet another abstraction layer

Finally some progress

Anyone still thinking “finally some progress” has really ignored that various GNOME applications already work on Wayland. Furthermore, we already had a port of Mutter. All this before we were finally allowed to know for sure not to expect any resources from Canonical towards Wayland.

Of course, it is still nice to release something quicker than the other person. But let’s focus instead on providing something which works as nicely as the old thing, including things such as XSettings, XRandR, keeping track of idle time, colour management, accessibility, etc. Competing with Mir is stupid anyway: if we make applications work under Wayland, it will benefit Mir as well. We could release something and call it stable, but it is easier if we release it when we think it is good enough. That will still be too early for some, but oh well 😛

If we’d known 6-9 months ago that we couldn’t rely upon Canonical, we could’ve taken that into account. In my opinion, keeping that decision secret slowed things down.

Communication and GNOME

Now, I am pretty harsh towards Canonical regarding their communication. I think GNOME can hugely improve as well, though I don’t think that is really relevant; “but you do the same” is just a bad excuse. I had big issues (especially after the work done for the Ubuntu GNOME remix) with the fact that we still did not have anyone from Canonical/Ubuntu on the release team. If we had someone on the release team, we could still be bad at communicating, but at least there would be one person who should know what was going on on both sides. I still think it would be nice to have someone from Canonical (though after all the heavily delayed statements from Canonical, I no longer think it is that big of an issue), but I am not sure if that person would ever be allowed to share anything, so the benefit seems much lower.
PS: The release team bit is not new. I’ve said this initially privately, but also publicly many months ago, see release-team archives. GNOME does almost everything in the open.

Meritocracy causing depression and killing people

In response to a blogpost by Taryn Fox. Unfortunately anonymous comments are disabled and OpenID just seems annoying.

In the blogpost it was said that “the idea of ‘meritocracy’ causes depression and kills people”. The reasoning behind it is unfamiliar to me and not related to what I see as meritocracy.

For one, blaming others for failures and punishing them? I don’t see that in GNOME at all. There should be an atmosphere where that is not acceptable, and I think we already have it thanks to the Code of Conduct, though lately I have not really wanted to look at things due to the huge amount of discussion some of my actions have caused. Better to do nothing than to get crap. I still believe we’re doing pretty OK in GNOME. Maybe in some other project meritocracy is used as an excuse to behave badly. If that happens in GNOME and it is more than a one-off, then raise it. Similar to having a Code of Conduct explaining the minimum way we think anyone should behave, we can write an explanation of meritocracy.

Another misreading is that people are somehow judged as worthwhile and rewarded for it. That is not the idea. The idea is that people put in effort. It is based upon work, not something vague like “worth”. Worth is difficult to measure. Having done X amount of triaging, X amount of translations or X amount of git commits is something you can measure. Then you also have things like helping out at conferences, or just plain attending. I find it pretty logical that the one putting in the most effort can dictate more and is listened to more. It is very easy to have an opinion or think that something should work in some way. But unless someone actually does something, all those ideas are just that: ideas.

The reason I call something a meritocracy is that everyone is treated in a similar way. The blogpost even says that some people need help and that not everyone is able to do the same things. Which is exactly why, if someone is able to be a maintainer, that person should become one. You don’t make someone a maintainer because you think they’re a cool person; you judge on measured effort. In a company it usually works in a more arbitrary way: you can have people move “up” while their work would suggest something entirely different. The promotion could have been given for any reason, e.g. being friends with the right person.

I don’t want to get personal, but I do think the blogpost is very focused on a possible negative aspect of meritocracy. I don’t have much experience with depression, aside from e.g. after a breakup. At such a time everything is negative and it is very easy to draw conclusions which, to yourself, seem entirely logical and reasoned. I think it is best to share your thoughts with someone and notice the response they generate. Though it might feel sensible not to share your thoughts, that is actually not logical at all: one person does not know everything. Take meritocracy: of course it might have drawbacks. The reason I really promote meritocracy is because of the benefits it brings, but that does not mean any drawback is acceptable. By promoting meritocracy, people are promoting the good it brings. Anything will have drawbacks; promoting an idea does not imply you want the whole package. Another example is those “light” drinks. The benefit is being healthier (less sugar), but you might get cancer from them. Promoting those drinks is not done to promote getting cancer.

Packaging GNOME for Mageia

Introduction

I have been using Mandrake Mandriva Mageia for a while now. I noticed that Mageia is pretty friendly to new packagers: every new packager (even an experienced one) gets a mentor. That person is there to answer questions and to guide you towards becoming a good packager. Once the mentor decides you’re good enough (the time this takes varies), you become a full packager. The ease of joining, together with the lack of bureaucracy, made me want to try and help out.

I started out by packaging random things that people wanted. That had a big drawback: you’re responsible for handling the bugs in those new packages. Some of those packages I never use nor care about. Aaargh!

I switched to only packaging GNOME and a few small things that I use myself (maildrop, archivemail, a few others).

GNOME packaging

Having never packaged for a distribution before, I found it relatively easy. I guess a great benefit is that GNOME is pretty stable; not too much changes. The things I found annoying:

  • Problems related to linking
    Other Mageia packagers know how to solve these. I just file a bug and wait for the GNOME maintainer to give me a patch. Sometimes, while I am waiting for upstream, another Mageia packager will already have added a patch for the problem (no Mageia bug reports involved).
    I think packaging as quickly as possible is part of the “release early, release often” philosophy. People consciously run the development version of a distribution. Although things shouldn’t knowingly be broken, the focus should be on getting new software to the development-version users as quickly as possible.
    Something broken? Either have upstream release a new (micro/pico) version, or add a patch.
  • Usage of -Werror
    If this gives an error, expect Mageia packagers to add a patch removing the -Werror usage. If you want to be notified of warnings, write a system to notify you of warnings! I find -Werror a waste of time.
  • -Werror and deprecations (hugely annoying!)
    Fortunately, there is a gcc switch (-Wno-error=deprecated-declarations) to not error out on deprecations, and most modules seem to use it now.
  • New modules
    Example: Boxes. Loads of new dependencies, plus some already-packaged software needs new configure options. This can easily take a week.

Boredom

My main issue with packaging GNOME is that it consists of loads of tarballs and that most of the work is really, really boring. Usually you just:

  1. Download the new tarball
  2. Update the version number in the .spec file and change release to 1
  3. Submit the spec file to the Mageia build system
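Steps 2 and 3 amount to little more than a text substitution in the .spec file. A minimal sketch, assuming the Mageia `Release: %mkrel N` convention (`bump_spec` is my own hypothetical helper name, not the actual script):

```python
import re


def bump_spec(spec_text, new_version):
    """Set Version: to the new upstream version and reset Release: to 1."""
    # Replace whatever follows "Version:" with the new version number.
    spec_text = re.sub(r'(?m)^(Version:\s*).*$', r'\g<1>' + new_version, spec_text)
    # Reset the release field to 1 (Mageia wraps it in the %mkrel macro).
    spec_text = re.sub(r'(?m)^(Release:\s*).*$', r'\g<1>%mkrel 1', spec_text)
    return spec_text
```

After this, the result is committed and the package submitted to the build system.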

Note that I completely ignore a lot of things:

  • Stable distribution
    I’ve been packaging for Cauldron (“unstable” / “rawhide” / “Factory”). The process for submitting updates to the stable version is (obviously) very different.
  • Build errors
    I don’t test. I just rely on the Mageia build system to bomb out.
  • New major versions of libraries
    In Mageia we package per major version. New major? Doesn’t matter, the build system will bomb out (the spec file looks for the major).
  • Major functionality changes
    Usually noticeable by the version number. This way of packaging is also nice because the distribution can have the same library packaged with multiple major versions. Although we do recompile everything immediately, this avoids a lot of headaches when something doesn’t compile anymore.
  • Testing the software
    Any packager in Mageia can add patches, so if something is totally broken there are a lot of people who can fix it. In practice I almost never test things before submitting. If there is a problem, it is better that someone informs upstream asap. Judging from the bugs that have been filed, they usually are things I wouldn’t have noticed anyway.

Avoid boredom, script it!

To avoid getting overly bored, I wrote a script to automate changing the .spec file. I’d watch my ftp-release-list folder and look at all the incoming emails. Based on those, I’d:

  • Call my script to increase the version number and reset the release
  • Call a Mageia command to commit all the changes
  • Call a Mageia command to submit the new package

This was nice, but quickly became boring as well. Usually I’d just call my script, check nothing, then commit and submit and wait for either an email about the new RPM or a failure email.

Submit it already!

I changed my script and added a --submit option. This makes the script call the commit + submit commands automatically (and abort as soon as something fails).
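The commit + submit chaining with abort-on-failure is just sequential subprocess calls; a sketch (the actual Mageia commands differ per setup, so they are passed in as a list):

```python
import subprocess


def run_all_or_abort(commands, cwd="."):
    """Run each command in order; abort at the first non-zero exit code."""
    for cmd in commands:
        if subprocess.run(cmd, cwd=cwd).returncode != 0:
            raise SystemExit("command failed: %s" % " ".join(cmd))
```

Usage would be something like `run_all_or_abort([["mgarepo", "ci", "-m", "new version"], ["mgarepo", "submit"]])`.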

Now I was submitting as soon as I saw a new email on ftp-release-list. I made another script to download the tarball directly from master.gnome.org, to avoid the master.gnome.org vs ftp.gnome.org lag: there is about a 5 minute difference between the ftp-release-list email and when the tarball actually appears on ftp.gnome.org.

Patches which do not apply

As I was submitting everything to the Mageia build system, I noticed that some builds were failing just because a patch had already been merged. That’s something I could’ve checked myself, and it was annoying because it can take a while before the Mageia build system notifies you of a problem. Time that is basically wasted; I want the tarball available as an rpm package asap. So I made another addition to the script: verify that the %prep stage actually succeeds. This ensured that I’d notice immediately if a patch no longer applied. As a result, the number of obviously incorrect Mageia submissions decreased (probably making the Mageia sysadmins happy), but more importantly, this decreased the time it takes before a tarball is available as an rpm.
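Checking %prep locally can be done with `rpmbuild -bp`, which stops after unpacking the sources and applying the patches. A sketch (function name is mine; the real script may invoke this differently):

```python
import subprocess


def prep_succeeds(spec_path):
    """Return True if 'rpmbuild -bp' (the %prep stage only) exits cleanly."""
    try:
        result = subprocess.run(["rpmbuild", "-bp", "--nodeps", spec_path],
                                capture_output=True, text=True)
    except FileNotFoundError:  # rpmbuild itself is missing
        return False
    return result.returncode == 0
```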

Funda Wang

There was another problem. While I was sleeping, Funda Wang was awake, busy packaging all the GNOME tarballs. Leaving nothing for me to do.

The only way to solve this was to link my script directly to ftp-release-list. To do that, I had to solve a few problems:

  • Package names can be different from a tarball name
    I had already partly solved this in the script. I decided to have the script bomb out in case a tarball is used by multiple packages (e.g. the gtk+ tarball is used by 2 packages: gtk+2.0 and gtk+3.0). So the script would handle NetworkManager (tarball) vs networkmanager (package), but not gtk+.
  • Version number changes
    I added code to have the script judge the version number change according to the way GNOME uses version numbers. GNOME versions are mostly of the form x.y.z; if y is odd, it is a development version. Judging version numbers basically comes down to: a change in x is bad, and automatically going from an even y to an odd y is bad as well.
  • Verify tarball SHA256 hash
    I wanted to be sure that the downloaded tarball had the same SHA256 hash as the one advertised by ftp-release-list, so I wrote some code to check that.
  • Be informed what the script is doing
    Everything that the script does based on ftp-release-list is automatically sent as a followup in the same folder as the ftp-release-list emails.
  • Wait before downloading
    The script doesn’t have access to master.gnome.org, so it had to wait a little before trying to download the new tarball. I decided on 5 minutes. This quickly failed because maildrop doesn’t allow a delivery command to run longer than 5 minutes; an os.fork() addition solved that issue.
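Two of the pieces above are easy to sketch: the x.y.z version judgement and the SHA256 check. This assumes plain numeric x.y.z versions; the function names are mine, not the script's:

```python
import hashlib


def upgrade_ok(old, new):
    """Judge a version change per GNOME's x.y.z scheme: reject a change of
    the major (x), and reject automatically going from a stable series
    (even y) to a development series (odd y)."""
    o = [int(p) for p in old.split(".")]
    n = [int(p) for p in new.split(".")]
    if n[0] != o[0]:
        return False  # major changed: needs a human
    if o[1] % 2 == 0 and n[1] % 2 == 1:
        return False  # stable -> development series
    return True


def sha256_matches(path, expected_hex):
    """Compare a downloaded tarball against the hash from ftp-release-list."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```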

Reading logs is boring

Having my script send followups to the original ftp-release-list emails was nice. But it meant I was reading every followup to check whether the script was doing what it should. After a few emails, this became too cumbersome.

I changed the script to add “(ERROR)” to the subject line in case of errors. After a while, I noticed most errors were due to the same problems; I didn’t need to see the entire email, just knowing the error message was enough. As an enhancement, I made sure the subject line actually contained the error message. To determine the error message from the commands that were run, I assumed that if a command fails with an error (noticeable by the exit code), the last line of its output will contain the error message. This is a pretty reliable assumption.
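The "last output line carries the error" heuristic is only a few lines of Python; a sketch (function name is mine):

```python
import subprocess


def last_error_line(cmd):
    """Run a command; on failure, return the last non-empty output line,
    which usually contains the actual error message."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return None
    output = result.stderr or result.stdout
    lines = [line for line in output.splitlines() if line.strip()]
    return lines[-1] if lines else "unknown error (no output)"
```

That returned line is what ends up in the “(ERROR)” subject.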

Waiting 5 minutes?

Before downloading a tarball, the script would wait 5 minutes, obviously because of mirror lag. I noticed a few problems with that:

  • Resubmitting ftp-release-list emails
    Every so often I’d fix a cause of script failure and then pass the original ftp-release-list email to the script again. The script would still wait 5 minutes. The entire wait was unneeded, and it increased the chance that another packager would package the tarball in the meantime (and, yeah, this happened).
  • Lag sometimes more than 5 minutes
    Although 95% of all tarballs were available within 5 minutes, some tarballs still weren’t.
  • ftp-release-list lag
    Sometimes the ftp-release-list email takes a few minutes to arrive (instead of the same second), making the script wait way longer than needed.

To solve these problems I changed the script to:

  • Make use of the ftp-release-list Date: field
    The script uses the date specified in the Date: header and waits until 5 minutes after it. If the same email is processed again, the script determines that there is no need to wait. It helps that I know both the GNOME server and my machine are synced via NTP.
  • Repeat download for up to 10 minutes
    I enhanced the script to repeatedly try and download the file for up to 10 minutes in 30 second intervals.
  • Start initial attempt after 3 minutes
    As the script retries the download anyway, I decreased the initial waiting time to 3 minutes (instead of the initial 5). This ensures that the package is available asap, and it also minimizes the time needed to notice errors (e.g. merged patches).
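The Date:-based wait and the retry loop might look like this (timings from the text: 3 minute initial wait, 30 second retries, 10 minute cap; the names are mine):

```python
import time
import urllib.request
from email.utils import parsedate_to_datetime


def wait_for_mirror(date_header, delay=180):
    """Sleep until `delay` seconds after the email's Date: header; if the
    email is old (e.g. being reprocessed), return immediately."""
    sent = parsedate_to_datetime(date_header).timestamp()
    remaining = (sent + delay) - time.time()
    if remaining > 0:
        time.sleep(remaining)


def download_with_retries(url, dest, total=600, interval=30):
    """Keep retrying the download for up to `total` seconds."""
    deadline = time.time() + total
    while True:
        try:
            urllib.request.urlretrieve(url, dest)
            return True
        except OSError:
            if time.time() >= deadline:
                return False
            time.sleep(interval)
```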

Automatically packaging gtk+

The script only handles 1 package for every tarball. Having the script fail for gtk+ really bothered me. Partly because some modules needed a newer gtk+ and were failing while gtk+ had already been released (and could’ve been packaged). Secondly, a script which doesn’t handle gtk+ is just bad.

I solved this by having the script look at all the candidate packages, then ignore any package which has a version newer than the just-released tarball (e.g. if gtk+ 2.24.11 is released, ignore the package which has gtk+ version 3.3.18), and finally take the package(s) whose version is the same as or closest to the new version.

The version number change is still judged later on (as explained previously: basically, don’t automatically change major versions or upgrade to a development version). Furthermore, a library which changes its major version will result in a failure. So this should be pretty much fine.
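The selection rule reads naturally as a filter plus a "closest not newer" pick. A sketch assuming simple numeric versions (real rpm version comparison is more involved; the function name is mine):

```python
def pick_packages(candidates, released):
    """From {package: current_version}, drop packages already newer than the
    released tarball, then keep the package(s) closest to it."""
    def v(s):
        # naive numeric version tuple; real rpm comparison is more complex
        return tuple(int(p) for p in s.split("."))
    eligible = {pkg: ver for pkg, ver in candidates.items()
                if v(ver) <= v(released)}
    if not eligible:
        return []
    best = max(v(ver) for ver in eligible.values())
    return sorted(pkg for pkg, ver in eligible.items() if v(ver) == best)
```

With the gtk+ example from the text, `pick_packages({"gtk+2.0": "2.24.10", "gtk+3.0": "3.3.18"}, "2.24.11")` keeps only gtk+2.0, since 3.3.18 is newer than the released 2.24.11.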

Which patch fails to apply?

Ensuring that patches apply is good. But when that check failed, I had to run the command again and ask for the log output.

As a very common reason for a patch not applying anymore is that it has been merged (or was taken from) upstream, seeing that in the log output would make things much easier.

Today

The above explains how I developed the script until today. The result is an 858-line script. If you want to look at it, I put it in Mageia svn.

The screenshot above shows the various automated replies to ftp-release-list emails (amid other emails). If you look closely, you’ll see that Mageia hasn’t packaged gnome-dvb-daemon. Furthermore, the initial GDM submission was rejected as it concerned a stable->unstable change. Patches failed to apply for gnome-documents and banshee. All the “FREEZE” error messages are because Mageia is in version freeze and I’m not allowed to submit new packages during a version freeze. Lastly, the script hadn’t yet responded to the release of the atk and file-roller tarballs (it has meanwhile).

The nicest thing is the time difference between the ftp-release-list email and the response. In that time, the script has downloaded the tarball, uploaded it to Mageia and performed various checks in between. Building a package should take less than 10 minutes, tops. It then needs to be uploaded to the Mageia mirrors. The slowest tier 1 mirror only checks for new files once per hour, meaning a 30-minute delay on average. All in all, it should be quite manageable to provide most GNOME tarballs to Mageia Cauldron users within 1 hour.

Further boredom avoidance

Various things still annoy me:

  • Updating BuildRequires
    configure.{ac,in} has PKG_CHECK_MODULES calls to check for dependencies. These should just be automatically synchronized with whatever is in the spec file. Not too sure what to do with BuildRequires which Mageia doesn’t want/need. I’m thinking of still keeping these in the .spec, but putting them in a %if 0 … %endif block.
  • Merged patches
    Ideally you just remove them from the spec and be done with it. Not sure how to determine from the script that a patch was merged (exact “patch” return code, plus how to call “patch”; some patches want -p1, some -p0, etc.). Furthermore, some patches require autoreconf as well as additional BuildRequires (gettext-devel). Those additions should be removed as well. I’m wondering if I should just ignore that, or add some special comment to the .spec file to inform the script what should be done. I’ll wait on this until I have a bit more experience with merged patches.
  • Mirror lag
    GNOME sysadmin thing. Not really a priority.
  • Build ordering
    Evolution wanting latest (maybe unreleased) evolution-data-server. GNOME shell wanting latest (maybe unreleased) Mutter. NetworkManager-vpnc wanting latest NetworkManager.

Psst

People (not me) are planning a public beta of a certain gnome shell related website 🙂 ETA December 1st.

CLA and similar abbreviations


In April 2008, the FAS (Fedora Account System) process was converted to a click-through process instead of a manual process of GPG signing and fax/email.

IMO, CLAs and all similar abbreviations are too much of a hassle: having to sign stuff. I came across the above (though old) image and it seems to suggest that others feel the same.

Desktop Summit

Impressions/thoughts/rambling:

  • Yay for German procedures! 😛 Like having to mark how many t-shirts were sold, despite my experience that it doesn’t work. Still, these procedures resulted in a nicely run event.
  • At GUADEC we usually have a freakishly huge GNOME banner. Apparently this year it had to be taken down, which was done during a talk. I still don’t like that or get why this happened.
  • Banner in the Audimax
  • Getting a tablet is great. Though I noticed a lot of bugs in Meego, and the light sensor on this hardware is really annoying. I tried updating Meego to a newer version and only the 3rd version I downloaded worked. With the other versions, either the USB installer didn’t boot, or Meego didn’t boot.
  • Alexanderplatz is awesome! Seems always something was happening around that place and elsewhere in Berlin.
  • Overheard that the press was interested in ‘juicy stuff’ between GNOME and KDE. A bit unfortunate. I think it is ok to make fun of things, e.g. trying to trick people by stating that the Uncyclopedia GNOME article is the official goal for GNOME, and similar things about KDE (loads of settings is always a nice one). I’d rather be misunderstood while being obviously sarcastic/trolling than have to phrase everything politically, but I don’t want it appearing as a ‘$PROJECT said this’ article either. But on the very plus side, I didn’t notice any negative articles.
  • Not totally sure about the frequency of the Desktop Summit versus GUADEC. This is not about not wanting to work together, but for me GUADEC is meeting the people I already know or haven’t met yet since last year. I hardly know anyone within KDE, and if I talk to a Plasma developer I don’t have anything to talk about (which to me is logical and I don’t see anything wrong with this), while for anything within GNOME I usually have a good understanding. Further, there seems to be a bit of difference between Akademy and GUADEC in what usually happens during a conference. I haven’t been to any Akademy, but my idea of GUADEC is that there are talks, but the goal is to meet+plan+party; the focus was not so much on entire hacking days. Further, I like the unstructured chaos (pants, stealing hats, turning vuntz’ icecream craziness into competitions, etc.) that makes GUADEC. This Desktop Summit was in Germany though (procedures :P) and I haven’t been to the previous one, so maybe my view is a bit off.
  • Got the impression that there were way fewer GNOME people around than usual, while I had the impression that there are more active developers than before.
  • Hopefully I’m wrong, but I got the idea that people working for Canonical seemed to feel uneasy. Thinking about it now, I at least forgot to greet seb128 except when running by in a hurry… though I am really terrible at remembering people’s affiliations (I also don’t care about affiliation).
  • Transmageddon is awesome! Used it to convert the video files from my photo camera to webm.
  • Transmageddon screenshot
  • A 7/8 hour train ride is not so bad when I compare it to the many unneeded annoyances you have to deal with when flying. An hour’s delay on the way back, but still ok. Couldn’t sleep though.
  • The black humour of the ‘state of the union’ talk is fun. A bit unfortunate having to point out that it is black humour / noticing the talk being used as ‘GNOME developers say X’.
  • Was invited to a KDE release team BoF. Really interesting. Gave a few comments on how things are done within GNOME. Though realized bringing a laptop was a bad idea (tends to distract me).
  • The talk from Vincent Untz called Ramblings of a retired release manager was impressive. Hit quite a few sore points.
  • Vincent Untz and his need for a green GNOME tie
  • The football and volleyball competition was fun. Loads of people drinking beer while Germans were running laps around us.
  • Football match
  • Realized that not all hostels have a ‘common room’, which was the main reason for going to a hostel.
  • Week went by quickly. Didn’t have a release-team meeting, didn’t see as much of Berlin as I intended, etc.