The Cost of Going it Alone


These are speaker notes from a presentation I gave at the Desktop Summit 2011, on a topic I’ve written about in the past. The slides for the presentation are on the conference website (PDF link).

I’m going to talk about the costs associated with modifying and maintaining free software “out of tree” – that is, when you don’t work with the developers of the software to have your changes integrated. But I’m also going to talk about the costs of working with upstream projects. It can be easy for us to forget that working upstream takes time and money – and we ignore that to our peril. It’s in our interests as free software developers to make it as cost-effective as possible for people to work with us.

Hopefully, if you’re a commercial developer, you’ll come away from this article with a better idea of when it’s worthwhile to work upstream, and when it isn’t. And if you’re a community developer, perhaps this will give you some ideas about how to make it easier for people to work with you.

Softway

In 1996 and 1997, Softway worked on bringing a POSIX API to Windows NT. This involved major patches to all the components of the GCC toolchain – the compiler, linker, assembler, debugger, etc. To make the changes they needed, they hired an 18-year industry veteran of compilers and operating systems, and over the course of 6-8 months, the main body of work was done.

Conscious of the costs involved in maintaining that much work out-of-tree, Softway approached upstream developers to propose that their changes be integrated. The reactions ranged from “this is great, but…” to “NT? Don’t care about it”.

After this initial failure, Softway turned to the GCC company at the time: Cygnus Solutions. Cygnus had hired many GCC maintainers, and at that time if you wanted anything done with GCC, they were the guys to talk to. Their quote? $140,000. And they wouldn’t be able to start work on the project for 14 months.

Deciding this was too expensive, Softway eventually hired another company called Ada Core, maintainers of the Ada front-end for GCC, to rework the patches and get them upstream. Ada Core cost $40,000, and could start the following week. That’s roughly the same amount of money as was originally spent developing the features in question.

Getting Things Done

At the highest level, the question that people want to answer is: “How can I get what I want done, as quickly and cheaply as possible?” The software exists, it does 80% of what I need, and I only need to change it a little to fit my needs. What’s the best way for me to do that?

The most common strategy is to pick a release to build on, and hack away from there. In fact, in probably 90% of cases that’s as far as it goes. I add the features I need for the project I’m working on Right Now, ship it, and never even talk to upstream about what I did, or why.

The cost of this approach is all the “stuff” that gets added upstream (features, bug fixes and security patches) which you never see. Mal Minhas, former CTO of the LiMo Foundation, labelled this “unleveraged potential” (PDF link) – missing out on the work of other people who are doing things in your interests. The underlying assumption is that you will end up redoing at least some of this work during the maintenance of your own version.

To avoid missing out on this work, the usual recommendation is to regularly merge changes from upstream into your local copy of the package. But this merge is typically not free – and the bigger your changes, the more likely you are to find significant conflicts between upstream and your local copy. There is also an additional, often-forgotten overhead of regression testing and validation after each merge. Every time I upgrade a component to a new version, I need to verify that it hasn’t broken anything, either because of changed behaviour at integration points I’m using or simply because regressions were introduced when fixing other issues.

And the worst part about this maintenance cost is that you will incur it every time you upgrade. Almost every time you pull code from upstream, you will have substantial costs involved in merge resolution and regression testing. And to keep the delta between your code and upstream as small as possible, you should do these merges regularly.
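To make the mechanics concrete, here is roughly what that regular merge cycle looks like with git – a minimal sketch, assuming your product lives on a vendor branch in a clone of the upstream repository (the remote name, URL and branch names here are hypothetical):

# one-time setup: track upstream alongside your vendor branch
git remote add upstream git://git.example.org/project.git

# each merge cycle: pull in what upstream has done since the last one
git fetch upstream
git checkout vendor-branch
git merge upstream/master    # conflict resolution happens here

# then re-run your regression suite before shipping the result
make check

The commands themselves are cheap; the recurring costs are the conflict resolution at the merge step and the re-validation afterwards.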

Inevitably, someone will suggest that the maintenance costs have grown to the point where it’s worth your while “giving back” (where this is often a synonym for “dumping our stuff upstream”). The goal is to reduce the delta to a point where only client- or project-specific patches are maintained out of tree, and anything which might be useful to someone else is sent back to the upstream project, to be maintained there. Jim Zemlin recently said in an interview “Let me tell you, maintaining your own version of Linux ain’t cheap, and it ain’t easy”… in his words, “if some aren’t giving back as much as others today, I just think it will naturally happen over time. It always is in their business interest to do so”.

It’s at this point that you will run into the community overhead.

Community overhead

Open Source developers expect contributors to jump through all sorts of hoops to get their code upstream. Most maintainers will request that you re-format patches according to community norms, submit patches which apply cleanly to the head of the development branch, and may suggest alternative approaches to how you should write your patch or feature. Jonathan Corbet has described how this works with the kernel community:

A patch of any significance will result in a number of comments from other developers as they review the code. Working with reviewers can be, for many developers, the most intimidating part of the kernel development process […] when reviewers send you comments, you need to pay attention to the technical observations that they are making. Do not let their form of expression or your own pride keep that from happening. When you get review comments on a patch, take the time to understand what the reviewer is trying to say. If possible, fix the things that the reviewer is asking you to fix. And respond back to the reviewer: thank them, and describe how you will answer their questions.

To take just one example of how projects expect people to do extra work to contribute, think about what version of a piece of software a company is likely to want to integrate into their product or solution. When we worked on QuteCom, the rule I expected our developers to follow was to use only stable releases of any libraries we depended on, which were included in recent releases of major distributions, and were present in Debian testing. I didn’t want my guys debugging unstable versions of software written by other people; we had our own stuff to get out the door. And by requiring that components be present in releases of popular distros, I felt I was lowering the bar to participation, by allowing community members to get started by installing devel packages from the distro, rather than downloading and compiling dependencies.

So what happens when you make some changes to the projects you depend on? Those changes are made against a stable version of the software. If your product release takes several months, it is likely that upstream will have moved on. And upstream expects patches against development branches, not against stable releases. So any work that I do needs to be forward-ported to the development branch before it can be submitted.

A developer is left with some choices, none without costs or risk:

  • Ignore upstream completely, scratch your own itch, and sell upgrades to your end client to pay for maintenance costs – this wastes developer time and does not benefit upstream at all, not to mention leaving your software exposed to security issues and bugs that have already been found and fixed upstream.
  • Fork a vendor branch off a stable release, and work on merging it upstream “when the project is finished” – but by that time, other projects have come along, and there is a substantial cost involved in rebasing your work onto the latest and greatest upstream – “when I have time” doesn’t happen very often in the Real World. One way to get around this is to hire someone from upstream to “take care” of getting your code upstream – as we saw with Softway, this can be a significant investment of time and money, but it can all but guarantee that your code will get upstream.
  • The ideal situation – work on a feature branch off the development branch of upstream, and back-port your work to the slow-moving stable version, submitting it for inclusion upstream as soon as it’s finished (see the sketch after this list). Martin Fowler recently pointed out some of the problems with feature branches, but they’re much better than big merges and code dumps. The cost here is that you pay the merge cost up front, in small frequent merges, and take on the extra overhead of keeping two branches in sync. There is also the significant risk that once you’ve put in all the work, the result will be rejected by upstream developers anyway.
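Here is a minimal sketch of that third option using git, assuming upstream develops on a master branch and you ship from a stable product branch (the remote, branch and commit names are all hypothetical):

# develop the feature directly against upstream's development branch
git fetch upstream
git checkout -b my-feature upstream/master

# rebase frequently, so that each conflict is small and fresh
git fetch upstream
git rebase upstream/master

# back-port the finished commits to the stable branch you ship from
git checkout product-stable
git cherry-pick <first-commit>..<last-commit>

# and submit my-feature for review upstream as soon as it is done

The up-front cost is visible in the frequent rebases and in testing on two branches; the payoff is that the patches you eventually submit already apply cleanly to upstream’s head.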

Some stories can illustrate the relative weight of the costs involved in the second and third of these options.

Maemo GTK+

In 2003 or 2004, Nokia started working on modifications to GTK+ for its Nokia 770 tablet. By the time the tablet was released in 2005, the Nokia delta to GTK+ was tens of thousands of lines of code. In addition, a set of mobile-only widgets had been packaged into the Hildon package, which depended on Nokia’s vendor version of GTK+. At that point, when the project became public, Nokia wanted to propose these changes for inclusion upstream in GTK+.

To help with this work, Nokia contracted Imendio (now called Lanedo). At the time they started working on the project, the delta was over 50,000 lines of code. Over the course of 4 years, a lot of work was done to reduce this delta, sometimes by re-writing Maemo features to make them acceptable upstream, sometimes by shepherding changes in, and in part by rebasing Maemo’s GTK+ on GTK+ 2.10, which had itself solved some of the problems the Maemo patches addressed.

And yet, even after four calendar years and many more man-years of effort, Hildon, the (reduced) set of mobile widgets which many Maemo application developers used, did not work perfectly on top of a stock upstream GTK+. When Nokia made a grant of $50,000 to the GNOME Foundation to be spent to enhance the developer experience for MeeGo developers, integrating Hildon widgets upstream and ensuring that Maemo application developers could easily port their applications to MeeGo was a big part of the winning bid.

A huge amount of this work could have been avoided by following Andrew Morton’s advice, given to a developer of the experimental filesystem Tux3:

Do NOT fall into the trap of adding more and more stuff to an out-of-tree project. It just makes it harder and harder to get it merged. There are many examples of this.

But to ask the question the other way around: could the GTK+ maintainers have done more to facilitate the submission of this work upstream?

Wakelocks

In 2005, Google acquired a little-known, stealth-mode company called Android, which we now know was developing a Java- and Linux-based phone operating system. By 2007, when the platform and the first Android phones were announced, the company had made significant changes to the Linux kernel for its needs. One of those changes was called wakelocks – system services and kernel drivers could request that the kernel not go to sleep in the absence of user input (thus locking the device awake, hence the name). Matthew Garrett gave a pretty clear description of what wakelocks do, and why they were added to Android (from his point of view as maintainer of power management in the kernel) at last year’s LinuxCon.

Wakelocks allow the system to save battery, even when there are poorly behaved applications running on the system. For a production environment with thousands of applications of varying quality, that makes sense. So in early 2009, a little-known kernel developer working on Android, Arve Hjønnevåg, proposed that wakelocks be included in the kernel. To that end, he sent the following mail, along with a big patch series, to the Linux kernel mailing list:

The following patch series adds two apis, wakelock and earlysuspend. The Android platform uses the earlysuspend api to turn the screen and some input devices on and off. The wakelock code determines when to enter the full suspend state.

These apis could also be useful to other platforms where the goal is to enter full suspend whenever possible.

It’s fair to say that these changes were not initially well understood or well received. The initial reaction was covered at the time by Jonathan Corbet of LWN.

After a few rounds of revisions, the proposal appears to have been dropped. At least, no significant efforts were made to get the patches merged from March 2009 until early 2010, when a number of things converged into a perfect storm around the issue. At the end of 2009, Greg Kroah-Hartman deleted a number of Android drivers from the staging tree, saying “no one cared about the code”. Partly as a result of that, a number of key kernel figures met in April 2010 at the Collaboration Summit to discuss “the Android problem” with Google engineers. Soon afterwards (perhaps by coincidence), Arve re-posted a new set of patches, which seemed to be well received.

That thread erupted, however, and 1500 emails and some hurt feelings later, an alternative implementation of the suspend blockers functionality, written by Rafael Wysocki, was integrated into the kernel instead.

All of this took a lot of work from Google, essentially after the feature was finished. According to Ted Ts’o, speaking in August 2010:

Android team members have spent literally hundreds of man hours (my mail folder on the suspend blocker thread has over 1500 mail messages, and is nearly 10MB), and have tried rewriting the patches several times, in an attempt to make them be main-line acceptable.

Chris DiBona, speaking in an interview with Paula Rooney, said at the time that there were some developers at Google working on it who “feel burned” by the decision, but acknowledged that the “staffing, attitude and culture” at Google weren’t sufficient to support the kernel crew.

Clearly, there is a cost involved in trying to submit code to the kernel and to other projects, and there is a significant risk that the code will be rejected, in spite of that effort.

Given the heavy-hitting kernel developers working inside Google, I can’t help but wonder whether things would have gone more smoothly if Andrew Morton had been asked to help shepherd the wakelock functionality into the kernel. As Matthew Garrett said in his LinuxCon post-mortem: “Getting code into the kernel is always easier if you have a recognised name associated with it”.

EVMS and IBM

Even when you do everything right, it is possible to have code rejected by upstream – at which point, the best solution is often to suck it up and port your work over to whatever API upstream provided for the same problem space.

In 2000 or 2001, IBM started work on EVMS, the Enterprise Volume Management System. EVMS was a set of kernel drivers and user-space tools which allowed users to manage several virtual disk drives, potentially across several disks and physical partitions. Dan Frye, speaking about the project in August 2002, said it was “a quantum step forward in terms of ease of use, reliability and performance”.

The project was published first on SourceForge in 2001, and was developed in the open, with releases regularly being made against the most recent upstream kernels, as well as for the major distributions.

On October 2nd, 2002, Kevin Corry proposed EVMS for inclusion in the 2.5 kernel. At the time, he wrote:

To make this as simple as possible for you, there is a Bitkeeper
tree available with the latest EVMS source code, located at:
http://evms.bkbits.net/linux-2.5
This tree is sync'd with the linux-2.5 tree on linux.bkbits.net
as of about noon today (Oct 2).

Up to this point, IBM had done everything right, as far as I can tell. They tracked upstream development, talked about their plans and encouraged feedback, and when they proposed EVMS for inclusion, they made it as easy as possible to accept.

At this point, things get a little fuzzy. The end result, though, was that LVM2, an alternative kernel framework for logical volume management developed by a small company called Sistina (later acquired by Red Hat), was merged on October 29th. And on November 5th, Kevin Corry announced a change in direction for EVMS:

As many of you may know by now, the 2.5 kernel feature freeze has come
and gone, and it seems clear that the EVMS kernel driver is not going
to be included. With this in mind, we have decided to rework the EVMS
user-space administration tools (the Engine) to work with existing
drivers currently in the kernel, including (but not necessarily limited
to) device mapper and MD.

Again, IBM did “the Right Thing” and ported their tools to the APIs which were in the kernel. Making that decision earned the team a lot of respect in the kernel community. But at the end of the day, it also cost them – all of the wasted work on their kernel drivers, plus all of the work porting their tools to the new APIs. A back-of-the-envelope calculation, based on the releases of EVMS after that date, suggests that it set the project back ~6 months, and ~18 man-months of work.

When to engage?

So when does it become useful to engage upstream? And, more importantly, is there anything that we, as upstreams, can do to lower that barrier, to make it easier for new companies to engage?

The reality is, if you’re only making small patches to a project, it’s going to take you longer to get your patches upstream than it will to maintain them over time. It’s just not worth your while, unless there is some intrinsic motivation for working with the project.

If you have a moderately sized delta with upstream, and you expect to have to maintain your version over time, then it may be time to start thinking about getting some of that code upstream. There are a couple of approaches you can use: the first is to train up developers working in-house and have them build some relationship capital over time. The second is to hire a company which already employs maintainers to review your code, suggest changes and help shepherd anything appropriate upstream, as Softway did with Ada Core. The relative costs of these options will depend on how welcoming the project is to new developers, how well documented the community processes are, and of course the quality of your code. In any case, at this point you will need to consider the cost involved in rebasing to a later version of upstream, and how often you will have to do it.

However, once your delta goes over a certain threshold, you will end up spending a substantial amount of time in maintenance and regression testing. At this point, the repeated long-term cost of conflict resolution and regression testing every time you merge will outweigh the cost of getting code upstream. Again, there are two options – train up one of your developers, or subcontract the upstreaming work. You might even consider killing two birds with one stone, and including mentorship of one or two of your developers in the subcontracting contract, to grow some in-house project developers. It may also be worth head-hunting someone with an existing reputation in the project.

Finally, if you are using a piece of software as a strategic part of a product portfolio, and you are making substantial changes to it, then you would be insane not to have a maintainer, or at least a senior developer, from the project on payroll. When Samsung decided to include EFL in their phone platform, they hired Rasterman. Google hired Andrew Morton. Red Hat hired many of the GTK+ maintainers of the era when they created Red Hat Advanced Development Labs to develop GNOME back in 2000. Collabora hired Wim Taymans, Edward Hervey and Christian Schaller from the GStreamer project. And the list continues. But if you do hire a maintainer, do so in the knowledge that their primary allegiance may be to “their” project, and not to your company. Being a good maintainer entails doing a lot more than writing code – factor into your schedule that 20-30% of their time will be spent on patch review, rolling releases, documenting roadmap plans, chatting on the mailing list, etc.

Let me leave you with this take-away: We will never make it financially interesting for someone who’s done a quick hack to get that upstream, or get them to fix their “bug” the Right Way. It will always be in the interests of companies with a strategic dependency on a piece of software to hire or train someone to be a core member of the community developing that software. But for everything in between, as free software developers, we can help. We can make it easier for companies to figure out who they can hire to train or mentor their developers, or shepherd changes upstream. We can lower the bar to getting features upstream by documenting community norms and practices, by being nicer to new developers on the list, and by instituting a mentoring programme. By improving patch review processes, we can decrease friction and make it nicer and easier to contribute to the project. If it’s easier for a developer to do a merge every three months than it is for him to talk to you, then your project is missing out.


The Real Life “Lord of the Flies” experiment that went wrong


While reading “The Illusion of Asymmetric Insight”, it occurred to me that the real cost (and tragedy) of Unity versus GNOME Shell, or KDE versus GNOME, is that the split turns us into “us” and “them” – and at that point it is really easy to fall into the trap of reducing all criticism to “haters gonna hate”.


Harmony Agreements reach 1.0


The Harmony agreements reached a significant milestone this week, as they were tagged 1.0 and left the “beta” stage. As someone who has previously taken a position regarding contributor licensing agreements, I was asked what my thoughts on Harmony are.

First off, let me say that I have not followed the Harmony process closely. Indeed, the process, which was semi-open but operated under the Chatham House Rule (any participant may quote what was said in a meeting, but may not name the person who said it), is one of the major issues I have seen people take with Harmony. The lack of a clearly identified team taking responsibility for the contents and standing behind the agreement texts is unfortunate, but I think it’s an issue completely independent of their content and the project’s goals.

The goal of the project, as far as I can tell, is to provide a set of templates for people who might want to use a Contributor Licensing Agreement (CLA). As far as it goes, that is fine. Where there is a danger is if the existence of such a template is used to encourage the adoption of CLAs (including copyright assignment) as a “best practice” to be followed.

A CLA is actually a conflation of two very different things: the first is asking a contributor to certify that they have the right to make their contribution (that it is original work, that they agree to the project’s license, that their employer has given permission for the contribution, etc). The Mozilla project gets their contributors to sign a similar document upon becoming committers, to ensure that they perform due diligence before accepting a patch from a proposer. So this aspect of CLAs is sensible and useful for most projects.

The second part of CLAs is copyright licensing or assignment. This creates an asymmetric situation in the project, where a central copyright holder has the power to make certain decisions for the project, including for code which they did not write. As I said previously, copyright assignment has its downsides: it will prevent (or, at best, make much more difficult) the formation of a diverse developer community around the core of your project. If that is part of what you want to achieve, then you should be aware of that. However, if you are happy to be basically the sole contributor to the project core, and ownership of all of the code in that core is useful for your other goals, then copyright assignment may well be appropriate for your project.

Harmony does attempt to make copyright assignment more acceptable by including a licensing promise in some variants. This is fine. I don’t think I would sign such an agreement myself, but the promise that contributions will always be available under a certain license may well be enough to reassure other potential contributors.

The main “flaw” which others have identified in Harmony is the lack of a patent promise from the assignee to the assigner. I kind of think that this is a red herring, because such a promise should really be explicit in the license under which the contributor got the software in the first place. Having such a promise in a CLA really doesn’t feel necessary or useful.

Overall, I’m sure that some people will find Harmony agreements useful – they will hopefully save communication time between projects with CLAs and developers, and lawyer fees for companies considering the adoption of a CLA. Yet, my priority will continue to be to question the assumptions which lead people to adopt a CLA without fully thinking through the consequences.

Do you really need a CLA to achieve your objectives? Is it, in fact, harmful to some of what you want to achieve? At the end of the day, my position remains the same: the goal should not be to write a better CLA, it should be to figure out whether we can avoid one altogether, and figure out how to create and thrive in a vibrant developer community.


Article: Collaboration Myths from Gartner


Interesting article from Gartner which has some relevance to my recent proposal for a gnome-design mailing list: Gartner Identifies Five Collaboration Myths.

Excerpt:

Myth 1. The right tools will make us collaborative

Technology can make it easier to collaborate when applications mirror a more intuitive, fluid work style, but selecting a tool without addressing roles, processes, metrics and the organization’s workplace climate is putting the cart before the horse.


Effective mentoring programs


I’ve been thinking a lot recently about mentoring programs, what works, what doesn’t, and what the minimum amount of effort needed to bootstrap a program might be.

With the advent of Google Summer of Code and Google Code-In, more and more projects are formalising mentoring and thinking about what newcomers to the project might be able to do to learn the ropes and integrate themselves into the community. These programs led to other organised programs like GNOME’s Women’s Summer Outreach Program. Of course, these initiatives weren’t the first to encourage good mentoring, but they have helped make the idea of mentors taking new developers under their wing much more widespread.

In addition to these scheduled and time-constrained programs, many projects have more informal “always-on” mentoring programs – Drupal Dojo and GNOME Love come to mind. Others save smaller easier tasks for newcomers to cut their teeth on, like the “easy hacks” in LibreOffice. Esther Schindler wrote a fantastic article a few years ago documenting best and worst practices in mentoring among different projects.

Most mentoring programs I have seen and participated in don’t have a very good success rate, though. In this article, I look at why that is the case, and what can be put in place to increase the success rate when recruiting new developers.

Why most mentoring fails

Graham Percival, a GNU LilyPond developer, decided in 2008 to run an experiment. Graham had decided at one point that he would quit the project, but felt guilty about doing so in one go, so he started the “Great Documentation Project” to recruit a replacement documentation team to follow on after his departure. He then spent 12 months doing nothing but mentoring newcomers to get them involved in the project, and documented his results. Over the course of that year, he estimates that he spent around 800 hours mentoring.

His conclusions? The net result for the project was somewhere between 600 and 900 hours of productivity, and at the end of the year, 0 new documentation team members. In other words, Graham would have been better off doing everything himself.

Graham found that “Only 1 in 4 developers was a net gain for the project” – that is, for every 4 apprentices that Graham spent time mentoring, only 1 hung around long enough for the project to recoup the time investment he put into mentoring. A further 1 in 4 were neither a gain nor a loss – their contribution roughly equalled the mentor time that they took up. And the remainder were a net loss for the project, with much more time spent mentoring than the results could justify.

The GNOME Women’s Summer Outreach Program in 2006 had 6 participants. In 2009, the GNOME Journal ran a “Where are they now?” follow-up article. Of the 6 original participants, only one is still involved in free software, and none are involved in GNOME. Murray Stokely did a follow-up in 2008 to track the 57 alumni of Summer of Code who had worked on FreeBSD. Of these, 10 students went on to get full commit access, and a further 4 students were still contributing to FreeBSD or OpenBSD after the project. Obey Arthur Liu also did a review of Debian participants in 2008. Of 11 students from 2008 who had no previous Debian developer experience, he found that 4 remained active in the project one year later.

Speaking from my own experience as a replacement mentor and administrator for the GIMP in the 2006 Summer of Code: we had 6 projects, most of which were considered a success by the end of the summer, yet none of the participating students has made any meaningful contribution to the GIMP since.

I feel safe in saying that the majority of mentoring projects fail – and Graham’s 1 in 4 sounds about right as an overall average success rate. This raises the question: why?

Most mentored projects take too long

What might take a mentor a couple of hours working on his own could well take an apprentice several days or weeks. All of the experience that allows you to hit the ground running isn’t there. The most important part of the mentoring experience is getting the student to the point where he can start working on the problem. To help address this point, many projects now require Summer of Code applicants to compile the project and propose a trivial patch before they are accepted for the program, but understanding the architecture of a project and reading enough code to get a handle on coding conventions will take time. It will also take mentor time. It takes longer to teach a newcomer to your project than to do the work yourself, as anyone who has ever had a Summer intern will attest.

When you set a trainee task which you estimate at about 4 hours’ work, it will end up costing a few weeks of volunteer effort for your apprentice, and 8 to 10 hours of mentoring time for you along the way. Obviously, this is a big investment on both sides, and can lead to the apprentice giving up, or the mentor running out of patience. I remember in the first year of Summer of Code, projects were taking features off their wishlists that had not been touched for years, and expecting students new to the project to come in and work full time implementing them perfectly over the course of 12 weeks. The reality that first year was that most of the time was spent getting a working environment set up, and getting started on the task.

Mentoring demands a lot of mentors

As a free software developer, you might not have a lot of time to work on your project. Josh Berkus, quoted in Schindler’s article, says “being a good mentor requires a lot of time, frequently more time than it would take you to do the development yourself”. According to the Google Summer of Code FAQ, “5 hours a week is a reasonable estimate” for the amount of time you would need to dedicate to mentoring. Federico Mena Quintero suggests that you will need to set aside “between 30 and 60 minutes a day”.

When you only have 10 hours a week to contribute to a project, giving up half of that to help someone else is a lot. It is easy to see how working on code can take a higher priority than checking in with your apprentice to make sure everything is on track.

Communication issues

More mentoring projects fail for lack of communication than for any other reason.

Apprentices may expect their mentors to check in on them, while mentors expect apprentices to ask questions if they have any. Perhaps newcomers to the project are not used to working on mailing lists, or are afraid of asking stupid questions, preferring to read lots of documentation or search Google for answers. In the absence of clear guidelines on when and how parties will talk to each other, communication will tend towards “not enough” rather than “too much”.

No follow-through

Many mentoring programs stop when the first task is complete. The relationship between the mentor and the apprentice lasts until the end of the task, and then either the apprentice goes off and starts a new task, with a new mentor, or that is the end of their relationship with the project. I would be really interested to hear how many Summer of Code mentors maintained a relationship with their students after the end of the summer, and helped them out with further projects. I suspect that many mentors invest a lot of time during the program, and then spend most of their time afterwards catching up on what they wanted to do themselves.

Project culture

In her OSCON keynote in 2009, Skud talked about the creation of a welcoming and diverse community as a prerequisite for recruiting new developers. Sometimes, your project culture just doesn’t suit newcomers to the project. If this happens regularly, then perhaps the project’s leaders need to work on changing the culture, but this is easier said than done. As Chris DiBona says in this video, “the brutality of open source is such that people will learn to work with others, or they will fail”. While many think that this kind of trial-by-fire is fine, it will not be the right environment for everyone. It is really up to each project and its leaders to decide how “brutal” or forgiving they want to be. There is a trade-off: investing time in apprentices who will contribute little is a waste of time, but being too dismissive of a potential new developer could cost your project in the long run.

Mentoring best practices

Is all the effort worth it? If mentoring programs are so much hassle, why go to the bother?

Mentoring programs are needed to ensure that your project is long-term sustainable. As Graham says in his presentation: “Core developers do most of the work. Losing core developers is bad. Projects will lose core developers.” Do you need any other reason to start actively recruiting new blood?

There are a few simple things that you can put in place to give your mentoring program a better chance of success.

Small tasks

Mentored tasks should be small, bite-sized, and allow the apprentice to succeed or fail fast. This has a number of advantages: the apprentice who won’t stick around, or who will accomplish nothing, has not wasted a lot of your mentor’s time. The apprentice who will stay around gets a quick win, gets his name in the ChangeLog, and gains confidence in his ability to contribute. And the quick feedback loop is incredibly rewarding for the mentor, who sees his apprentice attack new tasks and increase his productivity in short order. Graham suggests that a 10-minute task is the right size, with the expectation that the apprentice might take an hour to accomplish it.

A ten-minute task might even take longer to identify and write down than it would to do. You can consider this the bootstrapping cost of the mentoring program. Some tasks that are well suited to this might include:

  • Write user documentation for 1 feature
  • Get the source code, compile it, remove a compiler warning, and submit a patch
  • Critique 1 unreviewed patch in Bugzilla
  • Fix a trivial bug (a one line/local change)

Of course, the types of tasks on your list will change from one project to the next.

Mentoring is management

Just as not everyone is suited to being a manager, not everyone is suited to being a mentor. The skills needed to be a good mentor are also those of a good manager – keeping track of someone else’s activity while making them feel responsible for what they’re working on, communicating well and frequently, and reading between the lines to diagnose problems before it’s too late to do anything about them.

When you think of it in this way, there is an opportunity for developers who would like to gain management experience to do so as a mentor in a free software project. Continually recruiting mentors is just as important as recruiting developers. Since mentoring takes a lot of time, it’s important that mentors get time off, and that new mentors come in to take their place.

Pair apprentices with mentors, not tasks

An apprentice should have the same mentor from the day he enters the mentoring program until he no longer needs or wants the help. The relationship will ideally continue until the apprentice has himself become a mentor. Free software communities are built on relationships, and the key point of a mentoring program is to help create new relationships. Mentoring relationships can also be limited in time – 6 months or a year seem like good limits. The time needed to mentor will, hopefully, go down over this period.

Regular meeting times

Mentors and apprentices should ensure that there is a slot on their calendars for a regular “one on one”. How regularly will depend on the task, and the amount of time you can spend on it. Weekly, fortnightly or monthly are all reasonable in different situations. This meeting should be independent of any other communication you have with the person – it is too easy for the general busyness of a project to swallow up a newbie and prevent their voice from being heard. Rands said it well: “this chatter will bury the individual voice unless someone pays attention.”

Convert apprentices into mentors

Never do you understand the pain of the initial learning curve better than when you have just gone through it. The people best suited to helping out newcomers to the project are those who have just come through the mentoring program themselves.

This is a phenomenon that I have seen in the Summer of Code. Those students who succeed and stay with the project are often eager to become mentors the following year. And they will, in general, be among the best mentors in the project.

Keep track

For everyone involved, it’s useful to have some idea of the issues newcomers run into – so ensure that documenting solutions is part of what you ask of apprentices. It’s also useful to know how successful your mentoring program is. Can you do better than the 1 in 4 success rate of LilyPond? Keeping track of successes and failures encourages new mentors, and gives you data to address any problems you run into.

Manage the mentors

All of this work has overhead. In a small project with 1 or 2 core developers, it’s easy enough to have each core developer take an apprentice under their wing, and co-ordinate on the mailing list. In bigger projects, keeping track of who is a mentor, who is mentoring whom, inviting new mentors, and ensuring that no-one falls through the cracks when a mentor gets too busy is a job in itself. If your mentoring program grows beyond ~5 mentors or so, you might want to consider nominating someone to lead the program (or seeing who steps up to do the job). This is the idea behind the Summer of Code administrator, and it’s a good one.

Go forth and multiply

Developer attrition is a problem in open source, and recruitment and training of new developers is the only solution. Any project which is not bringing new developers up to positions where they can take over maintainership is doomed to failure. A good mentoring program, however, run continuously, should ensure that your project continues to grow and attract new developers, even with a retention rate of only around 25%.

Replenishing your stock of mentor tasks and recruiting new mentors will take effort – the continual attention of someone putting in a few hours a week. If you execute well, then you will have contributed to the long-term diversity and health of your project.

Patches – gifts, or pooled resource?


At UDS recently, Mark Shuttleworth talked about contributor agreements during his keynote. Mark compared contributing a patch to a project while refusing to sign a CLA to giving someone a plant for their garden, while attaching the condition that they couldn’t sell the house without your permission.

This got me thinking. Is a patch really like a gift?

If you’re contributing a one line patch to a big corpus of code, there’s a good argument that this is insufficient to grant you any kind of authority in the project.

But how about if you’re contributing a major feature? Surely you get some say in how your code evolves over time? And if you’re a company, contributing thousands of man-hours and dozens of features to the project, isn’t it reasonable that you get a say in all the decisions related to the project, including licensing decisions?

Let’s take the analogy of the gift of the plant and run with it.

If I offer you a potted plant, I have no reasonable expectations of you. I can’t even tell you where to plant it. It’s yours, the house is yours, my involvement is finished.

But let’s say we start a relationship, and it starts to get serious. I start to sleep over some nights, maybe leave a toothbrush in your bathroom. Do I now have some say about what happens in the house? Probably not about anything important, but you might solicit my opinion for any furniture purchases, it might be OK for me to tidy up once in a while.

Now, things get really serious, and I move in. It’s still your house, but surely I get a say now in everything. Of course, everything that was there before I arrived is yours, maybe you keep the game room just the way you want it. But we discuss and reach an agreement for everything from decorating decisions to which cable supplier we’re going to get. We’re building something shared. Sure, I moved into your house, but now it’s our home. If you decide to sell it, there’s not much I can do, but if you did it without talking to me, I’d be really pissed off. It would signal the end of our relationship, probably.

Let’s go one step further. We get married. We commit to sharing our lives. Surely I get a say in whether you sell the house now?

Unless…

Unless when I offered you that plant on our first date, you asked me to sign an agreement, saying that what was yours was yours, and any future improvements we might make to the house together would be yours too, and of course I could not exercise a claim of ownership over the house.

You never know, I might sign it. I might even offer you a second plant for the house. If you date a lot, you might get a lot of plants. I don’t know if I’d spend money to help you renovate the kitchen, though.

Getting people together


One of the most important things you can do in a free software project, besides writing code, is to get your key contributors together as often as possible.

I’ve been fortunate to be able to organise a number of events over the past 10 years, and also to observe others and learn from them in that time. Here are some of the lessons I’ve learned from that experience:

Venue

The starting point for most meetings or conferences is the venue. If you’re getting a small group (under 10 people) together, then it is usually OK just to pick a city, and ask a friend who runs a business or is a college professor to book a room for you. Or use a co-working space. Or hang out in someone’s house, and camp in the garden. Once you get bigger, you may need to go through a more formal process.

If you’re not careful, the venue will be a huge expense, and you’ll have to find that money somewhere. But if you are smart, you can quite easily get a venue for free.

Here are a few strategies you might want to try:

  • Piggy-back on another event – the Linux Foundation Collaboration Summit, OSCON, LinuxTag, GUADEC and many other conferences are happy to host workshops or meet-ups for smaller groups. The GIMP Developers Conference in 2004 was the first meet-up that I organised, and to avoid the hassle of dealing with a venue, finding a time that suited everyone, and so on, I asked the GNOME Foundation if they wouldn’t mind setting aside some space for us at GUADEC – and they said yes. Take advantage of the bigger conference’s organisation, and you get the added benefit of attending the bigger conference at the same time!
  • Ask local universities for free rooms – This won’t work once you go over a certain size, but especially in universities where academics are members of the local LUG, they can talk their department head into booking a lecture theatre and a few classrooms for a weekend. Many universities will ask to do a press release and get credit on the conference website, and this is a completely fair deal. The first Libre Graphics Meeting was hosted free in CPE Lyon, and the GNOME Boston Summit has been hosted free for a number of years at MIT.
  • If the venue can’t be free, see if you can get someone else to pay for it – Once your conference is bigger than about 200 people, most venues will require payment. Hosting a conference costs the venue a lot, and hosting conferences when the students are gone is a big part of universities’ business model. But just because the university or conference center won’t host you for free doesn’t mean that you have to be the one paying.

    Local regional governments like to be involved with big events in their region. GUADEC in Stuttgart, the Gran Canaria Desktop Summit, and this year’s Desktop Summit in Berlin have all had the cost of the venue covered by the host region. An additional benefit of partnering with the region is that they will often have links to local industry and press – resources you can use to get publicity and perhaps even sponsorship for your conference.

  • Run a bidding process – by encouraging groups wishing to host the conference to put in bids, you are also encouraging them to source a venue and talk to local partners before you decide where to go. You are also putting cities in competition with each other, and like Olympic bids, cities don’t like to lose competitions they’re in!

Budget

Conferences cost money. The major costs for a small meet-up might be covering the travel costs of attendees. For a larger conference, the major costs will be equipment, staff and venue.

Every time I have raised the budget for a conference, my rule of thumb has been simple:

  1. Decide how much money you need to put on the event
  2. Fundraise until you reach that amount
  3. Stop fundraising, and move on to other things.

Raising money is a tricky thing to do. You can literally spend all of your time doing it. At the end of the day, you have a conference to put on, and the amount of money in the budget is not the major concern of your attendees.

Remember, your primary goal is to get project participants together to advance the project. So getting the word out to prospective attendees, organising accommodation, venue, talks, food and drinks, social activities and everything else people expect at an event is more important than raising money.

Of course, you need money to be able to do all the rest of that stuff, so finding sponsors, fixing sponsorship levels, and selling your conference is a necessary evil. But once you have reached the amount of money you need for the conference, you really do have better things to do with your time.

There are a few potential sources of funds to put on a conference – I recommend a mix of all of these as the best way to raise your budget.

  • Attendees – While this is a controversial topic in many communities, I think it is completely valid to ask attendees to contribute something to the costs of the conference. Attendees benefit from the facilities, the social events, and gain value from the conference. Some communities consider attendance at their annual event as a kind of reward for services rendered, or an incitement to do good work in the coming year, but I don’t think that’s a healthy way to look at it.

    There are a few ways for conference attendees to fund the running of the conference:

    1. Registration fees – This is the most common way to get money from conference attendees. Most community conferences ask for a token fee. I’ve seen conferences ask for an entrance fee of €20 to €50, and most people have not had a problem paying this.

      A pre-paid fee also has an additional benefit of massively reducing no-shows among locals. People place more value on attending an event that costs them €10 than one where they can get in for free, even if the content is the same.

    2. Donations – very successfully employed by FOSDEM. Attendees are offered an array of goodies, provided by sponsors (books, magazine subscriptions, t-shirts) in return for a donation. But those who want can attend for free.
    3. Selling merchandise – Perhaps your community would be happier hosting a free conference, and selling plush toys, t-shirts, hoodies, mugs and other merchandise to make some money. Beware: in my experience you can expect less profit from merchandise sales than you would get by charging a registration fee and giving each attendee a free t-shirt.
  • Sponsors – Media publications will typically agree to “press sponsorship” – providing free ads for your conference in their print magazine or website. If your conference is a registered non-profit which can accept tax-deductible donations, offer press sponsors the chance to invoice you for the services and then make a separate sponsorship grant to cover the bill. The end result for you is identical, but it will allow the publication to write off the space they donate to you for tax.

    What you really want, though, are cash sponsorships. As the number of free software projects and conferences has multiplied, the competition for sponsorship dollars has really heated up in recent years. To maximise your chances of making your budget target, there are a few things you can do.

    1. Conference brochure – Think of your conference as a product you’re selling. What does it stand for, how much attention does it get, how important is it to you, to your members, to the industry and beyond? What is the value proposition for the sponsor?

      You can sell a sponsorship package on three or four different grounds: perhaps conference attendees are a high-value target audience for the sponsor; perhaps (especially for smaller conferences) the attendees aren’t what’s important, but rather the attention the conference will get in the international press; or perhaps you are pitching to a company that the conference improves a piece of software they depend on.

      Depending on the positioning of the conference, you can then make a list of potential sponsors. You should have a sponsorship brochure that you can send them, which will contain a description of the conference, a sales pitch explaining why it’s interesting for the company to sponsor it, potentially press clippings or quotes from past attendees saying how great the conference is, and finally the amount of money you’re looking for.

    2. Sponsorship levels – These should be fixed based on the amount of money you want to raise. For a smaller conference, you should figure on your biggest sponsor providing somewhere between 30% and 40% of your total budget. If you’re lucky, and your conference gets a lot of sponsors, that might be as low as 20%. Figure on a third as a ball-park figure. That means that if you’ve decided you need €60,000, you should set your cornerstone sponsor level at €20,000, and scale the other levels accordingly (say, €12,000 for the second level and €6,000 for the third).

      For smaller conferences and meet-ups, the fundraising process might be slightly more informal, but you should still think of the entire process as a sales pitch.

    3. Calendar – Most companies have either a yearly or half-yearly budget cycle. If you get your submission into the right person at the right time, then you could potentially have a much easier conversation. The best time to submit proposals for sponsorship of a conference in the Summer is around October or November of the year before, when companies are finalising their annual budget.

      If you miss this window, all is not lost, but any sponsorship you get will be coming out of discretionary budgets, which tend to get spread quite thin, and are guarded preciously by their owners. Alternatively, you might get a commitment to sponsor your July conference in May, at the end of the first half budget process – which is quite late in the day.

    4. Approaching the right people – I’m not going to teach anyone sales, but my personal secret to dealing with big organisations is to make friends with people inside the organisations, and try to get a feel for where the budget might come from for my event. Your friend will probably not be the person controlling the budget, but getting him or her on board is your opportunity to have an advocate inside the organisation, working to put your proposal in front of the eyes of the person who owns the budget.

      Big organisations can be a hard nut to crack, but free software projects often have friends in high places. If you have seen the CTO or CEO of a Fortune 500 company talk about your project in a news article, don’t hesitate to drop them a line mentioning that, and when the time comes to fund that conference, a personal note asking who the best person to talk to is will work wonders. Remember, your goal is not to sell to your personal contact, it is to turn them into an advocate for your cause inside the organisation, and to create the opportunity to sell the conference to the budget owner later.

    Also, remember when you’re selling sponsorship packages that everything which costs you money could potentially be part of a sponsorship package. Some companies will offer lanyards for attendees, or offer to pay for a coffee break, or ice-cream in the afternoon, or a social event. These are potentially valuable sponsorship opportunities and you should be clear in your brochure about everything that’s happening, and spec out a provisional budget for each of these events when you’re drafting your budget.

Content

Conference content is the most important thing about a conference. Different events handle content differently – some events invite a large proportion of their speakers, while others like GUADEC and OSCON invite proposals and choose talks to fill the spots.

The strategy you choose will depend largely on the nature of the event. If it’s an event in its 10th year with an ever increasing number of attendees, then a call for papers is great. If you’re in your first year, and people really don’t know what to make of the event, then setting the tone by inviting a number of speakers will do a great job of helping people know what you’re aiming for.

For Ignite Lyon last year, I invited about 40% of the speakers for the first night (and often had to hassle them to put in a submission), and the remaining 60% came through a submission form. For the first Libre Graphics Meeting, apart from lightning talks, I think I contacted all but two of the speakers directly. Now that the event is in its 6th year, there is a call for proposals process which works quite well.

Schedule

Avoiding scheduling talks which will appeal to the same people in parallel is hard. At every single conference, you hear from people who wanted to attend talks on similar topics which were on at the same time.

My solution to conference scheduling is very low-tech, but works for me. Coloured post-its, with a different colour for each theme, and an empty talks grid, do the job fine. Write the talk titles one per post-it, add any constraints you have for the speaker, and then fill in the grid.

Taking scheduling off the computer and into real life makes it really easy to see when you have clashes, to swap talks as often as you like, and then to commit it to a web page when you’re happy with it.

I used this technique successfully for GUADEC 2006, and Ross Burton re-used it in 2007.
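
If you do want a quick sanity check before committing the grid to a web page, a few lines of Python are enough. This is purely a sketch – the grid, talk titles and theme labels below are all made up – that flags any slot where two parallel talks share a theme:

    # Hypothetical transcription of the post-it grid: each (day, slot)
    # maps to the talks running in parallel, tagged with their theme.
    schedule = {
        ("Saturday", "10:00"): [("Intro to GTK+", "platform"),
                                ("Qt Quick basics", "platform")],
        ("Saturday", "11:00"): [("GStreamer pipelines", "multimedia"),
                                ("Growing a community", "community")],
    }

    for (day, slot), talks in sorted(schedule.items()):
        themes = [theme for _, theme in talks]
        for theme in sorted(set(themes)):
            if themes.count(theme) > 1:
                clashing = [title for title, t in talks if t == theme]
                print("%s %s: '%s' talks clash: %s"
                      % (day, slot, theme, ", ".join(clashing)))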

Parties

Parties are a trade-off. You want everyone to have fun, and hanging out is a huge part of attending a conference. But morning attendance suffers after a party. Pity the poor community member who has to drag himself out of bed after 3 hours’ sleep to go and talk to 4 people at 9am after the party.

Some conferences have too many parties. It’s great to have the opportunity to get drunk with friends every night. But it’s not great to actually get drunk with friends every night. Remember the goal of the conference: you want to encourage the advancement of your project.

I encourage one biggish party, and one other smallish party, over the course of the week. Outside of that, people will still get together, and have a good time, but it’ll be on their dime, and that will keep everyone reasonable.

With a little imagination, you can come up with events that don’t involve loud music and alcohol. Other types of social event can work just as well, and be even more fun.

At GUADEC we have had a football tournament for the last few years. During the OpenWengo Summit in 2007, we brought people on a boat ride on the Seine, and we went on a classic 19th century merry-go-round afterwards. Getting people eating together is another great way to create closer ties – I have very fond memories of group dinners at a number of conferences. At the annual KDE conference Akademy, there is typically a Big Day Out, where people get together for a picnic, some light outdoor activity, a boat ride, some sightseeing or something similar.

Extra costs

Watch out for those unforeseen costs! One conference I was involved in, where the venue was “100% sponsored”, left us with a €20,000 bill for labour and equipment costs. Yes, the venue had been sponsored, but setting up tables and chairs, and renting equipment like whiteboards and overhead projectors, had not. At the end of the day, I estimate that we used about 60% of the equipment we paid for.

Conference venues charge dearly for everything they provide. Coffee breaks can cost up to $10 per person for a coffee & a few biscuits, bottled water for speakers costs $5 per bottle, and so on. Rental of an overhead projector and mics for one room for one day can cost €300 or more, depending on whether the venue insists that the equipment be operated by their a/v guy or not.

When you’re dealing with a commercial venue, be clear up-front about what you’re paying for.
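
To make those numbers concrete, here is a back-of-the-envelope sketch in Python of a provisional budget for the venue extras alone. Every figure is hypothetical – a 300-person, 3-day event with made-up unit costs – so plug in your own venue’s quotes, but notice how quickly the coffee line dwarfs everything else:

    # Hypothetical provisional budget for venue extras at a 3-day,
    # 300-person conference. All unit costs are made up for illustration.
    attendees = 300
    days = 3
    speakers = 40

    coffee_per_break = 10.0 * attendees         # ~$10/head for coffee & biscuits
    coffee_total = coffee_per_break * 2 * days  # two breaks a day

    av_room_day = 300.0                         # projector + mics, one room, one day
    av_total = av_room_day * 2 * days           # two parallel rooms

    water_total = 5.0 * speakers                # ~$5/bottle for speakers

    total = coffee_total + av_total + water_total
    print("coffee: %.0f  a/v: %.0f  water: %.0f  total: %.0f"
          % (coffee_total, av_total, water_total, total))
    # coffee: 18000  a/v: 1800  water: 200  total: 20000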

On-site details

I like conferences that take care of the little details. As a speaker, I like it when someone contacts me before the conference, says they’ll be introducing me, and asks what I’d like them to say. It’s reassuring to know that when I arrive there will be a hands-free mic and someone who can help fit it.

Taking care of all of these details needs a gaggle of volunteers, and it needs someone organising them beforehand and during the event. Spend a lot of time talking to the local staff, especially the audio/visual engineers.

At one conference, the a/v guy would switch manually to a screen-saver at the end of a presentation. We had a comical situation during a lightning talk session where, after the first speaker, I switched presentations, and while the next presentation showed up on my laptop, we still had the screensaver on the big screen. No-one had talked to the a/v engineer to explain to him the format of the session!

So we ended up with 4 Linux engineers looking at the laptop, checking connections and running various xrandr incantations, trying to get the overhead projector working again! We eventually changed laptops, the a/v engineer realised what the session was, and all went well after that – most of the people involved ended up blaming my laptop.

Have fun!

Running a conference, or even a smaller meet-up, is time consuming, and consists of a lot of detail work, much of which will never be noticed by attendees. I haven’t even dealt with things like banners and posters, graphic design, dealing with the press, or any of the other joys that come from organising a conference.

The end result is massively rewarding, though. A study I did last year of the GNOME project showed that there is a massive project-wide boost in productivity just after our annual conference, and many of our community members cite the conference as the high point of their year.

Lessons learned

community, freesoftware, gnome 119 Comments

After my rather controversial question a few days ago and multiple reactions from around the KDE & Canonical world, a lot of reading and digging into archives, and a lot of conversations with people across the spectrum, I have some preliminary findings and lessons which I hope can serve us going forward to help improve things. Nothing in here is controversial, I think, but each of these is a contributing factor to the current mess we find ourselves in.

tl;dr

For those without the patience to read this article (which is much longer than I intended it to be when I started!), here are the headline points:

  1. FreeDesktop.org is broken as a standards body
  2. Mark Shuttleworth doesn’t understand how GNOME works
  3. GNOME is not easy to understand
  4. Deep mistrust has developed between Canonical, GNOME & KDE
  5. Difficult people are prominent in each of these projects
  6. Behind-closed-doors conversations are poison
  7. For people to work together, they need to be in the same place

In summary, there are a number of things we can do to move forward from where we are now: improve processes & structure for freedesktop.org (this will require buy-in from key GNOME & KDE people), make the operation of GNOME (and the operation of individual modules) more transparent from outside the project, cut out a lot of the back-channel conversations that have been happening over the phone, in person & on IRC, in favour of documented & archived discussions and agreements on mailing lists & wikis, and work to ensure that people working on similar problem areas are talking to each other.

The major challenge we have is how to move beyond the deep mistrust which has evolved between members of our communities, who are all too eager to assume the worst of others, and how we can improve the tone of discourse when some of the most prominent members of our communities are also hard to work with.

Now, to elaborate:

FreeDesktop.org is broken as a standards body

This is not surprising when you consider that it’s written right there on the front page: “freedesktop.org is not a formal standards organization”.

In the case of the StatusNotifier spec, the brokenness shines through. Work was started in April 2009 by Aaron Seigo, using the Galago spec as a starting point. Once KDE had begun working on an implementation, Marco Martin started on an initial draft of a spec. The first-round draft was mostly done by September 17 and proposed as the KNotificationItem spec. Then Aurélien Gâteau and Ted Gould made some (offline) suggestions, resulting in a rename and some revisions in late October. The spec was proposed as the StatusNotifier spec in December 2009.

By the time GNOME developers Dan Winship & Matthias Clasen, and Citrix developer Giles Atkinson, reviewed the spec and made comments on it, too much had been invested in it to make major revisions. Given that, it is disingenuous to call StatusNotifier a cross-desktop standard. Hosting a document on the freedesktop.org wiki does not a cross-desktop standard make.

It’s interesting and ironic to see Aaron mention the nascent DConf specification from 2005 in these terms:

instead, the idea was, “If we propose it on fd.o, then people have to accept it because otherwise they won’t be cooperating with fd.o.” this is completely different from trying to work with others and having those efforts ignored.

In fact, that is exactly how StatusNotifiers were perceived (and exactly how Mark & Aaron are messaging GNOME’s non-adoption of the spec).

There is no freedesktop.org process for proposing standards, identifying those which are proposals and those which are de facto implemented, and perhaps more importantly, there is no process for building consensus around a specification, and signalling that consensus.

If I were in charge, I would require every spec to start with a problem definition. Only by agreeing on the problem can we hope to arrive at a solution which will be acceptable to all. The problem statement is the guiding light of a spec. Then I would make sure that the people with an interest in solving the problem were committed to the project. Only then do you start working on a spec and implementations. Without agreement on a problem, and without the right people at the table from the start, the effort is doomed. Some guidance on the process for the creation of a spec would be a start.

In this case, there was no founding problem statement. The spec proposed by Marco Martin listed this as the problem which it was solving:

The new protocol is based upon D-Bus, and separates the presentation of the items from the logic, in our case the painting is completely controlled by Plasma and the applications registers via D-bus (with a small client library shared across KDE) to a central server, while there can be zero or more instances of the systemtray. if either the serve or no instances of systemtrays that supports this protocol are registered the system will fall back using the old freedesktop.org systray specification.

This is not a compelling problem statement. No user ever had a problem because notifications didn’t use D-Bus.

It’s clear when reading Dan Winship’s follow-up comments that there was disagreement on the problem to solve, as well as disagreement on how to solve it. Dan felt that a spec should include policy, and document expected behaviour, while Aaron and Marco were at that point committed to the separation of “the visualisation” and the API. With a better problem statement, this would have been a minor implementation point; without one, two people ended up arguing over positions, and not interests.

If, instead, people had agreed on the problem of the panel or the issues they wanted to address before starting to work on a solution, things might have gone more smoothly. Note that the wiki page linked above was created at the end of December 2009, and the mailing list post was from February 2010 – to communicate what had already been written, not to concentrate people on a common problem.

Mark Shuttleworth really doesn’t know how GNOME works

This one really surprised me, but I think it’s indisputable. Mark wants GNOME to have “strong, mature technical leadership”. He talks about a GNOME cabal, and GNOME’s strategy being “whatever Jon McCann wants to do with the panel”. Mark and others don’t understand why libappindicator was rejected as an external dependency, misunderstanding that external dependencies are, by definition, dependencies of GNOME modules. He admits himself that he has failed to have Canonical developments considered as “internal to Gnome”, and clearly does not understand the position that the GNOME community as a whole has taken with respect to copyright assignment, or the history behind that position.

My understanding of GNOME is this: GNOME does not have technical leadership – it hasn’t had clear technical leadership since, as I understand it, the creation of the GNOME Foundation (at which point, by design, the board was given a mandate to build and define GNOME, and then soon afterwards removed that mandate from itself). The foundation does not now dictate any vision or direction for GNOME.

It can be argued that this is something which should be changed. That change will be effected by people involved in the foundation and the project. It is not enough for Mark to tell the project that “you need leadership”, or for Jono Bacon to tell foundation members (as he told me in 2007) that they should step up to the plate. Decisions are made by those who turn up – and I consider Mark, Jono, Ted Gould and others as members of the GNOME project, with as much mandate to change GNOME as I do. If Mark wants GNOME to have strong leadership, then he needs to help make that happen.

Given that this is not (yet) how GNOME works, to get things done in GNOME, you need to talk to the right people. That means defining your problem, identifying the stakeholders who are also interested in that problem, and working out a solution with them (am I repeating myself?). Mark seems to want GNOME to behave like a company, so that he can get “his people” to talk to “our people” and make it happen. I think that this misunderstanding of how to wield influence within the GNOME project is a key problem.

But then again, over the years I have heard similar feedback from GNOME Mobile participants, and people in Nokia – so it’s not all Mark’s fault. As Jono says here: GNOME does have a reputation among companies of being hard to work with – no point in denying it (then again, so does the kernel, and they seem to get along fine).

GNOME is not easy to understand

When I evaluated GNOME’s governance for Simon Phipps recently, I scored the project 0 (on a scale of -1 to 1) for the criterion of oligarchic governance. The notes from the evaluation were:

Newcomers to GNOME often have trouble figuring out who’s in charge. The Release Team is responsible primarily for the release process and has not traditionally set any strategic direction for GNOME, and individual module governance rules are varied. The foundation board is responsible primarily for maintaining the infrastructure of the foundation, and dealing with sponsors and benefactors, and does not set any technical direction.

Score: Governance is open, membership of the release team oligarchy is meritocratic – scoring zero for oligarchy because much of the governance is devolved to maintainers, making it hard to figure out how to accomplish project-wide change.

Finding the right person inside the GNOME project to help work on a given problem is not straightforward. If you want to make a change to one module, then it’s as simple as working with the maintainers. If, on the other hand, you want to propose a system-wide change, it is a much harder job – you need to work with module maintainers to get them to adopt your proposal, then work with the release team & the wider GNOME community through the module proposal period to get your library included in one of the module sets. Libraries I can think of in recent times that have not gained sufficient traction include Beagle, Geoclue, Soylent and LeafTag. Other projects like Pyro, GNOME Online Desktop, or Zeitgeist have had baptisms of fire. Even libraries like GStreamer and Telepathy have taken a long time to get traction in core GNOME applications.

Even once you’re in the right place, having work reviewed can take time & effort. I have been told stories of dropped or unreviewed patch-sets by developers I’ve known across a number of projects for many years – one that is mentioned frequently is GNOME Control Center. Maybe persistence was all that would be required, maybe the patches were submitted in a way other than the usual method, or maybe the maintainer was just stuck for time & forgot – in any case, patches were lost, or their integration delayed, and contributors ended up disenfranchised.

But then you can say the same thing about the Linux kernel – contributing to the project is so confusing that Jon Corbet wrote a book about how to contribute, or even KDE – Stuart Jarvis wrote a timely article yesterday explaining that “KDE is not like [a company]. We don’t have leaders. We have prominent community members, but they tend to operate within their own areas of expertise.” Sounds familiar.

The bottom line is that GNOME can improve, but it is not going to change its nature, and working with GNOME needs to be done on the terms of the community you’re working with, not on your terms.

Deep mistrust has developed between Canonical, GNOME and KDE

Regardless of the causes & the history, it’s been made very clear over the past two days that people in our communities are prepared to believe the worst about their fellow free software developers.

Aaron Seigo, for example, clearly has no confidence in GNOME developers as a whole. He writes in a comment that:

@Tom: “do you think the Gnome my way or the high way attitude is connected to company agenda?”

i don’t think so. […] it really seems to be something common to the culture of the project rather than the culture of the companies they work for.

and later:

it’s not a belief that GNOME has decided to not collaborate on this (and other) initiatives for no good reason: it’s a fact. there is a demonstrated “if isn’t invented here, it isn’t used here” pattern of behavior.

Mark clearly believes that GNOME Shell is a Red Hat project. He feels short-changed: he feels that he and his team made a good-faith effort to engage which was rejected, and offered suggestions which were ignored. On the other hand, Jon McCann does not see things the same way. And a lot of GNOME people see the move to Unity as a deliberate effort to undermine GNOME Shell, one more in a series of initiatives designed to give Ubuntu differentiation over their competitors without feeding the results into the upstream ecosystem.

Looking at some of the tweets & comments on the various posts, I see an employee of Intel, a developer from the Junta de Andalucía, a number of ex-Canonical employees, a Novell employee, an unaffiliated volunteer, and others. Mark’s article blaming GNOME for the problems in the relationship was literally met with “WTF”s and laughter.

Ill will toward Canonical as a company is not limited to GNOME – Greg Kroah-Hartman’s infamous presentation at the Linux Plumbers Conference in 2008 comes to mind. Clearly frustration has been building across the community for a number of years, and it’s far too easy to dismiss it as jealousy because Ubuntu has so many users.

Difficult people are prominent in each of these projects

At this point, the participants in what has become a ménage à trois each have a world view so different, and preconceptions about the motivations & attitude of everyone associated with another project so ingrained, that undoing the damage will be very difficult.

It’s made even more difficult because a number of key contributors in the projects in question have a reputation of being hard to work with. In GNOME we have our share of people who, to use a phrase Jon Corbet uses to describe kernel hackers, are “not always concerned with showing a high degree of politeness”. I could come up with 10 names of hackers who might themselves be surprised to be on the list, who would be considered by people who have worked with them to be “prickly” to say the least. These people can be found at all levels of the project – prominent on the GNOME Foundation mailing list, maintainers of modules, employees of all of the major companies working with GNOME, even on the release team.

On the KDE side, Aaron also has this reputation – one KDE contributor I spoke to recently said that if Aaron had been a little more open to the feedback he received, rather than adopting his “habitual air of superiority”, things might have gone better. And he’s not alone.

Reading the thread on freedesktop.org where the StatusNotifier spec was being discussed, it’s clear that Dan and Matthias considered that Aaron was being dismissive of their concerns – and I can certainly see why. Aaron, on the other hand, in his blog post, considers that “there was a lot of communication about Status Notifiers on the freedesktop.org xdg list where good feedback was offered and the specification improved significantly as a result. So communication really can’t be to blame here, at least not communication by those outside of GNOME” – there is a clear disconnect between how the thread was perceived by Aaron and by other participants.

Mark himself is no angel – I’m sure he will recognise that he is not one to avoid polarising positions, or to change his mind easily once it’s made up. On the subject of copyright assignment, for example, his mind is made up. There is no revenue motive behind the decision (I am convinced of this), but on principle, Mark has come to believe that controlling the copyrights to code is a best practice, and nothing will change his mind about that. Similarly, he has made up his mind on GNOME Shell: it is “McCann’s” plaything, design suggestions made early in the process were ignored, and even though he now admits that the result is better than the early mock-ups, it is clear to me that there is no chance that Ubuntu will ever voluntarily adopt GNOME Shell.

We have two problems: first, that key figures in our communities can rub people up the wrong way. Second, it’s easy to ascribe to entire groups the characteristics of the people we come in contact with.

To solve the second problem, we need to start using names instead of project names. It’s too easy to ascribe ill-will to an anonymous faceless project, it’s another thing to do so to an individual with a name & a face. GNOME didn’t reject the StatusNotifier spec – two GNOME contributors on the xdg@freedesktop.org mailing list who read the spec (and who were in a position to do something about it) felt that their concerns were getting short-changed, and disengaged. I’d wager that 90% of GNOME project members didn’t even know about xdg before this week. Let’s call each other by our names (and be nice while we’re at it).

Behind-closed-doors conversations are poison

Another major issue we’ve had is a distinct lack of traceability. It isn’t helped by many Canonical developers using infrastructure which the appropriate GNOME developers don’t – but in fact I don’t care what public forum you use to develop & talk about your software. What is harmful is what I’ve dubbed the Water Cooler – when too many conversations happen in private email, at conferences over drinks, or on IRC, there is no traceability. One example: a key event in the timeline of shell/notifiers/Unity was the UX hackfest in 2008. Different people say different things were said. And there is no written trace of agreements or proposals concerning notifications. I was told that there were some conversations after the conference on IRC, but nothing was sent to a mailing list or recorded in a wiki page.

A member of the GNOME Shell team said recently, in response to some questions about design decisions, that a lot of the discussions & reasons behind the design would have happened on IRC. So there is no trace.

I have personally been involved in dozens of off-the-record discussions with Canonical people. I recall one where Jono urged me to “step up to the plate” and provide technical leadership for GNOME – my response was that that wasn’t the role of the board, and that the distributors who depended on GNOME for their products had to take the lead. It would have been nice to have had that discussion in public.

I know that writing stuff down after the fact is a pain. But it is required to allow for traceability of community decisions and agreements – and also to highlight misunderstandings. If I had a dollar for every time I’ve had a conversation with a client about a technical spec, and he understood something different from what I did, I’d have enough money for a nice meal. By writing down understandings in clear and unambiguous terms, you are also giving the other party to the discussion the opportunity to correct misunderstandings early.

I also understand that there is an interest in putting on a good face, and not airing your dirty laundry in public (ironic, eh?) – for the past few years, the party line in Canonical has been “We love GNOME, we’re a GNOME shop”, while behind the scenes there have been heartfelt conversations about the various problems which exist in GNOME & how to address them. The problem is, because these discussions happen behind the scenes, they stay there. We never get beyond discussions, agreeing there is a problem, but never working together on a solution.

The party line in GNOME & KDE has been “we’re all pulling in the same direction, we like each other” – and for the most part it’s true. But if this week has shown nothing else, it’s shown that senior members of the KDE & GNOME projects actively mistrust each other’s motives & don’t believe we have the same interests at heart. At least, this is clear from Aaron Seigo’s comments on his blog post.

For people to work together, they need to be in the same place

I have seen a number of people say that Canonical worked on libappindicator and Unity “internally, not in the open” or that “a lot of design in free software are (sic) developed in secret”. Yet the code is open, the entire history is in Bazaar… how is this consistent?

First, design is not code. Design documents can behave like code, however, with peer review and an iterative process, and can be wedded to the process of developing code, evolving as technical constraints and schedule pressure get in the way of the original design. Good designers work with coders – and this is how it happens in Canonical too.

However, Canonical has occasionally opted to create new projects, housed in Launchpad, rather than engage existing projects to evolve them. libappindicator is an example – several people suggested that it should or could be part of GTK+. What changed between January 2010, when Ted Gould said “I’d like to think that the code in libappindicator would [be] useful, and maybe even migrate into a replacement for GtkStatusIcon in GTK+”, and February 2010, when he wrote “Q: Shouldn’t this be in GTK+? A: Apparently not”?
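
To see why moving it into GTK+ seemed natural at the time, here is a minimal sketch of the two APIs side by side, using the Python bindings of that era. It is illustrative only – the “example-app” identifier, the icon name and the menu contents are all hypothetical, and a real application would use one mechanism or the other, not both:

    import gtk          # PyGTK (GTK+ 2.x)
    import appindicator # python-appindicator bindings

    # An ordinary GTK+ menu, used by both mechanisms below.
    menu = gtk.Menu()
    quit_item = gtk.MenuItem("Quit")
    quit_item.connect("activate", gtk.main_quit)
    quit_item.show()
    menu.append(quit_item)

    # GtkStatusIcon: the application embeds its own icon in the tray
    # (via XEmbed) and pops up its menu itself on right-click.
    tray = gtk.status_icon_new_from_icon_name("indicator-messages")
    tray.connect("popup-menu",
                 lambda icon, button, time: menu.popup(None, None, None,
                                                       button, time))

    # AppIndicator: the application only declares an id, an icon and a
    # menu; the panel, talking D-Bus, decides how and where to draw them.
    ind = appindicator.Indicator("example-app",  # hypothetical id
                                 "indicator-messages",
                                 appindicator.CATEGORY_APPLICATION_STATUS)
    ind.set_status(appindicator.STATUS_ACTIVE)
    ind.set_menu(menu)

    gtk.main()

The registration half of the two APIs is similar enough that a GtkStatusIcon-style replacement inside GTK+ itself did not look far-fetched.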

Canonical has a policy that its development is done in Launchpad, using Bazaar. Sometimes that’s fine – if you’re originating a project, then you get to choose the infrastructure. Bazaar & Launchpad are working just fine for a plethora of projects. But when you are working with other projects, you need to be where they are.

Cody Russell, long-time GNOME contributor, former Canonical developer, and the developer of client-side decorations for GTK+ among other things, wrote in a comment on Aaron’s blog:

CSD is really not a good example of how stuff development between Canonical and GNOME should work. I’m the person at Canonical who started CSD, but never finished it.

It started as just an experimental hack, and somehow got picked up as a “Canonical project”. Once that happened my immediate manager told me to stop committing code to GNOME git and do any further work on it privately in bzr.

For me this made developing it further much more difficult, because it was an extremely large and intrusive change into GTK+ source code and my manager didn’t want upstream developers to help me with at least peer code review.

Apparently there was originally some desire to have libappindicator developed as part of GTK+. I don’t know why this did not happen, but perhaps the quote above can give some insight into why the project was developed as an independent module.

Similarly, having a discussion on a freedesktop.org list does not ensure that you are getting appropriate cross-platform buy-in for your ideas. There is no guarantee that you are talking to the right people.

Most free software developers I know are on lots of mailing lists, and for all but a small number directly related to their day-to-day work, they just glance over them. I certainly fall into this camp – there are about half a dozen lists I’m on where I will open maybe 1 in 10 emails, those with a subject that looks like it might concern me directly. If you want my attention beyond that, a personal email, IRC ping or IM asking me to comment on something works wonders.

In the early days of freedesktop.org, this is how things worked. There were well defined problems that needed solving, and the people concerned made a conscious effort to get the right people into a central desktop-agnostic mailing list. As time goes on, maintainerships evolve, people change jobs, new people arrive – there is no longer any guarantee that the people on the freedesktop.org mailing lists are the best people to be talking to.

Moving on

So where do we go from here? Well, first GNOME 3. We have a release coming up, and so does Ubuntu, and we’re both going to have a bumpy ride for the next few months, so that is presumably going to be the priority for everyone.

After that, the Desktop Summit will be an opportunity to start building bridges. We’ve made an effort this year to avoid tribalism in the conference, by framing the call for papers according to problem area (multimedia, mobile, platform, etc.) rather than by desktop. We will be continuing this, I’m sure, through paper selection and drafting the agenda. That said, you can bring a horse to water, but you can’t make him drink.

Looking through the list of headings here, a number of them are easily fixable, a number of them are much more troubling, and a result of letting discontent fester for months or years.

We can certainly improve the operation of freedesktop.org – currently there is no freedesktop.org as such; it’s a wiki & a mailing list server. To improve it, there needs to be a process whereby things are agreed, and a way to ensure that all concerned parties are engaged in that process. There were discussions about this in Gran Canaria, including members of both the GNOME and KDE projects. But to evolve freedesktop.org, the buy-in of a number of key GNOME developers is essential – I can’t imagine any long-term changes happening without Owen Taylor’s agreement, for example.

We can increase the transparency of the operation of individual GNOME modules. This is one of the things I hoped to achieve last year with the GNOME census. By identifying the key contributors for each module, and the processes under which each module operates, we can help reduce the friction when people try to figure out how to work with GNOME. Ideally, something like Jon’s guide to the kernel will help reduce the number of dropped or unreviewed patches, make it easier for people to see what kinds of contributions would actually be welcomed by module maintainer teams, and help people figure out how to gain influence in a specific module and eventually become a maintainer themselves.

We must reduce the amount of back-channel discussion between the various project participants. Any important decisions or agreements that happen off-line must get written down & agreed to after the fact. IRC usage has become predominant in some teams, resulting in a lack of transparency of operation – GNOME should adopt the Apache policy of “if it didn’t happen on the mailing list, it didn’t happen”, and encourage companies who want to effect change in the project to do the same. I would appreciate all participants committing to a general policy of releasing design specs and code early for peer review – and in the case of Canonical, working upstream before working in their own distribution.

I think there is potential for a GNOME Design group, for example, with qualified designers on a publicly archived, but invitation-only, mailing list, to allow design collaboration without the high level of poor-quality amateur participation which has typified public usability and design lists.

Finally, smaller, focused teams, started on a case-by-case basis, will serve us better than long-lived “collaboration” mailing lists like desktop-architects or xdg. To ensure that the right people are at the table, they need to be invited, and their presence needs to be documented, on a project-by-project basis. Of course discussions on these lists should be publicly archived, but they need only live as long as the specific problem area is being addressed, and should die a natural death afterwards.

That’s the easy stuff.

The more difficult issue is that we have allowed relationships to degrade so far. It feels like GNOME & Canonical are in a bad relationship – we used to love each other, and now every time we talk it seems like we’re speaking a different language. For a while, it seemed like GNOME & KDE contributors were working productively together & overcoming some of the historical issues between the projects, but over the past 3 years, it’s become clear that the progress we had achieved was illusory, and that deep-seated ill-feeling among a small number of project leaders has ensured that any early progress has been squandered.

In addition, all of GNOME, KDE & Canonical have allowed personality issues to build up. One need only follow the discussions within the GNOME foundation concerning the Code of Conduct to see that the GNOME community has allowed some loud & confrontational characters to gain positions of authority in the project, and KDE is also no stranger to such personality issues among prominent developers.

Solving this problem is much more difficult, if it’s solvable at all. Change inside the GNOME project can only come from the grass roots, and the same goes for KDE. Adopting a code of conduct is less important than actually being nice. Too many people confuse being rude and abrasive with being terse and efficient. And getting a critical mass of community leaders in the same place at the same time to concentrate on common issues and approaches to solving them is difficult when there is so much pent-up frustration and ill-will involved.

The Desktop Summit will be an important meeting point this year, where hopefully some of these issues can be resolved. In the meantime, I hope that we can start some small conversations soon to get people talking and trusting each other’s motives again. It will be a long and arduous process, and will require everyone to accept part of the blame for the situation we find ourselves in, and to accept that better days are possible.

As I said to Jono Bacon yesterday when he suggested we should all just get along & stop digging up the past: “Those who ignore the past are doomed to repeat it. And we have been failing for some time to understand the issues which people have working in freedesktop.org, or with GNOME, KDE or Canonical”. Unlike some people commenting on the various blog posts this week, I think that getting some of the dirty laundry out into the open will be beneficial to the general working environment. Sunlight is a great disinfectant, they say, and a number of issues have been kept under wraps too long by people who want to put a brave face on & pretend for public benefit that everything is rosy.

Well, everything isn’t rosy, and now even a fool could see that. But I don’t think there’s anything broken that we can’t fix. Let’s concentrate on getting GNOME 3.0 and Ubuntu 11.04 out the door, and then get to work mending relationships.

Has GNOME rejected Canonical help?

community, freesoftware, gnome 74 Comments

Through the fall-out from the Unity decision, and now the fall-out from the packaging of Banshee on Natty, I have repeatedly read Canonical & Ubuntu people say “We offered our help to GNOME, and they didn’t want it”.

Exhibit #1:

For starters, some people in the GNOME community moan about how Ubuntu doesn’t pull its weight upstream. They then make it difficult for Ubuntu-y folks to contribute things upstream.

Exhibit #2:

For the app indicators we also had a lot of community involvement, it was based on a Freedesktop.org spec, worked on with consultancy from KDE, we invited GNOME developers to participate in the Freedesktop discussion and proposed them to the GNOME community for inclusion, but it’s not up to us, if they take it or not

Exhibit #3:

Where tensions between Canonical and GNOME have occurred, according to Bacon, is in Canonical’s desktop innovations for improved usability, such as the Ayatana indicators for sound and social media, and the new Unity desktop, all of which were submitted to GNOME and rejected, leaving Canonical to develop them outside the GNOME project. […] Asked whether Canonical could have developed its usability modifications within GNOME, he replies, “To be honest with you, I don’t think it could have been done. The fact that nothing’s been accepted is a pretty reasonable indicator that the two projects have widely different directions.”

Exhibit #4:

We committed to build Unity […] because we had ample reason to believe that the trajectory of the alternatives was going to fail. And it did fail – Gnome 3 looks much more like the vision we painted with Unity than the original vision […] I am sorry that a few Gnome leaders have blocked Gnome’s adoption of Unity API’s, and the stress that will cause, but I feel proud that we had the guts, and the capacity, to design and deliver something wonderful.

I have seen and heard this mentioned by others too, but cannot find further examples right now – additional pointers in the comments would be welcome!

So – given that GNOME is a project which scores very highly as being Open By Rule (disclosure: I put together the evaluation of GNOME for Simon), I thought I would go back through the archives and see how true this was.

Looking at what was actually proposed for inclusion in GNOME from Ayatana work, libappindicator was rejected because (quoting directly from the release team’s decision):

  • it doesn’t integrate with gnome-shell
  • probably depends on GtkApplication, and would need integration in GTK+ itself
  • we wished there was some constructive discussion around it, pushed by the libappindicator developers; but it didn’t happen
  • there’s nothing in GNOME needing it

I went back to see where the discussion happened for the libappindicator proposal. There was a discussion, some back & forth; Ted was (as usual) forthcoming & helpful, and things appeared to be moving approximately in the right direction. There were some issues over copyright assignment, and the discussion petered out. There was no feedback that I could see from the GNOME Shell team – positive or negative – about depending on the library.

Now, this is hardly ideal. I would love to know why there wasn’t a more in-depth debate on using libappindicator in GNOME Shell. Was this ever proposed? If so, where? I can’t find any reference. Was there any reaction other than “we don’t think it’s an issue” to the copyright assignment concerns? Perhaps there was a lot more discussion in another forum that I haven’t linked to – on the release-team list, on IRC, or elsewhere? Comments, please!

I would love to point to other instances of work which has been proposed upstream by Canonical and rejected, but my (admittedly brief) search has not turned up much. I can’t find any online reference to displeasure with the GNOME Shell vision, or proposals of alternatives, nor can I find instances of “Paper Cut” patches being rejected because they were from Canonical or Ubuntu. In fact, the one reference I found to the UX hackfest in 2008 from Mark seemed quite positive about the whole thing.

There are apocryphal stories about patches submitted twice by different people before they were accepted, other stories about people being “impossible to work with”, design feedback being ignored, and more – I would love to see some evidence of this, or some documented criticism from 2008 of some of the GNOME Shell design documents. I hear often that some of the design decisions were unacceptable, but ask which ones, where the discussion took place, or how much effort was spent trying to get things changed, and hand-wavy “lots of stuff” type answers are what you get back.

I would really like to shed some sunlight on this – if we do not have publicly archived references to places where these disagreements have happened, then there are a couple of possible conclusions we can draw: either insufficient effort was made to collaborate, or the effort was made, and GNOME Shell is not sufficiently transparent for the developers and designers to be accountable.

So please – pile in on the comments. I want to know of instances when GNOME has (allegedly) refused contributions or help from Canonical, with links to Bugzilla, mailing lists, even IRC logs or wiki pages. Let’s get to the bottom of this & see if we can’t solve the problem.

Updated after reading Aaron Seigo’s response, to clarify that the reasons for rejecting libappindicator were not mine, but were copied from the release team’s decision.

Where do we go from here?

community, freesoftware, maemo, meego, work 14 Comments

The post-Elopocalypse angst has been getting me down over the past few days. It’s against my nature to spend a lot of time worrying about things that are decided, done, dusted. It was Democritus, I think, who said that only a fool worries about things over which he has no control, and I definitely identify with that. It seems that a significant number of people on mailing lists I’m subscribed to don’t share this character trait.

I prefer to roll with the punches, to ask, “where do we go from here?” – we have a new landscape, with Nokia potentially being a lot less involved in MeeGo over the coming months. Will they reduce their investment in 3rd party developers? Perhaps. I expect them to. Will they lay some people off? I bet that there will be a small layoff in MeeGo Devices, but I’d wager that there will be bigger cuts in external contracts. In any case, this is something over which I have no control.

First up – what next for MeeGo? While MeeGo is looking a lot less attractive for application developers now, I still think there’s a great value proposition for hardware vendors to get behind it in vertical markets. Intel seem committed, and MeeGo (even with Nokia reducing investment) is much broader than one company now. A lot of people are betting the bank on it being a viable platform. So I think it will be, and soon.

Will I continue contributing time & effort to MeeGo? My reasons for contributing to MeeGo were not dependent on Nokia’s involvement, so yes, but I will be carefully eyeing business opportunities as well. I’d be lying if I said that I didn’t expect to get some business from a vibrant MeeGo ecosystem, and now I will need to explore other avenues. But the idea of collaborating on a core platform and building a set of free software form-factor specific UIs is still appealing. And I really do like the Maemo/MeeGo community a lot.

Luckily, the time-to-market difficulties that Nokia experienced are, in my opinion, issues of execution rather than inherent problems in working with free software. Companies have a clear choice between embracing proprietary-style development and treating upstream as “free code” (as Google have with Android), or embracing community-style development and working “The Open Source Way” (as Red Hat have learned to do). Nokia’s problems came from the hybrid approach of engage-but-keep-something-back, which prevented them from leveraging community developers as co-developers, while at the same time imposing all the costs of growing and supporting a large community.

I expect lots of companies to try to learn from this experience and start working smarter with communities – and since that’s where I can help them, I’m not too worried about the medium term.

I would bet on Nokia partners and subcontractors battening down the hatches right now until the dust settles, and potentially looking for revenue sources outside the MeeGo world. If I had a team of people working for me that’s what I’d do. If some Nokia work kept coming my way, I’d be glad of it, but right now I’d be planning a life without Nokia in the medium term.

For any companies who have followed Nokia from Symbian to MeeGo, my advice would be to stick to Linux, convert to an Android strategy, and start building some Windows Phone skills in case Nokia’s bet works out, but don’t bet the bank on it. And working effectively with community developed software projects is a key skill for the next decade that you should be developing (a small plug for my services there).

For anyone working on MeeGo within Nokia, the suspense over who might lose their jobs is worse than the fall, let me reassure you. Having been through a re-org or two in my time, I know that the wait can last weeks or months, and even when the cuts come, there’s always an itching suspicion of another one around the corner. Nothing is worse for morale in a team than wondering who will still be there next month. But you have learned valuable and sought-after skills working on MeeGo, and they are bankable on the market right now. If I were working on MeeGo inside Nokia right now, I think I’d ignore the possibility of a lay-off and get on with trying to make the MeeGo phone as great as possible. If I got laid off, I’d be happy to have a redundancy package worthy of Finland, and would be confident in my ability to find a job as a Linux developer very quickly.

For community members wondering whether to stick with MeeGo or jump ship, I’d ask, why were you hanging out around MeeGo in the first place? Has anything in the past week changed your motivations? If you wanted to have a shiny free-software-powered Nokia phone, you should have one by the end of the year. If you wanted to hack on any of the components that make up MeeGo, you can still do that. If you were hoping to make money off apps, that’s probably not going to happen with MeeGo on handsets any time soon. If you’re not convinced by the market potential of MeeGo apps on tablets, I’d jump ship to Android quick (in fact, why aren’t you there already?).

Qt users and developers are probably worried too. I don’t think that Qt is immediately threatened. The biggest danger for Qt at this point would be Intel & others deciding that Qt was a bad choice and moving to something else. That would be a massive strategic blunder – on a par with abandoning the GTK+ work which had been done before moblin 2 to move to Qt. Rewriting user interfaces is hard and I don’t think that Intel are ready to run the market risk of dropping Qt – which means that they’re pot-committed at this point. If Nokia ever did decide to drop Qt, Intel would probably be in the market to buy it. Then again, I can also see how Qt’s management might try to do an LMBO and bring the company private again. Either way, there will be a demand for Qt, and Qt developers, for some time to come.

No-one likes the guy giving unwanted advice to everyone, so this seems like a good place to stop. My instinct when something like this happens is to take a step back, see what’s inherently changed, and try to see what the landscape looks like from different perspectives. From my perspective, the future is definitely more challenging than it was a week ago, but it’s not like the Elopocalypse wiped out my livelihood. In fact, I have been thinking about life without Nokia since MeeGo was first announced last year, when I guessed that Nokia would prefer working through the Linux Foundation for an independent eye.

But even if Nokia were my only client, and they were going away tomorrow, I think I could probably find other clients, or get a job, quickly enough. It’s important to put these things in perspective.
