Effective mentoring programs


I’ve been thinking a lot recently about mentoring programs, what works, what doesn’t, and what the minimum amount of effort needed to bootstrap a program might be.

With the advent of Google Summer of Code and Google Code-In, more and more projects are formalising mentoring and thinking about what newcomers to the project might be able to do to learn the ropes and integrate themselves into the community. These programs led to other organised programs like GNOME’s Women Summer Outreach Program. Of course, these initiatives weren’t the first to encourage good mentoring, but they have helped make the idea of mentors taking new developers under their wing much more widespread.

In addition to these scheduled and time-constrained programs, many projects have more informal “always-on” mentoring programs – Drupal Dojo and GNOME Love come to mind. Others save smaller easier tasks for newcomers to cut their teeth on, like the “easy hacks” in LibreOffice. Esther Schindler wrote a fantastic article a few years ago documenting best and worst practices in mentoring among different projects.

Most mentoring programs I have seen and participated in don’t have a very good success rate, though. In this article, I look at why that is the case, and what can be put in place to increase the success rate when recruiting new developers.

Why most mentoring fails

Graham Percival, a GNU/LilyPond developer, ran an experiment in 2008. He had decided to quit the project, but felt guilty about doing so in one go, so he started the “Great Documentation Project” to recruit a replacement documentation team to follow on after his departure. He then spent 12 months doing nothing but mentoring newcomers to get them involved in the project, and documented his results. Over the course of that year, he estimates that he spent around 800 hours mentoring newcomers to the project.

His conclusions? The net result for the project was somewhere between 600 and 900 hours of productivity, and at the end of the year, 0 new documentation team members. In other words, Graham would have been better off doing everything himself.

Graham found that “Only 1 in 4 developers was a net gain for the project” – that is, for every 4 apprentices that Graham spent time mentoring, only 1 hung around long enough for the project to recoup the time investment he put into mentoring. A further 1 in 4 were neither a gain nor a loss – their contribution roughly equalled the mentor time that they took up. And the remainder were a net loss for the project, with much more time spent mentoring than the results could justify.

The GNOME Women’s Summer Outreach Program in 2006 had 6 participants. In 2009, the GNOME Journal ran a “Where are they now?” follow-up article. Of the 6 original participants, only one is still involved in free software, and none are involved in GNOME. Murray Stokely did a follow-up in 2008 to track the 57 alumni of Summer of Code who had worked on FreeBSD. Of these, 10 students went on to get full commit access, and a further 4 students were still contributing to FreeBSD or OpenBSD after the project. Obey Arthur Liu also did a review of Debian participants in 2008. Of 11 students from 2008 who had no previous Debian developer experience, he found that 4 remained active in the project one year later.

From my own experience as a replacement mentor and administrator in the Summer of Code for the GIMP in 2006: we had 6 projects, most of which were considered a success by the end of the summer, yet none of the participating students has made any meaningful contribution to the GIMP since.

I feel safe in saying that the majority of mentoring projects fail – and Graham’s 1 in 4 sounds about right as an overall average success/failure rate. This raises the question: why?

Most mentored projects take too long

What might take a mentor a couple of hours working on his own could well take an apprentice several days or weeks. All of the experience that allows you to hit the ground running isn’t there. The most important part of the mentoring experience is getting the student to the point where he can start working on the problem. To help address this point, many projects now require Summer of Code applicants to compile the project and propose a trivial patch before they are accepted for the program, but understanding the architecture of a project and reading enough code to get a handle on coding conventions will take time. It will also take mentor time. It takes longer to teach a newcomer to your project than to do the work yourself, as anyone who has ever had a Summer intern will attest.

When you set a trainee task which you estimate at about 4 hours’ work, it will end up costing a few weeks of volunteer effort for your apprentice, and 8 to 10 hours of mentoring time for you during that period. Obviously, this is a big investment on both sides, and can lead to the apprentice giving up, or the mentor running out of patience. I remember in the first year of Summer of Code, projects were taking features off their wishlists that had not been touched for years, and expecting students new to the project to come in and implement them perfectly, full time, over the course of 12 weeks. The reality that first year was that most students spent most of their time getting a working environment set up, and getting started on their task.

Mentoring demands a lot of mentors

As a free software developer, you might not have a lot of time to work on your project. Josh Berkus, quoted in Schindler’s article, says “being a good mentor requires a lot of time, frequently more time than it would take you to do the development yourself”. According to the Google Summer of Code FAQ, “5 hours a week is a reasonable estimate” for the amount of time you would need to dedicate to mentoring. Federico Mena Quintero suggests that you will need to set aside “between 30 and 60 minutes a day”.

When you only have 10 hours a week to contribute to a project, giving up half of it to help someone else is a lot. It is easy to see how working on code can get a higher priority than checking in with your apprentice to make sure everything is on track.

Communication issues

More mentoring projects fail for lack of communication than for any other reason.

Apprentices may expect their mentors to check in on them, while mentors expect apprentices to ask questions if they have any. Perhaps newcomers to the project are not used to working on mailing lists, or are afraid of asking stupid questions, preferring to read lots of documentation or search Google for answers. In the absence of clear guidelines on when and how parties will talk to each other, communication will tend towards “not enough” rather than “too much”.

No follow through

Many mentoring programs stop when the first task is complete. The relationship between the mentor and the apprentice lasts until the end of the task, and then either the apprentice goes off and starts a new task, with a new mentor, or that is the end of their relationship with the project. I would be really interested to hear how many Summer of Code mentors maintained a relationship with their students after the end of the summer, and helped them out with further projects. I suspect that many mentors invest a lot of time during the program, and then spend their time afterwards catching up on the work they had put off.

Project culture

In her OSCON keynote in 2009, Skud talked about the creation of a welcoming and diverse community as a prerequisite for recruiting new developers. Sometimes, your project culture just doesn’t suit newcomers to the project. If this happens regularly, then perhaps the project’s leaders need to work on changing the culture, but this is easier said than done. As Chris di Bona says in this video, “the brutality of open source is such that people will learn to work with others, or they will fail”. While many think that this kind of trial-by-fire is fine, it will not be the right environment for everyone. It is really up to each project and its leaders to decide how “brutal” or forgiving they want to be. There is a trade-off: investing time in apprentices who will contribute little is wasteful, but being too dismissive of a potential new developer could cost your project in the long run.

Mentoring best practices

Is all the effort worth it? If mentoring programs are so much hassle, why go to the bother?

Mentoring programs are needed to ensure that your project is long-term sustainable. As Graham says in his presentation: “Core developers do most of the work. Losing core developers is bad. Projects will lose core developers.” Do you need any other reason to start actively recruiting new blood?

There are a few simple things that you can put in place to give your mentoring program a better chance of success.

Small tasks

Mentored tasks should be small, bite-sized, and allow the apprentice to succeed or fail fast. This has a number of advantages: the apprentice who won’t stick around, or who will accomplish nothing, has not wasted much of the mentor’s time. The apprentice who will stay around gets a quick win, gets his name in the ChangeLog, and gains confidence in his ability to contribute. And the quick feedback loop is incredibly rewarding for the mentor, who sees his apprentice attack new tasks and increase his productivity in short order. Graham implies that a 10-minute task is the right size, with the expectation that the apprentice might take 1 hour to accomplish it.

A ten-minute task might even take longer to identify and list than it would to do. You can consider this the bootstrapping cost of the mentoring program. Some tasks that are well suited to this might include:

  • Write user documentation for 1 feature
  • Get the source code, compile it, remove a compiler warning, and submit a patch
  • Critique 1 unreviewed patch in Bugzilla
  • Fix a trivial bug (a one line/local change)

Of course, the types of tasks on your list will change from one project to the next.

Mentoring is management

Just as not everyone is suited to being a manager, not everyone is suited to being a mentor. The skills needed to be a good mentor are also those of a good manager – keeping track of someone else’s activity while making them feel responsible for what they’re working on, communicating well and frequently, and reading between the lines to diagnose problems before it’s too late to do anything about them.

When you think of it in this way, there is an opportunity for developers who would like to gain management experience to do so as a mentor in a free software project. Continually recruiting mentors is just as important as recruiting developers. Since mentoring takes a lot of time, it’s important that mentors get time off, and that new mentors come in to take their place.

Pair apprentices with mentors, not tasks

An apprentice should have the same mentor from the day he enters the mentoring program until he no longer needs or wants the help. The relationship will ideally continue until the apprentice has himself become a mentor. Free software communities are built on relationships, and the key point of a mentoring program is to help create new relationships. Mentoring relationships can also be limited in time – 6 months or a year seem like good time limits. The time needed to mentor will, hopefully, go down over this period.

Regular meeting times

Mentors and apprentices should make sure there is a regular “one on one” slot on their calendars. How regularly will depend on the tasks, and the amount of time you can spend on it. Weekly, fortnightly or monthly are all reasonable in different situations. This meeting should be independent of any other communication you have with the person – it is too easy for the general business of a project to swallow up a newbie and prevent their voice from being heard. Rands put it well: “this chatter will bury the individual voice unless someone pays attention.”

Convert apprentices into mentors

Never do you understand the pain of the initial learning curve better than when you have just gone through it. The people best suited to helping out newcomers to the project are those who have just come through the mentoring program themselves.

This is a phenomenon that I have seen in the Summer of Code. Those students who succeed and stay with the project are often eager to become mentors the following year. And they will, in general, be among the best mentors in the project.

Keep track

For all involved, it’s useful to have some idea of the issues newcomers have – ensure that documenting solutions is part of what you ask. It’s also useful to know how successful your mentoring program is. Can you do better than the 1 in 4 success rate of LilyPond? Keeping track of successes and failures encourages new mentors, and gives you data to address any problems you run into.

Manage the mentors

All of this work has overhead. In a small project with 1 or 2 core developers, it’s easy enough to have each core developer take an apprentice under their wing, and co-ordinate on the mailing list. In bigger projects, keeping track of who is a mentor, who is mentoring whom, inviting new mentors, and ensuring that no-one falls through the cracks when a mentor gets too busy, is a job in itself. If your mentoring program grows beyond about 5 mentors, you might want to consider nominating someone to lead the program (or see who steps up to do the job). This is the idea behind the Summer of Code administrator, and it’s a good one.

Go forth and multiply

Developer attrition is a problem in open source, and recruitment and training of new developers is the only solution. Any project which is not bringing new developers up to positions where they can take over maintainership is doomed to failure. A good mentoring program, however, run continuously and even with a retention rate of around 25%, should ensure that your project continues to grow and attract new developers.

Replenishing your stock of mentoring tasks and recruiting new mentors will take effort, and continual maintenance from someone putting in a few hours a week. If you execute well, you will have helped contribute to the long-term diversity and health of your project.

Getting people together


One of the most important things you can do in a free software project, besides writing code, is to get your key contributors together as often as possible.

I’ve been fortunate to be able to organise a number of events in the past 10 years, and also to observe others and learn from them over that time. Here are some of the lessons I’ve learned from that experience:

Venue

The starting point for most meetings or conferences is the venue. If you’re getting a small group (under 10 people) together, then it is usually OK just to pick a city, and ask a friend who runs a business or is a college professor to book a room for you. Or use a co-working space. Or hang out in someone’s house, and camp in the garden. Once you get bigger, you may need to go through a more formal process.

If you’re not careful, the venue will be a huge expense, and you’ll have to find that money somewhere. But if you are smart, you can often get a venue for free.

Here are a few strategies you might want to try:

  • Piggy-back on another event – the Linux Foundation Collaboration Summit, OSCON, LinuxTag, GUADEC and many other conferences are happy to host workshops or meet-ups for smaller groups. The GIMP Developers Conference in 2004 was the first meet-up that I organised, and to avoid the hassle of dealing with a venue, finding a time that suited everyone, and so on, I asked the GNOME Foundation if they wouldn’t mind setting aside some space for us at GUADEC – and they said yes. Take advantage of the bigger conference’s organisation, and you get the added benefit of attending the bigger conference at the same time!
  • Ask local universities for free rooms – This won’t work once you go over a certain size, but if a university has academics who are members of the local LUG, they can often talk their department head into booking a lecture theatre and a few classrooms for a weekend. Many universities will ask to do a press release and get credit on the conference web-site, and this is a completely fair deal. The first Libre Graphics Meeting was hosted for free at CPE Lyon, and the GNOME Boston Summit has been hosted for free for a number of years at MIT.
  • If the venue can’t be free, see if you can get someone else to pay for it – Once your conference is bigger than about 200 people, most venues will require payment. Hosting a conference costs the venue a lot, and hosting conferences while the students are gone is a big part of the business model of universities. But just because the university or conference center won’t host you for free doesn’t mean that you have to be the one paying.

    Local regional governments like to be involved with big events in their region. GUADEC in Stuttgart, the Gran Canaria Desktop Summit, and this year’s Desktop Summit in Berlin have all had the cost of the venue covered by the host region. An additional benefit of partnering with the region is that they will often have links to local industry and press – resources you can use to get publicity and perhaps even sponsorship for your conference.

  • Run a bidding process – by encouraging groups wishing to host the conference to put in bids, you are also encouraging them to source a venue and talk to local partners before you decide where to go. You are also putting cities in competition with each other, and like Olympic bids, cities don’t like to lose competitions they’re in!

Budget

Conferences cost money. Major costs for a small meet-up might be covering the travel costs of attendees. For a larger conference, the major costs will be equipment, staff and venue.

Every time I have raised the budget for a conference, my rule of thumb has been simple:

  1. Decide how much money you need to put on the event
  2. Fundraise until you reach that amount
  3. Stop fundraising, and move on to other things.

Raising money is a tricky thing to do. You can literally spend all of your time doing it. At the end of the day, you have a conference to put on, and the amount of money in the budget is not the major concern of your attendees.

Remember, your primary goal is to get project participants together to advance the project. So getting the word out to prospective attendees, organising accommodation, venue, talks, food and drinks, social activities and everything else people expect at an event is more important than raising money.

Of course, you need money to be able to do all the rest of that stuff, so finding sponsors, fixing sponsorship levels, and selling your conference is a necessary evil. But once you have reached the amount of money you need for the conference, you really do have better things to do with your time.

There are a few potential sources of funds to put on a conference – I recommend a mix of all of these as the best way to raise your budget.

  • Attendees – While this is a controversial topic in many communities, I think it is completely valid to ask attendees to contribute something to the costs of the conference. Attendees benefit from the facilities and the social events, and gain value from the conference. Some communities consider attendance at their annual event as a kind of reward for services rendered, or an incentive to do good work in the coming year, but I don’t think that’s a healthy way to look at it.

    There are a few ways for conference attendees to fund the running of the conference:

    1. Registration fees – This is the most common way to get money from conference attendees. Most community conferences ask for a token fee. I’ve seen conferences ask for an entrance fee of €20 to €50, and most people have not had a problem paying this.

      A pre-paid fee has the additional benefit of massively reducing no-shows among locals. People place more value on attending an event that costs them €10 than one where they can get in for free, even if the content is the same.

    2. Donations – very successfully employed by FOSDEM. Attendees are offered an array of goodies, provided by sponsors (books, magazine subscriptions, t-shirts) in return for a donation. But those who want can attend for free.
    3. Selling merchandising – Perhaps your community would be happier hosting a free conference, and selling plush toys, t-shirts, hoodies, mugs and other merchandising to make some money. Beware: in my experience, you can expect less profit from merchandising sales than you would get by charging a registration fee and giving a free t-shirt to each attendee.
  • Sponsors – Media publications will typically agree to “press sponsorship” – providing free ads for your conference in their print magazine or website. If your conference is a registered non-profit which can accept tax-deductible donations, offer press sponsors the chance to invoice you for the services and then make a separate sponsorship grant to cover the bill. The end result for you is identical, but it will allow the publication to write off the space they donate to you for tax.

    What you really want, though, are cash sponsorships. As the number of free software projects and conferences has multiplied, the competition for sponsorship dollars has really heated up in recent years. To maximise your chances of making your budget target, there are a few things you can do.

    1. Conference brochure – Think of your conference as a product you’re selling. What does it stand for, how much attention does it get, how important is it to you, to your members, to the industry and beyond? What is the value proposition for the sponsor?

      You can sell a sponsorship package on several different grounds: perhaps conference attendees are a high-value target audience for the sponsor; perhaps (especially for smaller conferences) the attendees aren’t what’s important, but rather the attention that the conference will get in the international press; or perhaps you are pitching to a company that depends on the piece of software the conference is improving.

      Depending on the positioning of the conference, you can then make a list of potential sponsors. You should have a sponsorship brochure that you can send them, which will contain a description of the conference, a sales pitch explaining why it’s interesting for the company to sponsor it, potentially press clippings or quotes from past attendees saying how great the conference is, and finally the amount of money you’re looking for.

    2. Sponsorship levels – These should be fixed based on the amount of money you want to raise. For a smaller conference, you should figure on your biggest sponsor providing somewhere between 30% and 40% of your total conference budget. If you’re lucky, and your conference gets a lot of sponsors, that might be as low as 20%. Figure on a third as a ball-park figure. That means if you’ve decided that you need €60,000 then you should set your cornerstone sponsor level at €20,000, and all the other levels in consequence (say, €12,000 for the second level and €6,000 for the third level) – there is a small worked example after this list.

      For smaller conferences and meet-ups, the fundraising process might be slightly more informal, but you should still think of the entire process as a sales pitch.

    3. Calendar – Most companies have either a yearly or half-yearly budget cycle. If you get your submission into the right person at the right time, then you could potentially have a much easier conversation. The best time to submit proposals for sponsorship of a conference in the Summer is around October or November of the year before, when companies are finalising their annual budget.

      If you miss this window, all is not lost, but any sponsorship you get will be coming out of discretionary budgets, which tend to get spread quite thin, and are guarded preciously by their owners. Alternatively, you might get a commitment to sponsor your July conference in May, at the end of the first half budget process – which is quite late in the day.

    4. Approaching the right people – I’m not going to teach anyone sales, but my personal secret to dealing with big organisations is to make friends with people inside the organisations, and try to get a feel for where the budget might come from for my event. Your friend will probably not be the person controlling the budget, but getting him or her on board is your opportunity to have an advocate inside the organisation, working to put your proposal in front of the eyes of the person who owns the budget.

      Big organisations can be a hard nut to crack, but free software projects often have friends in high places. If you have seen the CTO or CEO of a Fortune 500 company talk about your project in a news article, don’t hesitate to drop him a line mentioning that, and when the time comes to fund that conference, a personal note asking who the best person to talk to will work wonders. Remember, your goal is not to sell to your personal contact, it is to turn her into an advocate to your cause inside the organisation, and create the opportunity to sell the conference to the budget owner later.

    Also, remember when you’re selling sponsorship packages that everything which costs you money could potentially be part of a sponsorship package. Some companies will offer lanyards for attendees, or offer to pay for a coffee break, or ice-cream in the afternoon, or a social event. These are potentially valuable sponsorship opportunities and you should be clear in your brochure about everything that’s happening, and spec out a provisional budget for each of these events when you’re drafting your budget.
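
Coming back to the sponsorship levels in point 2 above, here is a minimal sketch (in Python) of turning a budget target into tiers. The ratios are just the rule-of-thumb figures from that point, and the numbers are purely illustrative:

    def sponsorship_levels(budget, cornerstone_share=1.0 / 3):
        # Rule of thumb from above: the cornerstone sponsor covers roughly a
        # third of the budget, the second level is about 60% of that, and the
        # third level about 30% of it.
        cornerstone = budget * cornerstone_share
        return {
            "cornerstone": round(cornerstone),
            "second": round(cornerstone * 0.6),
            "third": round(cornerstone * 0.3),
        }

    # A €60,000 target gives roughly €20,000 / €12,000 / €6,000 levels.
    print(sponsorship_levels(60000))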

Content

Conference content is the most important thing about a conference. Different events handle content differently – some events invite a large proportion of their speakers, while others like GUADEC and OSCON invite proposals and choose talks to fill the spots.

The strategy you choose will depend largely on the nature of the event. If it’s an event in its 10th year with an ever increasing number of attendees, then a call for papers is great. If you’re in your first year, and people really don’t know what to make of the event, then setting the tone by inviting a number of speakers will do a great job of helping people know what you’re aiming for.

For Ignite Lyon last year, I invited about 40% of the speakers for the first night (and often had to hassle them to put in a submission), and the remaining 60% came through a submission form. For the first Libre Graphics Meeting, apart from lightning talks, I think that I contacted all but 2 of the speakers directly. Now that the event is in its 6th year, there is a call for proposals process which works quite well.

Schedule

Avoiding putting talks which will appeal to the same people in parallel is hard. At every single conference, you hear from people who wanted to attend two talks on similar topics which were on at the same time.

My solution to conference scheduling is very low-tech, but works for me. Coloured post-its, with a different colour for each theme, and an empty talks grid, do the job fine. Write the talk titles one per post-it, add any constraints you have for the speaker, and then fill in the grid.

Taking scheduling off the computer and into real life makes it really easy to see when you have clashes, to swap talks as often as you like, and then to commit it to a web page when you’re happy with it.

I used this technique successfully for GUADEC 2006, and Ross Burton re-used it in 2007.
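
If you do want a quick computer-side sanity check before committing the grid to a web page, a few lines of code are enough to flag slots where two talks on the same theme compete for one audience. This is only a sketch – the talks, themes and slots below are invented for illustration:

    from collections import defaultdict

    # Each talk is (title, theme, slot); slots use whatever granularity your grid does.
    talks = [
        ("Intro to the platform", "platform", "Saturday 10:00"),
        ("Hacking the compositor", "platform", "Saturday 10:00"),
        ("Translating the docs", "community", "Saturday 10:00"),
    ]

    by_slot = defaultdict(list)
    for title, theme, slot in talks:
        by_slot[slot].append((theme, title))

    for slot, entries in sorted(by_slot.items()):
        themes = [theme for theme, _ in entries]
        # A repeated theme in one slot means two talks chasing the same audience.
        if len(themes) != len(set(themes)):
            print("Possible clash in", slot, ":", entries)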

Parties

Parties are a trade-off. You want everyone to have fun, and hanging out is a huge part of attending a conference. But morning attendance suffers after a party. Pity the poor community member who has to drag himself out of bed after 3 hours sleep to go and talk to 4 people at 9am after the party.

Some conferences have too many parties. It’s great to have the opportunity to get drunk with friends every night. But it’s not great to actually get drunk with friends every night. Remember the goal of the conference: you want to encourage the advancement of your project.

I encourage one biggish party, and one other smallish party, over the course of the week. Outside of that, people will still get together, and have a good time, but it’ll be on their dime, and that will keep everyone reasonable.

With a little imagination, you can come up with events that don’t involve loud music and alcohol. Other types of social event can work just as well, and be even more fun.

At GUADEC we have had a football tournament for a number of years now. During the OpenWengo Summit in 2007, we took people on a boat ride on the Seine, and went on a classic 19th century merry-go-round afterwards. Getting people eating together is another great way to create closer ties – I have very fond memories of group dinners at a number of conferences. At the annual KDE conference Akademy, there is typically a Big Day Out, where people get together for a picnic, some light outdoor activity, a boat ride, some sightseeing or something similar.

Extra costs

Watch out for those unforeseen costs! One conference I was involved in, where the venue was “100% sponsored”, left us with a €20,000 bill for labour and equipment costs. Yes, the venue had been sponsored, but setting up tables and chairs, and renting equipment like whiteboards and overhead projectors, had not. At the end of the day, I estimate that we used about 60% of the equipment we paid for.

Conference venues are hugely expensive for everything they provide. Coffee breaks can cost up to $10 per person for a coffee and a few biscuits, bottled water for speakers costs $5 per bottle, and so on. Rental of an overhead projector and mics for one room for one day can cost €300 or more, depending on whether the venue insists that equipment be operated by their a/v guy or not.
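
A quick back-of-the-envelope script helps keep those line items visible while you negotiate. The unit prices below are just the indicative figures mentioned above, and the attendee and room numbers are invented:

    # Illustrative numbers only - swap in the figures from your own quote.
    attendees = 200
    days = 3
    costs = {
        "coffee breaks": attendees * days * 2 * 10,  # two breaks a day, ~10 a head
        "speaker water": 40 * days * 5,              # ~40 bottles a day, ~5 each
        "projector and mics": days * 300,            # one room, per day
    }
    print(costs)
    print("total:", sum(costs.values()))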

When you’re dealing with a commercial venue, be clear up-front about what you’re paying for.

On-site details

I like conferences that take care of the little details. As a speaker, I like it when someone contacts me before the conference to say they’ll be introducing me, and to ask what I would like them to say. It’s reassuring to know that when I arrive there will be a hands-free mic and someone who can help fit it.

Taking care of all of these details needs a gaggle of volunteers, and it needs someone organising them beforehand and during the event. Spend a lot of time talking to the local staff, especially the audio/visual engineers.

At one conference, the a/v guy would switch manually to a screen-saver at the end of a presentation. We had a comical situation during a lightning talk session where, after the first speaker, I switched presentations, and while the next presentation showed up on my laptop, we still had the screensaver on the big screen. No-one had talked to the a/v engineer to explain the format of the session to him!

So we ended up with 4 Linux engineers looking at the laptop, checking connections and running various Xrandr incantations, trying to get the overhead projector working again! We eventually changed laptops, and the a/v engineer realised what the session was, and all went well after that – most of the people involved ended up blaming my laptop.

Have fun!

Running a conference, or even a smaller meet-up, is time consuming, and consists of a lot of detail work, much of which will never be noticed by attendees. I haven’t even dealt with things like banners and posters, graphic design, dealing with the press, or any of the other joys that come from organising a conference.

The end result is massively rewarding, though. A study I did last year of the GNOME project showed that there is a massive project-wide boost in productivity just after our annual conference, and many of our community members cite the conference as the high point of their year.

Drawing up a roadmap


One of the most important documents a project can have is some kind of elaboration of what the maintainers want to see happen in the future. This is the concrete expression of the project vision – it allows people to adhere to the vision, and gives them the opportunity to contribute to its realisation. This is the document I’ll be calling a roadmap.

Sometimes the word “roadmap” is used to talk about other things, like branching strategies and release schedules. To me, a release schedule and a roadmap are related, but different documents. Releasing is about ensuring users get to use what you make. The roadmap is your guiding light, the beacon at the end of the road that lets you know what you’re making, and why.

Too many projects fall into the trap of having occasional roadmap planning processes, and then posting a mighty document which stays, unchanged, until the next time the planning process gets done. Roadmaps like these end up being historical documents – a shining example of how aspirations get lost along the way of product development.

Other projects are under-ambitious. Either there is no roadmap at all, in which case the business as usual of making software takes over – developers are interrupt-driven, fixing bugs, taking care of user requests, and never taking a step back to look at the bigger picture. Or your roadmap is something you use to track tasks which are already underway, a list of the features which developers are working on right now. It’s like walking in a forest at night with a head-light – you are always looking at your feet avoiding tree-roots, yet you have no idea where you’re going.

When we drew up the roadmap for the GIMP for versions 2.0 and 2.2 in 2003, we made some of these mistakes. By observing projects like Inkscape (which has a history of excellent roadmapping) and learning from our mistakes, I came up with a different method which we applied to the WengoPhone from OpenWengo in 2006, and which served us well (until the project became QuteCom, at least). Here are some of the techniques I learned, which I hope will be useful to others.

Time or features?

One question with roadmaps is whether hitting a date for release should be included as an objective. Even though I’ve said that release plans and roadmaps are different documents, I think it is important to set realistic target dates on way-points. Having a calendar in front of you allows you to keep people focussed on the path, and avoid falling into the trap of implementing one small feature that isn’t part of your release criteria. Pure time-based releases, with no features associated, don’t quite work either. The end result is often quite tepid, a product of the release process rather than any design by a core team.

I like Joel’s scheduling technique: “If you have a bunch of wood blocks, and you can’t fit them into a box, you have two choices: get a bigger box, or remove some blocks.” That is, you can mix a time-based and feature-based schedule. You plan features, giving each one a priority. You start at the top and work your way down the list. At the feature freeze date, you run a project review. If a feature is finished, or will be finished (at a sufficient quality level) in time for release, it’s in. If it won’t realistically be finished in time for the release date, it’s bumped. That way, you stick to your schedule (mostly), and there is a motivation to start working on the biggest wood blocks (the most important features) first.
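
The triage step at the freeze date is simple enough to sketch in a few lines of Python – the feature names, priorities and review verdicts below are all invented for illustration:

    # Hypothetical project review at the feature-freeze date:
    # each feature is (name, priority, will_be_ready_for_release).
    features = [
        ("New import filter", 1, True),
        ("Scriptable batch mode", 2, False),
        ("Reworked preferences dialog", 3, True),
    ]

    shipping = [name for name, _, ready in sorted(features, key=lambda f: f[1]) if ready]
    bumped = [name for name, _, ready in features if not ready]

    print("In the release:", shipping)
    print("Bumped to the next cycle:", bumped)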

A recent article on lessons learned over years of Bugzilla development by Max Kanat-Alexander made an interesting suggestion which makes a lot of sense to me – at the point you decide to feature freeze and bump features, it may be better to create a release branch for stabilisation work, and allow the trunk to continue in active development. The potential cost of this is a duplication of work merging unfinished features and bug fixes into both branches, the advantage is it allows someone to continue working on a bumped feature while the team as a whole works towards the stable release.

Near term, mid term, long term

The Inkscape roadmap from 2005 is a thing of beauty. The roadmap mixes long-term goals with short-term planning beautifully. Each release has a by-line, a set of one or two things which are the main focus of the release. Some releases are purely focussed on quality. Others include important features. The whole thing feels planned. There is a vision.

But as you come closer and closer to the current work, the plans get broken down and itemised further. The BHAGs of a release two years out get turned into a list of sub-features when it’s one year away, and each of those features gets broken down further as a developer starts planning and working on it.

The fractal geometer in me identifies this as a scaling phenomenon – coding software is like zooming in to a coastline and measuring its length. The value you get when measuring with a 1km long ruler is not the same as with a 1m ruler. And as you get closer and closer to writing code, you also need to break down bigger tasks into smaller tasks, and smaller tasks into object design, then coding the actual objects and methods. Giving your roadmap this sense of scope allows you to look up and see in the distance every now and again.
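
For the curious, the coastline analogy has a precise form – Richardson’s scaling relation, which says that the measured length grows as the ruler shrinks:

    L(G) \approx M \, G^{1 - D}

where G is the ruler length, D > 1 is the fractal dimension of the coastline and M is a constant. The shorter the ruler, the longer the coast appears – just as tasks keep subdividing the closer you get to writing actual code.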

Keep it accurate

A roadmap is a living document. The best reason to go into no detail at all for future releases, beyond specifying a theme, is that you have no idea yet how long things will take to do when you get there. If you load up the next version with features, you’re probably setting the project team up for a long death-march.

The inaccurate roadmap is an object of ridicule, and a motivation killer. If it becomes clear that you’re not going to make a date, change the date (and all the other dates in consequence). That might also be a sign that the team has over-committed for the release, and an opportunity to bump some features.

Leave some empty seats

In community projects, new contributors often arrive who would like to work on features, but they don’t know where to start. There is an established core team claiming features for the next release left and right, and the new guy doesn’t know what to do. “Fix some bugs” or “do some documentation” are common answers in many projects, including GNOME (with the gnome-love keyword in Bugzilla) and LibreOffice (with the easy hacks list). Indeed, these do allow you to get to know the project.

But, as has often been said, developers like to develop features, and sometimes it can be really hard to know which features are important to the core team. This is especially true with commercial software developers. The roadmap can help.

In any given release, you can include some high priority features – stuff that you would love to see happen – explicitly marked as “Not taken by the core team”. It should be clear that patches of a sufficiently high standard implementing the feature would be gratefully accepted. This won’t automatically change a new developer into a coding ninja, nor will it prevent an ambitious hacker from biting off more than he can chew, but it will give experienced developers an easy way to prove themselves and earn their place in the core team, and it will also provide some great opportunities for mentoring programs like the Google Summer of Code.

The Subversion roadmap, recently updated by the core team, is another example of best practice in this area. In addition to a mixed features & time based release cycle, they maintain a roadmap which has key goals for a release, but also includes a separate list of high priority features.

The end result: Visibility

The end result of a good roadmap process is that your users know where they stand, more or less, at any given time. Your developers know where you want to take the project, and can see opportunities to contribute. Your core team knows what the release criteria for the next release are, and you have agreed together mid-term and long-term goals for the project that express your common vision. As maintainer, you have a powerful tool to explain your decisions and align your community around your ideas. A good roadmap is the fertile soil on which your developer community will grow.

The Lifecycle of a Patch (or: Working Upstream)


Reposted from Neary Consulting

Yesterday I looked into what it means to be a maintainer of a package. Today, I’m going to examine how to effect change in a distribution like MeeGo, and what it means to work upstream. To do so, we’re going to look at how code gets from a developer’s brain into the hands of a user.

So – how can you make a change in a Linux-based distribution? Here’s what happens when everything works as it should:

  1. You open a bug report for the feature against your distribution
  2. You identify the module or modules you need to change to implement the new feature
  3. You open bug reports for each of the modules concerned, detailing the feature and the changes needed in that module for the feature
  4. You write a patch to implement the feature, and propose it (appropriately cut up for ease of review) to the maintainers of those modules
  5. Once the code has gone through the appropriate review process, it will be committed to the source control of the module(s)
  6. Some time later, the maintainer of each module will include that code in a stable release of the module
  7. Some time after that, the new stable versions will be packaged and uploaded to MeeGo
  8. Your code will be included in the next release of the distribution following the upload.

When people talk about “working upstream” in MeeGo or Linaro, this is what they mean.

To simplify matters for our analysis, let’s consider that the feature we want to implement is self-contained in one module (or related modules which release together). There are two different scenarios we’ll consider:

  1. The module is maintained by people not associated with your distribution (for example, a GNU or GNOME project)
  2. The module is maintained by people closely related to your distribution (for example, Unity in Ubuntu, or oFono in MeeGo)

We will also look at a third situation, where you find and fix a bug in the software you are using – that is, a released version of a distribution (the proverbial “scratching an itch”).

For each case, I will try to pick a representative feature/patch and follow it from developer through to distribution to Real Users.

What if your code changes different projects?

If your code touches several modules (for example, if you are proposing some new API in GTK+ which you want to use in the GIMP) then things can get complicated – you will need a stable version of GTK+ to be released before you can ship a stable release of the GIMP which depends on it.

This issue of staggered releases is one that Andrew Cowie pointed out a few years ago for language bindings. To avoid making bindings on shifting sands, he preferred to package new APIs once they had been included in a stable GNOME release. In turn, Java-GNOME developers rarely depend on development release bindings, and they would wait for the new API to be included in a stable bindings release. For example, the gtk_orientable_get_orientation function, added to GTK+ at the end of September 2008, was released in GTK+ 2.16, in March 2009. The first version of Java-GNOME which depended on GTK+ 2.16 was version 4.0.13, released in August 2009. That was packaged in distributions in Autumn 2009, and so most users would not have access to the newer bindings until a few months after that – perhaps early 2010 – by which point the API had been written 18 months beforehand.

And that is when you have a regular release schedule you can rely on! Pity the developer who wants to release a GIMP plug-in which depends on some API included in GIMP 2.8 – the last stable GIMP release, 2.6, came out in October 2008, and over two years later, 2.8 has still not been released. And when you combine unreliable release schedules for distributions and applications, the results are cumulative: users of the stable Debian distribution are still using GIMP 2.4 releases. GIMP 2.4 was released in October 2007. Features added to the GIMP in late 2007 are still not in the hands of users of stable Debian distributions.

Getting features to users

It is difficult to generalise when users upgrade their Linux distributions, or even to say what proportion of Linux users are new users at any given time. It would be over-simplifying to say that developers use bleeding-edge distributions, power users upgrade early to the latest and greatest, new users install the latest distributions available, but will only upgrade every 18 months or so afterwards, and conservative users stick with “Long term service” or stable distributions. Most developers I know use their computer for work (and thus want a stable distribution) and only install the latest versions of various dependencies they need to work on their project. But let’s generalise and say that this is roughly the case. So (guesstimating) about 10% of your users will be upgrading to the latest distribution very quickly after its release, a further 20% in the months after when the bugs are shaken out, and the rest will follow along in their own time, perhaps 12 or 18 months later.

To make this concrete, let’s follow the life of a single patch. This is complete anecdata, but in my defence, the patch has been chosen at random, from a project which I know has good community processes and release management in place. The patch we’re going to follow adds an extension to Inkscape to render objects along triangular paths.

  1. Bug #226001 opened on 2008-05-03 by inductiveload, with a description of the feature to be added, and proposed code to implement it. The code, as an extension, may have a lower bar for acceptance than code which is core to a project.
  2. Patch submission reviewed on 2008-05-03, minor comments, but patch is accepted (note: this was not the author’s first submission to Inkscape)
  3. Patch corrected to respond to comments and committed on 2008-05-03 (did I mention these guys had good community processes!?!)
  4. Inkscape 0.47-pre0, containing the Triangle extension, released on 2009-07-02
  5. Inkscape 0.47-pre4 included in Ubuntu 9.10

So for a feature developed in mid 2008, most Inkscape users would still not have the feature by the end of 2009, 18 months later. This is both a typical and atypical example: in many projects, patch proposals lie unreviewed for days, weeks, sometimes months, but the 0.47 release cycle was a particularly long one for Inkscape. However, I think the lag of ~12 to 18 months from code written to presence on users’ hard drives is about right.

Does it have to be this hard?

If this were the only way to get features into a distribution, trying to improve MeeGo by contributing upstream would be a very frustrating experience. Happily, there are ways to accelerate the process. Take the MeeGo kernel as an example (where Greg Kroah-Hartman recently threw in the towel on persuading people to propose patches upstream); the process is supposed to work like this:

  1. Propose a patch for inclusion upstream. This patch will then ship in a future stable kernel release (let’s say 2.6.38).
  2. After peer review, when the code has been accepted for inclusion in the kernel upstream, propose a backport for inclusion in the MeeGo kernel. The back-ported patch will be maintained across the next MeeGo release, and will be dropped when the kernel version included in the MeeGo project catches up with 2.6.38.

The overhead here is reduced basically to the peer review process of the upstream project, and the cumulative cost of merging a patch over the course of 6 months.
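
The bookkeeping behind “drop the backport when the distribution catches up” is simple enough to sketch. The function and version numbers below are invented for illustration:

    def can_drop_backport(distro_kernel, upstream_release_with_patch):
        # True once the distribution kernel has caught up with the upstream
        # release that carries the patch (versions as (major, minor, micro)).
        return distro_kernel >= upstream_release_with_patch

    # Hypothetical example: a patch merged upstream for 2.6.38, carried as a
    # backport while the distribution still ships 2.6.37.
    print(can_drop_backport((2, 6, 37), (2, 6, 38)))  # False - keep carrying it
    print(can_drop_backport((2, 6, 38), (2, 6, 38)))  # True - the backport can be dropped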

As a distributor (or a developer working on a specific distribution), this allows you to get code to everyone, eventually, and have that code included in your distribution as soon as you are sure that it is up to the standard expected by the community. Currently in MeeGo, the trend seems to be more towards submitting patches concurrently upstream and to the MeeGo kernel maintainers (or even submitting them upstream only once they have been accepted into the MeeGo kernel). If a patch requires substantial modifications upstream, or is rejected outright, the kernel maintainers are then left carrying the patch indefinitely in the distribution. For one patch, this might not be a big deal, but for thousands of patches, the maintenance and integration burden adds up.

It is also not unusual for kernel developers to maintain their own git branches for a long time. Three examples that come to mind are inotify, which Robert Love maintained for over a year both for Novell and in the kernel before it was accepted into the mainline; ReiserFS, which was maintained for several years out-of-tree before being shipped with the Linux kernel in 2001; and the fast desktop patchset which Con Kolivas maintained for almost five years on the -ck kernel branch. Distributions will occasionally ship a substantial diff to upstream if there is a maintainer committed to getting the code upstream eventually. Allocating someone to work over a long period to make everyone happy and comfortable with your code may enable you to ship a big patch against upstream for a while, but this will not be sustainable in the long term.

To summarise: when working upstream, as a distribution, you should only ship with patches which have been accepted in a development version of upstream already, if you can help it.

Meetings in telephone boxes

Sometimes, however, when upstream and downstream coincide, you can simplify things considerably, while also adding a small measure of risk.

In MeeGo, to continue with that example, the distribution architects have a pretty good idea when they can expect emergency telephony to be ready for oFono and the MeeGo telephony stack, because they’re writing it. By co-ordinating the upstream release management with downstream packaging, you can make promises as a distribution which you can’t with community-developed software.

When upstream and downstream co-ordinate with each other, we cut out the middleman. The workflow becomes:

  1. Report a bug/feature request against a component of the distribution
  2. Develop a patch which implements the feature, and submit it directly to the distribution bug tracker
  3. Once it has been reviewed and accepted, you know that your patch will be included in the next version of the distribution.

This gives a distribution much more control, both over what gets done, and when, and explains both the Ayatana and MeeGo UX development projects. However, being able to plan around the release is no guarantee that the release will happen on time: GNOME has in the past been stung by planning during the 2.6 development cycle to depend on a new version of GTK+, only to find that the release was delayed. In the end, the GTK+ release shipped in time for the 2.6 release at the end of March.

Scratch scratch

The other patch lifecycle I’d like to mention, because it is so relevant to distributions, was pointed out to me by Federico Mena Quintero yesterday. What happens to a patch that someone makes and submits to a distribution when they find a bug in stable released software? This is one of the key advantages of free software – if you find a bug in the software you use, and you have the wherewithal, you can fix the bug and share that fix with everyone else.

However, as we have seen, there is typically a lag of several months from the time that software is released and the time it is being used by large numbers of users through distributions. With releases of Red Hat Enterprise Linux, Novell Suse Linux Desktop and Ubuntu LTS being supported for up to 5 years, it is possible that important bugs will be fixed in these stable versions for years after the original developers have moved on, and are no longer maintaining older stable versions.

Let’s say I find and fix a bug in Rhythmbox 0.12.5, which ships with Ubuntu 9.10. I open a bug report on Launchpad, attach a fix to the source .deb there, and I update my local copy. As a user, I’m happy – I have fixed my problem and shared the solution with others. If I’m particularly conscientious, I might open a bug on gnome.org against Rhythmbox and attach my patch there, but since the development version is now 0.13.2, the best you can hope for is that the patch applies cleanly to the master branch, and will be included in the next release. It is very unlikely that the upstream maintainers will release another update to the 0.12 series at this point.

Now imagine that you are a maintainer for Suse, and someone reports the same bug against a long-term service release. In practice, there are several different versions being maintained by different distributions, and no good way to know if the same bug has been reported and fixed by someone else. You end up searching for a fix in upstream bug trackers, and in the bug trackers of each of the other main distributions. According to Federico at the time:

Patches for old versions are traded in the black market. You have friends in another distro? You ask them first, “did you guys already fix this?” Those patches don’t ever manage to reach CVS, where everyone would be able to get them.

Ideally, you could collaborate ahead of time with other distributions to ensure that you are all using the same branch of upstream modules, and are committing patches upstream. The Linux kernel is moving to this model, and there are also discussions underway in GNOME to co-ordinate this type of activity. Mark Shuttleworth has also pushed for something similar by encouraging projects in the core Linux platform to have a regular cadence of releases, so that everyone can synchronise their longer term service offerings every couple of years.

But at the moment, the best you can hope for is that your patch will be included in an upcoming release of your distribution, at which point other users of the distro can avail of it, and that upstream will patch their development version and latest stable versions, and get your patch to everyone in a few months.

Working upstream

The goal of this article is to explain what working upstream actually means, and how to make that more palatable for a distribution that wants to get features written and included in their next release. Hopefully, by pointing out some of the shortcomings of the way patches circulate from developers to users, some of these issues can be addressed.

In any case, one thing is clear – if you are carrying a patch as a distribution without ever submitting it upstream, you are making a costly mistake. You will be carrying code that others won’t, and bearing all of the merge and maintenance burden for that code for years to come. The path to maximum happiness is to co-ordinate with other distributions and with upstream to ensure that everyone is working in the same place, and sharing work as much as possible.

Getting the hobbyist back


When I first installed a Linux distribution in 1996 or 97, it was a horrible experience. My friend who had started a few months before me thought it was great, though – I remember, it was Red Hat 5, the first version that had an ncurses installer that helped you through the process.

I was confronted with dozens of questions I knew nothing about and wasn’t equipped to answer. Did I want to create a primary or secondary partition? What was its mount point and filesystem type going to be? What was the manufacturer of my video card? What resolution & refresh rate did I need for my monitor (WARNING: the wrong answer can make your monitor explode!)? What was my IP address, netmask, gateway, DNS server? What keymap did I want for my keyboard? Was my NIC using ISA or PCI? Was it 10baseT or 100baseT? Which driver did it need? Was my mouse a PS/2, serial, Microsoft or “Other” (there was always an “Other”)? And on and on it went. How the hell did I know? What did I care?

But it was a learning experience. Installing Linux was the period in my life where I learned the most about how computers worked, hardware and software. Back then, if you wanted to try out an application you heard about, there was only one way to do it – download the source code and compile it. I had a wad of software in /usr/local, including MySQL, the GIMP, Scilab, and a bunch of other stuff I’ve forgotten. There was no online distribution channel. Free software developers didn’t do packaging, there were no PPAs. If it didn’t come on the install CD, it needed compiling.

It wasn’t better. There were fewer of us. Linux had a name as a hobbyist’s toy for a reason. Those of us who were around had a certain minimum knowledge of our systems. You knew shell commands because there was no other way to do anything. Everyone knew about fstab and resolv.conf and ld.so.conf and compiling kernel modules, because you had to. And every time you installed software, you had the source code – right there on your computer. And you knew how to compile it.

I don’t know if I would ever have made a patch for the GIMP if I hadn’t had the source code and already gone through the pain of compiling & installing all its dependencies. I doubt it very much. And yet that’s the situation that the vast majority of Linux users are in today – they have never compiled any software, or learned about the nuts & bolts of their OS, because they don’t have to.

I remember Nat Friedman talking about this in a presentation he made a few years ago about how to become a free software developer. Step 1 was “download some source code”, step 2 was “compile and install it”, step 3 was “Find something you want to change, and change it”. And I recall that Nat identified step 2 as the major stumbling block.

We have bred a generation of free software users who have never compiled software, and don’t particularly care to. Is it any wonder that recruitment of developers appears to be slowing, and that prominent older projects are suffering something of a demographic crisis, with hoary old 30-year-olds holding down the fort and no fiery young whippersnappers coming up to relieve them?

Is this a problem? It seems like it to me – so I ask the question to a wider audience. With a second question: how can we get the hobbyist back into using free software, at least for some part of our community?

Code:Free – a reminder that our software is for doing stuff

community, freesoftware, gimp, inkscape, scribus 5 Comments

I recently came across Code:Free, a webzine (made with Scribus) which showcases some great examples of art created with free software tools, along with tutorials on how to achieve some nice effects – it’s kind of a compilation of the best of DeviantArt made with Free tools. Seeing Ton Roosendaal keynote the Maemo Summit last weekend was a reminder that the goal behind creating software is to have your users take it & do cool stuff with it.

The webzine itself is gorgeously laid out and the art in it is very good indeed. Congratulations to Chrisdesign (of gimpforums.de fame) on this great initiative – long may it continue!

The value of engagement

community, freesoftware, General, gimp, gnome, maemo, work 5 Comments

(Reposted from Neary Consulting)

Mal Minhas of the LiMo Foundation announced and presented a white paper at OSiM World called “Mobile Open Source Economic Analysis” (PDF link). Mal argues that by forking off a version of a free software component to adjust it to your needs, run intensive QA, and ship it in a device (a process which can take up to 2 years), you are leaving money on the table, by way of what he calls “unleveraged potential” – you don’t benefit from all of the features and bug fixes which have gone into the software since you forked off it.

While this is true, it is also not the whole story. Trying to build a rock-solid software platform on shifting sands is not easy. Many projects do not commit to regular stable releases of their software. In the not too distant past, the FFmpeg project, universally shipped in Linux distributions, had never made a stable or unstable release. The GIMP went from version 1.2.0 in December 1999 to 2.0.0 in March 2004 with development happening only on the unstable branch, and only bug-fix releases on the 1.2 series.

In these circumstances, getting both the stability your customers need, and the latest & greatest features, is not easy. Time-based releases, pioneered by the GNOME project in 2001, and now almost universally followed by major free software projects, mitigate this. They give you periodic sync points where you can get software which meets a certain standard of feature stability and robustness. But no software release is bug-free, and this is true for both free and proprietary software. In the Mythical Man-Month, Fred Brooks described the difficulties of system integration, and estimated that 25% of the time in a project would be spent integrating and testing relationships between components which had already been planned, written and debugged. Building a system or a Linux distribution, then, takes a lot longer than just throwing the latest stable version of every project together and hoping it all works.

By participating actively in the QA process of the project leading up to the release, and by maintaining automated test suites and continuous integration, you can both mitigate the effects of the shifting sands of unstable development versions and reduce the integration overhead once you have a stable release. At some stage, you must draw a line in the sand and start preparing for a release. In the GNOME project, we freeze progressively: first the API & ABI of the platform, then the features to be included in existing modules, new module proposals, strings and user interface changes, before finally declaring a complete code freeze pre-release. Similarly, distributors decide early what versions of components they will include on their platforms, and while occasional slippages may be tolerated, moving to a new major version of a major component of the platform would cause integration testing to return more or less to zero – the overhead is enormous.

The difficulty, then, is what to do once this line is drawn. Serious bugs will be fixed in the stable branch, and they can be merged into your platform easily. But what about features you develop to solve problems specific to your device? Typically, free software projects expect new features to be built and tested on the unstable branch, but you are building your platform on the stable version. You have three choices at this point, none pleasant – never merge, merge later, or merge now:

  • Develop the feature you want on your copy of the stable branch, resulting in a delta which will be unique to your code-base, and which you will have to maintain separately forever. In addition, if you want to benefit from the features and bug fixes added to later versions of the component, you will incur the cost of merging your changes into the latest version, a non-negligible amount of time.
  • Once you have released your product and your team has more time, propose the features you have worked on piecemeal to the upstream project, for inclusion in the next stable version. This solution has many issues:
    • If the period is long enough, the codebase will have evolved considerably since you wrote your feature, and merging your changes into the latest unstable tree will be a major task.
    • You may be redundantly solving problems that the community has already addressed, in a different or incompatible way.
    • The features you submit may need substantial re-writing to meet community standards. This is doubly true if you have not consulted the community before developing the feature, to see how it might best be integrated.
    • In the worst case, you may have built a lot of software on an API which is only present in your copy of the component’s source tree, and if your features are rejected, you are stuck maintaining the component, or re-writing substantial amounts of code to work with upstream.
  • Develop your feature on the unstable branch of the project, submit it for inclusion (with the overhead that implies), and back-port the feature to your stable branch once it is included. This guarantees a smaller delta from the next stable version to your branch, and ensures your work gets upstream as soon as possible, but adds a time & labour overhead to the creation of your software platform (a minimal sketch of the back-port step follows this list).
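
To make that last option a little more concrete, here is a minimal sketch of what the back-port step might look like once the feature has been accepted upstream. The branch names and commit ids are hypothetical placeholders, and real projects will have their own conventions for recording where a back-ported patch came from.

```python
#!/usr/bin/env python3
"""Sketch: back-port a feature, already merged on the upstream development
branch, onto a distribution's stable branch.  Branch names and commit ids
are hypothetical placeholders."""
import subprocess

FEATURE_COMMITS = ["abc1234", "def5678"]   # hypothetical upstream commit ids

def git(*args):
    """Run a git command, stopping immediately if it fails."""
    subprocess.run(["git", *args], check=True)

# Start from the distribution's stable branch...
git("checkout", "distro/stable")

# ...and carry the upstream commits over one by one, keeping the delta to
# the next upstream stable release as small as possible.  '-x' records the
# original commit id in the back-ported commit message.
for sha in FEATURE_COMMITS:
    git("cherry-pick", "-x", sha)
```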

In all of these situations there is a cost. The time & effort of developing software within the community and back-porting it, the maintenance cost (and related unleveraged potential) of maintaining your own branch of a major component, and the huge cost of integrating a large delta back to the community-maintained version many months after the code has been written.

Intuitively, it feels like the long-term cheapest solution is to develop, where possible, features in the community-maintained unstable branch, and back-port them to your stable tree when you are finished. While this might be nice in an ideal world, feature proposals have taken literally years to get to the point where they have been accepted into the Linux kernel, and you have a product to ship – sometimes the only choice you have is to maintain the feature yourself out-of-tree, as Robert Love did for over a year with inotify.

While Mal addresses the raw value of the code produced by the community in the interim, he does not quantify the costs associated with these options. Indeed, it is difficult to do so. In some cases, there is not only a cost in terms of time & effort, but also in terms of the goodwill and standing of your engineers within the community – the type of cost which it is very hard to put a dollar value on. I would like to see a way to do so, though, and I think that it would be possible to quantify the community overhead (as a mean), for example, by looking at the average time for patch acceptance and/or the number of lines modified from initial proposal to final mainline merge.
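
As a rough illustration of how such a metric might be computed, here is a small sketch. The patch records are invented placeholders – the real work would be in mining proposal and merge dates out of mailing list archives or bug trackers.

```python
#!/usr/bin/env python3
"""Sketch: a crude "community overhead" metric: mean time from initial
proposal to mainline merge, and mean churn before acceptance.  The patch
records are made-up placeholders."""
from datetime import date
from statistics import mean

# (date first proposed, date merged, lines modified between first and final version)
patches = [
    (date(2008, 11, 1), date(2009, 6, 30), 640),
    (date(2009, 1, 10), date(2009, 3, 2), 120),
    (date(2009, 2, 5), date(2009, 2, 20), 15),
]

days_to_merge = [(merged - proposed).days for proposed, merged, _ in patches]
churn = [lines for _, _, lines in patches]

print(f"mean time to acceptance: {mean(days_to_merge):.0f} days")
print(f"mean lines modified before merge: {mean(churn):.0f}")
```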

Anyone have any other thoughts on ways you could measure the cost of maintaining a big diff, or the cost of merging a lot of code?

GCDS round-up 5: Mobile Day

community, gimp, gnome, guadec, maemo 5 Comments

Nearing the end of the series on the Gran Canaria Desktop Summit.

On Wednesday morning (after SMASHED), we had to get to the new location for the conference. I missed the bus window of 8am to 9am, so I took a taxi, without knowing the address of where we were going, other than knowing that it was the “Gran Canaria university, informatics building”. Turns out that’s not enough information for a taxi driver 🙂 Anyway, got there eventually, late for the opening session, and a little more expensive than expected. I also lost some change down the back of the bucket seat, so he even got a tip.

Anyway, the rest of the day went pretty well, and we had some great mobile-related presentations (to complement all of the other mobile-related content in the conference):

  • Multimedia in your pocket, by Stefan Kost: Nice presentation on using MAFW to build complex multimedia applications
  • Designing Moblin-Netbook. A free desktop on a 7-10″ Screen, by Nick Richards: Great overview of the Moblin platform and the design principles guiding it – design requirements, personas, and dealing with constraints.
  • Hildon desktop in Maemo 5 by Kimmo Hämäläinen: An overview of the Hildon desktop on a whiteboard by Kimmo.
  • MAFW: the Media Application Framework for Maemo by Iago Toral: Drilling down into the details of MAFW.
  • Why its easier to re-invent rather than participate on the mobile? by Shreyas Srinivasan: My favourite presentation of the day. Shreyas laid out what he had expected from GNOME Mobile, the problems he encountered, his understanding of the issues, and some proposed solutions to those problems. All in 15 minutes. I really appreciate people who don’t pad out the content that they have to present and instead focus on making a high-impact presentation.
  • GNOME Mobile BOF, led by myself: We talked about how far we’ve come, the original goals of the initiative, and identified a bunch of things that we can improve short-term and medium-term.

Had a great dinner again on Wednesday, in a tapas bar with some Red Hatters and Michael Meeks, and then on to the party. Wednesday night was the golf club party, sponsored by Collabora, with a free bar until 1 (of which I mostly did not avail – I was being good), and I was in bed by 2. It was a great party, and I picked up another couple of cyclists for the outing I had been planning for Friday, before they wimped out on me.

Why I disagree with RMS concerning Mono

freesoftware, gimp, gnome, maemo, openwengo 43 Comments

The GNOME press contact alias got a mail last weekend from Sam Varghese asking about the possibility of new Mono applications being added to GNOME 3.0, and I answered it. I didn’t think much about it at the time, but I see now that the reason Sam was asking was because of Richard Stallman’s recent warnings about Mono – Sam’s article has since appeared with the ominous-looking title “GNOME 3.0 may have more Mono apps”. And indeed it may. It may also have more alien technology, we’re not sure yet. We’re still working on an agreement with the DoD to get access to the alien craft in Fort Knox.

Anyway – that aside, Richard’s position is that it’s dangerous to include Mono to the point where removing it is difficult, should that become necessary to legally distribute your software. On the surface, I agree. But he goes a little further, saying that since it is dangerous to depend on Mono, we should actively discourage its use. And on this point, we disagree.

I’m not arguing that we should encourage its use either, but I fundamentally disagree with discouraging someone from pursuing a technology choice because of the threat of patents. In this particular case, the law is an ass. The patent system in the United States is out of control and dysfunctional, and it is bringing the rest of the world down with it. The time has come to take a stand and say “We don’t care about patents. We’re just not going to think about them. Sue us if you want.”

The healthy thing to do now would be to provoke a test case of the US patent system. Take advantage of one of the many cease & desist letters that get sent out for vacuous patented technology to make a case against the US PTO’s policy pertaining to software and business process patents. Run an “implement your favourite stupid patent as free software” competition.

In all of the projects that I have been involved in over the years, patent fears have had a negative effect on developer productivity and morale. In the GIMP, we struggled with patent issues related to compression algorithms for GIF and TIFF, to colour management, and to some plug-ins. In GNOME, it’s been mostly Mono, but also MP3, and related (and unrelated) issues have handicapped basic functionality like playing DVDs for years. In Openwengo, the area of audio and video codecs is mined with patent restrictions, including the popular codecs G729 and H264 among others.

What could we have achieved if standards bodies had a patent pledge as part of their standardisation process, and released reference implementations under an artistic licence? How much further along would we be if cryptography, filesystems, codecs and data compression weren’t so heavily handicapped by patents? Or if we’d just ignored the patents and created clean-room implementations of these patented technologies?

That’s what I believe we need to do. Ignore the patent system completely. I believe strongly in respecting licensing requirements related to third party products and developer packs. I think it’s reasonable to respect people’s trademarks and trade secrets. But having respect for patents, and the patent system, is ridiculous. Let a thousand flowers bloom, and let the chips fall where they may.

So if you want to write a killer app in Mono, then don’t let anyone tell you otherwise. If you build it, they will come.

Will we pass $5000 today?

gimp, inkscape, libre graphics meeting, scribus No Comments

The Libre Graphics Meeting fundraiser has been inching higher in recent weeks, and we are very close to the symbolic level of $5,000 raised, with less than a week left in the fundraising drive.

I am sure we will manage to get past $5,000, but I wonder if we will do it today? To help us put on this great conference, and help get some passionate and deserving free software hackers together, you still have time to give to the campaign. (Update 2019-03-29: Pledgie is no more – for an explanation why, read this).

Thanks very much to the great Free Art and Free Culture community out there for your support!

Update: Less than 2 hours after posting, Mark Wielaard pushed us over the edge with a donation bringing us to exactly $5000. Thanks Mark! Next stop: $6000.
