What community?


With the announcement of Tizen (pronounced, I learned, tie-zen, not tea-zen or tizz-en) recently, I headed over to the website to find out who the project was aimed at. I read this on the “Community” page:

The Tizen community is made up of all of the people who collectively work on or with Tizen:

  • Product contributors: kernel/distribution developers, release managers, quality assurance, localization, etc.
  • Application developers: people who write applications to run on top of Tizen
  • Users: people who run Tizen on their device and provide feedback
  • Vendors: companies who create products based on Tizen
  • Other contributors: promotion, documentation, and much more

Anyone can contribute by:

  • Submitting patches
  • Filing bugs
  • Developing applications
  • Helping with wiki documentation
  • Participating in other community efforts and programs

Wow! That’s a diverse target audience, and a very wide-ranging list of ways you can help out. But is it really helpful to scope the project so broadly, and to try to cater to such a wide range of use-cases from the start? And is the project at a stage where it even makes sense to advertise itself to some of these different types of users?

I have talked about the different meanings of “maintainer” before, depending on whether you’re maintaining a code project or are a package maintainer for a distribution. I have also talked about the different types of community that build up around a project, and how each of them needs their own identity – particularly in the context of the MeeGo trademark. I particularly like Simon Phipps’s analysis of the four community types as a way to clarify what you’re talking about.

For Tizen, I see between three and five different types of community, each with different needs, and each of which can form at different stages in the life-cycle of the project. Trying to “sell” the project to one type of community before the project is ready for them will result in disappointment and frustration all round – managing the expectations of people approaching Tizen will be vital to its long-term success, even if it opens you up to short-term criticism. Unless each of these communities is targeted individually and separately, and at the right time, I am sceptical about the results.

“Upstream” software developers

The first and most identifiably “Open Source” family of communities will be the software developers working on components and applications which will end up in the core of Tizen. For the most part, these communities exist already, and Samsung and Intel engineers are working with them. These are the projects we commonly call “upstreams” – projects you don’t control, but from whom code flows into your product.

In other cases, code will originate from Intel and/or Samsung. In the same way that Buteo, oFono and the various applications which were developed for the MeeGo Netbook UX were very closely associated with MeeGo, there will be similar projects (sometimes the same projects) which will have a close association with Tizen. Each of these projects will have its own personality, its own maintainers, roadmaps and specs – and each of them should have its own identity, and space to collaborate and communicate.

Communities form around programming projects not because of the code, but because of a shared vision and values. Each project will attract different people – the people who are interested in metadata and search are not the same as the people who will be passionate about system-wide contact integration. Each project needs its own web space, maintainers, bug tracker, mailing list, and wiki space. Of course, many projects can share the same infrastructure, and a lot of the same community processes (for things like code governance), and for projects closely related to Tizen, we can provide common space to help create a Tizen developer community in the same way there’s a GNOME developer community. But each community around each component will have its own personality and will need its own space.

At the level of Tizen, we could start with an architecture diagram, perhaps – and for each component on the architecture diagram, link to the project’s home page – many of the links will point to places like kernel.org, gnome.org, freedesktop.org and so on. For Tizen-specific projects, there could be a link to the project home page, with a list of stuff that needs to be done before the component is “ready”.

Core platform packagers, testers, integrators

Once we have a set of components which are working well together, we get to the heart of what I think will be Tizen’s early activity – bringing those components together into a cohesive whole. Tizen will be, basically, a set of distributions aimed at different form factors. And the deliverable in a distribution is not code or a Git tag, it’s a complete, integrated stack.

The engineering skills, resources and processes required to integrate a distribution are different to those of a code project. Making a great integrated Linux platform is obviously difficult – otherwise Red Hat would not be making money, and Ubuntu would not have had the opportunity to capture so much mind-share. Both Red Hat and Canonical do something right which others failed at before them.

Distributions attract a different type of contributor than code projects, and need a different set of tools and infrastructure to allow people to collaborate. At the distribution level, you are more likely to be debating whether to integrate a particular package or its competitor than whether to implement a feature in a specific package. Of course, it is possible to influence upstream projects to get specific features implemented, not least by providing developer resources, and there will be a need for some ambassadors to bridge the gap to upstream projects. And it is possible for a distribution to carry patches to upstream packages if that community disagrees. But in general, not much code gets written in distributions.

What the distro community needs and expects is infrastructure for continuous integration, bug tracking software, a way to submit and build software packages, good release engineering, an easy way to find out what packages need a maintainer (see Debian’s WNPP list or Ubuntu’s “need-packaging” list for examples) and a way to influence what packages or features are included in future releases (see Fedora or Ubuntu for examples). They also want tools to allow packaging, testing and deploying the integrated distribution – for an embedded distro, that might mean an emulator and an image creator, perhaps.

Vendors and carriers

Communities of companies are worth a special mention. Companies have very different ways of working together and agreeing on things than communities of individuals. I was tempted to just roll vendors into the “Platform integrators” community type, but they are sufficiently different to be considered another type of community. Vendors have different constraints and motivations than individual contributors to the platform, and we should be aware of those.

Vendors like to have a business relationship – some written agreement that shows where everyone stands. They have a direct relationship with people who buy their hardware, and have an interest (potentially in conflict with other communities) in owning the user relationship – through branded application stores, UIs and support forums, for example. And since vendors are typically working on hardware development in parallel with software development, they care a lot about a reliable release schedule and quality level from the stack. One thing that companies care about, which individuals usually don’t, is the legal side of participating – do they have patent rights to the code they ship? Are they giving up any of their own potential patent claims?

3rd party application developers

Application developers don’t care, in general, whether the platform is open source or closed, or developed collaboratively or by one party (witness the popularity of Android and iOS with application developers). What they do care about are developer tools, documentation, and the ability to share their work with device users and other application developers. Some application developers will want to develop their applications as free software, and it is possible to enable that, but I think the most important things for application developers are that it’s easy to do things with your platform, that there are good tools for developing, testing and deploying your application, that your platform’s APIs enable the developer to do what he wants, and that you are providing a channel for those developers to get their apps to users of your platform.

An application developer doesn’t want to have to ship his software to 5 different app stores on every release – in contrast to vendors, he would like a single channel to his market. Other things he cares about are being able to form a relationship with his users – so app stores need to be social, allow user ratings and comments, and allow the author to interact with his users. Clear terms of engagement are vital here too – especially for commercial application developers. And application developers are also another type of community – they will want to share tips and tricks, code, and their thoughts on the project leaders in some kind of app developer knowledge base.

Device users

There is another potential community which I should mention, and that is users of your platform – typically, these will be users of devices running your platform. It should be possible for engaged users to share information, opinions, tips & tricks, and interesting hacks among each other. It should also be possible to rate and recommend applications easily – this is in the interests of both your user community and your application developer ecosystem.

OK, so what?

Each of these community types is different, and they don’t mix well. They mature at different rates. There is no point in trying to build a user community until there are devices running your platform on the market, for example.

So each type of community needs a separate space to work. There is no point in catering to a 3rd party application developer until you have developer tools and a platform for him to develop against. Vendors will commit to products when they see a viable integrated platform. And so on.

What is vital is to be very clear, for each type of community, what the rules of engagement are. As an example, one company can control the integration of a platform and the development of many of its components (as is the case for Android) and everyone is relatively happy, because they know where they stand and what they’re getting into. But if you advertise yourself as an open and transparent project, and a small group of people announces decisions about which components are included in or excluded from the stack (as was the case in MeeGo), then in spite of being vastly more open, people who have engaged with the project will end up unhappy, because of the mismatch between the message and the practice in the project.

So what about Tizen? I think it is a mistake to announce the project as a place to “submit patches, report bugs and develop applications” when there is no identifiable code base, no platform to try, and no published SDK to develop against. By announcing that Tizen is an Open Source platform, Intel and Samsung have set an expectation for people – and these are people who went through the move to MeeGo less than two years ago, and who saw Nokia drop the project earlier this year. If they are disappointed by the project’s beginnings because expectations around the project were set wrong from the outset, it could take a long time to recover.

Personally, I would start low-key by announcing an architecture diagram and concentrating on code and features that need writing, then ramp up the integrator community with some alpha images and tools to allow people to roll their own; finally, when the platform stabilises, roll out the developer SDK and app store and start building up an application developer community. But by aiming too big with the messaging, Tizen runs the risk of scaring some people away early. Time will tell.

 

Getting people together


One of the most important things you can do in a free software project, besides writing code, is to get your key contributors together as often as possible.

I’ve been fortunate to be able to organise a number of events in the past 10 years, and also to observe others and learn from them over that time. Here are some of the lessons I’ve learned from that experience:

Venue

The starting point for most meetings or conferences is the venue. If you’re getting a small group (under 10 people) together, then it is usually OK just to pick a city, and ask a friend who runs a business or is a college professor to book a room for you. Or use a co-working space. Or hang out in someone’s house, and camp in the garden. Once you get bigger, you may need to go through a more formal process.

If you’re not careful, the venue will be a huge expense, and you’ll have to find that money somewhere. But if you are smart, you can manage a free venue quite easily.

Here are a few strategies you might want to try:

  • Piggy-back on another event – the Linux Foundation Collaboration Summit, OSCON, LinuxTag, GUADEC and many other conferences are happy to host workshops or meet-ups for smaller groups. The GIMP Developers Conference in 2004 was the first meet-up that I organised, and to avoid the hassle of dealing with a venue, finding a time that suited everyone, and so on, I asked the GNOME Foundation if they wouldn’t mind setting aside some space for us at GUADEC – and they said yes. Take advantage of the bigger conference’s organisation, and you get the added benefit of attending the bigger conference at the same time!
  • Ask local universities for free rooms – This won’t work once you go over a certain size, but if a university has academics who are members of the local LUG, they can often talk their department head into booking a lecture theatre and a few classrooms for a weekend. Many universities will ask to do a press release and get credit on the conference web-site, and this is a completely fair deal. The first Libre Graphics Meeting was hosted for free at CPE Lyon, and the GNOME Boston Summit has been hosted for free at MIT for a number of years.
  • If the venue can’t be free, see if you can get someone else to pay for it – Once your conference is bigger than about 200 people, most venues will require payment. Hosting a conference costs them a lot, and hosting conferences when the students are gone is a big part of universities’ business model. But just because the university or conference centre won’t host you for free doesn’t mean that you have to be the one paying.

    Local regional governments like to be involved with big events in their region. GUADEC in Stuttgart, the Gran Canaria Desktop Summit, and this year’s Desktop Summit in Berlin have all had the cost of the venue covered by the host region. An additional benefit of partnering with the region is that they will often have links to local industry and press – resources you can use to get publicity and perhaps even sponsorship for your conference.

  • Run a bidding process – by encouraging groups wishing to host the conference to put in bids, you are also encouraging them to source a venue and talk to local partners before you decide where to go. You are also putting cities in competition with each other, and as with Olympic bids, cities don’t like to lose competitions they’re in!

Budget

Conferences cost money. Major costs for a small meet-up might be covering the travel costs of attendees. For a larger conference, the major costs will be equipment, staff and venue.

Every time I have had to raise the budget for a conference, my rule of thumb has been simple:

  1. Decide how much money you need to put on the event
  2. Fundraise until you reach that amount
  3. Stop fundraising, and move on to other things.

Raising money is a tricky thing to do. You can literally spend all of your time doing it. At the end of the day, you have a conference to put on, and the amount of money in the budget is not the major concern of your attendees.

Remember, your primary goal is to get project participants together to advance the project. So getting the word out to prospective attendees, organising accommodation, venue, talks, food and drinks, social activities and everything else people expect at an event is more important than raising money.

Of course, you need money to be able to do all the rest of that stuff, so finding sponsors, fixing sponsorship levels, and selling your conference is a necessary evil. But once you have reached the amount of money you need for the conference, you really do have better things to do with your time.

There are a few potential sources of funds to put on a conference – I recommend a mix of all of these as the best way to raise your budget.

  • Attendees – While this is a controversial topic in many communities, I think it is completely valid to ask attendees to contribute something to the costs of the conference. Attendees benefit from the facilities, the social events, and gain value from the conference. Some communities consider attendance at their annual event as a kind of reward for services rendered, or an incentive to do good work in the coming year, but I don’t think that’s a healthy way to look at it.

    There are a few ways for conference attendees to fund the running of the conference:

    1. Registration fees – This is the most common way to get money from conference attendees. Most community conferences ask for only a token fee. I’ve seen conferences ask for an entrance fee of €20 to €50, and most people have not had a problem paying this.

      A pre-paid fee has the added benefit of massively reducing no-shows among locals. People place more value on attending an event that costs them €10 than one where they can get in for free, even if the content is the same.

    2. Donations – very successfully employed by FOSDEM. Attendees are offered an array of goodies, provided by sponsors (books, magazine subscriptions, t-shirts) in return for a donation. But those who want can attend for free.
    3. Selling merchandising – Perhaps your community would be happier hosting a free conference, and selling plush toys, t-shirts, hoodies, mugs and other merchandising to make some money. Beware: in my experience, you can expect less profit from merchandising sales than you would get by charging each attendee a registration fee and giving them a free t-shirt.
  • Sponsors – Media publications will typically agree to “press sponsorship” – providing free ads for your conference in their print magazine or website. If your conference is a registered non-profit which can accept tax-deductible donations, offer press sponsors the chance to invoice you for the services and then make a separate sponsorship grant to cover the bill. The end result for you is identical, but it will allow the publication to write off the space they donate to you for tax.

    What you really want, though, are cash sponsorships. As the number of free software projects and conferences has multiplied, the competition for sponsorship dollars has really heated up in recent years. To maximise your chances of making your budget target, there are a few things you can do.

    1. Conference brochure – Think of your conference as a product you’re selling. What does it stand for, how much attention does it get, how important is it to you, to your members, to the industry and beyond? What is the value proposition for the sponsor?

      You can sell a sponsorship package on three or four different grounds: perhaps conference attendees are a high-value target audience for the sponsor; perhaps (especially for smaller conferences) the attendees aren’t what’s important, but rather the attention that the conference will get in the international press; or perhaps you are pitching to a company that the conference is improving a piece of software that they depend on.

      Depending on the positioning of the conference, you can then make a list of potential sponsors. You should have a sponsorship brochure that you can send them, which will contain a description of the conference, a sales pitch explaining why it’s interesting for the company to sponsor it, potentially press clippings or quotes from past attendees saying how great the conference is, and finally the amount of money you’re looking for.

    2. Sponsorship levels – These should be fixed based on the amount of money you want to raise. You should figure on your biggest sponsor providing somewhere between 30% and 40% of your total conference budget for a smaller conference. If you’re lucky, and your conference gets a lot of sponsors, that might be as low as 20%. Figure on a third as a ball-park figure. That means if you’ve decided that you need €60,000 then you should set your cornerstone sponsor level at €20,000, and all the other levels accordingly (say, €12,000 for the second level and €6,000 for the third level) – there is a short worked sketch of this arithmetic after this list.

      For smaller conferences and meet-ups, the fundraising process might be slightly more informal, but you should still think of the entire process as a sales pitch.

    3. Calendar – Most companies have either a yearly or half-yearly budget cycle. If you get your proposal in front of the right person at the right time, then you could potentially have a much easier conversation. The best time to submit proposals for sponsorship of a conference in the Summer is around October or November of the year before, when companies are finalising their annual budget.

      If you miss this window, all is not lost, but any sponsorship you get will be coming out of discretionary budgets, which tend to get spread quite thin, and are guarded preciously by their owners. Alternatively, you might get a commitment to sponsor your July conference in May, at the end of the first half budget process – which is quite late in the day.

    4. Approaching the right people – I’m not going to teach anyone sales, but my personal secret to dealing with big organisations is to make friends with people inside the organisations, and try to get a feel for where the budget might come from for my event. Your friend will probably not be the person controlling the budget, but getting him or her on board is your opportunity to have an advocate inside the organisation, working to put your proposal in front of the eyes of the person who owns the budget.

      Big organisations can be a hard nut to crack, but free software projects often have friends in high places. If you have seen the CTO or CEO of a Fortune 500 company talk about your project in a news article, don’t hesitate to drop him a line mentioning that, and when the time comes to fund that conference, a personal note asking who the best person to talk to would be will work wonders. Remember, your goal is not to sell to your personal contact, it is to turn her into an advocate for your cause inside the organisation, and create the opportunity to sell the conference to the budget owner later.

    Also, remember when you’re selling sponsorship packages that everything which costs you money could potentially be part of a sponsorship package. Some companies will offer lanyards for attendees, or offer to pay for a coffee break, or ice-cream in the afternoon, or a social event. These are potentially valuable sponsorship opportunities and you should be clear in your brochure about everything that’s happening, and spec out a provisional budget for each of these events when you’re drafting your budget.
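
To make the sponsorship-level arithmetic above concrete, here is a minimal Python sketch. The sponsorship_levels function and the tier fractions are illustrative assumptions drawn from the rule of thumb above, not a standard formula.

```python
# Sketch: derive sponsorship levels from a target budget, following the rule of
# thumb above - the cornerstone sponsor covers roughly a third of the budget,
# and the lower tiers scale down from there. The fractions are assumptions
# chosen for illustration, not fixed rules.

def sponsorship_levels(target_budget, fractions=(1 / 3, 1 / 5, 1 / 10)):
    """Return a tier -> amount mapping, rounded to the nearest 500."""
    names = ("cornerstone", "second level", "third level")
    return {
        name: int(round(target_budget * fraction / 500.0) * 500)
        for name, fraction in zip(names, fractions)
    }

print(sponsorship_levels(60000))
# {'cornerstone': 20000, 'second level': 12000, 'third level': 6000}
```

With a €60,000 target this reproduces the €20,000 / €12,000 / €6,000 split suggested above; if your conference attracts a lot of sponsors, you can lower the cornerstone fraction towards 20% and adjust the other tiers to keep the total on target.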

Content

Conference content is the most important thing about a conference. Different events handle content differently – some events invite a large proportion of their speakers, while others like GUADEC and OSCON invite proposals and choose talks to fill the spots.

The strategy you choose will depend largely on the nature of the event. If it’s an event in its 10th year with an ever increasing number of attendees, then a call for papers is great. If you’re in your first year, and people really don’t know what to make of the event, then setting the tone by inviting a number of speakers will do a great job of helping people know what you’re aiming for.

For Ignite Lyon last year, I invited about 40% of the speakers for the first night (and often had to hassle them to put in a submission), and the remaining 60% came through a submission form. For the first Libre Graphics Meeting, apart from the lightning talks, I think I contacted all but two of the speakers directly. Now that the event is in its 6th year, there is a call for proposals process which works quite well.

Schedule

Avoiding putting talks which will appeal to the same people in parallel is hard. At every single conference, you hear from people who wanted to attend talks on similar topics which were on at the same time.

My solution to conference scheduling is very low-tech, but works for me. Coloured post-its, with a different colour for each theme, and an empty talks grid, do the job fine. Write the talk titles one per post-it, add any constraints you have for the speaker, and then fill in the grid.

Taking scheduling off the computer and into real life makes it really easy to see when you have clashes, to swap talks as often as you like, and then to commit it to a web page when you’re happy with it.

I used this technique successfully for GUADEC 2006, and Ross Burton re-used it in 2007.

Parties

Parties are a trade-off. You want everyone to have fun, and hanging out is a huge part of attending a conference. But morning attendance suffers after a party. Pity the poor community member who has to drag himself out of bed after 3 hours sleep to go and talk to 4 people at 9am after the party.

Some conferences have too many parties. It’s great to have the opportunity to get drunk with friends every night. But it’s not great to actually get drunk with friends every night. Remember the goal of the conference: you want to encourage the advancement of your project.

I encourage one biggish party, and one other smallish party, over the course of the week. Outside of that, people will still get together, and have a good time, but it’ll be on their dime, and that will keep everyone reasonable.

With a little imagination, you can come up with events that don’t involve loud music and alcohol. Other types of social event can work just as well, and be even more fun.

At GUADEC we have had a football tournament for the last number of years. During the OpenWengo Summit in 2007, we brought people on a boat ride on the Seine and we went on a classic 19th century merry-go-round afterwards. Getting people eating together is another great way to create closer ties – I have very fond memories of group dinners at a number of conferences. At the annual KDE conference Akademy, there is typically a Big Day Out, where people get together for a picnic, some light outdoors activity, a boat ride, some sightseeing or something similar.

Extra costs

Watch out for those unforeseen costs! One conference I was involved in, where the venue was “100% sponsored”, left us with a €20,000 bill for labour and equipment costs. Yes, the venue had been sponsored, but setting up tables and chairs, and equipment rental of whiteboards, overhead projectors and so on, had not. At the end of the day, I estimate that we used about 60% of the equipment we paid for.

Conference venues charge a lot for everything they provide. Coffee breaks can cost up to $10 per person for a coffee & a few biscuits, bottled water for speakers costs $5 per bottle, and so on. Rental of an overhead projector and mics for one room for one day can cost €300 or more, depending on whether the venue insists that the equipment be operated by their a/v guy or not.

When you’re dealing with a commercial venue, be clear up-front about what you’re paying for.

On-site details

I like conferences that take care of the little details. As a speaker, I like it when someone contacts me before the conference, tells me they’ll be introducing me, and asks what I would like them to say. It’s reassuring to know that when I arrive there will be a hands-free mic and someone who can help fit it.

Taking care of all of these details needs a gaggle of volunteers, and it needs someone organising them beforehand and during the event. Spend a lot of time talking to the local staff, especially the audio/visual engineers.

At one conference, the a/v guy would manually switch to a screen-saver at the end of each presentation. We had a comical situation during a lightning talk session where, after the first speaker, I switched presentations, and while the next presentation showed up on my laptop, we still had the screensaver on the big screen. No-one had talked to the a/v engineer to explain the format of the session to him!

So we ended up with 4 Linux engineers looking at the laptop, checking connections and running various Xrandr incantations, trying to get the overhead projector working again! We eventually changed laptops, and the a/v engineer realised what the session was, and all went well after that – most of the people involved ended up blaming my laptop.

Have fun!

Running a conference, or even a smaller meet-up, is time consuming, and consists of a lot of detail work, much of which will never be noticed by attendees. I haven’t even dealt with things like banners and posters, graphic design, dealing with the press, or any of the other joys that come from organising a conference.

The end result is massively rewarding, though. A study I did last year of the GNOME project showed that there is a massive project-wide boost in productivity just after our annual conference, and many of our community members cite the conference as the high point of their year.

Where do we go from here?


The post-Elopocalypse angst has been getting me down over the past few days. It’s against my nature to spend a lot of time worrying about things that are decided, done, dusted. It was Democritus, I think, who said that only a fool worries about things over which he has no control, and I definitely identify with that. It seems that a significant number of people on mailing lists I’m subscribed to don’t share this character trait.

I prefer to roll with the punches, to ask, “where do we go from here?” – we have a new landscape, with Nokia potentially being a lot less involved in MeeGo over the coming months. Will they reduce their investment in 3rd party developers? Perhaps. I expect them to. Will they lay some people off? I bet that there will be a small layoff in MeeGo Devices, but I’d wager that there will be bigger cuts in external contracts. In any case, this is something over which I have no control.

First up – what next for MeeGo? While MeeGo is looking a lot less attractive for application developers now, I still think there’s a great value proposition for hardware vendors to get behind it in vertical markets. Intel seem committed, and MeeGo (even with Nokia reducing investment) is much broader than one company now. A lot of people are betting the bank on it being a viable platform. So I think it will be, and soon.

Will I continue contributing time & effort to MeeGo? My reasons for contributing to MeeGo were not dependent on Nokia’s involvement, so yes, but I will be carefully eyeing business opportunities as well. I’d be lying if I said that I didn’t expect to get some business from a vibrant MeeGo ecosystem, and now I will need to explore other avenues. But the idea of collaborating on a core platform and building a set of free software form-factor specific UIs is still appealing. And I really do like the Maemo/MeeGo community a lot.

Luckily, the time to market difficulties that Nokia experienced are, in my opinion, issues of execution rather than inherent problems in working with free software. Companies have a clear choice between embracing proprietary-style development and treating upstream as “free code” (as Google have with Android), or embracing community-style development and working “The Open Source Way” (as Red Hat have learned to do). Nokia’s problems came from the hybrid approach of engage-but-keep-something-back, which prevented them from leveraging community developers as co-developers, while at the same time imposing all the costs of growing and supporting a large community.

I expect lots of companies to try to learn from this experience and start working smarter with communities – and since that’s where I can help them, I’m not too worried about the medium term.

I would bet on Nokia partners and subcontractors battening down the hatches right now until the dust settles, and potentially looking for revenue sources outside the MeeGo world. If I had a team of people working for me that’s what I’d do. If some Nokia work kept coming my way, I’d be glad of it, but right now I’d be planning a life without Nokia in the medium term.

For any companies who have followed Nokia from Symbian to MeeGo, my advice would be to stick to Linux, convert to an Android strategy, and start building some Windows Phone skills in case Nokia’s bet works out, but don’t bet the bank on it. And working effectively with community developed software projects is a key skill for the next decade that you should be developing (a small plug for my services there).

For anyone working on MeeGo within Nokia, the suspense over who might lose their jobs is worse than the fall, let me reassure you. Having been through a re-org or two in my time, I know that the wait can last weeks or months, and even when the cuts come, there’s always an itching suspicion of another one around the corner. Nothing is worse for morale in a team than wondering who will still be there next month. But you have learned valuable and sought-after skills working on MeeGo, and they are bankable on the market right now. If I were working on MeeGo inside Nokia right now, I think I’d ignore the possibility of a lay-off and get on with trying to make the MeeGo phone as great as possible. If I got laid off, I’d be happy to have a redundancy package worthy of Finland, and would be confident in my ability to find a job as a Linux developer very quickly.

For community members wondering whether to stick with MeeGo or jump ship, I’d ask, why were you hanging out around MeeGo in the first place? Has anything in the past week changed your motivations? If you wanted to have a shiny free-software-powered Nokia phone, you should have one by the end of the year. If you wanted to hack on any of the components that make up MeeGo, you can still do that. If you were hoping to make money off apps, that’s probably not going to happen with MeeGo on handsets any time soon. If you’re not convinced by the market potential of MeeGo apps on tablets, I’d jump ship to Android quick (in fact, why aren’t you there already?).

Qt users and developers are probably worried too. I don’t think that Qt is immediately threatened. The biggest danger for Qt at this point would be Intel & others deciding that Qt was a bad choice and moving to something else. That would be a massive strategic blunder – on a par with abandoning the GTK+ work which had been done before Moblin 2 to move to Qt. Rewriting user interfaces is hard and I don’t think that Intel are ready to run the market risk of dropping Qt – which means that they’re pot-committed at this point. If Nokia ever did decide to drop Qt, Intel would probably be in the market to buy it. Then again, I can also see how Qt’s management might try to do an LMBO (a leveraged management buy-out) and take the company private again. Either way, there will be a demand for Qt, and Qt developers, for some time to come.

No-one likes the guy giving unwanted advice to everyone, so this seems like a good place to stop. My instinct when something like this happens is to take a step back, see what’s inherently changed, and try to see what the landscape looks like from different perspectives. From my perspective, the future is definitely more challenging than it was a week ago, but it’s not like the Elopocalypse wiped out my livelihood. In fact, I have been thinking about life without Nokia since MeeGo was first announced last year, when I guessed that Nokia would prefer working through the Linux Foundation for an independent eye.

But even if Nokia were my only client, and they were going away tomorrow, I think I could probably find other clients, or get a job, quickly enough. It’s important to put these things in perspective.

Drawing up a roadmap


One of the most important documents a project can have is some kind of elaboration of what the maintainers want to see happen in the future. This is the concrete expression of the project vision – it allows people to adhere to the vision, and gives them the opportunity to contribute to its realisation. This is the document I’ll be calling a roadmap.

Sometimes the word “roadmap” is used to talk about other things, like branching strategies and release schedules. To me, a release schedule and a roadmap are related, but different documents. Releasing is about ensuring users get to use what you make. The roadmap is your guiding light, the beacon at the end of the road that lets you know what you’re making, and why.

Too many projects fall into the trap of having occasional roadmap planning processes, and then posting a mighty document which stays, unchanged, until the next time the planning process gets done. Roadmaps like these end up being historical documents – a shining example of how aspirations get lost along the way of product development.

Other projects are under-ambitious. Either there is no roadmap at all, in which case the business as usual of making software takes over – developers are interrupt-driven, fixing bugs, taking care of user requests, and never taking a step back to look at the bigger picture. Or your roadmap is something you use to track tasks which are already underway, a list of the features which developers are working on right now. It’s like walking in a forest at night with a head-light – you are always looking at your feet avoiding tree-roots, yet you have no idea where you’re going.

When we drew up the roadmap for the GIMP for versions 2.0 and 2.2 in 2003, we made some of these mistakes. By observing projects like Inkscape (which has a history of excellent roadmapping) and learning from our mistakes, I came up with a different method, which we applied to the WengoPhone from OpenWengo in 2006, and which served us well (until the project became QuteCom, at least). Here are some of the techniques I learned, which I hope will be useful to others.

Time or features?

One question with roadmaps is whether hitting a date for release should be included as an objective. Even though I’ve said that release plans and roadmaps are different documents, I think it is important to set realistic target dates on way-points. Having a calendar in front of you allows you to keep people focussed on the path, and avoid falling into the trap of implementing one small feature that isn’t part of your release criteria. Pure time-based releases, with no features associated, don’t quite work either. The end result is often quite tepid, a product of the release process rather than any design by a core team.

I like Joel’s scheduling technique: “If you have a bunch of wood blocks, and you can’t fit them into a box, you have two choices: get a bigger box, or remove some blocks.” That is, you can mix a time-based and feature-based schedule. You plan features, giving each one a priority. You start at the top and work your way down the list. At the feature freeze date, you run a project review. If a feature is finished, or will be finished (at a sufficient quality level) in time for release, it’s in. If it won’t realistically be finished in time for the release date, it’s bumped. That way, you stick to your schedule (mostly), and there is a motivation to start working on the biggest wood blocks (the most important features) first.
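
To make that review concrete, here is a minimal sketch in Python of the triage step at feature freeze. The Feature class, its fields and the example data are hypothetical, purely for illustration – this is not a real release-management tool.

```python
from dataclasses import dataclass

# Sketch of the feature-freeze review described above: features are planned in
# priority order, and at the freeze date anything that will not be finished (at
# a sufficient quality level) in time for the release is bumped to the next cycle.

@dataclass
class Feature:
    name: str
    priority: int         # 1 = most important; the team starts at the top of the list
    will_be_ready: bool   # the team's honest estimate at the project review

def freeze_review(features):
    """Split the planned features into (shipping in this release, bumped)."""
    planned = sorted(features, key=lambda f: f.priority)
    kept = [f for f in planned if f.will_be_ready]
    bumped = [f for f in planned if not f.will_be_ready]
    return kept, bumped

kept, bumped = freeze_review([
    Feature("system-wide contact integration", 1, True),
    Feature("metadata search", 2, False),
    Feature("new theming engine", 3, True),
])
print("in the release:", [f.name for f in kept])   # ['system-wide contact integration', 'new theming engine']
print("bumped:", [f.name for f in bumped])         # ['metadata search']
```

The point of the sketch is only the shape of the decision: priorities are set up front, the calendar stays fixed, and the only question at the freeze is whether each feature will realistically be ready – not whether the date should move.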

A recent article by Max Kanat-Alexander on lessons learned over years of Bugzilla development made an interesting suggestion which makes a lot of sense to me – at the point you decide to feature freeze and bump features, it may be better to create a release branch for stabilisation work, and allow the trunk to continue in active development. The potential cost of this is a duplication of work, merging unfinished features and bug fixes into both branches; the advantage is that it allows someone to continue working on a bumped feature while the team as a whole works towards the stable release.

Near term, mid term, long term

The Inkscape roadmap from 2005 is a thing of beauty. It mixes long-term goals with short-term planning beautifully. Each release has a tag-line, a set of one or two things which are the main focus of the release. Some releases are purely focussed on quality. Others include important features. The whole thing feels planned. There is a vision.

But as you come closer and closer to the current work, the plans get broken down and itemised further. The BHAGs (big, hairy, audacious goals) of a release two years out get turned into a list of sub-features when it’s one year away, and each of those features gets broken down further as a developer starts planning and working on it.

The fractal geometer in me identifies this as a scaling phenomenon – coding software is like zooming in to a coastline and measuring its length. The value you get when measuring with a 1km long ruler is not the same as with a 1m ruler. And as you get closer and closer to writing code, you also need to break down bigger tasks into smaller tasks, and smaller tasks into object design, then coding the actual objects and methods. Giving your roadmap this sense of scope allows you to look up and see in the distance every now and again.

Keep it accurate

A roadmap is a living document. The best reason to go into no detail at all for future releases beyond specifying a theme is that you have no idea yet how long things will take to do when you get there. If you load up the next version with features, you’re probably setting the project team up for a long death-march.

The inaccurate roadmap is an object of ridicule, and a motivation killer. If it becomes clear that you’re not going to make a date, change the date (and all the other dates accordingly). That might also be a sign that the team has over-committed for the release, and an opportunity to bump some features.

Leave some empty seats

In community projects, new contributors often arrive who would like to work on features, but they don’t know where to start. There is an established core team claiming features for the next release left and right, and the new guy doesn’t know what to do. “Fix some bugs” or “do some documentation” are common answers in many projects, including GNOME (with the gnome-love keyword in Bugzilla) and LibreOffice (with the easy hacks list). Indeed, these do allow you to get to know the project.

But, as has often been said, developers like to develop features, and sometimes it can be really hard to know which features are important to the core team. This is especially true for commercial software developers. The roadmap can help.

In any given release, you can include some high-priority features – stuff that you would love to see happen – explicitly marked as “Not taken by the core team”. It should be clear that patches of a sufficiently high standard implementing the feature would be gratefully accepted. This won’t automatically change a new developer into a coding ninja, nor will it prevent an ambitious hacker from biting off more than he can chew, but it will give experienced developers an easy way to prove themselves and earn their place in the core team, and it will also provide some great opportunities for mentoring programs like the Google Summer of Code.

The Subversion roadmap, recently updated by the core team, is another example of best practice in this area. In addition to a mixed features & time based release cycle, they maintain a roadmap which has key goals for a release, but also includes a separate list of high priority features.

The end result: Visibility

The end result of a good roadmap process is that your users know where they stand, more or less, at any given time. Your developers know where you want to take the project, and can see opportunities to contribute. Your core team knows what the release criteria for the next release are, and you have agreed together mid-term and long-term goals for the project that express your common vision. As maintainer, you have a powerful tool to explain your decisions and align your community around your ideas. A good roadmap is the fertile soil on which your developer community will grow.

The Lifecycle of a Patch (or: Working Upstream)


Reposted from Neary Consulting

Yesterday I looked into what it means to be a maintainer of a package. Today, I’m going to examine how to effect change in a distribution like MeeGo, and what it means to work upstream. To do so, we’re going to look at how code gets from a developer’s brain into the hands of a user.

So – how can you make a change in a Linux-based distribution? Here’s what happens when everything works as it should:

  1. You open a bug report for the feature against your distribution
  2. You identify the module or modules you need to change to implement the new feature
  3. You open bug reports for each of the modules concerned, detailing the feature and the changes needed in that module for the feature
  4. You write a patch to implement the feature, and propose it (appropriately cut up for ease of review) to the maintainers of those modules
  5. Once the code has gone through the appropriate review process, it will be committed to the source control of the module(s)
  6. Some time later, the maintainer of each module will include that code in a stable release of the module
  7. Some time after that, the new stable versions will be packaged and uploaded to MeeGo
  8. Your code will be included in the next release of the distribution following the upload.

When people talk about “working upstream” in MeeGo or Linaro, this is what they mean.

To simplify matters for our analysis, let’s consider that the feature we want to implement is self-contained in one module (or related modules which release together). There are two different scenarios we’ll consider:

  1. The module is maintained by people not associated with your distribution (for example, a GNU or GNOME project)
  2. The module is maintained by people closely related to your distribution (for example, Unity in Ubuntu, or oFono in MeeGo)

We will also look at a third situation, where you find and fix a bug in the software you are using – that is, a released version of a distribution (the proverbial “scratching an itch”).

For each case, I will try to pick a representative feature/patch and follow it from developer through to distribution to Real Users.

What if your code changes different projects?

If your code touches several modules (for example, if you are proposing some new API in GTK+ which you want to use in the GIMP) then things can get complicated – you will need a stable version of GTK+ to be released before you can ship a stable release of the GIMP which depends on it.

This issue of staggered releases is one that Andrew Cowie pointed out a few years ago for language bindings. To avoid building bindings on shifting sands, he preferred to package new APIs once they had been included in a stable GNOME release. In turn, Java-GNOME developers rarely depend on development-release bindings, and would wait for the new API to be included in a stable bindings release. For example, gtk_orientable_get_orientation(), added to GTK+ at the end of September 2008, was released in GTK+ 2.16 in March 2009. The first version of Java-GNOME which depended on GTK+ 2.16 was version 4.0.13, released in August 2009. That was packaged in distributions in Autumn 2009, so most users would not have had access to the newer bindings until a few months after that – perhaps early 2010 – at which point the API had been written 18 months beforehand.
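
As a back-of-the-envelope sketch of how those delays compound, the timeline above can be laid out in a few lines of Python. The exact days are approximations of the dates quoted above, used only to illustrate the arithmetic.

```python
from datetime import date

# Approximate milestones for gtk_orientable_get_orientation, from the timeline above.
stages = [
    ("API committed to GTK+", date(2008, 9, 30)),
    ("GTK+ 2.16 stable release", date(2009, 3, 15)),
    ("Java-GNOME 4.0.13 release", date(2009, 8, 15)),
    ("packaged in distributions", date(2009, 10, 15)),
    ("in most users' hands", date(2010, 1, 15)),
]

def months_between(a, b):
    return (b.year - a.year) * 12 + (b.month - a.month)

start = stages[0][1]
for label, when in stages[1:]:
    print(f"{label}: ~{months_between(start, when)} months after the code was written")
# Each stage only starts once the previous one has shipped a stable release,
# so the lags add up to well over a year before the API reaches most users.
```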

And that is when you have a regular release schedule you can rely on! Pity the developer who wants to release a GIMP plug-in which depends on some API included in GIMP 2.8 – the last stable GIMP release, 2.6, came out in October 2008, and over two years later, 2.8 has still not been released. And when you combine unreliable release schedules for distributions and applications, the results are cumulative: users of the stable Debian distribution are still using GIMP 2.4 releases. GIMP 2.4 was released in October 2007. Features added to the GIMP in late 2007 are still not in the hands of users of stable Debian distributions.

Getting features to users

It is difficult to generalise about when users upgrade their Linux distributions, or even to say what proportion of Linux users are new users at any given time. It would be over-simplifying to say that developers use bleeding-edge distributions, power users upgrade early to the latest and greatest, new users install the latest distributions available, but will only upgrade every 18 months or so afterwards, and conservative users stick with “Long term service” or stable distributions. Most developers I know use their computer for work (and thus want a stable distribution) and only install the latest versions of the various dependencies they need to work on their project. But let’s generalise and say that this is roughly the case. So (guesstimating) about 10% of your users will upgrade to the latest distribution very quickly after its release, a further 20% in the months afterwards, once the bugs have been shaken out, and the rest will follow along in their own time, perhaps 12 or 18 months later.

To make this concrete, let’s follow the life of a single patch. This is complete anecdata, but in my defence, the patch has been chosen at random, from a project which I know has good community processes and release management in place. The patch we’re going to follow adds an extension to Inkscape to render objects along triangular paths.

  1. Bug #226001 opened on 2008-05-03 by inductiveload, with a description of the feature to be added, and proposed code to implement it. The code, as an extension, may have a lower bar for acceptance than code which is core to a project.
  2. Patch submission reviewed on 2008-05-03, minor comments, but the patch is accepted (note: this was not the author’s first submission to Inkscape)
  3. Patch corrected to respond to comments and committed on 2008-05-03 (did I mention these guys had good community processes!?!)
  4. Inkscape 0.47-pre0, containing the Triangle extension, released on 2009-07-02
  5. Inkscape 0.47-pre4 included in Ubuntu 9.10

So for a feature developed in mid-2008, most Inkscape users still did not have the feature by the end of 2009, 18 months later. This is both a typical and an atypical example: in many projects, patch proposals lie unreviewed for days, weeks, sometimes months, but the 0.47 release cycle was a particularly long one for Inkscape. However, I think the lag of ~12 to 18 months from code written to presence on users’ hard drives is about right.

Does it have to be this hard?

If this were the only way to get features into a distribution, trying to improve MeeGo by contributing upstream would be a very frustrating experience. Happily, there are ways to accelerate the process. Taking the MeeGo kernel as an example (where Greg Kroah-Hartman recently threw in the towel on persuading people to propose patches upstream), the process is supposed to work like this:

  1. Propose a patch for inclusion upstream. This patch will then ship in a future stable kernel release (let’s say 2.6.38).
  2. After peer review, when the code has been accepted for inclusion in the kernel upstream, propose a backport for inclusion in the MeeGo kernel. The back-ported patch will be maintained across the next MeeGo release, and will be dropped when the kernel version included in the MeeGo project catches up with 2.6.38.

The overhead here is reduced basically to the peer review process of the upstream project, and the cumulative cost of merging a patch over the course of 6 months.

As a distributor (or a developer working on a specific distribution), this allows you to get code to everyone, eventually, and have that code included in your distribution as soon as you are sure that it is up to the standard expected by the community. Currently in MeeGo, the trend seems to be more towards submitting patches concurrently upstream and to the MeeGo kernel maintainers (or even submitting them upstream only once they have been accepted into the MeeGo kernel). In the case that a patch requires substantial modifications, or is rejected outright upstream, the kernel maintainers are then left carrying a patch in the distribution indefinitely. For one patch, this might not be a big deal, but for thousands of patches, the maintenance and integration burden adds up.

It is also not unusual for kernel developers to maintain their own git branches for a long time. Three examples that come to mind are inotify, which Robert Love maintained for over a year both for Novell and in the kernel before it was accepted into the mainline; ReiserFS, which was maintained out-of-tree for several years before being shipped with the Linux kernel in 2001; and the fast desktop patchset which Con Kolivas maintained for almost five years on the -ck kernel branch. Distributions will occasionally ship a substantial diff to upstream if there is a maintainer committed to getting the code upstream eventually. Allocating someone to work over a long period to make everyone happy and comfortable with your code may enable you to ship a big patch to upstream, but this will not be sustainable long term.

To summarise: when working upstream, as a distribution, you should only ship with patches which have been accepted in a development version of upstream already, if you can help it.

Meetings in telephone boxes

Sometimes, however, when upstream and downstream coincide, you can simplify things considerably, while also adding a small measure of risk.

In MeeGo, to continue with that example, the distribution architects have a pretty good idea when they can expect emergency telephony to be ready for oFono and the MeeGo telephony stack, because they’re writing it. By co-ordinating the upstream release management with downstream packaging, you can make promises as a distribution which you can’t with community-developed software.

When upstream and downstream are co-ordinating with each other, we cut out the middleman. The workflow becomes:

  1. Report a bug/feature request against a component of the distribution
  2. Develop a patch which implements the feature, and submit it directly to the distribution bug tracker
  3. Once it has been reviewed and accepted, you know that your patch will be included in the next version of the distribution.

This gives a distribution much more control, both over what gets done, and when, and explains both the Ayatana and MeeGo UX development projects. However, being able to plan around the release is no guarantee that the release will happen on time: GNOME has in the past been stung by planning during the 2.6 development cycle to depend on a new version of GTK+, only to find that the release was delayed. In the end, the GTK+ release shipped in time for the 2.6 release at the end of March.

Scratch scratch

The other patch lifecycle I’d like to mention, because it is so relevant to distributions, was pointed out to me by Federico Mena Quintero yesterday. What happens to a patch that someone makes and submits to a distribution when they find a bug in stable released software? This is one of the key advantages of free software – if you find a bug in the software you use, and you have the wherewithal, you can fix the bug and share that fix with everyone else.

However, as we have seen, there is typically a lag of several months from the time that software is released and the time it is being used by large numbers of users through distributions. With releases of Red Hat Enterprise Linux, Novell Suse Linux Desktop and Ubuntu LTS being supported for up to 5 years, it is possible that important bugs will be fixed in these stable versions for years after the original developers have moved on, and are no longer maintaining older stable versions.

Let’s say I find and fix a bug in Rhythmbox 0.12.5, which ships with Ubuntu 9.10. I open a bug report on Launchpad, attach a fix to the source .deb there, and I update my local copy. As a user, I’m happy – I have fixed my problem and shared the solution with others. If I’m particularly conscientious, I might open a bug on gnome.org against Rhythmbox and attach my patch there, but since the development version is now 0.13.2, the best you can hope for is that the patch applies cleanly to the master branch, and will be included in the next release. It is very unlikely that the upstream maintainers will release another update to the 0.12 series at this point.

Now imagine that you are a maintainer for SUSE, and someone reports the same bug against a long-term service release. In practice, there are several different versions being maintained by different distributions, and no good way to know whether the same bug has already been reported and fixed by someone else. You end up searching for a fix in upstream bug trackers, and in the bug trackers of each of the other main distributions. According to Federico at the time:

Patches for old versions are traded in the black market. You have friends in another distro? You ask them first, “did you guys already fix this?” Those patches don’t ever manage to reach CVS, where everyone would be able to get them.

Ideally, you could collaborate ahead of time with other distributions to ensure that you are all using the same branch of upstream modules, and are committing patches upstream. The Linux kernel is moving to this model, and there are also discussions underway in GNOME to co-ordinate this type of activity. Mark Shuttleworth has also pushed for something similar by encouraging projects in the core Linux platform to have a regular cadence of releases, so that everyone can synchronise their longer term service offerings every couple of years.

But at the moment, the best you can hope for is that your patch will be included in an upcoming release of your distribution, at which point other users of the distro can avail of it, and that upstream will patch their development version and latest stable versions, getting your patch to everyone in a few months.

Working upstream

The goal of this article is to explain what working upstream actually means, and how to make that more palatable for a distribution that wants to get features written and included in its next release. Hopefully, by pointing out some of the shortcomings of the way patches circulate from developers to users, some of these issues can be addressed.

In any case, one thing is clear – if you are carrying a patch as a distribution without ever submitting it upstream, you are making a costly mistake. You will be carrying code that others won’t, and bearing all of the merge and maintenance burden for that code for years to come. The path to maximum happiness is to co-ordinate with other distributions and with upstream to ensure that everyone is working in the same place, and sharing work as much as possible.

What’s involved in maintaining a package?

community, freesoftware, gnome, maemo, meego 7 Comments

Reposted from Neary Consulting

An interesting question was asked on a MeeGo mailing list recently: What does it mean to be a maintainer of something? How much time does it take to maintain software? It resulted in a short discussion which went down a few back alleys, and which I think contains some useful general information for people working with projects like MeeGo, which are part software development, part distribution.

Are you maintaining software, or a package?

The first question is whether you are asking about maintaining something in the Debian sense, or in the GNOME sense.

A Debian package maintainer:

  • Tracks upstream development, and ensures that new releases of the software are packaged and uploaded in a timely manner
  • Works with distribution users and other maintainers to identify bugs and integration issues
  • Ensures that bugs and feature requests against upstream software are reported upstream, and that bugs fixed upstream are propagated to the distribution packages
  • Fixes any packaging-related issues, and maintains any distribution-specific patches which have not (yet) been accepted or released upstream

A GNOME project maintainer:

  • Makes regular releases of the software they maintain (typically a .tar.gz built with “./configure; make; make install”)
  • Is the primary guardian of the roadmap for the module, and sets the priorities for the project
  • Works with packagers, documenters, translators and other contributors to the software to ensure clear communication of release schedules and priorities
  • Acts as a central point of contact for release planning, bug reports, and patch review and integration
  • Is typically also the primary developer of the software in question, though this is not necessarily the case

Obviously, these two jobs are very different. One places a high priority on coding and communication, the other on integration, testing and communication.

So how much time does maintaining software take?

Well, how long is a piece of string?

To give opposite extremes as examples: Donald Knuth probably spends a median time of 0 hours per week maintaining TeX and Metafont. On the other hand, Linus Torvalds has worked full time maintaining the Linux kernel for at least the past 15 years, and has been increasingly delegating large chunks of maintenance to lieutenants. The maintenance of the Linux kernel is a full-time job for perhaps dozens of people.

On a typical piece of GNOME software (let’s take Brasero as an example), much of the work is simplified by following the GNOME release schedule – the schedule codifies string freezes and interface freezes to simplify the co-ordination of translation and documentation. In addition, outside of translation commits, Brasero has had contributions from its maintainer, Philippe Rouquier, and 6 other developers in the last 3 months. Most of these changes are related to the upcoming GTK+ 3 API changes, and involve members of the GTK+ 3 team helping projects migrate.

In total since the 2.32.0 release, there have been 55 commits relating to translations, 50 commits from Philippe, 9 from Luis Medina, co-maintainer of the module, and there were 4 commits by other developers. Of Philippe’s 50 commits, 14 were related to release management or packaging (“Update NEWS file”), 5 were committing patches by other developers that had gone through a review process, and the remainder were features, bug fixes or related to the move to the new GTK+. Of Luis’s commits, 2 were packaging related, and 2 were committing patches by other developers.
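For what it’s worth, numbers like these are easy to reproduce from a checkout of the module. The sketch below is only illustrative: the tag name BRASERO_2_32_0 and the heuristic used to spot translation commits are my assumptions, and a real breakdown still needs a human reading the log.

```python
#!/usr/bin/env python3
"""Sketch: tally commits per author since a release tag.  The tag name
and the translation-commit heuristic are assumptions, not taken from
the Brasero repository."""

import subprocess
from collections import Counter

TAG = "BRASERO_2_32_0"  # assumed name of the 2.32.0 release tag

log = subprocess.run(
    ["git", "log", f"{TAG}..HEAD", "--pretty=format:%an%x09%s"],
    capture_output=True, text=True, check=True).stdout

authors = Counter()
translations = 0
for line in log.splitlines():
    author, _, subject = line.partition("\t")
    # crude heuristic: GNOME translation commits usually say so in the subject
    if "translation" in subject.lower():
        translations += 1
    else:
        authors[author] += 1

print(f"translation commits: {translations}")
for author, count in authors.most_common():
    print(f"{count:4d}  {author}")
```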

This is a lot of detail, but the point I am making is that the coding part of “maintenance” is relatively small, and that the bigger part of the job is actually sending out announcements, paying attention to bug reports and performing timely patch review. I would be interested to know how much time Philippe has spent working on Brasero over the past release cycle. I would guess that he has spent a few hours (somewhere between 5 and 10) a week.

On the other hand, the Debian maintainer for the Brasero package has a different job. There are 6 bugs currently forwarded upstream from the Debian bug tracker, and another 35 or so awaiting some final determination. A number of these look like packaging bugs (“you need version X of dependency Y installed”). The last release packaged and uploaded was 2.30.3-2, dating from November, and there have been 4 releases packaged in the past 8 months, none by the maintainer.

A typical Debian maintainer is a “Debian developer” for several packages. Pedro Fragoso, the Debian maintainer of Brasero, maintains 5 packages. I think it is fair to say that the amount of time a package maintainer spends maintaining an individual package is quite low, unless it is extremely popular. Perhaps a few hours a month.

The package maintainer has little or no say (beyond interacting with the project maintainer and forwarding on bug reports & feature requests) in what happens upstream, or which features have a high priority. His influence comes primarily from the fact that he is representing a larger user base and can indicate which bugs his distro’s users are running into and reporting regularly, or which feature requests are generating a lot of feedback.

What’s in a word?

It’s clear that a package maintainer is not the same thing as a project maintainer. So when Sivan asked on the MeeGo developer list how he could become a maintainer, he clarified later that what he was really asking was “How can I effect change in MeeGo?” To do that, you need to write some code that changes a module, or a number of modules, and then you need to get that code into MeeGo.

How that happens, in all its gory details, is the next instalment in this series of at least 2 articles: The Lifecycle of a Patch (or: Working Upstream).

Community Building Guide

community, freesoftware, gnome, maemo 4 Comments

I wrote another guest article for the VisionMobile blog last week, which just went live yesterday, titled “Open Source community building: a guide to getting it right”.

Excerpt:

Community software development can be a powerful accelerator of adoption and development for your products, and can be a hugely rewarding experience. Working with existing community projects can save you time and money, allowing you to get to market faster, with a better product, than is otherwise possible. The old dilemma of “build or buy” has definitively changed, to “build, buy or share”.

Whether you’re developing for Android, MeeGo, Linaro or Qt, understanding community development is important. After embracing open development practices, investing resources wisely, and growing your reputation over time, you can cultivate healthy give-and-take relationships, where everyone ends up a winner. The key to success is considering communities as partners in your product development.

By avoiding the common pitfalls, and making the appropriate investment of time and effort, you will reap the rewards. Like the gardener tending his plants, with the right raw materials, tools and resources, a thousand flowers will bloom.

After focusing recently on a lot of the things that people do wrong, I wanted to identify some of the positive things that companies can do to improve their community development experiences: try to fit in, be careful who you pick to work in the community, and make sure your developers are engaging well with the project. If you are trying to grow a community development project around a piece of software, then you should lower the barriers to entry for new contributors, create a fair and just environment where everyone is subject to the same rules, and not let the project starve for lack of attention to things like patch review, communication, public roadmapping and mentoring.

The original title of the article was “Here be dragons: Best practices for community development” – I’ll let you decide whether the VisionMobile editors made a good decision to change it or not.

Follow-up to “Shy Developer Syndrome”

community, freesoftware, maemo, work 4 Comments

Reposted from neary-consulting.com

My article on “Shy Developer Syndrome” a few weeks ago garnered quite a bit of interest, and useful feedback. Since a lot of it adds valuable perspectives to the problem, I thought I should share some of my favourite responses.

Here on gnome.org, Rodney Dawes argued that developers tend to stay away from mailing lists because the more public lists are very noisy:

For me, mailing lists are a huge risk vs. low return problem. They can become a time sink easily, and it’s quite often that pointless arguments get started on them, as offshoots of the original intent of the thread. Web Forums also have this problem. And, to really get much of anything out of a list, you must subscribe to it, as not everyone who replies, is going to put you specifically in the recipients headers. That means, you’re now suddenly going to get a lot more mail than you normally would for any highly active project. And for anyone trying to get involved in an open source community, 99% of the mail on that list is probably going to be totally irrelevant to them. It will just make tracking the conversation they are trying to have, much harder.

I agree with Rodney that dealing with a new level of volume of email is one of the trickiest things for new contributors. I still remember when I signed up to lkml for an afternoon in college, only to find 200 new emails 3 hours later. I panicked, unsubscribed, and gave up that day on being a Linux kernel hacker.

Since then, however, I have learned some email habits which are shared by other free software hackers I know. Everyone I know has their own tricks for working with medium or high volume mailing lists, and some combination of them may make things livable for you, allowing you to hear the signal without being drowned out by the noise. LifeHacker is a good source of tips.

Rob Staudinger says something similar, pointing the finger at bikeshed discussions as a big problem with many community lists:

Will the zealots go and suggest postgresql’s process model was poor, or samba’s memory allocator sucks? Unlikely, but they will tell you your GUI was bad or that you’re using a package format they don’t like, just because it’s so easy to engage on that superficial level.

Over at LWN, meanwhile, Ciaran O’Riordan makes a good point. Many developers working on free software want to separate their work and personal lives.

When I leave the office at 6pm, my work should have no more relevance until the following morning. Same when I quit a company. I might choose to tell people where I work/worked, but it should be a choice, and I should be able to choose how much I tell people about my work. Having mailing list posts and maybe even cvs commits might be too detailed. Maybe waaay too detailed.

Finally, over at neary-consulting.com, MJ Ray suggested that asking individuals to respond to a request can backfire:

Publicly referring to individuals on a mailing list is a double-edged sword. It might bolster the confidence of the named individual, but it also reduces the confidence of other people who might have answered the question. In general, I feel it’s best not to personalise comments on-list. Some e-democracy groups require all messages to be addressed to a (fictional or powerless) chair or editor, similar to the letters pages of The Times.

I agree with MJ in situations where the answer is accessible to the wider community. Often, though, only developers working for you, the manager, are in a position to reply. At that point you have a choice: get the information from your developer and answer yourself, or ask him to answer the question on the list – and I’ve found that asking on the list has the positive side-effects I mentioned.

Curing “Shy Developer Syndrome”

community, freesoftware, maemo, marketing, work 7 Comments

From the Neary Consulting blog:

One of the most common issues I have seen with experienced professional software developers who start to work on community software is a reluctance to engage with public communication channels like mailing lists. Understanding the reasons why, and helping your developers overcome their timidity, is key to creating a successful and fruitful relationship with the community you are working with.

In my experience, common reasons for this timidity are a lack of confidence in written English or in technical skills, nervousness about public peer review, and seeing community interaction as “communication” or “marketing” (which are not part of their job), rather than just “getting stuff done” (which, of course, is part of their job).

Read more…

MeeGo Conference: building bridges (literally!)

community, maemo 4 Comments

As part of the early bird events before the MeeGo conference this week, I ran a lollipop bridge building contest last night at the conference venue. The rules were simple: 100 lollipop sticks, a glue gun, and your bridge has to span a 40cm gap and resist as much weight as possible. We had about 40 participants, and 10 bridges entered.

There were two awards: prettiest bridge and strongest bridge. Obviously, the prettiest bridge contest was judged first.

Before...

The results were really impressive! The prettiest bridge was designed by Team Symbio (Ville Kankainen, Ilkka Maki, Henri Ranki and Márton Ekler). It was a beautiful arch bridge.

Team Symbio working on the prettiest bridge

The strongest bridge, made by “The Unbreakables” (Casper van Donderen, Dan Leinir Turthra Jensen and Sivan Greenberg), survived the shopping basket we used as the breaking tool with 25 1L bottles of water on top – impressive! The bridge was eventually broken when Chani tried to hang off it.

The Unbreakables, looking very smug – their bridge survived all the weight we had

Some of the bridges held quite a lot of weight – and broke very spectacularly!

The 9 bridges we managed to break during judging.

...and after
