Humanitarian Free & Open Source Software meeting place

community, freesoftware No Comments

This year, we hosted a Humanitarian Free and Open source software track at the Open World Forum for the third time. The track has been of great value to participants as a way of communicating with other practitioners in the space, and exchanging best practices and experiences around funding, community, and working with NGOs.

Once again, we had great participation from representatives of a wide range of projects in crisis management, healthcare, social enterprise and microfinance. And the quality of the presentations was excellent – when Leslie Hawthorn and I compared notes after the conference on our favourite presentations, the short answer was “all of them”. My personal favourites were Mark Prutsalis’s description of the EUROSHA project, Heather Blanchard’s experiences from working closely with aid agencies, and Leslie’s presentation with Julius Awakame on the great work that OpenMRS is doing in electronic medical record management for the poorest communities in the world.

The number one request I have had for the track is to turn it into more of a conversation – part presentations to establish a baseline for discussion, but mostly workshops or discussion forums on issues common to projects in HFOSS. Some organisations have funding, but have not succeeded in developing a commercial ecosystem of partners to support the deployment of the software. Other organisations have a partner ecosystem, but have not succeeded in growing an active community or identifying a sustainable funding model. Others have vibrant communities, but do not have either the funding or the organisational infrastructure to have a full time staff, or to bid for projects with aid agencies.

How would you feel about making next year’s Humanitarian track at the Open World Forum a worldwide meeting place for leaders of HFOSS projects, over two days, with a more traditional conference component including presentations, round-tables and workshops, and a Humanitarian BarCamp in Paris the day afterwards? I bet that if we can get critical mass for the idea, we could make this a regular meeting place for these projects. If there is somewhere else more appropriate as a meeting of minds, I’m happy to point people there instead, but I do not have the impression that there is. It has become obvious to me that there is a need for such a meeting place, and Paris in the Autumn seems like the ideal place to have it – a central location, easily accessible internationally from Europe, America, Asia and Africa.

Is there interest in making a real Humanitarian FOSS conference in Paris next year?

 

 

Attention! Speed bump ahead!

community, freesoftware, General 13 Comments

This week I was reminded that the first step in an Open Source project is often the hardest. We’ve been using MediaWiki for a project at work, with the “ConfirmAccount” extension to deal with spammers – a very nice extension indeed! It adds account creation requests to a queue where they can be handled by members of the bureaucrat group.

We had one wishlist item. We wanted to have a notification email sent out to every member of the group when a new request was received. There is an existing feature to send a notification email to a configurable email address, but not to the whole bureaucrat group.

So I said to myself, how hard can that be? And I rolled up my sleeves and set aside an afternoon to make the change and submit it upstream. After a few false starts which had nothing to do with MediaWiki, I got down to it.

First task: upgrade to the latest version of MediaWiki – the version on Fedora 17 is MediaWiki 1.16.5, and the latest stable version is 1.19.2. So I download the latest version, and follow the upgrade instructions to overwrite the system MediaWiki install in /usr/share/mediawiki with the upstream version. Unfortunately, the way that $IP (the Install Path) is set changed in version 1.17, in a way that took a little time to work around.

Once that was done, I downloaded the HEAD of the trunk branch from SVN which was linked from the old version of the extension home page, and got the extension working. That needed a few additional modules, and some configuration to get the email notifications working locally, but eventually, I was good to go.

I got to work making my change. It took a while, but once I figured out how to turn on debugging and the general idiom for database queries, it was easy enough. After a couple of hours hacking and testing, I was happy with the result.

The first date

I headed back to the extension home page to figure out how to submit a patch. At the same time, out of habit, I joined the project IRC channel, #mediawiki on Freenode, reasoning that if I got lost I could ask for help there. No indication on the Extension page, but a web search showed me that MediaWiki uses Bugzilla. So I registered for Yet Another Bugzilla Account, and confirmed my email address a few minutes later. Then I created a bug on Bugzilla, and attached my patch to the report.

Simultaneously, on IRC, I was asking for help and was told by a very nice community guy called Dereckson that the preferred way to submit patches was through Gerrit. It turned out that the extension home page should have been pointing to the more recent Git repository all along, and I had been developing against the wrong version of MediaWiki. Dereckson updated the extension page with the right repository information as soon as he discovered it hadn’t been updated before. No big deal, I cloned the Git repository, and tried to apply the patch from svn to git. Unfortunately, it didn’t work – some other changes related to translatable strings had changed code in the same area, and I had to re-do the change, but that was pretty easy.

I did try to submit a patch by following the procedure in the git workflow document, but without an account on Gerrit it didn’t work, of course. Dereckson convinced me to apply for developer access. After some initial resistance, because I really didn’t want to be a MediaWiki developer just to submit a drive-by patch, I requested developer access with the comment “I just wanted to submit one patch to an extension I use, Extension:ConfirmAccount.” Half an hour later, jeremyb approved my request with the comment “That’s what you think now!” 🙂

Then I went to the documentation on getting access and followed the instructions there, until I was directed to upload my SSH key to Labs and found I did not have access to that resource. Thanks again to Dereckson (once more to the rescue!) on IRC, I found my way to the git/Gerrit set-up instructions and got an SSH key configured for Gerrit. Then I went back to the instructions for submitting a patch, and a quick “git review” later, I had submitted my first patch ever to Gerrit.

Jumping through hoops

Jumping through hoops (CC by-sa, rwp-roger@flickr)

Pretty quickly, the first couple of reviews came in. First comment: “There’s some whitespace issues here.” Gee, thanks. Second comment, from Dereckson (again!) started with a “Thank you”, said the idea was a good one, and then gave me an example of the project norms for commit comments,  and made one comment on the code suggesting I use an option.

As a first-time user of Gerrit, I noticed a few issues with it for newbies. It’s not at all clear to me how to distinguish “important” comments from trivial-to-change things like whitespace issues. It’s also not clear whether a -1 blocks a commit, or how to have a discussion with someone about the approach taken in a patch. Also, it was unclear what the suggested way was to update a patch and propose a new, improved version. On my first try, I made some changes, committed them into a new revision on my local branch, and pushed the lot for review (normal git workflow, I daresay). Unfortunately, this was not correct. I ended up squashing the two commits with a “rebase -i”, and since then I have been using “commit --amend”.
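For anyone else hitting the same wall, this is roughly the amended-patchset workflow I ended up with – a minimal sketch, assuming git-review is installed and the change has already been pushed to Gerrit once:

git add -u          # stage the files changed in response to review comments
git commit --amend  # fold the fix into the existing commit instead of adding a new one
git review          # push the amended commit as a new patchset on the same change

Gerrit matches the amended commit to the existing change through the Change-Id line in the commit message, so each “git review” shows up as a new patchset on the same review rather than as a brand new change.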

After a few more rounds of comments (another whitespace comment, and a suggestion to avoid hard-coding the group name), I am currently on the 5th patchset, which I think does what it says it should, and will pass muster when someone gets around to reviewing it. I’ve been told that the review time for a small patch like this can be up to 5 or 10 days, and I don’t know exactly how I will know that the process is over and the patch is good to commit.

Sunk costs

The end result is that for a small change to a fairly simple MediaWiki extension, I spent about 2 hours coding, and about 4 or 5 hours (a full afternoon) going through the various hoops involved in submitting the change for review upstream.

I’m aware that this is a one-off cost – that now that I have a Bugzilla account, and a git and Gerrit account, it will be easier next time. Now that I have spent the time reading the MediaWiki coding conventions, git workflow, and have spent time understanding how to use Gerrit, I won’t have these issues again. The next patch will only take a few minutes to submit, and I won’t be wondering if I did something wrong if I don’t get a review in the first 10 minutes.

But along with some installation and firewall issues, I ended up spending slightly more than a full day on this. In hindsight, I’m saying to myself “was it really worth a full day of work to avoid maintaining this 20-line patch over time?”

I think it’s important that projects make newcomers jump through some hoops when joining – the tools a project uses and the community processes it follows are an important part of its culture. Sometimes, however, the initial investment you have to put into the first-time use of a tool – the investment that regular contributors no longer see – is big.

If you’ve never used Bugzilla, git, Gerrit, or SSH, how long would it take you to submit a first patch to a project? How many hurdles does someone have to jump through to submit a patch for your project? Is there a way to ease people into it? I could imagine something like an email based process for newcomers, and only after they’ve made a few contributions, insist that all of the community’s preferred tools and processes be used. Or having true single sign-on, where you have only one account-creation process for all of your interactions with the project, so that you don’t end up creating a wiki account, a Bugzilla account, a Labs account and configuring a Gerrit account.

I want to make clear – I am not picking on MediaWiki here. I rate the project well above average in the speed and friendliness with which I was helped at every turn. But they, like every project, have adopted tools to make it easier for regular contributors, and to help ensure that no patches get dropped on the floor because of poor processes. Here’s the $64,000 question: are the tools and processes which make it easier for regular contributors making it harder for first-time contributors?

Joining Red Hat

community, freesoftware, work 30 Comments

I’m joining Red Hat on May 2nd, where I’ll be working with the Open Source and Standards team. We’ll be working hard to help all of the projects that Red Hat contributes to kick ass at growing community. I have known several of the team for years, and interviewing for the position was frankly a pleasure.

I’d like to thank Karsten Wade for thinking of me and making the connections back in February. When my future boss described the position and the team to me around then, and asked me whether I might be interested, I couldn’t help saying “I think you just described my dream job”. I know you’re supposed to show restraint and play hard to get in these situations, but I got carried away.

Red Hat is one of the few companies out there that could tempt me away from independent consulting. They have a range of projects covering the server, desktop, middleware, cloud services and virtualisation. They are the top, or one of the top, corporate contributors to dozens of projects I use every day. I love the Red Hat philosophy of working with communities to make great Free and Open Source software.

Of course, that doesn’t mean that everything is roses. There are projects within Red Hat (or that Red Hat contributes heavily to) which need to improve their community processes, that could do a better job of promoting themselves, or that have hung on to former business models post-acquisition, at the expense of community growth. And it’ll be our job to fix those issues. It’ll be challenging, it’ll be a slow, incremental process. But I have no doubt that it will be very rewarding. I’m looking forward to it!

Connecting Apache httpd to Tomcat with mod_jk: The bare minimum

freesoftware, General, work 3 Comments

Earlier this week, I wrote:

I hate docs that tell you what to do, but not why. As soon as a package name or path changes, you’re dust. This is maybe the 4th time I’ve been configuring Apache to delegate stuff to Tomcat using mod_jk, and every time is just like the first.

For those who don’t know, mod_jk is a module implementing the AJP/1.3 wire protocol, which allows a normal HTTP web server to forward certain requests on to a second server. In this case, we want to forward requests for JSP pages and servlets to Tomcat 6. This allows you to do neat things like serve static content with Apache and only forward the dynamic Java stuff on to Tomcat. The user sees a convenient URL (no port :8080 on the hostname) and the administrator gets to serve multiple web scripting languages on the same server, or load-balance requests for Java server resources across several hosts.

I have spent enough time on it at this point, I think, that I understand all of the steps in the process, and have stripped the configuration down to the bare minimum needed to get things working. And so I’m putting my money where my mouth is: this is my attempt to write a nice explanation of how mod_jk does its thing, and how to avoid some of the common mistakes I made.

First, a remark: Apache is one of those pieces of software that has gotten harder, rather than easier, to configure as time has gone on. Distributions each package it differently, with different “helpful” mechanisms that make common tasks like enabling a module easier, and that allow modules like PHP to be packaged independently of the core package. But the overall effect is that a lot of the magic done by distributions makes it much harder to follow the upstream documentation. Config files have different names, or are stored in different places. Different distributions handle the inclusion of config file snippets differently. And so on.

This is not to say that Apache, Tomcat and mod_jk don’t have some nice docs – they do, but often the docs don’t correspond to the distros, or haven’t been updated in a while, and often they don’t explain why you have to do something, putting the emphasis instead on what you need to do. After all my reading, I finally found the Holy Grail I was looking for – the simple document of how to configure mod_jk – but even this has its shortcomings. The article doesn’t mention Tomcat, for example, which left me digging around for information on the Tomcat configuration I needed; that search led me to this, which in turn led me to overwrite the sample workers.properties file from the simple set-up document.

But if you understand the First Principles, you can figure out what’s going on with any organisation of configuration. That’s what I’m hoping to get across here.

How does mod_jk do its thing?

The first issue I had trouble getting my head around was how, exactly, all this was supposed to work. In particular, I didn’t quite understand how the configuration worked on the Tomcat side of things.

As I understand it, here’s what happens:

  • A GET request comes in to httpd for http://localhost/examples/jsp/num/numguess.jsp
  • Apache processes the request, and finds a matching pattern for the URL among the JkMount directives
  • Apache then reads the file specified by the JkWorkersFile option to figure out what to do with the request. Let’s say that config file says to forward to localhost:8009 using the protocol ajp13
  • Tomcat has a Connector listening on port 8009, with the protocol AJP/1.3, which handles the request and replies on the wire. Apache httpd sends the reply back to the client

Apache httpd configuration

There are two steps to configuring Apache:

  1. Enabling the module
  2. Configuring mod_jk

Many distributions provide a handy utility called “a2enmod” which will enable a module for you, once it’s installed. What happens behind the scenes depends on the distribution. On Ubuntu, module load instructions are put in a file called /etc/apache2/mods-available/<module>.load, optionally alongside a sample configuration file /etc/apache2/mods-available/<module>.conf. To enable the module, you create a symlink to the .load file in /etc/apache2/mods-enabled.
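In practice, on Ubuntu the whole dance comes down to something like this – a sketch; package and service names may differ slightly between releases:

sudo apt-get install libapache2-mod-jk   # installs mod_jk and drops jk.load into mods-available
sudo a2enmod jk                          # symlinks the module's files into mods-enabled
sudo service apache2 restart             # restart Apache so the module is loaded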

On my Ubuntu laptop, my jk.load contains:

LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so

On OpenSuse, on the other hand, a line similar to this is explicitly added to the file /etc/apache2/sysconfig.d/loadmodule by sysconfig, based on the contents of a field in the configuration file /etc/sysconfig/apache2 – remember how I said that distro packaging makes things harder? If you added the line directly to the loadmodule file, the change would be lost the next time Apache restarts.

In both cases, these files (on Ubuntu, the mods-available/*.load files, and on OpenSuse the sysconfig.d/* files) are loaded by the main Apache config file (httpd.conf) at start-up.

Configuring mod_jk

The minimum configuration that mod_jk needs is a pointer to a Workers definition config file (JkWorkersFile). Other useful configuration options are a path to a log file (JkLogFile – which should be writable by the user ID which owns the httpd process) and a desired log level – I set JkLogLevel to “debug” while getting things set up. On OpenSuse, I also needed to set JkShmFile, since for the default file location (/srv/www/logs/jk-runtime-status) the directory didn’t exist and wasn’t writable by wwwrun, the user that owns the httpd process.

This configuration, and the configuration of paths below, is usually in a separate config file – in both Ubuntu and OpenSuse, it’s jk.conf in /etc/apache2/conf.d (files ending in .conf in this directory are automatically parsed at start-up). To avoid errors in the case where mod_jk is not present or loaded, you can surround all Jk directives with an “<IfModule mod_jk.c>…</IfModule>” check if you’d like.

The JkMount directive configures what will get handled by which worker (more on workers later). It takes two arguments: a path, and the name of the worker to handle requests matching the path. Unix wildcards (globs) are accepted, so

JkMount /examples/*.jsp ajp13_worker

will match all files under /examples ending in .jsp and will pass them off to the ajp13_worker worker.

If you want Apache to serve any static content under your webapps, you’ll also need either a Directory or Alias entry to handle them. Putting this together with the previous section, the following (from Ubuntu) was the jk.conf file I used to pass the handling of JSPs and servlets off to Tomcat and serve static content through Apache:

<IfModule mod_jk.c>

JkWorkersFile   /etc/libapache2-mod-jk/workers.properties
JkLogFile       /var/log/apache2/mod_jk.log
JkLogLevel      debug
Alias /examples /usr/share/tomcat6-examples/examples
JkMount /examples/*.jsp ajp13_worker
JkMount /examples/servlets/* ajp13_worker

</IfModule>

I should use Directory to prevent Apache from serving anything it shouldn’t, like Tomcat config files under WEB-INF – I could also just use “JkMount /examples/* ajp13_worker” to have everything handled by Tomcat.
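Something like the following, inside the same <IfModule> block, should do the trick – a sketch using the Apache 2.2 access-control syntax of the day, which I haven’t battle-tested:

<Location /examples/WEB-INF>
    Order deny,allow
    Deny from all
</Location>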

Now that Apache’s config is done, we need to configure mod_jk itself, via the workers.properties file we set in the JkWorkersFile parameter.

workers.properties

Sample workers.properties files contain a lot of stuff you probably don’t need. The basic, unavoidable parameters are the name of a worker (which you’ve already used as the 2nd argument to JkMount above), a hostname and port to send requests to, and a worker type (there are several types besides ajp13 – “lb”, for “load balancer”, is the most important one to read up on). For the above jk.conf, the simplest possible workers.properties file is:

worker.list=ajp13_worker
worker.ajp13_worker.port=8009
worker.ajp13_worker.host=localhost
worker.ajp13_worker.type=ajp13

And that’s it! The last step is to set up Tomcat to handle AJP 1.3 requests on port 8009.

Configuring Tomcat

In principle, Tomcat doesn’t need to know anything about mod_jk. It just needs to know that requests are coming in on a given port, with a given protocol.

Typically, an AJP/1.3 connector is already defined in the default server.xml (in /usr/tomcat6 on both Ubuntu and OpenSuse) when you install Tomcat. The format of the connector configuration is:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

I am pretty sure that this will work without the redirectPort option, but I haven’t tried it. It basically allows requests received with security constraints specifying encryption to be handled over SSL, rather than unencrypted.

In addition to this, Tomcat does provide a facility to auto-create the appropriate mod_jk configuration on the fly. To do so, you need to specify an ApacheConfig in the Tomcat connector, and point it at the workers.properties file. This facility looks pretty straightforward, but I know I found it confusing in the past when I lost edits to the jk.conf file – I prefer manual configuration myself.

Gotchas

I have had quite a few gotchas while figuring all this out – I may as well share for the benefit of future people having the same problems.

  • All the documentation for mod_jk installed with the packages refers to Tomcat 5 paths – for example, on OpenSuse, in the readme, I was asked to copy workers.config into /etc/tomcat5/base – a directory which doesn’t exist (even when you change the 5 to a 6)
  • If your apache web server uses virtual hosts (and, on Ubuntu, it does by default) then JkMounts are not picked up from the global configuration file! You need to either add “JkMountCopy true” to the VirtualHost section, or have JkMounts per VirtualHost. If you used Alias as I did above, and you try to run a servlet, the error message is just a 404. If you try to load a JSP, you will see the source.
  • If you make a mistake in your workers.properties file (I had the typo “workers.list=ajp13_worker” for several hours) and your worker name is not found in a “worker.list” entry, you will see no error message at all with the log level set to error or info. With the log level set to debug, you will see the error message “jk_translate::mod_jk.c (3542): no match for /examples/ found”. The chances are you have a typo in either your jk.conf file (check that the name of the worker corresponds to the name you use in workers.properties), or somewhere in your workers.properties file (is it really worker.list? Does the worker name match? Is it the same as the worker name in the .host, .port and .type configuration?)
  • Make sure you get Tomcat working perfectly on port 8080 first – or you won’t know whether errors you’re seeing are Tomcat errors, Apache errors or mod_jk errors. A quick check like the one below helps.
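As a sanity check for that last point, I found it useful to hit the same JSP both directly on Tomcat and then through Apache - assuming the Tomcat examples webapp is deployed:

# directly against Tomcat - should return 200 OK
curl -I http://localhost:8080/examples/jsp/num/numguess.jsp

# through Apache and mod_jk - should also return 200 OK, served by Tomcat
curl -I http://localhost/examples/jsp/num/numguess.jsp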

I’m sure I’ve made mistakes and forgotten important stuff – I’m happy to get feedback in the comments.

Jeudi du Libre: “How to use Git to look good”

freesoftware 6 Comments

I will be giving a short training session on Thursday December 1st 2011 at 19h30 in the Maison pour Tous, Salle des Rancy, on the subject “Présentation et initiation à l’utilisation de Git pour la gestion d’un projet communautaire (ou comment donner l’impression qu’on est un super codeur ?)” (roughly translated: “A presentation of Git usage in community software projects (or: how to make people think you’re a super coder)”). The basic idea is to show people how to use git to (a) save their work regularly and (b) reorganise patches using

git rebase --interactive

to make themselves look smarter than they are.

For those interested but unable to attend, I will be borrowing heavily from Federico Mena Quintero’s 2008 blog post “Why I want to have the children of git rebase --interactive”.

This is a short session, but I plan to plough through a lot of content, like this:

  • 10 minutes: What is version control, and how does Git (and other DVCSes) differ from svn
  • 20 minutes: Git basics (initialising a repository, allowing others to pull from it, how clone, status, add, commit, update, pull, push and branch work)
  • 20 minutes: Branching, merging – good policy for maintaining a common trunk, submitting patches for review with git format-patch and send-email
  • 20 minutes: git rebase, with --interactive using pick, edit and squash (see the short example after this list); a word of warning on squashing or editing commits when others are cloning your code
  • 20 minutes: Questions
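To give a flavour of the rebase part, here is roughly what an interactive rebase of the last three commits looks like – a sketch, with made-up hashes and commit messages:

git rebase --interactive HEAD~3

# git then opens an editor with a todo list along these lines:
pick 1a2b3c4 Add basic scanner support
squash 5d6e7f8 Fix typo in scanner support
edit 9a8b7c6 Rework notification emails

# pick keeps a commit as-is, squash folds it into the previous commit,
# and edit pauses the rebase so that the commit can be amended.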

Are there things I absolutely should talk about which aren’t listed? I’m assuming that most of the attendees will be somewhat familiar with some version control and potentially even git already, and don’t need a full one hour on the virtues of version control, so I’m assuming I can fly through the basics pretty quickly.

 

Encouraging empathy

community, freesoftware, General 9 Comments

A couple of days ago, I tweeted this: “Insight of the day: All “community norms” documents come down to one word: Empathy. Think how others will feel before you act.” I think that’s worth expanding on.

As I was growing up, empathy meant "Deanna Troi" to me

Here are a few examples:

  • GNOME Code of Conduct: “If someone asks for help it is because they need it. Do politely suggest specific documentation or more appropriate venues where appropriate, but avoid aggressive or vague responses such as “RTFM”.” – think how it feels to need help, and to take the step of asking for it. I want to know that someone *understands* what I’m going through, feels my frustration, and is looking for a way to help relieve it. Recently, when confronted with a technical issue that prevented me from using my scanner, the advice I received on an IRC channel was to upgrade or change my Linux distribution! How un-empathic can you get!
  • Koha patch rules (along with every other development project in the world): “The patch must apply to the current HEAD of the master branch of the code”: Put yourself in the place of a project maintainer who receives a patch proposal. He tries to apply it to his source tree, but the merge fails. Why? Because the patch was created against a year-old release of the project, and he’s since reworked internals to solve a different, unrelated issue. The maintainer is faced with a number of unsavory choices now: Spend time reading the patch, understanding what it does, and “forward-porting” it to master; check out the old branch, apply the patch, review, and test it there, and not commit to master; or drop the patch. What would you do in that situation? Someone is giving you a gift, which is going to make work for you. Is it worth an hour or two of your time to work on it to get it to just apply to your work? You still need to review the patch after that – which you would have to do anyway. In that situation, most people will ask the original patch proposer to do the initial grunt work and get the patch working on the tip of master.
  • Now flip things around. You found a bug in software you use – the bug was really annoying. You took the time to get the source code, identify, qualify and fix the bug, open a bug report, which was confirmed, and attach a patch which you have checked fixes the problem. And what answer do you get? “Not good enough – work on it more”. How would that make you feel? That depends on how it is communicated. If it’s a stock answer, like a sheet of paper handed over a tax office counter, with a list of prerequisites, then I bet that would make me angry, resentful and frustrated. I poured time and effort into that patch, and this is how you treat it? If the criticism is of the core of the patch – it doesn’t fix the problem, or should do so differently – then the criticism might be easier to take. But if it’s issues which potentially add many hours of effort on top of time already spent, with no benefit to the proposer (check out the latest code, upgrade half a dozen dependencies without breaking my old version, compile it, and then forward-port the patch), chances are he won’t do it. An empathic response might be to make someone aware of the guidelines and their reason for being, but help him with the forward-porting on IRC by asking him to explain the patch, what it does and why.

All of those guidelines on indentation and whitespace, commenting code, including test cases, updating documentation, and ensuring code compiles at the tip of the master branch are designed to help patch proposers make patches which are easy for maintainers to apply. And in this context, an empathic patch proposer can understand them much better. Miguel de Icaza did a great job of framing this right.

When I was in college, I went to the Netherlands one Summer for a working holiday. At one point, I had a job offer for short-term work, but needed to be registered for tax to start, and I needed a bank account to get paid. So I went to the tax office, and the lady behind the counter very resignedly handed me a piece of paper with prerequisites, and told me to come back when I had fulfilled them. Then she looked over my shoulder and said “Next!” One of the prerequisites was a permanent address – I explained that I was living in a camp-site for the summer, and would that address do? Of course not! No proposed solution, no consideration of the situation I was in, no empathy.

Then I went to the bank, where I was told that I needed a permanent address to open an account. Same sense that the person I was dealing with didn’t care about me. So with my friend Barry, we looked into short term accommodation options. Landlords required (among other things) an employment agreement and a bank account before they would rent us accommodation – even if only by the week! In the end, we had to pass up that job, and work “on the black” for less than minimum wage to survive the Summer – but what choice did we have?

How frustrating! Just imagine how we felt. That’s empathy.

Again and again in community guidelines, whether it’s guidelines for people who are approaching a community to report a bug or propose a patch or feature, or guidelines for community members dealing with each other and people outside the community, this idea “think how this would make you feel if the roles were reversed” is pervasive, but unwritten. I think that it should be.

It reminds me of a social experiment I heard about recently:

Rats are placed in a box with a lever. When you pull the lever, a food pellet is distributed. The rats quickly learn to pull the lever. After a while, you change the configuration of the cage. Now, when the lever is pulled, the food pellet is distributed, but a cold shower drenches all the rats in the box. After a while, the rats learn not to pull the lever, and, as a group, start to punish any rat which does.

The second generation starts when a new rat is introduced into the box. As he approaches the lever, the older rats all jump on top of him, to prevent him from touching it. In effect, they are teaching him the rule “don’t touch the lever”, without explaining why the rule exists. As time goes on, new rats are added, and old rats are removed from the box.

The third generation begins when none of the original rats are left in the box. None of the rats have experienced the cold shower after the lever was pressed. At that point, you can turn off the cold shower function – you can be sure that no rat will ever touch the lever, because the community rules forbid it. Ask any of the rats why, though, and they will not be able to give you a better answer than “because that’s the way it is”. If rats could talk, of course.

Community guidelines which are purely written documents, but which neglect the empathic side of the equation and don’t explain how not following the guidelines affects other people, can be similar. It’s important to include the reasons for guidelines, so that we don’t forget how breaking the rules makes others feel. It’s also important to have sufficient flexibility and adaptability in dealing with new community members – as in the rat experiment, circumstances change all the time. Back when everyone was using CVS, performing merges was a complicated and time-consuming process. Nowadays, rebasing and merging with Git, Bazaar or Mercurial is so easy that some of the coding guidelines we used to have may no longer have the same impact. Likewise, email technology has moved on, and with cheap and copious bandwidth, email norms have evolved – netiquette norms move on with the technology.

In general, as Bill & Ted famously said, “Be excellent to each other”. Think about how your actions & statements will be received. Be empathic.

Humanitarian Software – Using technology to help humanity

community, freesoftware, marketing, work No Comments

Tomorrow, Friday September 23rd, the Humanitarian FOSS track at the Open World Forum will bring together leaders from some of the most important humanitarian software projects and case studies of the impact these projects are having on people’s lives around the world. I’m happy to have been allowed to chair the track, and I am humbled by the quality of the presenters and the impact that their work is having.

In addition to the Humanitarian track, we are also honoured to have Laura Walker Hudson from FrontlineSMS give a keynote presentation on the overarching theme of “Humanitarian FOSS – serving humanity” in the main auditorium at 17:15. Laura will give an overview of the myriad ways that free and open source software is saving and helping people’s lives.

The Humanitarian track will have two core themes:

  • Crisis Management– how Free and Open Source Software plays a role in extreme events
    • The Sahana project, born in Sri Lanka after the 2004 tsunami in the Indian Ocean, helps NGOs and citizens caught in a crisis by crowd-sourcing missing persons reports, co-ordinating different NGOs working in the same place, and tracking incident reports and volunteers.
    • Tashiro Shuichi from Japan will present the ways that Open Source software helped during the tsunami disaster in Japan.
    • Syrine Tlili from the Tunisian Ministry of Communication Technologies will tell us how Open Source was used by citizens during the Arab Spring revolutions
    • Sigmah is a project that enables project management for NGOs
  • Sustainable Development– once the crisis is over, what are the projects that help with systemic problems like education, health-care, sanitation, and documenting human rights violations?
    • SMS is the killer app for communication in the developing world. Most villages in Africa, Asia and South America have cellphone coverage, but unreliable power and Internet access, and no land lines. FrontlineSMS enables you to send and receive SMS messages from any computer, using a cheap phone or GSM modem. It is at the heart of many of the most prominent humanitarian software projects.
    • Sugar is a learning environment which was designed from the ground up to meet the needs of educators in developing countries, as part of the OLPC (One Laptop Per Child) project to revolutionize the use of technology in education. Sean Daly from the Sugar project will show us a deployment of Sugar and OLPC in a secondary school in a small town in Madagascar.
    • Martus, a project created by Benetech, allows the secure recording and storage of testimony relating to human rights violations. Testimony collected with Martus has been used to successfully prosecute police officers for murder in Guatemala.
    • Mifos, which was developed by Grameen Bank, the pioneer of micro-finance, provides a micro-financing platform for financial institutions.
    • Akvo helps connect doers and donors to transform communities in some of the poorest parts of the world, funding water, sanitation and health-care projects.
    • The Open Bank Project promotes financial transparency and provides tools to allow people to fight corruption in banking.

Coders for Social Good

There are dozens of amazing Free/Open Source Software projects working to improve the lives of people around the world. For example, Literacy Bridge provides talking books to communities in Africa, and OpenMRS enables the gathering of medical information from regional clinics to reduce child mortality by improving resource allocation.

Many Open Source developers are developing software in communities because they want to make the world a better place. Working on a humanitarian project provides a unique opportunity to combine the social good of Open Source community projects and the public good of helping people in need. Social Coding 4 Good is a new initiative from Benetech which puts willing volunteers in contact with humanitarian projects in need of resources.

The schedule for the track is available on the Open World Forum website. For any press or interview requests, please contact me by email dave@neary-consulting.com or my cellphone +33 6 77 01 92 13.

 

Humanitarian FOSS at Open World Forum 2011 – schedule online

community, freesoftware No Comments

After some last minute confirmations, cancellations and other trials and tribulations, I am very happy with the final line-up for the  Humanitarian FOSS track at the Open World Forum this year.

We have a range of projects attending and speaking this year which will cover both crisis response (crowdsourcing first responder information, document and information management when it’s most important) and longer term actions (education, sanitation, financing sustainable projects, enabling reconstruction and securing people from persecution).

I’m particularly happy to have Syrine Tlili from Tunisia presenting the role that free software played in the citizens’ revolution in Tunisia this Spring, and Tashiro Shuichi telling the story of the lives that were saved thanks to free software after the tsunami disaster in Japan in March. I also expect Simon Redfern’s presentation on the Open Bank Project to be particularly interesting, in the context of the current financial crisis.

In addition to the track, Laura Walker Hudson has also agreed late in the day to keynote on the topic of “Free and Open Source Software: Serving Humanity” at 17:15 on Friday the 23rd. As project manager of FrontlineSMS, she has a key position at the center of a lot of what goes on in the humanitarian world: SMS is the key means of gathering and sharing information with rural areas of the developing world, and is also the main way people share information during a crisis.

I’d also like to take the opportunity to give a big shout out to Bayor Andrew Azaabanye from the Literacy Bridge project – he was due to speak on the ways Literacy Bridge is being deployed in Ghana, but unfortunately we could not get a visa in time for him to attend.

We will be in the “Paris” room from 1pm until 5pm on Friday September 23rd – I hope to see a lot of you there!

 

The Cost of Going it Alone

community, freesoftware 20 Comments

These are speaker notes from a presentation I gave at the Desktop Summit 2011, on a topic I’ve written about in the past. The slides for the presentation are on the conference website (PDF link).

I’m going to talk about the costs associated with modifying and maintaining free software “out of tree” – that is, when you don’t work with the developers of the software to have your changes integrated. But I’m also going to talk about the costs of working with upstream projects. It can be easy for us to forget that working upstream takes time and money – and we ignore that to our peril. It’s in our interests as free software developers to make it as cost-effective as possible for people to work with us.

Hopefully, if you’re a commercial developer, you’ll come away from this article with a better idea of when it’s worthwhile to work upstream, and when it isn’t. And if you’re a community developer, perhaps this will give you some ideas about how to make it easier for people to work with you.

Softway

In 1996 and 1997, Softway worked on bringing a POSIX API to Windows NT. This involved major patches to all the components of the GCC toolchain – the compiler, linker, assembler, debugger, etc. To make the changes they needed, they hired an 18 year industry veteran of compilers and operating systems, and over the course of 6-8 months, the main body of work was done.

Conscious of the costs involved in maintaining that much work out-of-tree, Softway approached upstream developers to propose that their changes be integrated. The reactions ranged from “this is great, but…” to “NT? Don’t care about it”.

After this initial failure, Softway turned to the GCC company at the time: Cygnus Solutions. Cygnus had hired many GCC maintainers, and at that time if you wanted anything done with GCC, they were the guys to talk to. Their quote? $140,000. And they wouldn’t be able to start work on the project for 14 months.

Deciding this was too expensive, Softway eventually hired another company called Ada Core, maintainers of the Ada front-end for GCC, to rework the patches and get them upstream. Ada Core cost $40,000, and could start next week. That’s roughly the same amount of money as was originally spent developing the features in question.

Getting Things Done

At the highest level, the question that people want to answer is: “How can I get what I want done, as quickly and cheaply as possible?” This software exists, it does 80% of what I need, I need to change it a little bit to fit those needs. What’s the best way for me to do that?

The most common strategy is to pick a release to build on, and hack away from there. In fact, in probably 90% of cases that’s as far as it goes. I add the features I need for the project I’m working on Right Now, ship it, and never even talk to upstream about what I did, or why.

The costs involved in this approach are all the “stuff” that gets added upstream (features, bug fixes and security patches) which you never see. Mal Minhas, former CTO of the LiMo Foundation, labelled this “unleveraged potential” (PDF link) – missing out on the work of other people who are doing things in your interests. The underlying assumption is that you will end up redoing at least some of this work while maintaining your own version.

To avoid missing out on this work, the usual recommendation is to regularly merge changes from upstream into your local copy of the upstream package. But this merge is typically not free – and the bigger your changes, the more likely it is that you will find significant conflicts between upstream and your local copy. There is also an additional, often-forgotten overhead of regression testing and validation after each merge. Every time I upgrade a component to a new version, I need to verify that it hasn’t broken anything, either because of changed behaviour at integration points I’m using, or simply because some regressions were introduced when fixing other issues.

And the worst part about this maintenance cost is that you will incur it every time you upgrade. Almost every time you pull code from upstream, you will have substantial costs involved in merge resolution and regression testing. And to keep the delta between your code and upstream as small as possible, you should do these merges regularly.
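In git terms, the recurring chore looks something like this – a sketch of a vendor branch tracking an upstream repository, where the remote URL and branch names are purely illustrative:

git remote add upstream git://example.org/project.git   # one-time set-up; URL is hypothetical
git fetch upstream                                      # pick up everything added upstream
git checkout vendor-branch                              # the branch carrying our local changes
git merge upstream/master                               # resolve conflicts with our delta...
# ...and then re-run the regression suite before shipping anything

The git commands are cheap; the conflict resolution and the regression run afterwards are where the real cost lives, and they come back with every merge.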

Inevitably, someone will suggest that the maintenance costs have grown to the point where it’s worth your while “giving back” (where this is often a synonym for “dumping our stuff upstream”). The goal is to reduce the delta to a point where only client- or project-specific patches are maintained out of tree, and anything which might be useful to someone else is sent back to the upstream project, to be maintained there. Jim Zemlin recently said in an interview “Let me tell you, maintaining your own version of Linux ain’t cheap, and it ain’t easy”… in his words, “if some aren’t giving back as much as others today, I just think it will naturally happen over time. It always is in their business interest to do so”.

It’s at this point that you will run into the community overhead.

Community overhead

Open Source developers expect contributors to jump through all sorts of hoops to get their code upstream. Most maintainers will request that you re-format patches according to community norms, submit patches which apply cleanly to the head of the development branch, and may suggest alternative approaches to how you should write your patch or feature. Jonathan Corbet has described how this works with the kernel community:

A patch of any significance will result in a number of comments from other developers as they review the code. Working with reviewers can be, for many developers, the most intimidating part of the kernel development process […] when reviewers send you comments, you need to pay attention to the technical observations that they are making. Do not let their form of expression or your own pride keep that from happening. When you get review comments on a patch, take the time to understand what the reviewer is trying to say. If possible, fix the things that the reviewer is asking you to fix. And respond back to the reviewer: thank them, and describe how you will answer their questions.

To take just one example of how projects expect people to do extra work to contribute, think about what version of a piece of software a company is likely to want to integrate into their product or solution. When we worked on QuteCom, the rule I expected our developers to follow was to use only stable releases of any libraries we depended on, which were included in recent releases of major distributions and were present in Debian testing. I didn’t want my guys to be debugging unstable versions of software written by other people; we had our own stuff to get out the door. And by requiring that components be present in releases of popular distros, I felt I was lowering the bar to participation, by allowing community members to get started by installing devel packages from the distro, rather than downloading and compiling dependencies.

So what happens when you make some changes to the projects you depend on? Those changes are made to a stable version of the software. If your product release will take several months, it is likely that upstream will have moved on. And upstream projects expect patches against development branches, not against stable releases. So any work that I do needs to be merged with the development branch first, before it’s submitted.

A developer is left with some choices, none without costs or risk:

  • Ignore upstream completely, scratch your own itch, and sell upgrades to your end client to pay for maintenance costs – which wastes developer time and does not benefit upstream at all, not to mention leaving your software open to security issues and bugs as they are discovered and fixed upstream.
  • Fork a vendor branch off a stable release, and “when the project is finished” work on merging it upstream – but by that time, other projects have come along, there’s a substantial cost involved in rebasing your work to the latest and greatest upstream – “when I have time” doesn’t happen very often in the Real World. One way to get around this is to hire someone from upstream to “take care” of getting your code upstream – as we have seen with Softway, this can be a significant financial and time investment. But it can almost guarantee that your code will get upstream.
  • The ideal situation – work on a feature branch off the development branch of upstream, and back-port your work to the slow-moving stable version, submitting it for inclusion upstream as soon as it’s finished (there is a sketch of this workflow after this list). Martin Fowler recently pointed out some of the problems with feature branches, but they’re much better than big merges and code dumps. The cost here is that you’re paying the merge cost up-front by making small, frequent merges, and adding an extra overhead of keeping two branches in sync. Also, there is a significant cost attached to the risk that once you’ve put in all the work, the result will be rejected by upstream developers anyway.
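A rough sketch of that third option in git, assuming upstream develops on a branch called master and your product ships from a local stable branch (all of the branch names here are illustrative):

git checkout -b my-feature upstream/master     # develop the feature against upstream's tip
# ...hack, commit, and submit the result for review upstream as soon as it works...
git checkout product-stable                    # the slow-moving branch the product ships from
git cherry-pick my-feature~2..my-feature       # back-port the finished commits (range is illustrative)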

Some stories can illustrate the relative weight of the costs involved in the second and third of these options.

Maemo GTK+

In 2003 or 2004, Nokia started working on modifications to GTK+ for its Nokia 770 tablet. By the time the tablet was released in 2005, the Nokia delta to GTK+ was tens of thousands of lines of code. In addition, a set of mobile-only widgets had been packaged into the Hildon package, which depended on Nokia’s vendor version of GTK+. At that point, when the project became public, Nokia wanted to propose these changes for inclusion upstream in GTK+.

To help with this work, Nokia contracted Imendio (now called Lanedo) to help. At the time they started working on the project, the delta was over 50,000 lines of code. Over the course of 4 years, a lot of work was done to reduce this delta, sometimes by re-writing Maemo features to make them acceptable upstream, sometimes by shepherding changes in, and in part by rebasing Maemo’s GTK+ on GTK+ 2.10, which solved some of the problems which needed to be addressed before.

And yet, even after four calendar years and many more man-years of effort, Hildon, the (reduced) set of mobile widgets which many Maemo application developers used, did not work perfectly on top of a stock upstream GTK+. When Nokia made a $50,000 grant to the GNOME Foundation to enhance the developer experience for MeeGo developers, integrating Hildon widgets upstream and ensuring that Maemo application developers could easily port their applications to MeeGo was a big part of the winning bid.

A huge amount of this work could have been avoided by following Andrew Morton’s advice, given to a developer of the experimental filesystem Tux3:

Do NOT fall into the trap of adding more and more stuff to an out-of-tree project. It just makes it harder and harder to get it merged. There are many examples of this.

But to ask the question the other way around: could the GTK+ maintainers have done more to facilitate the submission of this work upstream?

Wakelocks

In 2005, Google acquired a little-known, stealth mode company called Android, which we now know was developing a Java- and Linux-based phone operating system. By 2007, when the platform and first Android phones were announced, the company had made significant changes to the Linux kernel for its needs. One of those changes was called wakelocks – system services and kernel drivers could request that the kernel not go to sleep in the absence of user input (thus locking the device awake, giving the name). Matthew Garrett gave a pretty clear description of what wakelocks do, and why they were added to Android (from his point of view as maintainer of power management in the kernel) at last year’s LinuxCon.

Wakelocks allow the system to save battery, even when there are poorly behaved applications running on the system. For a production environment with thousands of applications of varying quality, that makes sense. So in early 2009, a little-known kernel developer working on Android, Arve Hjønnevåg, proposed that wakelocks be included in the kernel. To that end, he sent the following mail, along with a big patch, to the Linux kernel mailing list:

The following patch series adds two apis, wakelock and earlysuspend. The Android platform uses the earlysuspend api to turn the screen and some input devices on and off. The wakelock code determines when to enter the full suspend state.

These apis could also be useful to other platforms where the goal is to enter full suspend whenever possible.

It’s fair to say that these changes were not initially well understood or well received. The initial reaction was covered at the time by Jonathan Corbet of LWN.

After a few rounds of revisions, the proposal appears to have been dropped. At least, no significant effort was made to push the patches between March 2009 and early 2010, when a number of things happened that converged into a perfect storm around the issue. At the end of 2009, Greg Kroah-Hartman deleted a number of Android drivers from the staging tree, saying “no one cared about the code”. Partly as a result of that, a number of key kernel figures met in April 2010 at the Collaboration Summit to discuss “the Android problem” with Google engineers. Soon afterwards (perhaps by coincidence), Arve re-posted a new set of patches, which seemed to be well received.

That thread erupted, however, and 1500 emails and some hurt feelings later, an alternate implementation called suspend blockers by Rafael Wysocki was integrated into the kernel.

All of this took a lot of work from Google, essentially after the feature was finished. According to Ted Ts’o, speaking in August 2010:

Android team members have spent literally hundreds of man hours (my mail folder on the suspend blocker thread has over 1500 mail messages, and is nearly 10MB), and have tried rewriting the patches several times, in an attempt to make them be main-line acceptable.

Chris DiBona, speaking in an interview with Paula Rooney, said at the time that there were some developers at Google working on it who “feel burned” by the decision, but he acknowledged that the “staffing, attitude and culture” at Google isn’t sufficient to support the kernel crew.

Clearly, there is a cost involved in trying to submit code to the kernel and to other projects, and there is a significant risk that the code will be rejected, in spite of that effort.

Given the heavy-hitting kernel developers working inside Google, I can’t help but wonder whether things would have gone smoother if Andrew Morton were asked to help shepherd wakelock functionality into the kernel. As Matthew Garrett said in his LinuxCon post-mortem: “Getting code into the kernel is always easier if you have a recognised name associated with it”.

EVMS and IBM

Even when you do everything right, it is possible to have code rejected by upstream – at which point, the best solution is often to suck it up, and port your work over to whatever API was provided for the same problem space by upstream.

In 2000 or 2002, IBM started work on EVMS, the Enterprise Volume Management System. EVMS was a set of kernel drivers and user-space tools which allowed users to manage several virtual disk drives, potentially across several disks and physical partitions. Dan Frye, speaking about the project in August 2002, said it was “a quantum step forward in terms of ease of use, reliability and performance”.

The project was published first on SourceForge in 2001, and was developed in the open, with releases regularly being made against the most recent upstream kernels, as well as for the major distributions.

On October 2nd, 2002, Kevin Corry proposed EVMS for inclusion in the 2.5 kernel. At the time, he wrote:

To make this as simple as possible for you, there is a Bitkeeper
tree available with the latest EVMS source code, located at:
http://evms.bkbits.net/linux-2.5
This tree is sync'd with the linux-2.5 tree on linux.bkbits.net
as of about noon today (Oct 2).

At this point, IBM had done everything right, as far as I can tell. They tracked upstream development, talked about their plans and encouraged feedback, and when they proposed EVMS for inclusion, it was intended to make it as easy as possible to accept it.

At this point, things get a little fuzzy. The end result, though, was that LVM2, an alternative kernel framework for logical volume management developed by a small company called Sistina (later acquired by Red Hat), was merged on October 29th. And on November 5th, Kevin Corry announced a change in direction for EVMS:

As many of you may know by now, the 2.5 kernel feature freeze has come
and gone, and it seems clear that the EVMS kernel driver is not going
to be included. With this in mind, we have decided to rework the EVMS
user-space administration tools (the Engine) to work with existing
drivers currently in the kernel, including (but not necessarily limited
to) device mapper and MD.

Again, IBM did “the Right Thing”, and ported their tools to the APIs which were in the kernel. Making that decision earned the team a lot of respect among the kernel community. But at the end of the day, it also cost them – all of the wasted work on their kernel drivers, and all of the work porting their tools to new APIs. A back of the envelope calculation, based on the releases of EVMS after that date, suggests that it set the project back ~6 months, and ~18 man-months of work.

When to engage?

So when does it become useful to engage upstream? And, more importantly, is there anything that we, as upstreams, can do to lower that barrier, to make it easier for new companies to engage?

The reality is, if you’re only making small patches to a project, it’s going to take you longer to get your patches upstream than it will to maintain them over time. It’s just not worth your while, unless there is some intrinsic motivation for working with the project.

If you have a moderately sized delta with upstream, and you expect to have to maintain your version over time, then it may be time to start thinking about getting some of that code upstream. There are a couple of approaches you can use: The first is to train up developers working in-house and have them build some relationship capital over time. The second is to hire a company who already employs maintainers to review your code, suggest changes and help shepherd anything that is appropriate upstream, as Softway did with Ada Core. The relative costs of these options will depend on how nice the project is to new developers, how well documented the community processes are, and of course the quality of your code. In any case, at this point you will need to consider the cost involved in rebasing to a later version of upstream, and consider how often you will have to do it.

However, once your delta goes over a certain threshold, you will end up spending a substantial amount of time in maintenance and regression testing. At this point, the repeated long-term cost of conflict resolution and regression testing every time you merge will outweigh the cost of getting code upstream. Again, there are two options – train up one of your developers, or subcontract the upstreaming work. You might even consider killing two birds with one stone, and including mentorship of one or two of your developers in the subcontracting contract, to grow some in-house project developers. It may also be worth head-hunting someone with an existing reputation in the project.

Finally, if you are using a piece of software as a strategic part of a product portfolio, and you are making substantial changes to it, then you would be insane not to have a maintainer, or at least a senior developer, from the project on payroll. When Samsung decided to include EFL in their phone platform, they hired Rasterman. Google hired Andrew Morton. Red Hat hired many of the GTK+ maintainers of the era when they created Red Hat Advanced Development Labs to develop GNOME back in 2000. Collabora hired Wim Taymans, Edward Hervey and Christian Schaller from the GStreamer project. And the list continues. But if you do hire a maintainer, do so in the knowledge that their primary allegiance may be to “their” project, and not to your company. Being a good maintainer entails doing a lot more than writing code – factor into your schedule that 20%–30% of their time will be spent on patch review, rolling releases, documenting roadmap plans, chatting on the mailing list, etc.

Let me leave you with this take-away: We will never make it financially interesting for someone who’s done a quick hack to get that upstream, or get them to fix their “bug” the Right Way. It will always be in the interests of companies with a strategic dependency on a piece of software to hire or train someone to be a core member of the community developing that software. But for everything in between, as free software developers, we can help. We can make it easier for companies to figure out who they can hire to train or mentor their developers, or shepherd changes upstream. We can lower the bar to getting features upstream by documenting community norms and practices, by being nicer to new developers on the list, and by instituting a mentoring programme. By improving patch review processes, we can decrease friction and make it nicer and easier to contribute to the project. If it’s easier for a developer to do a merge every three months than it is for him to talk to you, then your project is missing out.

 

OpenOffice.org in Apache: The Next Step

freesoftware No Comments

A few weeks ago, I wrote about what submitting OOo to Apache meant for the various parties involved. In particular, I said “IBM can continue to develop Symphony, with a licence it’s happy with.”

Yesterday, I saw that  IBM will contribute Symphony to OpenOffice.org. This is a natural next step for them for a number of reasons:

  • They will reduce the delta between Symphony and OpenOffice.org, thus reducing maintenance costs. And since they are a major sponsor of the move to Apache, we can expect considerably less resistance to invasive code changes than we might previously have seen at OpenOffice.org, making the effort to merge considerably cheaper than it would otherwise have been
  • They retain the option of keeping parts of their product private, thanks (as I pointed out) to the license involved – notably, anything related to Lotus Notes integration can stay proprietary
  • They will be helping solve a major problem for OOo adoption by Apache: the replacement of (L)GPL dependencies

All in all, this is good for OpenOffice.org, and good for IBM. And, in the long run, if the long-awaited IAccessible2 et al can easily be integrated into LibreOffice, good all round.

 
