September 17, 2009
community, freesoftware, General, gimp, gnome, maemo, work
(Reposted from Neary Consulting)
Mal Minhas of the LiMo Foundation announced and presented a white paper at OSiM World called “Mobile Open Source Economic Analysis” (PDF link). Mal argues that by forking off a version of a free software component to adjust it to your needs, run intensive QA, and ship it in a device (a process which can take up to 2 years), you are leaving money on the table, by way of what he calls “unleveraged potential” – you don’t benefit from all of the features and bug fixes which have gone into the software since you forked off it.
While this is true, it is also not the whole story. Trying to build a rock-solid software platform on shifting sands is not easy. Many projects do not commit to regular stable releases of their software. In the not too distant past, the FFmpeg project, universally shipped in Linux distributions, had never made a formal release, stable or unstable. The GIMP spent over four years in unstable development between version 1.2.0 (December 1999) and 2.0.0 (March 2004), with only bug-fix releases on the 1.2 series.
In these circumstances, getting both the stability your customers need, and the latest & greatest features, is not easy. Time-based releases, pioneered by the GNOME project in 2001, and now almost universally followed by major free software projects, mitigate this. They give you periodic sync points where you can get software which meets a certain standard of feature stability and robustness. But no software release is bug-free, and this is true for both free and proprietary software. In the Mythical Man-Month, Fred Brooks described the difficulties of system integration, and estimated that 25% of the time in a project would be spent integrating and testing relationships between components which had already been planned, written and debugged. Building a system or a Linux distribution, then, takes a lot longer than just throwing the latest stable version of every project together and hoping it all works.
By participating actively in the QA process leading up to a release, and by maintaining automated test suites and continuous integration, you can mitigate the shifting sands of unstable development versions and reduce the integration overhead once you have a stable release. At some stage, you must draw a line in the sand and start preparing for a release. In the GNOME project, we freeze modules progressively: first the API & ABI of the platform, then new module proposals and the features to be included in existing modules, then strings and user interface changes, before finally imposing a complete code freeze pre-release. Similarly, distributors decide early what versions of components they will include on their platforms, and while occasional slippages may be tolerated, moving to a new major version of a major platform component would send integration testing more or less back to zero – the overhead is enormous.
The difficulty, then, is what to do once this line is drawn. Serious bugs will be fixed in the stable branch, and they can be merged into your platform easily. But what about features you develop to solve problems specific to your device? Typically, free software projects expect new features to be built and tested on the unstable branch, but you are building your platform on the stable version. You have three choices at this point, none pleasant – never merge, merge later, or merge now:
- Develop the feature you want on your copy of the stable branch, resulting in a delta which will be unique to your code-base, and which you will have to maintain separately forever. In addition, if you want to benefit from the features and bug fixes added to later versions of the component, you will incur the cost of merging your changes into the latest version, a non-negligible amount of time.
- Once you have released your product and your team has more time, propose the features you have worked on, piecemeal, to the upstream project for inclusion in the next stable version. This solution has many issues:
- If the period is long enough, your feature additions will be far removed from the codebase as it has evolved, and merging your changes into the latest unstable tree will be a major task.
- You may be redundantly solving problems that the community has already addressed, in a different or incompatible way.
- Feature additions may need substantial re-writing to meet community standards. This problem is doubly acute if you have not consulted the community before developing the feature, to see how it might best be integrated.
- In the worst case, you may have built a lot of software on an API which is only present in your copy of the component’s source tree, and if your features are rejected, you are stuck maintaining the component, or re-writing substantial amounts of code to work with upstream.
- Develop your feature on the unstable branch of the project, submit it for inclusion (with the overhead that implies), and back-port the feature to your stable branch once it is included. This guarantees a smaller delta from the next stable version to your branch, and ensures your work gets upstream as soon as possible, but adds a time & labour overhead to the creation of your software platform.
In all of these situations there is a cost: the time & effort of developing software within the community and back-porting it, the maintenance cost (and related unleveraged potential) of maintaining your own branch of a major component, or the huge cost of integrating a large delta back into the community-maintained version many months after the code has been written.
Intuitively, it feels like the long-term cheapest solution is to develop, where possible, features in the community-maintained unstable branch, and back-port them to your stable tree when you are finished. While this might be nice in an ideal world, feature proposals have taken literally years to get to the point where they have been accepted into the Linux kernel, and you have a product to ship – sometimes the only choice you have is to maintain the feature yourself out-of-tree, as Robert Love did for over a year with inotify.
While addressing the raw value of the code produced by the community in the interim, Mal does not quantify the costs associated with these options. Indeed, it is difficult to do so. In some cases, there is a cost not only in terms of time & effort, but also in terms of the goodwill and standing of your engineers within the community – the type of cost which it is very hard to put a dollar value on. I would like to see a way to do so, though, and I think that it would be possible to quantify, for example, the mean community overhead by looking at the average time for patch acceptance and/or the number of lines modified from initial proposal to final mainline merge.
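One crude way to make that concrete is to compute the mean time from proposal to merge over a set of patches. The sketch below uses made-up dates; real figures would have to be mined from a project's bug tracker or mailing list archives.

```python
from datetime import date

# Hypothetical records: (date a patch was first proposed, date it was
# merged to mainline). Real data would come from a bug tracker or
# mailing list archive.
patches = [
    (date(2009, 1, 5), date(2009, 2, 20)),
    (date(2009, 1, 12), date(2009, 4, 1)),
    (date(2009, 3, 3), date(2009, 3, 17)),
]

def mean_days_to_merge(records):
    """Mean number of days from initial proposal to mainline merge."""
    deltas = [(merged - proposed).days for proposed, merged in records]
    return sum(deltas) / len(deltas)

print(mean_days_to_merge(patches))
```

The same loop could just as easily average lines changed between first submission and final merge, which would capture the re-writing overhead rather than the waiting overhead.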
Anyone have any other thoughts on ways you could measure the cost of maintaining a big diff, or the cost of merging a lot of code?
March 2, 2009
I’ll be in San Francisco later this month, from March 22nd until the 29th, for OSBC, and while I’m there I’ll be looking to get in a few training runs for a marathon I hope to run in May.
I am staying in a hotel on Market Street, near the Civic Center, and I have a few runs in the training plan for that period:
- A 28km long run on the 22nd (which I’ll probably do in the early evening, to stay awake and get over the jet lag)
- A 16km run on Tuesday – 5km warm-up plus 2x4500m marathon pace, plus a few kms warm-down – it’d be nice to have a known distance around 3 miles for this one
- Some speed work (500m splits) on Thursday that I’ll probably skip or swap out for an early morning jog
- Another 28km long run on Saturday 28th, before flying out on the 29th.
Does anyone have any suggestions for good places to run? I would like to run around Crissy Field and across the Golden Gate bridge while there, if possible, and I notice there are some nice-looking hills on the other side of it in the Golden Gate park.
Anyone want to join me for one or more of the runs? I’m arriving early and leaving late, so I’ll be all on my lonesome for a few days if anyone feels like going for a 2.5h jog together over the weekend and showing off the city to me. Drop me a line, we’ll work something out.
Update: Another idea which looks like it’d be great, having looked at a map of Marin County, would be to rent a bike for the day and go for a 3 or 4 hour ride. Anyone game?
February 12, 2009
freesoftware, marketing, work
In gathering material for my series on migrating to free software, one thing immediately jumps out at me.
If your server software uses industry standard protocols to communicate with your client software, then finding free replacement software is easy, painless and transparent for the user. Need a DNS service? Bind’ll do, thank you very much. SMTP? You’re spoiled for choice – there’s qmail, Postfix and sendmail, among others. IMAP or POP3 – try Dovecot, or the UWash IMAP server. SSH – OpenSSH. FTP – PureFTPd, VSFTPd and proftpd are all fine. HTTP – Apache is one of many web servers available.
Pretty much anything with an RFC has free software implementations that are complete, and compare well with commercial competitors. Often, as is the case of Bind or Apache, they are the leaders in their space.
In other words, by using only standard client-server protocols, you have freedom to leave.
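That freedom is visible in a few lines of client code. In this sketch (the hostname and addresses are hypothetical), the client depends only on the message-format and SMTP RFCs, so swapping sendmail for Postfix or qmail behind `mail.example.com` requires no client changes at all.

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    # An RFC 5322 message: any standards-compliant mail server,
    # free or proprietary, will accept it unchanged.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send(msg, smtp_host="mail.example.com"):
    # The client speaks RFC 5321 SMTP to whatever answers on port 25;
    # which implementation sits behind the hostname is irrelevant.
    with smtplib.SMTP(smtp_host, 25) as conn:
        conn.send_message(msg)
```

The lock-in argument is simply the inverse: replace the standard protocol with a proprietary wire format, and the client above has to be rewritten before the server can be.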
However, if your server software “integrates” tightly with your client software, in the style of Notes and Domino from Lotus/IBM, or Exchange Server and Outlook, or Sharepoint and Office, or if it has its own proprietary wire protocol, then you may have a pain point.
So the first lesson, I think, is to consider how replaceable the server elements of your infrastructure are at acquisition time, if you want to avoid lock-in later on. As hard as projects like Samba and Zimbra chase the tail-lights of proprietary wire protocols, the easiest way to avoid them is to rely, where possible, on standard, open protocols.
And that’s what I’m looking for more than anything. How do people get around their pain points? Have people had an Exchange or Sharepoint hang-over? Now that PostPath has gone away, are people looking to get rid of Exchange left with Zimbra as the only option? Has migrating from MS SQL to a free database server been a pain in the leg? What have people used to centralise authentication and share home directories across the network? Is Samba with LDAP a drop-in solution?
December 20, 2008
community, freesoftware, General, gnome, marketing, work
Reposting from Neary Consulting: This is an article accompanying the presentation I gave at MAPOS 08 in London on December 9th 2008.
Moving the Mobile industry from purchasing to co-development in free software communities
Recently, Matt Aslett wrote an article about the way that attitudes to free software evolve over time within a company, using a graphic he got from the Eclipse Foundation, based on some Nortel-funded research. Software sneaks in on the ground floor, going from simple use of components to a real understanding of community-driven development, resulting, long-term, in building free software projects and strategies.
Matt sees an evolution in attitudes as the software and its value is discovered at different levels of the organisation, before finally the business development side of the company picks up the ball and drives free software into the heart of the company’s product strategy.
I have also seen this learning process in action, but I would express it differently. People discover the value of the freedoms granted by free software one by one, more or less independently of their level in an organisation – exploring each freedom before discovering its limitations, and thus discovering the value of the next freedom, and qualifying for the next level.
The core freedoms in the Free Software Definition which are granted to the user of free software are:
- Freedom to use
- Freedom to modify
- Freedom to share, freedom to redistribute
- Freedom to participate
As companies start to integrate free software components into their products, they discover the value of these freedoms one by one.
The first thing that people see about free software is FREE! As in zero cost. The days when companies reject a product out of hand because they don’t have to pay for it are gone – Linux, OpenOffice.org, Apache, Red Hat and a plethora of other “free” products have proven themselves in the marketplace, and companies are now prepared to allow free software components into their solutions, after appropriate consideration of the licences involved.
To quote one attendee at MAPOS 08, “why would I want to write a compression library, when I can download the best one in the world from zlib.org?” In the area of specialised components for secure communications, compression/decompression, a commodity kernel, and a bunch of other situations, it is appropriate to use free software components off-the-shelf. We expect them to work, and we don’t expect to ever need to talk to the maintainer.
Free software components are in use like this in thousands of systems solutions and commercial products, often without their authors even being aware of it. The main advantage of this for a systems or product company is a saving of time and money, through having a fully functional component without having to go through a purchasing process, and a reduced software bill of materials. An additional advantage is the simplification of your licensing due diligence, thanks to the relatively well-understood consequences of the various popular free software licences.
The difficulty arises when the software doesn’t meet your needs. In many cases, libraries are written by an individual to scratch an itch – it works for him, but is not quite up to your requirements. As one friend of mine put it: “Open Source: 80% as good as the last guy needed it to be”.
Perhaps it’s software that works on 32 bit platforms, but has never been tested for 64 bit. Perhaps it has not been ported to ARM or MIPS. Or perhaps the author simply never imagined that anyone would want the feature which you find indispensable.
In this situation, you can always ask the software author to write the feature or fix the bug for you – but since there is no client/supplier relationship between you, it is entirely reasonable for a volunteer to put your request on the long finger, or reject it outright.
At this point, you realise the value of having the source code – you can modify the software to meet your needs, or pay someone else to do it for you.
Being able to modify software that doesn’t quite meet your needs is amazing. This is the way things used to work by default, but the shrink-wrapped software revolution of the 1980s got everyone used to the idea that software was a valuable asset to be protected from public view at all costs. When I worked for Informix in the late ’90s, we used to refer to the source code of our leading product as “the crown jewels”.
With the widespread acceptance of free software as an alternative, developers are no longer surprised to be able to see how a program works, and to change its behaviour. This ability brings two important and immediate benefits – you have control of the behaviour of the software, and you can adapt it to suit your needs exactly. The old choice of build vs buy has become: build vs buy vs extend.
This situation is common in software services companies which provide vertically integrated “solutions” to corporate clients. You take components where you can find them to speed up initial development, stick everything together with duct-tape, hack whatever you need in whatever libraries you’re using to make everything pass the client’s integration tests, and then publish a set of .tar.gz files somewhere on the website of the company to fulfil any licensing requirements.
This control and ability to tailor a solution comes at a price, however. Over and above the cost of making the changes, your team is lumbered with a maintenance problem. Let’s say that implementing the features you need on top of a component the first time round takes a month. Fixing bugs in the features when it has been rolled out can take another few weeks. A few months later, the upstream product you’re based on goes and releases a shiny new version, with lots of compelling new features that you really want.
The cost of integrating your features into the newer version, and doing extensive regression testing before rolling out the new version, might take you another 6 weeks. It is not unusual for time spent integrating your work into later versions to quickly outweigh initial development time and investment. Inconveniently, this is typically effort which is not budgeted for beforehand.
After a company has run into this problem a couple of times, over the course of a year or two, someone will usually suggest sending the features you have developed upstream to the projects you work with – if a feature is accepted, you have solved your maintenance problem: it will be in all future releases of the project, and all of that tricky integration and regression testing work will get done upstream, as part of normal maintenance.
And so you tell your star hacker Jack that he has two weeks to get your 5,000 line patch down to manageable size by getting your work integrated upstream. (when I said this at MAPOS, no-one laughed – so maybe this does not sound as ridiculous as I thought it did).
He diligently goes to work, cleaning up his code, getting rid of all the warnings, splitting up the big diff into small manageable chunks, creating accounts in 10 different bug trackers, signing up to a dozen mailing lists, creating 47 bugs with terse descriptions, attaching proposed bug fixes, and, for major features, sending email telling people that the feature is there and asking for review.
By the end of a frantic month, two weeks more than he was given, he reckons that if everything he’s submitted is accepted, your 5,000 line patch will be down to a more manageable 2,000 lines.
What happens next is… underwhelming.
Major features and bug fixes lie unreviewed for weeks or months. Those that are reviewed need changes which take time and effort. Some patches are rejected outright because they’re too big and the feature is difficult to review.
A post mortem analysis of the project of “giving back to the community” might identify some of the following conclusions:
- Not enough time and resources were devoted to advocating your changes upstream
- Personal relationships between Jack and the project maintainers led to a much higher acceptance rate for patches and feature requests
- The projects were initially evaluated on technical grounds; no thought was given to the developer communities underpinning them
- In some cases, the maintainers’ priorities were poorly understood
There are two common conclusions that people draw from this kind of analysis:
- It’s not worth it. They don’t want our work, and the time we’re spending is costing us more than maintaining out-of-tree patches
- Perhaps if you had engaged with the projects before modifying them heavily, or had been regularly sending contributions, the maintainers would have been more encouraging, and might have been more prepared to consider your work. If someone from your company were already a maintainer or committer, you would have had a valuable short-cut to getting your agenda implemented in the upstream project.
If you choose door number 1, you will go no further in your quest to really understanding free software processes. This is a reasonable thing to do, but the costs involved are often miscalculated. In addition, the benefits of influencing upstream projects are often vastly underestimated.
If you choose door number 2, you have concluded, in short, that it is madness to include a component in one of your products while exerting no influence over the upstream project.
To have influence, you must understand how the community around a project works. Someone within the team must become an active, trusted member of the community. Once they have gained the trust of the community through their contributions, there may be some procedure to follow for them to become a maintainer of the project, or to gain commit privileges.
These considerations are not technical, for the most part. Friendship and trust are fuzzy human concepts. And this more than anything else brings me to my final point.
Community is hard
For a start, every community is different. They all have different people, different behavioural norms, different dynamics, different forums for communication.
Taking GNOME Mobile as an example, there are 18 projects in the GNOME Mobile platform, with another 10 or so in incubation. Within that, we have a large number of projects housed on gnome.org, and governed by our rules, procedures and conventions. And yet each project has its own set of maintainers – GTK+ is maintained by a committee of around 10 people, EDS is maintained principally by Novell employees, gtkmm has one core maintainer, and so on.
On top of this are a number of freedesktop.org projects, and a couple more which are not under either of these umbrellas. To be an effective influencer of GNOME Mobile, you need to learn the culture of over 20 projects, of wildly varying sizes and baggage.
There are a number of issues to bear in mind when you approach a free software community for the first time. The main one is that while the vast majority of projects believe they are welcoming people with open arms, if you are a stranger to their land, it is very likely that you will be getting exactly the opposite message.
In some cases, the extent of the welcome is “go and read the wiki page telling people how to contribute to the project”. In other cases, no wiki page exists. Occasionally, you will be told that you’re asking your question on the wrong mailing list, or in the wrong way, or that you should read the relevant documentation first. It is not unusual for people to answer questions with a very terse answer – perhaps a link to a mailing list discussion or web page where the answer can be found.
In general, all of these things are intended to fulfil a simple goal – get you the information you want as quickly as possible, in a way that wastes the time of people already in the project as little as possible. An admirable goal indeed, but as a newcomer, this is not how people are used to being welcomed. Eric Raymond wrote extensively about this in his essay “How to ask questions the smart way”.
Indeed, one of the hardest things to do as an outsider looking in is to evaluate when a community is healthy and viable, and when it has problems which will prevent you from working effectively in partnership. Few resources exist which talk about healthy free software community projects – “Producing Open Source Software”, by Karl Fogel, is something of a bible on the subject, and should be required reading for anyone considering investing in free software. I have also found some presentations, including Simon Phipps’s 2006 OSCON keynote “The Zen of Free” and “How Open Source Projects Survive Poisonous People” by Ben Collins-Sussman and Brian Fitzpatrick, to be excellent resources for identifying the traits of a healthy community. Two other useful papers which include metrics for measuring the openness of a community, including its governance model, are Pia Waugh’s “The Foundations of Openness” and François Druel’s Ph.D. thesis (in French) “Évaluation de la valeur à l’ère du Web” (PDF – rough translation: “Measuring value in the era of the Web”).
Some of the considerations when evaluating a community are whether there is clear leadership, whether that leadership is an individual, a group, or a company, how the leaders are chosen (if they are chosen), what technological and social barriers to participating in the project exist, whether the community processes are documented and transparent, what recourse one has if one feels badly treated, what the behavioural norms of the community are (and whether they are documented) – the list goes on. Pia’s paper in particular gives a great overview in the section “Open Governance”.
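Some of these traits can even be roughed out numerically. As a hedged sketch (the author list here is invented), commit concentration – the share of all commits held by the single most active contributor – is one quick proxy for how dependent a project is on one person or company:

```python
from collections import Counter

def contribution_concentration(commit_authors):
    """Fraction of all commits made by the single most active author --
    a crude proxy for a project's dependence on one person."""
    counts = Counter(commit_authors)
    return max(counts.values()) / len(commit_authors)

# Hypothetical history, e.g. flattened from `git shortlog -s` output:
# one list entry per commit.
authors = ["alice"] * 60 + ["bob"] * 25 + ["carol"] * 15
print(contribution_concentration(authors))  # 0.6
```

A number like this is no substitute for reading the mailing lists, but it gives a first signal: a project where one author holds 90% of the commits is a very different investment from one where leadership is spread across several organisations.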
Call to arms
And so I close with a call to arms to both free software communities, and companies planning on developing an “open source strategy”.
First, developers: document your communities. Think of yourselves as guides, explaining the cultural quirks of your country to a newly arrived immigrant. Be explicit. In addition to explaining where and how your community works, document how one gains trust and responsibility. Ensure that a newcomer can quickly learn what he needs to do to become a citizen, and from there a project maintainer. I am not saying that it should be easy for someone to become a maintainer. What I am suggesting is that it should be easy to see how one becomes a maintainer before setting out to do it.
Next, project managers, software developers, company leaders: please, please, please – save yourself time and money and, when you reach the point where you will be building products which depend on good free software components, let the second thing that you do, right after a technical evaluation, be to evaluate the health of the community. A community where you can earn influence and guide the project to better meet your needs is a better long-term investment than betting on a slightly technically superior solution with an unhealthy governance model.
You are building products that you will be selling, supporting, and hopefully profiting from. In this situation, does it really make sense not to have the developer’s ear?
October 24, 2008
community, freesoftware, marketing, work
Jerry Maguire on the future of the free software industry
[Reposted from my professional site]
Suddenly, it was all clear. The answer was fewer clients. Less money. Caring for them and caring for ourselves.
“Fewer clients. Less money.” Sacrilege in a world where the goal is to grow the first billion dollar “open source vendor”. But that chimera that Matt Asay holds a torch for may never come. Free software has a lot of selling points – and the main one is that if your vendor is charging you too much money, you can find a different, smaller one who will charge you less.
That doesn’t mean that the originator of the software can’t make money – knowing the software better than anyone else, and being able to customise the software, is a pretty powerful selling point and a clear path to building a profitable small business.
As many commentators have said (and I agree), support is not a scalable business model. Other smaller, more agile, companies can start businesses around your product, gain expertise, become contributors to your project, and syphon off some of that yummy support and maintenance cash you’re hunting for.
But so what? Free software doesn’t get developed like proprietary software, why should the free software industry look like the proprietary software industry?
Here’s my vision of the future: Smaller businesses. Each with fewer, happier clients. Less money. Lots of them, all over the world.
September 19, 2008
community, freesoftware, maemo, work
Coming to the end of the first day of the Maemo Summit in C-Base in Berlin. From just outside, you have a view of the antenna of the space station that the C-Base group have been mapping out for the past few years. For those who don’t know, this is the terraforming space station which brought life to earth, and which crashed in what is now Berlin 4.5 billion years ago. Only the central tower, now in use as a television tower, is visible above ground.
The two days at OSiM World were useful and educational. I got to meet people from companies trying to learn how to work well with free software, which gave me a great opportunity to effect real change by talking to the decision makers in those companies. I also got to meet some Maemo people who came to OSiM to meet up and hang out at the Maemo/Nokia stand (by far the most active stand in the conference, by the way).
But the Maemo Summit is a refreshing counter-weight to that – some of the observations that people made this morning were:
- We got better wireless and power for free than at the high-powered conference people paid thousands of euros to attend (given that the CCC is involved in C-Base, that was not surprising to me)
- The vocabulary has totally shifted. We’ve moved from value propositions, cost-benefit analysis, return on investment and fragmentation to people getting excited about tracemonkey, PowerVR, OMAP3, Clutter, hacks, crashes and bugs… the people who are down in the trenches and know what free software is are here.
- Fewer suits, more t-shirts
It has been an amazing day so far – some big news from Peter Schneider this morning that the interface for Fremantle will be Clutter-based – and Rodrigo Novo went into more detail: Nokia are funding new tablet-oriented widgets and off-screen support for GTK+, the integration of Clutter, and more. The lightning talks were fascinating for the breadth and depth of things which people are doing with Maemo – everything from using tablets in police cars to porting PyPy to running Debian in a chrooted environment as a Maemo application (a presentation given in OO.o running in said Debian on an N810!!!).
As usual, what’s most impressive at these things is meeting old friends and making new ones. I’ve got to spend lots of time with Stormy, Paul Cooper, Jim Zemlin, Lefty, Bdale and others, and today I had a chance to have a good chat with Rob Taylor, Philippe Kalaf, Murray Cumming, Simon Budig and more. I was very happy to put a face to Tero Cujo’s name, the latest addition to Nokia’s maemo.org team. I’m a little disappointed not to have met anyone from Nemein here yet after working with them for so long – but Henri is in Korea for an international Haedong Kumdo competition, and getting his 23rd Dan (or something like that) confirmed by a master while there.
In addition to the summit, there is also a desktop search hackfest happening over the next two days, with people involved in Tracker, Beagle and Xesam getting together to agree on interfaces and work on implementation, to bring rocking search to desktops and tablets of the future.
There’s a heavy GNOME influence at the conference, which makes the various noises I hear about Nokia backing away from GNOME seem exceedingly over-stated. It looks to me like Nokia are using more and more of the GNOME and freedesktop.org stack, and are more than any other company right now setting a direction for GNOME in the future with investments in technologies like Clutter.
So far, great stuff! Looking forward to the party tonight, and day 2 tomorrow.
September 15, 2008
community, freesoftware, maemo, work
On Thursday I’ll be participating in a panel at OSiM World – “Effectively Building and Maintaining an Open Source Community”. It was a happy coincidence to see Matt Asay writing about the issue on Friday, and again today – it gives me a chance to think a bit more about the issues involved, and provides a data point which is very close to what I have repeatedly seen when companies decide to use free software, be it peripherally or strategically.
On several occasions I have seen a lone developer decide to use a free software library to do some job for him. It doesn’t quite fit his needs, but he hacks up the extra feature or two in a couple of days, finds a few bugs that he fixes along the way to make things work as he needs them to, and ships it to a client as part of a larger solution.
At this point, one of two things will happen. The external project either stays as-is in the SCM of the company, awaiting a future upgrade request from the client, or the developer (usually because he is “the Linux guy” in the company and knows about these things) bundles up a couple of patches, heads along to the bug database for the project, signs up for Yet Another Bugzilla Account, and creates three or four bug entries with bug fixes attached, and another one for the new feature he hacked up. All told, he spends maybe half a day cooking up patches, navigating account creation, and submitting his work.
Usually, the patches will sit there for weeks or months before being reviewed. In most projects, if you don’t go onto a mailing list or IRC channel and ask the right guy to take the time to look at them, you can expect to wait. He has a backlog, gets lots of bugzilla nag mail already, and anyway, he’s working on a new feature he wants to get done this weekend between playing with the kids and doing the grocery shopping.
When they do get reviewed, the code base is likely to have shifted, so the patches don’t apply cleanly. Perhaps they don’t conform accurately to the coding conventions of the project. The feature, while useful, was done quickly (since it was only a minor part of a larger project), wasn’t accompanied by unit tests, and has a couple of issues that need resolving.
Of the four or five bug reports that our hacker created, one gets marked INVALID, another one is a DUPLICATE, and one patch gets applied and the bug fixed. The feature request status gets set to NEEDINFO, since there are some open issues to be addressed, but our hacker is now 6 months away from the code, 3 projects down the line, and has less time to write unit tests, review and resubmit the code.
Maybe he’ll do it anyway – and maybe he won’t.
In fact, I would say that the vast majority of the features people code up for free software projects never make it into an upstream bugzilla – developers are perfectly happy shipping a 10 year old version of GNU Kermit with hairy patches sticking out all over the place. And of those patches that do make it into an email or bugzilla, a small percentage ever make it into the upstream code base.
I would argue that when a project is strategic to a company product (as Lucene is to Alfresco), then the company has every interest in having someone who is regularly contributing to the project, who knows the key people in the community, and who is a trusted member of the community themselves. This ensures that your code gets the care and attention it deserves when submitted upstream, and helps reduce your maintenance cost in the long run (as well as giving you influence over a project you depend on).
All this is to say that reducing the argument to “throw code over wall bad, participate good” is slightly over-simplifying – in the case where the project is a core part of your business, I agree wholeheartedly. But if you’re using free software libraries as products, merely tweaking them to your needs, then the cost of participating outweighs the benefits in most cases. Reducing that cost by lowering the barrier to entry is key to developing a vibrant community. But increased availability and a very low barrier to entry also incur a cost on the community. Like most community-related issues, the balancing act is not an easy one to get right.
July 18, 2008
gnome, maemo, marketing, work
I only just got home Friday evening, and after a weekend with the family, and 3 working days this week, I’m off again to OSCon, for the first time. I have a feeling I’ll be seeing some familiar faces. I’m currently posting this blog entry (which I wrote at the airport) in room 640 of the Doubletree (anyone who’s reading this & wants to grab a bite tonight, ring me at +33 677 019 213).
On Saturday and Sunday, I’ll be helping run the FLOSS Foundations meeting, then on Monday I’ll be helping out a bit with the Open Mobile Exchange day. I may take Tuesday as a relaxing/working day before the conference proper, where I’ll be giving the State of GNOME lightning talk on Thursday morning.
My main reason for going to OSCon, though, is to meet people who might be interested in availing of my consulting services. As someone who’s recently set up shop, but who has worked with free software communities for many years, I feel I’m well positioned to help companies save money by working better with communities they depend on. It benefits everyone.
My services range from presentations to managers & directors, through training of developers in the dynamics of a given community and how best to work with it, to on-site consulting on specific issues like free software governance, community management and integrating free software best practises into your development team.
The transition from closed shop to free software participant is complex, and often underestimated. I can help make it easier.
I don’t much like banging my own drum on my syndicated blog, but I figure that I don’t do it very often, so… if you need someone like this, drop me a line.
June 12, 2008
A few weeks ago, a new MediaWiki instance was installed by Ferenc Szekely of the maemo team, in response to numerous requests. Many people were not fans of Midgard’s user interface for the wiki, and missed a number of features available in other wiki software. And so we have been undertaking the second major migration for the maemo wiki (we previously moved from MoinMoin).
Over the past couple of weeks or so, I’ve been organising a small team which has moved over content from the old wiki, has worked on stylesheets, templates and categories which make sense, and we’re now ready to take the wraps off! Head on over to http://wiki.maemo.org and have a look.
This is not a finished work, like most wikis. Content in the “Midgard wiki” category needs review and editing, and a lot of the official documentation of maemo will be wiki-ised over the coming weeks and months. Some content still needs migration and categorisation. But we have a decent start, an editing team, and the new wiki has already been baptised with its first couple of pages with over 100 edits: 100Days and 2010 Agenda.
Credit where credit’s due! The following people have been outstanding throughout the migration: GeneralAntilles, jaffa, Niels Breet, ludovicus, trickie and Navi. I’m probably leaving lots of people out but these guys have made their mark with me over the past couple of weeks.
June 5, 2008
There’s a curious phenomenon I’ve noticed when I attend conferences. During the conference, the energy of everyone around me pumps me up & keeps me going. I love meeting people & talking to them, hearing about the cool stuff they’re up to and making contacts for new projects. This was true of meeting the maemo guys last week – we had a great evening talking about tablets, maemo, and life in general over mugs of good German beer.
But after the conference, it’s like you’ve been on some kind of artificial high of late nights, early mornings, high concentration & caffeine charged conversations – and you get on the plane to come home, and you just deflate.
It always seems to take me about the same length of time to recover from a conference as I spent at the conference. Which meant that I was still in a funk on Monday, when I decided that the first thing I had to do was get rid of some Stuff.
Paperwork had built up over the past month or so, and I had bits & pieces all over my desk, in stacks on the floor, in drawers… From what I can tell from Getting Things Done (I’m about half way through! yay!), this is a pretty normal situation – something comes into your hand that you can’t forget, but that you can’t handle right away, so you add it to the top of a bunch of other stuff which you couldn’t do straight away, but which you couldn’t forget, and there it lies until you’ve forgotten it.
And so Monday and Tuesday, I spent ages working through email, expenses, receipts, forms for insurance, tax returns and all of the other things that had been building up. At the end of it, my life feels a bit cleaner, but I’ve got the impression I lost half the week.
I can’t wait until the magic happens and my office space suddenly becomes magically organised so that filing becomes fun and I always have a list of things I can do, regardless of what I’m up to at the time. That’ll be fun (I’m not holding my breath).