February 7, 2011
community, freesoftware, General, gimp, gnome, maemo, openwengo, work
One of the most important documents a project can have is some kind of elaboration of what the maintainers want to see happen in the future. This is the concrete expression of the project vision – it allows people to adhere to the vision, and gives them the opportunity to contribute to its realisation. This is the document I’ll be calling a roadmap.
Sometimes the word “roadmap” is used to talk about other things, like branching strategies and release schedules. To me, a release schedule and a roadmap are related, but different documents. Releasing is about ensuring users get to use what you make. The roadmap is your guiding light, the beacon at the end of the road that lets you know what you’re making, and why.
Too many projects fall into the trap of having occasional roadmap planning processes, and then posting a mighty document which stays, unchanged, until the next time the planning process gets done. Roadmaps like these end up being historical documents – a shining example of how aspirations get lost along the way of product development.
Other projects are under-ambitious. Either there is no roadmap at all, in which case the business as usual of making software takes over – developers are interrupt-driven, fixing bugs, taking care of user requests, and never taking a step back to look at the bigger picture. Or your roadmap is something you use to track tasks which are already underway, a list of the features which developers are working on right now. It’s like walking in a forest at night with a head-light – you are always looking at your feet avoiding tree-roots, yet you have no idea where you’re going.
When we drew up the roadmap for the GIMP for versions 2.0 and 2.2 in 2003, we committed some of these mistakes. By observing some projects like Inkscape (which has a history of excellent roadmapping) and learning from our mistakes, I came up with a different method which we applied to the WengoPhone from OpenWengo in 2006, and which served us well (until the project became QuteCom, at least). Here are some of the techniques I learned, which I hope will be useful to others.
Time or features?
One question with roadmaps is whether hitting a date for release should be included as an objective. Even though I’ve said that release plans and roadmaps are different documents, I think it is important to set realistic target dates on way-points. Having a calendar in front of you allows you to keep people focussed on the path, and avoid falling into the trap of implementing one small feature that isn’t part of your release criteria. Pure time-based releases, with no features associated, don’t quite work either. The end result is often quite tepid, a product of the release process rather than any design by a core team.
I like Joel’s scheduling technique: “If you have a bunch of wood blocks, and you can’t fit them into a box, you have two choices: get a bigger box, or remove some blocks.” That is, you can mix a time-based and feature-based schedule. You plan features, giving each one a priority. You start at the top and work your way down the list. At the feature freeze date, you run a project review. If a feature is finished, or will be finished (at a sufficient quality level) in time for release, it’s in. If it won’t realistically be finished in time for the release date, it’s bumped. That way, you stick to your schedule (mostly), and there is a motivation to start working on the biggest wood blocks (the most important features) first.
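Joel's wood-blocks triage boils down to a few lines of logic. A minimal sketch of the freeze-day review (the `Feature` fields, the feature names and the dates are all made up for illustration):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Feature:
    name: str
    priority: int         # 1 = biggest wood block; work top-down
    projected_done: date  # realistic estimate made at the freeze review

def triage(features, release_date):
    """At feature freeze: features that will be finished (at sufficient
    quality) in time for the release are in; the rest get bumped."""
    kept, bumped = [], []
    for f in sorted(features, key=lambda f: f.priority):
        (kept if f.projected_done <= release_date else bumped).append(f)
    return kept, bumped

features = [
    Feature("tabbed editing", 1, date(2011, 3, 1)),
    Feature("plugin API", 2, date(2011, 5, 15)),
]
kept, bumped = triage(features, release_date=date(2011, 4, 1))
# "tabbed editing" makes the release; "plugin API" is bumped, not waited for
```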
A recent article on lessons learned over years of Bugzilla development by Max Kanat-Alexander made an interesting suggestion which makes a lot of sense to me – at the point you decide to feature freeze and bump features, it may be better to create a release branch for stabilisation work, and allow the trunk to continue in active development. The potential cost of this is a duplication of work merging unfinished features and bug fixes into both branches, the advantage is it allows someone to continue working on a bumped feature while the team as a whole works towards the stable release.
Near term, mid term, long term
The Inkscape roadmap from 2005 is a thing of beauty. The roadmap beautifully mixes long-term goals with short-term planning. Each release has a by-line, a set of one or two things which are the main focus of the release. Some releases are purely focussed on quality. Others include important features. The whole thing feels planned. There is a vision.
But as you come closer and closer to the current work, the plans get broken down and itemised further. The BHAGs of a release two years out get turned into a list of sub-features when it’s one year away, and each of those features gets broken down further as a developer starts planning and working on it.
The fractal geometer in me identifies this as a scaling phenomenon – coding software is like zooming in to a coastline and measuring its length. The value you get when measuring with a 1km long ruler is not the same as with a 1m ruler. And as you get closer and closer to writing code, you also need to break down bigger tasks into smaller tasks, and smaller tasks into object design, then coding the actual objects and methods. Giving your roadmap this sense of scope allows you to look up and see in the distance every now and again.
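The scaling phenomenon has a well-known formula behind it: Richardson's observation that the measured length of a fractal coastline goes as `ruler ** (1 - D)` for a coast of fractal dimension D. A toy illustration (the dimension 1.26 here is the Koch snowflake's, chosen purely for the example):

```python
def measured_length(ruler_km, dimension=1.26):
    """Richardson's relation: measured coastline length scales as
    ruler ** (1 - D) for a coastline of fractal dimension D."""
    return ruler_km ** (1 - dimension)

# A 1 km ruler and a 1 m (0.001 km) ruler give very different answers:
ratio = measured_length(0.001) / measured_length(1.0)
# the 1 m ruler measures the coast as roughly 6x longer
```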
Keep it accurate
A roadmap is a living document. The best reason to go into no detail at all for future releases beyond specifying a theme is that you have no idea yet how long things will take to do when you get there. If you load up the next version with features, you’re probably aiming for a long death-march in the project team.
The inaccurate roadmap is an object of ridicule, and a motivation killer. If it becomes clear that you’re not going to make a date, change the date (and all the other dates in consequence). That might also be a sign that the team has over-committed for the release, and an opportunity to bump some features.
Leave some empty seats
In community projects, new contributors often arrive who would like to work on features, but they don’t know where to start. There is an established core team claiming features for the next release left & right, and the new guy doesn’t know what to do. “Fix some bugs” or “do some documentation” are common answers for many projects including GNOME (with the gnome-love keyword in Bugzilla) and LibreOffice (with the easy hacks list). Indeed, these do allow you to get to know the project.
But, as has often been said, developers like to develop features, and sometimes it can be really hard to tell which features are important to the core team. This is especially true with commercial software developers. The roadmap can help.
In any given release, you can include some high priority features – stuff that you would love to see happen – explicitly marked as “not taken by the core team”. It should be clear that patches of a sufficiently high standard implementing the feature would be gratefully accepted. This won’t automatically change a new developer into a coding ninja, nor will it prevent an ambitious hacker from biting off more than he can chew, but it will give experienced developers an easy way to prove themselves and earn their place in the core team, and it will also provide some great opportunities for mentoring programs like the Google Summer of Code.
The Subversion roadmap, recently updated by the core team, is another example of best practice in this area. In addition to a mixed features & time based release cycle, they maintain a roadmap which has key goals for a release, but also includes a separate list of high priority features.
The end result: Visibility
The end result of a good roadmap process is that your users know where they stand, more or less, at any given time. Your developers know where you want to take the project, and can see opportunities to contribute. Your core team knows what the release criteria for the next release are, and you have agreed together mid-term and long-term goals for the project that express your common vision. As maintainer, you have a powerful tool to explain your decisions and align your community around your ideas. A good roadmap is the fertile soil on which your developer community will grow.
November 23, 2010
Novell will be bought by a North American group called Attachmate that appears to be made up of financiers buying assets as investments. As someone who has seen one acquisition of a company by financiers up-close, and an acquisition of legacy products, where they languished as cash cows, I feel partly qualified to guess what might happen to Novell post acquisition – although I am often wrong about these things, so take all this with a pinch of salt.
The first hint is that Novell is being split into two groups – traditional Novell activities (mostly identity, security, systems & resource management) and Linux business (Suse Linux – presumably including things like Mono, the desktop group, OBS, Suse Studio and other related interesting stuff). In summary, looking at last quarter’s results (PDF link), old stuff that still generates a lot of revenue but little growth, and new & growing business, which just recently became profitable.
If you are a Dark Hand type of guy, the financier who wants a return on investment and doesn’t really care about innovation or changing the world, then your goal is to buy assets, perhaps sell a subsidiary or two to recoup some of the costs of the deal, perhaps change the management team, and keep the profitable business for a 5 year horizon before selling it on to make a profit. Your anticipated ROI for this type of deal would need to be around 8% to 10% per year.
So you sell on some patents & copyrights that you’re not really interested in (presumably with a free license to use said patents for a period of time), you split your business up into the cash cow moneymaker (Old Novell) and the new, growing business that can sell at a high valuation relative to its earnings (Suse Linux), and you line up a buyer for the speculative Linux business. With $450m for patents and perhaps $800m for the Linux business, you keep the old, profitable business with limited growth potential, but with regular earnings (~$600m for the last financial year, as far as I can tell, in legacy revenues, with an operating net margin of >10%) and $300m cash on hand (after subtracting liabilities & deferred revenues from cash on hand).
Let’s do the sums, then: let’s say, for argument’s sake, that Suse ends up being worth $800m (not unreasonable given annual revenues in the $300m range, with great growth prospects). This represents probably a 3x valuation of (Suse + Ximian), given that Suse was bought in 2003 for $210m – certainly not unreasonable given the growth of Suse and Linux since then; this might even be on the low side. Add in the $450m for patents, and the $300m in cash assets that they’re getting as part of the deal.
That means Attachmate will be getting all of Novell’s legacy business for $650m, around one year’s revenue, with an annual return of >10% on revenues. Presumably, there will be some cost cutting to increase that margin further, and some growth will be expected, so I’m sure that Attachmate are confident that they will find a buyer for Novell after a few years for around the same price, giving them that 50% return in around 4 or 5 years.
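For what it’s worth, the back-of-the-envelope arithmetic checks out, assuming the widely reported headline price of roughly $2.2bn for the deal as a whole (a number I’m bringing in myself, so treat it as an assumption):

```python
# All figures in $m, taken from the estimates above.
deal_price   = 2200  # assumed headline price for all of Novell
patents      = 450   # patent sale
suse_resale  = 800   # speculative valuation of the Linux business
cash_on_hand = 300   # net cash acquired with the company

legacy_cost = deal_price - patents - suse_resale - cash_on_hand
print(legacy_cost)  # 650 – roughly one year of legacy revenue
```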
I’m sure that some people here more familiar with the financial markets, SEC filings and annual reports, and generally “the way things work” will point out the half-dozen flaws in my thinking here, but this is what I expect to happen – a lot of people in non-core areas will be laid off in an effort to reduce costs and “streamline” the company (ie. make it a more attractive acquisition target), Suse will be sold on, and Novell will be kept as a cash cow.
To all the friends I have working with Novell, I wish you well. Acquisitions are uncertain times, and morale sapping at the best of times. The dust will settle soon.
July 28, 2010
(Reposted from Neary Consulting)
Today at GUADEC I presented the results (Slides are now on slideshare) of the GNOME Census, a project we have been working on for a while. For as long as I have been involved in GNOME, press, analysts, potential partners and advisory board members have been asking us: How big is GNOME? How many paid developers are there? Who writes all this software, and why?
By looking at the modules in the GNOME 2.30 release, made last March, we aim to answer many of those questions, and give deeper insight into the motivations of participants in the project.
The GNOME heartbeat - pre-release peaks and GUADEC boosts
Here are our key findings:
- GNOME has a rhythm – there is a measurable increase in activity before release time, and after the annual GNOME conference GUADEC
- While over 70% of GNOME developers identify themselves as volunteers, over 70% of the commits to the GNOME releases are made by paid contributors
- Red Hat are the biggest contributor to the GNOME project and its core dependencies. Red Hat employees have made almost 17% of all commits we measured, and 11 of the top 20 GNOME committers of all time are current or past Red Hat employees. Novell and Collabora are also on the podium.
- A number of top company contributors are consultancy/services companies specialising in the GNOME platform – Collabora, CodeThink, Openismus, Lanedo and Fluendo are in the top 20 companies. As many of these companies grew initially through work on Maemo, this is a sign of the success of Nokia’s strategy around the GNOME stack.
One of the interesting things that we have done for the census is to look at who is maintaining modules by looking at commits over the past two years, and use this data to identify areas of the platform which see lots of collaboration, areas where the maintenance burden is left to volunteers, and areas where individual companies assume most of the maintenance burden.
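The kind of commit attribution described here can be roughly approximated from a module’s history by mapping committer email domains to employers. A sketch, not the actual census methodology – the domain table is illustrative, and in practice many paid developers commit from personal addresses, so real attribution needs a lot of manual curation:

```python
import subprocess
from collections import Counter

# Illustrative employer map – the real exercise needed manual cleaning.
DOMAIN_TO_COMPANY = {
    "redhat.com": "Red Hat",
    "novell.com": "Novell",
    "collabora.co.uk": "Collabora",
}

def attribute(emails):
    """Tally commits per employer from committer email addresses."""
    return Counter(
        DOMAIN_TO_COMPANY.get(e.rsplit("@", 1)[-1], "volunteer/other")
        for e in emails
    )

def commits_by_company(repo_path, since="2 years ago"):
    """Pull recent committer emails out of a git checkout and tally them."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--format=%ae"],
        capture_output=True, text=True, check=True,
    )
    return attribute(log.stdout.splitlines())
```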
There are a number of modules in the platform which see a considerable amount of co-opetition, including Evolution, Evolution Data Server, DBus and GStreamer. Most modules in the platform, however, are either maintained to a large extent by volunteer developers, or see the vast majority of their contributions from one company.
I see this information being useful for companies interested in using the GNOME platform for their products, companies seeking custom application development, potential large-scale customers of desktop Linux or customers buying high-level support who want to know who employs more module maintainers or committers to the project.
- The GNOME maintenance map, with modules coloured according to the company maintaining them
Update: Two significant omissions in the maintenance map were pointed out to me. After correctly associating a number of committers with a company, Lanedo is responsible for 16.5% of the commits in GTK+ over the past two years, and volunteers are also responsible for at least 17%. Red Hat are still the largest contributor, with 32% of all commits to the module. libsoup is maintained by Dan Winship, who left Novell to join Red Hat in 2007, where he developed and maintains the module.
Update 2: As I announced in this post, the report is now available as a free download via neary-consulting.com licensed as Creative Commons by-sa 3.0
July 19, 2010
Open core, Open core, more Open core… the debate goes on and on, with Monty the latest to weigh in.
When you get down to it this is a fight over branding – which is why the issue is so important to the OSI folks (who are all about the brand). I don’t actually care that much how SugarCRM, Jahia, Alfresco et al make the software they sell to their customers. As a customer I’m asking a whole different set of questions to “is this product open source?” I want to know how good the service and support is, how good the product is, and above all, does it solve the problem I have at a price point I’m comfortable with. The license doesn’t enter into consideration.
So if that’s the case (and I believe it is), why the fighting? Because of the Open Source brand, and all the warm-and-fuzzies it procures. “Open solutions” are the flavour of the decade, and as a small ISV building a global brand, being known as Open Source is a positive marketing attribute. The only problem is that the warm-and-fuzzies implied by Open Source – freedom to change supplier or improve the software, freedom to try the software before purchasing, the existence of a diverse community of people with knowledge, skills and willingness to help a user in difficulty – don’t exist in the Open Core world. The problem is that for the most part, the Open Core which you can obtain under the OSI-approved license is not that useful.
Yesterday on Twitter, I said “Open Core is annoying because the “open core” bit is pretty much useless. It doesn’t do exactly what it says on the tin.”
Now, I wasn’t expecting this to be particularly controversial, but I got some push-back on this. Dan Fabulich replied “Ridiculous. Like the free version of MySQL is useless?” Which leads me to think of Inigo Montoya on top of the Cliffs of Insanity turning to Vizzini and saying “You keep using that word. I do not think it means what you think it means.”
With all this talk of Open Core, clearly some confusion has crept in. Perhaps it’s on my part. So allow me to elaborate what I understand by “Open Core”.
First, companies can’t be Open Core. Products are Open Core. So whereas Monty considers that from 2006 on, MySQL was not an “Open Source company”, I would contend that MySQL Server has always been, and continues to be, Free Software, and an Open Source product. That is, not Open Core.
Open Core for me means you provide a free software product, improve it, and don’t release the improvements under the free software licence. In my mind, Mac OS X is not “Open Core” just because it’s based on the NetBSD kernel, it is proprietary software.
Perhaps it would be useful to give some examples of what is Open Core:
- Jahia is Open Core – significant features and stabilisation work present in the Enterprise Edition are not available at all in the Community Edition
- SugarCRM is obviously Open Core. Key features related to reporting, workflow, administration and more are only present in the commercial editions
- JasperSoft BI Suite is Open Core. Lots of useful features are only available to people buying the product.
The key here is that support contracts and extra features are only available if you also pay licensing fees. To take the oft-cited example of the InnoDB hot back-up tool for MySQL: you can purchase it and use it with the GPL-licensed MySQL Server.
This is why I say that Open Core products “don’t do exactly what it says on the tin” – the features you see advertised on the project’s website are not available to you along with software freedom.
I have talked to companies who deliberately avoid adding “spit & polish” to the community edition to encourage people to trade up for things like better documentation, attractive templates and easy installation – and don’t provide an easy way for the community edition users to share their own work. Other products have an open source engine that doesn’t do much except sit there, and all useful functionality is available as paid modules. Yes, a persistent, skilled, patient developer can take the Open Source version of the product and make it do something useful. For the most part, however, if you want to actually use the software without becoming an expert in its internals, you’ll need some of the commercial upgrades.
There is another name for this which is even more pejorative, Crippleware. Deliberately hobbled software. And that’s what I think gets people riled up – if you’re releasing something as free software, then there should at least be the pretence that you are giving the community the opportunity to fend for itself – even if that is by providing an “unofficial” git tree where the community can code up GPL features competing with your commercial offering, or a nice forum for people to share templates, themes and extensions and fend for themselves. But what gets people riled is hearing a company call themselves “an Open Source company” when most of the users of their “open source” product do not have software freedom. It’s disingenuous, and it is indeed brand dilution.
That said, let me repeat – I have no problem with companies doing this. I have no problem with them advertising their GPL-licensed stuff as Open Source. I would just like to see more of these companies providing a little bit of independence and autonomy to their user community. But then, that’s potentially not in their long-term interest – even if it is difficult to imagine a situation where the community-maintained version outstrips the “Enterprise” edition in features and stability.
July 2, 2010
No, not the film.
While I’ve been back in Ireland it’s been impossible to avoid “Discover Ireland” promoting the country as a tourist destination for locals, with ads backed by an infectiously catchy tune.
I’d never heard the group before, so I went hunting and found Heathers, an Irish duo of teenage girls who seem just a little shy, but when they start singing they belt it out. Very simple – guitar + 2 female voices, with an atypical sound and a heavy Irish accent. They’re pretty great. The joys of the interweb and youth who think differently about copyright – there is *lots* of live material from these girls on youtube. They’ve already been touring the US and I suspect that they will make a name for themselves. Well worth discovering.
June 16, 2010
community, freesoftware, General, humour, maemo
Who knew that educating people in simple sabotage (defined as sabotage not requiring in-depth training or materials) could have so much in common with communicating free software values? I read the OSS Simple Sabotage Field Manual (pdf) which has been doing the rounds of management and security blogs recently, and one article on “motivating saboteurs” caught my eye enough to share:
- The ordinary citizen very probably has no immediate personal motive for committing simple sabotage. Instead, he must be made to anticipate indirect personal gain, such as might come with enemy evacuation or destruction of the ruling government group. Gains should be stated as specifically as possible for the area addressed: simple sabotage will hasten the day when Commissioner X and his deputies Y and Z will be thrown out, when particularly obnoxious decrees and restrictions will be abolished, when food will arrive, and so on. Abstract verbalizations about personal liberty, freedom of the press, and so on, will not be convincing in most parts of the world. In many areas they will not even be comprehensible.
- Since the effect of his own acts is limited, the saboteur may become discouraged unless he feels that he is a member of a large, though unseen, group of saboteurs operating against the enemy or the government of his own country and elsewhere. This can be conveyed indirectly: suggestions which he reads and hears can include observations that a particular technique has been successful in this or that district. Even if the technique is not applicable to his surroundings, another’s success will encourage him to attempt similar acts. It also can be conveyed directly: statements praising the effectiveness of simple sabotage can be contrived which will be published by white radio, freedom stations, and the subversive press. Estimates of the proportion of the population engaged in sabotage can be disseminated. Instances of successful sabotage already are being broadcast by white radio and freedom stations, and this should be continued and expanded where compatible with security.
- More important than (a) or (b) would be to create a situation in which the citizen-saboteur acquires a sense of responsibility and begins to educate others in simple sabotage.
Now doesn’t that sound familiar? Trying to convince people that free software is good for them because of the freedom doesn’t work directly – you need to tie the values of that freedom to something which is useful to them on a personal level.
“You get security fixes better because people can read the code”, “You have a wide range of support options for Linux because it’s free software and anyone can understand it”, “Sun may have been bought by Oracle, but you can continue to use the same products because anyone can modify the code, so others have taken up the maintenance, support and development burden”, and so on.
Providing (custom-tailored) concrete benefits which come from freedom is the way to motivate people to value that freedom.
In addition, the point on motivation struck a chord – you need to make people feel like they belong, that their work means something, that they’re not alone and their effort counts, or they will become discouraged. A major job in any project is to make everyone feel like they’re driving towards a goal they have personally bought into.
Finally, you will only have succeeded when you have sufficiently empowered a saboteur to the point where they become an advocate themselves, and start training others in the fine arts – and this is a major challenge for free software projects too, where we often see people with the willingness to do stuff, and have some difficulty getting them to the point where they have assimilated the project culture and are recruiting and empowering new contributors.
For those who haven’t read it yet, the document is well worth a look, especially the section on “General Interference with Organisations and Production”, which reads like a litany of common anti-patterns present in most large organisations; and if you never knew how to start a fire in a warehouse using a slow fuse made out of rope and grease, here’s your chance to find out.
June 11, 2010
Last chance for early bird rate for GNOME training
Registration for the GNOME Developer Training courses at GUADEC is still open on the GUADEC registration site – and the early bird rate of €1200 is available for all orders received until next Tuesday June 15th. So if you’ve been hesitating or delaying signing up, the time is now!
As a reminder of what’s included in the package: you will get lunch and refreshments on both days of the training course, a full professional registration to GUADEC worth €250, printed materials related to the course to take home with you and spread the knowledge, and two full days of intense Linux development training with a focus on GNOME. There are four half-day modules, covering Linux development, testing and debugging tools, the social side of contributing to free software projects, an overview of the GNOME and freedesktop.org platform, and a hands-on workshop where you get to put what you learn into practice.
This will be a great opportunity to give a boost to your entire team: learning developer tips & tricks on being a productive Linux developer, learning tools and tips to improve performance and memory usage of your applications, and how to get your code upstream more efficiently & reduce maintenance costs.
Registration is open, and we still have a few places left!
March 23, 2010
community, General, maemo
The voting tokens have just been sent out for the Q1 2010 Maemo Community Council elections.
I already have over 100 bounced emails, so if you think that you should have a vote and you have not received an email with a voting token yet, please send me an email or leave a comment, I will look up your Maemo username and send you on the voting token/email combo we have on record so that you can vote.
Voting runs until March 30th – you can find more information about the election and the council in the Maemo wiki.
December 30, 2009
As a long time free software user, every time I buy hardware I have the same decision paralysis. Will the graphics card be fully supported? Are the drivers stable? Will the on-board wifi, sound card, and the built-in webcam Just Work? Will they work if I spend hours hunting down drivers and installing kernel modules (and remembering to reinstall them every time my distro upgrades the kernel)? Or will they stay broken for at least 6 months, until the next version of the OS is released?
I’ve gone through this dance many times in the past – with an Intel 915 graphics chip, and an Nvidia chip before that, with multiple webcams, USB headsets, a scanner, a graphics tablet, digital cameras and sound chips.
Thankfully, problems with digital cameras and sound chips seem to be more or less a thing of the past, except for those USB headsets, but there are still issues with webcams, scanners, tablets and wifi chips. And I keep hearing that support for graphics chips sucks for both ATI and Nvidia, making me wary of both (and thus about 80% of computers on the market).
So when I go shopping for hardware, it sucks to be me. I haven’t tested all this stuff, and I don’t know how much of it works perfectly out of the box. What I need is to decide what software I’m going to put on it, and have hardware recommendations per price point from the software distributor, so that I can just go to my local Surcouf, FNAC or whatever, and just look at one label & say “That’s only 90% supported, no custom from me!”
Does one exist already? I really liked the Samsung NC20 page I found on the Ubuntu wiki, but I would have preferred to see it before buying. The laptop testing team page on Ubuntu is along the lines of what I want, but it doesn’t take a position on any of the hardware, which is what I need. I want Canonical to say “buy this one, it’s great” or “don’t buy that one, unless you’re prepared to spend 2 days messing with drivers”. I know this might piss off some partners, but it’d be really helpful to me. And isn’t that more important?
What I’d like to see is laptops ordered by level of support out-of-box & after fiddling, on the latest version of Ubuntu. So the NC20, for example, would get a 60% “Out of the box” rating (because the video card just doesn’t work at all), and a 90% “after fiddling” rating (because of the CPU frequency issue, lack of support for 3d in graphics driver, and graphics driver instability).
Anyone able to point me to a Linux hardware buyer’s guide that dates from 2009 that gives what I’m looking for?
December 23, 2009
community, freesoftware, General
Michael Meeks wrote a great piece on the consequences of copyright assignment on free software projects yesterday. He has a lot of experience in the area, and has gone from fervent advocate to something of an outspoken opponent of copyright assignment through his involvement in the OpenOffice.org project in recent years.
One of the things that Michael said in his piece is that commercial agreements with partners (resellers and redistributors), made possible by copyright assignment or sharing, can work against the core principles of free software. He cites some examples, but there are many ways that companies use their dominant position within the project:
- Vendor X agrees to commercially license their software, on condition that any changes that the licensee makes to the software in the future be submitted only to the vendor. By removing the right to redistribute changes from the licensee, the vendor prevents the licensee from participating in any forks of the project. SugarCRM’s EULA contains a no-forking clause, for example. Ironically, it also contains a “standard” non-reverse-engineering clause, so you may look at the source code before buying the enterprise version to see how it works, but once you are an enterprise customer, that’s off the table.
- A vendor ties an official partner programme, support and commercial licensing together. Matt Asay has described the Alfresco partner programme, which contains these restrictions. If you want to be an official Alfresco reseller, you must agree to sell only commercially licensed Alfresco, and you must get the client to commit to a subscription before starting the support contract. You are free not to be an official Alfresco reseller, but in this case, you may not resell commercial licenses for Alfresco, or distribute any commercial add-ons.
- Non-compete clauses can require commercial licensees not only not to contribute to any fork of the vendor’s product, but also not to contribute to any competitor of the product. While BitKeeper was not a free software product, its licensing agreement contained many of the worst excesses you can find in vendor licenses, to the point where employees of clients were asked to stop working (in their free time) on competing free software.
- Proprietary licenses can change under your feet. There are often clauses that allow a vendor to update the licensing agreement at will, and apply it retro-actively to existing clients. BitKeeper did this.
- Non-disclosure rules can prevent you from publishing performance tests, for example, as in Alfresco’s trial license. Or even disclosing the terms of your agreement, as Michael suggested, meaning that you can’t even tell people what you may and may not do in the context of the proprietary agreement.
Proprietary software agreements are simply contracts between the vendor and the user, which set out the terms by which both parties agree that the user may use the vendor’s software, and gets some value off the vendor.
Contracts are a part of life. When I rent an office, I have obligations, and so does the landlord. I’m a grown-up and I can agree to whatever I want, if I’m also getting what I need from the deal. But contracts also have victims. As a community member, if you (as a user) sign a contract that says you may not participate in the community, you’re hurting the rest of the community. And if you (as a vendor) force your clients not to participate in the community, or to do so on different terms to everyone else, then you’re hurting the community too.
Since you can only do so much to hurt a community before you don’t have one, this is why I consider copyright assignment a key barrier to entry to community building. And in a vicious circle, because there is little broad community activity around most single-vendor free software projects, those vendors feel vindicated by their copyright assignment decisions, and have little reason to invest heavily in community building – since doing so gives a very low return on investment.
It is possible to build certain types of communities, even with copyright assignment – through a modular architecture which allows anyone to build plug-ins or add-ons, for example. OpenBravo has built a large community of module developers this way, but has seen little contribution in the core product. And perhaps building a broad and deep group of core contributors is not important to your business model or investors as a company – and that’s fine. The only point I’m making is that you can’t have your cake and eat it. It’s a balancing act between building community and maintaining control.