Christmas wish: Distro hardware buyer’s guide

freesoftware, General 23 Comments

As a long-time free software user, every time I buy hardware I have the same decision paralysis. Will the graphics card be fully supported? Are the drivers stable? Will the on-board wifi, sound card, and the built-in webcam Just Work? Will they work if I spend hours hunting down drivers and installing kernel modules (and remembering to reinstall them every time my distro upgrades the kernel)? Or will they stay broken for at least 6 months, until the next version of the OS is released?

I’ve gone through this dance many times in the past – with an Intel 915 graphics chip, and an Nvidia chip before that, with multiple webcams, USB headsets, a scanner, a graphics tablet, digital cameras and sound chips.

Thankfully, problems with digital cameras and sound chips seem to be more or less a thing of the past (those USB headsets excepted), but there are still issues with webcams, scanners, tablets and wifi chips. And I keep hearing that support for graphics chips sucks for both ATI and Nvidia, making me wary of both (and thus of about 80% of the computers on the market).

So when I go shopping for hardware, it sucks to be me. I haven’t tested all this stuff, and I don’t know how much of it works perfectly out of the box. What I need is to decide what software I’m going to put on the machine, and then get hardware recommendations per price point from the software distributor, so that I can just go to my local Surcouf, FNAC or wherever, look at one label and say “That’s only 90% supported, no custom from me!”

Does one exist already? I really liked the Samsung NC20 page I found on the Ubuntu wiki, but I would have preferred to see it before buying. The laptop testing team page on Ubuntu is along the lines of what I want, but it doesn’t take a position on any of the hardware, which is what I need. I want Canonical to say “buy this one, it’s great” or “don’t buy that one, unless you’re prepared to spend 2 days messing with drivers”. I know this might piss off some partners, but it’d be really helpful to me. And isn’t that more important?

What I’d like to see is laptops ordered by level of support out-of-box & after fiddling, on the latest version of Ubuntu. So the NC20, for example, would get a 60% “Out of the box” rating (because the video card just doesn’t work at all), and a 90% “after fiddling” rating (because of the CPU frequency issue, lack of support for 3d in graphics driver, and graphics driver instability).

Anyone able to point me to a Linux hardware buyer’s guide that dates from 2009 that gives what I’m looking for?

2009 blog links collection

community, freesoftware, gnome, maemo, marketing, running, work Comments Off on 2009 blog links collection

Looking back on 2009, I wrote quite a bit on here which I would like to keep and reference for the future.

This is a collection of my blog entries which gave, in my opinion, the most food for thought this year.

Free software business practice

Community dynamics and governance

Software licensing & other legal issues

Other general stuff

Happy Christmas everyone, and have a great 2010.

Side-effects of copyright assignment

community, freesoftware, General 1 Comment

Michael Meeks wrote a great piece on the consequences of copyright assignment on free software projects yesterday. He has a lot of experience in the area, and has gone from fervent advocate to something of an outspoken opponent of copyright assignment through his involvement in the OpenOffice.org project in recent years.

One of the things that Michael said in his piece is that commercial agreements with partners (resellers and redistributors), made possible by copyright assignment or sharing, can work against the core principles of free software. He cites some examples, but there are many ways that companies use their dominant position within the project:

  • Vendor X agrees to commercially license their software, on condition that any changes that the licensee makes to the software in the future be submitted only to the vendor. By removing the right to redistribute changes from the licensee, the vendor prevents the licensee from participating in any forks of the project. SugarCRM’s EULA contains a no-forking clause, for example. Ironically, it also contains a “standard” non-reverse-engineering clause, so you may look at the source code before buying the enterprise version to see how it works, but once you are an enterprise customer, that’s off the table.
  • A vendor ties an official partner programme, support and commercial licensing together. Matt Asay has described the Alfresco partner programme, which contains these restrictions. If you want to be an official Alfresco reseller, you must agree to sell only commercially licensed Alfresco, and you must get the client to commit to a subscription before starting the support contract. You are free not to be an official Alfresco reseller, but in that case, you may not resell commercial licenses for Alfresco, or distribute any commercial add-ons.
  • Non-compete clauses can require commercial licensees not to contribute to any fork of the vendor’s product, nor to any competitor of the product. While BitKeeper was not a free software product, its licensing agreement contains many of the worst excesses you can find in vendor licenses, to the point where employees of clients were asked to stop working (in their free time) on competing free software.
  • Proprietary licenses can change under your feet. There are often clauses that allow a vendor to update the licensing agreement at will, and apply it retroactively to existing clients. BitKeeper did this.
  • Non-disclosure rules can prevent you from publishing performance tests, for example, as in Alfresco’s trial license. Or even disclosing the terms of your agreement, as Michael suggested, meaning that you can’t even tell people what you may and may not do in the context of the proprietary agreement.

Proprietary software agreements are simply contracts between the vendor and the user, setting out the terms under which both parties agree the user may use the vendor’s software, and what value the user gets from the vendor in return.

Contracts are a part of life. When I rent an office, I have obligations, and so does the landlord. I’m a grown-up and I can agree to whatever I want, if I’m also getting what I need from the deal. But contracts also have victims. As a community member, if you (as a user) sign a contract that says you may not participate in the community, you’re hurting the rest of the community. And if you (as a vendor) force your clients not to participate in the community, or to do so on different terms to everyone else, then you’re hurting the community too.

You can only do so much to hurt a community before you don’t have one, which is why I consider copyright assignment a key barrier to entry for community building. And in a vicious circle, because there is little broad community activity around most single-vendor free software projects, those vendors feel vindicated by their copyright assignment decisions, and have little reason to invest heavily in community building – since doing so gives a very low return on investment.

It is possible to build certain types of communities even with copyright assignment – through a modular architecture which allows anyone to build plug-ins or add-ons, for example. OpenBravo has built a large community of module developers this way, but has seen little contribution to the core product. And perhaps building a broad and deep group of core contributors is not important to your business model or your investors – and that’s fine. The only point I’m making is that you can’t have your cake and eat it: it’s a balancing act between building community and maintaining control.

Save Sun jobs, let Oracle finalise acquisition

community, freesoftware, gnome 6 Comments

I’ve stayed quiet on this, listening from the sidelines, for a while now. But the blogs I read today from Monty and Mneptok led me to reply.

I was a long-time Sun shareholder (don’t laugh) but sold my shares as soon as the Oracle acquisition was announced. I was pretty ambivalent about the deal at the time, not really taking a position on either side of the fence, and happy just to think about possibilities.

But the latest lobbying of the EU to try to stymie the deal has ticked me off.

MySQL, through their choice of licensing and business model, set the rules of the game. Sun bought MySQL for lots of money. It’s their property now. It is, as Michael Meeks said, very bad form for the guy who set up the rules to complain that they’re not fair now.

So what will the effect of Oracle’s purchase of Sun Microsystems be?

First, Oracle offered $7.4bn for Sun, while Sun (over)paid $1bn for MySQL at the beginning of 2008. That means, being generous, that MySQL makes up about 13% of Sun. And the other 87% no-one is worried about, apparently.

Second, Sun is haemorrhaging money. This is not surprising; any time a company offers to buy another company, all the existing customers who were planning purchases wait until the acquisition is finished. They want to know what product lines are being maintained, and whether licensing, support or pricing conditions will change. In short, it is expected that the revenues of a company between the moment an acquisition is announced and the moment it is finalised go into the toilet.

Third, friends of mine work at Sun. I’m seeing them be miserable because they don’t know what role they have to play in the company. They don’t know if they’re going to have a job in a few months. And the chances of them having a job in a few months are inversely related to the amount of time until this acquisition is completed. Low employee morale during uncertainty is another inevitable consequence of the delay in the acquisition, and it’s one with longer term consequences for the health of the company than any short-term delayed purchase decisions.

The uncertainty is killing Sun, and it’s killing the projects that Sun owns – MySQL among them. One possible outcome of all of this is that Oracle comes back with a lower offer price after this all shakes out (because, frankly, Sun is worth less), the deal falls through, and Sun as a company ends up on life support.

I have read RMS’s letter to Neelie Kroes, and I respectfully disagree. The entire letter reads as an advocacy of dual licensing as the way to make money from a free software project – an astounding position given the signatories. To quote: “As only the original rights holder can sell commercial licenses, no new forked version of the code will have the ability to practice the parallel licensing approach, and will not easily generate the resources to support continued development of the MySQL platform.”

I had to check twice to ensure that the thing was indeed signed by Richard Matthew Stallman, and not someone else with the same initials.

MySQL is available under the GPL v2, a well understood licence. Oracle will be free to take it closed-source only. They will be free to change the licence (perhaps even to GPL v3). They will even be free to kill development of the project altogether. Does this put companies like Monty Program at a disadvantage compared to Oracle? Perhaps. Is that disadvantage insurmountable? Not at all. MariaDB and Drizzle have a great chance of succeeding in the same way MySQL did – by disrupting the database market.

The whole thing smells to me like a double standard – it’s OK to have certain licensing policies if you’re friendly, but not if you aren’t. Luis Villa set me straight on this point a few years back, and it has stuck with me: “what if the corporate winds change? […] At that point, all the community has is the license, and [the company’s] licensing choices”. You trust the license, and the licensing choices. And at this point, I’m more concerned about the jobs of the people working at Sun, and the future of Sun owned projects, than I am about what Oracle will or won’t do with/to MySQL.

Ubuntu Karmic and external displays

community, freesoftware, gnome 11 Comments

It was with some trepidation that I plugged an external monitor into my laptop today to test how Ubuntu 9.10 handles external displays. In my last three upgrades the behaviour has changed, and I’ve ended up on more than one occasion in front of a group telling them I’d get started in just a minute…

But yesterday, when I plugged in an external CRT monitor to see how things would react ahead of a training course I was giving this morning, I was pleasantly surprised! The new screen was automatically added to the right side of my existing screen to make a large virtual desktop. When I opened display preferences, mirroring the screens worked perfectly. When I unplugged the CRT, the desktop degraded gracefully – nothing froze or crashed, I didn’t get a reboot, and all the applications which were displaying on the external screen were seamlessly displayed on my laptop display. Bliss! Everything worked just as I expected it to.

So kudos to the Ubuntu integrators, and the Xorg and GNOME developers, and especially to the developers working on the Intel X drivers, for making me smile yesterday. You have given me hope that this year I will attend at least one tech conference where no Linux user has trouble with the overhead projector.

Update: I meant Karmic Koala, Ubuntu 9.10, not Jaunty. Thanks to Marius Gedimas for pointing that out.

The trough of disillusionment for Ubuntu?

community, freesoftware 13 Comments

Reading this blog entry on Linux Magazine, the thought occurred to me that Ubuntu is making its way nicely along the path that new projects have travelled for many years. It is at about the same place Red Hat was around the time of Red Hat 7.

The Hype Cycle describes the way that new technologies and projects are perceived over time, if they do a good job of handling themselves: they go from a technology trigger through inflated expectations, disillusionment and enlightenment, before arriving at “the plateau of productivity” – a state where there is no more hype and the new technology is simply a normal part of our lives.

Ubuntu arrived with a bang, and certainly has had inflated expectations over the past couple of years. And yet due to quality issues, it has recently been failing to meet those expectations, especially around upgrading from previous versions (by no means an easy problem to get right, don’t get me wrong). Many long-time Ubuntu users appear to be getting upset.

But then, you don’t get upset about things you don’t care about.

This disillusionment, if it doesn’t turn into resignation, could be a sign of health in the Ubuntu project and community – on condition that the lessons of quality are learned and put into practice. Certainly this is a drum that Mark Shuttleworth has been beating for some time now – but unfortunately it’s not as easy as asking upstream to get their act together in a Tom Sawyer community model. QA seems like an ideal opportunity for collaboration between distributions and upstream projects, as well as being the core activity of each individual distribution. Supplying quality is, after all, the market opportunity which Linux distributions base their business models on.

In any case, I for one am looking forward to the deflated expectations being met and exceeded in future releases, allowing us Ubuntu users to make it to the Plateau of Productivity as soon as possible.

Code:Free – a reminder that our software is for doing stuff

community, freesoftware, gimp, inkscape, scribus 5 Comments

I recently came across Code:Free, a webzine (made with Scribus) which showcases some great examples of art created with free software tools, along with tutorials on how to achieve some nice effects – it’s kind of a compilation of the best of Deviantart made with Free tools. Seeing Ton Roosendaal keynote the Maemo Summit last weekend was a reminder that the goal behind creating software is to have your users take it & do cool stuff with it.

The webzine itself is gorgeously laid out and the art in it is very good indeed. Congratulations to Chrisdesign (of gimpforums.de fame) on this great initiative, long may it continue!

Giving Great Presentations – speaker notes

community, freesoftware, General, maemo 5 Comments

Earlier today I gave a lightning talk on giving great presentations at the Maemo Summit. The response has been great, and here are the notes I wrote for the presentation, so that people can refer back to the advice when the time comes.

Giving Great Presentations

It was said that when Cicero finished speaking, people turned to each other and said “that was a great speech”. But when Demosthenes finished speaking, people said “we must march”.

Throughout history, great orators have changed the world. Entire movements can grow from the powerful communication of an idea.

Yet most technical presentations are horrible. Slides filled with bullet points, and monotone delivery. How many people here have asked themselves at one stage or another during a presentation, “why am I here?”

You might not be Obama, but you can still give better presentations. Here are some basic tips for improving. Nothing I’m going to say here is difficult, but there are no easy fixes either.

Think of your audience

The first tip is for when you are considering giving a presentation, and when you start writing your content. Think of what your audience will get from your presentation. What’s in it for them?

If your point is “to talk about…” you’re off track. You will put your audience to sleep. Seriously.
If you want to share some information, why not just write a blog entry? Why do you need to be in the room?

People don’t care about you. They care about themselves. So make your presentation about them.

A presentation is a sales pitch. You are there to convince people of something. Maybe it’s an idea you want them to believe. Maybe it’s a product you want them to use. If you’re not *selling* something, why are you giving a presentation? You may as well write a blog entry, and stay at home.

So cut to the chase. When you’re thinking about your presentation, think about one core question: What do I want audience members to do once they’ve seen my presentation? And then make sure everything in your presentation is driving towards that goal.

Tell a story

The best way to convince someone of something is to entertain them. And stories are entertaining. Some people are funny, and can use humour to entertain. I’m not funny. But everyone can tell a story.

Human beings are natural storytellers. And stories are a wonderful way to get a point across, especially if you structure your narrative well.

One possible narrative you could use is this:

  1. Problem statement
  2. Proposed solution
  3. Supporting evidence
  4. Conclusion

It’s important to finish your presentation with a call to action. Make people march. The action can be small. Integrate the key lesson of your presentation into their work. Download an SDK and try out some sample apps. Write a letter to a local politician. Donate to your cause.

But make it clear to people what you want from them.

Presentation design

The third suggestion is to design slides to complement what you say, rather than repeat it.

Don’t write everything you’re going to say on the slide. Otherwise people will just read it, and won’t concentrate on you. You might as well just write a document and stay at home. Bullet points are especially bad for this – avoid them. Slides should be sparse. Pictures work better. Use images that reinforce your point – show, then tell.

Let’s say I wanted to convince you that Ethiopia was once again on the brink of famine. I could show you charts of crop yields, child mortality, and displaced populations. Or I could show you a photo and tell you the rest.

It’s emotional. It’s cheating. It works.

Practice

The biggest sin that people make when giving presentations is not to say what they want to say out loud before getting on stage.

Runners train. Football players practice. Musicians and actors spend hours getting performances right. So shouldn’t you too? How do you know how long it will take you to get through your content? How do you know what’s useful and what’s superfluous? Does your presentation have a good flow? Practice will tell you.

Doing all this takes time. It’s not as easy as throwing bullet points together the day before your presentation and hoping for the best.

But think of how many man-hours people will spend watching your presentation. How much of your time is it worth to ensure that your audience isn’t wasting theirs?

So go do it. Concentrate on your audience’s interests. Tell stories and entertain people. Make slides sparse. And prepare beforehand by practising. This is harder than what you do now. The pay-off is huge.

The best part is that your audiences will thank you.

Related links:

  • Really Bad Powerpoint – Seth Godin (source of many of the ideas in this presentation)
  • Kill your presentation (before it kills again) – Kathy Sierra – Kathy has lots of material on focusing on your users rather than on yourself – and this is true for presentations too
  • Presentation Zen – the great blog of Garr Reynolds – there is an accompanying book which is well worth reading
  • slide:ology – Nancy Duarte – one of my favourite books on presentation design – a must-read on all stages of presentation design from deciding what to talk about through to working on your delivery

Garmin Forerunner 405 ANT+ protocol

freesoftware, General, running 1 Comment

Does anyone anywhere know anyone working for Garmin who might be able to put me in touch with someone who can tell me what the ANT+ communication protocol is, so that I can pass it on to the good people developing gant, and they can fix their driver not to crash in the middle of a transfer, please? It seems to break for me on any transfer with more than one track.

I can see absolutely no competitive reason to keep the protocol private – it’s almost completely reverse engineered already, publishing it would cost Garmin essentially nothing, and it would allow us poor Linux users a way to get our tracks off our watches. The problem is that there’s an inertia in keeping this stuff private. It’s hard to get the person with the knowledge (the engineer) and the person with the power to sign off on publishing the protocol (a VP, probably) in the same place as the person who wants the information (little ol’ me) – it can take hours of justifications & emails & meetings… Can anyone help short-circuit the problem by helping me get the names of the engineer & the manager involved?

Thanks!

Estimating merge costs

community, freesoftware, General, maemo 2 Comments

After commenting on Mal Minhas’s “cost of non-participation” paper (PDF), I’ve been thinking about the cost of performing a merge back to a baseline, and I think I have something to work with.

First, this might be obvious, but worth stating: Merging a branch which has changed and a branch which has not changed is trivial, and has zero cost.

So merging only has a cost if we have a situation where the two trees concerned with the merge have changed.

We can also make another observation: If we are only adding new function points to a branch, and the mainline branch does not change the API, there is a very small cost to merging (almost zero). There may be some cost if functions with similar names, performing similar functions, have been added to the mainline branch, but we can trivially merge even a large diff if we are not touching any of the baseline code, and only adding new files, objects, or functions.

With that said, let’s get to the nuts & bolts of the analysis:

Let’s say that a code tree has n function points. A vendor takes a branch and makes a series of modifications which affects x function points in the program. The community develops the mainline, and changes y function points in the original program. Both vendor and community add new function points to extend functionality, but we’re assuming that merging these is an almost zero cost.

The probability of conflicts is obviously greater the bigger x and y are, and it increases very fast as those numbers grow. Let’s assume that every time a given function point has been modified by both the vendor and the community there is a conflict which must be manually resolved (1). If we assume that changes are independently distributed across the codebase (2), we can work out that the probability of at least one conflict is 1 – (n-x)!(n-y)!/(n!(n-x-y)!), if I haven’t messed up my maths (thanks to derf on #maemo for the help!).

So if we have 20 functions, and one function gets modified on the mainline and another on the vendor branch, we have a 5% chance of a conflict, but if we modify 5 each, the probability goes up to over 80%. This is the same phenomenon which lets you show that if you have 23 people in a room, chances are that at least two of them will share a birthday.
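For anyone who wants to check those numbers, here is a small Python sketch of the same calculation (my own illustration, not something from the original analysis), written with binomial coefficients, which are equivalent to the factorial expression above under assumption (2):

```python
from math import comb

def conflict_probability(n, x, y):
    """Probability of at least one conflict when the vendor modifies x of the
    n function points and the community modifies y, assuming both sets are
    drawn uniformly and independently (assumption (2) in the footnotes)."""
    # P(no overlap) = C(n-x, y) / C(n, y), i.e. (n-x)!(n-y)!/(n!(n-x-y)!)
    return 1 - comb(n - x, y) / comb(n, y)

print(conflict_probability(20, 1, 1))  # 0.05  -> the 5% case above
print(conflict_probability(20, 5, 5))  # ~0.81 -> the "over 80%" case
```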

We can also calculate the expected number of conflicts, and thus the expected cost of the merge, if we assume the cost of each of these conflicts is a constant cost C (3). However, the maths to do that is outside the scope of my skillz right now :-( Anyone else care to give it a go & put it in the comments?
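One quick way to get at it without heavy combinatorics: under assumption (2), each of the vendor’s x modified function points has a y/n chance of also having been changed on the mainline, so the expected number of conflicts is xy/n and the expected merge cost is C times xy/n. The simulation below (again just an illustrative sketch under the assumptions above, not a definitive analysis) agrees with that closed form:

```python
import random

def expected_conflicts_mc(n, x, y, trials=100_000):
    """Monte Carlo estimate of the expected number of conflicting function
    points, with vendor and community each modifying a uniformly random
    subset of the n function points (assumption (2) in the footnotes)."""
    total = 0
    for _ in range(trials):
        vendor = set(random.sample(range(n), x))
        community = set(random.sample(range(n), y))
        total += len(vendor & community)
    return total / trials

n, x, y = 20, 5, 5
print(expected_conflicts_mc(n, x, y))  # simulated: roughly 1.25
print(x * y / n)                       # closed form under the same assumption: 1.25
# The expected merge cost is then C * x * y / n for a constant per-conflict cost C.
```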

We have a bunch of data we can analyse to calculate the cost of merges in quantitative terms (for example, Nokia’s merge of Hildon work from GTK+ 2.6 to 2.10), to estimate C, and of course we can quite easily measure n and y over time from the database of source code we have available to us, so it should be possible to give a very basic estimate metric for cost of merge with the public data.

Footnotes:

(1) It’s entirely possible to have automatic merges happen within a single function, and the longer the function, the more likely this is to happen if the patches are short.

(2) A poor assumption, since changes tend to be disproportionately concentrated in a few key functions.

(3) I would guess that the cost is usually proportional to the number of lines in the function, perhaps to the square of the number of lines – resolving a conflict in a 40-line function is probably more than twice as easy as resolving a conflict in an 80-line function. This is slightly at odds with footnote (1), so overall the assumption of constant cost seems reasonable to me.
