Trial by fire: distro upgrade

I recently upgraded from Ubuntu 8.10 to 9.04 and in the process “cleaned up” the distro using the very useful option to “make my system as close as possible to a new install” (I don’t remember if that’s the exact text, but that was the gist of it). Last night, I tried to use the printer in my office – an Epson Stylus Office BX300F (an all-in-one scanner/printer/copier/fax) – for the first time since upgrading.

With 8.10, I finally got printing working – I don’t remember the details, but I do recall that I had to install pipslite and generate a new PPD file to get a working driver for the printer, which I found through the very useful OpenPrinting.org website. It’s a fairly new printer, on the market since September 2008 as far as I can tell, cheap, and part of a long-running series from Epson (the Linux driver available for download on the Epson site is dated early 2007).

Nonetheless, I was reassured by OpenPrinting’s claim that the printer and scanner “work perfectly”, and I wasn’t expecting to have to download a source package, install some development packages, and compile myself a new Ubuntu package to get it working. And then discover that there was a package available already that I just hadn’t found. But anyway, that was then…

When I upgrade my OS, I have a fairly simple expectation: the changes I had to make to “fix” the previous version shouldn’t get broken post-upgrade. There are some scenarios where I can almost accept an exception – a few releases ago, I had problems with Xrandr because changes I had previously made to get my Intel hardware working properly were no longer necessary as X.org integrated and improved the driver – but it took me a while to figure out what was happening and revert my Xorg config to the distro version.

Yesterday, when I had to print some documents, I got a nice error message in the properties of the printer that let me know I had a problem: “Printer requires the ‘pipslite-wrapper’ program but it is not currently installed. Please install it before using this printer.” And thus began the yak-shaving session that people could follow on twitter yesterday.

  • Search in synaptic for pipslite – found – but: “Package pipslite has no available version, but exists in the database.” Gah!
  • Try to find an alternative driver for the Epson installed on the system: no luck. Hit the forums.
  • Noticed that libsane-backends-extra wasn’t installed, installed it to get the epkowa sane back-end, and “scanimage -L” as root worked (for the first time) – so went on a side-track to get the scanner working as a normal user
  • Figured out which USB device node the scanner was on, chgrp’d it to the scanner group – scanning works!
  • Then figured out how the group gets set on the node when the device is plugged in: found the appropriate udev rules file (/lib/udev/rules.d/40-libsane-extras), copied it to /etc/udev/rules.d, and added a new line to get the scanner recognised (don’t forget to restart udev!) – scanning works as a normal user! (There’s a sketch of the rule after this list.)
  • Re-download a driver from the website linked to in OpenPrinting’s page for the printer – they have a .deb for Ubuntu 9.04! Rock!
  • Install driver, error message has changed, but still no printing: “/usr/lib/cups/filter/pipslite-wrapper failed”. Forums again.
  • Tried to regenerate a PPD file: pipslite-install complained that libltdl.so.3 was not found, while ls -l /usr/lib/*ltdl* showed only libltdl.so.7 – bingo! The pre-built “Ubuntu” binaries don’t link against the right versions of some dependencies.
  • Downloaded the source code, compiled a new .deb (dpkg-buildpackage works perfectly), installed it, regenerated the .ppd file (don’t forget to restart CUPS), and we have a working printer! (The commands are sketched just below.)
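
For anyone hitting the same wall, here is a rough sketch of the two fixes, pieced together from memory. Treat it as a guide rather than a recipe: the USB product ID, the .deb file name and the exact build dependencies are placeholders you will need to adapt (lsusb will tell you the real IDs, and ./configure will tell you which -dev packages you are missing).

    # 1. Let normal users at the scanner: copy the packaged udev rules and add
    #    a line for the BX300F, mirroring the existing Epson entries in the file.
    lsusb | grep -i epson                     # note the vendor:product pair (the vendor is 04b8)
    sudo cp /lib/udev/rules.d/40-libsane-extras.rules /etc/udev/rules.d/
    # In /etc/udev/rules.d/40-libsane-extras.rules, add a line along these lines,
    # replacing xxxx with the product ID reported by lsusb:
    #   ATTRS{idVendor}=="04b8", ATTRS{idProduct}=="xxxx", MODE="0664", GROUP="scanner", ENV{libsane_matched}="yes"
    sudo adduser $USER scanner                # make sure your user is in the scanner group
    sudo /etc/init.d/udev restart             # then unplug and replug the printer

    # 2. Rebuild pipslite against the libraries actually shipped with 9.04
    #    (run from the unpacked pipslite source directory).
    sudo apt-get install build-essential debhelper fakeroot libcups2-dev   # plus whatever else ./configure asks for
    dpkg-buildpackage -rfakeroot -b
    sudo dpkg -i ../pipslite_*.deb
    sudo pipslite-install                     # regenerate the PPD file for the printer
    sudo /etc/init.d/cups restart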

4 hours lost.

Someone will doubtless follow up in comments telling me how stupid I was not to [insert some “easy” way of getting the printer working] which didn’t involve downloading source code and compiling my own binary package, or fiddling about in udev to add new rules, or sullying my pristine upgrade with an unofficial package. Please do! I’m eager to learn. And perhaps someone else with the same problems will find this blog entry when they look for “Ubuntu Epson Stylus Office BX300F” and won’t have to figure things out the hard way like I did.

Please bear in mind when you do that I’m not a neophyte, that I’ve got some pretty good Google-fu, and that I’ve been using Linux for many, many years – and it took me 4 hours to re-do something I’d already done once 6 months ago and wasn’t expecting to have to do again. How much harder is it for a first-timer when he buys a USB headset & mic, or printer/scanner, or webcam?

Update: After fixing the problem, I have discovered that the Gutenprint driver mentioned on the OpenPrinting page (using CUPS+Gutenprint) does work with my printer. It seems that if I had done a fresh install rather than an upgrade, I would not have ended up with an existing printer queue configured to use a “recommended” driver that was no longer installed – as John Mark suggested to me on twitter, pipslite is no longer necessary. In addition, when I tested both drivers with the same image, there was a noticeable difference in the results – the Gutenprint driver appears to use a higher alpha, resulting in colours being much lighter in the mid-tones. The differences are quite remarkable.
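
For completeness, switching an existing queue over to the Gutenprint driver can also be done from the command line. Here is a minimal sketch, assuming a queue called BX300F (the queue name and the model string are placeholders; lpinfo will tell you the real ones):

    # List the Gutenprint drivers CUPS knows about for this model
    # (adjust the grep if the printer is listed under a slightly different name).
    sudo lpinfo -m | grep -i "bx300"
    # Point the existing queue at the Gutenprint driver, using the model string
    # printed by the previous command, and re-enable the queue.
    sudo lpadmin -p BX300F -E -m "model-string-from-lpinfo"
    # Print any test document to compare the output against pipslite.
    lp -d BX300F test-page.pdf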

Too many platforms?

Fabrizio Capobianco of Funambol wondered recently if there are too many mobile Linux platforms.

The context was the recent announcement of oFono by Intel and Nokia, and some confusion and misunderstanding about what oFono represents. Apparently, several people in the media thought that oFono would be Yet Another Complete Stack, and Fabrizio took the bait too.

As far as I can tell, oFono is a component of a mobile stack, supplying the kind of high-level API for telephony functions that GStreamer provides for multimedia applications. If you look at it like this, it is a natural complement to Moblin and Maemo, and potentially a candidate technology for inclusion in the GNOME Mobile module set.

Which brings me to my main point. Fabrizio mentions five platforms besides oFono in his article: Android, LiMo, Symbian, Maemo and Moblin. First, Symbian is not Linux. Of the other four, LiMo, Maemo and Moblin share a bunch of technology in their platforms. Common components across the three are the Linux kernel (duh), D-Bus, X.org, GTK+, GConf, GStreamer, BlueZ, SQLite… For the most part, they use the same build tools. The differences are in the middleware and application layers of the platform, but the APIs that developers are mostly building against are the same across all three.

Maemo and Moblin share even more technology, as well as having very solid community roots. Nokia has invested heavily in getting its developers working upstream, as has Intel. They are both leveraging community projects right through the stack, and focusing on differentiation at the top, in the user experience. The same goes for Ubuntu Netbook Edition (the nearest thing that Moblin has to a direct competitor at the moment).

So where is the massive diversity in mobile platforms? Right now, there is Android in smartphones, LiMo targeting smartphones, Maemo in personal internet tablets and Moblin on netbooks. And except for Android, they are all leveraging the work being done by projects like GNOME, rather than re-inventing the wheel. This is not fragmentation, it is adaptability. It is the basic system being tailored to very specific use-cases by groups who decide to use an existing code base rather than starting from scratch. It is, in a word, what rocks about Linux and free software in general.

Community analysis as risk management

Recently I’ve had a number of conversations with potential clients which have reinforced something I have felt for some time: companies don’t know how to evaluate the risk associated with free software projects.

First, a background assumption. Most software built in the world, by a large margin, is in-house software.

IT departments of big corporations have long procurement processes in which they evaluate the cost of adopting a piece of infrastructure or software, including a detailed risk analysis. They ask a long list of questions, including some of the following:

  • How much will the software cost over 5 years?
  • What service package do we need?
  • How much will it cost us to migrate to a competing solution?
  • Is the company selling us this software going to go out of business?
  • If it does, can we get the source code?
  • How much will it cost us to maintain the software for the next 5 years, if we do?
  • How much time & money will it cost to build an equivalent solution in-house?

There are others, but the nub of the issue is there: you want to know what the chances are that the worst will happen, and how much that scenario will cost you. Companies are very good at evaluating the risk associated with commercial software – I would not be surprised to learn that there are actuarial tables you can apply which, given how much a company makes, how old it is and how many employees it has, will tell you its probability of still being alive in 1, 3 and 5 years.

Companies built on free software projects are harder to gauge. Many “fauxpen source” companies have integrated “community” into their sales pitch as an argument for risk mitigation. The implicit message is: “You’re not just buying software from this small ISV; if you choose us you get this whole community too, so you’re covered if the worst happens”. At OSBC, I heard one panellist say “Open Source is the ultimate source escrow” – you don’t have to wait until the worst happens to get the code, you can get it right now, before buying the product.

This is a nice argument indeed. But for many company-driven projects, it’s simply not the case that the community will fill the void. The risk involved in the free software solution is only slightly smaller than the risk of buying a commercial software solution.

And what of community-driven projects, like GNOME? How do you evaluate the risk there? There isn’t even a company involved.

There are a number of ways to evaluate the legal risk of adopting free software – Black Duck and Fossology come to mind. But very little has been written about evaluating the community risks associated with free software adoption. This is closely related to the community metrics work I have pointed to in the past – Pia Waugh and Randy Metcalfe’s paper is still a reference in this area, as is Siobhan O’Mahony and Joel West’s paper “The Role of Participation Architecture in Growing Sponsored Open Source Communities”.
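
To give a flavour of the kind of raw data such an evaluation starts from, here is the sort of quick and dirty check I tend to run first against a project’s version control history. It is only a rough sketch (the checkout path is a placeholder), looking at how concentrated recent development is in the hands of a few contributors:

    # A very rough "bus factor" check: commits per author over the last year,
    # most active contributors first...
    cd /path/to/project-checkout
    git shortlog -s -n --since="1 year ago" | head
    # ...and the total number of commits over the same period, for comparison.
    git rev-list --since="1 year ago" HEAD | wc -l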

This is a topic that I have been working on for a while now in fits and starts – but I think the time has come to get the basic ideas I use to evaluate risk associated with free software projects down on paper and out in the ether. In the meantime, what kinds of criteria do my 3 readers think I should be keeping in mind when thinking about this issue? Do you have references to related work that I might not have heard about yet?

Run a marathon… check!

Yesterday, on my second serious attempt (previously I injured myself 4 weeks before the race), I finally ran a marathon in Geneva, Switzerland.

Since getting injured in 2007, I’ve taken up running fairly seriously, joined a club, and this time round I was fairly conscientious about my training, getting in most of my long runs, speed work & pace runs as planned. I thought I was prepared, but I don’t think anything can prepare you for actually running 42.195 kilometers at race pace. Athletes will tell you that the marathon is one of the hardest events out there because it’s not just a long-distance race, it’s also a race where you have to run fast all the time. But until you’ve done it, it’s hard to appreciate what they mean.

This year, the club chose the Geneva marathon as a club outing, and around 40 club members signed up for either the marathon or the half-marathon on the banks of Lac Leman, and I couldn’t resist signing up for the marathon.

I wasn’t in perfect health, since I’d been feeling some twinges in my right hip & hamstring for the past couple of weeks, but I took it very easy during the taper before the race, and I felt pretty good the day before. We met up with the club on Saturday the 9th after lunch, drove to Geneva to get our race numbers, and then went on to the hotel in Annemasse for a “special marathon runner’s” dinner (which featured a little too much in the way of lardons, vinaigrette & buttery sauce to be called a true marathon runner’s meal), last-minute preparations for the big day, and a good night’s rest.

Up early, light breakfast, back into Geneva for the race. Arrived at 7am, lots of marathon runners around, and the excitement levels are starting to climb. After the usual formalities (vaseline under armpits and between thighs, taped nipples, visit to toilet) we made our way to the starting line for the 8am start.

Nice pace from the start – a little fast, even, but by the 3rd kilometer I’d settled into my race pace, at around 4’40 per kilometer (aiming for 3h20 with a couple of minutes margin). Walked across every water station to get two or three good mouthfuls of water and banana without upsetting my tummy. Around kilometer 7, I started to feel a little twinge in the hamstring and piriformis/pyramidal muscle, and I felt like I might be in for a long day. It didn’t start affecting me for a while, but by kilometer 16, I was starting to feel muscles seize up in my hip in reaction to the pain.

First half completed on schedule, 1h38’55, and I was feeling pretty good. Not long afterwards, every step was getting painful. Around kilometer 26, I decided (or was my body deciding for me?) to ease off on the pace a little and I started running kilometers at 4’50 to 5′.

They talk about the wall, but you don’t know what they mean until you hit it. Around kilometer 32, I found out. At first, I welcomed the feeling of heavy legs – it drowned out the pain from my hip, and here was a familiar sensation I thought I could manage. But as the kilometers wore on and my pace dropped, I was having a harder and harder time putting one foot in front of the other. Starting again after walking through the water stops at kilometers 33 and 38 was hard – it was pure will that got me going again. My pace was slipping – from 5′ to 5’30 – one kilometer I ran in 6′. It looked like I was barely going to finish in under 3h30, if I made it to the end at all.

Then a club-mate who was on a slower pace caught up to me, tapped me on the shoulder, and said “Hang on to me, we’ll finish together” (“accroche toi, on termine ensemble”). A life-saver. Manna from heaven. I picked up speed to match him – if only for 100m. After that, I said to myself, I’ll try to keep this up for another kilometer. When we passed the marker for 40k, I said I’d make it to 41 with him, and let him off for the last straight. And when we got to the final straight, I summoned up everything I had left to go for the last 1200m.

In the end, I covered those last 3200m at an average of 4’35 per kilometer – which just goes to show that those 5km when I was feeling sorry for myself were more of a mental block than anything else, and that I was able to overcome my body screaming at me to stop.

The record will show that I ran 3h26’33 for my first marathon, but that doesn’t come close to telling the story.

Afterwards, I got a massage, drank a lot of water, ate some banana, and then, feeling emptied & drained, I was overcome by a wave of emotion when I realised what I’d done.

Congratulations to the other first-time marathon runners who ran with me yesterday, and thank you Paco, I’ll never forget that you got me to the end of my first marathon.

Update: The marathon organisers had a video camera recording everyone’s arrival during the race. I discovered this afterwards; otherwise I might have been slightly more restrained after crossing the line.

You can see me arriving here, and Paco, who arrived a few seconds after me, here – for the extended sound-track.

Football clubs and free software projects

A few weeks ago I pointed out some similarities between community software projects and critical mass. After watching Chelsea-Barcelona last night – an entertaining match for many of the wrong reasons and a few of the right ones – I wanted to share another analogy that could perhaps be useful in analysing free software projects. What can we learn from football clubs?

Before you roll your eyes, hear me out for a second. I’m a firm believer that building software is just like building anything else. And free software communities share lots of similarities with other communities. And football clubs are big communities of people with shared passions.

Football clubs share quite a few features with software development. As with free software, there are different degrees of involvement: the star players and managers on the field; the back-room staff, physiotherapists, trainers and administrators; the business development and marketing people who help grease the wheels and make the club profitable; and then the supporters. If we look at the star players, they are often somewhat mercenary – they help their club to win because they get paid for it. Similarly, in many free software projects, many of the core developers are hired to develop – this doesn’t mean they’re not passionate about the project, but Stormy’s presentation about the relationship between money and volunteer effort, “Would you do it again for free?”, rings true.

Even within the supporters, you have different levels of involvement – members of supporter clubs and lifetime ticket holders, the people who wouldn’t miss a match unless they were on their death bed, people who are bringing their son to the first match of his life in the big stadium, and the armchair fans, who “follow” their team but never get closer than the television screen.

The importance of the various groups echoes free software projects too – those fanatical supporters may think that the club couldn’t survive without them, and they might be right, but the club needs trainers, back-room staff and players more. In the free software world, we see many passionate users getting “involved” in the community by sending lots of email to mailing lists suggesting improvements, but we need people hacking code, translating the software and in general “doing stuff” more than we need this kind of input. The input is welcome, and without our users the software we produce would be irrelevant, but the contribution of a supporter needs to be weighed against the work done by core developers, the “stars” of our community.

Drogba shares the love

Football clubs breed a club culture, just like software projects. For years West Ham was known for having the ‘ardest players in the league, with the ‘ardest supporters – the “West ‘Am Barmy Army”. Other clubs have built a culture of respect for authority – this is particularly true in a sport like rugby. More and more, the culture in football is one of disrespect for authority. Clubs like Manchester United have gotten away with intimidating match officials en masse when decisions didn’t go their way. I was ashamed to see players I have admired from afar – John Terry, Didier Drogba, Michael Ballack – show, in the heat of the moment, the utmost disrespect for the referee. That culture goes right through the club – when supporters see their heroes outraged and aggressive, they get that way too. The referee in question has received death threats today.

Another similarity is the need for a sense of identity and leadership. Football fans walk around adorned in their club’s colours; it gives them a sense of identity, a shared passion. And so do free software developers – the more obscure the t-shirt you’re wearing, the better. “I was at the first GUADEC” to a GNOME hacker is like saying “I was in Istanbul” to a Liverpool supporter.

This is belonging

So, given the similarities – spheres of influence and involvement, lots of different roles needed to make a successful club, a common culture and identity – what can we learn from football clubs?

A few ideas:

  • Recruitment: Football clubs work very very hard to ensure a steady stream of talented individuals coming up through the ranks. They have academies where they grow new talent, scouts, reserve teams and feeder clubs where they keep an eye on promising talent, and they will buy a star away from a competing club based on his reputation and track record.
  • Teams have natural lifecycles: When old leaders come to the end of the road, managers often have trouble filling the leadership void. Often, it’s not one player leaving, but a group of friends who have played together for years. Good teams manage to see further ahead and are constantly looking to renew the squad, so that they don’t end up losing 5 or 6 key players in one season.
  • Build belonging: Supporters want to show their sense of belonging, and people who don’t have the skillz to be on the field still want to wear their team colours and share their passion for the team. Merchandising is one way to do that, but not the only way. We should look at the way clubs cultivate their user groups and create a passionate following.
  • Leaders decide the culture: We owe it to ourselves to systematically grow a nurturing culture at the heart of our project – core developers, thought leaders, anyone who is a figurehead within the project. If we are polite and respectful to each other, considerate of the feelings of those we deal with and sensitive to how our words will be received, our supporters will follow suit.

Are there any other dodgy analogies that we can make with free software development communities? Any other lessons we might be able to draw from this one?