October 6, 2009
freesoftware, General, running
Anyone anywhere know anyone working for Garmin who might be able to put me in touch with someone who can tell me what the ANT+ communication protocol is, so that I can give it to the good people developing gant, so that they can fix their driver to not crash in the middle of a transfer please? It seems to break for me for any transfer with more than one track.
I can see absolutely no competitive reason to keep the protocol private, it’s almost completely reverse engineered already, and this would cost Garmin essentially nothing, and allow us poor Linux users a way to get our tracks off our watches. The problem is there’s an inertia in keeping this stuff private. It’s hard to get the person with the knowledge (the engineer) and the person with signing power to publish the protocol (a VP probably) in the same place with the person who wants the information (little ol’ me) – it can take hours of justifications & emails & meetings… Can anyone help short-circuit the problem by helping me get the name of the engineer & the manager involved?
September 28, 2009
community, freesoftware, General, maemo
After commenting on Mal Minhas’s “cost of non-participation” paper (PDF), I’ve been thinking about the cost of performing a merge back to a baseline, and I think I have something to work with.
First, this might be obvious, but worth stating: Merging a branch which has changed and a branch which has not changed is trivial, and has zero cost.
So merging only has a cost if we have a situation where the two trees concerned with the merge have changed.
We can also make another observation: If we are only adding new function points to a branch, and the mainline branch does not change the API, there is a very small cost to merging (almost zero). There may be some cost if functions with similar names, performing similar functions, have been added to the mainline branch, but we can trivially merge even a large diff if we are not touching any of the baseline code, and only adding new files, objects, or functions.
With that said, let’s get to the nuts & bolts of the analysis:
Let’s say that a code tree has n function points. A vendor takes a branch and makes a series of modifications which affects x function points in the program. The community develops the mainline, and changes y function points in the original program. Both vendor and community add new function points to extend functionality, but we’re assuming that merging these is an almost zero cost.
The probability of conflicts is obviously greater the bigger x and y are, and it increases very quickly as the numbers grow. Let’s assume that every time a given function point has been modified by both the vendor and the community, there is a conflict which must be manually resolved (1). If we assume that changes are independently distributed across the codebase (2), we can work out that the probability of at least one conflict is 1 – (n-x)!(n-y)!/(n!(n-x-y)!) if I haven’t messed up my maths (thanks to derf on #maemo for the help!).
So if we have 20 functions, and one function gets modified on the mainline and another on the vendor branch, we have a 5% chance of a conflict, but if we modify 5 each, the probability goes up to over 80%. This is the same phenomenon which lets you show that if you have 23 people in a room, chances are that at least two of them will share a birthday.
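These figures are easy to sanity-check with a few lines of Python – a quick sketch of the formula above, under the same uniform-independence assumption (2):

```python
from math import comb

def p_conflict(n, x, y):
    """Probability that at least one of the x vendor-modified function
    points is also among the y community-modified points, assuming both
    sets are drawn uniformly and independently from the n points."""
    # P(no overlap) = C(n-x, y) / C(n, y): the chance that all y of the
    # community's changes land in the n-x points the vendor didn't touch.
    return 1 - comb(n - x, y) / comb(n, y)

print(round(p_conflict(20, 1, 1), 3))  # 0.05  -> the 5% case above
print(round(p_conflict(20, 5, 5), 3))  # 0.806 -> "over 80%"
```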
We can also calculate the expected number of conflicts, and thus the expected cost of the merge, if we assume the cost of each of these conflicts is a constant cost C (3). However, the maths to do that is outside the scope of my skillz right now. Anyone else care to give it a go & put it in the comments?
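For what it’s worth, here is one hedged stab at it: assuming (2), linearity of expectation says each of the vendor’s x modified points is also community-modified with probability y/n, so the expected number of conflicts should be x·y/n, making the expected merge cost roughly C·x·y/n. A small Monte Carlo sketch to check that claim:

```python
import random

def expected_conflicts_mc(n, x, y, trials=100_000, seed=1):
    """Monte Carlo estimate of the expected number of function points
    modified by both the vendor branch and the mainline, under the
    uniform-independence assumption (2)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        vendor = set(rng.sample(range(n), x))
        community = rng.sample(range(n), y)
        total += sum(1 for p in community if p in vendor)
    return total / trials

# Linearity of expectation gives a closed form: each of the x
# vendor-modified points is community-modified with probability y/n,
# so E[conflicts] = x*y/n, and the expected merge cost is ~ C*x*y/n.
print(expected_conflicts_mc(20, 5, 5))  # ≈ 1.25 (= 5 * 5 / 20)
```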
We have a bunch of data we can analyse to calculate the cost of merges in quantitative terms (for example, Nokia’s merge of Hildon work from GTK+ 2.6 to 2.10), to estimate C, and of course we can quite easily measure n and y over time from the database of source code we have available to us, so it should be possible to give a very basic estimate metric for cost of merge with the public data.
(1) It’s entirely possible to have automatic merges happen within a single function, and the longer the function, the more likely this is to happen if the patches are short.
(2) A poor assumption, since changes tend to be disproportionately concentrated in a few key functions.
(3) I would guess that the cost is usually proportional to the number of lines in the function, perhaps by the square of the number of lines – resolving a conflict in a 40 line function is probably more than twice as easy as resolving a conflict in an 80 line function. This is slightly at odds with footnote (1), so overall the assumption of constant cost seems reasonable to me.
September 17, 2009
community, freesoftware, General, gimp, gnome, maemo, work
(Reposted from Neary Consulting)
Mal Minhas of the LiMo Foundation announced and presented a white paper at OSiM World called “Mobile Open Source Economic Analysis” (PDF link). Mal argues that by forking off a version of a free software component to adjust it to your needs, run intensive QA, and ship it in a device (a process which can take up to 2 years), you are leaving money on the table, by way of what he calls “unleveraged potential” – you don’t benefit from all of the features and bug fixes which have gone into the software since you forked off it.
While this is true, it is also not the whole story. Trying to build a rock-solid software platform on shifting sands is not easy. Many projects do not commit to regular stable releases of their software. In the not too distant past, the FFmpeg project, universally shipped in Linux distributions, had never made a stable or unstable release. The GIMP went from version 1.2.0 in December 1999 to 2.0.0 in March 2004 in unstable mode, with only bug-fix releases on the 1.2 series.
In these circumstances, getting both the stability your customers need, and the latest & greatest features, is not easy. Time-based releases, pioneered by the GNOME project in 2001, and now almost universally followed by major free software projects, mitigate this. They give you periodic sync points where you can get software which meets a certain standard of feature stability and robustness. But no software release is bug-free, and this is true for both free and proprietary software. In the Mythical Man-Month, Fred Brooks described the difficulties of system integration, and estimated that 25% of the time in a project would be spent integrating and testing relationships between components which had already been planned, written and debugged. Building a system or a Linux distribution, then, takes a lot longer than just throwing the latest stable version of every project together and hoping it all works.
By participating actively in the QA process of the project leading up to the release, and by maintaining automated test suites and continuous integration, you can mitigate the effects of both the shifting sands of unstable development versions and reduce the integration overhead once you have a stable release. At some stage, you must draw a line in the sand, and start preparing for a release. In the GNOME project, we have a progressive freezing of modules, progressively freezing the API & ABI of the platform, the features to be included in existing modules, new module proposals, strings and user interface changes, before finally we have a complete code freeze pre-release. Similarly, distributors decide early what versions of components they will include on their platforms, and while occasional slippages may be tolerated, moving to a new major version of a major component of the platform would cause integration testing to return more or less to zero – the overhead is enormous.
The difficulty, then, is what to do once this line is drawn. Serious bugs will be fixed in the stable branch, and they can be merged into your platform easily. But what about features you develop to solve problems specific to your device? Typically, free software projects expect new features to be built and tested on the unstable branch, but you are building your platform on the stable version. You have three choices at this point, none pleasant – never merge, merge later, or merge now:
- Develop the feature you want on your copy of the stable branch, resulting in a delta which will be unique to your code-base, which you will have to maintain separately forever. In addition, if you want to benefit from the features and bug fixes added to later versions of the component, you will incur the cost of merging your changes into the latest version, a non-negligible amount of time.
- Once you have released your product and your team has more time, propose the features you have worked on piecemeal to the upstream project, for inclusion in the next stable version. This solution has many issues:
- If the period is long enough, your feature additions will be long removed from the codebase as it has evolved, and merging your changes into the latest unstable tree will be a major task
- You may be redundantly solving problems that the community has already addressed, in a different or incompatible way.
- Feature requests may need substantial re-writing to meet community standards. This problem is doubly so if you have not consulted the community before developing the feature, to see how it might best be integrated.
- In the worst case, you may have built a lot of software on an API which is only present in your copy of the component’s source tree, and if your features are rejected, you are stuck maintaining the component, or re-writing substantial amounts of code to work with upstream.
- Develop your feature on the unstable branch of the project, submit it for inclusion (with the overhead that implies), and back-port the feature to your stable branch once included. This guarantees a smaller delta from the next stable version to your branch, and ensures your work gets upstream as soon as possible, but adds a time & labour overhead to the creation of your software platform
In all of these situations there is a cost. The time & effort of developing software within the community and back-porting, the maintenance cost (and related unleveraged potential) to maintaining your own branch of a major component, and the huge cost of integrating a large delta back to the community-maintained version many months after the code has been written.
Intuitively, it feels like the long-term cheapest solution is to develop, where possible, features in the community-maintained unstable branch, and back-port them to your stable tree when you are finished. While this might be nice in an ideal world, feature proposals have taken literally years to get to the point where they have been accepted into the Linux kernel, and you have a product to ship – sometimes the only choice you have is to maintain the feature yourself out-of-tree, as Robert Love did for over a year with inotify.
While addressing the raw value of the code produced by the community in the interim, Mal does not quantify the costs associated with these options. Indeed, it is difficult to do so. In some cases, there is not only a cost in terms of time & effort, but also in terms of goodwill and standing of your engineers within the community – this is the type of cost which it is very hard to put a dollar value on. I would like to see a way to do so, though, and I think that it would be possible to quantify, for example, the community overhead (as a mean) by looking at the average time for patch acceptance and/or number of lines modified from initial proposal to final mainline merge.
Anyone have any other thoughts on ways you could measure the cost of maintaining a big diff, or the cost of merging a lot of code?
August 4, 2009
I happened on a Wired article about the 6 word novel this morning, in a link from a newsletter I’m subscribed to.
Like Haiku and other very formulaic structures, the six word novel gives the author enormous freedom while constraining them in a fish-bowl.
My favourite examples:
- “For sale: Baby shoes. Never worn.” – Ernest Hemingway.
- “Longed for him. Got him. Shit.” – Margaret Atwood
- “With bloody hands, I say good-bye.” – Frank Miller
Reading these, and seeing my favourite, the second-place finisher of the competition in this newsletter (“My secret discovered. Plane ticket purchased.”), made me want to give it a try.
After some work, here’s my best effort.
As the noose tightened, she remembered.
A bit macabre, nonetheless I’m pretty happy with the images it brings forward, and the questions it leaves unanswered.
Anyone else care to try?
July 30, 2009
General, gnome, guadec, maemo
Sunday, Monday and Tuesday were the “core” days of the Gran Canaria Desktop Summit, with cross-desktop and KDE & GNOME specific presentations throughout. I caught a number of presentations, but mostly I was chatting in the hallway track, or doing work on the schedule, or actually working.
For me, the story of the 3 days was “parties”. I missed the early sessions on Sunday and Monday to get breakfast at 10am, after the parties hosted by Nokia (Sunday night) and Igalia (Monday night) – I was relieved that there was no party planned on Tuesday night, my 35-year-old body couldn’t stand the pace! Great parties, not marred by excessive boozing mostly, and some great chats, notably with jrb, and Adam Dingle and Jim Nelson from Yorba, makers of Shotwell, a Vala photo manager with some really nice features and plans. And some great discussions with Michael Meeks and Matthew Garrett on the futon during the Igalia party, with Federico Mena Quintero on architecture design patterns, and Jorge Castro on dinosaurs. I also got to meet Joaquim from Igalia, the Macaque band were great, but I’m sure that a hoarse Lefty regretted Sweet Home Chicago and Smoke on the Water the day after.
I did get to some presentations though (here with a one line summary):
- Power management by Matthew Garrett: “Power management isn’t doing the same amount of work, slower. Do less work, or you’re killing polar bears.”
- ConnMan by Not Marcel Holtmann (Joshua Lock from Intel gave the talk in the end – thanks Emmanuele!): “ConnMan solves some problems for Moblin that NetworkManager wasn’t designed to solve.” (I think).
- Bluetooth on Linux by Bastien Nocera: “It mostly works now”
- Introduction to GNOME Shell, by Owen Taylor: “It’s pretty cool stuff already”
- GNOME Zeitgeist, by Thorsten, Seif and Federico: “We record what you’re doing”
- Communicating design in development, by Celeste Lyn Paul: “Keep it simple until they get the design principle; excessive realism too early just makes the discussion about the details”. Unfortunately, I don’t see a video available – it would be highly recommended viewing if there were one.
- GNOME 3.0: A live circus^Wstatus update, by Vincent Untz et al: “It’s not just GTK+, Zeitgeist and GNOME Shell”
- GNOME 1,2,3, by Fernando Herrera and Xan Lopez: “A history of GNOME with thanks to YouTube” (my favourite presentation of the conference)
- Personal Passion lightning talks, by Aaron Bockover: “We’re not just Free Software hackers!” This was absolutely my second favourite session of the conference. We got a 10 minute overview of the burnout cycle from Jono Bacon, underlining how important it is to have a life outside of software, and heard from people whose passion was running (complete with a soundtrack of me finishing a marathon), airplanes, motorcycling, cooking, bacon, dinosaurs, Aikido, buddhism and calligraphy, trekking in Argentina, and also a couple of geeky ones on icon design and scheme (which was very enjoyable indeed, thanks Andy!)
Update: Memory playing tricks with me – for of course, Tuesday evening was the highly anticipated meeting of SMASHED. We finally met at the Mare Baja again, where the opening night party was held, and enjoyed a bunch of tapas courtesy of CodeThink, before scoffing down some great whisk[e]y, including (from memory) a 21yo Highland Park, a nice 16yo Longmorn, a very old bottle of Oude Genever from Lefty, an old standard Connemara single malt, and a Yamazaki 10yo I brought.
SMASHED 2009 in Gran Canaria
Festivities carried on until after 1am, when I left with Andrew Savory and someone else (whose name I don’t recall), and Behdad got in an unprovoked fight with the footpath on the way back to the hotel – it came right up and hit him in the face. Some nice KDE people took him to the hospital to get sewn up – luckily the group photos had been taken earlier in the day.
Got back to the hotel around 2, and tried to catch up on some of that beauty sleep before Mobile Day on Wednesday in the new conference location in the university.
July 7, 2009
General, gnome, guadec
I was talking with Aaron Bockover yesterday and he told me that he wasn’t going to give the Silverlight talk which he had submitted back in March, and that he planned to give a presentation on something completely unrelated that he found interesting. Chris Blizzard suggested that he could give a lightning talk on amateur aeronautics, and as the idea spread a whole bunch of ideas on interesting non-GNOME related subjects that GNOME community members are interested in came up from architecture to running. There’s also a really valuable short talk on the burnout cycle (and how to break it) from Jono Bacon in there. So for 45 minutes, we will have a set of lightning talks reflecting the eclectic nature of the GNOME community – if you see Aaron and have something you are passionate about that you want to talk about for 3 to 5 minutes, grab him today or turn up at his session at 5:30 and shout.
And spread the word!
June 9, 2009
Warning: politics post
Since moving to France, the only elections I get to vote in here are the European and municipal elections – so on Sunday I blew the dust off my voter card & trotted down to my local “bureau de vote” as one of the 40% of the French electorate who voted. I had a chance to think about why the European elections inspire people so little.
In the past couple of weeks, debate about European issues has been mostly absent from newspapers and TV. What little we hear is more like celeb news – “he said, she said” or “the sworn enemies unite and appear on stage together pretending they like each other”. But to me, the fundamental questions about what we expect from Europe, and how a vote for one party or another will move towards that vision, are absent.
There are a few reasons for this – the political groupings in the EU parliament are detached from the local political landscape in France. Even the major groupings like the EPP, PES, the Liberals and the Greens don’t have an identity in the election campaign. There is no European platform of note. Very little appears to be spent on advertising. In brief, the European election appears to the public to be nothing more than a mid-term popularity contest with little impact on people.
That is not to say that the EU has no impact. But the European parliament is quite hamstrung by the European law-making process, as we saw with the vote for the EUCD: in that case, the EU parliament was unhappy with the law proposed by the commission, and proposed many amendments which improved the law, only to see the majority of these reversed by the council of ministers. When the law came back to the parliament, there were three options available: accept the law, reject it outright (requiring an absolute majority of MEPs, difficult to obtain), or reject it by a majority (by proposing amendments) and send it into a commission, made up 50% of nominees from the council of ministers and 50% from the EU parliament.
The process is weighted toward the commission (which writes the law in the first place) and the council of ministers, who have veto power at every stage, and against the parliament, due to the requirement of an absolute majority for rejection in second reading. The commission and the council of ministers are both nominated by the governments of the member countries. I would argue that because of this, they don’t represent the European population, so much as they represent a cross-section of European political parties.
On other occasions, a stand-off between the governments and the EP is possible – as with the nomination of the Barroso commission in 2004. And when people are asked their opinion on the direction of Europe, as in the first referendum on the Nice treaty in Ireland, the French and Dutch referenda on the European constitution, and now the referendum on the Lisbon treaty in Ireland, if the result doesn’t match with what is supported by the member governments, a way is found to work around the result. In the case of a small country like Ireland, a couple of special case amendments, and you rerun the referendum. For the bigger countries like France, you renegotiate the form of the agreement so that it’s a treaty, not a single document (which, by the way, makes it harder to read and understand), so that you can ratify it with a working majority in parliament.
And so Europeans are slowly but surely distancing themselves from Europe. Fringe parties and independents representing a protest vote get very good scores, like the UKIP in the UK, or NPA and (until recently) the Front National in France. The European parliament is becoming less representative of European opinion, rather than more representative. Only 4 in 10 registered voters go to the polls. I would be willing to bet that Lisbon will not pass the second time around in Ireland, plunging Europe into another institutional crisis.
These are the twin problems facing Europe: the national governments in Europe are not representing the views of their citizens, and the only representative body we have is pretty ineffectual, even when they try to do something.
The solutions in my opinion: Elect commissioners and members of the council of ministers. Create Europe-wide political parties with Europe-wide campaigns, like in the US. Let the voters know what they’re voting for in the parliament, and allow them to vote the executive branch of the European government. The path to greater voter activity in Europe is greater voter inclusion in the electoral process.
May 26, 2009
I recently upgraded from Ubuntu 8.10 to 9.04 and in the process “cleaned up” the distro using the very useful option to “make my system as close as possible to a new install” (I don’t remember if that’s the exact text, but that was the gist of it). Last night, I tried to use the printer in my office for the first time since upgrading, an Epson Stylus Office BX300F (all in one scanner/printer/copier/fax).
With 8.10, I finally got printing working – I don’t remember the details, but I do recall that I had to install pipslite and generate a new PPD file to get a working driver for the printer, which I found through the very useful OpenPrinting.org website. It’s a fairly new printer, on the market since September 2008 as far as I can tell, cheap, and part of a long-running series from Epson (the Linux driver available for download on the Epson site is dated early 2007).
Nonetheless I was reassured by OpenPrinting’s assurance that the printer and scanner “work perfectly”, and I wasn’t expecting to have to download a source package, install some development packages, and compile myself a new Ubuntu package to get it working. And then discover that there was a package available already that I just hadn’t found. But anyway, that was then…
When I upgrade my OS, I have a fairly simple expectation, that changes I have to make to the previous version to “fix” things don’t get broken post-upgrade. There are some scenarios where I can almost accept an exception – a few releases ago, I had problems with Xrandr because changes I had previously had to make to get my Intel hardware working properly were no longer necessary as X.org integrated and improved the driver – but it took me a while to figure out what was happening, and revert my Xorg config to the distro version.
Yesterday, when I had to print some documents, I got a nice error message in the properties of the printer that let me know I had a problem: “Printer requires the ‘pipslite-wrapper’ program but it is not currently installed. Please install it before using this printer.” And thus began the yak-shaving session that people could follow on twitter yesterday.
- Search in synaptic for pipslite – found – but: “Package pipslite has no available version, but exists in the database.” Gah!
- Try to find an alternative driver for the Epson installed on the system: no luck. Hit the forums.
- Noticed that libsane-backends-extra wasn’t installed, installed it to get the epkowa sane back-end, and “scanimage -L” as root worked (for the first time) – so went on a side-track to get the scanner working as a normal user
- Figure out what USB node the scanner is, chgrp scanner, scanning works!
- Then figure out how the group gets set on the node on plugging, found the appropriate udev rules file (/lib/udev/rules.d/40-libsane-extras), copied it to /etc/udev/rules.d, added a new line to get the scanner recognised (don’t forget to restart udev!) scanning works!
- Re-download a driver from the website linked to in OpenPrinting’s page for the printer – they have a .deb for Ubuntu 9.04! Rock!
- Install driver, error message has changed, but still no printing: “/usr/lib/cups/filter/pipslite-wrapper failed”. Forums again.
- Tried to regenerate a PPD file: pipslite-install: libltdl.so.3 not found. ls -l /usr/lib/*ltdl*: libltdl.so.7 – Bingo! The pre-built “Ubuntu” binaries don’t link to the right versions of some dependencies.
- Download the source code, compile a new .deb (dpkg-buildpackage works perfectly), install, regenerate .ppd file, (don’t forget to restart CUPS), and we have a working printer!
4 hours lost.
Someone will doubtless follow up in comments telling me how stupid I was not to [insert some "easy" way of getting the printer working] which didn’t involve downloading source code and compiling my own binary package, or fiddling about in udev to add new rules, or sullying my pristine upgrade with an unofficial package. Please do! I’m eager to learn. And perhaps someone else with the same problems will find this blog entry when they look for “Ubuntu Epson Stylus Office BX300F” and won’t have to figure things out the hard way like I did.
Please bear in mind when you do that I’m not a neophyte, that I’ve got some pretty good Google-fu, and that I’ve been using Linux for many many years – and it took me 4 hours to re-do something I’d already done once 6 months ago, and wasn’t expecting to have to do again. How much harder is it for a first timer when he buys a USB headset & mic, or printer/scanner, or webcam?
Update: After fixing the problem, I have discovered that the Gutenprint driver mentioned on the OpenPrinting page (using CUPS+Gutenprint) does work with my printer. It seems that if I had done a fresh install, rather than an upgrade, I would not have had this existing printer using a no longer installed “recommended” driver – as John Mark suggested to me on twitter, pipslite is no longer necessary. In addition, when I tested both drivers with the same image, there is a noticeable difference in the results – the gutenprint driver appears to use a higher alpha, resulting in colours being much lighter in mid-tones. The differences are quite remarkable.
March 27, 2009
As promised during my presentation yesterday, here are the various publications I linked to for information on evaluating community governance patterns (and anti-patterns):
And, for French speakers, a bonus link (although the language is dense academic):
March 23, 2009
For a European travelling in the US, one of the things that jumps out at you when you turn on the TV is the number of ads for prescription drugs you get in the US.
These 30 or 60 second ads are all very similar: 5 to 10 seconds presenting the medication, followed by 20 to 25 seconds of disclaimers and disclosure of secondary effects, with a warning to consult with your physician and ask him about the drug in question.
It’s symptomatic of the approach to healthcare in the US, which says that the patient is responsible for his care – your doctor’s role is to advise you what medications are available, and let you decide what you use to medicate yourself. Thus, drug companies market their drugs directly to the public, rather than to doctors.
Like Barry Schwartz in “The Paradox of Choice”, I don’t think this is a healthy state of affairs. Excessive choice creates stress, and asking someone to make a decision they are not sufficiently informed to make is asking for trouble. You might as well ask me to fix the financial crisis – it doesn’t matter how good my advisors are, I’m not equipped to make decisions in the area.
Where I live, patients go to their doctors for expert advice. The doctor decides what medication, if any, is appropriate for your condition, and gives you a prescription. Of course, it is your choice if you fill that prescription afterwards, and if you’re like me, you ask the doctor lots of questions during your visit, but the chain of responsibility is substantially different. There is no point marketing prescription medication to the general public, because the doctor is the one who decides what prescription medication you use.