Giving Great Presentations – speaker notes


Earlier today I gave a lightning talk on giving great presentations at the Maemo Summit. The response has been great, and here are the notes I wrote for the presentation, so that people can refer back to the advice when the time comes.

Giving Great Presentations

It was said that when Cicero finished speaking, people turned to each other and said “that was a great speech”. But when Demosthenes finished speaking, people said “we must march”.

Throughout history, great orators have changed the world. Entire movements can grow from the powerful communication of an idea.

Yet most technical presentations are horrible. Slides filled with bullet points, and monotone delivery. How many people here have asked themselves at one stage or another during a presentation, “why am I here?”

You might not be Obama, but you can still give better presentations. Here are some basic tips for improving. Nothing I’m going to say here is difficult, but there are no easy fixes either.

Think of your audience

The first tip is for when you are considering giving a presentation, and when you start writing your content. Think of what your audience will get from your presentation. What’s in it for them?

If your point is “to talk about…” you’re off track. You will put your audience to sleep. Seriously.
If you want to share some information, why not just write a blog entry? Why do you need to be in the room?

People don’t care about you. They care about themselves. So make your presentation about them.

A presentation is a sales pitch. You are there to convince people of something. Maybe it’s an idea you want them to believe. Maybe it’s a product you want them to use. If you’re not *selling* something, why are you giving a presentation? You may as well write a blog entry, and stay at home.

So cut to the chase. When you’re thinking about your presentation, think about one core question: What do I want audience members to do once they’ve seen my presentation? And then make sure everything in your presentation is driving towards that goal.

Tell a story

The best way to convince someone of something is to entertain them. And stories are entertaining. Some people are funny, and can use humour to entertain. I’m not funny. But everyone can tell a story.

Human beings are natural storytellers. And stories are a wonderful way to get a point across, especially if you structure your narrative well.

One possible narrative you could use is this:

  1. Problem statement
  2. Proposed solution
  3. Supporting evidence
  4. Conclusion

It’s important to finish your presentation with a call to action. Make people march. The action can be small. Integrate the key lesson of your presentation into their work. Download an SDK and try out some sample apps. Write a letter to a local politician. Donate to your cause.

But make it clear to people what you want from them.

Presentation design

The third suggestion is to design slides to complement what you say, rather than repeat it.

Don’t write everything you’re going to say on the slide. Otherwise people will just read it, and won’t concentrate on you. You might as well just write a document and stay at home. Bullet points are especially bad for this – avoid them. Slides should be sparse. Pictures work better. Use images that reinforce your point – show, then tell.

Let’s say I wanted to convince you that Ethiopia was once again on the brink of famine. I could show you charts of crop yields, child mortality, and displaced populations. Or I could show you a photo and tell you the rest.

It’s emotional. It’s cheating. It works.

Practice

The biggest sin people commit when giving presentations is not saying what they want to say out loud before getting on stage.

Runners train. Football players practice. Musicians and actors spend hours getting performances right. So shouldn’t you too? How do you know how long it will take you to get through your content? How do you know what’s useful and what’s superfluous? Does your presentation have a good flow? Practice will tell you.

Doing all this takes time. It’s not as easy as throwing bullet points together the day before your presentation and hoping for the best.

But think of how many man-hours people will spend watching your presentation. How much of your time is it worth to ensure that your audience isn’t wasting theirs?

So go do it. Concentrate on your audience’s interests. Tell stories and entertain people. Make slides sparse. And prepare beforehand by practising. This is harder than what you do now. The pay-off is huge.

The best part is that your audiences will thank you.

Related links:

  • Really Bad Powerpoint – Seth Godin (source of many of the ideas in this presentation)
  • Kill your presentation (before it kills again) – Kathy Sierra – Kathy has lots of material on focusing on your users rather than on yourself – and this is true for presentations too
  • Presentation Zen – the great blog of Garr Reynolds – there is an accompanying book which is well worth reading
  • slide:ology – Nancy Duarte – one of my favourite books on presentation design – a must-read on all stages of presentation design from deciding what to talk about through to working on your delivery

Garmin Forerunner 405 ANT+ protocol


Does anyone anywhere know anyone working for Garmin who might be able to put me in touch with someone who can tell me what the ANT+ communication protocol is, so that I can give it to the good people developing gant, and they can fix their driver to stop crashing in the middle of a transfer? It seems to break for me on any transfer with more than one track.

I can see absolutely no competitive reason to keep the protocol private: it’s almost completely reverse engineered already, publishing it would cost Garmin essentially nothing, and it would allow us poor Linux users a way to get our tracks off our watches. The problem is that there’s an inertia in keeping this stuff private. It’s hard to get the person with the knowledge (the engineer) and the person with signing power to publish the protocol (a VP, probably) in the same place as the person who wants the information (little ol’ me) – it can take hours of justifications & emails & meetings… Can anyone help short-circuit the problem by helping me get the name of the engineer & the manager involved?

Thanks!

Estimating merge costs


After commenting on Mal Minhas’s “cost of non-participation” paper (PDF), I’ve been thinking about the cost of performing a merge back to a baseline, and I think I have something to work with.

First, this might be obvious, but worth stating: Merging a branch which has changed and a branch which has not changed is trivial, and has zero cost.

So merging only has a cost when both trees involved in the merge have changed.

We can also make another observation: If we are only adding new function points to a branch, and the mainline branch does not change the API, there is a very small cost to merging (almost zero). There may be some cost if functions with similar names, performing similar functions, have been added to the mainline branch, but we can trivially merge even a large diff if we are not touching any of the baseline code, and only adding new files, objects, or functions.

With that said, let’s get to the nuts & bolts of the analysis:

Let’s say that a code tree has n function points. A vendor takes a branch and makes a series of modifications which affects x function points in the program. The community develops the mainline, and changes y function points in the original program. Both vendor and community add new function points to extend functionality, but we’re assuming that merging these is an almost zero cost.

The probability of conflicts is obviously greater the bigger x and y are, and it increases very quickly as they grow. Let’s assume that every time a given function point has been modified by both the vendor and the community, there is a conflict which must be manually resolved (1). If we assume that changes are independently distributed across the codebase (2), we can work out that the probability of at least one conflict is 1 – ((n-x)!(n-y)!)/(n!(n-x-y)!), if I haven’t messed up my maths (thanks to derf on #maemo for the help!).

So if we have 20 functions, and one function gets modified on the mainline and another on the vendor branch, we have a 5% chance of a conflict, but if we modify 5 each, the probability goes up to over 80%. This is the same phenomenon which lets you show that if you have 23 people in a room, chances are that at least two of them will share a birthday.

We can also calculate the expected number of conflicts, and thus the expected cost of the merge, if we assume the cost of each of these conflicts is a constant cost C (3). However, the maths to do that is outside the scope of my skillz right now :-( Anyone else care to give it a go & put it in the comments?
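
One quick back-of-the-envelope while we wait for the comments: under assumption (2), each of the x function points modified by the vendor has probability y/n of also having been touched by the community, so by linearity of expectation the expected overlap is x·y/n, and the expected merge cost is C·x·y/n. Here is a small Python sketch of both calculations – treat it as illustrative rather than definitive:

    from math import comb

    def p_conflict(n, x, y):
        """Probability of at least one conflict: 1 - C(n-x, y)/C(n, y),
        which is the factorial formula above in binomial form."""
        return 1 - comb(n - x, y) / comb(n, y)

    def expected_merge_cost(n, x, y, C):
        """Expected number of doubly-modified function points is x*y/n
        by linearity of expectation, times an assumed constant cost C."""
        return C * x * y / n

    print(p_conflict(20, 1, 1))                # 0.05 -- the 5% example
    print(p_conflict(20, 5, 5))                # ~0.81 -- "over 80%"
    print(expected_merge_cost(20, 5, 5, 1.0))  # 1.25 conflicts' worth of C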

We have a bunch of data we can analyse to calculate the cost of merges in quantitative terms (for example, Nokia’s merge of Hildon work from GTK+ 2.6 to 2.10), to estimate C, and of course we can quite easily measure n and y over time from the database of source code we have available to us, so it should be possible to give a very basic estimate metric for cost of merge with the public data.

Footnotes:

(1) It’s entirely possible to have automatic merges happen within a single function, and the longer the function, the more likely this is to happen if the patches are short.

(2) A poor assumption, since changes tend to be disproportionately concentrated in a few key functions.

(3) I would guess that the cost is usually proportional to the number of lines in the function, perhaps to the square of the number of lines – resolving a conflict in a 40 line function is probably more than twice as easy as resolving a conflict in an 80 line function. This is slightly at odds with footnote (1), so overall the assumption of constant cost seems reasonable to me.

The value of engagement


(Reposted from Neary Consulting)

Mal Minhas of the LiMo Foundation announced and presented a white paper at OSiM World called “Mobile Open Source Economic Analysis” (PDF link). Mal argues that by forking off a version of a free software component to adjust it to your needs, run intensive QA, and ship it in a device (a process which can take up to 2 years), you are leaving money on the table, by way of what he calls “unleveraged potential” – you don’t benefit from all of the features and bug fixes which have gone into the software since you forked off it.

While this is true, it is also not the whole story. Trying to build a rock-solid software platform on shifting sands is not easy. Many projects do not commit to regular stable releases of their software. In the not-too-distant past, the FFmpeg project, universally shipped in Linux distributions, had never had a stable or unstable release. The GIMP went from version 1.2.0 in December 2000 to 2.0.0 in March 2004, with all new development happening on the unstable branch and only bug-fix releases on the 1.2 series.

In these circumstances, getting both the stability your customers need and the latest & greatest features is not easy. Time-based releases, pioneered by the GNOME project in 2001, and now almost universally followed by major free software projects, mitigate this. They give you periodic sync points where you can get software which meets a certain standard of feature stability and robustness. But no software release is bug-free, and this is true for both free and proprietary software. In The Mythical Man-Month, Fred Brooks described the difficulties of system integration, and estimated that 25% of the time in a project would be spent integrating and testing relationships between components which had already been planned, written and debugged. Building a system or a Linux distribution, then, takes a lot longer than just throwing the latest stable version of every project together and hoping it all works.

By participating actively in the QA process of the project leading up to the release, and by maintaining automated test suites and continuous integration, you can mitigate the effects of the shifting sands of unstable development versions and reduce the integration overhead once you have a stable release. At some stage, you must draw a line in the sand, and start preparing for a release. In the GNOME project, we progressively freeze modules: first the API & ABI of the platform, then the features to be included in existing modules, new module proposals, strings and user interface changes, before finally declaring a complete code freeze pre-release. Similarly, distributors decide early what versions of components they will include on their platforms, and while occasional slippages may be tolerated, moving to a new major version of a major component of the platform would cause integration testing to return more or less to zero – the overhead is enormous.

The difficulty, then, is what to do once this line is drawn. Serious bugs will be fixed in the stable branch, and they can be merged into your platform easily. But what about features you develop to solve problems specific to your device? Typically, free software projects expect new features to be built and tested on the unstable branch, but you are building your platform on the stable version. You have three choices at this point, none pleasant – never merge, merge later, or merge now:

  • Develop the feature you want on your copy of the stable branch, resulting in a delta which will be unique to your code-base, and which you will have to maintain separately forever. In addition, if you want to benefit from the features and bug fixes added to later versions of the component, you will incur the cost of merging your changes into the latest version, a non-negligible amount of time.
  • Once you have released your product and your team has more time, propose the features you have worked on piecemeal to the upstream project, for inclusion in the next stable version. This solution has many issues:
    • If the period is long enough, the codebase will have evolved far from the point where you branched, and merging your changes into the latest unstable tree will be a major task
    • You may be redundantly solving problems that the community has already addressed, in a different or incompatible way.
    • Your features may need substantial re-writing to meet community standards. This is doubly true if you have not consulted the community before developing the feature, to see how it might best be integrated.
    • In the worst case, you may have built a lot of software on an API which is only present in your copy of the component’s source tree, and if your features are rejected, you are stuck maintaining the component, or re-writing substantial amounts of code to work with upstream.
  • Develop your feature on the unstable branch of the project, submit it for inclusion (with the overhead that implies), and back-port the feature to your stable branch once it is included. This guarantees a smaller delta from the next stable version to your branch, and ensures your work gets upstream as soon as possible, but adds a time & labour overhead to the creation of your software platform

In all of these situations there is a cost: the time & effort of developing software within the community and back-porting it, the maintenance cost (and related unleveraged potential) of maintaining your own branch of a major component, or the huge cost of integrating a large delta back into the community-maintained version many months after the code has been written.

Intuitively, it feels like the long-term cheapest solution is to develop, where possible, features in the community-maintained unstable branch, and back-port them to your stable tree when you are finished. While this might be nice in an ideal world, feature proposals have taken literally years to get to the point where they have been accepted into the Linux kernel, and you have a product to ship – sometimes the only choice you have is to maintain the feature yourself out-of-tree, as Robert Love did for over a year with inotify.

While Mal addresses the raw value of the code produced by the community in the interim, he does not quantify the costs associated with these options. Indeed, it is difficult to do so. In some cases, there is not only a cost in terms of time & effort, but also in terms of the goodwill and standing of your engineers within the community – the type of cost which it is very hard to put a dollar value on. I would like to see a way to do so, though, and I think that it would be possible to quantify, for example, the community overhead (as a mean) by looking at the average time for patch acceptance and/or the number of lines modified from initial proposal to final mainline merge.
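
For what it’s worth, here is a crude sketch of how you might start measuring the first of those from a git repository. It rests on my own (generous) assumption that a commit’s author date approximates when the patch was first proposed and its committer date approximates when it finally hit mainline:

    import subprocess
    from statistics import mean

    def mean_acceptance_lag_days(repo_path):
        """Mean lag in days between author date (%at) and committer
        date (%ct) over a repository's history -- a rough proxy for
        how long patches wait before being accepted into mainline."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--pretty=%at %ct"],
            capture_output=True, text=True, check=True).stdout
        lags = [(int(committed) - int(authored)) / 86400
                for authored, committed
                in (line.split() for line in log.splitlines())]
        return mean(lags)

    # e.g. mean_acceptance_lag_days("/path/to/gtk+")

It says nothing about the lines modified between initial proposal and final merge, of course – measuring that would need the mailing list or patch tracker archives as well.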

Anyone have any other thoughts on ways you could measure the cost of maintaining a big diff, or the cost of merging a lot of code?

Six word novels


I happened on a Wired article about the 6 word novel this morning, in a link from a newsletter I’m subscribed to.

Like Haiku and other very formulaic structures, the six word novel gives the author enormous freedom while constraining them in a fish-bowl.

My favourite examples:

  • “For sale: Baby shoes. Never worn.” – Ernest Hemingway
  • “Longed for him. Got him. Shit.” – Margaret Atwood
  • “With bloody hands, I say good-bye.” – Frank Miller

Reading these, and seeing the second-place finisher of the competition in this newsletter (which was my favourite) – “My secret discovered. Plane ticket purchased.” – made me want to give it a try.

After some work, here’s my best effort.

As the noose tightened, she remembered.

A bit macabre; nonetheless, I’m pretty happy with the images it brings forward, and the questions it leaves unanswered.

Anyone else care to try?

GCDS round-up 4: Days 2 – 4


Sunday, Monday and Tuesday were the “core” days of the Gran Canaria Desktop Summit, with cross-desktop and KDE & GNOME specific presentations throughout. I caught a number of presentations, but mostly I was chatting in the hallway track, or doing work on the schedule, or actually working.

For me, the story of the 3 days was “parties”. I missed the early sessions on Sunday and Monday to get breakfast at 10am, after the parties hosted by Nokia (Sunday night) and Igalia (Monday night) – I was relieved that there was no party planned for Tuesday night, as my 35-year-old body couldn’t stand the pace! Great parties, mostly not marred by excessive boozing, and some great chats, notably with jrb, and with Adam Dingle and Jim Nelson from Yorba, makers of Shotwell, a Vala photo manager with some really nice features and plans. There were also some great discussions with Michael Meeks and Matthew Garrett on the futon during the Igalia party, with Federico Mena Quintero on architecture design patterns, and with Jorge Castro on dinosaurs. I also got to meet Joaquim from Igalia; the Macacque band were great, but I’m sure that a hoarse Lefty regretted “Sweet Home Chicago” and “Smoke on the Water” the day after.

I did get to some presentations though (here with a one line summary):

  • Power management by Matthew Garrett: “Power management isn’t doing the same amount of work, slower. Do less work, or you’re killing polar bears.”
  • ConnMan by Not Marcel Holtmann (Joshua Lock from Intel gave the talk in the end – thanks Emmanuele!): “ConnMan solves some problems for Moblin that NetworkManager wasn’t designed to solve.” (I think).
  • Bluetooth on Linux by Bastien Nocera: “It mostly works now”
  • Introduction to GNOME Shell, by Owen Taylor: “It’s pretty cool stuff already”
  • GNOME Zeitgeist, by Thorsten, Seif and Federico: “We record what you’re doing”
  • Communicating design in development, by Celeste Lyn Paul: “Keep it simple until they get the design principle, excessive realism too early just makes the discussion about the details”. Unfortunately, I don’t see a video available, highly recommended viewing if there was one.
  • GNOME 3.0: A live circus^Wstatus update, by Vincent Untz et al: “It’s not just GTK+, Zeitgeist and GNOME Shell”
  • GNOME 1,2,3, by Fernando Herrera and Xan Lopez: “A history of GNOME with thanks to YouTube” (my favourite presentation of the conference)
  • Personal Passion lightning talks, by Aaron Bockover: “We’re not just Free Software hackers!” This was absolutely my second favourite session of the conference. We got a 10-minute overview of the burnout cycle from Jono Bacon, underlining how important it is to have a life outside of software, and heard from people whose passions included running (complete with a soundtrack of me finishing a marathon), airplanes, motorcycling, cooking, bacon, dinosaurs, Aikido, Buddhism and calligraphy, trekking in Argentina, and also a couple of geeky ones on icon design and Scheme (which was very enjoyable indeed, thanks Andy!)

Update: Memory is playing tricks on me – of course, Tuesday evening was the highly anticipated meeting of SMASHED. We finally met at the Mare Baja again, where the opening night party was held, and enjoyed a bunch of tapas courtesy of CodeThink, before scoffing down some great whisk[e]y, including (from memory) a 21yo Highland Park, a nice 16yo Longmorn, a very old bottle of Oude Genever from Lefty, an old standard Connemara single malt, and a Yamazaki 10yo I brought.

SMASHED 2009 in Gran Canaria

Festivities carried on until after 1am, when I left with Andrew Savory and someone else (whose name I don’t recall), and Behdad got in an unprovoked fight with the footpath on the way back to the hotel – it came right up and hit him in the face. Some nice KDE people took him to the hospital to get sewn up – luckily the group photos had been taken earlier in the day.

Got back to the hotel around 2, and tried to catch up on some of that beauty sleep before Mobile Day on Wednesday in the new conference location in the university.

Last minute schedule change: Personal passions


I was talking with Aaron Bockover yesterday, and he told me that he wasn’t going to give the Silverlight talk which he had submitted back in March, and that he planned instead to give a presentation on something completely unrelated that he found interesting. Chris Blizzard suggested that he could give a lightning talk on amateur aeronautics, and as the idea spread, a whole bunch of suggestions came up for interesting non-GNOME subjects that GNOME community members care about, from architecture to running. There’s also a really valuable short talk on the burnout cycle (and how to break it) from Jono Bacon in there. So for 45 minutes, we will have a set of lightning talks reflecting the eclectic nature of the GNOME community – if you see Aaron and have something you are passionate about that you want to talk about for 3 to 5 minutes, grab him today, or turn up at his session at 5:30 and shout.

And spread the word!

European parliamentary elections


Warning: politics post

Since moving to France, the only elections I get to vote in here are the European and municipal elections – so on Sunday I blew the dust off my voter card & trotted down to my local “bureau de vote” as one of the 40% of the French electorate who voted. I had a chance to think about why the European elections inspire people so little.

In the past couple of weeks, debate about European issues has been mostly absent from newspapers and TV. What little we hear is more like celeb news – “he said, she said” or “the sworn enemies unite and appear on stage together pretending they like each other”. But to me, the fundamental questions about what we expect from Europe, and how a vote for one party or another will move towards that vision, are absent.

There are a few reasons for this – the political groupings in the EU parliament are detached from the local political landscape in France. Even the major groupings like the EPP, PES, the Liberals and the Greens have no identity in the election campaign. There is no European platform of note. Very little appears to be spent on advertising. In brief, the European election appears to the public to be nothing more than a mid-term popularity contest with little impact on people.

That is not to say that the EU has no impact. But the European parliament is quite hamstrung by the European law-making process, as we saw with the vote for the EUCD: in that case, the EU parliament was unhappy with the law proposed by the commission, and proposed many amendments which improved the law, only to see the majority of these reversed by the council of ministers. When the law came back to the parliament, there were three options available: accept the law, reject it outright (requiring an absolute majority of MEPs, difficult to obtain), or reject it by a simple majority (by proposing amendments) and send it to a conciliation committee, made up 50% of nominees from the council of ministers and 50% from the EU parliament.

The process is weighted toward the commission (which writes the law in the first place) and the council of ministers, who have veto power at every stage, and against the parliament, due to the requirement of an absolute majority for rejection in second reading. The commission and the council of ministers are both nominated by the governments of the member countries. I would argue that because of this, they don’t represent the European population, so much as they represent a cross-section of European political parties.

On other occasions, a stand-off between the governments and the EP is possible – as with the nomination of the Barroso commission in 2004. And when people are asked their opinion on the direction of Europe, as in the first referendum on the Nice treaty in Ireland, the French and Dutch referenda on the European constitution, and now the referendum on the Lisbon treaty in Ireland, if the result doesn’t match with what is supported by the member governments, a way is found to work around the result. In the case of a small country like Ireland, a couple of special case amendments, and you rerun the referendum. For the bigger countries like France, you renegotiate the form of the agreement so that it’s a treaty, not a single document (which, by the way, makes it harder to read and understand), so that you can ratify it with a working majority in parliament.

And so Europeans are slowly but surely distancing themselves from Europe. Fringe parties and independents representing a protest vote get very good scores, like the UKIP in the UK, or NPA and (until recently) the Front National in France. The European parliament is becoming less representative of European opinion, rather than more representative. Only 4 in 10 registered voters go to the polls. I would be willing to bet that Lisbon will not pass the second time around in Ireland, plunging Europe into another institutional crisis.

These are the twin problems facing Europe: the national governments in Europe are not representing the views of their citizens, and the only representative body we have is pretty ineffectual, even when they try to do something.

The solutions, in my opinion: elect commissioners and members of the council of ministers. Create Europe-wide political parties with Europe-wide campaigns, as in the US. Let the voters know what they’re voting for in the parliament, and allow them to vote for the executive branch of the European government. The path to greater voter activity in Europe is greater voter inclusion in the electoral process.

Trial by fire: distro upgrade


I recently upgraded from Ubuntu 8.10 to 9.04 and in the process “cleaned up” the distro using the very useful option to “make my system as close as possible to a new install” (I don’t remember if that’s the exact text, but that was the gist of it). Last night, I tried to use the printer in my office for the first time since upgrading, an Epson Stylus Office BX300F (all in one scanner/printer/copier/fax).

With 8.10, I finally got printing working – I don’t remember the details, but I do recall that I had to install pipslite and generate a new PPD file to get a working driver for the printer, which I found through the very useful OpenPrinting.org website. It’s a fairly new printer, on the market since September 2008 as far as I can tell, cheap, and part of a long-running series from Epson (the Linux driver available for download on the Epson site is dated early 2007).

Nonetheless I was reassured by OpenPrinting’s assurance that the printer and scanner “work perfectly”, and I wasn’t expecting to have to download a source package, install some development packages, and compile myself a new Ubuntu package to get it working. And then discover that there was a package available already that I just hadn’t found. But anyway, that was then…

When I upgrade my OS, I have a fairly simple expectation, that changes I have to make to the previous version to “fix” things don’t get broken post-upgrade. There are some scenarios where I can almost accept an exception – a few releases ago, I had problems with Xrandr because changes I had previously had to make to get my Intel hardware working properly were no longer necessary as X.org integrated and improved the driver – but it took me a while to figure out what was happening, and revert my Xorg config to the distro version.

Yesterday, when I had to print some documents, I got a nice error message in the properties of the printer that let me know I had a problem: “Printer requires the ‘pipslite-wrapper’ program but it is not currently installed. Please install it before using this printer.” And thus began the yak-shaving session that people could follow on twitter yesterday.

  • Search in synaptic for pipslite – found – but: “Package pipslite has no available version, but exists in the database.” Gah!
  • Try to find an alternative driver for the Epson installed on the system: no luck. Hit the forums.
  • Noticed that libsane-backends-extra wasn’t installed, installed it to get the epkowa sane back-end, and “scanimage -L” as root worked (for the first time) – so I went on a side-track to get the scanner working as a normal user
  • Figure out what USB node the scanner is, chgrp scanner, scanning works!
  • Then figure out how the group gets set on the node on plugging, found the appropriate udev rules file (/lib/udev/rules.d/40-libsane-extras), copied it to /etc/udev/rules.d, added a new line to get the scanner recognised (don’t forget to restart udev!), and scanning works!
  • Re-download a driver from the website linked to in OpenPrinting’s page for the printer – they have a .deb for Ubuntu 9.04! Rock!
  • Install driver, error message has changed, but still no printing: “/usr/lib/cups/filter/pipslite-wrapper failed”. Forums again.
  • Tried to regenerate a PPD file: pipslite-install: libltdl.so.3 not found. ls -l /usr/lib/*ltdl*: libltdl.so.7 – Bingo! The pre-built “Ubuntu” binaries don’t link to the right versions of some dependencies.
  • Download the source code, compile a new .deb (dpkg-buildpackage works perfectly), install, regenerate .ppd file, (don’t forget to restart CUPS), and we have a working printer!

4 hours lost.

Someone will doubtless follow up in the comments telling me how stupid I was not to [insert some “easy” way of getting the printer working] which didn’t involve downloading source code and compiling my own binary package, or fiddling about in udev to add new rules, or sullying my pristine upgrade with an unofficial package. Please do! I’m eager to learn. And perhaps someone else with the same problems will find this blog entry when they search for “Ubuntu Epson Stylus Office BX300F” and won’t have to figure things out the hard way like I did.

Please bear in mind when you do that I’m not a neophyte, that I’ve got some pretty good Google-fu, and that I’ve been using Linux for many many years – and it took me 4 hours to re-do something I’d already done once 6 months ago, and wasn’t expecting to have to do again. How much harder is it for a first timer when he buys a USB headset & mic, or printer/scanner, or webcam?

Update: After fixing the problem, I have discovered that the Gutenprint driver mentioned on the OpenPrinting page (using CUPS+Gutenprint) does work with my printer. It seems that if I had done a fresh install, rather than an upgrade, I would not have had this existing printer using a no longer installed “recommended” driver – as John Mark suggested to me on twitter, pipslite is no longer necessary. In addition, when I tested both drivers with the same image, there is a noticeable difference in the results – the gutenprint driver appears to use a higher alpha, resulting in colours being much lighter in mid-tones. The differences are quite remarkable.

Community governance links


As promised during my presentation yesterday, here are the various publications I linked to for information on evaluating community governance patterns (and anti-patterns):

And, for French speakers, a bonus link (although the language is dense and academic):
