In London


I’m in London at the moment with Carlos, Danilo, David and Steve for a Launchpad sprint focused on Bazaar and Rosetta. The weather is a nice change from Perth winter.

Next week I’ll be in Vilnius, Lithuania, and then it is back to London for another week before going home.


Vote Counting and Board Expansion


Recently one of the GNOME Foundation directors resigned, and there has been a proposal to expand the board by two members. In both cases, the proposed new members have been drawn from the list of candidates who did not win seats in the last election, in order from the highest vote-getter down.

While at first this sounds sensible, the voting system we use doesn’t provide a way of determining who would have been elected to the board if a particular candidate had been removed from the ballot.

The current voting system gives each foundation member N votes to assign to N candidates (where N is the number of seats on the board). The votes are then tallied for each candidate, and the N candidates with the most votes get the seats.
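The tally is simple to sketch in code (the ballots below are purely hypothetical, not real election data):

```python
from collections import Counter

def tally_block_vote(ballots, seats):
    """Each ballot lists up to `seats` distinct candidates; the `seats`
    candidates with the most votes win the seats."""
    counts = Counter()
    for ballot in ballots:
        counts.update(ballot)  # each ballot names a candidate at most once
    return [name for name, votes in counts.most_common(seats)]

# Three voters, two seats:
ballots = [["alice", "bob"], ["alice", "bob"], ["alice", "carol"]]
print(tally_block_vote(ballots, 2))  # ['alice', 'bob']
```

Note that the tally records nothing about each voter’s preferences beyond the names ticked, which is why the result cannot be recomputed with a candidate removed.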

If we look at last year’s results, there were 119 people who voted for Luis. If Luis had not been a candidate, then those 119 people would have used that vote to pick other candidates. The difference in the number of votes received by Vincent (the board member receiving the least votes) and Quim (the unsuccessful candidate with the most votes) was just 16, so those extra 119 votes could easily have affected the ordering of the remaining candidates.

Furthermore, if the election was for nine seats rather than seven then each foundation member would have had an additional two votes to cast.

This particular problem would not be an issue with a preferential voting system where each foundation member lists all the candidates in their order of preference. If a board member drops out, it is trivial to recalculate the results with that candidate removed: the relative orderings of the other candidates on the ballot are preserved. It is also possible to calculate the results for a larger number of seats.
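A minimal sketch of why the recalculation is trivial with full preference ballots (the ballots shown are illustrative, not real votes):

```python
def remove_candidate(ballots, dropped):
    """Strike one candidate from every ranked ballot; the relative
    ordering of the remaining candidates on each ballot is untouched."""
    return [[c for c in ballot if c != dropped] for ballot in ballots]

ballots = [["quim", "vincent", "luis"],
           ["luis", "quim", "vincent"]]
print(remove_candidate(ballots, "luis"))
# [['quim', 'vincent'], ['quim', 'vincent']]
```

The reduced ballots can then be fed back into whatever preferential counting method is in use, for any number of seats.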

Of course, all the candidates from the last election would make great board members so it isn’t so much of an issue in this case, but it might be worth considering for next time.

JHBuild Updates


Progress on JHBuild has continued (although I haven’t done much in the last week or so). Frederic Peters of JhAutobuild fame now has a CVS account to maintain the client portion of that project in-tree.

Perl Modules (#342638)

One of the other things that Frederic has been working on is support for building Perl modules (which use a Makefile.PL instead of a configure script). His initial patch worked fine for tarballs, but switching it over to the new generic version control code in jhbuild made it possible to support Perl modules maintained in any of the supported version control systems with no extra effort.

Speed Up Builds (#313997)

One of the other suggestions for jhbuild that came up a while ago was to make it “eleventy billion times faster” — in essence, to add a mode where it would only rebuild modules that had changed. While the idea has merit, the proposed implementation had some problems (it used the output of “cvs update” to decide whether things had changed).

I’d like to get something like this implemented, preferably with three possible behaviours:

  1. Build everything (the current behaviour).
  2. Build only modules that have changed.
  3. Build only modules that have changed, or have dependencies that have changed.

The second option is obviously the fastest, and is a useful option for collections of modules that should be API stable. The third option is essentially an optimisation of the first option. For both the second and third options, it is necessary to be able to tell if the code in a module has been updated. The easiest way to do this is to record an identifier for the tree state that is guaranteed to differ after an update.

The identifier should be cheap to calculate too, so will probably be dependent on the underlying version control system:

  • CVS – a hash of the names and versions of all files in the tree, which can be constructed by reading the CVS/Root, CVS/Repository and CVS/Entries files in the tree.
  • Subversion – a combination of (a) the repository UUID, (b) the path of the tree inside the repository and (c) the youngest revision for this subtree in the repository.
  • Arch – the output of “baz tree-id”.
  • Bzr – the working tree’s revision ID.
  • Darcs – a hash of the sequence of patches representing the tree, maybe?
  • Tarballs – the version number for the tarball.

On a successful build, the ID for the tree would be recorded. On subsequent builds, the ID gets recalculated after updating the tree. The new and old IDs are then used to decide on whether to build the module or not, according to the chosen policy.
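A rough sketch of how this could fit together for the CVS case, combining the tree ID with the three policies (the function names, the SHA-1 choice and the policy names are my own assumptions here, not actual jhbuild code):

```python
import hashlib
import os

def cvs_tree_id(srcdir):
    """Hash the CVS metadata files in a checkout. Any update that changes
    file versions changes CVS/Entries, and hence the ID."""
    h = hashlib.sha1()
    for dirpath, dirnames, filenames in os.walk(srcdir):
        dirnames.sort()  # deterministic traversal order
        if os.path.basename(dirpath) == "CVS":
            for name in ("Root", "Repository", "Entries"):
                path = os.path.join(dirpath, name)
                if os.path.exists(path):
                    with open(path, "rb") as f:
                        h.update(f.read())
    return h.hexdigest()

def maybe_build(module, srcdir, saved_ids, policy, deps_changed):
    """Decide whether to rebuild `module` under the chosen policy:
    'all', 'updated', or 'updated-deps'."""
    new_id = cvs_tree_id(srcdir)
    changed = saved_ids.get(module) != new_id
    if policy == "all":
        build = True
    elif policy == "updated":
        build = changed
    else:  # "updated-deps": module or any of its dependencies changed
        build = changed or deps_changed
    if build:
        saved_ids[module] = new_id  # record the ID on a successful build
    return build
```

The same `maybe_build` logic would work unchanged for the other version control systems; only the tree-ID function differs per backend.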

Statistics of Breath Testing

Yesterday there were some news reports about the opposition party in Victoria lodging an FOI request and finding that the breath testers used to measure blood alcohol content routinely under-report the readings by up to 20%. The party used this fact to argue that the devices were giving negative readings for some people who were actually a little over the limit. On the face of it this sounds like a problem, but when you look at the statistics the automatic reduction makes sense.

The main point is that breathalyzer tests are not completely accurate. Let’s consider the case where a breathalyzer that applies no adjustment gives a 0.05 BAC reading. We’d expect the probability distribution for the real BAC to be a normal distribution with 0.05 BAC as the mean:

So the real BAC may be either above or below 0.05. Given that it is only an offence to have a BAC above 0.05, the test would only give even odds that the person had broken the law. That would make it pretty useless for getting a conviction.

If the displayed reading on the breathalyzer is automatically reduced by two standard deviations, the picture is different. For a displayed BAC reading of 0.05, the real BAC will still be normally distributed, but the mean will be offset:

So this gives a 97.5% probability that the BAC is above 0.05. So while removing the automatic result reduction might catch more people over the 0.05 limit, it would also drastically increase the number of people caught while below the limit.
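These probabilities can be checked against the normal CDF. The standard deviation below is an arbitrary illustration; note that the familiar 97.5% figure corresponds to a 1.96σ offset (the usual two-sigma rule of thumb), while an exact 2σ offset gives about 97.7%:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for X ~ Normal(mean, sd)."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

limit = 0.05
sd = 0.005        # assumed measurement error, for illustration only

# No adjustment: a displayed 0.05 means true BAC ~ N(0.05, sd).
p_over = 1.0 - normal_cdf(limit, limit, sd)
print(round(p_over, 3))       # 0.5 — even odds

# Display reduced by 2*sd: a displayed 0.05 means the raw reading was
# 0.05 + 2*sd, so true BAC ~ N(0.05 + 2*sd, sd).
p_over_adj = 1.0 - normal_cdf(limit, limit + 2 * sd, sd)
print(round(p_over_adj, 3))   # 0.977
```

The result is independent of the particular `sd` chosen, since the offset is expressed in units of the standard deviation itself.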

To reduce the number of false negatives without increasing the false positives, the real answer is to use a more accurate test so that the error margins are lower.