JHBuild Updates

The progress on JHBuild has continued (although I haven’t done much in the last week or so). Frederic Peters of JhAutobuild fame now has a CVS account to maintain the client portion of that project in tree.

Perl Modules (#342638)

One of the other things that Frederic has been working on is support for building Perl modules (which use a Makefile.PL instead of a configure script). His initial patch worked fine for tarballs, but by switching over to the new generic version control code in jhbuild it was possible to support Perl modules maintained in any of the supported version control systems without extra effort.

Speed Up Builds (#313997)

One of the other suggestions for jhbuild that came up a while ago was to make it “eleventy billion times faster”: in essence, adding a mode where it would only rebuild modules that had changed. While the idea has merit, the proposed implementation had some problems (it used the output of “cvs update” to decide whether things had changed).

I’d like to get something like this implemented, preferably with three possible behaviours:

  1. Build everything (the current behaviour).
  2. Build only modules that have changed.
  3. Build only modules that have changed, or have dependencies that have changed.

The second option is obviously the fastest, and is a useful option for collections of modules that should be API stable. The third option is essentially an optimisation of the first. For both the second and third options, it is necessary to be able to tell whether the code in a module has been updated. The easiest way to do this is to record an identifier for the tree state that will differ after an update.

The identifier should also be cheap to calculate, so it will probably be specific to the underlying version control system:

  • CVS – a hash of the names and versions of all files in the tree, which can be constructed by reading the CVS/Root, CVS/Repository and CVS/Entries files in the tree (see the sketch after this list).
  • Subversion – a combination of (a) the repository UUID, (b) the path of the tree inside the repository and (c) the youngest revision for this subtree in the repository.
  • Arch – the output of “baz tree-id”.
  • Bzr – the working tree’s revision ID.
  • Darcs – a hash of the sequence of patches representing the tree, maybe?
  • Tarballs – the version number for the tarball.
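
As a concrete illustration of the CVS case, here is a minimal sketch of how such a tree identifier could be computed. The function name and the choice of SHA-1 are my own assumptions, not part of any existing jhbuild patch:

    import hashlib
    import os

    def cvs_tree_id(treedir):
        # Sketch only: hash the repository location plus the name and
        # revision of every file recorded in the CVS/Entries files.
        sha = hashlib.sha1()
        for name in ('Root', 'Repository'):
            sha.update(open(os.path.join(treedir, 'CVS', name), 'rb').read())
        for dirpath, dirnames, filenames in os.walk(treedir):
            dirnames.sort()  # make the walk order deterministic
            entries = os.path.join(dirpath, 'CVS', 'Entries')
            if not os.path.exists(entries):
                continue
            for line in open(entries):
                # file entries look like "/name/revision/timestamp/options/tag"
                if line.startswith('/'):
                    name, revision = line.split('/')[1:3]
                    relpath = os.path.relpath(os.path.join(dirpath, name), treedir)
                    sha.update(('%s %s\n' % (relpath, revision)).encode())
        return sha.hexdigest()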

On a successful build, the ID for the tree would be recorded. On subsequent builds, the ID gets recalculated after updating the tree. The new and old IDs are then used to decide whether to build the module, according to the chosen policy.
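
To make the three policies concrete, here is a rough sketch of how that decision could work; the policy names and module attributes are hypothetical, not existing jhbuild options:

    def modules_to_build(modules, policy, old_ids, new_ids):
        # 'modules' must be sorted in dependency order; old_ids maps module
        # names to the tree IDs recorded at the last successful build, and
        # new_ids to the IDs recalculated after updating each tree.
        rebuild = set()
        for module in modules:
            changed = new_ids.get(module.name) != old_ids.get(module.name)
            deps_changed = any(dep in rebuild for dep in module.dependencies)
            if (policy == 'all'
                    or (policy == 'updated' and changed)
                    or (policy == 'updated-deps' and (changed or deps_changed))):
                rebuild.add(module.name)
        return rebuild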

JHBuild Improvements

I’ve been doing most JHBuild development in my bzr branch recently. If you have bzr 0.8rc1 installed, you can grab it here:

bzr branch http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.dev

I’ve been keeping a regular CVS import going at http://www.gnome.org/~jamesh/bzr/jhbuild/jhbuild.cvs using Tailor, so changes people make to module sets in CVS make their way into the bzr branch. I’ve used a small hack so that merges back into CVS get recorded correctly in the jhbuild.cvs branch:

  1. Apply the diff between jhbuild.cvs and jhbuild.dev to my CVS checkout and commit.
  2. Modify tailor to pause before committing to the jhbuild.cvs branch.
  3. While tailor is paused, run bzr revert followed by a merge of the same changes from jhbuild.dev.
  4. Let tailor complete the commit.

It’s a bit of a hack, but it allows me to do repeated merges from the CVS import to my development branch (and back again). It also means that any file moves I do in my bzr branch are reflected in the CVS import when merged.

So now when filing bug reports on jhbuild, you can submit fixes in the form of bzr branches as well as patches.

So, on to the improvements:

Generic Version Control Interface

Previously, to add support for a new version control system the following additions were needed:

  • Some code to invoke the version control utility to make checkouts and update working trees.
  • Code to implement the build state machine for modules using the version control system (these classes would generally derive from AutogenModule which implemented most of the build logic).
  • Code to create instances of the above module type when parsing .modules files.

This was quite a bit of work, and in the end would only help if the code in question was set up to build the same way as most Gnome modules (i.e. with an autogen.sh script and autotools). If you wanted to build a module using Python distutils out of Subversion, a new module type would be needed. If you wanted to build a distutils module from a tarball, that would be another module type again.

With the new system, the different version control support modules provide a common interface. This means that a single module type is capable of implementing the build state machine for any version control system. Similarly, it should now be possible to implement distutils module support such that it will work with any supported version control system.
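
To illustrate, the common interface could look roughly like this; the class and method names are indicative only, not jhbuild’s actual API:

    import os

    class Branch:
        # one subclass of this per version control system (CVS, Subversion, ...)
        def __init__(self, checkoutroot, module):
            self.srcdir = os.path.join(checkoutroot, module)

        def checkout(self, buildscript):
            # create a fresh working tree (e.g. run "cvs checkout")
            raise NotImplementedError

        def update(self, buildscript):
            # bring an existing working tree up to date (e.g. "cvs update")
            raise NotImplementedError

        def tree_id(self):
            # return an identifier for the current state of the tree
            raise NotImplementedError

A module type then only needs to call checkout() or update() on whatever branch object it is given, without knowing which version control system is behind it.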

This work is not yet finished, though: a bit more work is needed to parse version control system agnostic module definitions from .modules files. Once that is done, a fair bit of the current syntax can be deprecated and eventually removed, and adding support for a new version control system shouldn’t take more than 100-200 lines of code.

Module Type Simplifications

As well as reducing the number of module types that need to be maintained in JHBuild, I’ve been working on simplifying the code in these module types. Previously, each stage of a module build was represented by a method call on the module type. The return value of the method was used to indicate (a) whether the stage succeeded, (b) what the next state would be, and (c) some alternative next states to fall back on if an error occurred (e.g. offering to rerun autogen.sh).

With the new system, the next state and error states are declared as attributes on the method object. The method can indicate a failure by raising a particular exception. This greatly simplifies the cases where a build stage involves a number of separate actions that could each fail individually, since the exception cuts processing short without the error checks getting in the way of the code.
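
As a rough sketch of the pattern (the decorator and all names here are hypothetical; the real jhbuild code differs in detail):

    import os

    class BuildStateError(Exception):
        pass

    def build_stage(next_state, error_states=()):
        # attach the state machine metadata as attributes on the method
        def decorate(method):
            method.next_state = next_state
            method.error_states = error_states
            return method
        return decorate

    class AutogenModule:
        def __init__(self, srcdir):
            self.srcdir = srcdir

        @build_stage(next_state='build', error_states=('force_checkout',))
        def do_configure(self, buildscript):
            # any step that fails simply raises, so no per-step error
            # checking clutters the method body
            if not os.path.exists(self.srcdir):
                raise BuildStateError('source tree is missing')
            buildscript.execute(['./autogen.sh'], cwd=self.srcdir)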

There are still a few module build stages not converted to the new system since their next state depends on various config settings (e.g. if running “make check” has been enabled or not). Since these generally involve skipping a stage based on some criteria, the plan is to move the logic to the stage being skipped, which should simplify things further.

intltool and po/LINGUAS

Rodney: my suggestions for intltool were not intended as an attack. I just don’t really see much benefit in intltool providing its own po/Makefile.in.in file.

The primary difference between the intltool po/Makefile.in.in and the version provided by gettext or glib is that it calls intltool-update rather than xgettext to update the PO template, so that strings get correctly extracted from file types like desktop entries, Bonobo component registration files, and various other XML files.

The current method intltool uses to get intltool-update called (providing its own po/Makefile.in.in) is a lot better than the previous method (maintaining patches for the po/Makefile.in.in files from various versions of gettext and then deciding which one to apply). However, it can make it difficult to take advantage of new gettext features (the po/LINGUAS file being the most recent example). If intltool-update could be called without any modification to the po/Makefile.in.in file that gettext installs, this sort of problem wouldn’t occur.

The standard po/Makefile.in.in uses the makefile variable $(XGETTEXT) as the program to extract translations for the PO template. If intltool had a program (or a mode for one of the existing programs) that was command line argument compatible with xgettext, then all that would be necessary would be to redefine $(XGETTEXT) to the appropriate value. Since $(XGETTEXT) is set through a simple autoconf substitution, this should be very easy to do from intltool’s M4 autoconf macro.

Ekiga

I’ve been testing out Ekiga recently, and so far the experience has been a bit hit and miss.

  • Firewall traversal has been unreliable. Some numbers (like the SIPPhone echo test) work great. In some cases, no traffic has gotten through (where both parties were behind Linux firewalls). In other cases, voice gets through in one direction but not the other. Robert Collins has some instructions on setting up siproxd which might solve all this though, so I’ll have to try that.
  • The default display for the main window is a URI entry box and a dial pad. It would make much more sense to display the user’s list of contacts here instead (which is currently kept in a separate window). I rarely enter phone numbers on my mobile phone, instead using the address book. I expect that most VoIP users would be the same, provided that using the address book is convenient.
  • Related to the previous point: the Ekiga.net registration service seems to know who is online and who is not. It would be nice if this information could be displayed next to the contacts.
  • Ekiga supports multiple sound cards. It was a simple matter of selecting “Logitech USB Headset” as the input and output device on the audio devices page of the preferences to get it to use my headset. Now I hear the ring on my desktop’s speakers, but can use the headset for calls.
  • It is cool that Ekiga supports video calls, but I have no video camera on my computer. Even though I disabled video support in the preferences, there are still a lot of knobs and whistles in the UI related to video.

Even though there are still a few warts, Ekiga shows a lot of promise. As more organisations provide SIP gateways (such as the UWA gateway), this software will become more important as a way of avoiding expensive phone charges as well as a way of talking to friends and colleagues.

Annoying Firefox Bug

Ran into an annoying Firefox bug after upgrading to Ubuntu Dapper. It seems to affect rendering of ligatures.

At this point, I am not sure if it is an Ubuntu specific bug. The current conditions I know of to trigger the bug are:

  • Firefox 1.5 (I am using the 1.5.dfsg+1.5.0.1-1ubuntu10 package).
  • Pango rendering enabled (the default for Ubuntu).
  • The web page must use a font that contains ligatures and use those ligatures. Since the “DejaVu Sans” font includes ligatures and is the default “sans serif” font in Dapper, this is true for a lot of websites.
  • The text must be justified (e.g. use the “text-align: justify” CSS rule).

If you view a site where these conditions are met with an affected Firefox build, you will see the bug: ligature glyphs will be used to render character sequences like “ffi”, but only the advance of the first character’s normal glyph is used before drawing the next glyph. This results in overlapping glyphs.

It also results in a weird effect when selecting text, since the ligatures get broken apart if the selection begins or ends in the middle of a ligature, causing the text to jump around.

I wonder whether this bug affects the Firefox packages in other distributions, or if it is an Ubuntu-only problem.