IPP

Out of curiosity, I decided to write a little IPP client library in Python. An
in-progress version can be found here.

In less than 500 lines of Python, I have an IPP message
encoder/decoder, and some higher level classes to perform a few
operations on printers and jobs. I’ve been able to successfully talk to
the following IPP servers:

  • CUPS (I’ve also got a little code to perform some of the CUPS
    proprietary operations).

  • an HP LaserJet 5100 and a 2300 — both with JetDirect 615n
    (J6057A) cards.

  • a Lexmark Optra C710.

The following didn’t want to talk to me:

  • an HP LaserJet 4V with a JetDirect 400n (J4100A) card (it seems
    to always give me a client-error-bad-request response).

  • a Canon iR C3200 (incidentally, this printer/copier apparently
    runs an embedded version of SunOS 4.1.4).

I’m probably doing something wrong for these last two, although it
is a bit difficult to work out what.

When talking to CUPS, I can use the proprietary CUPS-Get-Printers
operation to list all the printers it knows about, which would make it
pretty easy to provide the functionality of something like gnome-print-manager:

>>> import ipplib
>>> cups = ipplib.CUPSServer('ipp://localhost/')
>>> for info in cups.get_printer_info():
...     print info['printer-name'], '-', info['printer-uri-supported']
...
harryplotter - ipp://hostname/printers/harryplotter
 ...
>>>

Similarly, it is easy to list the jobs (pending or completed) for a
printer. I still haven’t tried out any of the operations that can
change a printer or job’s status, but in theory that should all work 🙂.

Thoughts on the protocol

While IPP uses HTTP as a transport, there is a fair bit of overlap
between what the two protocols do, such as:

  • request methods/operations and response status codes.
  • identification of the resource the operation is being performed
    on.
  • IPP attributes, which seem quite similar to HTTP headers.
  • MIME type declarations for message bodies.
  • compression of message bodies.

Other things serve no purpose when tunneled through HTTP, such as
message sequence numbers. Apparently the reason for this is so that IPP
could in the future be sent directly using a custom protocol (in that
case, the sequence numbers would allow for pipelining of requests, and
out of order responses). However, I would be surprised if such a
protocol ever gets developed. IPP will probably continue to use HTTP as
its transport.
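The sequence number is the request-id field in the fixed header that
starts every IPP message, as defined by RFC 2910. As a minimal sketch of
what the on-the-wire encoding looks like, here is a hypothetical
Get-Printer-Attributes request encoder (the function and constant names
are my own; only the byte layout comes from the spec):

```python
import struct

# Delimiter and value tags from RFC 2910 (constant names are my own).
OPERATION_ATTRS = 0x01        # operation-attributes-tag
END_OF_ATTRS = 0x03           # end-of-attributes-tag
TAG_CHARSET = 0x47
TAG_NATURAL_LANGUAGE = 0x48
TAG_URI = 0x45

GET_PRINTER_ATTRIBUTES = 0x000B   # operation-id from RFC 2911

def encode_attribute(tag, name, value):
    """Encode one attribute: value-tag, name-length, name,
    value-length, value (lengths are big-endian 16 bit)."""
    name, value = name.encode('ascii'), value.encode('ascii')
    return (struct.pack('>BH', tag, len(name)) + name +
            struct.pack('>H', len(value)) + value)

def encode_request(operation_id, request_id, printer_uri):
    # Fixed header: version-number (1.1), operation-id, request-id.
    msg = struct.pack('>BBHI', 1, 1, operation_id, request_id)
    msg += bytes([OPERATION_ATTRS])
    # charset and natural-language are mandatory and must come first.
    msg += encode_attribute(TAG_CHARSET, 'attributes-charset', 'utf-8')
    msg += encode_attribute(TAG_NATURAL_LANGUAGE,
                            'attributes-natural-language', 'en')
    msg += encode_attribute(TAG_URI, 'printer-uri', printer_uri)
    msg += bytes([END_OF_ATTRS])
    return msg
```

The resulting bytes would then be sent as the body of an HTTP POST with
a Content-Type of application/ipp.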

This does lead to complications though:

  • The URI you do an HTTP POST to may differ from the URI specified
    inside the IPP message. The spec says that the HTTP level URI should be
    ignored. If you have ever looked at the CUPS log files, you might have
    noticed that it almost always posts to “/” rather than the resource it
    is acting on. To make matters more complex, some of the proprietary CUPS
    operations
    require that you post to a different HTTP URI to the one
    in the IPP message.

  • A request can fail at one of two levels. An IPP client will need
    to detect and handle both HTTP level and IPP level error responses. In
    fact, most IPP error messages will come back as “HTTP/1.1 200 OK”.
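A client therefore has to check both levels on every response. Sketching
that in Python (the helper name is my own; the status-code really does
sit in bytes 2–3 of the response body, and the error ranges come from
RFC 2911):

```python
import struct

def check_response(http_status, body):
    """Detect failures at both levels; return the IPP status-code.

    Per RFC 2911, 0x0000-0x00ff are success codes, 0x0400-0x04ff
    client errors and 0x0500-0x05ff server errors.
    """
    if http_status != 200:                    # HTTP level failure
        raise IOError('HTTP error %d' % http_status)
    # Response header: version-number (two bytes), then status-code.
    major, minor, status = struct.unpack('>BBH', body[:4])
    if status >= 0x0400:                      # IPP level failure
        raise IOError('IPP error: status-code 0x%04x' % status)
    return status
```

Note that the HTTP check alone tells you almost nothing: a
client-error-bad-request response still arrives inside a 200 reply.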

Apart from a few warts, IPP seems like a pretty nice protocol. It
is fairly easy to parse (assuming you have an HTTP client library to
use), and is very extensible. A lot nicer than LPR 🙂.

23 February 2004

louie: isn’t the fact that the introduction of a third credible candidate causes problems a problem in itself?

There are vote counting schemes in use that don’t penalise similar candidates, such as the single transferable vote system used in Australia to elect MPs. Rather than splitting the vote, the least popular candidate’s votes get transferred to the next preference on each ballot. This process is repeated until one candidate has a clear majority. There is something fundamentally wrong with a system where a minor candidate does more harm to their cause than good.
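The single-seat counting process above can be sketched in a few lines
of Python (plain instant-runoff; the candidate names in the usage note
are illustrative, and ties are broken arbitrarily):

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the winner given ranked ballots, each a list of
    candidate names in order of preference."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter()
        for ballot in ballots:
            for c in ballot:
                if c in candidates:
                    tally[c] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:          # clear majority reached
            return leader
        # Otherwise eliminate the least popular candidate; their
        # ballots transfer to the next surviving preference.
        candidates.discard(min(tally, key=tally.get))
```

For example, with first preferences split 4/3/2 between A, B and C, C
is eliminated first, and if C’s voters preferred B next, B wins 5–4
even though A led on first preferences.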

It’s due to similar reasons that I’ve brought up switching to the multiple seat version of STV (which is used for Australian senate elections) for the Gnome Foundation elections.

Guantanamo Bay

It feels really weird to agree with John Howard. There are currently two Australian citizens who have been held at Guantanamo Bay for over two years without being charged (one of them wasn’t even in the war zones — he was captured in Pakistan). Apparently the US will only release them to Australia on the condition that they be prosecuted. Unfortunately neither person committed any crime under Australian law as it was at the time.

We now have Mark Latham offering to support the Government if they want to change the new terrorism laws to allow them to be applied retrospectively to the two. This seems like such a bad idea on so many levels (think about the precedent). It doesn’t look like that will be happening though, since John Howard rejected the offer.

Maybe the US will get a new sensible president, and the situation will get resolved sensibly. Maybe not.

17 December 2003

Callum: the slowness of the modular DocBook XSLT stylesheets is in the chunking code, as I found out a while ago. You will find that if you turn off chunking (i.e. produce one huge output file rather than many smaller files), the processing time will be cut in half. Interestingly, the older DSSSL stylesheets showed the opposite behaviour.

One thing that might be interesting would be to try porting gtk-doc over to using Shaun McCance’s new XSLT stylesheets (there are more details on his website). If these are suitable, they could give a significant boost to building API and user docs.

22 October 2003

Laptop

I started running out of space on my laptop, so decided it would be easier to buy a new hard disk rather than clean things up (after all, I could get a 40GB drive for about AU$200, which would give me more than 3 times as much storage, and had almost identical power requirements). If only things were that easy …

After backing everything up, the first problem was taking the old hard disk out of the machine. The m300 is quite a nice machine, as you only need to undo one screw to remove the hard drive mounting. Getting the hard drive out of the mounting was a bit more of a problem as there were two torx screws holding the drive in. Moreover, I didn’t have access to a small enough torx driver 🙁. Luckily the screw heads were raised enough that it was possible to undo them using some pliers without damaging anything.

After getting the new drive into the mounting frame and into the machine, I needed to get Windows 98 onto the drive. This was required to get hibernation working under Linux (the BIOS saves the contents of memory to a special file on the Windows partition). It turned out that the CD that came with the laptop was a quick restore disk, and wanted to create a full 40GB partition rather than use the smaller partition I had already created. It then proceeded to screw up the restore, leaving me with a system that (a) wouldn’t boot fully, and (b) was convinced that there were errors on the hard disk, but just couldn’t find them. I guess the restore CD managed to mis-format the drive somehow. In the end, I had to borrow a Windows 98 CD and do a clean install, which worked perfectly (and let me install to a smaller partition). I can see how a quick restore CD could be useful in many common cases, but this one was nowhere near as robust as I would have liked.

Compared to this, getting Linux up and running was trivial. After completing the restore, I did a few tests with hdparm -Tt, which showed that the new disk had a read performance of 25MB/s (in comparison, the old disk did 13MB/s). This has resulted in noticeably shorter compile times on the laptop. It is also a lot quieter when busy.

This should put off the need to get a new laptop for quite a while.

Gnome 2.5

Updated my system to CVS head, and things are looking good. The new Nautilus feels even faster (especially in spatial mode). Apparently metadata plugins are planned for 2.6, which should be interesting. It should allow people to implement things like TortoiseCVS, augmenting the existing views rather than creating a completely new view like Apotheke does.

Python

Been reading over Ulrich Drepper’s paper on how to write shared libraries, and it struck me that use of the PyArg_ParseTupleAndKeywords() function will result in a lot of relocations that can’t be avoided.

I did a few tests using some dummy extension modules that contained a number of functions. I tried varying the number of functions, number of arguments for each function, and whether keyword arguments were supported.

I found that in the PyArg_ParseTuple() case, the number of relocations was proportional to the number of functions (as expected — a few relocations for each entry in the PyMethodDef array). For the PyArg_ParseTupleAndKeywords() case, there was also one relocation for each argument listed in the keyword list array, which dominated as the number of arguments went up.

I haven’t checked how much influence this has on the startup speed, but it would make a difference to the amount of code shareable between processes for larger modules.