More investigation of the slow processing speed for my
document. It seems like the slowdown is somewhere in
libxslt’s chunking code.
With chunking turned on, xsltproc took 4 minutes to
process the document, while without chunking (i.e. producing
one large file) it took only 1 minute 30 seconds — less than
half the processing time.
In comparison, using Jade to process the document took
about 2 minutes with chunking turned on. With chunking
turned off, it took 4 minutes.
I wonder if this means that xsltproc’s performance with
chunking turned on can be improved to beat its
nochunks performance. Either that, or the DSSSL stylesheets
for Jade could be optimised for the non-chunked case 🙂
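For anyone wanting to reproduce the comparison, here is a rough sketch of the two xsltproc runs. The stylesheet URLs are the stock DocBook XSL ones, and `mydoc.xml` is a made-up filename — substitute a local stylesheet path and your own document:

```shell
# Stock DocBook XSL stylesheets (use a local install path if you have one).
XSL=http://docbook.sourceforge.net/release/xsl/current/html

# Chunked output: one HTML file per chapter/section.
time xsltproc "$XSL/chunk.xsl" mydoc.xml

# Non-chunked output: a single large HTML file.
time xsltproc -o mydoc.html "$XSL/docbook.xsl" mydoc.xml
```

xsltproc also has a `--timing` option that reports parse/apply/save times separately, which might help pin down where in the chunking pass the time is actually going.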
I was updating my documentation generator for pygtk (the
one that tries to make the C reference docs for GTK look
like docs for Python). It was taking a while to process
with db2html (which uses Jade to convert from SGML to HTML),
so I thought I would look at using DocBook/XML and
DV’s xsltproc, which I had heard
ran a lot faster.
Unlike other people’s experiences, the docs ended up
taking over twice as long to process with xsltproc compared
to Jade! I suppose this had to do with the size (about
1.9MB of XML source) and the number of cross references
(the doc generation script added a lot of xrefs). On other
docs I tried, xsltproc seemed noticeably better.
I also found out that White Christmas made with coco pops
tastes pretty good.