GHashTable Memory Requirements

March 26th, 2011 by mortenw

Someone threw an 8-million-cell CSV file at Gnumeric. We handle it, but barely. Barely is still better than LibreOffice and Excel, if you don’t mind that it takes 10 minutes to load. And if you have lots of memory.

I looked at the memory consumption and, quite surprisingly, the GHashTable we use for the cells in a sheet is at the top of the list: a GHashTable with 8 million entries uses 600MB!

Here’s why:

  • We are on a 64-bit platform, so each GHashNode takes 24 bytes, including four bytes of padding (see the struct sketch after this list).
  • At a little less than 8 million entries the hash table is resized to about 16 million entries.
  • While resizing, we therefore have 24 million GHashNodes around.
  • 24 bytes times 24,000,000 nodes is around 600MB, all of which is accessed during the resize.
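For reference, the per-entry node looks roughly like this on a 64-bit platform (the layout is paraphrased from the GLib source of the time; treat it as an approximation):

    /* Roughly GLib's per-entry payload: two pointers plus a cached
       32-bit hash value, padded up to 24 bytes on a 64-bit platform. */
    typedef struct
    {
      gpointer key;       /* 8 bytes */
      gpointer value;     /* 8 bytes */
      guint    key_hash;  /* 4 bytes, plus 4 bytes of trailing padding */
    } GHashNode;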

So what can be done about it? Here are a few things:

  • Some GHashTables have identical keys and values. For such tables, there’s no need to store both.
  • If the hash function is cheap, there’s no need to keep the hash values around. This gets a little tricky with the unused/tombstone special pseudo-hash values used by the current implementation. It can be done, though.

I wrote a proof of concept which skips things like key destructors because I don’t need them. This uses 1/3 the memory of GHashTable. It might be possible to lower this further if an in-place resize algorithm could be worked out.
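To give an idea of the direction, here is a minimal sketch of such a slimmed-down table for the key-equals-value case: open addressing over a plain pointer array, with the hash recomputed on demand instead of cached. The names are hypothetical, and deletion (and thus the tombstone issue) is left out; this illustrates the idea, it is not the actual proof of concept.

    #include <glib.h>

    /* One pointer per slot; NULL marks an empty slot, so NULL keys are
       not supported.  Each slot costs 8 bytes instead of a 24-byte
       GHashNode, which is where the factor-of-three saving comes from. */
    typedef struct
    {
      gpointer   *slots;    /* NULL = empty */
      gsize       n_slots;  /* always a power of two */
      gsize       n_used;
      GHashFunc   hash;
      GEqualFunc  equal;
    } PointerSet;

    static PointerSet *
    pointer_set_new (GHashFunc hash, GEqualFunc equal)
    {
      PointerSet *set = g_new0 (PointerSet, 1);
      set->n_slots = 16;
      set->slots = g_new0 (gpointer, set->n_slots);
      set->hash = hash;
      set->equal = equal;
      return set;
    }

    static void
    pointer_set_resize (PointerSet *set)
    {
      gpointer *old = set->slots;
      gsize old_n = set->n_slots, i;

      /* Doubling still needs the old and new arrays live at the same
         time, but that is 8 + 16 bytes per old slot, not 24 + 48.  */
      set->n_slots = old_n * 2;
      set->slots = g_new0 (gpointer, set->n_slots);

      for (i = 0; i < old_n; i++)
        if (old[i])
          {
            gsize j = set->hash (old[i]) & (set->n_slots - 1);
            while (set->slots[j])
              j = (j + 1) & (set->n_slots - 1);
            set->slots[j] = old[i];
          }
      g_free (old);
    }

    static void
    pointer_set_add (PointerSet *set, gpointer key)
    {
      gsize j;

      if (set->n_used * 4 >= set->n_slots * 3)  /* keep load below 3/4 */
        pointer_set_resize (set);

      j = set->hash (key) & (set->n_slots - 1);
      while (set->slots[j] && !set->equal (set->slots[j], key))
        j = (j + 1) & (set->n_slots - 1);

      if (!set->slots[j])
        {
          set->slots[j] = key;
          set->n_used++;
        }
    }

Lookup follows the same probe loop; removal would need tombstones or backward-shift deletion, which is the tricky part alluded to above.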

On Profiling and Sharks

November 16th, 2010 by mortenw

Be careful in applying simplified models to complicated systems.

For example, Federico has been a persistent proponent of using profiling as the major (only?) guide to where to improve performance: Profiling A+B. And that is great as far as it goes. But no further.

If you were studying sharks on the deck of the fishing boat that caught them, you might describe them as nearly blind, having no sense of smell, and unable to acquire enough oxygen by themselves. That is not a very accurate description of how they appear in their natural environment.

The point is that the A+B argument applies to a system where only one program is running. The model has been simplified to the point where there is no interaction with the outside world, so it is not well suited to describing a program’s behaviour in a realistic system. For example, if B is very I/O intensive, then it might well be the right place to concentrate performance efforts.

I/O is not the only way to hit other programs hard. Memory usage is another — especially if the program is long running.

Code Quality, Part II

September 29th, 2010 by mortenw

I have been known to complain loudly when I see code that I feel should have been better before seeing the light of day. But what about my own code? Divinely inspired and bug free from day one? Not a chance!

With Gnumeric as the example, here is what we do to keep the bug count down.

  • Testing for past classes of errors. For example, we found errors in Gnumeric’s function help texts, such as referring to arguments that do not exist or not describing all the arguments. The solution was not only to fix the problems we found, but also to write a test that checks all the function help texts for this kind of error. Sure enough, there were several more. They are gone now, and new ones will not creep in. We do not like to make the same mistake twice!
  • Use static code checkers. This means that we keep the warning count from “gcc -Wall” down, so we know nothing serious is being ignored. We have looked at clang and Coverity output and fixed the apparent problems. (Those tools have pretty high false-report rates, though.) We occasionally use sparse too, and we have a handful of dumb perl scripts looking for things like GObject destroy/finalize/etc. handlers that fail to chain up to the parent class.
  • Use run-time code checkers. Gnumeric has been run through Valgrind and Purify any number of times. It is part of the test suite, so it happens regularly. This is regrettably getting harder because newer versions of Gtk+ and the libraries upon which it is built hold on to more and more memory with no way of forcing a release. Glib has a built-in checker for some memory problems. We use that too.
  • Automated tests of as many parts of the program as we have found time to write. The key word here is “automated”. I used to be somewhat scared of changing the format-string (number rendering) code, because there was basically no way of making sure no new errors were introduced in that hairy piece of code. With the extensive test suite, I have no such reservations anymore.
  • Fuzzing, i.e., deliberately throwing garbled input at the program. I wrote tools to do this subtly for xml and for files inside a zip archive, in such a way that the files are still syntactically correct xml or zip files — otherwise you end up only testing the xml/zip parser, which is fine, but not sufficient. (A sketch of the idea follows this list.)
  • Google for Gnumeric. Not everyone will report problems to us, but they might discuss issues with others. Google seems to be pretty good at finding such occurrences.
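To illustrate the fuzzing point, here is a minimal sketch of the structure-preserving idea using libxml2: parse the document, perturb only text payloads (digits, in this toy version), and write it back out, so the result is still well-formed and exercises more than the parser. This is just an illustration of the approach, not the actual tool.

    #include <libxml/parser.h>
    #include <libxml/tree.h>
    #include <stdlib.h>
    #include <time.h>

    /* Recursively visit every node and randomly swap digits in text
       content.  Lengths and encoding are preserved, so the output is
       guaranteed to stay well-formed XML. */
    static void
    mutate_text (xmlNodePtr node)
    {
      for (; node; node = node->next)
        {
          if (node->type == XML_TEXT_NODE && node->content)
            {
              xmlChar *p;
              for (p = node->content; *p; p++)
                if (*p >= '0' && *p <= '9' && rand () % 10 == 0)
                  *p = '0' + rand () % 10;
            }
          mutate_text (node->children);
        }
    }

    int
    main (int argc, char **argv)
    {
      xmlDocPtr doc;

      if (argc != 3)
        return 1;
      srand ((unsigned) time (NULL));

      doc = xmlReadFile (argv[1], NULL, 0);
      if (!doc)
        return 1;

      mutate_text (xmlDocGetRootElement (doc));
      xmlSaveFormatFile (argv[2], doc, 0);
      xmlFreeDoc (doc);
      return 0;
    }

The zip case is analogous: rewrite individual archive members and let the zip library regenerate a consistent container.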

The take-home message from this is that code quality is work. Lots of work. And yet we still let mistakes through. I blame that on the lack of a proper QA department.

Code Quality

August 31st, 2010 by mortenw

The recently released GLib 2.25.15 contains a new class for dealing with dates: gdatetime.c. With apologies to Pauli: That code is not right. It is not even wrong.

The code basically claims to handle date+time+timezone. Such code, in principle, makes a lot of sense in Glib. But even a cursory scan through the code reveals numerous grave problems:

  • It reimplements the date handling code from GDate, badly and with bugs: even something as simple as advancing by one day does not work. Actually, advancing by zero days does not work either.
  • Code like g_date_time_difference and g_date_time_get_hour as well as the representation of time-of-day makes it clear that the code does not and will not handle timezones properly. The author does not understand things like daylight savings time and the fact that some days are not 24 hours long under that regime.
  • Code like g_date_time_printf makes it clear that the author does not understand UTF-8. Here is an outline:

    for (i = 0; i < utf8len; i++)
      {
        const char *tmp = g_utf8_offset_to_pointer (format, i);
        char c = g_utf8_get_char (tmp);
        [...]
      }

    That has got to be the worst way to traverse a UTF-8 string seen in the wild. And note how it mangles characters with code points outside the range of “char”. (The conventional way to do this is sketched just after this list.)

  • There is no error handling and the API as-is will not allow it.
  • The code obviously was not tested well.
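For contrast, the conventional GLib way to walk a UTF-8 string is linear and keeps the full code point; a minimal sketch:

    #include <glib.h>

    /* g_utf8_next_char is O(1) per step, and gunichar holds the whole
       code point, so characters outside the range of "char" survive. */
    static void
    walk_utf8 (const char *format)
    {
      const char *p;

      for (p = format; *p; p = g_utf8_next_char (p))
        {
          gunichar c = g_utf8_get_char (p);
          /* ... handle c ... */
          (void) c;
        }
    }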

Why does code like that make it into GLib? The code was reviewed by Glib maintainer Matthias Clasen. I do not think he did a very good job. (He is busy asking for patches, but not busy applying patches. Certainly he avoids talking about substance. In any case, the code does not need patches, it needs to be taken out back.)


* * * * *

The bigger question is how you control code quality in a large project like GLib/GTK+. It is a simple question with a very complicated answer probably involving test suites and automated tools. I do not have anything to say about test suites here.

In the free software world the automated tools mostly come down to the compiler, sparse, and valgrind. (Let me know if I have missed anything substantial.)

  • “gcc -Wall” or some variant thereof. GLib and Gtk+ use this and use it reasonably well.
  • “Sparse”. There are signs that GLib/Gtk+ have not been run through sparse for a very long time. Gio, for example, appears to never have been tested.
  • “Valgrind”. Valgrind is probably being used on GLib/Gtk+ regularly, but each new release seems to put new roadblocks in the way of making effective use of it. In modern versions you cannot make Gtk+ release its objects, so Pango will not release its stuff, and the font libraries in turn will not release theirs. Do not get me wrong: exit(2) is a very efficient way of releasing resources, but not being able to — optionally — release resources manually means that you do not know whether your memory handling works right.

In short: Glib and Gtk+ are slowly moving away from automated code quality checks beyond the compiler.

I used to run GLib/Gtk+ through Sparse and Purify. Over time I got the message that bug reports based on that were not particularly welcome.

Look Left, Look Right!

August 21st, 2010 by mortenw

My Left View

My Right View

The pictures were taken a few days apart in early July. My center view is mostly water.

ODF Plus Five Years

February 10th, 2010 by mortenw

Five years ago I strongly criticized the OpenDocument standard for being critically incomplete for spreadsheets since it left out the syntax and semantics of formulas. As a consequence it was unusable as a basis for creating interoperable spreadsheets.

Off the record, several ODF participants agreed. The explanation for the sorry state of the matter was that there was heavy pressure to get the ODF standard out the door early. The people working on the text-document part of the standard were not willing to wait for the spreadsheet part to be completed.

That was then and this is now. Five years have passed and there have been no relevant updates to the standard. However, one thing that has happened is that Microsoft started exporting ODF documents that highlight the problems I pointed out. ODF supporters cried foul when it turned out that those spreadsheets did not work with OpenOffice. In my humble opinion, those same loud ODF supporters should look for the primary culprit in the nearest mirror. You were warned; the problem was obvious to anyone dealing with programming-language semantics; you did nothing.

So given the state of the standard, where does that leave ODF support in spreadsheets? Microsoft took the least-work approach and just exported formulas with their own (existing) syntax and semantics. Of course they knew that it would not play well with anyone else, but that was clearly not a priority in Redmond. Anyone else at this point realizes that ODF for spreadsheets is not defined by the standard, but by whatever OpenOffice happens to implement. Just like XLS is whatever Excel says it is.

One implication is that ODF changes whenever OpenOffice does. For example, OpenOffice has changed formula syntax at least once — a change that broke Gnumeric’s import. If you follow that link, you can see that OpenOffice did precisely the same thing that Microsoft did: introduce a new formula namespace. Compare the reactions. For the record, in Gnumeric the work involved in supporting those two new namespaces was about the same.

For Gnumeric the situation remains that we will support ODF as any other spreadsheet file format. Until and unless the deficiencies are fixed, ODF is not suitable as the native format for Gnumeric or any other spreadsheet. (There are other known problems with ODF, but those are somewhat technical and not appropriate here.)

Note: I want to make clear that the above relates to spreadsheets only. I know of no serious problems with ODF and text documents, nor do I have reason to believe that there are any.

OpenSUSE 11.2

November 19th, 2009 by mortenw

I decided to give the new OpenSUSE 11.2 a spin. In hindsight, that was probably a mistake.

The new version installs a desktop-optimized kernel. The idea sounds good, but for me it does not work: named (the DNS server) consistently causes an Oops or a kernel panic. (I haven’t otherwise had a kernel panic for many, many years!) I reverted to the so-called “default” kernel, and the system seems to suffer only a loss of my confidence.

Somewhat more worrisome is that the system seems to have no bladder^Wfan control. The fan remains off until the temperature reaches crazy levels. Then the fan turns on full-blast and remains on until shutdown. In the same department, the backlight controls do not work. The tricks that worked in 11.1 no longer do. I am going to try a BIOS upgrade and see if things improve.

Ideas, anyone? This is a Toshiba Satellite L305-S5944. Drop me a line at mwelinder at gmail.

Update: I really don’t think 11.2 likes me:

  • Emacs’ menus are partially broken. For example, in Dired the menus for Mark/Regexp/Immediate/Subdir are all empty.
  • Valgrind is broken. I get incomplete stack traces for places with full debug info. I get complaints about unrecognized syscalls.
  • The source repository doesn’t seem to be set up right.

But, hey!, it comes with wobbly windows. What more can anyone want?

Goodbye F-Spot, Hello Picasa

June 2nd, 2009 by mortenw

I am giving up on F-Spot.

It was a really promising application, but it has never been able to follow up on that great start. The worst thing is that it is sluggish. Operations that should be instant — like displaying the next image — are not; they take half a second. (Getting a new camera did not help there!) I used to think it was just my old laptop, but with a new laptop that excuse does not fly anymore.

I have now tried Picasa. It is crazy fast! For now, I am going to use that. The biggest problem was migrating the F-Spot tags. That problem was solved with help from a Robert Brown on Google’s help forums.

Basically, you need this script to create album files and then blow away Picasa’s database to force a regeneration.

The Gtk+ File Chooser Dialog, Take II

February 23rd, 2009 by mortenw

OpenSuSE 11.1 updated the Gnome File Chooser and it now looks like this:

file-dialog2

Recall that my premise is that the file chooser’s function is to help the user choose files. By my count, the area used for files is 29% of the above dialog, including the header and scroll bar. That simply is not right! Why do the “Places” section (which I rarely use) and its buttons take up that much space? Further, what does the path bar give me that adding the path to the location entry and putting “..” into the file list does not?

Things get a lot worse if a file preview is added. Here’s how uploading the above image looked in Mozilla:

file-dialog3

There is room for an incredible 2-4 letters of the file names! And the full “Places” and path bar sections, of course.

Could someone please defend the Gnome File Chooser so I do not have to suggest taking it out back and having it shot!

(I do not take comments at my blog, but you can probably find an email address somewhere.)

Interwoven Alignment Preambles

February 6th, 2009 by mortenw

Thomas, that is a fine rant, but I look in vain for the answer to the question of whether you could get work done after spending 15 minutes learning git instead of cursing it for not having the same command line arguments as cvs. I am sure that with counseling you could get over typing “--help” instead of “-h”.

One of my favourite error messages is from TeX. Stuff this into TeX and look in the log:

\halign{#\global\futurelet\foo\relax\cr\cr}
\halign{#\cr\foo}

It will tell you that “Interwoven alignment preambles are not allowed”. A bit cryptic, you might say, but TeX comes with a manual, i.e., The TeXbook, which helpfully explains:

If you have been so devious as to get this message, you will understand it, and you deserve no sympathy.