GObject performance work

I spent some time last week and this week on fixing some performance issues in gobject. It started out with the patches in bug 557100, which seemed very useful. I cleaned up those patches a bit, wrote a serious performance test and did some additional optimizations.

These changes focus on speeding up creation of “simple” gobjects, i.e. ones that have no properties and don’t implement any interfaces, etc. These are still important because being able to use gobject gives us lots of advantages like threadsafe refcounting, runtime type introspection, user data, etc. Sometimes people avoid using gobjects for small things just because they are a bit more expensive than some homebrew struct, which is very sad. With these fixes we can get rid of some of that.

Another thing about gobject that has bothered me for some time is the handling of interfaces. GIO and other modern APIs are starting to use interfaces more and more, so it’s important that they work well. However, interfaces in gobject have a feature that most people are unaware of, namely that you can add interfaces to a class after the class/type has been initialized. This means that the list of interfaces a class implements must be protected by a lock, and this lock must be taken each time we e.g. check if an object implements an interface, or cast to the interface to do a method call on it.

Additionally the interface lookup algorithm used in gobject uses a binary search on the sorted list of interfaces a class implements. Better approaches are possible, like the one used in gcj (described here) which allows constant time (O(1)) interface lookup.
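As a rough illustration, here are the two lookup strategies side by side in C. Everything here is invented for the example (names, structs, the flat-table scheme details), not the real GType internals:

```c
#include <assert.h>
#include <stddef.h>

/* One entry in a class's list of implemented interfaces. */
typedef struct { unsigned iface_id; void *vtable; } IfaceEntry;

/* The current scheme: binary search over entries sorted by iface_id,
 * O(log n) in the number of implemented interfaces. */
static void *lookup_binary(const IfaceEntry *entries, size_t n,
                           unsigned iface_id)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = (lo + hi) / 2;
        if (entries[mid].iface_id < iface_id)
            lo = mid + 1;
        else
            hi = mid;
    }
    if (lo < n && entries[lo].iface_id == iface_id)
        return entries[lo].vtable;
    return NULL;
}

/* A gcj-style scheme: each interface gets a small global index, and
 * each class carries a flat table indexed by it, so the lookup is a
 * single bounds-checked array read, O(1). */
static void *lookup_direct(void *const *itable, size_t n,
                           unsigned iface_index)
{
    return iface_index < n ? itable[iface_index] : NULL;
}
```

The gcj-style table costs a little memory per class, but it turns the hot type check into one array read instead of a loop.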

In bug 594525 and 594650 I described these issues and posted patches that fix them.

I added all these patches to the gobject-performance branch in glib git, including the performance test I wrote. The performance improvements are pretty good:

  • Construction speed for simple objects more than doubled, while the construction speed for complex objects is not much affected (within one percent).
  • Interface typechecks go from 52 to 95 million per second in the non-threaded case, and from 12 to 95 million per second if g_threads_init() has been called.
  • Additionally, the contention for typechecks in multiple threads drops to zero, as you can see in the tests done by Benjamin in bug 594525.

Data about Data

Warning: Long, technical post

One of the few remaining icky areas of the Nautilus codebase is the metadata store. It’s got some weird, inefficient XML file format, the code is pretty nasty, and the data is not accessible to other apps. It’s been on my list of things to replace for quite some time, and yesterday I finally got rid of it.

The new system is actually pretty cool, both in the API used to access it and in how it works internally. So, I’m gonna spend a few bits on explaining how it works.

Let’s start with the requirements, and then we can see how to fulfil them. We want:

  • A generic per-file key-value store with string and string list values. (String lists are required by Nautilus for e.g. emblems)
  • All apps should be able to access the store for both writing and reading.
  • Access, in particular read access, needs to be very efficient, even when used in typical I/O fashion (lots of small calls intermixed with other file I/O). Getting the metadata for a file should not be significantly more expensive than a stat syscall.
  • Removable media should be handled in a “sane” way, even if multiple volumes may be mounted in the same place.
  • We don’t require transactional semantics for the database (i.e. no need to guarantee that a returned metadata set is written to stable storage). What we want is something I call “desktop transaction semantics”.
    By this I mean that in case of a crash, it’s fine to lose what you changed in recent history. However, things that were written a long time ago (even if recently overwritten) should not get lost. You either get the “old” value or the “new” value, but you never ever get neither, or a broken database.
  • Homedirs on NFS should work, without risking database corruption if two logins with the same homedir write concurrently. It is fine if doing so may lose some of these writes, as long as the database is not corrupted. (NFS is still used in a lot of places like universities and enterprise corporations.)

Seems like a pretty tall order. How would you do something like that?

Performance

For performance reasons it’s not a good idea to require IPC for reading data, as doing so can block things for a long time (especially when data is contended; compare e.g. with how gconf reads are a performance issue on login). To avoid this we steal an idea from dconf: all reads go through mmapped files.

These are opened once and the file format in them is designed to allow very fast lookups using a minimal amount of page faults. This means that once things are in a steady state lookup is done without any syscalls at all, and is very fast.
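A minimal sketch of what such a read path can look like. The file format here (a record count followed by fixed-size records, at a made-up path) is invented for the example; the real metadata format is more involved and designed around minimizing page faults:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct record { uint32_t key; uint32_t value; };

/* Map the database once; after this, lookups are plain memory reads. */
static const void *map_db(const char *path, size_t *len_out)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size < 4) {
        close(fd);
        return NULL;
    }
    void *mem = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                  /* the mapping survives the close */
    if (mem == MAP_FAILED)
        return NULL;
    *len_out = st.st_size;
    return mem;
}

/* No syscalls here: in the steady state a lookup costs at most a
 * page fault the first time the page is touched, then nothing. */
static int db_lookup(const void *db, uint32_t key, uint32_t *value)
{
    uint32_t count;
    memcpy(&count, db, 4);
    const struct record *recs = (const void *)((const char *)db + 4);
    for (uint32_t i = 0; i < count; i++) {
        if (recs[i].key == key) {
            *value = recs[i].value;
            return 1;
        }
    }
    return 0;
}

static int demo(void)
{
    /* Build a tiny database file, then read it through the mapping. */
    const char *path = "/tmp/meta-demo.db";
    uint32_t count = 2;
    struct record recs[2] = { { 1, 100 }, { 7, 700 } };
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || write(fd, &count, 4) != 4 ||
        write(fd, recs, sizeof recs) != (ssize_t)sizeof recs)
        return 1;
    close(fd);

    size_t len;
    const void *db = map_db(path, &len);
    uint32_t v = 0;
    int ok = db && db_lookup(db, 7, &v) && v == 700 && !db_lookup(db, 2, &v);
    unlink(path);
    return ok ? 0 : 1;
}
```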

Writes

Metadata writes are handled by a single process, which ensures that concurrent writes are serialized when writing to disk.

Clients talk to the metadata daemon via dbus. The daemon is started automatically by dbus when first used, and it may exit when idle.
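For reference, bus activation of a daemon like this is normally wired up with a D-Bus service file that maps a bus name to a binary; the name and path below are illustrative guesses, not necessarily what the real daemon uses:

```ini
[D-BUS Service]
Name=org.gtk.vfs.Metadata
Exec=/usr/libexec/gvfsd-metadata
```

dbus-daemon spawns the Exec binary the first time a message is sent to that name, which is what lets the daemon exit when idle and come back on demand.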

Desktop Transaction semantics

In order to give any consistency guarantees for file writes, fsync() is normally used. However, this is overkill and in some cases a serious system performance problem (see the recent ext3/4 fsync discussion). Even without the ext3 problem, an fsync requires a disk spinup and rotation to guarantee the data is on disk before we can return from a metadata write call, which is quite costly (on the order of several milliseconds at least).

In order to solve this I’ve split the file format for a single database into two files. One file is the “tree”, which contains a static, read-only metadata tree. This file is replaced using the standard atomic replace model (write to temp, fsync, rename over).

However, we rarely change this file; instead all writes go to another file, the “journal”. As the name implies this is a journal-oriented format where each new operation gets written at the end of the journal. Each entry has a checksum so that we can validate the journal on read (in case of a crash), and the journal is never fsynced.
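To make the checksum idea concrete, here is a toy sketch of a checksummed journal held in a memory buffer. The entry layout (length, checksum, payload) and the hash are invented for the example; the point is only that validation stops at the first torn or corrupt entry, so a crash mid-write costs at most the tail of the journal:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A simple djb2-style hash stands in for the real checksum. */
static uint32_t checksum(const unsigned char *data, size_t len)
{
    uint32_t h = 5381;
    while (len--)
        h = h * 33 + *data++;
    return h;
}

/* Append one entry (length, checksum, payload).  Returns the new
 * journal length, or 0 if it doesn't fit -- i.e. time to rotate. */
static size_t journal_append(unsigned char *journal, size_t used,
                             size_t cap, const void *payload, uint32_t len)
{
    if (used + 8 + len > cap)
        return 0;
    uint32_t sum = checksum(payload, len);
    memcpy(journal + used, &len, 4);
    memcpy(journal + used + 4, &sum, 4);
    memcpy(journal + used + 8, payload, len);
    return used + 8 + len;
}

/* Walk entries from the start, stopping at the first torn/corrupt
 * one.  Returns the number of valid entries. */
static int journal_validate(const unsigned char *journal, size_t used)
{
    size_t pos = 0;
    int valid = 0;
    while (pos + 8 <= used) {
        uint32_t len, sum;
        memcpy(&len, journal + pos, 4);
        memcpy(&sum, journal + pos + 4, 4);
        if (pos + 8 + len > used)
            break;              /* truncated tail */
        if (checksum(journal + pos + 8, len) != sum)
            break;              /* corrupt entry */
        valid++;
        pos += 8 + len;
    }
    return valid;
}
```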

After a timeout (or when full) the journal is “rotated”, i.e. we create a new “tree” file containing all the info from the journal and a new empty journal. Once something is rotated into the “tree” it is generally safe for long term storage, but this slow operation happens rarely and not when a client is blocking for the result.

NFS homedirs

It turns out that this setup is mostly OK for the NFS homedir case too. All we have to do is put the journal file in a non-NFS location like /tmp so that multiple clients won’t scribble over each other. Once a client rotates the journal, the result becomes visible to every client in a safe fashion (although some clients may lose recent writes in case of concurrent updates).

There is one detail with atomic replace on NFS that is problematic. Due to the stateless nature of NFS, an open file may be removed on the server by another client (the server doesn’t know you have the file open), which would later cause an error when we read from the file. Fortunately we can work around this by opening the database file in a specific way[1].

Removable media

The current Nautilus metadata database uses a single tree based on pathnames to store metadata. This becomes quite weird for removable media where the same path may be reused for multiple disks and where one disk can be mounted in different places. Looking at the database it seems like all these files are merged into a single directory, causing various problems.

The new system uses multiple databases. libudev is used to efficiently look up the filesystem UUID and label for a mount, and if available we use that as the database id, storing paths relative to that mount. We also have a standard database for your homedir (not based on UUID etc., as the homedir often migrates between systems) and a fall-back “root” database for everything not matching the previous databases.

This means that we should seamlessly handle removable media as long as there are useful UUIDs or labels and have a somewhat ok fall-back otherwise.

Integration with platform

All this is pretty much invisible to applications. Thanks to the gio/GVfs split and the extensible gio APIs, things are automatically available to all applications without using any new APIs once a new GVfs is installed. Metadata can be read with the normal g_file_query_info() calls by requesting attributes in the “metadata” namespace. Similar standard calls can be used to set metadata.

Also, the standard gio copy, move and remove operations automatically affect the metadata databases. For instance, if you move a file its metadata will automatically move with it.

Here is an example:

$ touch /tmp/testfile
$ gvfs-info -a "metadata::*" /tmp/testfile
attributes:
$ gvfs-set-attribute /tmp/testfile metadata::some-key "A metadata value"
$ gvfs-info -a "metadata::*" /tmp/testfile
attributes:
  metadata::some-key: A metadata value
$ gvfs-copy /tmp/testfile /tmp/testfile2
$ gvfs-info -a "metadata::*" /tmp/testfile2
attributes:
  metadata::some-key: A metadata value

Relation to Tracker

I think I have to mention this since the Tracker team want other developers to use Tracker as a data store for their applications, and I’m instead creating my own database. I’ll try to explain my reasons and how I think these should cooperate.

First of all, there are technical reasons why Tracker is not a good fit. It uses sqlite, which is not safe on NFS. It uses a database, so each read operation is an IPC call that gets resolved to a database query, causing performance issues. It is not impossible to make database use efficient, but it requires a different approach than how file I/O normally looks. You need to do larger queries that do as much as possible in one operation, whereas we instead inject many small operations between the ordinary I/O calls (after each stat when reading a directory of files, after each file copy, move or remove, etc).

Secondly, I don’t feel good about storing the kind of metadata Nautilus uses in the Tracker database. There are various vague problems here that all interact. I don’t like mixing user-specified data like custom icons with auto-extracted or generated data. The Tracker database is a huge (gigabytes), complex database with information from lots of sources, mostly autogenerated. This risks the data not being backed up. Also, people having problems with Tracker are prone to removing the databases and reindexing just to see if that “fixes it”, or due to database format changes on upgrades. Finally, the generic database model seems like overkill for the simple stuff we want to store, like icon positions and spatial window geometry.

Additionally, Tracker is a large dependency, and using it for metadata storage would make it a hard dependency for Nautilus to work at all (to e.g. remember the position of the icons on the desktop). Not everyone wants to use Tracker at this point. Some people may want to use another indexer, and some may not want to run Tracker for other reasons. For instance, many people report that system performance suffers when using Tracker. I’m sure this is fixable, but at this point it’s imho not yet mature enough to force upon every Gnome user.

I don’t want to be viewed like any kind of opponent of Tracker though. I think it is an excellent project, and I’m interested in using it, fixing issues it has and helping them work on it for integration with Nautilus and the new metadata store.

Tracker already indexes all kinds of information about files (filename, filesize, mtime, etc) so that you can do queries for these things. Similarly, it should extract metadata from the metadata store (the size of this pales in comparison to the text indexes anyway, so no worries). To facilitate this I want to work with the Tracker people to ensure Tracker can efficiently index the metadata and get updates when metadata changes for a file.

Where to go from here

While some initial code has landed in git, everything is not finished. There are some loose ends in the metadata system itself, plus we need to add code to import the old nautilus metadata store into the new one.

We can also start using metadata in other places now. For instance, the file selector could show emblems and custom icons, etc.

Footnotes

[1] Remove-safe opening a file on NFS:
Link the file to a temporary filename, open the temp file, then unlink the tempfile. Now the NFS client on your OS will “magically” rename the tempfile to something like .nfsXXXX and will track this fd to ensure that file gets removed when the fd is closed. Other clients removing the original file will not cause the .nfsXXXX link on the server to be removed.
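A sketch of that trick in C (paths and naming invented for the example). On a local filesystem the link/unlink dance changes nothing in effect, since an open fd always keeps the data alive anyway; on NFS it is what triggers the client-side “.nfsXXXX” silly-rename that protects our fd against other clients unlinking the file:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int open_remove_safe(const char *path)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.open.%d", path, (int)getpid());
    if (link(path, tmp) < 0)
        return open(path, O_RDONLY);  /* fall back to a plain open */
    int fd = open(tmp, O_RDONLY);
    unlink(tmp);                      /* our fd keeps the data alive */
    return fd;
}

static int demo(void)
{
    const char *path = "/tmp/nfs-open-demo";
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || write(fd, "hi", 2) != 2)
        return 1;
    close(fd);

    fd = open_remove_safe(path);
    unlink(path);                     /* "another client" removes it */
    char buf[2];
    int n = (int)read(fd, buf, 2);    /* the data is still readable */
    close(fd);
    return n == 2 ? 0 : 1;
}
```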

The return of Client side windows

For a long time now I’ve been working on the client side windows branch of Gtk+. By now it is mostly feature complete when it comes to normal use. However, one of the drivers of client side windows and the initial reason I started working on it is the ability to do offscreen window rendering. The last two weeks I’ve been spending on getting that to work and integrated into the platform.

I think a video says more than a million words here:

[vimeo width=”400″ height=”439″]http://vimeo.com/5126552[/vimeo]
(Original ogg available here)

This is using the current client-side-windows branch of Gtk+, plus my own gtk-in-clutter code available in the client-side-window branch of http://gitorious.org/clutter-gtk-copy.

Next up is getting the non-X backends working and getting this merged into master.

ext4 vs fsync, my take

There has been a lot of discussion about the ext4 data loss issue, and I see a lot of misconceptions, both about why rename() is used and what guarantees POSIX gives. I’ll try to give the background, and then my opinion on the situation.

There are two basic ways to update a file. You can either truncate the old file and write the new contents, or you can write the new contents to a temporary file and rename it over the old file when finished. The rename method has several advantages, partly based on the fact that rename is atomic. The exact wording from POSIX (IEEE Std 1003.1TM, 2003 Edition) is:

In this case, a link named new shall remain visible to other processes throughout the renaming operation and refer either to the file referred to by new or old before the operation began.

This gives the rename method some useful properties:

  • If the application crashes while writing the new file, the original file is left in place
  • If an application reads the file at the same time as someone is updating it, the reading application gets either the old or the new file in its entirety. I.e. we will never read a partially finished file, a mixup of two files, or a missing file.
  • If two applications update the file at the same time we will at worst lose the changes from one of the writers, but never cause a corrupted file.
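In code, the rename method is only a few lines. A hedged sketch (error handling abbreviated, temp-file naming invented), with the fsync as an optional parameter since both variants are discussed here:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write the new contents to a temp file in the same directory,
 * optionally fsync it, then rename it over the target. */
static int save_file(const char *path, const void *data, size_t len,
                     int do_fsync)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp.%d", path, (int)getpid());

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len ||
        (do_fsync && fsync(fd) != 0)) {   /* data on disk before rename */
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    if (rename(tmp, path) != 0) {         /* the atomic step */
        unlink(tmp);
        return -1;
    }
    return 0;
}

static int demo(void)
{
    /* Overwrite a file and check we read back the new contents. */
    const char *path = "/tmp/atomic-save-demo";
    if (save_file(path, "old", 3, 0) != 0 ||
        save_file(path, "new", 3, 1) != 0)
        return 1;
    char buf[4] = { 0 };
    FILE *f = fopen(path, "r");
    if (!f || fread(buf, 1, 3, f) != 3)
        return 1;
    fclose(f);
    unlink(path);
    return strcmp(buf, "new") == 0 ? 0 : 1;
}
```

A reader of `path` never sees the temp file or a half-written target; it sees either the complete old contents or the complete new ones.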

Note that nothing above talks about what happens in the case of a system crash. This is because system crashes are not specified at all by POSIX. In fact, the specified behaviour predates journaled filesystems, where you have any reasonable expectation that recently written data is available at all after a system crash. For instance, a traditional unix filesystem like UFS or ext2 may well lose the entire filesystem on a system crash if you’re unlucky, but it is still POSIX compliant.

In addition to the above, POSIX specifies the fsync() call, which can be used in the rename method. It flushes all in-memory buffers corresponding to the file onto the hardware (this is vaguely specified, and the exact behaviour is hardware and software dependent), not returning until the data is fully saved. If called on the new file before renaming it over the old file, it gives a number of advantages:

  • If there is a hardware I/O error during the write to the disk we can detect and report this.
  • In case of a system crash shortly after the write, it’s more likely that we get the new file than the old file (for the maximum chance of this you additionally need to fsync the directory the file is in)
  • Some filesystems may order the metadata writes such that the rename is written to disk, but the contents of the new file are not yet on disk. If we crash at this point this is detected on mount and the file is truncated to 0 bytes. Calling fsync() guarantees that this does not happen. [ext4]

However, it also has a number of disadvantages:

  • It forces a write immediately, spinning up the disk and causing more power use and more wear on flash filesystems.
  • It causes a longer wait for the user, waiting for data to be on disk.
  • It causes lower throughput if updating multiple files in a row.
  • Some filesystems guarantee ordering constraints such that fsync more or less implies a full sync of all outstanding buffers, which may cause system-wide performance issues. [ext3]

It should be noted that POSIX, and even ext4, gives no guarantee that the file will survive a system crash even if using fsync. For instance, the data could be outstanding in hardware buffers when the crash happens, or the filesystem in use may not be journaled or otherwise robust wrt crashes. However, in case of a system crash it gives a much better chance of getting the new data rather than the old, and on reordering filesystems like an unpatched ext4 it avoids truncated files from the rename method.

Both the fsync and the non-fsync versions have their places. For very important data the guarantees given by fsync are important enough to outweigh the disadvantages. But in many cases the disadvantages make it too heavy to use, and the possible data loss is not as big of an issue (after all, system crashes are pretty uncommon).

So much for the background; now over to my personal opinions on filesystem behaviour. I think that in the default configuration, all general purpose filesystems that claim to be robust (be it via journalling or whatever) should do their best to preserve the runtime guarantees of the atomic rename save operation so that they extend to the system crash case too. In other words, given a write to a new file followed by a rename over an old file, we shall find either the old data or the new data. This is less of a requirement than fsync-on-close, but a requirement nevertheless, and one that does result in a performance loss. However, just the fact that you’re running a journaled filesystem is a performance cost already, and something the user has explicitly chosen in order to have less risk of losing data.

It would be nice if the community could work out a way to express intent of the save operation to the filesystem in such a way that we avoid the unnecessary expensive fsync() call. For instance, we could add a fcntl like F_SETDATAORDERED that tells the kernel to ensure the data is written to the disk before writing the metadata for the file to the disk. With this in place applications could choose either if they want the new file on disk *now*, or just if it wants either the old or the new file, without risk for total data loss. (And fall back on fsync if the fcntl is not supported.)

This is the current status of the rename method on the commonly used Linux filesystems to my best knowledge:
(In this context “safe” means we get either the old or the new version of the file after a crash.)

ext2: No robustness guarantees on system crash at all.

ext3: In the default data=ordered mode it is safe, because data is written before metadata. If you crash before the data is written (5 seconds by default) you get the old data. With data=writeback mode it is unsafe.

ext4: Currently unsafe, with a quite long window where you risk data loss. With the patches queued for 2.6.30 it is safe.

btrfs: Currently unsafe, the maintainer claims that patches are queued for 2.6.30 to make it safe

XFS: Currently unsafe (as far as I can tell), however the truncate and overwrite method is safe.

Eternal Vigilance!

I’ve spent a lot of time over the years fixing nautilus memory use. I noticed the other day that it seemed to be using a lot of memory again, doing nothing but displaying the desktop:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
14315 alex      20   0  487m  46m  15m S  0.3  1.2   0:00.86 nautilus

So, it’s time for another round of de-bloating. I fired up massif to see what used so much memory, and it turns out that there is a cache in GnomeBG that caches the original desktop background image. We don’t really need that, since we keep around the final pixmap for the background.

It turns out that my desktop image is 2560×1600, which means the unscaled pixbuf uses 12 megs of memory. Just fixing this makes things a bit better:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
16129 alex      20   0  538m  33m  15m S  4.9  0.8   0:00.87 nautilus

However, looking at the actual allocations in massif it’s obvious that we’re not actually using this much memory. For a short time when creating the desktop background pixmap we do several large temporary allocations, but these are quickly freed. So, it seems we’re suffering from the heap growing and then not being returned to the OS due to fragmentation.

It is ‘well known’ that glibc uses mmap for large (> 128k by default) allocations and that such allocations should be returned to the OS directly when freed. However, this doesn’t seem to happen for some reason. Lots of research follows…

It turns out that this isn’t true anymore, since about 2006. Glibc now uses a dynamic threshold for when to start using mmap for allocations. It uses the size of freed mmaped memory chunks to update the threshold, and this is causing problems for nautilus which has a behaviour where almost all allocations are small or medium sized, but there are a few large allocations when handling the desktop background. This is leading to several large temporary allocations going to the heap, never to be returned to the OS.

Enter mallopt(), which lets us set a static mmap threshold. So, we set this back to the old value of 128k:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
 4971 alex      20   0  479m  26m  15m S  0.0  0.7   0:00.90 nautilus

Not bad. Dropped 20 meg of resident size with a few hours of work. Now nautilus isn’t leading the pack anymore but comes after apps like gnome-panel, gnome-screensaver and gnome-settings-daemon.
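The change itself is tiny. A sketch of the glibc-specific call (the wrapper function name is mine), to be made early in startup, before the large allocations happen:

```c
#include <assert.h>
#include <malloc.h>   /* mallopt() is glibc-specific */

/* Pin the mmap threshold back to the old static 128k default, so that
 * occasional large temporary allocations are served by mmap and handed
 * back to the OS on free, instead of growing the heap and getting
 * stranded there by fragmentation.  Setting the threshold explicitly
 * also disables glibc's dynamic adjustment of it. */
static int set_static_mmap_threshold(void)
{
    /* mallopt() returns nonzero on success in glibc */
    return mallopt(M_MMAP_THRESHOLD, 128 * 1024);
}
```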

How to remove flicker from Gtk+

In between spending time taking care of a sick kid, a sick wife and being sick myself, I have slowly been working on the remaining issues in the client-side-windows branch of Gtk+. The initial and main interest in having client side windows is that it lets us emulate everything that’s needed for widgets to work without any server side windows, which lets us do things like put Gtk+ widgets inside clutter, etc. However, another interesting, and not entirely obvious, advantage of client side windows is that it allows us to remove flicker. This post will describe how this works and show the effects.

Gtk+ already does quite a lot of things to avoid flicker. For instance, all drawing in expose events is automatically double buffered so that you never see partially drawn results. The remaining flickering is related to the effect of moving or resizing server side subwindows. Although even these are minimized by Gtk+, since many widgets don’t use such windows or only use input-only windows which don’t cause any visual effects. However, there are still some areas where subwindows are used, mostly in cases where scrolling is involved.

Let’s start with an example of how scrolling works:

Evince

This is a regular Evince window showing a pdf, and we want to scroll down. This happens in several steps. First we copy the bottom area of the window to the top of the window:

Evince 2

Then we mark the newly scrolled in area at the bottom as invalid:

Evince scrolling 3

As a result of this, Gtk+ will call the application to redraw the invalid region as soon as it has finished handling the incoming events:

scrolling-4

Voila! We have scrolled. (In reality more happened above: the scrollbar area was marked invalid and repainted also, but let’s ignore that for now.)

This example also makes it easy to see where flicker comes from. The drawing of the newly exposed area is double buffered, so the newly drawn area is replaced atomically. However, the initial copy is not done with the Gtk+ drawing system; instead it’s done with an XCopyArea directly on the window (not a subwindow move, but with a similar effect). So, the X server will display that immediately, while there might be some delay before the expose of the scrolled-in area is drawn, causing visual tearing.

Another common problem is widget resizing/move that can be seen in my previous blog entry.  In this case what happens is that a widget with a subwindow is moved and/or resized and it ends up over another widget. The window move operation is done immediately in the server and results in a copy similar to the above, and then there is some delay before the widgets are redrawn to match that.

Now, client side windows don’t by themselves fix this, but the copies above and all rendering are now under the control of the client (i.e. the app), so we have the tools to do something about it. The solution is to delay the copying until we’re ready to draw everything that will be drawn, so we never show any partial results. Whenever some region of a window is copied we just record the area to be copied and by how much. When we’re handling the expose events for the invalid area we handle the expose up to the point of drawing everything into the double buffer. At this point we replay all the copies we recorded, except we don’t bother copying anything into the area that will be drawn by the expose. Then we blit out the final result of the expose event.

Furthermore, in practice it often happens that we do several moves/scrolls of the same region before it is drawn. This works with the above approach, but is a bit wasteful as some regions are copied twice. So, instead of just simply keeping track of all copies being made, we try to combine such double copies into a single copy, thus minimizing the actual copies we make in the end.
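A simplified sketch of that copy recording. Everything here is made up for illustration (the real code tracks arbitrary regions, not just rectangles): each requested copy is queued rather than issued, and a copy whose source rectangle is exactly the previous copy’s destination is folded into it, so the region is only moved once at flush time:

```c
#include <assert.h>

typedef struct { int x, y, w, h; int dx, dy; } CopyOp;

/* Record a copy of rect (x,y,w,h) by offset (dx,dy).  Returns the new
 * queue length; a copy that continues the previous one is merged. */
static int record_copy(CopyOp *queue, int n,
                       int x, int y, int w, int h, int dx, int dy)
{
    if (n > 0) {
        CopyOp *last = &queue[n - 1];
        /* Is the new source rect exactly the old destination rect? */
        if (last->x + last->dx == x && last->y + last->dy == y &&
            last->w == w && last->h == h) {
            last->dx += dx;     /* combine: one copy, summed offset */
            last->dy += dy;
            return n;
        }
    }
    queue[n] = (CopyOp){ x, y, w, h, dx, dy };
    return n + 1;
}
```

Two successive 10-pixel scrolls of the same area collapse into one 20-pixel copy, which is then performed together with the buffered expose drawing, so no intermediate state is ever visible.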

So, how does this look in the end? It’s kind of hard to capture this kind of flicker with a screengrabber, so here is a video I took with my phone:

[vimeo]http://www.vimeo.com/3148023[/vimeo]

Can you tell which one uses the standard Gtk+?

Flicker free Gtk+ continued

Remember my preview of subwindowless Gtk+? It got rid of some flicker, but it was still pretty raw.

I’ve been working on a new version of the subwindowless patch, and today I implemented a cool trick that gives fully flicker-free subwindow move/resize:

[videofile width=”500″ height=”400″]http://www.gnome.org/~alexl/noflicker2.ogg[/videofile]

This video is done with the same kind of increased X latency as the previous ones, but no flickering is detectable.

Gdb is dead. Long live Gdb!

All hail Archer and Tom Tromey’s python gdb integration:

Before:


(gdb) bt
#0  nautilus_file_unref (file=0xdbbc60) at nautilus-file.c:724
#1  0x004a9e74 in nautilus_directory_async_state_changed (directory=0x7bf090) at nautilus-directory-async.c:4891
#2  0x004ae363 in nautilus_directory_call_when_ready_internal (directory=0x7bf090, file=0xdbbc60, file_attributes=3, wait_for_file_list=0,
directory_callback=<value optimized out>, file_callback=0x4c4b80 <file_list_file_ready_callback>, callback_data=0x9729c0) at nautilus-directory-async.c:1344
#3  0x004fcad1 in vfs_file_call_when_ready (file=0xc19768, file_attributes=0, callback=0, callback_data=<value optimized out>) at nautilus-vfs-file.c:68
#4  0x004c69e1 in nautilus_file_list_call_when_ready (file_list=<value optimized out>, attributes=3, handle=0x972880, callback=0x4eaa80 <activate_activation_uris_ready_callback>, callback_data=0x972830) at nautilus-file.c:6900
#5  0x0046f52a in fm_directory_view_activate_files (view=0xa07430, files=0x1109480, mode=NAUTILUS_WINDOW_OPEN_ACCORDING_TO_MODE, flags=0, confirm_multiple=1) at fm-directory-view.c:703
#6  0xf4fbf6fd in IA__g_closure_invoke (closure=0xa43740, returnvalue=0x0, n_param_values=2, param_values=0x110d950, invocation_hint=0x7fffffffcc10) at gclosure.c:767
#7  0xf4fd6760 in signal_emit_unlocked_R (node=0xa360b0, detail=0, instance=0x92d340, emission_return=0x0, instance_and_params=0x110d950) at gsignal.c:3244
#8  0xf4fd7e49 in IA__g_signal_emit_valist (instance=0x92d340, signal_id=<value optimized out>, detail=0, var_args=0x7fffffffcdf0) at gsignal.c:2977
#9  0xf4fd8383 in IA__g_signal_emit (instance=0xdbbc60, signal_id=12687208, detail=0) at gsignal.c:3034
#10 0x004d5405 in activate_selected_items (container=0x92d340) at nautilus-icon-container.c:6669
#11 0x004df20d in item_event_callback (item=<value optimized out>, event=0x7fffffffd3b0, data=<value optimized out>) at nautilus-icon-container.c:6237
#12 0xf7dcde38 in eel_marshal_BOOLEAN__BOXED (closure=0xf0b7f0, returnvalue=0x7fffffffd130, n_param_values=<value optimized out>, param_values=0x1113d80,
invocation_hint=<value optimized out>, marshal_data=0x4decc0) at eel-marshal.c:121
#13 0xf4fbf6fd in IA__g_closure_invoke (closure=0xf0b7f0, returnvalue=0x7fffffffd130, n_param_values=2, param_values=0x1113d80, invocation_hint=0x7fffffffd0f0) at gclosure.c:767
#14 0xf4fd6760 in signal_emit_unlocked_R (node=0xa3ade0, detail=0, instance=0xd6ed00, emission_return=0x7fffffffd270, instance_and_params=0x1113d80) at gsignal.c:3244
#15 0xf4fd7cf8 in IA__g_signal_emit_valist (instance=0xd6ed00, signal_id=<value optimized out>, detail=0, var_args=0x7fffffffd2d0) at gsignal.c:2987
#16 0xf4fd8383 in IA__g_signal_emit (instance=0xdbbc60, signal_id=12687208, detail=0) at gsignal.c:3034
#17 0xf7dacca8 in emit_event (canvas=<value optimized out>, event=<value optimized out>) at eel-canvas.c:2518
#18 0x004dd749 in button_press_event (widget=0x92d340, event=0x100a100) at nautilus-icon-container.c:4183
#19 0xf67ac148 in _gtk_marshal_BOOLEAN__BOXED (closure=0x7ee870, returnvalue=0x7fffffffd6c0, n_param_values=<value optimized out>, param_values=0x112c0f0,
invocation_hint=<value optimized out>, marshal_data=0x4dd6b0) at gtkmarshalers.c:84
#20 0xf4fbf6fd in IA__g_closure_invoke (closure=0x7ee870, returnvalue=0x7fffffffd6c0, n_param_values=2, param_values=0x112c0f0, invocation_hint=0x7fffffffd680) at gclosure.c:767
#21 0x4fd6432 in signal_emit_unlocked_R (node=0x7eea00, detail=0, instance=0x92d340, emission_return=0x7fffffffd800, instance_and_params=0x112c0f0) at gsignal.c:3282
#22 0xf4fd7cf8 in IA__g_signal_emit_valist (instance=0x92d340, signal_id=<value optimized out>, detail=0, var_args=0x7fffffffd860) at gsignal.c:2987
#23 0xf4fd8383 in IA__g_signal_emit (instance=0xdbbc60, signal_id=12687208, detail=0) at gsignal.c:3034
#24 0xf68aec3e in gtk_widget_event_internal (widget=0x92d340, event=0x100a100) at gtkwidget.c:4745
...

After: (interesting areas marked out)

(gdb) gbt
#0  nautilus_file_unref (file=<NautilusVFSFile:0xdbbc60>) at nautilus-file.c:724
#1  0x004a9e74 in nautilus_directory_async_state_changed (directory=<NautilusVFSDirectory:0x7bf090>) at nautilus-directory-async.c:4891
#2  0x004ae363 in nautilus_directory_call_when_ready_internal (directory=<NautilusVFSDirectory:0x7bf090>, file=<NautilusVFSFile:0xdbbc60>, file_attributes=3, wait_for_file_list=0, directory_callback=<value optimized out>, file_callback=0x4c4b80 <file_list_file_ready_callback>, callback_data=0x9729c0) at nautilus-directory-async.c:1344
#3  0x004fcad1 in vfs_file_call_when_ready (file=0xc19768, file_attributes=0, callback=0, callback_data=<value optimized out>) at nautilus-vfs-file.c:68
#4  0x004c69e1 in nautilus_file_list_call_when_ready (file_list=<value optimized out>, attributes=3, handle=0x972880, callback=0x4eaa80 <activate_activation_uris_ready_callback>, callback_data=0x972830) at nautilus-file.c:6900
# Emit signal activate on instance <FMIconContainer:0x92d340>
#9  0xf4fd8383 in g_signal_emit (instance=0xdbbc60, signal_id=12687208, detail=0) at gsignal.c:3034
#10 0x004d5405 in activate_selected_items (container=<FMIconContainer:0x92d340>) at nautilus-icon-container.c:6669
#11 0x004df20d in item_event_callback (item=<value optimized out>, event=0x7fffffffd3b0, data=<value optimized out>) at nautilus-icon-container.c:6237
# Emit signal event on instance <NautilusIconCanvasItem:0xd6ed00>
#16 0xf4fd8383 in g_signal_emit (instance=0xdbbc60, signal_id=12687208, detail=0) at gsignal.c:3034
#17 0xf7dacca8 in emit_event (canvas=<value optimized out>, event=<value optimized out>) at eel-canvas.c:2518
#18 0x004dd749 in button_press_event (widget=<FMIconContainer:0x92d340>, event=0x100a100) at nautilus-icon-container.c:4183
# Emit signal button-press-event on instance <FMIconContainer:0x92d340>
#23 0xf4fd8383 in g_signal_emit (instance=0xdbbc60, signal_id=12687208, detail=0) at gsignal.c:3034
#24 0xf68aec3e in gtk_widget_event_internal (widget=<FMIconContainer:0x92d340>, event=0x100a100) at gtkwidget.c:4745
...

As a bonus, I throw in the break_finalize command:


(gdb) p file
$1 = (NautilusFile *) 0xdbbc60
(gdb) break_finalize file
Breakpoint 2 at 0x7ffff4fc3b00: file gobject.c, line 742.
(gdb) c
Continuing.
Breakpoint 2, g_object_finalize (object=0xdbbc60) at gobject.c:742
742     {

All code available via git from http://www.gnome.org/~alexl/git/gnome-gdb.git. Needs the latest version of the archer-tromey-python branch from Archer git.

If you have other cool gdb hacks please send them so we can collect them all in the same place.

A small preview of subwindowless Gtk+

Before:

[videofile width=”500″ height=”400″]http://www.gnome.org/~alexl/flicker.ogg[/videofile]

After:

[videofile width=”500″ height=”400″]http://www.gnome.org/~alexl/noflicker1.ogg[/videofile]

This code is still an early work in progress, but it shows a lot of promise.

Note that the effect in the videos above is somewhat exaggerated by doing two ssh forwards over the network so that I could screengrab the flicker; it’s not as obvious in plain local use.

Gtk can haz aliens too!

Towards a Ridley-based platform

Today I finally made the last changes to remove libgnome and its dependencies from nautilus. It’s not in svn yet because some patches to gnome-desktop have to go in first.

Before:

ldd `which nautilus` | wc -l
91

After:

ldd `which nautilus` | wc -l
60

So, we are now linking to 30 fewer libraries! Libraries which we hardly used, but which still take time to load, initialize and resolve symbols from.

As a comparison, gtk-demo links to 37 libraries on my system. The additional libraries come from session management, thread use, dbus, gconf, libgnomedesktop and some other nautilus-specific features.

So, Project Ridley is alive and kicking (even if the wiki page is a bit outdated). I’m just waiting for dconf/GtkSettings to be finished and then we’ll really have a competitive next generation platform without all the old deprecated libgnome era stuff.

UPDATE: I’ve now committed this to trunk. You need new gnome-desktop, eel and nautilus.