Eternal Vigilance!

I’ve spent a lot of time over the years fixing nautilus memory use. I noticed the other day that it seemed to be using a lot of memory again, doing nothing but displaying the desktop:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
14315 alex      20   0  487m  46m  15m S  0.3  1.2   0:00.86 nautilus

So, it’s time for another round of de-bloating. I fired up massif to see what used so much memory, and it turns out that there is a cache in GnomeBG that caches the original desktop background image. We don’t really need that since we keep around the final pixmap for the background.

It turns out that my desktop image is 2560×1600, which means the unscaled pixbuf uses 12 megs of memory. Just fixing this makes things a bit better:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
16129 alex      20   0  538m  33m  15m S  4.9  0.8   0:00.87 nautilus
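
The 12 megs figure is just the raw pixel data for the image; a back-of-the-envelope sketch (assuming 3 bytes per pixel, i.e. RGB with no alpha channel, and ignoring any rowstride padding):

  #include <stdio.h>

  int
  main (void)
  {
    /* Size of the unscaled 2560x1600 background pixbuf, assuming
     * 3 bytes per pixel (RGB, no alpha) and no rowstride padding. */
    unsigned long width = 2560, height = 1600, bytes_per_pixel = 3;
    unsigned long bytes = width * height * bytes_per_pixel;

    printf ("%lu bytes, roughly 12 MB\n", bytes);   /* 12288000 */
    return 0;
  }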

However, looking at the actual allocations in massif it’s obvious that we’re not actually using this much memory. For a short time when creating the desktop background pixmap we do several large temporary allocations, but these are quickly freed. So, it seems we’re suffering from the heap growing and then not being returned to the OS due to fragmentation.

It is ‘well known’ that glibc uses mmap for large (> 128k by default) allocations and that such allocations should be returned to the OS directly when freed. However, this doesn’t seem to happen for some reason. Lots of research follows…

It turns out that this isn’t true anymore, as of about 2006. Glibc now uses a dynamic threshold for when to start using mmap for allocations. It uses the size of freed mmapped memory chunks to update the threshold, and this causes problems for nautilus, which has a behaviour where almost all allocations are small or medium sized but a few large allocations happen when handling the desktop background. This leads to several large temporary allocations going to the heap, never to be returned to the OS.
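
You can watch the dynamic threshold in action with a small standalone test. This is just a sketch, not nautilus code; it assumes glibc and uses mallinfo2(), which needs glibc 2.33 or later:

  #include <malloc.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void
  report (const char *when)
  {
    struct mallinfo2 mi = mallinfo2 ();
    printf ("%s: %zu bytes in %zu mmapped chunks, heap size %zu bytes\n",
            when, mi.hblkhd, mi.hblks, mi.arena);
  }

  int
  main (void)
  {
    /* 512k is above the default 128k mmap threshold, so this is mmapped. */
    void *p = malloc (512 * 1024);
    report ("first 512k malloc");

    /* Freeing an mmapped chunk bumps the dynamic threshold up to its size. */
    free (p);

    /* The same request is now below the threshold and is served from the
     * heap, and that heap space is not returned to the OS when freed. */
    p = malloc (512 * 1024);
    report ("second 512k malloc");
    free (p);
    report ("after final free");

    return 0;
  }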

Enter mallopt(), which lets us set a static mmap limit. So, we set this back to the old value of 128k:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND          
 4971 alex      20   0  479m  26m  15m S  0.0  0.7   0:00.90 nautilus
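
The change itself is essentially a one-liner. A minimal sketch of the call (putting it at the start of main() is my shorthand here, not the exact nautilus patch):

  #include <malloc.h>

  int
  main (int argc, char **argv)
  {
    /* Pin the mmap threshold to the old static 128k default, so large
     * temporary allocations go through mmap and are returned to the OS
     * when freed.  Setting it explicitly also disables the dynamic
     * threshold adjustment. */
    mallopt (M_MMAP_THRESHOLD, 128 * 1024);

    /* ... rest of the program ... */
    return 0;
  }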

Not bad. Dropped 20 meg of resident size with a few hours of work. Now nautilus isn’t leading the pack anymore but comes after apps like gnome-panel, gnome-screensaver and gnome-settings-daemon.

23 thoughts on “Eternal Vigilance!”

  1. Thanks for your work! I recently tried installing Ubuntu on an old 256MB RAM laptop, and it was basically unusable; as soon as I opened a browser everything went to hell and it just started swapping non-stop. I guess maybe I’m just expecting too much from such an old laptop, but if more applications can be improved like this maybe there’s hope…

  2. Cool, good work. It’s great to show Linux systems working well on old, small-RAM machines.

    Need one worry about the virt size? If I was running with no swap (e.g. a live CD session on a machine without a swap partition), would nautilus be using up 500MB of RAM? Or is that not really what it is measuring?

  3. @Hans: really? I do have an old 256 MB laptop and it is not unusable; slow, yes, but not unusable. I’ve trimmed it a bit though. And I do use Firefox 3 on it.

    Maybe cleaning up things you won’t use, reducing applets to the minimum and disabling nautilus desktop handling could give it a second life.

    But don’t expect a Ferrari.

  4. ajax:

    Try: info libc “malloc tunable”

    ssam:

    Most of the virtual space is not mapped, so takes no space, or is mapped from files, so takes no swap space. You can more or less ignore that.
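
    One way to see that distinction for a running process on Linux is to compare VmSize (top’s VIRT) against VmRSS (RES) in /proc; a minimal sketch, assuming the /proc interface:

      #include <stdio.h>
      #include <string.h>

      int
      main (void)
      {
        char line[256];
        FILE *f = fopen ("/proc/self/status", "r");

        if (f == NULL)
          return 1;

        while (fgets (line, sizeof line, f) != NULL)
          {
            /* VmSize is the virtual size (VIRT), VmRSS the resident set (RES). */
            if (strncmp (line, "VmSize:", 7) == 0 ||
                strncmp (line, "VmRSS:", 6) == 0)
              fputs (line, stdout);
          }

        fclose (f);
        return 0;
      }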

  5. Perhaps something in libgtk or libgnome can set this threshold, since the auto mmap threshold is done for performance reasons (so repeated large allocations don’t always have to mmap), which is not something I expect desktop applications to care about.

  6. ka-hing:

    I’m not sure I follow. The glibc behaviour was changed so that apps would get better performance, as the cost of mmap is higher than it used to be for various reasons (see the code). I don’t think desktop applications in general would perform better with this change, rather the reverse (i.e. perform worse).

    The behaviour in the nautilus case is kinda special for two reasons:
    1) It has a single, very short period where it uses a lot of memory, but then doesn’t
    2) It’s a long-lived background process where it’s important that we’re not wasting memory when not in use

    There might be other applications which happen to trigger similar bad behaviour from the dynamic threshold decision algorithm, in particular gnome-settings-daemon might be interesting to look at. However, we certainly don’t want to always do this without analysis.

  7. Quite interesting, thanks!

    Wouldn’t this issue be worth a specific allocation method that you would use for backgrounds? I mean: keep the glibc optimized behavior for everything but background pics. Maybe you could gain more memory, and it could possibly be used by other apps.

    Just my two cents…

  8. Milan:

    The mmap threshold I set should be OK for nautilus as a whole. No need to keep the dynamic threshold for that.

    I don’t know exactly what you mean by using another allocator for the background pics. The allocations happen in the standard pixbuf loader code; there is no way to make that use a special allocator. And using a special allocator for all pixbufs is not a good idea, as what is good for this specific case may not be good for others.

  9. Awesome work!

    > I noticed the other day that it seemed to be using a lot of memory again
    It would be better if Gnome tracked its memory usage (and startup time) on a daily basis (AFAIR Mozilla does it). That way, regressions could be tracked and fixed right after they happen.

  10. On my system gnome-panel needs more memory than nautilus, which seems strangely reversed. And this doesn’t include the memory of various applets, of which multiload-applet alone accounts for 30 megs of RSS.

  11. alexl: Yes I understand why that change was made. My comment was pointing out that many desktop applications are not performance sensitive (and even fewer have big allocations in the hot path), so making such a change _may_ save memory while not negatively affecting users in a noticeable way.

    Applications that can take user-inputted files sound like they would benefit; for example, if I open a 1MB file with gedit it would bump the mmap threshold to at least 1MB (to hold the content). While it’s true that the user can always restart them, it’s less than ideal.

  12. Ka-Hing:

    When the 1MB file is freed, that would bump the mmap threshold. However, I don’t think this is a bad idea. It generally means you’re editing large files in your editor, and thus it’s not a bad idea to allocate these on the heap. After all, gedit is not a long-running background thing like nautilus, and while it’s in use it’s not a large problem if it uses the memory it requires.

    I guess if you load one large file, close it and then keep gedit around for a long time while only loading small files, there would be some memory not returned to the OS. (Although it would just be swapped out, so it’s not a huge issue.) But I’m not sure this is the normal case when using gedit.

  13. Well, I guess there is a risk of heap fragmentation in gedit if it’s using the heap for large files… So perhaps gedit is a good target for something like this. Would need some testing though.

  14. I’m using the Ubuntu Netbook Remix (UNR) launcher, which means that I don’t use the desktop to display icons or volumes, etc.
    I only display the panel, the wallpaper and the UNR launcher.

    Is there a setting to configure to decrease memory use in that specific case?
