delayed-apply, again

my thought experiment on delayed-apply dialogs yesterday got quite a strong response. the response was generally to the effect of “please, oh god, no!”. that’s sort of what i expected :)

the reason i was thinking about this at all is because jon mccann had sent me an email saying that he wanted to use dconf for his gdm rewrite. after a talk on jabber with him i realised that dconf currently has no support for delayed-apply — it has been engineered under the assumption of instant-apply.

jon’s problem is that changes to gdm config might involve starting or stopping x servers and the like. what he really wants is to get a single change notification for a bunch of changes that the user has made (instead of one at a time). he’s not the first person to have requested this. lennart mentioned something similar.

this got me thinking. the solution i came up with was to support an idea of a “transaction” on a given path in the dconf database. there were to be 4 apis for dealing with these transactions:

dconf_transaction_start (const char *path);
dconf_transaction_end (const char *path);
dconf_transaction_commit (const char *path);
dconf_transaction_revert (const char *path);

these “transactions” would be implemented in a very trivial (but perhaps confusing) way:

if a process had a transaction registered for a given path (say “/apps/gdm/”) then:

  • any writes to a path under it would redirect to /apps/gdm/.working-set/
    • for example, writing to /apps/gdm/foo goes to /apps/gdm/.working-set/foo
  • any reads from a path under it would redirect similarly, with fallback
    • for example, reading /apps/gdm/foo would first try to read from /apps/gdm/.working-set/foo and then from /apps/gdm/foo if the former is unset.

all redirection is done on the client — not the server. the set requests that the client sends to the server explicitly name the keys inside of “.working-set”.
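to make that concrete, here’s a minimal sketch (in no way real dconf code) of the client-side path rewriting and read fallback; raw_read() and the other names are made up for illustration. writes would use the same rewriting before the set request goes to the server.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical stand-in for the client's low-level read call;
 * returns the value of a key, or NULL if the key is unset. */
static const char *
raw_read (const char *path)
{
  /* the real client would consult the dconf database here */
  return NULL;
}

/* rewrite "/apps/gdm/foo" into "/apps/gdm/.working-set/foo",
 * given that a transaction is open on "/apps/gdm/" */
static char *
redirect (const char *base, const char *path)
{
  char *ws_path;

  ws_path = malloc (strlen (path) + strlen (".working-set/") + 1);
  sprintf (ws_path, "%s.working-set/%s", base, path + strlen (base));

  return ws_path;
}

/* reads try the working set first, then fall back to the real key */
static const char *
transactional_read (const char *base, const char *path)
{
  char *ws_path = redirect (base, path);
  const char *value = raw_read (ws_path);

  if (value == NULL)
    value = raw_read (path);

  free (ws_path);

  return value;
}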

if two people open transactions on conflicting paths then, well, you lose. you could easily get into a situation where /apps/gdm/foo is represented by both /apps/.working-set/gdm/foo and /apps/gdm/.working-set/foo. too bad. lock on the same resource if you require sanity.

commit would mean “copy all of /apps/gdm/.working-set/ down to /apps/gdm/ and destroy the working set”.

revert would mean “unset everything in /apps/gdm/.working-set/” (ie: destroy the working set).

the idea is that a delayed-apply dialog box would open a transaction on startup and close the transaction on exit. it would continue to read and write keys directly to /apps/gdm/* but because of the open transaction its reads and writes would actually be redirected to the working set. the gdm daemon would see no changes on the actual keys until a commit occurred.
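for concreteness, this is roughly how a dialog would have used the four proposed calls. the return types, and whether commit/revert would implicitly end the transaction, were never nailed down, so treat the details as assumptions (and keep in mind that, as explained below, i ended up dropping this plan).

/* the proposed calls, assuming void return types */
void dconf_transaction_start  (const char *path);
void dconf_transaction_end    (const char *path);
void dconf_transaction_commit (const char *path);
void dconf_transaction_revert (const char *path);

static void
gdm_prefs_dialog_opened (void)
{
  /* from here on, this process's reads and writes under /apps/gdm/
   * are redirected into /apps/gdm/.working-set/ */
  dconf_transaction_start ("/apps/gdm/");
}

static void
gdm_prefs_dialog_ok (void)
{
  /* gdm sees a single change notification for the whole batch */
  dconf_transaction_commit ("/apps/gdm/");
  dconf_transaction_end ("/apps/gdm/");
}

static void
gdm_prefs_dialog_cancel (void)
{
  /* throw the working set away without touching the real keys */
  dconf_transaction_revert ("/apps/gdm/");
  dconf_transaction_end ("/apps/gdm/");
}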

the question that inspired yesterday’s blog entry: what is the lifecycle of the working set?

consider a problem with delayed-apply dialogs: what happens if two of them are open? in the instant-apply case this is easy: the two dialog boxes affect each other in realtime. if you check something off in one of them then the other updates straight away. for delayed-apply this is much more difficult.

if the first user applies, do the settings of the second user get wiped out? does the second user ignore the first user’s changes and write their own set over top? do we have some complicated merge operation? do we ask the user what they meant? insanity lies this way.

with the working set idea, the two dialogs would simply both be in on a sort of “shared transaction”. they would see each other’s changes in realtime but the changes would not be visible to gdm until one of them called commit(). it would be impossible to get into a position where you’d have to think about merging inconsistent sets of changes. pretty cool stuff.

under this mode of thinking, obviously, if user1 opens the dialog and makes some changes, then user2 opens the dialog (and sees the unapplied changes made by user1), and then user1 closes the dialog, user2’s dialog would still contain the changes in progress.

so the lifecycle of the working set is at least as long as one person has a dialog open.

it’s easy (and probably fitting with existing user expectations) to make the lifecycle of the working set exactly as long as one person has a dialog open. to do this requires that the dconf server track processes and keep some sort of a refcount on how many people are interested in the working set. when the last caller disappears then the working set is automatically destroyed.
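a minimal sketch of that refcounting, keyed by path with a GHashTable. this is not dconf server code; the names are invented and the part where the server notices a caller falling off the bus is left out entirely.

#include <glib.h>

/* hypothetical helper: wipe /path/.working-set/ from the database */
static void
destroy_working_set (const char *path)
{
  g_print ("destroying working set for %s\n", path);
}

static GHashTable *working_set_refs;   /* path -> refcount */

static void
working_set_ref (const char *path)
{
  int count;

  if (working_set_refs == NULL)
    working_set_refs = g_hash_table_new_full (g_str_hash, g_str_equal,
                                              g_free, NULL);

  count = GPOINTER_TO_INT (g_hash_table_lookup (working_set_refs, path));
  g_hash_table_insert (working_set_refs, g_strdup (path),
                       GINT_TO_POINTER (count + 1));
}

static void
working_set_unref (const char *path)
{
  int count;

  count = GPOINTER_TO_INT (g_hash_table_lookup (working_set_refs, path));

  if (count <= 1)
    {
      /* last interested caller has gone away */
      g_hash_table_remove (working_set_refs, path);
      destroy_working_set (path);
    }
  else
    g_hash_table_insert (working_set_refs, g_strdup (path),
                         GINT_TO_POINTER (count - 1));
}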

it’s obviously a very simple change in code, though, to make the dconf server not destroy the working set on the exit of the last dialog. this is what gave me the idea of having a working set of changes that stuck around after you dismissed a dialog.

it’s also a very simple change in code to cause the dconf server to deny the second process’s attempt to open a transaction when a current transaction is open. this sidesteps the whole “two dialog boxes open” problem rather effectively, but is far less fun if the code that is already written is perfectly capable of handling it.

the most useful effect of my blog entry is that it immediately started a discussion on #gnome-hackers. a few minutes after posting, owen asked me if i was around for the whole “72 buttons in the gnome 1.x control centre” mess. havoc joined in with the beating-me-about-the-head. together they made some very good points:

  • first and foremost, users expect their working set of changes to be tied to the dialog. when the dialog closes they go away. multiple dialogs don’t share the working set. the working set is something that is private to that one little window.
  • the multiple-dialogs problem is best solved with a single-instance-app mechanism
  • the multiple dialogs thing isn’t even too much of a problem. the last person to click apply wins. this is what most people expect anyway.
  • an undo button isn’t useful enough to be a part of the ui (just close and reopen for those rare circumstances) and an apply button is very questionable on the same grounds

there’s also a fundamental technical problem with my approach. dconf is designed so that everyone in a single process shares access to the database through a shared client-side “stack”. if you have multiple libraries in a single process and one of them starts a transaction on the shared stack then the other parts of the process may become confused (imagine the case of a gdm preferences dialog built into the main gdm process). having the entire process enter and exit transactions is clearly undesirable.

the upshot of all of this is that i think i’m not going to do transactions in this way. as a side effect, my ideas for crazy dialogs that share working sets that stick around even after the dialog closes are possibly also dead.

my next post will be about how i intend to support transactions.

non-instant-apply preferences dialogs

everything in this post is just talking about ideal concepts of user interaction. technical aspects are not discussed here since they’re actually very easy.

very fortunately, gnome has adopted an auto-apply interaction for all of its preferences dialogs. the familiar dialog style that everyone knows and loves:


(standard instant-apply preferences dialog)

one of the nicest things about this dialog type is that showing and hiding the dialog has no side-effects. they’re sort of like spatial nautilus windows in a way — something that is conceptually always there, but usually not shown.

unfortunately, instant-apply isn’t for everyone and everything. for example, when settings in gdm change it may result in x servers being started or stopped — you really don’t want this type of thing going on as you click around with checkboxes. for some things we need to have a delayed apply.


(delayed-apply preferences dialog)

with this sort of dialog, your changes are made all at once when you close the dialog (via the “ok” button).

of course, if we haven’t actually made the changes yet, there must be the ability to revert them. this ability to revert isn’t present in instant-apply (as we know it) but users want it for delayed-apply. the way of doing this for ages, of course, has been the “cancel” button.


(delayed-apply preferences dialog with cancel button)

and some people seem to think that maybe you want to apply the settings without closing the dialog box. that’s easy enough to do, right?


(delayed-apply preferences dialog with cancel and apply)

so now our three buttons do:

  • apply changes
  • undo changes, close the dialog
  • apply changes, close the dialog

but what if we wanted to undo the changes without closing the dialog? sometimes you see this.


(delayed-apply preferences dialog with cancel, apply, undo)

wow. that’s a lot of buttons. but now our user can do both applies and undos without closing the dialog. nice.

  • apply changes
  • undo changes, close the dialog
  • apply changes, close the dialog
  • undo changes

there’s always this sort of implicit assumption, though, that closing the dialog will either apply or destroy your in-progress settings. your “working set” of changes is, for technical reasons, tied to the dialog box. what if the dialog crashed or your computer lost power while you were in the middle of making a rather large set of changes? could we have crash recovery that brought you back to those changes the next time the dialog was opened?

and if we have crash recovery able to remember the changes that you were working on, why not have this as a normal feature of the dialog? in essence, why not add an option for “close the dialog” that neither applies nor undoes your changes?


(delayed-apply preferences dialog with pain)

ouch.

but now we have actually gotten somewhere. we support everything that the user could possibly want to do:

  • apply changes
  • undo changes, close the dialog
  • apply changes, close the dialog
  • undo changes
  • close the dialog (and don’t mess with my working set)

the dialog is absolutely painful, though, in terms of the number of buttons it has. it’s a little bit redundant, too; two of the buttons (“ok” and “cancel”) are now just combinations of actions that can be performed with the other buttons.

what about this?


(dialog with apply, close, undo)

here is a neat idea for a delayed-apply dialog. if you make some changes and “close” it, you can come back to your working set of changes later. you can “undo” your working set to be the same as the live (applied) version, and you can “apply” it.

with this sort of model it even makes sense to do things like open a preferences dialog, click “apply”, then click “close” without doing anything else.
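as a rough GTK+ sketch, the response handling for such a dialog might look something like this. the working_set_apply()/working_set_revert() helpers are hypothetical stand-ins for whatever backend keeps the working set alive across dialog invocations; this isn’t code from any real preferences dialog.

#include <gtk/gtk.h>

/* hypothetical stand-ins for the backend that owns the working set */
static void working_set_apply  (void) { /* copy working set onto the live keys */ }
static void working_set_revert (void) { /* reset working set to match the live keys */ }

enum { RESPONSE_UNDO = 1 };

static void
on_response (GtkDialog *dialog, gint response, gpointer user_data)
{
  switch (response)
    {
    case GTK_RESPONSE_APPLY:
      working_set_apply ();
      break;

    case RESPONSE_UNDO:
      working_set_revert ();
      break;

    case GTK_RESPONSE_CLOSE:
    case GTK_RESPONSE_DELETE_EVENT:
      /* note: the working set is deliberately left alone here */
      gtk_widget_hide (GTK_WIDGET (dialog));
      break;
    }
}

static GtkWidget *
make_prefs_dialog (void)
{
  GtkWidget *dialog;

  dialog = gtk_dialog_new_with_buttons ("preferences", NULL, 0,
                                        "_Undo",         RESPONSE_UNDO,
                                        GTK_STOCK_APPLY, GTK_RESPONSE_APPLY,
                                        GTK_STOCK_CLOSE, GTK_RESPONSE_CLOSE,
                                        NULL);
  g_signal_connect (dialog, "response", G_CALLBACK (on_response), NULL);

  return dialog;
}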

the downside is that “ok” and “cancel” are gone. people are familiar with these buttons and they probably like them. they might be annoyed by the fact that they have to press “apply” and then “close” instead of just “ok”.

people might also be confused by the fact that their working set of preferences sticks around after closing a dialog and bringing it back.

with the instant-apply preference dialog we have right now in gnome, life is great. your mental model is that a preference dialog box is a thing that can be shown or hidden without these actions having any implicit side effects.

this is something that i want for delayed-apply dialogs too.

is it worth it or is it just too confusing?

ISO/IEC 9899:1999 (E) § 6.7.5.3.7

this is a rant.

i have never found a misfeature in the core c language before. i’ve found many missing features and many quirky things about how library functions work, but when it came to the core language i was always pretty happy that everything had been done reasonably.

two days ago this changed. i’ve found a bug in c.

imagine we have two function prototypes, thus:

void takes_evil_ptr (evil *x);

void takes_evil (evil x);

where evil is defined by some typedef to have some (complete) type.

now, of course, if we wanted to call these functions from another function that provides an instance of evil then it would look something like this:

void
provides_evil (void)
{
  evil x;

  takes_evil_ptr (&x);
  takes_evil (x);
}

everything is good.

now, let’s say we want to implement takes_evil() as a simple wrapper around takes_evil_ptr(). to make it easier, let’s say that we’re not even concerned about the state that the argument is left in after the call finishes. how should we do this?

the naïve approach would be to write this function:

void
takes_evil (evil x)
{
  takes_evil_ptr (&x);
}

clearly this takes a pointer to the copy of x that was passed as the argument to takes_evil and passes that pointer along to takes_evil_ptr().

wrong.

i said above that evil merely has to be some complete type.

imagine we did the following:

typedef int evil[1];

and consider the declaration

void takes_evil (evil x);

in light of iso/iec 9899:1999 (e) § 6.7.5.3 which states

  A declaration of a parameter as ‘‘array of type’’ shall be adjusted to ‘‘qualified pointer to type’’, where the type qualifiers (if any) are those specified within the [ and ] of the array type derivation. If the keyword static also appears within the [ and ] of the array type derivation, then for each call to the function, the value of the corresponding actual argument shall provide access to the first element of an array with at least as many elements as specified by the size expression.

so this declaration really reads:

void takes_evil (int *x);

and the code

void
takes_evil (int *x)
{
  takes_evil_ptr (&x);
}

is very clearly in error (since x is already a pointer).

of course, this wouldn’t be a problem in most sane situations. normally we would know if the evil type that we are dealing with is typedef’ed as a scalar or an array type.

the “evil” type, of course, is va_list.

§ 6.7.5.3.7 is just stupid, too. it prevents the user from passing an array by value even if that is what they intended to do. if the user really wanted to pass a pointer then they could just declare the function as taking a pointer type. consider that structures are passed by value and that structures can even contain arrays!
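to see just how arbitrary the rule is: wrap the very same array in a structure and it happily gets copied.

#include <assert.h>

/* the same array, but wrapped in a struct: now it really is
 * passed by value and the callee only modifies its own copy */
struct wrapper { int a[1]; };

static void
clobber (struct wrapper w)
{
  w.a[0] = 42;   /* touches the copy, not the caller's object */
}

int
main (void)
{
  struct wrapper w = { { 1 } };

  clobber (w);
  assert (w.a[0] == 1);   /* unchanged: the array was copied */

  return 0;
}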

i have functions in dvalue that take va_list * and functions in gsettings which take va_list and call into the dvalue functions. ouch. the best workaround i can think of is to make an autoconf-defined macro that either adds a & or not, depending on whether your va_list implementation is detected as being array-based.
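a sketch of what that macro might look like. the VA_LIST_IS_ARRAY configure test and the VA_LIST_REF name are purely hypothetical (as is dvalue_collect in the usage comment), and the array branch leans on the fact that a pointer to the array has the same address as a pointer to its first element.

#include <stdarg.h>

/* hypothetical: assume configure defines VA_LIST_IS_ARRAY when it
 * detects an array-based va_list implementation */
#ifdef VA_LIST_IS_ARRAY
  /* inside a function taking "va_list ap", ap has already been
   * adjusted to a pointer to the array's first element, which has
   * the same address as the array itself -- so just cast it */
# define VA_LIST_REF(ap) ((va_list *) (ap))
#else
  /* a scalar (or struct) va_list: take its address as usual */
# define VA_LIST_REF(ap) (&(ap))
#endif

/* usage would look something like this:
 *
 *   void collect_valist (int n, va_list ap)
 *   {
 *     dvalue_collect (n, VA_LIST_REF (ap));
 *   }
 */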

another solution would be to never allow the passing of va_list and use the parameter type va_list *. on systems that implement va_list as an array this would effectively do nothing and on systems that have it as a scalar type it would only be one extra dereference. of course, this departs from convention (functions that take va_list are everywhere).

((ps: one good thing is that § 7.15 says “It is permitted to create a pointer to a va_list and pass that pointer to another function, in which case the original function may make further use of the original list after the other function returns.” this is the part that i was worried about, but it seems to be ok.))

what is this Private_Dirty:?

i was poking around trying to figure out the memory use of dconf. it has been one of my goals to ensure that there is only a very small per-application footprint (ie: writable memory). i’m ok with a slightly larger shared read-only footprint since this is shared between all applications.

here is what i see in the “smaps” for a small test application linked against and using dconf:

b7936000-b7944000 r-xp 00000000 08:01 54510      /opt/gnome/lib/libdconf.so.0.0.0
Size:                56 kB
Rss:                 56 kB
Shared_Clean:        40 kB
Shared_Dirty:         0 kB
Private_Clean:        0 kB
Private_Dirty:       16 kB

b7944000-b7945000 rw-p 0000d000 08:01 54510      /opt/gnome/lib/libdconf.so.0.0.0
Size:                 4 kB
Rss:                  4 kB
Shared_Clean:         0 kB
Shared_Dirty:         0 kB
Private_Clean:        0 kB
Private_Dirty:        4 kB

so 4kb of memory is mapped read-write as a result of linking against libdconf. i can deal with that since i pretty much have to deal with that. as far as i know, there is absolutely no way to get rid of all relocations.

what worries me, though, is the first bit. even though this memory is mapped read-only, it is mapped private rather than shared. i always assumed that readonly/private mappings are the same as their readonly/shared counterparts (for the same reason that a readwrite/private mapping is the same as a readwrite/shared mapping up to the point that you perform your first copy-on-write).

in the first section, however, you see

Shared_Clean:        40 kB
...
Private_Dirty:       16 kB

what’s this private dirty stuff? does this mean that each application using the library has a private 16kb of memory in use because of it? why does this happen at all with read-only mappings?

does anyone know what’s going on here?

(ps: two copies of the test application are running)

update: problem solved

i did a strace and discovered something tricky was going on:

...
open("/opt/gnome/lib/libdconf.so.0", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\3\3\1\320;"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=140341, ...}) = 0
mmap2(NULL, 60548, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7889000
....
mprotect(0xb7889000, 57344, PROT_READ|PROT_WRITE) = 0
mprotect(0xb7889000, 57344, PROT_READ|PROT_EXEC) = 0
...

what is this?

you can pretty much guess that mprotect() isn’t being called and then undone for no reason at all. there are writes going on there. sure enough, it’s the dynamic linker doing relocations.

but doesn’t libtool build my library with -fpic?

libtool is smart enough to know that .c files built to be part of a shared library need -fpic.

in my case, all of the backends for dconf are in a separate directory. i manage this by building that separate directory as a static library and then linking that into the dconf shared library. libtool doesn’t have the smarts to figure out this “will become part of a shared library” thing beyond the first level of indirection.

one tweak to the CFLAGS for the static library and now everything is good :)

i’m excited about the future of gnome

about half a year ago i was looking around me and seeing stagnation in the gnome community. i was concerned that gnome had lost its momentum and that we were just making boring incremental releases that added very little new functionality.

i think i was very wrong.

i’d like to take this time to list some things that are happening right now in the gnome community that have me very excited. these are the projects that are actively improving the future of the gnome desktop.

many of these things are infrastructure items. i really see this as a fantastic time for the improvement of the inner workings of our desktop. a lot of the things listed below are going to come together with each other very nicely.

it also seems that there is a renewed focus on efficiency and doing things the right way. the past few years have seen a lot of cries of “save memory”, “perform io more intelligently”, “don’t abuse timers”, etc. many of the projects listed below seem to be taking these ideas well into account. many of the projects are replacements for larger and more complicated things.

this is just a list i thought up in a few minutes. i have probably forgotten a thing or two, so please don’t be offended if your project is not listed here.

dconf – hopefully the future of configuration in the gnome desktop.

epiphany+webkit – this is an exciting hack. i look forward to the day where this is stable enough for general consumption. i’d love to see gnome using webkit as its stock ‘embed some html’ widget.

gbus – the future glib/gobject bindings for dbus. currently in the very early stages, these bindings will integrate with gobject introspection and make it hilariously easy to put your application on the bus. we currently have a summer of code student laying some of the initial groundwork required to make this a reality.

gdm rework – jon mccann is currently rewriting gdm to better support multiple users. his efforts include consolekit integration and a more flexible greeter system. it was cool to be able to spend a night hacking alongside him at guadec — it looks like some exciting things are on the way.

gtk+/glib awesome – every new release brings exciting new features and moves us closer to removing our dependency on those crufty old libraries that nobody seems to care to have around anymore.

gvfs – by all estimations, a fantastic piece of work. this is currently being hacked on by the one person who would know better than anyone else what is wrong with gnome-vfs. i’ll be very happy when this work appears in next summer’s glib release.

pimlico applications – these very attractive-looking applications are designed for use on mobile devices but are very usable in a normal desktop environment. they make me dream of an evolution-free future.

policykit – will allow us to move away from running our administrative applications with gksudo (or equivalent) and toward using protected methods on bus-activated system services while at the same time providing a sane centralised location for system administrators and distributors to control what users are allowed to do.

telepathy – a project that needs no introduction. this is just a fantastic idea and it will make gnome kick ass in ways that we probably haven’t even realised yet. tubes!!

tinymail and modest – i can’t wait to read my email using this stuff. if it’s half as good as that pvanhoof guy keeps saying it will be then i’ll be quite happy indeed.

tracker – and the fact that it is now enabled by default in ubuntu. i hope jamie can handle all the feedback that he’ll surely be getting. :)

vala – i’m not currently hacking on a project for which it would be appropriate, but this looks like it is becoming a fantastic language. when hacking gobject in c you always have this dilemma between doing everything “the proper way” and not giving yourself carpal tunnel syndrome. vala lets you do it the proper way without the stress injuries and without the overhead that accompanies other high level languages.

xorg – may not technically be part of gnome, but definitely shaping the future of our desktop. it was fantastic to be able to talk to the xorg hackers at guadec about features that gnome wants implemented and to hear them say “ok. we’ll try to do that.”. it’s very nice to have a transparent and open team of people working on such an important piece of software.

there’s lots of talk of “gnome 3.0”. “3.0” is just a name. if you look around in the next few releases i suspect you’ll see that gnome has added much more functionality since “2.0” than goes into new “major releases” of almost anything else.