Since he caught a glimpse of Kristian’s wobbly windows, Bryan has stalked Red Hat’s dark and hallowed halls, breathing fire, demanding his chance in the directorial seat. So it is that we bring you Monkey Hoot productions’ first, uh, production. Since a lot of people have asked, these videos show Luminocity running on two different laptops, both with fairly slow/old video cards (Intel i830 and ATI Radeon 7500 Mobility) and open source drivers.


Theora | MPEG4

Kristian showing off his spring-modeled “wobbly windows” effect in Luminocity, Owen’s crack-tastic OpenGL-based window/compositing manager. This is the only effect that requires GL hardware acceleration in Luminocity (and not even much at that; Kristian’s development machine uses an embedded Intel video card). Notice that menus and tooltips are also animated as they pop on and off the screen. Animation effects triggered by window impulses are implemented in a modular manner, allowing anyone to write new effects. Monkey Hoot productions would like to thank “The Blair Witch Project” for its inspirational camera work and lighting, and apologize to our viewers.

Physics Models for Window Moving

Theora | MPEG4 | MJPEG

The wobbly window effect is mildly addictive. Kristian hasn’t gotten much work done since he wrote it. He (and now I) spend all day moving windows around and watching them settle. This video shows off the motion a little better. It also demonstrates Luminocity’s live workspace switcher (aka pager), which updates in sync with the screen. We were surprised by how much more tangible windows felt when they gave a little (i.e. less than in this video) as you moved them (like a real-world object). Of course, we turned the effect on “high” for this demo so it’d be very visible.
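For the curious, the spring model boils down to something very simple. Here’s a rough Python sketch of the idea (Luminocity’s real implementation is C and works on a whole mesh of control points; the constants here are made up): each point is pulled toward its rest position by a spring, with friction so the wobble settles instead of oscillating forever.

```python
# Toy sketch of a damped spring, NOT Luminocity's actual code.
# Each control point on the window mesh is pulled toward its rest
# position by a spring (Hooke's law), with friction so it settles.

def settle(pos, rest, velocity=0.0, k=0.8, friction=0.85, steps=200):
    """Integrate a 1-D damped spring until the point stops moving."""
    for _ in range(steps):
        accel = k * (rest - pos)            # spring pull toward rest
        velocity = (velocity + accel) * friction
        pos += velocity
        if abs(velocity) < 1e-4 and abs(rest - pos) < 1e-4:
            break
    return pos

# A window corner dragged 50px from its rest position wobbles back:
final = settle(150.0, 100.0)
```

Crank up `k` and the window snaps back stiffly; lower the friction and it wobbles longer before settling, which is more or less the “high” setting we used in the video.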


Live Updating Workspace Switcher

Theora | MPEG4 | MJPEG

The workspace switcher in Luminocity is updated in sync with the window contents. Also notice that the workspace switcher renders each window rather than just “capturing” what each workspace looks like (this can be seen in the absence of a background in the pager), allowing us to do nice UI tricks in the future. Since it’s just re-using the existing window textures, applying them to a new (smaller) surface, the workspace switcher has basically no performance overhead when using hardware accel (other than a few new surfaces for your graphics card to render, no biggie for the card).
It’s a little hard to see in this video, but Luminocity also has a nifty workspace-switching animation. It zooms out as it pans down to the next workspace and then zooms back in. Of course, since it’s also the compositing manager, on-screen action doesn’t freeze as you switch workspaces. Watch for this as we switch to the 3rd workspace, which contains an animated circle-o’-icons; the icons keep spinning as you switch.
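The “basically no performance overhead” claim is easy to see if you think about the geometry: the pager just draws the same window texture into a scaled-down rectangle. A toy sketch (mine, not Luminocity’s code) of that mapping:

```python
# Illustration only: map a window rectangle from screen space into a
# pager cell. The same source texture gets drawn onto the smaller rect,
# so the card does all the real work.

def pager_rect(win, screen_w, screen_h, cell_w, cell_h):
    """Scale a window rect (x, y, w, h) into a pager cell."""
    sx, sy = cell_w / screen_w, cell_h / screen_h
    x, y, w, h = win
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A 512x384 window at (256, 192) on a 1024x768 screen, in a 128x96 cell:
print(pager_rect((256, 192, 512, 384), 1024, 768, 128, 96))  # → (32, 24, 64, 48)
```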


Movies Still Play as the Window is Warped

Theora | MJPEG

A GStreamer movie pipeline rendering into Luminocity. Notice that it’s warping the movie as it plays without slowdown (and of course, updating the workspace switcher live, which is just re-using the same GL texture rendered onto a smaller surface).


OpenGL Accelerated Alpha Compositing

Theora | MPEG4 | MJPEG


Luminocity uses GL for hardware-accelerated alpha compositing. It works well with software GL implementations too. Of course, since Luminocity is a technology testbed, we use it for “unfocused windows” here, probably not a very good long-term use ;-). In one of his earlier demos Owen hijacked the mouse scroll wheel to control window transparency. Bad Owen! This video also has another nice demonstration of wobbly menus. They feel really nice, though they’ll probably need to be faster in a “real world” version. The screenshot shows fdclock rendering in Luminocity. fdclock, unlike the video, actually uses a 32-bit ARGB visual to specify where (and how much) transparency it wants. No videos because our camera man is tired (you can run it yourself with “fdclock -ts”).
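For those who haven’t run into it, the alpha compositing in question boils down to the classic Porter-Duff “over” operator; the GPU does it via GL blending, but the math is tiny. A Python sketch (illustrative only, using premultiplied-alpha colors):

```python
# Porter-Duff "over": composite a translucent source pixel over a
# destination pixel. Colors are premultiplied-alpha (r, g, b, a) in 0..1.

def over(src, dst):
    """out = src + dst * (1 - src_alpha), per channel."""
    sa = src[3]
    return tuple(s + d * (1.0 - sa) for s, d in zip(src, dst))

# A 50%-transparent red window over an opaque blue background:
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))  # → (0.5, 0.0, 0.5, 1.0)
```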


Border/Contents Resize Synchronization

No videos yet, alas.
Wicked, naughty, camera man.
And there’s only one punishment…

When you resize a window inside Luminocity it doesn’t redraw the borders until the application is done redrawing the window contents. This means that they feel like “one piece” instead of the staggered redraw you see in traditional window managers (where the border gets ahead of the contents, and then they catch up). The effect on the perceived “reality” of windows on the screen is excellent, i.e. windows feel more like solid real objects (the same sort of improvement that double-buffering of widgets gave). It also, ironically, makes window resizing feel smoother (though each redraw is slightly slower and not progressive).
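The idea is easy to model. Here’s a toy Python sketch of the synchronization (the real thing uses X protocol machinery, not a Python class): the WM buffers the resize and only paints once the client says its contents are ready, so frame and contents hit the screen together.

```python
# Concept sketch, not Luminocity code: the window manager holds off
# painting the frame until the client reports it has finished redrawing
# the new contents, so both appear on screen as one piece.

class SyncedResize:
    def __init__(self):
        self.pending = None
        self.painted = []          # what actually hit the screen, in order

    def resize_request(self, size):
        self.pending = size        # don't paint the frame yet

    def client_done(self):
        # Client finished drawing contents at the pending size:
        # contents and frame now go to the screen together.
        self.painted.append(("contents", self.pending))
        self.painted.append(("frame", self.pending))
        self.pending = None

wm = SyncedResize()
wm.resize_request((800, 600))
assert wm.painted == []            # nothing painted until the client is done
wm.client_done()
```

In a traditional WM the equivalent of `painted` would show the frame landing first at the new size, with the contents catching up a beat later.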



While not as sexy as the Luminocity videos, here (finally) are screenshots of GTK+ themes rendering with Cairo enhancements. Cairo both increases the rendering quality of GTK+ widgets, and allows for widgets that scale beautifully to different sizes (of course, we also have a Cairo driven SVG renderer, knock yourself out). When you get your 600 dpi monitor, we’ll be there :-)

Dynamic Themes – each widget unique

Tiger Stripes

Planet Rings


In my last X rendering post I discussed dynamic theme rendering, where every time a widget is rendered it looks slightly different. By writing algorithmic renderers rather than fixed pixbuf-based widgets, we can increase how dramatic the visual effects are without driving people nuts. For example, the tiger-stripe buttons have proved very reasonable for long-term use. However, any single rendering of a tiger-stripe button would get old very quickly when repeated all over the screen ad nauseam. Currently visual designers are extremely restricted in what they can do without a theme being unusable. That’s largely the reason all themes look basically the same. We hope dynamic themes will allow visual designers to increase the variety of their palette without producing themes that wear quickly. Of course, it’s still easy to go overboard, *grin*. By providing higher-level drawing primitives, Cairo makes it much easier to implement dynamic themes.
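One way to get each widget looking unique (whether you also re-roll on every redraw is a theme choice) is to seed a random stream from a per-widget identity, then let the algorithmic renderer draw from it. A Python sketch of that trick (the real themes do their drawing in C with Cairo; the names and parameters here are mine):

```python
# Illustration of per-widget variation: seed randomness from a widget
# identity so a tiger-stripe button differs from its neighbours but can
# redraw identically every time.
import random

def stripe_params(widget_id, n_stripes=4):
    """Generate (position, width) pairs for stripes, seeded per widget."""
    rng = random.Random(widget_id)      # per-widget seed: stable across redraws
    return [(rng.uniform(0.0, 1.0),     # stripe position along the button
             rng.uniform(0.05, 0.2))    # stripe width
            for _ in range(n_stripes)]

# The same widget always gets the same stripes...
assert stripe_params("ok-button") == stripe_params("ok-button")
# ...but two widgets get different ones.
assert stripe_params("ok-button") != stripe_params("cancel-button")
```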


Resolution Independent Rendering

A large checkbox rendered with Cairo. This would look fantastic as a checkbox “normal size” on a next-gen 600dpi display. :-)

Cairo makes it easy to draw well rendered custom-widgets. Here’s an example of how the GTK+ color picker looked before and after Cairo integration.

Getting Luminocity

It took me about half an hour of work (and some compiling time) to get Luminocity running using jhbuild. Eventually we’ll add a jhbuild target for compiling Luminocity. Luminocity is not intended to turn into a real-world window/compositing manager. Instead, it’s a technology testbed. We’re trying stuff out in Luminocity and will roll features into Metacity (and hence stock GNOME) as they mature. Don’t expect Luminocity to have the frills and smarts you’d expect from a normal window manager. You’ll need hardware GL acceleration enabled for wobbly windows to work, though you can try the other bits of Luminocity without it. Embedded Intel video cards (which have open source DRI drivers) will work just fine. ATI and NVidia cards, of course, work even better.

This section has been superseded by the Luminocity wiki page, which has simpler, more up-to-date build instructions.

  1. If you have not used jhbuild, get jhbuild from Gnome CVS module ‘jhbuild’. Then run jhbuild bootstrap to compile basic tools such as autoconf and automake (just agree with its defaults).
  2. Run jhbuild build xserver Xcomposite Xdamage Xrender Xext Xcursor X11 Xtst. This will build the xserver, including the damage and composite extensions, and the Xephyr/Xfake nested X servers.
  3. Apply a small patch that evilly hacks around some issues with DAMAGE in the X server
  4. Checkout module “luminocity” from Gnome CVS.
  5. With the jhbuild buildroot at the start of your PATH (so you get autoconf, automake, etc. from the buildroot): from the luminocity directory run the configure script with --prefix=PATH_TO_JHBUILD_TREE, then make and finally make install. Alternatively, see README.jhbuild for instructions on adding a “luminocity” target to jhbuild (eventually we should just include this in jhbuild).
  6. Now to get things running. Luminocity grabs windows from an existing X server and renders them in its own GL context. This technique is not intended to be particularly efficient, but it works surprisingly well for a development testbed. We will use “Xfake” as the X server. Xfake doesn’t display windows sent to it, so they only get rendered on screen once (by Luminocity). Xfake is included in the “xserver” module built by jhbuild above. If you are running at 1024×768, run jhbuild run Xfake -ac -screen 1024x3072x32 :1 to start Xfake on display :1. Basically, use XRESOLUTIONx(4*YRESOLUTION)x32. This is because Luminocity starts with 4 workspaces by default.
  7. Now let’s display something on the Xfake display. From a new window, set DISPLAY to “:1” (e.g. export DISPLAY=:1). Then run any program you want to use, e.g. gnome-terminal. Of course, you won’t see a window since it’s displaying to the fake X server.
  8. Start Luminocity with luminocity -f :1 PATH_TO_BACKGROUND_IMAGE. Including a background image is important since a bug in the wobbly-windows rendering code causes major performance problems when the background is missing. Luminocity should now be running fullscreen, displaying whatever application you launched earlier (in this example, gnome-terminal). Another bug in wobbly windows increases the animation timeout every time you open a new window, so for every window you open, wobbly windows get jerkier and jerkier. Oops! Don’t worry, this is a silly bug and not a sign that we’re overloading your card or something.
  9. Sometimes windows start with their titlebars off screen. To move them onto the screen, you’ll need to drag them while holding down the super key (if you have a Windows key on your keyboard, try this). You may have to remap your super key to make this work, esp. if you have no Windows key, e.g. xmodmap -e ‘keycode 95=Super_L’, which will then allow you to move windows by dragging them while holding down the F11 key.
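Since the -screen geometry in step 6 trips people up: the Xfake screen packs the four default workspaces vertically, so it’s your resolution wide and four times your resolution tall. A trivial helper (my own, not part of Luminocity) for building the argument:

```python
# Build Xfake's -screen argument: WIDTH x (workspaces * HEIGHT) x DEPTH.
# Luminocity stacks its workspaces vertically on one tall fake screen.

def xfake_screen(width, height, workspaces=4, depth=32):
    """Return the -screen geometry string for a given desktop resolution."""
    return "%dx%dx%d" % (width, height * workspaces, depth)

arg = xfake_screen(1024, 768)   # the 1024x768 case from step 6
print(arg)                      # → 1024x3072x32
```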

If you need help, or you’re interested in contributing to Luminocity etc., you can probably find some knowledgeable people in #fedora-desktop (naturally, you don’t have to be running Fedora ;-). Eventually we’ll probably have a channel for this. People to look out for are: “owen”, “ssp” (Soeren Sandmann), “krh” (Kristian Ho/gsberg) and “seth” (though I’m just a user, *grin*).

Update: I wrote a little more explaining how Luminocity relates to xcompmgr/metacity/Xgl in another blog entry

Update: People have been asking what sort of hardware this was done on. Videos were shot on a mix of an IBM thinkpad X30 (with a paltry Intel i830 video card using open source drivers) and an IBM thinkpad T41 (with a slightly beefier but still pretty old Radeon Mobility 7500, also using open source drivers). Everything we’re doing so far is light on hardware requirements. FYI, a locking bug was introduced in Luminocity that causes wobbly windows to get increasingly jerky as more windows are opened (or if there’s no background image present, go figure!). This is not related to its CPU or graphics card use, and should be easy to fix without major codebase changes.

Update: If you’re having build problems, I’ve updated the “jhbuild” line to include more luminocity dependencies than just xserver. Also added a note about “jhbuild bootstrap” for building the initial dev environment (auto*, etc).

Update: The build section is now superseded by the Luminocity Wiki page.

Building Luminocity

March 25, 2005

Just created a wiki page for Luminocity with improved build instructions. Should be a lot easier now, esp. thanks to all the people who have reported problems and found solutions on #fedora-desktop. It’s basically “jhbuild build xserver luminocity” at this point, except that a patch has to be applied to xserver first.

Relation to Metacity

When it has proved itself, Luminocity’s compositing manager will probably be moved into Metacity (along with any effects / extra features we consider good and stable). We originally considered doing the work in Metacity itself, but didn’t want to destabilize it until various approaches were tested. Luminocity is, effectively, a testbed for Metacity. It provides a smaller/simpler codebase to test interesting rendering code with, and means we don’t have to worry about fucking up Metacity in the process. Soeren’s computer is (as of tonight, at least, that’s the first I saw of it) running a version of Metacity that’s apparently using the compositing manager code from Luminocity to render to a GL context.

Relation to xcompmgr

Luminocity has an internal compositing manager that performs the same function as xcompmgr. The compositing manager / window manager integration allows Luminocity to do things that an individual compositing manager or window manager couldn’t. Of course, Luminocity composites using OpenGL, unlike xcompmgr. This apparently can be an upside and a downside, but I don’t know anything about it so I’ll shut my trap.

Relation to Xgl

This is the complicated one :-). I’m loath to stick my toes in these waters because I’m the wrong person to do it, but I’m also afraid that we’re going to end up with two rendering infrastructures down the road and no clarity for application developers as to which (if either) they can use. I don’t know if that’s where we’re headed, I hope not, but I have this vague (probably, hopefully, unfounded) fear… The effect would be slow adoption and general suck. I should preface this by saying that I have no direct knowledge of the Xgl codebase. I have knowledgeable sources, and I know what Xgl generally is, but I haven’t personally used Xgl, let alone looked at its codebase (I’ve barely looked at the Luminocity codebase either, for that matter).

Xgl is an X server implementation that, rather than directly accessing chip specific hardware drivers, does its low-level drawing using OpenGL calls. That means Xgl is functionally equivalent to a traditional X server, it just uses a different rendering path. Put another way, Xgl is to X11 as Glitz is to Cairo: it provides the same APIs rendered in a much smarter way.

Luminocity, on the other hand, is a compositing manager / window manager fusion that composites using OpenGL. Compositing and window managing are all about what you do with client-rendered windows. Luminocity doesn’t know what’s inside windows, and it doesn’t care. Xgl, on the other hand, I would characterize as primarily being about how the contents of windows are drawn (in this case: quickly and with less CPU load, *grin*). Xgl can do some other non-inside-window things like drop shadows, but I’m going to argue later that those are mostly expedient demos of cool technology, and Xgl is probably not the place we want to be doing those things long term. From the perspective that Luminocity is mostly about rendering windows and Xgl is mostly about rendering window contents, they are theoretically complementary. At the moment, they cannot be used in conjunction with one another (since they both want to directly drive the GL hardware), but their goals are at least compatible.

Neither Xgl nor Luminocity is complete on its own. Xgl provides an X server and requires a window manager (and a compositing manager?) (and an X server for doing GL calls into, but see below, that will hopefully cease to be an issue eventually). Luminocity provides a window manager and a compositing manager but requires an X server (currently using Xfake or Xephyr, though supposedly there’s some plan for modifying the core fd.o X server so Luminocity will work using only the host X server?). With some hand waving (in particular, there’s no way to hand OpenGL textures residing on the video card between processes), perhaps we could get Xgl to render windows into textures on the video card, and then use Luminocity to figure out what to do with those textures. All graphics computations would be done by the card, and data would flow only once to the card. Perfect! Other than those niggly make-or-break technical details ;-)

As far as I know (and I’m pretty sure of this), there is no systematic way (besides GLX inside a running XFree86 / fd.o X server) to access the hardware-accelerated OpenGL drivers. That means that Xgl and Luminocity are currently forced to have a traditional host X server, open a fullscreen window on the host server, and draw into it using OpenGL. Both Luminocity and Xgl are premised on OpenGL as the standard API through which vendors can provide graphics hardware acceleration (as opposed to, say, RENDER).

Update: Soeren, one of our X hackers, thinks that Xgl actually includes no cross-window stuff but just uses an existing compositing manager (and of course, accelerates its rendering). In that case, the next couple paragraphs are totally unnecessary. Like I said above, I don’t know anything about the Xgl codebase.

In addition to traditional X server features, Xgl performs some cross-window effects (such as drop shadows). This is the main area where Luminocity and Xgl could be seen as overlapping. As I mentioned before, I would argue that the X server (including Xgl) should not be doing these things long term, for a few reasons. I am not sure whether David considers this point contentious. It could well be that he too considers these effects just a quick way to get some neat effects in play, not the best way long term; I have no idea.

  1. Drawing drop shadows on windows in the X server is equivalent to drawing titlebars on windows in the X server (instead of the window manager). One (dumb) example is that this will mean they are outside the purview of themes (short of having an “X server theme”, *wink*). If you believe in the separation of window manager and X server (fwiw, I think it’s valid to believe that the WM and X server should be merged), that’s an argument against doing this sort of effect in Xgl.
  2. The X server does not have high-level information available to it, compared with the information made available to the compositing/window managers. For example, using our drop-shadow example again, window manager hints let applications tell the window manager not to shadow something (say, the GNOME panel). An X server like Xgl is forced to resort to guessing (of course, sometimes window managers resort to guessing too, since WM hints are often vague and implemented differently ;-). To give another example, consider Luminocity’s window border/contents synchronization-on-resize feature. This relies on WM-to-application communication to specify when a redraw has been completed, so the WM doesn’t draw its borders to the screen until the application is redrawn, and on compositing manager support to double-buffer the change when it’s actually applied, removing the last little bit of flicker. If it’s even possible to do this in the X server, it’s going to require some serious hackery (with the emphasis on hack), and probably some guessing in addition.
  3. Loosely related to both #1 and #2, putting this stuff in the X server means you have to upgrade your xserver (or add some sort of effects plugin system to the xserver) to get changes to the visuals. It sort of defeats the idea of the X server as the low-level no-nonsense piece.
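To make point 2 concrete, here’s roughly what the hint-based decision looks like from a compositing manager’s point of view (the window-type atoms are real EWMH names; the decision table itself is just my illustration):

```python
# Illustration: a compositing manager can consult EWMH window-type hints
# before drawing a drop shadow, where an X server would have to guess.

NO_SHADOW_TYPES = {
    "_NET_WM_WINDOW_TYPE_DOCK",      # e.g. the GNOME panel
    "_NET_WM_WINDOW_TYPE_DESKTOP",   # the desktop background window
}

def wants_shadow(window_type):
    """Decide whether to shadow a window based on its EWMH type hint."""
    return window_type not in NO_SHADOW_TYPES

assert wants_shadow("_NET_WM_WINDOW_TYPE_NORMAL")
assert not wants_shadow("_NET_WM_WINDOW_TYPE_DOCK")
```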

I would not take anything I say here as authoritative! My knowledge of this stuff only scratches the surface. But many people have been saying even less informed things, so I wanted to get slightly more accurate info out there (esp. into online forum comments). Enjoy :-)

The three immediate design stakeholders in the ‘enterprise desktop’ are: end users, help desk staff, and desktop system administrators. Most design work for GNOME has gone into improving the end user experience, which is really the dominant stakeholder of those three. Some improvements aimed at end users, like promoting preferences instead of settings you can get wrong, have also made life a little easier for help desk staff (as people are that much less likely to hose things). Recently Mark’s work on Vino has added a very large improvement for help desk staff: the ability to remotely view and operate users’ desktops (there is nothing more frustrating than blindly stepping people through computer operations over the phone).

So what about sysadmins? Sabayon is GNOME’s first major design targeted at improving the user experience for people who administer GNOME systems, and hopefully the start of an initiative toward designing for this important group of users. I’m jazzed about Sabayon as the first step toward a historic goal: GNOME as the definitive desktop management experience for sysadmins. We have a long way to go, but if there’s a hundred possible improvements to make over Windows and MacOS/X toward the end-user experience, there’s a thousand for admins. But big things start with small steps, right? I see promise for Sabayon as the ground floor of the revolution! <seth takes a deep breath and returns back to earth> In any case, whatever the future holds, this is fertile territory because the status quo is so much worse than it needs to be.

GConf, with its support for mandatory settings and system defaults, was supposed to be a big improvement for system administrators, but it ended up being something of a boondoggle because the features were hard for sysadmins to use. In most cases it actually made things harder, as sysadmins struggled through the giant XML files for defaults (most probably tried to edit schemas instead, which isn’t even the right thing, but it’s not their fault because we didn’t publicize this well). Even apart from the XML files being long and verbose, remember that most sysadmins in the world (think Windows), esp. desktop sysadmins, are not uber-leet Unix haxors who adore vi and the command line.

Speaking of leetness, two super-leet Red Hat desktop hackers with funny accents are kicking off work on Sabayon: Mark McLoughlin (panel maintainer, etc) and Daniel Veillard (libxml & gamin maintainer). There was a tussle over the name, but the French (what with their centuries of cultural sophistication and all) beat out the elves. As Mark explains it, DV probably just wanted to be able to say, “Hello I’m Daniel Veillard and I pronounce Sabayon ‘Sa-ba-yon’”. Our Irish hackers really are like little elves that write code. You go to bed and when you wake up in the morning a bunch of code has magically appeared. In retaliation, I was assigned the mythical character of a “Troll” by DV, but this does not hinder my speaking the truth. I may be a troll, but I am a truthful troll. The only thing that serves to dampen Mark’s elf-nature is when he lights up like a chimney stack, strangles me with scarves, whacks me with bats, drives through red lights and otherwise engages in behavior liable to result in death. But back to Sabayon.

Humble Beginnings, What Sabayon Does Today

First and foremost, Sabayon provides a sane way to edit GConf defaults and GConf mandatory keys: the same way you edit your desktop. Sabayon launches profiles in an Xnest window. Any changes you make in the Xnest window are saved back to the profile file, which can then be applied to users’ accounts. Want to add a new applet to the panel? Right click on the panel and add one just like you usually would. Of course, you’re also free to use gconf-editor to change keys at a lower level, or download any GNOME setting-tweaking program from the internet and use that. Sabayon also uses gamin to watch changes you make to the filesystem. So if you want to change the font for your users, you can drag a TTF to ~/.fonts, change it in “Font Preferences”, and voilà. When you’re done making changes, you can save the profile. A change log will automatically be generated so an organization with a number of sysadmins can track down what changed when. Hopefully in the future we’ll also have revision support for desktop profiles.

Right now Sabayon has support for tracking: GConf settings, panel applet addition/removal, general files and special Firefox profile support.

The Illustrated Tour of Sabayon HEAD

  1. First we launch Sabayon (if run as a non-root user, it uses consolehelper to get root).
  2. Let’s create a new profile for panty-waist designers. We base it off our existing Office Desktop profile.

  3. Sabayon starts an instance of that profile in an Xnest, including the sabayon monitor window.

  4. Designers need to be coddled, so we create a welcoming text file for them and save it to the desktop.
  5. In response to saving the new text file, two new entries appear in the sabayon monitor. We don’t actually want to change the recently used list, so we tell sabayon to ignore that setting.
  6. We drag a new Gimp launcher to the panel. Gimp is like crack for designers.

  7. In response to the new launcher, sabayon monitor shows some new entries (and I have a continuity error in taking screenshots, there should still be the two items for creating the text file because we haven’t yet saved, oops). Notice that Sabayon records a “Panel object added” change rather than a dozen GConf keys being added. Sabayon can be taught to aggregate standard groups of changes together to make it clearer to admins what’s going on when they read through the change log.
  8. Designers like pretty things, so let’s change the background. (As a total aside… the background capplet rewrites its GConf keys constantly, a couple of times a second, whether they have changed or not, which makes the sabayon monitor flash a bunch in the background. The monitor has been interesting in revealing a lot of apps that are setting keys / saving settings files at weird times, which would be sucky in a networked environment.)

  9. And, as expected, the Sabayon monitor shows a bunch of GConf keys being changed. We’ve also gone ahead and checked the keys for adding the Gimp launcher as “mandatory”. That means users that have this profile applied will be unable to remove the Gimp launcher. Unexpectedly, there’s a bunch of “.fonts.cache” files in the list too. Sabayon has a list of files and directories to ignore, but it’s not complete yet. For now, some operations will generate a bunch of file-change noise.

  10. If we just quit now, the all-in-one Desktop profile in /etc/desktop-profiles would not have been updated. If we’re happy with the changes, we can save them back to the profile.
  11. The profile can then be distributed to computer(s) and applied to the relevant users’ homedirs. We haven’t started working on the mechanisms for this yet; Sabayon is the first piece in a bigger framework. For example, once we get the Netscape directory server code released and have a robust free LDAP server, we can potentially host e.g. the GConf settings there and push to the server instead of applying bits to actual hard drives (or NFS shares).

    In the interim, the SabayonProfile class already knows how to apply profiles onto a directory. Actually, every time you edit a profile, a new temp directory is created first, and the profile is then applied to it. Consequently, it should be pretty easy for sysadmins to cook up their own python scripts using the SabayonProfile class that work on their custom systems today.
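I haven’t pasted the real SabayonProfile API here, so treat this as a concept sketch rather than Sabayon code: applying a profile is essentially overlaying the files the profile recorded onto a target home directory.

```python
# Concept sketch only -- NOT the SabayonProfile API. It illustrates the
# "apply a profile onto a directory" idea: overlay the recorded files
# onto a homedir, preserving subpaths.
import os, shutil, tempfile

def apply_profile(profile_dir, target_dir):
    """Copy every file under profile_dir into target_dir, keeping subpaths."""
    for root, _dirs, files in os.walk(profile_dir):
        rel = os.path.relpath(root, profile_dir)
        dest = os.path.join(target_dir, rel)
        os.makedirs(dest, exist_ok=True)
        for name in files:
            shutil.copy2(os.path.join(root, name), os.path.join(dest, name))

# Apply a tiny fake profile into a fresh temp "homedir":
profile, home = tempfile.mkdtemp(), tempfile.mkdtemp()
os.makedirs(os.path.join(profile, "Desktop"))
with open(os.path.join(profile, "Desktop", "welcome.txt"), "w") as f:
    f.write("hi designers")
apply_profile(profile, home)
```

The real class also handles the GConf side and the temp-directory dance described above; the file overlay is just the piece that’s easiest to show.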

To Infinity, And Beyond!

Sabayon is just the first step in improving the manageability of GNOME. We (well, I) wanted to get something concrete landed that will help us focus on sysadmins as users, rather than designing a bunch of abstract features. It also exposes manageability features GNOME has theoretically had, but never exposed in a way people could easily exploit, which is good. I’m rambling now, again, but here are some random things markmc, dv and jdennis might be working on in the future:

  • Making sabayon solid. It’s still a very young project (its one-month birthday is tomorrow), and is rather rough around the edges. Things are falling into place pretty quickly now, but there’s a lot of work still to go just in making the current feature set work better. Some simple improvements, like expanding the “ignore changes to these directories” list, will make things a lot better. We also have a number of UI features that aren’t in the current codebase.
  • Supporting revision history on profiles
  • Figure out how Stateless Linux (in a nutshell, where the root partition is mounted read-only and synched transparently with a central source, giving the central-state advantages of thin client with the low hardware and network infrastructure costs of cheap-intel-box thick client) and Sabayon work together. Stateless Linux makes it easier for one admin to support many machines. Sabayon (particularly sabayon of the future) will make it easier for one admin to support many users. The intersection of these two is a very nice place to be!
  • We might try figuring out a short-term solution to distributing profiles to user home directories (whether those be on an NFS share or spread across a couple dozen computer hard drives).
  • A real icon and a logo, because self-respecting GNOME projects these days need kewl logos from day one. By showing the world the icon I barfed up, Diana will be forced to make us a new icon, pronto. Designers can’t stand ugly graphics.
  • Backing GConf with some sort of network store, perhaps LDAP. If we could get a drop in and run GConf server using the better-be-freed-soon netscape directory code, that would be awesome.
  • Reducing the pain of panel management and upgrading by moving to a new layout/storage model where applets are either “on” or “off”. Panel cursors allow control over where applets go. This means adding/removing/changing applets in upgrades becomes possible. Currently it breaks everything, which is a management nightmare for distros, let alone the lone sysadmin.
  • Figuring out how to improve manageability of the Frankendesktop (word thanks to Luis). OO.o and Firefox mean that GConf support alone isn’t enough for now. But if we’re tied into supporting all these systems, we may never have the ability to do something as nice and universal as Windows group policy. So one project is to figure out whether we can back OO.o and Firefox preferences with GConf. Then we can support GConf with all our heart, soul and mind in the tools and on the server.
  • Extending GConf to support features that allow small numbers of admins to support hundreds or thousands of users (such as group policy). We don’t just want to copy giant technical architectures blindly, and we haven’t started looking into this design yet, so it’s very vague for now.

Getting Sabayon

Sabayon is a little buggy atm, but it’s pretty easy to get running :-). Python source is available from the sabayon module in GNOME CVS. The major dependencies are pygtk and the gamin Python bindings (these are available in Fedora Core HEAD, but gamin-python is not in FC3, I think). I think the GConf parts will still work even if you don’t have the gamin Python bindings, but YMMV. You’ll also have to paste in two one-line text files in /etc/gconf/2 as per the README, but that’s straightforward.

And now for a less sexy blog post. I just sent this message to desktop-devel, but as per the message, I know many GNOME hackers no longer read lists completely, soooo….:

Revitalizing the Urban Center of GNOME

We need to get desktop-devel back to the useful hacker exchange it once was (probably only in the soft glow of memory, but hey). That means not only do GNOME enthusiasts need to be more restrained, but we (the core hacking community) need to get back on the list, start using shared channels like #gnome-hackers (even for hacker-to-hacker social purposes) again, etc.

Foreword: For a drawn-out post on next-generation X rendering, this blog entry is really short on eye candy. I apologize, but I’m at home, separated from my beloved eye candy, and figured I should write this while I felt motivated. As a way of forcing my own hand, I’m making a link now to a blog entry I haven’t yet written that will contain screenshots in the future :-)

Next-Generation Rendering For the Free Desktop

For the past half year or so Red Hat’s desktop team has had people working toward making accelerated graphics rendering on the free desktop badass, but doing an ass job of actually talking about what they’re doing in a larger public / GNOME context. They’ve been doing a combination of experimentation (from that cracktastic OpenGL compositing/window manager luminocity to xsnow for the Xcomposite generation) and knuckle-down no-holds-barred infrastructure work (like making Win32 GTK work on Cairo so GTK can move to cairo as the default backend). With RHEL4 kicked out the door we’ve been able to rebalance day-to-day work on GTK and X onto other people to give the nextgenren hackers free hands. Currently the full-time nextgenren team at Red Hat is Owen Taylor (gtk/pango maintainer), Søren Sandmann (x hacker), Diana Fong (visual designer), Kristian Høgsberg (x hacker) and Carl Worth (cairo maintainer).

I’m really excited because these guys’ expertise spans a broad chunk of the rendering pipeline, from the toolkit down to the X server, which is going to give this effort the ability to work from a global perspective rather than just optimizing the bits we happen to have influence over. I’m doubly excited because other companies (well, Novell at least, but hopefully others will join) are starting to invest in this effort too!

I’m hoping to drag Owen into spinning this off into an umbrella effort (a la project utopia) to help maintain a coherent story/platform even as lots of people pour work into lots of different packages and distros. There are so many different ways to attack the X rendering issue that I’m a little worried about seeing a lot of fragmentation of effort and the result not being particularly coherent. I do hope people experiment with lots of different approaches, but I also really hope that we can give developers a consistent platform for doing cool graphics on the free desktop. It would be a real shame to end up with the message in two years being “well, platform X has the feature you want, but you have to worry about also working with Y because X won’t work well on distro Z”. This sort of technology-choice morass can really discourage developers from playing with this stuff and adding support all over GNOME, which is exactly the sort of quick-fiddling big-payoff stuff I think we’ll see a lot of as soon as this stuff starts landing. In other words, let’s push toward the point where people can feel confident and start hacking up cool things for this system inside GNOME.

What It Might Look Like

A really good system needs to have lots of pieces in place all hooked together… it’s not something that can be hacked apart and replaced by arbitrary random incompatible bits (though there are points of commonality, such as OpenGL or Render). For example the pieces in one imaginable architecture – by no means the decided-upon final one or anything – might look like:

  • A sophisticated drawing layer (cairo using glitz/opengl or render as backends)
  • Stock renderers built on top of that drawing layer (pdf/ps rendering backed by cairo – such as Alex Larsson’s xpdf fork in evince, svg rendering backed by cairo, etc)
  • A toolkit that aggressively takes advantage of the features in the drawing layer, exposing them to applications and themes (gtk+)
  • A window+compositing manager that can work closely with the toolkit but essentially takes the window contents as a static image in compositing (metacity with luminocity-like GL compositing manager features fused in to deal with window effects, synching up smooth resizing, live window thumbnailing, crazy pagers, etc)
  • A hardware driver system to expose a low-level hardware accelerated rendering path to the drawing layer (opengl or render with hardware accel)

With that model we can implement things like:

  • Toolkit themes that draw with layer blending effects, delightful bezier curves, and irritating alpha gradients
  • Indiana Jones buttons that puff out smoothly animated clouds of smoke when you click on them
  • Alpha transparency in applications whenever and wherever the urge strikes us
  • Live window thumbnails
  • Hardware accelerated PDF viewers
  • Hundreds of spinning soft snowflakes floating over your screen…. without messing up nautilus
  • A photograph of a field of long dry savanna grass as your desktop background… where the grass is gently swooshed around by a breeze created by moving your mouse across the background
  • Windows that shrink, scale, and move all over the fucking place with cool animations
  • Synchronized smooth resizing so there’s no disjunct between window borders moving and the contents redrawing (you should see the demos of this in luminocity… it really makes a difference in how real the interface feels, just as double-buffering did for stuff moving)
  • A shared path between on-screen display and printing (using Cairo’s PDF/PS backends)
  • Vector icons with very occasional super subtle animations rendered in realtime…a tiny fly which buzzes around the trash every several minutes, etc… think mood animations as in Riven (which as a total random aside is still a shockingly beautiful and atmospheric game years after it came out, postage stamp sized multimedia videos notwithstanding)
  • Workspace switching effects so lavish they make Keynote jealous
  • Brush stroke / Sumi-e, tiger striped, and other dynamically rendered themes where every button, every line looks a little different (need to post shots / explanation of this stuff, but another day)
  • Progress bars made with tendrils of curves that smoothly twist and squirm like a bucket of snakes as the bar grows
  • Text transformed and twisted beyond recognition in a manner both unseemly and cruel
  • A 10% opaque giant floating head of tigert overlayed above all the windows and the desktop.
  • etc etc. In short: awesome.

And that’s a conservative approach to this: each window essentially renders into a texture, and those textures are then combined in a separate rendering pass by the compositing manager. A lot of the work Diana does challenges our assumptions about what this rendering system should be able to do. For example, something as simple as a swoosh that cuts across both the window and the titlebar is currently very tricky. Diana’s work has illustrated something that may be obvious, but seems to be forgotten in the excitement to build the One True Graphics Pipeline (this does not exist!): It’s very important to figure out many of the things you want to do with the graphics system before you get in too deep and dirty, because there are a lot of directions we could go that call for rather different architectural choices. To give one example, if we decided we really cared about having lots of animations throughout GNOME (this isn’t something we’re pushing, but we talked about it) that would dictate a very different approach from a graphics system where we really, really cared about printing. You can’t always have your cake and eat it too… especially not when you consider implementation constraints.
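For the curious, the texture-per-window model really is that simple at heart. Here’s a toy sketch in plain Python (the names are mine, not luminocity’s): each window is an offscreen buffer of premultiplied-alpha pixels, and the compositor paints the buffers over the root, back to front, with the Porter-Duff “over” operator:

```python
def over(src, dst):
    """Composite one premultiplied-alpha (r, g, b, a) pixel over another."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    k = 1.0 - sa
    return (sr + dr * k, sg + dg * k, sb + db * k, sa + da * k)

def composite(root, windows):
    """Paint each window buffer over the root pixels, back to front."""
    out = [row[:] for row in root]
    for win in windows:
        for y, row in enumerate(win["pixels"]):
            for x, px in enumerate(row):
                wy, wx = win["y"] + y, win["x"] + x
                out[wy][wx] = over(px, out[wy][wx])
    return out

# A 2x2 opaque blue root, and a 1x1 half-transparent red window at (0, 0).
root = [[(0.0, 0.0, 1.0, 1.0)] * 2 for _ in range(2)]
win = {"x": 0, "y": 0, "pixels": [[(0.5, 0.0, 0.0, 0.5)]]}
result = composite(root, [win])
```

On real hardware the inner loops are of course one textured quad per window, but the data flow (windows render offscreen, a second pass combines them) is the same, and it’s also why the window contents are static images from the compositor’s point of view.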

Another example of how prioritizing “what do we want to improve with this” can change the direction: Since taking advantage of these new toys would require a new theme system, Havoc and I have been talking about how a very different theme / widget rendering system might work with this, one that allows for custom design of any window, widget, or anything in between. One of the things we designers have been experimenting with behind closed doors is what you can do with a window’s design when it’s not drawn out of a bunch of stock widgets but you have a freer hand. (This does not mean visual inconsistency, just as a magazine can maintain a consistent look but still do a fresh layout for each page using a mix of stock and new elements.) The results can be really good. No matter how good the artist, you can only get so far designing a crude palette of some fixed number of widgets which are then used in preset arrangements. A good theme/widget rendering framework would help us negotiate this balance between re-using stock elements, and overriding the rendering of widgets at appropriate points to customize how a “Control Center Preference Page” is drawn or to simply shift the text in buttons over 10 pixels to the left. Figuring out how this stuff works, or if we just want to leave the theming issue alone (which would sort of be a shame given how much of the old flooring we’re tearing up around it), may also have a significant impact on the final architecture.

A radical model (which also avoids multi-pass rendering without opening up the security issues present in sharing direct access to existing graphics cards between processes) might involve a centrally rendered scene-graph where each client is given a subtree in which to add higher-level primitives. That could give us access to candy like pixel and vertex shaders (which we experimented with several months ago as part of rendering subtle but live backgrounds of grass fields, etc), which are attached to nodes on the render tree. Of course, there are many paths for leveraging shaders short of a full scene graph system. The scene graph model has a lot of significant concerns that are not as relevant to, say, 3D games where this model is common. Text rendering is one example.
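To sketch what the scene-graph idea means structurally (in plain Python, with entirely hypothetical names, nothing like a real design): the server owns the root, each client populates its own subtree with primitives, and a single traversal renders the whole screen, with per-node shaders as the hook for effects:

```python
class Node:
    """A node in a toy server-side scene graph."""

    def __init__(self, name, shader=None):
        self.name = name
        self.shader = shader      # e.g. a pixel/vertex shader handle
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def render(self, depth=0):
        """Depth-first traversal standing in for the real rendering pass."""
        drawn = [(depth, self.name)]
        for child in self.children:
            drawn.extend(child.render(depth + 1))
        return drawn

# The server owns the root; each client gets a subtree to populate.
root = Node("root")
client_a = root.add(Node("client:terminal"))
client_a.add(Node("rect"))
client_a.add(Node("text-run"))
root.add(Node("client:background", shader="grass-sway"))

order = root.render()
```

The appeal is that the server sees high-level primitives instead of finished pixels, so effects like the grass-field shader can be applied without another compositing pass; the catch, as noted above, is that things like text rendering fit this model much less comfortably than game geometry does.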

Owen and company have slides from the X dev conf, but the punks did them as SVGs, so unless you have their k-rad Cairo-backed SVG slide presentation program, or you’re willing to view slides in Inkscape… they’re not much good (though it is cool that you can find the slide you need using Nautilus thumbnails, but I digress) (hmmm, you can also open them in eog). Honestly, not the most inspiring OR detailed slides in the world either. I don’t think they’d had much sleep when they wrote them up. *grin*

Anyway… I’m rambling. I’ve given a couple points too much depth, most points not enough depth, many points I’ve missed, and doubtless some I’ve gotten wrong, but I knew if I waited to write the perfect post on this there’d be only more backlog of material to share… so a braindump it was. :-) I guess in the end I’m pretty excited. It feels like we’re running the last couple miles to get to the giant great-rendering payoff Keith Packard kicked off in the X world several years ago.

Code and stuff

  • Cairo I think everyone knows about… writing for Cairo in Python or Mono is especially cool. It’s really easy to get something that looks good going in short order. If you haven’t played with it, you should!
  • Luminocity is in GNOME cvs with the module name ‘luminocity’
  • Metacity compositing work is in ‘metacity’ with the branch ‘spiffifity’
  • GTK+ / Cairo integration…. gtk+ HEAD!

Apparently they also have a jhbuild setup that’ll build all this stuff from CVS in fairly short order.

And for my last point…


I promised my next blog manifesto would be handed over to The Journal,
and, behold, the latest GNOME Journal
is upon us.

Read Experimental Culture

In it, I chronicle the rise and fall of GNOME. It’s a rousing tale of
charred corpses and classical chrome starring Enlightenment as the
wayward prostitute and George Jirka as Her Royal Majesty the Queen
of England. Cameos by Beagle, PyGTK, and the cultural revolution.

In all seriousness (well, more seriousness, at least), I hope
after reading the article people will at least talk about the problem:
GNOME is sort of boring right now. When you interpret usability solely as
restraint and polishing, it can really dampen project enthusiasm over time.
All work and no play makes Jack a dull boy.

Design not Usability

The partial solution I would proffer is to focus on design instead
of usability. There’s a big difference. I’m sure there will be a big
hoopla over Apple today owing to the expo, and they deserve it. I think
it would be very hard to argue that the things Apple does are not
interesting. Part of the reason Apple is interesting is because they
encourage designs that change market norms. Good design is challenging.
I mean that two ways: both that it is hard to do, and that it tends
to shake things up.

Extreme shaftation is an oft-used and effective approach to producing
really good designs. That’s part of the reason it’s far harder to do a good
design in a non-1.0 product. In a 1.0 product you don’t have existing
users; there’s nobody to shaft. You can choose who you want to target,
and do it well (unless you position yourself, say, as a Microsoft Word
replacement, in which case you inherit the set of expectations!). As
soon as you have users, it’s very, very hard to drop things from the
requirements list. The point of the shafting isn’t to remove individual
features, or (necessarily) to increase simplicity. Simplicity sucks
if it doesn’t do anything. The point is to expand the scope of possible
designs, to let you do new and more interesting things.

Focusing on usability devolves into a sort of bean counting.
You divide up the “requirements list”, figure out how to cram all
of it in, and then try to organize the minutiae (button labels, menu
organization, etc) so it somehow still all makes sense. The result
isn’t very sexy, and is aggressively mediocre. Every point on the
requirements list pins you down. In the end the requirements list does
the design instead of you. When everybody else is producing nutso apps
with a billion buttons and no sort of consistency (cf. GNOME 1.x), the
result of usability looks pretty good. But by shedding some constraints,
losing most of the requirements, and focusing carefully you can usually
make something much better.

Shedding the Requirements List by Zeroing User Expectations (MS Office)

Microsoft Office exemplifies usability in action. They have a huge
list of features that Office must have or users will be angry. They have
done a good job of taking that massive list and producing something
sane. I am sure that every dialog and menu in MS Office is pored over
with excruciating care: “Will that wording confuse people?”, “What are
people most likely to be looking for in this menu?” etc. It shows. Office
is very polished. It’s also a very poor design.

If I were commissioned by Microsoft to dramatically improve Office,
my first step would be to position the project not as a next-generation
Microsoft Office, but as a new product. I might even start with the Office
codebase, but I sure as hell couldn’t work with the smothering
mantle of user expectations that looms over Office. Done well, I think
you’d largely displace Office in the market (assuming this was a
Microsoft product; I don’t mean to imply that anybody could just make
a better product and trounce Office in the market). So you are
meeting the goals people have in using Office. What you’re not doing is
slogging through trying to meet the specific needs people have of the
existing software. If you do that, you’ll just end up rewriting Office.

New Software Resets the Requirements List Anyway (E-mail)

It’s important to understand that most ‘feature’ or ‘requirements’
lists are a reflection of users’ needs and desires relative to
existing implementations. If you improve the model enough, most of
this is renegotiable.

E-mail is a great example of this. Let’s say the internet hadn’t
appeared until 2004. You are right now in the process
of designing the first E-mail app. Clearly users need the ability to
make tables, right? I mean, that’s “word processing 101”. And to format
them precisely, oh and insert drawings. And equations. And to edit
graphs inline, and to set the margins and page settings. etc etc.

You could easily end up with the requirements list for Microsoft Word:
a design for creating multi-page, labour-intensive, laid-out documents.
These are the requirements you’d extract from the “word processor + postal
mail” model. But E-mail totally renegotiated this. Short little
messages are the norm, not multi-page documents. You receive many dozens of
mails a day, not several. There’s no question that being able to
insert a table here and there would be nice, but it’s by no means
a requirement. E-mail’s one compelling feature, instant and effortless
transmission of text, renders the old model’s “must-have requirements”
list moot.

Dear Professor Harris

January 17, 2005

Dear Professor Harris, your course has been remarkably useful to me. I
recently discovered I can view archived copies
of your past lectures through Stanford Online. Reliving those memories
has helped me recapture something I had lost since leaving your class.
Now whenever I find myself off-center, struggling with my personal demon,
I log on to the website and help is only a few key clicks away.
(P.S. The issue I’ve been struggling with is insomnia.)

gnome-blog 0.8 released

January 12, 2005

Just released gnome-blog 0.8. New features include drag-and-drop uploading of images (to compatible blog software), spell checking, more blogs supported, and proxy support. Currently we are known to support: pyblosxom, movable type, and wordpress. It should work with any MetaWeblog or bloggerAPI compatible blog, but YMMV.

See the gnome-blog web site for more info, tarballs, rpms, etc

I wrote this article the better part of a year ago and forgot about it. I just noticed it was pushed live:

Improving Usability: Principles and Steps for Better Software

Actually, I don’t see any steps in there. Apparently I was also interested in the history of design at the time (which is a cool topic, really, so I guess I’m still interested). But I enjoyed rereading it, and it’s nice to notice that, while I would have written the article from a very different angle today, the principles are still the same. You know it’s been a good year when your principles are still the same at the end of it. :-)

Executive Summary:
The article covers a number of design principles, situating them in the historical context that made the principle relevant. The principles are:

  1. User Knowledge Principle Figure out who your user is, what they do, and what they need.
  2. Feature Bloat Principle Recognize the cost of each feature you add and each exceptional use case you accommodate.
  3. Focus Principle Good design requires editing. Focus the design on one principal class of users.
  4. Abstraction Principle Keep track of the conceptual model your software requires, and work at making it simpler. Reduce cognitive friction.
  5. Direct Manipulation Principle Enable the illusion of direct manipulation when there is a reasonable physical metaphor.

Then the article dives through four of the most important phases (I suppose this is the wrong word since they often overlap, repeat, occur simultaneously, etc) of software design.