Neat hacks

I’ve been doing optimization tasks recently. The first one is covered on the gtk mailing list and bugzilla. It involves a filechooser, many thousands of files and deleting a lot of code. But I’m not gonna talk about that now.

What I’m gonna talk about started with a 37MB Firefox history from my Windows days and Epiphany. Epiphany deletes all history after 10 days, because according to its developers the location bar autocomplete becomes so slow it’s unusable. As on Windows I had configured the history to never delete anything, I had a large corpus to test with. So I converted the 100,000 entries into Epiphany’s format and started the application. That took 2 minutes. Then I started entering text. When the autocomplete should have popped up, the application started taking 100% CPU. After 5 minutes I killed it. Clearly, in its current state, deleting all old history entries is a good thing.
So I set out to fix it. I wrote a multithreaded incremental SQLite history backend that populates the history on demand. I got response times of 2-20ms in my command-line tests, which is clearly acceptable. However, whenever I tried to use it in a GtkEntryCompletion, the app started “stuttering”: keys I pressed didn’t immediately appear on screen, and the entry completion lagged behind. And all that without taking a lot of CPU, loading stuff from disk or anything else noticeable in sysprof. It looked exactly like on this 6MB gif – when you let it finish loading at least. And that’s where the neatest of all hacks came in.

That hack is documented in bug 588139 and is what you see scrolling past in the terminal in the background. It tells you which callbacks in the glib main loop take too long. So if you have apps that seem to stutter, you can download the patch, apply it to glib, LD_PRELOAD the lib you just built and you’ll get printouts of the offending callbacks. I was amazed at how useful that small hack is, because it told me immediately what the problem was. And it was something I had never thought would be an issue: redraws of the tree view.
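
For illustration, here’s the idea behind the hack – this is just a minimal sketch I made up, not the actual patch from the bug, and the 50ms threshold is arbitrary: wrap a callback, time it, and complain when it overruns.

#include <glib.h>

typedef struct {
  GSourceFunc func;
  gpointer    data;
} TimedClosure;

static gboolean
timed_dispatch (gpointer user_data)
{
  TimedClosure *closure = user_data;
  gint64 start = g_get_monotonic_time ();
  gboolean result = closure->func (closure->data);
  gint64 elapsed = g_get_monotonic_time () - start;  /* microseconds */

  /* arbitrary threshold: complain about anything over 50ms */
  if (elapsed > 50 * 1000)
    g_printerr ("callback %p took %" G_GINT64_FORMAT "ms\n",
                (gpointer) closure->func, elapsed / 1000);
  return result;
}

/* instead of g_idle_add (my_func, data), you'd add the wrapper:
 * g_idle_add (timed_dispatch, closure) */

The patch does this inside glib’s main loop dispatch, so every callback gets timed without changing any application code.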

And that gets me back to the beginning and my filechooser work, because stuff there was sometimes still a bit laggy, too. So I ran it with this tool, and the same functions were showing up. It looks like a good idea to look into tree view redraws and why they can take half a second.

vision

It’s early in the morning, I get up. During breakfast I check the happenings on the web on my desktop – my laptop broke again. People blog about the new Gnumeric release that just hit the repositories. I click the link and it starts. I play around in it until I need to leave.
On the commute I continue surfing the web on my mobile. I like the “resume other session” feature. Turns out there’s some really cute features in the new Gnumeric that I really need to try.
Fast forward, I’m at the university. Still some time left before the meeting, so I can play. I head to the computer facilities in the lab, log in and start Gnumeric. Of course it starts up just as I left it at home: same settings, same version, same document.
20 minutes later, I’m in the meeting. As my laptop is broken, I need to use a different computer for my slides. I grab a random one and do the usual net login. It looks just like at home. 3 clicks, and the right slides are up.

This is my vision of how I will be using my computer in the future. (The previous paragraph sounds awkward, but describing ordinary things always does, doesn’t it?) So what are the key ingredients here?

The key ingredient is ubiquity. My stuff is everywhere. It’s of course on my desktop computer and my laptop. But it’s also on every other computer in the world, including Bill Gates’ cellphone. I just need to log in. It will run on at least as many devices as current web apps like GMail do today. And not only does GMail run everywhere, but my GMail runs everywhere: it has all my emails and settings available all the time. I want a desktop that is as present as GMail.

So here’s a list of things that need to change in GNOME to give it ubiquity:

  • I need my settings everywhere

    Currently it is hard to make sure one computer has the same panel layout as another one. I bet at least 90% of people configure their panels manually and download their background image again when they do a new install. So roughly every 6 months. Most people don’t even know that copying the home directory from the old setup solves these issues. But I want more than that: copying the home directory is a one-time operation, but stuff should be continuously synchronized. In GNOME, we could solve most of this already by making gconf mirror the settings to a web location, say gconf.gnome.org, and sync whenever we are connected.

  • I need my applications everywhere

    I guess you know what a pain it is when you’ve set up a new computer and all the packages are missing. Gnumeric isn’t installed, half the files are missing thumbnailers, bash tells you it doesn’t know all those often-used commands and no configure script finds the pkg-config files it needs. Why doesn’t the computer just install the files when it needs them? Heck, you could probably implement it right now by mounting /usr with NFS from a server that has all packages installed. Say x86.jaunty.ubuntu.com? Add a huge 15GB offline cache on my hard disk using some sort of unionfs. Instantly all the apps are installed and run without me manually installing them. As a bonus, you get rid of nagging security update dialogs.

  • I really need my applications everywhere

    Have you ever noticed that all our GNOME applications only work on a desktop computer? GNOME Mobile has been busy duplicating all our applications for mobile devices: different browser, different file manager, different music player. And of course those mobile applications use different methods to store settings, so even if the settings were synced perfectly, you would still have to configure everything again. Instead of duplicating the applications, could we please just adapt the user interfaces?

When looking at this list, and comparing it to the web, it’s obvious to me why people prefer the web as an application delivery platform, even though the GNOME APIs are way nicer. The web gets all of those points right:

  • My settings are stored on some server, so they are always available.
  • Applications are installed on demand – without even asking the users. The concept is called “opening a web page”.
  • Device ubiquity is the only thing the web doesn’t get perfect by default. But a) the browser developers are hard at work ensuring that every web site works fine on all devices, no matter the input method or the screen size and b) complex sites can and do ship user interfaces adapted to different devices.

The web has its issues, too – like offline availability and performance – but it gets these steps very right.

So how do we get there? I don’t think it’s an important question. It’s not even an interesting question to me. The question that is interesting to me is: Do we as GNOME want to go there? Or do we keep trying to work on our single-user oriented desktop platform?

decision-making

I’ve recently been wondering about decision-making – namely, how decisions actually get made. There were 5 questions in particular where I could not quite understand why things worked out the way they did, and I was unhappy with the reasons given. The questions were:

  • Why is GNOME switching to git?
  • Should the GNOME browser use Webkit or Mozilla?
  • Why has GNOME not had any significant progress in recent years but only added polish?
  • Why did Apple decide to base their browser on KHTML and not on Mozilla?
  • What is Google Chrome trying to achieve – especially on Linux?

If you have answers to any of these questions, do not hesitate to tell me, but make sure they are good answers, not the standard ones.

Because the first thing I realized was that all of these questions have been answered, usually in a lot of technical detail providing very good reasons for why a particular decision was taken. But when reading about the GNOME git migration it struck me: there were equally good technical reasons to switch to almost any other version control system. And I thought I knew the reasons for the git switch, but they were not listed. So are good technical reasons completely worthless? It turns out that for the other questions there are equally good technical reasons for the opposite answer: there were good reasons for Apple to base their browser on Mozilla or to buy Opera, for GNOME to be more innovative, or for Google to not develop a browser and invest in Mozilla instead – just none of them are usually listed. Plus, I knew that if different people had decided on the GNOME vcs switch, we would not be using git now.

This led me to think that decisions are personal, not technical. People decide what they prefer and then use or make up good technical reasons for why that decision was taken. But this in turn means technical reasons are only a justification for decisions. Or even worse: they hide the real reasons from others. And that in turn means: don’t look at the given reasons, look at the people.

But apparently people will not tell you the real reasons. Why not? What were my reasons for preferring git as a GNOME vcs? I definitely did not do a thorough investigation of the options; I didn’t even really use anything but git. I just liked the way git worked. And there were lots of other people using it, so I could easily ask them questions. So in the end I prefer git because I used it first and others use it. Objectively, those are not very convincing reasons. But I believe those are the important ones for other people as well. I think GNOME switched to git because more hackers, in particular the “important” ones, wanted git. Why did it not happen earlier? I think that’s because a) they wanted to hack and not sysadmin their vcs of choice and b) they needed a good reason for why git is better than bzr to not get into arguments, and Behdad’s poll gave them one. So there, git won because more people want it. Why do more people want it? Probably because it’s the vcs they used first and there was no reason to move away. The only technical reason in here was “good enough”.

I don’t think technical reasons are completely unimportant. They are necessary to make an option reasonable. But once an option is a viable one, the decision process is likely not driven by the technical reasons anymore. After that it gets personal. With that in mind, things like Enron or the current bailout don’t seem that unbelievable anymore. And a lot of decisions suddenly make a lot more sense.

And because I asked the other questions above, let me give you the best answers I currently have for them.

  • First of all, GNOME should use Webkit and not Mozilla, because the GNOME people doing the actual work want to use that one. There are a lot of people who prefer Mozilla, but they don’t do any work, so they don’t – and shouldn’t – matter. This was the easy question.
  • GNOME does not have any innovation because GNOME people – both hackers and users – don’t want any. Users want to use the same thing as always and most GNOME hackers these days are paid to ensure a working desktop. And they’re not students anymore so they prefer a paycheck to the excitement of breaking things.
  • For Safari I don’t have a satisfactory answer. I’m sure though it wasn’t because it was a “modern and standards compliant web browser” – otherwise they would have just contributed to KHTML. Also, the Webkit team includes quite a few former Mozilla developers, who should feel more at home in the Mozilla codebase. So I guess again, this is personal.
  • For Google Chrome it’s the same: I don’t have a real clue. Google claims it wants to advance the state of the art in browsers, but this sounds backwards. Chrome would need a nontrivial market share to make it possible for their webapps to target the features of Chrome. Maybe it’s some people inside Google that wrote a browser as their 20% project. Maybe Google execs want more influence on the direction the web is going. I don’t know.

browsing in GNOME

This is a loose follow up to GNOME and the cloud. You might want to read that post first.

There are 2 ways to provide access to the web for GNOME. One is to integrate into an existing framework. The other is to write our own. I think I can safely ignore the “write our own browser” option without anyone complaining.
I know 2 Free projects providing a web browsing framework that we could try to integrate into GNOME: Webkit and Mozilla. I’ve recently talked to various people trying to figure out what the best framework to use would be. As it turns out, there is no simple answer. In fact, I didn’t even succeed in coming to an opinion for myself, so I’ll discuss the pros and cons of both options and let you form your own opinion.

current browser usage in GNOME

GNOME’s web browsing applications currently use a Mozilla bridge (epiphany, yelp, devhelp) or the custom gtkhtml library (evolution). However, none of these are actively developed anymore, because roughly a year ago we started the switch to Webkit by trying to build the webkit-gtk library. This switch has not been completed yet: it was originally scheduled to be delivered with GNOME 2.26, but has been postponed to GNOME 2.28.
It should also be noted that vendors shipping GNOME mostly ignore the GNOME browser epiphany and ship Firefox instead, which is based on Mozilla. There is no indication that this will change soon.

Mozilla and Webkit in a nutshell

Mozilla is a foundation trying to Free the web. Most of the work done by the foundation’s members is focused on the Firefox web browser, which stays very cross-platform by shipping almost all required functionality internally, including its own UI toolkit. A smaller side project is XULRunner, which aims to provide the framework underlying Firefox for embedding into other applications. It is not related to GNOME in any way.

Webkit is a fork of KDE’s KHTML browser engine, created by Apple for its Safari browser and for adding web functionality to its desktop. It is also used by Google Chrome and various other projects (the Nokia S60 browser, Adobe AIR, Qt).
It only contains a web browsing component and a Javascript interpreter and requires each port to provide the integration into its platform: among other things a rendering library, a network layer and an API for actually writing applications. The webkit-gtk project aims to provide this for GNOME.

development drivers

Mozilla’s development is driven in large part by employees of the Mozilla Corporation, which was founded and is owned by the Mozilla Foundation with the sole purpose of developing the Firefox browser.
Webkit’s core has been developed in large part by Apple after forking from KHTML. Other browsers built on it have usually been forks of the Webkit codebase. Recently this changed when the Google Chrome and Nokia Qt teams started to merge their code back and to base their work on top of Webkit directly. I do expect to see more formalization of the governance model during the next year, though I have not seen any indication of this yet.

community

Both Mozilla and Webkit have an active development community that is roughly the size of GNOME’s (that’s estimated, I could be off by a factor of 2-5). A lot of them are open source developers. Members of the different projects know each other well; they hang out on the same IRC channels and mailing lists. A lot of the current Webkit developers are former Mozilla contributors.
Mozilla has also managed to gain respect in the web development community, both by sponsoring development of web tools and by creating channels of communication with them. Add-on development is a strong focus, too: Mozilla encourages sophisticated customizations that drive the browser. Last but not least, they reach out to end users directly. As a result, Mozilla and in particular Firefox have become strong brands.
In Webkit country, outreach to web developers and end users is done by the browser vendors themselves. Google and Apple are well known brands themselves and market their browsers using traditional channels. As such, the name Webkit itself is relatively unknown and attracts relatively few external contributors from the web and add-on development communities. However, Webkit encourages porting the core for different purposes and as such attracts a lot of browser developers, including the webkit-gtk port.

compatibility with the Web

Mozilla probably has 20-30% of the world-wide market share, Webkit has 5-15%. Both numbers are big enough to ensure that websites test for the browsers, so that in general, websites should work.
However, no website will check for GNOME, so all features that are provided by us will not be tested. This means that all features we maintain exclusively for GNOME can cause issues if they don’t work exactly the same way as in the other backends. And from Swfdec development I know that this is incredibly hard to achieve, as it requires not only an identical featureset, but also identical performance characteristics. This is less of a problem with Mozilla, where the GNOME-specific code is pretty small, but might become a big problem with Webkit, where a lot of code is written solely for the GNOME backend.

featureset

First a note: this point is subjective; you are free to have a differing opinion. If you do, please explain yourself in the comments.

Both browsers implement a similar set of features. I don’t know of a single (non-testcase) website that provides reduced functionality in either of the browsers. Still, the codebases are very different. Mozilla is a pretty old codebase that grew into what it is today, and while there have been attempts to clean it up at times, there is still a lot of cruft around and it is easily the bigger of the two. I know lots of GNOME hackers who use Mozilla code as an example of scary code. On the other hand, Mozilla makes up for this by providing a lot of small niceties that Webkit does not and that make for a smoother experience in general. An example is kerning across tags: L”L”L (in Mozilla, the L and ” glyphs will overlap; in Webkit they won’t). Another example I know of is windowless browser plugins.
Webkit code in general is simpler, more compact and easier to understand. I attribute this to KHTML being newer and focused on maintainability and clarity. It shows in Webkit’s excellent performance. However, this quality is not uniform: parts that are not maintained by the core team are often very ad hoc, lack design and bitrot easily. In short, everything out of the ordinary can be very ugly. Also, I am not sure the old goals are still held in the same high esteem by the current maintainers. Code quality doesn’t sell, after all.

code handling

Mozilla is written in C++ and uses a heavily customized version of autotools as its build system. The source code has recently moved to Mercurial; older branches still use CVS. Webkit uses Subversion with an official git mirror. The core is written in C++. Each port adds its own build system and API on top; Apple’s Webkit for example uses Objective C and Xcode, webkit-gtk uses GObject-style C and autotools.
Both Webkit and Mozilla use a review process where all features and bug fixes have an attached bug in the respective bugzillas (Webkit and Mozilla) and all patches need to be reviewed by at least one reviewer. Webkit uses buildbots to check that the various ports and branches still compile; this pales in comparison to Mozilla’s extensive QA, which also includes performance regression testing and important policies that govern checkins.
Releases in Mozilla are handled centrally: all Firefox ports are released with a common version number at the same time. This is built on top of a core branch, which other projects built on Mozilla (Camino, Epiphany) use. This allows easy matching of browser version to feature availability. Webkit releases are not coordinated between the different ports, which maintain their own branches, so Safari, Chrome, webkit-gtk and Webkit master may all support different features.

initial work required to integrate

The next two paragraphs again are heavily subjective. They assume that GNOME wanted to include a library for integrating the web.

For Webkit, work is being done to integrate with GNOME. The problem is that a lot of base libraries are required that do not exist today. So not only does webkit-gtk still need to gain basic features, all the new code also requires extensive testing for conformance to standards, stability, security and performance. None of this is being done today.
Were such a library to be based on Mozilla, much less code would need to be written. However, a lot of integration work would need to be done so that GNOME had proper access to the library functions (like manipulating the DOM, HTTP access or the Javascript interpreter). A lot more work would be needed to convince the Mozilla developers to support such a stable API, as currently Mozilla does not guarantee stability of any internal APIs, and even add-ons have to be adapted for every major Firefox release.

Regardless of which project were to be chosen, my expectation would be that if we were to start now, it would take 5 experienced GNOME developers roughly a year to get this work to a point where it would hold up against today’s requirements of the web. For Webkit, this would mostly require writing source code. For Mozilla, both writing code and evangelizing inside their community would be necessary.

predicted maintenance required once project is running

If the project in the above paragraph were done today, continuous maintenance of this library would be required. Not only for bugfixing and adding features, but also to keep up with the constant change in the browser cores, which refactor for performance and security or add new features for new web standards. I’d expect the current GNOME browser team to be able to do this.
However, we would need to build up respect in the browser community, both to ensure our work stays relevant and to drive change in the browsers’ cores and the standards community to help our goals. Currently no GNOME developer that I know of participates in any W3C working group. This would likely require people working on this project full-time, as respect usually attaches to individual people. Also, contributing to the core requires not only knowledge of the large code base, but also keeping up with the multiple different ports. I don’t see any GNOME developer doing this currently.

view towards GNOME

Mozilla and Webkit developers pretty much share their opinion of GNOME: it doesn’t matter. There are two reasons for this. The first is that GNOME does not have a significant enough market share to be interesting to them. Even though they target the Linux desktop, none of the browsers target GNOME in particular; they try to ship a browser that works everywhere on Linux, be it GNOME, KDE or CDE.
The second and more interesting reason is that the browser development communities think desktops are going to die sooner or later and the browser will be the only application that you run on your computer, so developing “desktop stuff” is not in any way interesting to them. Just like the GNOME community treats the web, the web developers treat the desktop.

view towards Free software

Mozilla has this in their manifesto:

Free and open source software promotes the development of the Internet as a public resource.

This is shown by the Mozilla code being tri-licensed under GPL 2 or later, LGPL 2.1 or later or MPL.
Webkit is de facto licensed under the LGPL 2 or later, as that is the license of the original KHTML code. New contributions are encouraged under the BSD license, but code licensed as LGPL 2.1 is accepted, too. All Apple and Google contributions are BSD-licensed. However, while the companies follow the letter of those licenses, they don’t follow the spirit: both Android phones and the iPhone contain a Webkit browser that is not updatable by users.

conclusion

This is the short version of what I evaluated. In a nutshell: Mozilla is the adult browser engine with clearly defined rules and goals, while Webkit is the teenager that is still trying to find its place in the browser world. I don’t know which one would benefit GNOME more, so feel free to tell me in the comments. Also, if you feel things in here are factually wrong, feel free to point them out there or privately.

read this – now

If you know what valgrind is, you want to make sure you read this blog post – now.

That is all.
Well, apart from the fact that I’d like to have Nicolas’ blog syndicated on Planet GNOME, but don’t dare to make an official request, as he doesn’t have a lot to do with GNOME – or does maintaining valgrind count?

GNOME and the Cloud

Let’s start with a little thought experiment: if you had to choose between running only a browser and no other application, or all applications but no browser, what would you choose? Sure, using wget and vim instead is fine, as is running a vnc client in your browser window.

Ok, now that you’ve thought about it a bit, here’s what I’ve been thinking about: Why is there such a huge distinction between our browsers and the rest of our desktops? Especially because the browser is roughly as important as the rest of our software taken together.
Or from a GNOME point of view: we’re doing a very bad job at integrating the web into the desktop. Apart from epiphany (and the weather applet), what applications does GNOME ship that speak HTTP?

I believe there are two main reasons for this. One is that GNOME developers are not “web-enabled”. We’re a bit like Microsoft in the early 90s: we focus on the local computer and ignore the internet. It’s nice that someone rewrites session management, but why not keep the same session across computers? Why does dconf (or GConf) not store my settings in the cloud, so I get the same settings on my university login? (I had to think hard to come up with these; I bet there are lots of other, probably more obvious examples, but as I said, I’m not web-enabled.) We seem to completely ignore these issues and think locally.

The other, related reason is that we don’t have the software to do this. The core GNOME libraries have no support for the Internet; there is no glib_open_tcp (char *hostname, guint port). We don’t even have an HTTP library that knows about cookies.
This is slowly beginning to change with the work done on Gnio (did I mention that the name sucks – can we call it GNetwork?), libsoup or the RTP framework inside GStreamer. But it’s a long way from GNOME’s current state to a full Internet experience, in particular getting a web browsing framework into GNOME. Note that I’m not talking about a web browser. I’m talking about a full-featured framework that allows every application to display (partial) web pages, execute XHRs, upload photos to flickr, run user-downloaded scripts or otherwise integrate with the web. In fact, even the browser situation is worse than ever, as GNOME is currently trying to switch to a different framework.
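
To illustrate the kind of API I mean, here’s a minimal sketch of that missing function, written against the GIO socket classes that grew out of Gnio – the wrapper name is the made-up one from above, with a slightly adapted signature:

#include <gio/gio.h>

/* hypothetical convenience wrapper around GSocketClient */
static GSocketConnection *
glib_open_tcp (const char *hostname, guint16 port, GError **error)
{
  GSocketClient *client = g_socket_client_new ();
  GSocketConnection *connection =
      g_socket_client_connect_to_host (client, hostname, port,
                                       NULL /* cancellable */, error);

  g_object_unref (client);
  return connection;  /* NULL on error */
}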

So how are we gonna marry GNOME with the web?

Australia

So I’ve recently been to LCA and FOMS. It was definitely an interesting experience, because I met lots of new people from the other side of the world. I’m pretty sure I won’t go back for another LCA though – the journey is way too long and there are enough awesome conferences in closer proximity.

However, while I was there, I appended a 3-week holiday in Australia. And Australia in general is awesome. I think it’s my favorite place to live in apart from home, surpassing all the other places I ever visited. To put it in one sentence: Australia feels like the American dream done right. Laid back instead of presumptuous and Wowcow instead of McDonald’s (you should really drink a moothie there if you ever get a chance to go to Warringah Mall), but with all the advantages like outlet malls or Surfers Paradise. Oh, and they have the cuter animals here: platypuses, Tasmanian devils and even wombats are way cuter than the other animals.

Company seeks Company

So I spent the holidays looking at webkit-gtk. It’s an interesting mix of code, but it has issues.

  • Quite a few functions still just read NotImplemented();
  • The existing code is very rough. From the looks of it, this is related to missing in-depth knowledge of how the APIs behave, both the Webkit APIs and the GTK APIs.
  • The whole process, in particular reviewing patches, has a pretty low bus factor.

I’d like to change that. But it’s not very useful if I try that as a side project: both keeping up with all the APIs and becoming respected enough to be a reviewer require a significant time investment. So if I do this, I need to do it as a day job. Which is why I’m posting this here:

If you happen to own a company that wants someone doing Webkit work, please drop me a line.

warning options

I’m a very vocal person when it comes to enabling warnings for the compilation of C code. I’m also very vocal about using -Werror, but that’s a longer topic I won’t go into here – I guess I could give a whole talk about why using -Werror is critically important for the quality of your code. No, I’m gonna talk about the warning flags I’m using in Swfdec.

An important thing to note about warnings is that they flag code that is perfectly valid. So enabling warnings restricts you: the compiler complains even though you wrote perfectly valid C code. What warnings do, however, is help you avoid common errors and force you to write cleaner code. They are especially good at making sure you write clean code from the start instead of “I’ll clean that up later” code.

-Wall
This is the default list of compiler warnings that even a lot of build scripts enable automatically. You should always enable it, as new warnings are only added to it when they clearly show bad code.

-Wextra
This is another set of compiler warnings. Most of them are very useful, some of them aren’t. I prefer to enable it and disable the not-so-useful ones (see below), as I profit from new warnings being added automatically.

-Wno-missing-field-initializers
This warning complains when you initialize structs or arrays only partially. It’s usual GLib coding style to do just that:

GValue val = { 0, };

So I’ve disabled the warning. It’s important to know that partially initialized values do indeed initialize the rest of the value to 0, so the other values are by no means undefined.

-Wno-unused-parameter
This warns about parameters in functions that aren’t used in the function body. As vfuncs in classes and signal callbacks often have parameters that you don’t use, I prefer to disable this warning. (Note to self: file a bug against gcc so static functions that are used by reference don’t get flagged?)
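
A typical case – the callback and its names are made up, but the shape is dictated by the signal signature:

/* both parameters are required by the "clicked" signal signature,
 * but neither is used, so -Wunused-parameter would flag them */
static void
on_clicked (GtkButton *button, gpointer user_data)
{
  g_print ("button was clicked\n");
}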

-Wold-style-definition
This is a mostly useless warning meant to avoid really old code (and associated bugs). It disallows function definitions like

void
foo (x)
  int x;
{
}

If you’ve ever seen this, you should consider yourself seriously oldschool.

-Wdeclaration-after-statement
This is a matter of taste. This warning enforces that all declarations are at the top of a function and you don’t do stuff like

for (int i = 0; i < x; i++)

It also ensures portability to old-school compilers.

-Wmissing-declarations
Warns if a non-static function is not declared previously. As non-static functions should always be exported in headers, the header will have the previous declaration. So this warning finds two important cases:

  • You forgot to make the function static even though it should be. This happened very often to me before using this warning.
  • You forgot to include the header that declares this function. This can cause very nasty side effects, when you change the function parameters but not the prototype. You won't get a warning if you forget the declaring header.

-Wmissing-prototypes is a less strict warning of the same type and is included in Swfdec's flags just because we can.

-Wredundant-decls
Warns if a declaration appears twice. This usually means you have either forgotten the #ifndef MY_HEADER_H include guards in your header or you moved functions to a different header and forgot to delete the old declaration.
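
In case the term is unfamiliar, these are the include guards in question (the header name is just an example):

#ifndef MY_HEADER_H
#define MY_HEADER_H

/* declarations go here, compiled only once per inclusion chain */
void my_function (void);

#endif /* MY_HEADER_H */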

-Wmissing-noreturn
Warns if functions that don't ever return (such as exit) are not flagged as such. Use G_GNUC_NORETURN to mark those functions. This warning is mostly useless, but for me it managed to detect cases where I screwed up my ifs so that all code paths ended at a g_assert_not_reached() statement.
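
A small made-up example of how such a function gets marked:

/* die() never returns, so tell the compiler about it */
static void die (const char *msg) G_GNUC_NORETURN;

static void
die (const char *msg)
{
  g_printerr ("%s\n", msg);
  exit (1);
}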

-Wshadow
A really really useful warning (that emits way too many warnings in gcc 3.x). It warns about code like this:

int x = 3;
/* lots of code */
if (foo) {
  int x = 5;
  /* lots more code */
  printf ("%d\n", x);
}

You might have wanted to print the x you declared first, not the second one. This is one of those bugs you'll have a very hard time finding. It's a bit annoying that functions such as index or y0 are declared in standard headers, but you learn to avoid them.

-Wpointer-arith
Warns about adding to or subtracting from void pointers. This usually works the same as with char pointers, but the warning enforces that you cast manually. More importantly, it catches cases where a pointer turns out to be a void pointer even though you didn't intend it to be, especially in macros. A stupid example:

MyExtendedGArray *array = some_value;
MyType *nth_element = array->data[n];

This is actually the reason why g_array_index() takes the type of the array data.

-Wcast-align
This warning is useful for portable code. Some weird processors (read: ARM) need values aligned to the right addresses, and just doing int i = *(int *) some_pointer; can fail there. gcc will warn you about such code, but unfortunately only for the architecture you're compiling for. But still, if you include this warning, all the Nokia people interested in your code will tell you about it and then you can fix it.
Important side note: You can avoid this warning if you know that the alignment is correct by casting to an intermediate void pointer, like int i = *(int *) (void *) some_pointer;

-Wwrite-strings
Another very important warning. Strings like "Hello World" are constant in C and must not be modified. This warning makes all these strings be treated as const char * instead of just char *. It helps a lot in avoiding stupid mistakes, but can require quite a bit of constness cleanup when retrofitted onto existing code. It's well worth the work, however.
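
In practice it boils down to these two (made-up) lines:

const char *greeting = "Hello World";  /* fine with -Wwrite-strings */
char *s = "Hello World";               /* warns: initialization discards const */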

-Winline
Warns if a function declared as inline cannot be inlined. This warning should never trigger, because a function should never be declared inline: whether to inline or not is a decision for the compiler, not for some coder. So never use the inline attribute. Not even when you are writing boot sequences in an operating system.

-Wformat-nonliteral
Warns about stuff like printf (foo);. You should use printf ("%s", foo); instead, as the variable foo can contain format specifiers such as %s. The format argument to printf should always be a constant string.

-Wformat-security
Warns about usages of printf() and scanf() that can cause security issues. You absolutely do want this warning.

-Wswitch-enum
Warns if a switch statement does not have a case statement for every value in an enumeration. This warning is invaluable for enumerations that you intend to add values to later on, such as cairo_format_t, because adding the new value will cause warnings in all switch statements using the enum instead of silently falling through to the default case – which (in my code at least) is often an assert_not_reached. You can avoid this warning by casting the enumeration to an int: switch ((int) value)
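
A sketch with a made-up enumeration: once you add a FOO_C value to the enum later, the switch below starts warning because FOO_C has no case statement, even though the default would catch it.

typedef enum { FOO_A, FOO_B } FooType;

static void
handle_foo (FooType value)
{
  switch (value) {
    case FOO_A:
      /* handle A */
      break;
    case FOO_B:
      /* handle B */
      break;
    default:
      g_assert_not_reached ();
  }
}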

-Wswitch-default
Warns whenever there's no default case in a switch statement. I tend to forget these quite often. If you don't need a default case, just do:

default:
  break;

-Winit-self
Warns if you initialize a variable with itself, like MySomething *sth = MY_SOMETHING (sth); (instead of MySomething *sth = MY_SOMETHING (object);). This was one of my favorite errors before I started using this warning.

-Wmissing-include-dirs
Warns if a specified include directory does not exist. This usually means you need to improve your autotools-fu, because you seriously messed up.

-Wundef
Warns if an undefined macro is evaluated in an #if directive. This not only detects typos, but is most useful for clean code: it catches leftover #if checks for macros that you have since renamed or deleted.

-Waggregate-return
Warns if a struct or union is used as a return value. Either return a pointer instead, or preallocate the value in the caller and pass it to your function to be filled in.
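
A made-up illustration of the two alternatives:

typedef struct { double x, y; } Point;

Point get_point (void);          /* -Waggregate-return warns when this
                                  * is defined or called */

void  get_point2 (Point *point); /* fine: the caller provides storage */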

-Wmissing-format-attribute
Warns if a function looks like it should take printf-style arguments but isn't flagged with G_GNUC_PRINTF.
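
The annotation looks like this (the logging function is made up); the two numbers are the positions of the format string and of the first vararg:

static void my_log (const char *format, ...) G_GNUC_PRINTF (1, 2);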

-Wnested-externs
Detects if you have used extern inside a function. The workaround: include the right header, or put the extern declaration before the function.
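
A made-up example of what gets flagged:

static void
frobnicate (void)
{
  extern int global_counter;  /* -Wnested-externs flags this */
  global_counter++;
}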

You should probably read the definitive guide to compiler warnings, maybe you'll find more useful warnings that I've overlooked. You'll likely have a lot of a-ha experiences while reading it.

If you've now decided you want to use these warnings in your own code, what's the best way to do it? The best way I know is to grab David's awesome m4 script and put it in your m4 scripts directory. After that, just use AS_COMPILER_FLAGS like this in your configure.ac:

AS_COMPILER_FLAGS(WARNING_FLAGS, "-Wall -Wswitch-enum -Wswitch-default")

This will check each of the flags you put there, put all the ones that work into the WARNING_FLAGS variable and ignore the ones that don't. So it'll work perfectly fine with older or non-gcc compilers. And if you ever get annoyed by a flag or want to add new ones, you just edit the list.

on loneliness

I had a major breakthrough. Actually two. The first one was that I managed to watch a video via RTMP using Swfdec (Yahoo music and BBC use that). That got me really excited so I went to our IRC channel to tell people. This was when I had the second major breakthrough.

I realized at that moment that there was no one anywhere who wanted to share my happiness. No one saying “congrats”, let alone asking “How did you do that?”. I realized I’m a lonely hacker these days.

This got me wondering why I’m still working on Swfdec, and the answer was that I don’t know. Originally it started because I wanted to watch Youtube with free software. I’ve been able to do that for almost 2 years now. Then it was exciting to figure out how Flash worked and how hard it’d be to write a Free player. I know pretty much everything I want to know about Flash today. The last year I’ve worked on Swfdec because other people were interested in it. But those people went away to do other things, and these days I’m the only one left working on it. And as I mentioned above, that’s not fun.

So I’m going to change something. I’m not sure what it will be – it’s equally likely that I continue working on Swfdec as that I do something else entirely. We’ll see. If you know something you’d like me to do, mail me. :)