Open Source will scale

I’ve always been a bit scared of the day when 99 percent of the world uses GNOME. It’s always been said that huge numbers of people will come and post questions to the mailing lists or file bugs, and the current developers won’t be able to handle the load. This sounds like a very convincing argument. If currently 1% of the world uses GNOME and it suddenly were 100x as many, we’d be at 40 million bugs right now. Even Andre wouldn’t be able to keep up anymore.

As of a few days ago, I’m not scared anymore. We’ll easily scale to much more than 100% of the world’s users. And the reason for that is simple: most people won’t come to us. Most people won’t know or care that there is a way to complain about something and will instead moan about missing features in their favorite internet forum. I’d even go so far as to say that the number of bugs wouldn’t increase at all if we suddenly had 99% market share, because everyone interested in working on GNOME already does.

What made me say this? The online forums of/for distributions. I tend to google Swfdec regularly – particularly after releases – to see what the public perception of it is; it helps a lot in identifying issues. Lots of people talk in those forums, but even though they are really close to the distro (as opposed to, say, a WoW forum), there is a huge disconnect between the forum communities and the distro community. It almost never occurs to the forum members to file bugs, check the homepages of upstream projects or otherwise interact with the distribution. Instead, they spend most of their time on anecdotal stories of how they fixed problems and hearsay about what they think happens in the Linux world. In short, they’re about as well informed about their distro as the tabloid press. What makes this even more interesting is that this seems to be by choice. No one is discouraging them from participating in the Free Software world. At least they don’t sound bitter. They seem to be content with the way things are. And I see no reason why the remaining 99% of the world would be any different.

bug reports

I’m actually not sure if I should be happy, sad or scared after reading this bug report. After all, a karate ninja might decide to hunt me down instead of Linus.

help Ctrl-Q die

Dear maintainers of Gnome applications that don’t allow closing the window with Ctrl-W, please “fix” your application. Ctrl-Q is outdated for anything but weird multi-windowed applications. Most of the time this is a simple fix in your .ui file where you just need to change the accelerator key.
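In a typical GtkUIManager-based app, for example, the accelerator lives in the GtkActionEntry table, so the whole fix is changing “<control>Q” to “<control>W”. A sketch (the action name and callback here are made up):

static const GtkActionEntry entries[] = {
  /* name, stock id, label, accelerator, tooltip, callback */
  { "Close", GTK_STOCK_CLOSE, "_Close", "<control>W", /* was "<control>Q" */
    "Close the window", G_CALLBACK (close_window_cb) },
};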

Please don’t ask me for a reason for this change, I just want all apps to close using the same key, no matter which one. After consulting lots of people, it seems Ctrl-W is the way to go. So gcalctool, totem and friends, please make me happy. I’m way too lazy to file lots of bugs for a 1-character change.

And to answer all the dob^wpeople asking me on IRC: I don’t want Ctrl-Q to go away when it does something different from Ctrl-W. That’s perfectly fine. Just make Ctrl-W close in every app.

done

Today is the last day on which the gvfs team is allowed to fix issues without getting spanked by the release team. And I’m already done. The feature set I wanted to have working for ftp in Gnome 2.22 is working, there are no bugs left and even seb128 said it doesn’t crash. That was a pretty tough two-week coding marathon, but gvfs is pretty stable, at least for the sort of stress testing I am doing in Nautilus. An example screenshot of such stress testing can be seen here. So happy playing for everybody with a non-sucking VFS for 2.22. Go Alex!

more ftp goodness

So, after lots of hacking on the ftp backend, it should be stable now. I consider it stable enough to close the famous bug report at least.

So, what works? You should be able to upload files, download files, move stuff, create and delete files, the usual operations. You should also see proper icons and mime types for the files listed. And I even made it work with IPv6.
What doesn’t work yet are proxies, which is gicmo‘s fault. Another thing that doesn’t work yet is interrupting/resuming downloads, even when the server supports that. I also haven’t had a chance to test this on lots of weird FTP servers; all the ones I have access to are pretty sane vsftpds. And then there’s probably a plethora of little bugs left.

So this is a call to everybody who wants to use ftp in Gnome 2.22: If you know you have to access (or have access to) a weird ftp server, please try the new gvfs with it. And if you find out it doesn’t work properly (or doesn’t work at all), either fix it yourself and file patches in bugzilla, or get me access to that server so I can fix it myself. (An example of such a server would be Netware, which allows not-very-standard filenames starting with two slashes.) If no one files those bugs, I’ll spend the next few days fine-tuning interaction with vsftpd.
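If you’d rather stress it from code than from Nautilus, a minimal GIO listing like the following exercises the backend too. This is only a sketch: ftp.example.com is a placeholder, and it assumes the location was already mounted (for instance by connecting to it in Nautilus first), since GIO won’t mount it for you.

#include <gio/gio.h>

int
main (int argc, char **argv)
{
  GFile *dir;
  GFileEnumerator *enumerator;
  GFileInfo *info;
  GError *error = NULL;

  g_type_init (); /* needed on glib of that era */
  /* placeholder; point this at the weird server you want to test */
  dir = g_file_new_for_uri ("ftp://ftp.example.com/pub/");
  enumerator = g_file_enumerate_children (dir,
      G_FILE_ATTRIBUTE_STANDARD_NAME, G_FILE_QUERY_INFO_NONE, NULL, &error);
  if (enumerator == NULL) {
    /* fails with NOT_MOUNTED if nothing has mounted the location yet */
    g_printerr ("enumeration failed: %s\n", error->message);
    g_error_free (error);
    return 1;
  }
  while ((info = g_file_enumerator_next_file (enumerator, NULL, NULL)) != NULL) {
    g_print ("%s\n", g_file_info_get_name (info));
    g_object_unref (info);
  }
  g_object_unref (enumerator);
  g_object_unref (dir);
  return 0;
}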

The easiest way to test GVfs is to use Nautilus from Ubuntu Hardy, because seb128 is busy uploading new gvfs packages all the time, which gives us proper testing exposure. A big thank you for that!

so much to blog…

So much to blog, so little time. After releasing Swfdec 0.6 I wanted to take a break from Swfdec development. And as it sounded like fun (I have a weird sense of fun, I wouldn’t hack Flash otherwise) and because it was somewhat important, I thought I’d help the GVfs people get an ftp backend working. Unfortunately, one week is not a lot of time to learn gvfs and the ftp protocol, produce a working backend and also write a lot of text in my blog. So I’ll stop now and let pictures speak:

functionality on both sides

While the iPhone doesn’t have a touchscreen on both sides, it already has functionality on both sides. I realized this yesterday in the subway. It has a touchscreen on one side and a mirror on the other side. Which might not be important for geeks, but is very important for the people they like to look at.

error handling

Jeff talks about whether apps should be robust against malloc failures. Which reminded me of the reason why I would object to malloc failure robustness: it has more to do with how people would implement it. It would end up messing up the readability of my code, because I’d have to do error handling everywhere. I’ll give an example. How would you read the contents of a file? Right, you’d open it, get its size, allocate a buffer large enough to hold the contents, read the contents into the buffer and close the file. So your code should look something like this:

void
file_get_contents (const char *filename, char **data_out, size_t *len_out)
{
  /* hypothetical File API; no error checking anywhere */
  File *f = file_open (filename);
  size_t len = file_get_length (f);
  char *data = malloc (len);
  file_read (f, data, len);
  file_close (f);
  *data_out = data;
  *len_out = len;
}

Of course, this isn’t robust against malloc failures. And there’s also no magic file object that always succeeds. So let’s make sure we end up with a correct version:

boolean
file_get_contents (const char *filename, char **data_out, size_t *len_out)
{
  ssize_t len;
  char *data;
  int fd;

  fd = open (filename, O_RDONLY);
  if (fd < 0)
    return FALSE;
  len = get_length (fd); // magic function
  if (len < 0)
    goto error1;
  data = malloc (len);
  if (data == NULL)
    goto error1;
  if (read (fd, data, len) != len)
    goto error2;
  if (close (fd) < 0) {
    free (data);
    return FALSE;
  }
  *data_out = data;
  *len_out = len;
  return TRUE;
error2:
  free (data);
error1:
  close (fd);
  return FALSE;
}

Now, we’ve just blown up the code to roughly three times its previous size. And that’s only because we use goto. If we had coded it “properly”, like the g_file_get_contents function, it would have taken 170 lines. With a helper library like GIO, g_file_load_contents is only 55 lines. But neither of them handles out-of-memory errors. And both are roughly 10x as much code as the first example.
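For reference, this is roughly what the caller’s side of the real g_file_get_contents looks like – errors come back as a GError, but, as said above, out-of-memory is still not handled:

#include <glib.h>

static void
print_size (const char *filename)
{
  gchar *contents;
  gsize length;
  GError *error = NULL;

  if (!g_file_get_contents (filename, &contents, &length, &error)) {
    g_printerr ("reading failed: %s\n", error->message);
    g_error_free (error);
    return;
  }
  g_print ("read %" G_GSIZE_FORMAT " bytes\n", length);
  g_free (contents);
}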

This size growth has multiple problems. First of all, the code is a lot less readable. I’m pretty sure everyone can easily follow what my example above does. I’m not so sure about g_file_get_contents. Second, most of the code will likely never be executed. Probably it hasn’t ever been executed, because the people who wrote the code were not able to replicate the error conditions that would trigger it (see the comments in the glib code). Third, it’s hard to write this code. It requires a lot more thinking and typing to come up with it. And last but not least, it’s easy to miss error cases. Are you sure you always check the return value of the close function?

Luckily I’ve found a solution for this problem. And that solution is present in cairo. If you’ve used cairo, you’ll have noticed that you can draw all the time and never need to care about errors, neither memory allocation errors nor any other errors. That’s because the cairo objects do two things: they move themselves into an error state should they encounter an error, and they define a “correct” behavior of their exported API in such an event. Of course the objects provide a function to check for an error, so you are free to check for errors after every call. So with this method we could make our code look like this:

boolean
file_get_contents (const char *filename, char **data_out, size_t *len_out)
{
  File *f = file_open (filename); // returns a preallocated "error" object on OOM
  size_t len = file_get_length (f);
  char *data = malloc (len);
  if (data == NULL) {
    file_destroy (f);
    return FALSE;
  }
  file_read (f, data, len);
  file_close (f);
  if (!file_in_error (f)) {
    *data_out = data;
    *len_out = len;
    file_destroy (f);
    return TRUE;
  } else {
    free (data);
    file_destroy (f);
    return FALSE;
  }
}

This includes OOM and other error checking and is still pretty readable. However, it requires design and lots of work by the library designers who provide these objects. And a lot of libraries don’t provide this design. So unless people start to spend a lot of time on API design, I’m not seeing any advantage in making libraries resistant to memory allocation failures. And currently this does not seem to be happening.
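For comparison, this is what the pattern looks like with cairo’s real API: you can fire off a whole series of drawing calls and check the status once at the end.

#include <stdio.h>
#include <cairo.h>

static cairo_surface_t *
draw_something (void)
{
  cairo_surface_t *surface;
  cairo_t *cr;

  surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 100, 100);
  cr = cairo_create (surface); /* never returns NULL, not even on OOM */

  /* no error checking between calls; once an error happens,
   * all further calls are no-ops */
  cairo_move_to (cr, 10.0, 10.0);
  cairo_line_to (cr, 90.0, 90.0);
  cairo_stroke (cr);

  /* one check at the end is enough */
  if (cairo_status (cr) != CAIRO_STATUS_SUCCESS) {
    fprintf (stderr, "drawing failed: %s\n",
        cairo_status_to_string (cairo_status (cr)));
    cairo_destroy (cr);
    cairo_surface_destroy (surface);
    return NULL;
  }

  cairo_destroy (cr);
  return surface;
}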

Gnome rocks

Gnome rocks. And no, this time I’m not talking about the software. I’ve just finished integrating swfdec-gnome into Gnome (for everyone I haven’t told yet: swfdec-gnome will be part of the upcoming Gnome 2.22 release). While this is a pretty complex process (SVN transition, bugzilla setup, l10n, …) and it takes a while to get done, everybody is responsive and the subprojects are nice to maintainer newbies like me. Plus, all the processes feel well thought out. And the whole thing is even documented very well. And after doing all the work, everything just works. We even got 5 translations in the first 24 hours. Great work, everybody.

The only thing that still sucks is that Gnome uses SVN and we had to convince SVN to import a git repository. And using svn after getting used to git reminds me how awesomely fast git is. But thanks to git-svn I can even live with SVN.

Pinball

So it seems there’s a huge amount of requests for a Gnome pinball game. Which is great, because I’d love to have a cool pinball game as part of Gnome, too.

I hereby announce that I’m gonna do hacking sessions for a pinball game at the next GUADEC, assuming I haven’t forgotten about it by then. So if you are also interested in making a pinball game for Gnome happen, go to GUADEC and remind me, so I don’t forget. And remember: a game doesn’t just need coders, it also needs artists, level designers, and lots of testers.