The universe is at risk

July 27, 2008

Yesterday evening it first rained, then the rain turned to ice, which was quite unexpected and devastating for the leaves and fruit.

I had a mystic feeling of inspiration when I saw that mess: something big was happening. And indeed, this morning I got confirmation that the ice was just a sign of something much larger, an event of great importance which threatens the very fabric of the universe. Teuf blogs.

More seriously, ekiga 3.0 is stabilizing more or less rapidly, and we hope no last-minute issue will make us miss yet another gnome release.

I have been pretty surprised (and somewhat disgusted) to see ekiga listed as an example of a project trying “not to bite too much”: ekiga 3.0 isn’t just a new release, it’s a new major release!

And it’s not the version number change that makes it a major version, but rather the reverse: many things have been changed and improved, and the code structure has been heavily modified. The codecs have been improved by so many orders of magnitude that the video output needed rewriting, for instance!

On computer languages

June 4, 2008

Eiffel: tearful future ahead

As I explained some time ago, I have spent quite some time on financial simulations lately. At first I was using python, which is pretty nice for simple, direct simulations, but I always feel it’s too toyish for serious use.

So I wrote my more complex simulations in eiffel, more precisely with the smarteiffel compiler. The language has many good features:

  • it’s garbage-collected (wink),
  • it does inheritance right (hint: no virtual keyword to make the concept half-stupid; yes, I know, optimisation: C++ is 0.001% faster by not doing the right thing!),
  • including multiple inheritance (you can undefine inherited methods or rename them),
  • it lets you determine which parts of an object’s API are visible to whom (not limited to a simple public/private or public/protected/private dichotomy/trichotomy of the world),
  • it does pre/post conditions at the method and object level (and they do get inherited correctly, and you can refine them if needed: it’s not just a hack!),
  • it does generics with constraints (i.e. you can tell your generic container to accept only objects of a given base class; for example, the compiler will let you use a SOMETHING[RECTANGLE] because you declared SOMETHING[A->QUADRILATERAL] and RECTANGLE is a QUADRILATERAL, but won’t let you use a SOMETHING[HEXAGON]),
  • it does functions-as-values (called agents),
  • since typing is pretty strict (the only automatic/implicit ‘conversion’ is treating a descendant class as one of its ancestors), the compiler is generally able to pinpoint exactly where a problem is in the sources, so a source that compiles is generally correct,
  • and even if, by some surprising accident, it isn’t, the trace you get will tell you exactly where you made a mistake.

So what’s the catch? Well, it has a few shortcomings.

One of them is that you have to give each type explicitly, which can become cumbersome when you deal with something like a COLLECTION[PAIR[INTEGER, COLLECTION[STRING, STRING]]… The problem is somewhat dampened by syntactic sugar in the form of anchored types (i.e. once you have told the compiler that foo is of type FOO, you can declare that other_foo is of type “like foo”, so if you change your mind about an api, you only need to change one place).

Another is that too few people know about it for it to reach critical mass, even though it is much, much easier and more coherent to read and/or write than other, more mainstream object-oriented languages (or languages claimed as such… wink).

This last problem has been getting worse since 2005, when the smarteiffel people (smarteiffel being the GNU compiler for the eiffel language) decided they didn’t like the direction in which ECMA (yes, that ECMA) was pushing the language, and forked it.

A few weeks ago, all smarteiffel-related debian packages got orphaned, and no other eiffel compiler is in debian anymore, which means it’s getting harder to install too.

Lisaac: future ahead (!)

There is another language with many points in common with eiffel, yet still pretty different, since it is a prototype-based language: you define base objects by declaring “slots”, which you either write for the new object or “inherit” from a base object. Again, multiple inheritance is no problem, and you can export the various parts of an object’s API to whatever prototype-object you want.

An important difference from eiffel is that the compiler will do auto-casts for you. The main difference from C++ is that you need to declare explicitly that casting FOO to BAR is possible (not at the place where you use it, obviously, or I wouldn’t call it ‘auto’), and it will only do a single cast, not a series of them.

The compiler does quite a thorough analysis of the sources, so it can spot more issues and optimize more aggressively, which probably explains why it’s so well placed in the computer language shootout.

Both the language and its compiler are pretty young though, so stability isn’t as good as with smarteiffel: I’ve seen it crash on some simple code (reported, of course).

OCaml: precious!

Well, I haven’t written financial simulations with OCaml: since I made some simulations with python, others with smarteiffel and others with lisaac, I do have a better view of how I should manage the little money I have! [no, I didn’t just write the same simulation in several languages over and over]

Still, I couldn’t blog about languages without mentioning that one! Again, it’s a garbage-collected language; it does inheritance and multiple inheritance (not as well as eiffel, even though it has constrained inheritance), unfortunately only with a public/private dichotomy (there are ways to offset that limitation using ‘modules’). It won’t cast behind your back.

It’s time to mention the two main points of the language: it is a functional language, and it does type inference. The former is quite well known, since hopefully everyone has heard of lisp and scheme, so I don’t need to say much about it: it is renowned for leading to terse and elegant code.

Type inference may be new to some. Say you want to write a prepend function which takes two arguments, an object and a list containing elements of the same type as the object, and which simply puts the element at the start of the list. In the interpreter, you just write:

let prepend obj lis = obj :: lis;;

(the ‘;;’ is only needed in the interpreter, not in a file you compile), and you get back:

val prepend : 'a -> 'a list -> 'a list = <fun>

which means: prepend is a function which takes an object of unknown (polymorphic) type (denoted 'a) and a list of objects of the same type ('a list), and returns a list of objects of that same type.

Some say its syntax is hard: take a look at C/C++ code with an eye as fresh as the one you use to read OCaml, and you’ll see that neither wins.

Ekiga

I have spent most of my spare time thinking about making more money by managing it better, so it has been quite a while since I seriously worked on ekiga: I saw new things mature, bugs get fixed, little improvements land here and there… exciting things I haven’t taken a serious part in. Hopefully they left some bugs for me to work on!

No, that’s not a blog post about C++ (I would need to blog about programming languages…); it’s about the new vacuum cleaner we just bought, and the title is what the box says. Bah, they all say that!

An ordinary purchase like this doesn’t look that interesting, but the old vacuum cleaner was one of the first things I bought with my own money, back in 1997, so having it die and need replacement does carry some emotional significance: I’m getting old.

The new one still doesn’t do its work alone though: there’s still the need for a sucker to drag the sucker all around the house… hopefully the oldest kid will soon be strong enough to help there *g*

SSH

I had to change a few keys on my LAN and the key I use on GNA, and I’ll certainly have to change the key for gnome’s svn account, since I used it on a theoretically unsafe box. Sigh.

Python and finance

My wife and I are currently trying to change our loan (yes, there’s only one: the one for the flat), and we are using the occasion to learn about personal finance. It seems we’ve been pretty bad at it: sure, it could have been worse, but we’ve mostly let the money sleep and the loan roar. How naive!

To help us make sounder decisions and learn what works and what doesn’t, I ended up writing some python code to simulate different behaviours. As usual, I find the language pretty nice for quick and dirty work, but I would never consider it for a serious app: eiffel and OCaml are more my style.

Salon

I had the occasion to go to Salon-de-Provence and enjoy the Patrouille de France’s latest show. Unfortunately I couldn’t see my former students there.

Ekiga

Bugs are coming down, slowly but surely.

People?!

April 11, 2008

I find the recent posts about that project pretty interesting; and since I’m the one who rewrote most of ekiga’s contact management code, you can guess that having a look at the api is of great interest to me: can ekiga easily integrate and use it?

Unfortunately, it seems there’s no good documentation yet, which isn’t that surprising for such a young project. Luckily, I’m not that bad at code-diving, and this is where things get interesting: vala!!! Don’t get me wrong, vala is a nice and interesting idea (didn’t I already blog about it?), but for a framework which is supposed to get wide use, I’m not sure it’s such a good idea: doesn’t a vala interface mean a GObject-based interface? If so, won’t that alienate kde, gnustep and enlightenment people (yes, they are people, and they do interesting things)?

Aside from the previous points, a number of questions spring to mind when reading the vala api: an addressbook seems to be something which can be opened and closed, and which can be asked for a contact given an identifier. How do I find the correct identifiers? Ah, there’s a view interface, and the comments there tell me I get one by performing a request on an addressbook, but there’s nothing about that in the addressbook api! There is a nice iterator api on a view, which does the strict minimum: next and get… It’s pretty young, indeed.

I hope it will age well!

PS: sigh… that meme looks less stupid than the others:

jpuydt@noether:~$ history|awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head
49 make
48 git-checkout
41 git-rebase
38 cd
36 git-gui
28 ./src/ekiga
25 git-svn
21 git-status
20 su
20 grep

(can you guess I keep several branches in my ekiga.git/ directory?)

GMarkup: memories

February 14, 2008

A few years ago, we wanted to make it as easy as possible to port then-gnomemeeting to win32. Of course, since we had no access to a win32 box, that mostly meant making as much of the code portable as we could, by relying on as few dependencies as possible, which meant: glib/gtk+ for the frontend and pwlib/openh323 for the backend.

For that reason, one of us (Damien?) made bonobo optional with careful #ifdef magic; I took the gnome-druid source and made it pure gtk+ (so gnomemeeting with gnome used the normal gnome druid, and gnomemeeting without gnome used those butchered sources); this took care of most of the ‘offenders’.

Unfortunately, there was still one big hurdle, in the form of numerous and scattered lines of code: we used gconf! So I did the obvious thing: I wrote a nice gmconf api to act as a proxy for gconf, with a mostly trivial gconf implementation and a mostly trivial glib implementation, and replaced the thousands of calls to gconf with calls to gmconf (boring, to say the least).
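
To give an idea of the shape of such a proxy layer, here is a minimal sketch with made-up names and a GHashTable backend; the real gmconf api in ekiga is richer (typed values, notifiers, persistence), so take this only as an illustration of the idea:

#include <glib.h>

/* hypothetical, simplified settings store: the real gmconf code differs */
static GHashTable *settings = NULL;

static void
conf_init (void)
{
  settings = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, g_free);
}

static void
conf_set_string (const char *key, const char *value)
{
  g_hash_table_replace (settings, g_strdup (key), g_strdup (value));
}

static const char *
conf_get_string (const char *key)
{
  return g_hash_table_lookup (settings, key);
}

int
main ()
{
  conf_init ();
  conf_set_string ("/apps/example/general/fullname", "John Doe");
  g_print ("%s\n", conf_get_string ("/apps/example/general/fullname"));
  g_hash_table_destroy (settings);
  return 0;
}

The point is that the frontend code only ever sees the conf_* calls, and what sits behind them can be gconf or plain glib.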

Once there, I could store and modify settings and react to changes at runtime, but I couldn’t persist them across runs. That problem is closely related to the issue of default settings, something else gconf had been doing for us which now had to be handled separately. Creating another default settings file besides the existing .schema didn’t look like a bright idea: that would have meant keeping them in sync, by hand!

This is where GMarkup comes into the picture: it’s a nice and easy way to handle simplified XML, readily available in glib. And what is a gconf .schema file? A simplified XML file! So I dived into GMarkup’s documentation and examples, and quickly had an honest, working piece of code.
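
For readers who haven’t met it, here is a minimal sketch of how GMarkup is driven (the element names below are made up; only the glib calls are real): you fill a GMarkupParser with callbacks and feed text to a parse context.

#include <glib.h>

/* called for each opening tag the parser meets */
static void
start_element (GMarkupParseContext *context,
               const gchar *element_name,
               const gchar **attribute_names,
               const gchar **attribute_values,
               gpointer user_data,
               GError **error)
{
  g_print ("element: %s\n", element_name);
}

static const GMarkupParser parser = { start_element, NULL, NULL, NULL, NULL };

int
main ()
{
  const gchar *xml = "<schema><key>/apps/example/sound</key><default>true</default></schema>";
  GError *error = NULL;
  GMarkupParseContext *context = g_markup_parse_context_new (&parser, 0, NULL, NULL);

  if (!g_markup_parse_context_parse (context, xml, -1, &error)) {
    g_printerr ("parse error: %s\n", error->message);
    g_error_free (error);
  }
  g_markup_parse_context_free (context);
  return 0;
}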

This code is still used in ekiga today: when it is compiled without gnome support on GNU/* or *BSD, or on win32, parsing settings (system or user) is still GMarkup’s task. I’m not sure that will last very long: it’s much easier to port to win32 these days, with already-ported and already-packaged base libs, so perhaps having a non-gnome version is losing its interest, especially since the gmconf-glib implementation is a piece of code which only we ekiga developers maintain (although most bugs must have been ironed out by now…).

Spreading the rumors

February 6, 2008

Rumor has it that ekiga might be willing to add PulseAudio support to its cross-platform version, as long as it gets all the help necessary to make it happen smoothly. No secret meeting is expected to take place, since it’s libre software using open protocols (notice the plural), but interested PulseAudio community members can get in touch whenever they want!

About error handling

February 4, 2008

I was pretty interested in this blog post, because it touches on something that has been annoying me quite a bit lately.

I’m not discussing just g_malloc and OOM issues here, but error handling more generally. There are several ways to deal with errors, but they mostly all fall short somewhere.

There is a first scheme, let’s call it the GError scheme, which looks like this:
result = do_foo (arg1, …, argN, &error);
This is pretty nice, and makes it quite easy to display errors to the user, but the error is generally a complex type, so one has to write some code to create and handle errors, and using them isn’t straightforward. For example, you generally see in the api that you can get an error, but not exactly which one…
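
As a concrete illustration (do_foo and its error domain are made up here; g_set_error and g_error_free are the real glib calls), the scheme looks like this in practice:

#include <glib.h>

/* made-up error domain for the example */
#define FOO_ERROR (g_quark_from_static_string ("foo-error"))

static gboolean
do_foo (int arg, GError **error)
{
  if (arg < 0) {
    g_set_error (error, FOO_ERROR, 1, "negative argument: %d", arg);
    return FALSE;
  }
  return TRUE;
}

int
main ()
{
  GError *error = NULL;

  if (!do_foo (-1, &error)) {
    g_printerr ("%s\n", error->message);  /* nice for the user... */
    g_error_free (error);                 /* ...but which error was it, exactly? */
  }
  return 0;
}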

There is a second, which I don’t exactly know how to name, and which looks like this:
error = do_foo (arg1, …, argN, &result);
That one I find nicer: generally the error isn’t some complex type but an enumeration, with an ALL_OK somewhere and a few ERROR_FOO, ERROR_BAR, etc., so you really know what you’re getting, and you have a real chance of being able to deal with it.
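
Sketched out (again with a made-up do_foo), the caller can switch over every documented case:

#include <stdio.h>

typedef enum { ALL_OK, ERROR_NEGATIVE, ERROR_TOO_BIG } foo_error;

/* made-up function: returns an error code, writes the result through a pointer */
static foo_error
do_foo (int arg, int *result)
{
  if (arg < 0)
    return ERROR_NEGATIVE;
  if (arg > 100)
    return ERROR_TOO_BIG;
  *result = 2 * arg;
  return ALL_OK;
}

int
main ()
{
  int result = 0;

  switch (do_foo (42, &result)) {
  case ALL_OK:
    printf ("got %d\n", result);
    break;
  case ERROR_NEGATIVE:
    printf ("negative input\n");
    break;
  case ERROR_TOO_BIG:
    printf ("input too big\n");
    break;
  }
  return 0;
}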

A third scheme requires the result to have a special type, special in that error cases are embedded in it. Typical examples are functions which return a NULL pointer on error, or the way cairo does things, with more general error handling (basically, the result object has some kind of get_error_condition which gives you something like the enumeration of the second scheme). That is pretty much the best option in my opinion, even if it cannot always be readily done.
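
With cairo, if I remember its api correctly, that means you can draw without checking every single call and query the accumulated status at the end:

#include <cairo.h>
#include <stdio.h>

int
main ()
{
  cairo_surface_t *surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 100, 100);
  cairo_t *cr = cairo_create (surface);

  /* draw as much as we want... */
  cairo_rectangle (cr, 10, 10, 50, 50);
  cairo_fill (cr);

  /* ...then check the error condition carried by the object itself */
  if (cairo_status (cr) != CAIRO_STATUS_SUCCESS)
    printf ("drawing failed: %s\n", cairo_status_to_string (cairo_status (cr)));

  cairo_destroy (cr);
  cairo_surface_destroy (surface);
  return 0;
}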

Finally, there are exceptions, which are pretty special in that a few languages have dedicated support for them. I find them the worst of the lot. They’re much like the first scheme: they’re generally complex types. But they don’t even have the decency to turn up in the api, even in otherwise sane languages! For example, even in a language like OCaml, you end up with:
# List.find;;
- : ('a -> bool) -> 'a list -> 'a = <fun>
# List.find (fun a -> (a=2)) [1;3;5];;
Exception: Not_found.

Gasp! I was promised a function with an api, and the exception wasn’t even visible in it! That is already pretty bad in OCaml, where you don’t manage memory yourself, but reading any C++ documentation about how to handle exceptions and avoid leaks is… a very interesting read, in a way.

Sebastien, I see no problem if distributions don’t update stable releases that fast: let them do what they want.

I do see an issue when they point bug-buddy or any kind of automatic bug reporter directly at the upstream bugzilla and then decide not to upgrade: it means that even if upstream fixes things, upstream can still get a huge number of bug reports because of a choice it didn’t make.

A prime example of that is bug #359655, for which we got duplicates by the hundreds. We had a hard time fixing it, but even once it was fixed, we were still drowned in bug reports afterwards because ubuntu didn’t deem it wise to upload the fix.

Let distributions both make their decisions and take responsibility for them!