Security From Whom, Indeed

So Spectre and Meltdown happened.

That was completely predictable, so much so that I, in fact, did predict that side-channel attacks, including those coming via javascript run in a browser, were the thing to look out for. (This was in the context of pointing out that pushing Wayland as a security improvement over plain old X11 was misguided.)

I recall being told that such attacks were basically nation-state level due to cost, complexity, and required target information. How is that prediction working out for you?

Gtk+ Versioning

New thoughts are being expressed about Gtk+ versioning.

There is something about numbering. Whatever. The numbering of Gtk+ versions is a problem I do not have. A problem I do not expect to have. Hence “whatever”.

But there is also a message about stability and it is a scary one.

A cynical reading or, one might claim, any reading consistent with persistent prior behaviour, would come to the conclusion that the Gtk+ team wants to be released from all responsibility for medium- and long-term stability. If, for no good reason, they feel like breaking the scroll wheel behaviour of all applications again, then evidently so be it.

But maybe that is too dark a view. There is some hint that there will be something that is stable. I just do not see how the versioning plan can possibly provide that.

What is missing from the plan is a way to make things stable. A snapshot at a semi-random time does not do it. Specifically, in order to provide stability of the “stable” release, I believe that a 3-6 month period before or after the stable release is declared should be devoted exclusively to making the release stable. Bug fixes, automated tests, running with Valgrind, Coverity, gcc -fsanitize=undefined, bug triaging, etc. No new feature work.
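To make the tooling point concrete, here is a minimal, illustrative C example (my own sketch, not code from Gtk+) of the kind of defect that review and ordinary builds tend to miss but that gcc -fsanitize=undefined reports at run time:

#include <limits.h>
#include <stdio.h>

/* Signed integer overflow is undefined behaviour.  An ordinary build
   quietly produces garbage; building with gcc -fsanitize=undefined
   makes the program report the overflow when it happens. */
static int scale (int x)
{
    return x * 3;   /* overflows for large x */
}

int main (void)
{
    printf ("%d\n", scale (INT_MAX / 2));
    return 0;
}

Valgrind and Coverity catch different, partly overlapping classes of defects; the point is that this mechanical kind of checking is exactly what a dedicated stabilization period buys you.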

A belief that stability can be achieved after most of the paid contributors have run off to play with new toys is delusional. The record does not support it.

Security From Whom?

Secure from whom? I was asked that after my recent post questioning the positioning of Mir/Wayland as a security improvement.

Excellent question — I am glad you asked! Let us take a look at the whos and compare.

To take advantage of the X11 protocol issues, you need to be able to speak X11 to the server. Assuming you haven’t misconfigured something (ssh or your file permissions) so other users’ software can talk to your server, that means causing you to run evil X11 protocol code like XEvilTeddy. Who can do that? Well, there are probably a few thousand people who can. That is a lot, but most of them are application developers or maintainers who would have to sneak the changes in via source form. That is possible, but it is slow, has a high risk of discovery, and has problems with deniability. And choosing X11 as a mechanism is just plain silly: just contact a command-and-control server and download the evil payload instead. There is also a smaller number of people who can attack via binaries, either because distributions take binaries directly from them or because they can change and re-sign binary packages. That would mean your entire distribution is compromised, and choosing the X11 attack is really silly again.

Now, let us look at the who of a side-channel attack. This requires the ability to run code on your machine, but it does not have to be code that can speak X11 to your X server equivalent. It can be sand-boxed code such as javascript even when the sand-box is functioning as designed. Who can do that? Well, anyone who controls a web server you visit; plus any adserver network used by such web servers; plus anyone buying ads from such adserver networks. In short, just about anyone. And tracking the origin of such code created by an evil advertiser would be extremely hard.

So to summarize: attacking the X11 protocol is possible by a relatively small group of people who have much better methods available to them; attacking via side-channel can be done by a much wider group who probably do not have better methods. The former threat is so small as to be irrelevant in the face of the second.

Look, it is not that I think of security in black and white terms. I do not. But if improved security is your motivation then looking at a Linux laptop and deciding that pouring man-decades into a partial replacement for the X server is what needs doing is a bad engineering decision when there are so many more important concerns, i.e., you are doing it wrong. And selling said partial X server replacement as a security improvement is at best misleading and uninformed.

On the other hand, if you are working on Mir/Wayland because that kind of thing floats your boat, then fine. But please do not scream “security!” when you break, say, my colour picker.

XReallyEvilTeddy

Recently, Matthew Garrett wrote about the abysmal X inter-app security situation. I.e., the total lack of a security situation. It came with an interesting proof-of-concept application, XEvilTeddy, demonstrating the ability to steal passwords and upload them elsewhere. Everybody knew such an application was possible; the interesting part was exhibiting one.
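For readers who have not seen quite how little isolation X11 provides between clients, here is a small, benign sketch (assuming Xlib and a running X server; it is emphatically not XEvilTeddy): any client that can connect to the server may walk the window tree and read other clients’ window names, and it is that same unrestricted access which lets a real attacker go after keystrokes and window contents.

#include <stdio.h>
#include <X11/Xlib.h>

/* Print the names of the direct children of the root window.
   Under a reparenting window manager many of these are frame
   windows without names, but nothing stops us from looking.
   Build with: cc snoop.c -lX11 */
int main (void)
{
    Display *dpy = XOpenDisplay (NULL);
    Window root, parent, *children;
    unsigned int i, nchildren;

    if (!dpy) {
        fprintf (stderr, "Cannot open display\n");
        return 1;
    }

    if (XQueryTree (dpy, DefaultRootWindow (dpy),
                    &root, &parent, &children, &nchildren)) {
        for (i = 0; i < nchildren; i++) {
            char *name = NULL;
            if (XFetchName (dpy, children[i], &name) && name) {
                printf ("0x%lx: %s\n", (unsigned long)children[i], name);
                XFree (name);
            }
        }
        XFree (children);
    }
    XCloseDisplay (dpy);
    return 0;
}

No permission is asked for and none can be denied; that is the security situation, or lack of one, being described.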

All good and fine, but one thing has been bothering me. Matthew wrote “if you’re using Snap packages on Mir (ie, Ubuntu mobile) then there’s a genuine improvement in security.” But is that really true?

Now, getting rid of X means that an application can no longer simply ask the X server for all the keystrokes, and that would seem to be an obvious improvement in security. It is, however, only an actual improvement in security if asking the X server is the only way of getting the keystrokes. It is not.

Recent years have seen a slew of side-channel attacks on, say, gpg. For example, see here and here. Basically, the cpu leaks information about the program it is running in the form of timing, electrical current use, sound(!), electromagnetic radiation, etc. Some of these are observable from another process on the same machine, others from a laptop in the next room. If there is a direction in the field, my take on it is that attacks running on the same machine are considered a bit too easy nowadays.

It is hard to avoid side-channel leakage. gpg gets hardened every time an attack is discovered, but (say) firefox and gtk+ almost certainly leak like crazy.
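As a rough illustration of what that hardening looks like (a generic sketch, not gpg’s actual code): an early-exit comparison leaks, through its running time, how many leading bytes of a secret match, while a constant-time comparison does the same amount of work no matter where the mismatch is.

#include <stdio.h>
#include <stddef.h>

/* Leaky: returns as soon as a byte differs, so timing reveals the
   length of the matching prefix. */
static int compare_leaky (const unsigned char *a, const unsigned char *b, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time: touches every byte regardless, so timing no longer
   depends on the secret. */
static int compare_ct (const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    size_t i;
    for (i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}

int main (void)
{
    const unsigned char secret[4] = { 1, 2, 3, 4 };
    const unsigned char guess[4]  = { 1, 2, 9, 9 };
    printf ("%d %d\n",
            compare_leaky (secret, guess, 4),
            compare_ct (secret, guess, 4));
    return 0;
}

Timing is merely the channel that is easiest to show in code; closing the power, sound, and radiation channels mentioned above requires the same discipline of making execution independent of the data.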

“But such an attack is hard,” I hear you say. Maybe, but I do not think so. The thinking used to be that exploiting overflow of stack-based variables was hard, but all it took was one explanatory article and that cat was out of the bag.

If I was not such an incurably lazy person I would create XReallyEvilTeddy to demonstrate this. I am, so I have not. But it would be naïve to believe such applications do not exist. And it would therefore be naïve to believe that Mir and Wayland really do have better security.

Change

We learn from Matthias that the right way to describe what happened with recent Gtk+ releases is that it changed.

And provided you are thinking of source code, that is not an unreasonable nomenclature: before it worked one way, now it works a different way — it changed. And source code that has to interact with Gtk+ used to do it one way, but now needs to do it another way — it needs to change.

But what if you are thinking of binaries? That is, existing, already-distributed binaries sitting on users’ machines. With the installation of the new Gtk+, such binaries changed from working to non-working. Such a binary evidently needs to change itself. Now, I have been known to prefer to make changes by editing binaries directly (interestingly, arguably thereby turning the binary into source code in the eyes of the GPL), but it is generally not a convenient way of making changes, and as a Gnumeric developer I do not expect my users to do this. So how are the binaries on users’ machines going to change from non-working to working? I have no means of reaching users. I can and I will release changed source code, but binaries from that will not reach users anytime soon. Change is not a reasonable description for this; break is. Gtk+ broke Gnumeric. Again. And note that some of the changes appear to be completely gratuitous.

Emmanuele is rather adamant that these changes were happening to API that was pre-announced to be unstable. I think he is mistaken, in the sense that while it might have been decided that this API was unstable, I do not think it was announced. At least I do not seem to be able to find it. Despite prodding, Emmanuele does not seem to be able to come up with a URL for such an announcement, and certainly not an announcement in a location directed at Gtk+ application writers. It may exist, but if it does then it is not easy to find. I looked in the obvious places: the API documentation was not changed to state that the API was subject to change; the release announcements were not changed to state that the API was subject to change; no message warning that the API was subject to change was sent to the application development mailing list. Sitting around a table and agreeing on something is not an announcement. If you want to announce something to application developers, then you need to use a channel or channels aimed at application developers.

The situation seems to lend itself to Douglas Adams quotes. I have already used the destruction-of-Earth situation, so here is the earlier one involving the destruction of Arthur Dent’s house:

“But the plans were on display...”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.”

ODF Plus Ten Years

It’s time for another five-year update on ODF for spreadsheets. Read the initial post from 2005 and the 2010 update for context. Keep in mind that I only have an opinion on ODF for spreadsheets, not text documents.

TL;DR: Better, but ODF still not suitable for spreadsheets.

So what’s new? Well, basically one thing: we now have a related standard for formulas in ODF spreadsheets! This is something that obviously occurred 5-10 years too late, but better late than never. The Wikipedia article on OpenFormula is a fairly amusing example of the need to justify and rationalize mistakes that seems to surround the OpenDocument standard.

OpenFormula isn’t bad as standards go. It has a value system, operators, and a long list of functions, for example. Nice. Where it does have problems is in the many choices it allows implementations. For example, it allows a choice of whether logical values are numbers or their own distinct type. That would not have been necessary if spreadsheets had been considered in the original standard — at that time OO could have bitten the bullet and aligned with everyone else.

Back to the standard proper. What has happened in the past five years? In a word, nothing. We still have a standard whose aim was to facilitate interoperability, but isn’t achieving it.

There are actually two flavours of the standard: strict and extended. “Strict” has a well-defined syntax complete with an xml schema. Extended is strict with add-your-own tags and attributes. No-one uses strict because there are common things that cannot be represented using it. Error values, for example. A simple line graph with a regression line and a legend, for example.

When the Gnumeric team needs to add something outside “strict” we first look to see if, say, LO has already defined a syntax we can use. We only invent our own when we have to, and we try to read any LO extension that we can.

The OO/LO approach, however, appears to be to ignore any other producer and define a new extension. This is part of the “ODS by definition is what we write” mindset. The result is that we end up with multiple extensions for the same things.

So extensions are a free-for-all mess. In fact it is so big a mess that the schema for Gnumeric’s extensions that was hacked up a week ago appears to be the first. Let me rephrase that: for the past ten years no-one in the ODS world has been performing even basic document validation on the documents produced. There are document checkers out there, but they basically work by discarding anything non-strict and validating what is left.

There are also inherent performance problems with ODF. Many spreadsheets contain large areas of identical formulas. (“Identical” does not mean textually identical in ODF’s A1-style syntax, but rather in R1C1 syntax, where “the cell to the left of this one” always has the same name. For example, “=A1*2” in B1 and “=A2*2” in B2 are textually different, yet both read “=RC[-1]*2” in R1C1.) ODF has no concept of shared formulas. That forces reparsing of different strings that produce identical formulas over and over again. Tens of thousands of times is common. That is good neither for load times nor for file sizes.
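To see what that costs, here is a small sketch (my own illustration; Expr, parse_formula, and formula_lookup are made up, not Gnumeric’s actual code): if formulas are keyed by their R1C1 text, ten thousand copies of “=RC[-1]*2” parse exactly once, whereas ODF’s per-cell formula strings force the parser to run for every single cell.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for a parsed formula; the real thing is an expression tree. */
typedef struct { char *text; } Expr;

static int parse_count = 0;

/* Stand-in for the real formula parser. */
static Expr *parse_formula (const char *text)
{
    Expr *e = malloc (sizeof *e);
    e->text = strdup (text);
    parse_count++;
    return e;
}

#define CACHE_SIZE 1024
static struct { char *text; Expr *expr; } cache[CACHE_SIZE];

/* Return a shared expression for an R1C1 formula, parsing it at most once. */
static Expr *formula_lookup (const char *r1c1)
{
    size_t i;
    for (i = 0; i < CACHE_SIZE && cache[i].text; i++)
        if (strcmp (cache[i].text, r1c1) == 0)
            return cache[i].expr;              /* reuse: no reparse */
    if (i < CACHE_SIZE) {
        cache[i].text = strdup (r1c1);
        cache[i].expr = parse_formula (r1c1);
        return cache[i].expr;
    }
    return parse_formula (r1c1);               /* cache full: no sharing */
}

int main (void)
{
    int row;
    for (row = 0; row < 10000; row++)          /* a column of identical formulas */
        formula_lookup ("=RC[-1]*2");
    printf ("%d parse(s) for 10000 cells\n", parse_count);   /* prints: 1 */
    return 0;
}

A file format with shared formulas lets the producer express that sharing directly; with ODF the consumer has to rediscover it, ten thousand string parses later.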

A more technical problem with ODF is that the size of the sheet is not stored. One consequence is that you can have two different spreadsheets that compute completely different things but save to identical ODF files. At least one of them will be corrupted on load. That is mostly a theoretical concern, but the lack of size information also makes it harder to defend against damaged (deliberately or otherwise) input. For example, if a file says to colour cell A12345678 red we have no way of telling whether you have a damaged file or a very tall spreadsheet.

Gnumeric continues to support ODF, but we will not be making it the primary format.

No, I am the CADT

Sorry, Luis, I am the CADT. I believe you have your timing wrong.

At the time, bugs.gnome.org was run out of some server Miguel had set up in Mexico. It was some buggy, early version of Debian’s bug system that rolled over and died when someone shipped binary data. I.e., all the time.

It was also low on disk space. Consequently, in order to keep it running, I wrote scripts to mass-close (and therefore let expire) thousands of bugs. It was that or not having a running bug system. Owen Taylor was most unhappy about the expiration — can’t really fault him — and, I believe, brought in the current bugzilla-based system served by Red Hat.

There was something about screensaver bugs having jwz’s name on them that caused him to get more than his fair share of the resulting emails. I forget the details of that.

Gcc vs. Clang for Error Messages

So the gcc vs. clang debate flamed up again. I thought I would deliver my few cents too.

It is claimed from time to time that clang has more helpful error messages. It is, in my opinion, a claim that is just plain wrong. They both stink. Let’s look at a few samples:

static int foo (int a, int b) { return a + b; }
int bar (int a) { return foo (a (4 + 1) * 2); }

gcc says (excerpts):

e1.c:2:33: error: called object ‘a’ is not a function or function pointer
e1.c:2:1: error: too few arguments to function ‘foo’

clang says (excerpts):

e1.c:2:33: error: called object type 'int' is not a function or function pointer

The best thing you can say about the error messages here is that they at least point you to the right location. gcc is a tad better by virtue of printing the second error message, which at least hints at the real problem, but neither compiler tells us what the problem is: “missing comma”. It looks like clang is suppressing the second and further errors on a line. Note, however, that in this case it has suppressed the more informative error.

Moving on with a missing opening parenthesis:

static int foo (int a, int b) { return a + b; }
int bar (int a) { return foo a); }

From gcc we get the wisdom

e2.c:2:19: warning: return makes integer from pointer without a cast [enabled by default]
e2.c:2:30: error: expected ‘;’ before ‘a’
e2.c:2:31: error: expected statement before ‘)’ token

while clang produces

e2.c:2:26: warning: incompatible pointer to integer conversion returning
'int (int, int)' from a function with result type 'int' [-Wint-conversion]
e2.c:2:29: error: expected ';' after return statement

I don’t see that one set of utter nonsense is better than the other, and spending time colour-coding the output shows a dubious set of priorities. Clang would do well to add hyphens to “pointer to integer”.

How about this?

#include <stdio.h>
#define EMIT(c) fprintf(stderr,"%c",(c))

Nothing from gcc, nothing from clang, nothing from sparse. Yet it’s a clear violation of C99’s paragraph 7.26.3: names beginning with E followed by a digit or an uppercase letter are reserved for future <errno.h> error macros, so EMIT is off limits.

C++ doesn’t fare any better:

#include <vector>
std::vector<int,int> foo; // should have been map

gcc delivers 67 lines of nonsense starting with

/usr/include/c++/4.8/ext/alloc_traits.h:199:53: error: ‘int’ is not a class, str
uct, or union type
typedef typename _Alloc::pointer pointer;

whereas clang emits 87 lines of garbage starting with

/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../include/c++/4.8/ext/alloc_traits.h
:199:22: error: type 'int' cannot be used prior to '::' because it has no member
typedef typename _Alloc::pointer pointer;

Again, the error messages are startlingly useless in both cases. A sane error message would start with “type int is not valid for the second template argument to std::vector”.

The quality of error messages has been the subject of jokes for decades. Insofar as clang is new code, it would appear that its authors have squandered any opportunity for making real improvements, opting instead for putting lipstick on a pig.

GMail Cross-Mailbox Information Leakage

GMail likes to present ads that are relevant to you by looking at the information in your mailbox. Fine. That is well known and just using the information you have chosen to store at Google.

However, when you use GMail to reply to a message sent by another GMail user, you will be shown ads based on the other user’s recent email activity.

That is news to me and outright scary.

I.e., do not use GMail to communicate with your doctor about an embarrassing disease, because next time you write to your mother-in-law she will know. (Even if she always suspected.)