Unsurprisingly, the biggest European Free Software event happened in Brussels, Belgium again. I’m talking about FOSDEM, of course. It’s a fixed entry in many people’s calendars and always a good excuse to visit Brussels 🙂
I’m a bit late reporting on the talks I managed to see, and others have already covered some of them, but I still want to add a few observations.
Richard Brown from SUSE talked about dinosaurs and resurrecting them (video). It was more about containerised apps than actual dinosaurs, though. The general theme was repeating mistakes that we might or should have learned from in the past. He started by mentioning that the Windows DLL Hell was a nightmare: you needed to test your application with each and every version combination of every possible library. The DLLs did not necessarily have ABI compatibility, so testing was very cumbersome. Windows 2000 brought Side-by-Side assembly, which is some form of DLL containerisation, he said. It uses a separate memory space for each app and its DLLs. Programs can ship “private” DLLs in their application directory, so they don’t necessarily break other apps with a DLL carrying the same name. This approach, however, still has issues. Security-wise, each app needs to update its libraries itself rather than have them updated centrally, so each app has to build and ship its own updater, which is not trivial to do. Legally it’s also interesting, he said, because bundling these DLLs may impose restrictions. Last but not least, the same DLL may end up on disk multiple times, because each app may ship its own copy.
The contemporary software distribution model has its problems, too, he said. Compatibility with the various distros is an issue, because each distro is slightly different. Each distribution also has its own pace of change, which may be incompatible with the application in question; a distro may, for example, decide to ship an older version because it has been tested more. Different distributions ship different libraries and different versions thereof. Also, each distribution has its own toolset for packaging applications for its environment. Application developers, however, don’t want to care about these details.
Containerised applications solve these issues. Maybe. He mentioned Flatpak, Snappy, and AppImage. The latter is the oldest of these technologies, dating all the way back to 2003. What the solutions have in common is that they bundle the app and run it in some kind of container or sandbox. By his criteria, the compatibility issue is solved, because the libraries are in the bundle. Portability is solved, because all dependencies are shipped in the bundle. And the pace of change is up to the app developer.
These containerisation approaches, though, assume a common standard base provided by the distributions. According to him, such a common standard base does not exist in any practical sense. With containerised apps, he said, we might be repeating history: we might get a security nightmare, because each app needs to update its dependencies itself. It is also questionable whether all the libraries can actually be bundled and shipped. App developers are picking up responsibilities that distros used to have. And you still have to test everything on each distro just to be sure that your base dependencies work correctly, he said. He sees distributions as part of the solution to these problems. A rolling release, he thinks, might solve the issues we’re trying to solve with containerised apps: it can ship new releases of applications very quickly, while the distribution still uses its tools for the common problems like maintenance, security, and legal matters.
In a lightning talk, David talked about “practical TPM 2.0 usage”. He showed how to generate a signing key, sign a document with it, and verify the signature. He said that Microsoft mandated TPM 2.0 for Windows 10 Mobile and that it is a cryptographic processor rather than an accelerator. TPM 2.0 differs from TPM 1.2 in various ways, he said. For example, 2.0 can do ECC (P-256 and BN-256) and SHA-256. It is also “algorithm agile”, which means that algorithms can be added without changing the specification. He sees three main usages: platform integrity like secure boot and trusted boot, disk encryption where the TPM stores and controls access to the key, and Digital Restriction Management by verifying code signatures. In order to use the TPM you have two options, he said: tool stacks developed by IBM and by Intel. IBM’s doesn’t have a “resource manager” as described by the specification, which acts like a multiplexer. Intel’s does have such a resource manager, and they are working on putting it into Linux. However, Intel has fewer tools, he said, although it wasn’t entirely clear to me what he was referring to. He mentioned that his employer, Facebook, uses TPMs for platform attestation.
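The generate/sign/verify flow he demonstrated maps quite directly onto the Intel stack’s tpm2-tools command line. Here is a rough sketch of it, driven from Python; the subcommands and flags are my assumption based on tpm2-tools 4.x, not necessarily the exact incantation from the talk:

```python
#!/usr/bin/env python3
# Sketch of the generate/sign/verify flow from the talk, driven through the
# tpm2-tools CLI (the Intel stack). Subcommands and flags are assumed from
# tpm2-tools 4.x and may differ in other versions.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a primary key in the owner hierarchy, then an ECC P-256 signing
# key underneath it, and load that key into the TPM.
run("tpm2_createprimary", "-C", "o", "-c", "primary.ctx")
run("tpm2_create", "-C", "primary.ctx", "-G", "ecc256",
    "-u", "key.pub", "-r", "key.priv")
run("tpm2_load", "-C", "primary.ctx", "-u", "key.pub",
    "-r", "key.priv", "-c", "key.ctx")

# Sign a document with the TPM-resident key and verify the signature.
run("tpm2_sign", "-c", "key.ctx", "-g", "sha256",
    "-o", "document.sig", "document.txt")
run("tpm2_verifysignature", "-c", "key.ctx", "-g", "sha256",
    "-m", "document.txt", "-s", "document.sig")
```

The nice property is that the private part of the key never leaves the chip; the “private” blob written to disk is encrypted by the TPM and only usable by loading it back into that same TPM.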
Hanno talked about security on the Linux desktop. He referred to the issues Chris Evans exposed a few weeks ago.
He wanted to make the audience angry, he said, but not at him; I suppose he considers himself to be the messenger only. The basic problem is an unfortunate agglomeration of bugs or behaviours. It starts with the browser automatically downloading files into the user’s downloads folder, i.e. without asking the user. Then there is Tracker, which indexes files added to your home directory, such as the downloads folder. And then there are buggy (read: vulnerable) implementations of file parsers.
He also referred to Carlos’ comment saying, essentially, that the bugs were just bugs to be fixed and that no problem beyond those bugs had been found. Hanno’s point, as far as I could make it out, was that a project of the size of Tracker, especially with that number of dependencies it doesn’t control, cannot make sure that there won’t be yet another exploitable bug. That’s quite fatalistic, but probably not too far from reality. It’s not just a Tracker issue, though, he said. KDE has Baloo, and everybody wants to have thumbnails of the files in your folders. He reiterated that automatic downloads AND automatic indexing create a huge attack surface, and that the indexers support a vast variety of file formats by using many libraries of varying quality. While Tracker quickly adopted sandboxing, he said, KDE hasn’t.
He mentioned other exploit mitigation techniques such as ASLR and CFI. With ASLR, he said, the idea is to load code and data at random addresses in memory. This mitigates exploits, because they cannot reliably target valid code in memory. At least that’s the idea. You need to compile the code with -fpic and -pie, he said. Linux distributions have been slow in adopting ASLR, though. Ubuntu introduced it with 16.10, Fedora with 23, and Debian is working on it. openSUSE has it for a few packages only. It should be the default, he said. Windows, on the other hand, has had it since Vista, and they also explore and experiment with more modern mitigations like CFI. Yet another approach is to avoid the C language, because “[it] is full of memory corruptions”. Rust comes to mind as an alternative; GStreamer already supports plugins in Rust, he said. He concluded that fixing all these bugs, as Carlos seemed to want, is very hard, not least because GStreamer is very prone to memory corruption due to the number of complicated formats it parses. He mentioned fuzzing as a viable strategy to shake out bugs and said he found many bugs within a few days. He suggested that we should probably do more of that ourselves. I’m working on it; more to be posted separately.
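Whether a given binary actually got the -pie treatment is easy to check: position-independent executables carry the ELF type ET_DYN rather than ET_EXEC. Here is a little sketch of my own (not from the talk) that reads just the ELF header:

```python
#!/usr/bin/env python3
# Sketch (my own, not from the talk): report whether an ELF binary is a
# position-independent executable (PIE). PIE binaries have ELF type ET_DYN;
# traditional executables have ET_EXEC and cannot have their code randomised.
import struct
import sys

ET_EXEC = 2  # fixed load address
ET_DYN = 3   # position-independent (shared object or PIE)

def elf_type(path):
    with open(path, "rb") as f:
        ident = f.read(16)  # e_ident: magic, class, endianness, ...
        if ident[:4] != b"\x7fELF":
            raise ValueError(path + " is not an ELF file")
        endian = "<" if ident[5] == 1 else ">"  # EI_DATA: 1 = little-endian
        (e_type,) = struct.unpack(endian + "H", f.read(2))
        return e_type

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/bin/ls"
    if elf_type(path) == ET_DYN:
        print(path, "is PIE (ASLR can randomise its code)")
    else:
        print(path, "is not PIE")
```

(PIE shows up as ET_DYN because the executable is, technically, loaded like a shared object at a randomised base address.)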
The next talk was about testing TLS implementations. For the last year or so I have been investigating TLS issues myself and was wishing for a TLS testing framework; now I have learned about an implementation. Hubert Kario introduced his “tlsfuzzer”, which is a bit of a misnomer, because it doesn’t actually perform any fuzzing. He said that TLS is complex and that it has 326 official ciphersuites, 4 PKI cryptosystems, 16 signature-hash pairs, and many more countable things that make the test matrix grow fast. There is a lot of state to be maintained, he said. He presented his tool, which takes care of the TLS specifics but allows you to define your own payloads and modifications to them. For example, with a few lines of code you can define a client that opens a TLS connection and uses a GCM ciphersuite in order to collect the nonces. He claims to have found more than 20 issues in NSS, GnuTLS, and OpenSSL. I’m curious to play around with it and maybe hook it up with Scapy’s fuzzing facilities.
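To give a flavour of what such a scripted conversation looks like, here is a sketch modelled on tlsfuzzer’s example scripts: a plain RSA handshake with a GCM ciphersuite, followed by an HTTP request. The class names are taken from the project, but I haven’t verified this exact script against a current version, so treat it as an approximation:

```python
#!/usr/bin/env python3
# Sketch of a tlsfuzzer conversation, modelled on the project's example
# scripts: full handshake with RSA key exchange and an AES-GCM ciphersuite,
# then application data. Details may differ between tlsfuzzer versions.
from tlsfuzzer.runner import Runner
from tlsfuzzer.messages import (Connect, ClientHelloGenerator,
                                ClientKeyExchangeGenerator,
                                ChangeCipherSpecGenerator,
                                FinishedGenerator,
                                ApplicationDataGenerator)
from tlsfuzzer.expect import (ExpectServerHello, ExpectCertificate,
                              ExpectServerHelloDone,
                              ExpectChangeCipherSpec, ExpectFinished,
                              ExpectApplicationData)
from tlslite.constants import CipherSuite

# Build the conversation as a chain of send/expect nodes.
conversation = Connect("localhost", 4433)
node = conversation
node = node.add_child(ClientHelloGenerator(
    [CipherSuite.TLS_RSA_WITH_AES_128_GCM_SHA256]))
node = node.add_child(ExpectServerHello())
node = node.add_child(ExpectCertificate())
node = node.add_child(ExpectServerHelloDone())
node = node.add_child(ClientKeyExchangeGenerator())
node = node.add_child(ChangeCipherSpecGenerator())
node = node.add_child(FinishedGenerator())
node = node.add_child(ExpectChangeCipherSpec())
node = node.add_child(ExpectFinished())
node = node.add_child(ApplicationDataGenerator(b"GET / HTTP/1.0\r\n\r\n"))
node = node.add_child(ExpectApplicationData())

Runner(conversation).run()  # raises if the server deviates from the script
```

The point of the framework is exactly this shape: the tool drives the TLS state machine for you, and you swap individual nodes for malformed or reordered messages to see how the server under test reacts.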
Another TLS-related talk was given by Fridolin, who showed us a TLS implementation as a Linux kernel module. The advantages are manifold, he said. Obviously, the connection should be cheaper in terms of computation, because the context does not need to be switched so often. Others are already using kernel implementations of TLS, he said: Solaris has a kssl socket, and Netflix uses a modified sendfile() for TLS on BSD. His implementation has been evaluated by Facebook, he said. The implementation leaves the handshake to user space and takes care of the symmetric encryption only.
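For a flavour of the user-space side of such a split, here is a sketch against the kTLS socket interface that upstream Linux later adopted (merged in 4.13), which follows the same design: user space does the handshake, then installs the negotiated keys on the socket. The numeric constants are my transcription of linux/tls.h (double-check them against your kernel headers), and the key material is a placeholder for whatever the user-space handshake produced:

```python
import socket
import struct

# Constants transcribed from <linux/tls.h> and <netinet/tcp.h>
# (assumed values; verify against your kernel headers).
SOL_TCP = 6
TCP_ULP = 31                  # attach an upper-layer protocol to a TCP socket
SOL_TLS = 282
TLS_TX = 1                    # install keys for the transmit direction
TLS_1_2_VERSION = 0x0303
TLS_CIPHER_AES_GCM_128 = 51

def enable_ktls_tx(sock, iv, key, salt, rec_seq):
    """Hand the negotiated AES-128-GCM write keys to the kernel, so that
    subsequent send()/sendfile() on this socket emits TLS records.
    iv (8 bytes), key (16), salt (4) and rec_seq (8) come from the
    user-space handshake; they are placeholders here."""
    # struct tls12_crypto_info_aes_gcm_128:
    #   { u16 version; u16 cipher_type; u8 iv[8]; u8 key[16];
    #     u8 salt[4]; u8 rec_seq[8]; }
    info = struct.pack("=HH8s16s4s8s",
                       TLS_1_2_VERSION, TLS_CIPHER_AES_GCM_128,
                       iv, key, salt, rec_seq)
    sock.setsockopt(SOL_TCP, TCP_ULP, b"tls")  # switch the socket to kTLS
    sock.setsockopt(SOL_TLS, TLS_TX, info)     # install the TX crypto state
```

After that, a plain sendfile() pushes encrypted records straight from the page cache, which is exactly the Netflix-style use case mentioned above.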
Compared to other FOSDEMs, I was able to actually see a few talks, although I was impressed by the number of people I randomly bumped into and who kept me from attending more talks 😉 The size of FOSDEM is both the cause of and the solution to that problem. A good thing about it was that I could bribe someone into cooking up a Debian package for GNOME Keysign so that, hopefully, 200 people don’t have to queue up and do weird things :o)
(not) glad that you mention Hanno’s talk! As far as Tracker is concerned, his talk is a bunch of post-fact “run in circles” BS. His talk largely targets Tracker, even though it 1) is now doing everything by the book and 2) implements no “parsers” itself.
The “large codebase” argument is easy to disprove too: Tracker isn’t just a “file metadata indexer”, nor is all of its code hit during metadata indexing. If it were just that, it could of course be a tenth of its code size.
He also didn’t bother to contact me for fact-checking, nor did he announce the talk at all; I missed it even though I was at FOSDEM. And he selectively picked a quote of mine to support some point of his, even though it is by all lights incomplete (I’ll give you the context: I was writing the sandbox support hours after writing that comment, and it landed within that week).
He’s only redeemed because the talk didn’t cause further pointless noise, I hope you don’t change that 😉
I’m very happy that Tracker does everything by the book now. And IIRC he also mentioned that. I think he also mentioned that the fix was extraordinarily quickly available. Big kudos for that. As I’ve stated, he called out KDE for not doing things correctly there. It’s a bit unfair, maybe, to still make a fuss about Tracker. I think the reasons Tracker took the brunt in this case include the sensational headline “Windows is more secure than a GNU/Linux desktop”, Chris’ well-received blog post, and the fact that people care (even if only a little). I’m confident that if KDE were Fedora’s default, it would have hit Baloo instead. I can also imagine that the comment “You could claim that other libraries used for metadata extraction are just as insecure, but that’d really be bugs in these libraries to fix”, despite being incomplete and taken out of context, is a motivation to seek more publicity, as it’s something that security people feel they need to address, because they believe that one cannot just fix all the bugs.
Now that the dust has settled a bit, we can hopefully take home that parsing complex formats is hard and that we had better separate privileges when attempting it, especially if it involves untrusted input. And we can all look at Tracker to see how it should be done.
Flatpak originated in 2003? Wait…what?
hah. Good catch! Thanks. I twisted the names, but now I’ve updated the post.
Thank you for the report, very good work. I haven’t seen such well-developed coverage anywhere else so far.