Talking at ARES 2019 in Canterbury, UK

It’s conference season and I attended the International Conference on Availability, Reliability, and Security (ARES) in Canterbury, UK. (note that in the future, the link might change to something more sustainable)

A representative of the University of Kent opened the event. It is the UK’s European University, he said, with 20,000 students, many of them from other countries. He attributed that to the proximity to mainland Europe. Indeed it’s only an hour away (if you don’t have to go back to London to catch a direct Eurostar rather than one that stops in, say, Ashford). The conference was fairly international, too, with 230 participants from 33 countries. As an academic conference, they care about the “acceptance rate”, which, in this case, was 20.75%. Of course, he could have mentioned any number, because it’s impossible to verify.

The opening keynote was given by Alistair MacWilson from Bletchley Park. Yeah, the same Bletchley Park that Alan Turing worked at. He talked about the importance of academia in closing the cybersecurity talent gap. He said that the global deficit of people with cybersecurity skills is 3.3M, with 380k in Europe alone, and APAC desperately short of 2.1M professionals. All that is good news for us youngsters in the business, but not so good, he said, if you rely on the security of your IT infrastructure… It’s not getting any better, he said, considering that the number of connected devices and the complexity of our infrastructure are rising. You might think, he said, that highly technical skills are required to perform cybersecurity tasks. But he mentioned that 88% of the security problems that the Global 5000 companies have stem from human factors. Inadequate and unfocussed training paired with insufficient resources contribute to that problem, he said. So if you don’t get continuous training, you will fall behind with your skill-set.

There were many remarkable talks and the papers can be found online, albeit behind a paywall. But I expect SciHub to have copies and authors to be willing to share their work if you ask. Anyway, one talk I remember was about delivering Value Added Services along with electric vehicle charging. They said that it is currently not very attractive for commercial operators to provide charging stations, because the margin is low. Hence, additional monetisation in the form of Value Added Services (VAS) could be added. They were thinking of updating the software of the vehicle while it is charging. I am not convinced that updating the car’s firmware makes a good VAS, but I’m not an economist and what do I know about the world of electric vehicles. Anyway, their proposal to add VAS to the communication protocol might be justified, but their scenario of delivering software updates over that channel seems like a lost opportunity to me. Software updates are currently the most successful approach to protecting users, so it seems warranted to have a dedicated update protocol rather than a generic VAS protocol for electric vehicles.

My own talk was about using the context and provenance of USB-borne events to mitigate attacks via that channel. The general idea, known to readers of my blog, is to take the state of the session into account when dealing with events stemming from USB devices. More precisely: when your session is locked, don’t automatically load drivers for a new USB device. Your session is locked, after all; you’re not using your machine and cannot insert a new device yourself. Hence, the likelihood of someone else maliciously inserting a device is higher than when your session is unlocked. Of course, that’s only a heuristic and some will argue that they frequently plug devices into their machine while it’s locked. Fair enough. I argue that we need to be sensitive and change the user’s way of working with the machine as little as possible to get high acceptance rates. Hence, we need to be careful when devices like keyboards are inserted. Another scenario is a new network card attached via USB. The system should be more suspicious about accepting a nameserver coming from the new network card’s DHCP server when it already has a perfectly working network configuration (and the DHCP response did not contain a default gateway). It turns out that those attacks are being mounted in real life right now, and we have yet to find defences that we can deploy at a large scale.
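To give a flavour of the heuristic, here is a minimal shell sketch. It relies on the kernel’s authorized_default flag and GNOME’s org.gnome.ScreenSaver D-Bus signal; it glosses over the fact that the user’s session bus and the sysfs writes live in different privilege domains, and it is not what GNOME actually ships:

# Hypothetical sketch: leave USB devices attached during a locked session
# unauthorized, i.e. no driver gets bound to them.
gdbus monitor --session --dest org.gnome.ScreenSaver \
              --object-path /org/gnome/ScreenSaver |
while read -r line; do
    case "$line" in
        *"ActiveChanged (true"*)  flag=0 ;;  # locked: refuse new devices
        *"ActiveChanged (false"*) flag=1 ;;  # unlocked: accept them again
        *) continue ;;
    esac
    # authorized_default decides whether newly attached devices are usable
    for host in /sys/bus/usb/devices/usb*/authorized_default; do
        echo "$flag" > "$host"
    done
done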

It’s been a nice event, even though the sandwiches for lunch got boring after a few days 😉 I am happy to have met researchers from other areas and I hope to stay in touch.

Know your corridors – booking cheaper train tickets

In the past I showed you some interesting tricks to get cheaper travel fares. In a similar vein, I’d like to explore different train corridors with you.

Let’s consider a hypothetical route from Bremen to Jena. That’s a trip from the north of Germany to somewhere in the country’s middle east. The “normal price” of such a ticket is anything between 85 and 137 EUR.

Search result for going from Bremen to Jena

“Why the difference?” you may ask. Good question. By looking at the details of the connections we can see that the transfer stations differ. The first connection goes north through Hamburg.

Search result of going through the north

Map of going through the north

The second connection seems smarter, going south through Hannover but then ploughing through the east.

Search result for going through the east

Map of connection going east

The last connection is arguably the most natural one: going to Goettingen and then taking a local train to Jena.

Search results for going through Goettingen

Map of connection going through Goettingen

More combinations exist. For example, going through Hamburg, then via the east route to Erfurt and then to Jena. That is probably also the most expensive route.

We can see those lines on the official map of long distance train lines.

Plan of long distance (IC) trains

If you are looking for cheap train tickets, you should ask the “Sparpreisfinder” (or “cheap fare finder”). If we provide it with our intended journey a few days in advance, it finds tickets as cheap as 29.90 EUR. That’s already quite good. It’s no shame to stop here and buy that ticket. After all, the Sparpreisfinder is advertised as finding the cheapest ticket, so it can’t get any better, can it?

Sparpreisfinder results

Notice how our connections so far have made use of local trains. According to the map, we can take long distance trains via another corridor, too; more specifically: through Leipzig. It’s a longer route but may be cheaper due to the complicated pricing model and the convoluted stack of stakeholders associated with the various trains and lines being operated. It’s not imperative to know that Deutsche Bahn has three product categories, but it may help to understand the pricing system a little bit better. Product category “A” is for long distance ICE trains, “B” for the less comfortable and slower IC trains, and “C” for local trains.

We can force the search to find connections via that new corridor, Leipzig, and hope to find fewer product categories used:

Notice how both checkboxes at the bottom are unticked

Notice how both checkboxes at the bottom are unticked. This is to find longer routes. And indeed, we find an even cheaper connection than DB’s Sparpreisfinder was willing to give us.

Cheapest fare not found by the Sparpreisfinder

You may now say that using long distance trains only can be achieved more easily by adapting the search to find only those:

Notice the checkboxes at the bottom

But even then you wouldn’t get that 19.90 EUR ticket:

No ticket for 19.90 EUR although only long distance trains are being sought

So, lessons learned: look at the map of train lines to see which connections exist. Check whether your journey can be performed with long distance trains only. Check your search results for the various corridors and notice whether long-distance-only connections exist. If not, force the search to find you a connection through the corridor you have identified.

Speaking at FOSDEM 2019 in Brussels, Belgium

This year I spoke at FOSDEM again. It became sort of a tradition to visit Brussels in winter and although I was tempted to break with the tradition, I came again.

I had two talks at this year’s FOSDEM, both in the Security track: one on my work with Ludovico on protecting against rogue USB devices, and another one on tracking users with core Internet protocols. We got a bigger room this year, but it was still packed. Despite the projector issues, which seem to be happening more often recently, the talks went well. The audience was very engaged and we had a lively discussion in the hallway. In fact, the discussion was extremely fruitful, because we were told about work in similar areas which we ought to check out.

For our USB talk I thought I’d set the scene first and explain how GNOME thinks it should interact with the user: the less interaction is required, the better. Especially for a security system where the user may not know what to do. In fact, we try to just make it work™ without the user having to do anything. That is vastly different from what other projects are doing. In particular, Kaspersky wants you to enter a PIN when attaching a new keyboard, and the USBGuard dialogue is not necessarily suitable for our users.


In the talk on Internet protocols I mainly showed that latency optimisations need to be balanced against the privacy needs of the users. In order to reduce latency you usually share state with the other end, and that state tends to be indicated through some form of token or cookie. And because you have this shared state, the server can recognise you across connections. What you can try to do is to not send the token or cookie in the first place. Of course, then you lose the optimisation. It turns out, however, that TLS 1.3 can be just as fast, i.e. one round trip, so the latency is neither better nor worse if you resume a previous session. Note that I talk about latency only and ignore other aspects such as CPU cycles spent for the connection establishment. Another strategy is to not send the token in the clear. With TLS 1.2, the Session Ticket is sent without any form of encryption, which enables a network-based attacker to see your token and correlate your requests. The same is true for other optimisations such as TCP Fast Open. I also presented our approach to balancing privacy and latency, namely a patched WolfSSL and Linux. With these patched versions we send the TCP Fast Open cookie via TLS, so that the attacker cannot see it when we request it.
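If you want to see that shared state for yourself, openssl’s s_client can store and replay a TLS 1.3 session ticket; example.com stands in for any server, and depending on when the server sends its tickets the first command may need to stay connected a little longer:

# First connection: complete a handshake and store the session ticket
openssl s_client -connect example.com:443 -tls1_3 -sess_out ticket.pem </dev/null
# Second connection: resume with the ticket. A line starting with "Reused"
# means the server recognised it, i.e. the two connections are linkable.
openssl s_client -connect example.com:443 -tls1_3 -sess_in ticket.pem </dev/null | grep Reused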

The conference was super busy and I was super busy talking to people. It’s amazing how fast time flies when you are engaged in interesting discussions. I bumped from one person into the next and then it was already time for dinner. The one talk I did see was given by my colleague on preventing cryptographic misuse of libraries; more precisely, an attempt to provide sane APIs which make shooting yourself in the foot very hard.

Talking at HITCon 2018 in Taipei, Taiwan

I was invited to give a talk at Hacks in Taiwan Conference, or HITCon. Since I had missed the GNOME Asia Summit and COSCUP just before, I was still quite happy to go to Taiwan.

The country is incredibly civilised and friendly. I felt much more reminded of Japan than of China. It’s a very safe and easy place to travel. The public transportation system is fast and efficient. The food is cheap and you’ll rarely be surprised by what you get. The accommodation is a bit pricey, but we haven’t been disappointed. But the fact that Taiwan is among the 20 countries least reliant on tourism (you may also say that they have not yet developed tourism into a GDP-dominating factor) shows. Many Web sites are in Chinese only. The language barrier is clearly noticeable, albeit fun to overcome. Certain processes, like booking a train ticket, are designed for residents only, leaving tourists no option but to go to a counter rather than booking electronically. So while it’s a safe and fun country to travel, it’s not as easy as it could or should be.

The conference was fairly big; I reckon there were at least 500 attendees. The tracks were a bit confusing, as there were info panels showing the schedule, a leaflet with the programme, and a Web site indicating what was going on, but all of those contradicted each other. So I couldn’t know whether a talk would be in English, Chinese, or a wild mix of those. It shouldn’t have mattered much because, amazingly enough, they had live translation into either language. But I wasn’t convinced by their system, because they had one poor person translating the whole talk, and after ten minutes or so I noticed how he lost his concentration.

Anyway, among the interesting talks I saw was one given by Trend Micro’s Fyodor about fraud in the banking and telephony sectors. He said that telcos and banks are quite similar and that, in fact, a phone is often required to perform a banking operation. In certain African countries, telcos like Vodafone are pretty much a bank. He showed examples of how these sectors are being attacked by groups with malicious intent. He mentioned, among others, the Lazarus group.

Another interesting talk was about Korean browser plugins which are required by banks and other companies. It was quite disastrous. From what I understood, the banks require you to install their software, which listens on all interfaces. The bank’s Web site then contacts that banking software, which in turn cryptographically signs a request or something. That software, however, is full of bugs. So bad that they can be exploited remotely. To make matters worse, the software installs itself as a privileged program, so your whole machine is at risk. I was very surprised to learn that banks take such an approach. But then again, banks currently require us to install their proprietary apps on proprietary phone operating systems, and at least on my phone those apps crash regularly 🙁

My own talk was about making operating systems more secure and making more secure operating systems. With my GNOME hat on, I mentioned how I think that the user needs to be guided through a cruel world full of omnipresent temptations to misbehave. I have given similar presentations a few times and I have developed a few questions and jokes to get the audience back at a few difficult moments during the presentation. But that didn’t work so well due to the language barrier. Anyway, it was great fun and I still got some interesting discussions out of it afterwards.

Big kudos to the organisers who have been running this event for many many years now. Their experience can certainly be seen in the quality of the venue, the catering, and the selection of speakers. I hope to be able to return in the next few years.

The Patch that converts a Firefox to a Tor Browser

Have you ever wondered what makes the Tor Browser the Tor Browser? That is, what patch would you have to apply to Firefox in order to end up with a Tor Browser?

The answer is not really easy to get. I expected to do something like git clone tor-browser ; git diff firefox-upstream or so. But for some odd reason, the Tor Browser people do not import the pristine Firefox version into their repository. As a side note, the build instructions are a bit arcane. I was hoping for such an important tool to be easy to build, but it requires root access and a wild container technology. Weird. In fact, this is preventing me from flatpaking the Tor Browser right now. But I’m working on it.

Long story short, to get the diff, do something like:

# Fetch the pristine Firefox ESR source that the Tor Browser is based on
wget https://releases.mozilla.org/pub/firefox/releases/60.2.1esr/source/firefox-60.2.1esr.source.tar.xz
# Clone the Tor Browser repository
git clone https://git.torproject.org/tor-browser.git
cd tor-browser
# Create an empty, unrelated branch and unpack the Firefox source onto it
git checkout --orphan firefox
tar --extract --strip-components=1 --file ../firefox-60.2.1esr.source.tar.xz
git add .
git commit -m 'firefox upstream import'

# Diff the upstream import against the corresponding Tor Browser tag
git diff firefox...tor-browser-60.2.1esr-8.5-1-build1

Of course, you will need to adjust the Firefox and Tor Browser versions in the future. I have imported the upstream Firefox code into this repository so that you can make diffs as you like. Unfortunately, the GitHub Web interface does not show diffs of unrelated branches.

Talking at PETCon2018 in Hamburg, Germany and OpenPGP Email Summit in Brussels, Belgium

Just like last year, I managed to be invited to the Privacy Enhancing Technologies Conference to talk about GNOME. First, Simone Fischer-Huebner from Karlstad University talked about her projects, which are on the edge of security, cryptography, and usability, which I find a fascinating area to be in. She presented outcomes of her Prismacloud project, which also involves fancy YouTube videos…

I got to talk about how I believe GNOME is in a good position to make a safe and secure operating system. I presented some case studies and reported on the challenges that I see. For example, Simone mentioned in her talk that certain users don’t trust software if it is too simple. Security stuff must be hard, right?! So how do you measure the success of your security solution? Obviously you can test with users, but for certain things suitable users are just very hard to find. For example, testing GNOME Keysign requires a user with not only a set-up MUA but also a configured GnuPG. Such users are not easy to come by. The discussions were fruitful and I got sent a few references that might be useful in determining a way forward.

OpenPGP Email Summit

I also attended the OpenPGP Email Summit in Brussels a few weeks ago. It’s been a tiny event graciously hosted by a local company. Others have written reports, too, which are highly interesting to read.

It’s been an intense weekend with lots of chatting, thinking, and discussing. The sessions were organised in bar-camp style. That is, someone proposed a topic to discuss and the interested parties then came together. My interest was in visual security indication, as triggered by this story. Unfortunately, I was lured away by another interesting session about keyservers and GDPR compliance which ran in parallel.

For the plenary session, Holger Krekel reported on the current state of Delta.Chat. If you haven’t tried it yet, give it a go. It tries to provide an instant messaging interface with an email transport. I’ve used it for a while now and my experience is mixed. I still get the occasional email I cannot decrypt, and interop with my other MUA listening on the very same mailbox is hit and miss. Sometimes, the other MUA snatches the email before Delta.Chat sees it, I think. Otherwise, I like the idea very much. Oh, and of course, it implements Autocrypt, so your clients automatically encrypt the messages.

Continuing the previous talk, Azul went on to talk about countermitm, an attempt to overcome Autocrypt 1.0‘s weaknesses. This is important work, because without a vision of how to go from Autocrypt Level 1 to Level 2, you may very well question its usefulness. As of now, emails are encrypted along their way (well, assuming MTA-STS), and if you care about not storing plain text messages in your mailbox, you could encrypt them already now. Defending against active attackers is hard, so having some sort of plan is great. Anyway, countermitm defines “verified groups”, which involves a protocol to be run via email. I think I’ve mentioned earlier that I still find it a bit sad that we don’t have the necessary interfaces to run protocols over email. Outlook, I think, can do simple stuff like voting for one of many options or retracting an email. I would want my key exchange to be automated further, i.e. when GNOME Keysign sends the encrypted signature, I would want the recipient to decrypt it and send it back.

Phil Zimmermann, the father of PGP, mentioned a few issues he sees with the spec, although he also said that it’s been a while since he was deeply into this matter. He wanted the spec to be more modern and to push more aggressively for today’s cryptography rather than the crypto of the past. In fact, he wants the crypto of tomorrow. Now. He said that we know big agencies are storing messages today for later analysis, and we currently have no good way of getting what people call “perfect forward secrecy”, so a future key compromise makes the messages of today readable. He wants post quantum crypto to defeat the prying eyes. I wonder whether anybody has implemented PQ schemes for GnuPG, or any other OpenPGP implementation, yet.

My takeaways are: the keyserver network needs a replacement. Currently, it is used for initial key discovery, key updates, and revocations. I think we can solve some of these problems better if we separate them. For example, revocations are pretty much a fire-and-forget thing, whereas other key updates are not necessarily interesting twenty years from now. Many approaches for making initial key discovery work have been proposed: WKD, Autocrypt, DANE, Keybase, etc. Eventually one of these approaches wins the race. If not, we can still fall back to a (plain) list of email addresses and their key IDs. That’s as good or bad as the current situation. For updates, the situation is maybe not as bad. But we might still want to investigate how to prevent equivocation.

Another big topic was deprecating cruft in the spec to move a bit faster in terms of cryptography and to allow implementers to get a compliant program running (more) quickly. Smaller topics were the use of PQ-safe algorithms and the exploitation of backwards-incompatible changes to the spec, i.e. v5 keys with full fingerprints. Interestingly enough, a trimmed-down spec had already been developed here.

Speaking at FIfFKon 18 in Berlin, Germany

I was invited to be a panellist at this year’s FIfFKon in Berlin, Germany. While I said hi to the people at All Systems Go!, my main objective in Berlin was to attend the annual conference of the FIfF, the association of people in computing who care about peace and social responsibility.

The most interesting talk for me was held by Rainer Mühlhoff on the incapacitation of the user. The claim, very broadly speaking, is that providing a usable interface prevents your users from learning how to operate the machine properly. Or in other words: making an interface for dumb people will attract dumb people and not make them smarter. Of course, he was more elaborate than that.

He presented Android P, which nudges the user into certain behaviour. Android shows you for how long you have used an app and encourages you to stop. Likewise, Google nudges you into providing your phone number for account recovery; the design of that dialogue makes it hard to hit the button to proceed without providing the number. Those nudges do not prevent a choice from being made, they just make it more likely that the user makes one particular choice. The techniques are borrowed from public policy making and commercial settings. So the users become an instrument themselves rather than a sovereign entity.

Halfway through his talk he switched to “sealed interfaces” and presented the user interface of a vacuum cleaner. In the beginning, the nozzle had a “bristly” or “flat” setting, depending on whether you wanted to use it on a carpet or a flat surface. Nowadays, the pictogram does not show the nozzle any more, but rather the surface you want to operate on. Similarly, microwave ovens do not show the two levers for wattage and time any more, but rather full recipes like pizza, curry, or fish. The user is prevented from understanding the device in its mechanical details and from using it as an instrument based on what it does. Instead, the interaction is centred on the end purpose rather than on using the device as a tool to achieve that end. The commercialisation of products numbs people down in their thinking. We are going from “Don’t make me think” to “Can you do the thinking for me”, as, he said, we can see with the newer Android interfaces, which try to know in advance what you intend to do.

Eventually, you adapt the technology to the human rather than adapting the human to the technology. And while this is correct, he says, and it has gotten us very far, it is wrong from a social theory point of view. Mainly because it suggests a one-way process, whereas it really is an interdependency: the interaction with technology forms habits and shapes how the user experiences the machine. Imagine, he said, getting a 2018 smartphone in 1995. Back in the day, you probably could not have made sense of it. The industrial user experience design is a product of numbing users down.

A highly interesting talk that got me thinking a little about whether we ought to teach users the inner workings of software systems.

The panel I was invited for had the topic “More privacy for smart phones – will the GDPR get us a new breakthrough?” and we were discussing with a corporate representative and other people working in data protection. I was there in my capacity as a Free Software representative and as someone working on privacy enhancing technologies. I used my opportunities to praise Free Software and to claim that many problems we were discussing would not exist if we consistently used Free Software. The audience was quite engaged and asked a lot of questions, including the ever popular point of *having* to use WhatsApp, Signal, or any of those proprietary products because of the network effect, and they demanded more regulation. I cautioned against that call for various reasons and mentioned that the freedom to choose the software we run has not yet been fully exploited. Afterwards, some projects presented themselves. It was an interesting mix of academic and actual project work. The list is on the conference page.

Talking at OSDNConf in Kyiv, Ukraine

I was fortunate enough to be invited to Kyiv to keynote (video) the local Open Source Developer Network conference. Actually, I had two presentations. The opening keynote was on building a more secure operating system with fewer active security measures. I presented a few case studies why I believe that GNOME is well positioned to deliver a nice and secure user experience. The second talk was on PrivacyScore and how I believe that it makes the world a little bit better by making security and privacy properties of Web sites transparent.

The audience was super engaged, which made it very nice to be on stage. The questions, also in the hallway track, were surprisingly technical. In fact, most of the conference was about kernel stuff, at least in the English speaking track. There is certainly a lot of potential for Free Software communities. I hope we can recruit these excellent people for writing Free Software.

Lennart eventually talked about casync and how you can use it to ship your images. I’m especially interested in the cryptography involved to defend against certain attacks. We also talked about how to protect the integrity of the files on the offline disk, e.g. when your machine is off and someone can access the (encrypted) drive. Currently, LUKS does not use authenticated encryption, which makes it possible for an attacker to flip some bits in the disk image you later read.
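For completeness: newer cryptsetup releases can already pair LUKS2 with dm-integrity to get authenticated encryption, albeit still marked experimental. A sketch, with /dev/sdX as a placeholder (this formats the device!):

# Experimental authenticated encryption: AES-GCM via dm-crypt plus
# dm-integrity, which detects exactly such bit-flipping attacks
cryptsetup luksFormat --type luks2 \
    --cipher aes-gcm-random --integrity aead /dev/sdX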

Canonical’s Christian Brauner talked about mounting in user namespaces which, historically, seems to have been a contentious topic. I found that interesting, because I think we currently have a problem: filesystem drivers are not meant for dealing with maliciously crafted images. Let that sink in for a moment. Your kernel cannot deal with arbitrary data on the pen drive you’ve found on the street and are now inserting into your system. So yeah, I think we should work on allowing the insertion of random images without having to risk a crash of the system. One approach might be libguestfs, but launching a full VM every time might be a bit too much. Also, you might somehow want to promote certain drives as trusted enough to get the benefit of higher bandwidth and lower latency. So yeah, much work left to be done. Oof.
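The libguestfs approach mentioned above looks roughly like this; the image name is made up, and the point is that the filesystem parsing happens inside a small VM appliance rather than in the host kernel:

# Inspect an untrusted disk image without exposing the host kernel's
# filesystem drivers to it
guestfish --ro -a found-on-the-street.img -m /dev/sda1 ls /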

Then, Tycho Andersen talked about forwarding syscalls to userspace. Pretty exciting and potentially related to the disk image problem mentioned above. His opening example was loading a kernel module from within a container. This is scary, of course, and you shouldn’t be able to do it. But you may very well want that if you have to deal with (proprietary) legacy code like Cisco, his employer, does. Essentially, they provide a special seccomp filter which forwards all the syscall details back to userspace.

As I’ve already mentioned, the conference was highly technical and kernel focussed. That’s very good, because I could have enlightening discussions which hopefully move me forward in solving a few of my problems. One of those I was able to discuss with Jakob in the days around the conference; it involves the capabilities of USB keyboards. Eventually, you wouldn’t want your machine to be hijacked by a device maliciously posing as a security token like the YubiKey. I have some ideas there involving modifying the USB descriptor to remove the capability of sending funny keys. Stay tuned.

Anyway, we’ve visited the city and the country before and after the event and it’s certainly worth a visit. I was especially surprised by the coffee that was readily available in high quality and large quantities.

GNOME Keysign 0.9.9

tl;dr: We have a new Keysign release with support for exchanging keys via the Internet.

I am very proud to announce this version of GNOME Keysign, because it marks an important step towards a famous “1.0”. In fact, it might be just that. But given the potentially complicated new dependencies, I thought it’d be nice to make a sort of RC release first.

The main feature is a transport via the Internet. In fact, the code has been lurking around since last summer thanks to Ludovico’s great work. I felt it needed some massaging and a more gentle introduction to the code base before finally being enabled.

For the transport we use Magic Wormhole, an amazing package for transferring files securely. If you don’t know it yet, give it a try. It is a very convenient tool for sending files across the Internet. They have a rendezvous server so that it works in NATted environments, too. Great.
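If you want to get a feel for it, the command line client is as simple as it gets:

# On the sending side; this prints a short human-friendly code
wormhole send ./some-file
# On the receiving side; type in that code when prompted
wormhole receive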

You may wonder why we need an Internet transport, given that we have local network and Bluetooth already. And the question is good, because initially I didn’t think that we’d expose ourselves to the Internet. Simply because the attack surface is just so much larger and also because I think that it’s so weird to go all the way through the Internet when all we need is to transfer a few bytes between two physically close machines. It doesn’t sound very clever to connect to the Internet when all we need is to bridge 20 centimetres.

Anyway, as it turns out, WiFi access points often don’t allow clients to connect to each other (client isolation) 🙁 Then there is Bluetooth, but it’s still a bit awkward to use. My impression is that people are not satisfied with the quality of Bluetooth connections. Also, the Internet is comparatively easy to use, both as a programmer and as a user.

Of course, we now also have the option to exchange keys when not being physically close. I do not recommend that, though, because our security assumes the visual channel to be present and, in fact, secure. In other words: Scan the barcode for a secure key signing experience. Be aware that if you transfer the “security code” manually via other means, you may be compromised.

With this change, the UX changes a bit for the non-Internet transports, too. For example, we now have a final page which indicates success or failure. We can use this as a base for accompanying the signing process further, e.g. signing the key again with a non-exportable short-term signature so that the user can send an email right away. Or exchanging the keys again after the email has been received. Exciting times ahead.
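GnuPG already has a building block for that non-exportable signature; a hedged illustration, with the fingerprint being a placeholder:

# A local, non-exportable certification which never leaves your keyring
gpg --quick-lsign-key 0123456789ABCDEF0123456789ABCDEF01234567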

Now, after the wall of text, you may wonder how to get hold of this release. It should show up on Flathub soon.

Talking at GUADEC 2018 in Almería, Spain

I’ve more or less just returned from this year’s GUADEC in Almería, Spain, where I got to talk about assessing and improving the security of our apps. My main point was to make people use ASan, which I think Michael liked 😉 Secondarily, I wanted to raise awareness of the security sensitivity of some seemingly minor bugs and of how the importance of getting fixes out to the user should outweigh blame-shifting games.

I presented a three-stage approach to assessing and improving the security of your app: compilation time, runtime, and fuzzing. First, you use some hardening flags to compile your app. Then you can use amazing tools such as ASan or Valgrind. Finally, you can combine this with afl to find bugs in your code. Bonus points if you do that as part of your CI.
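A minimal sketch of the three stages for a single-file program; the flags are common choices rather than an official list, and your build system will want them wired in properly:

# Stage 1: compile with hardening flags
gcc -O2 -g -fstack-protector-strong -D_FORTIFY_SOURCE=2 -o app app.c
# Stage 2: rebuild with ASan (and UBSan while we are at it) and run the tests
gcc -O2 -g -fsanitize=address,undefined -o app-asan app.c
./app-asan
# Stage 3: fuzz with afl; @@ is replaced by the mutated input file
afl-gcc -o app-afl app.c
afl-fuzz -i testcases/ -o findings/ -- ./app-afl @@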

I encountered a few problems when going down that route with Flatpak. For example, libasan.so is not in the Platform image, so you have to use an extension to have it loaded. It’s better than it used to be, though. I tried to compile loads of apps with ASan in the past and I needed to compile a custom GCC. And then mind the circular dependencies, e.g. libmpfr is needed by GCC. If I then compile libmpfr with ASan, GCC stops working, because GCC itself is not linked against ASan. It seems silly to have those annoyances in the stack. And it is. I hope that by making people play around with these technologies a bit more, we can get to a point where we no longer have to deal with those time-consuming annoyances.

Panorama in Frigiliana

The organisation around the presentation was a bit confusing, as the projector didn’t work for the first ten minutes and it was a bit unclear who was responsible for making it work. The audio in that room also used to be wonky. I hope it went alright after all.

This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.