Talking at Cubaconf 2017 in Havana, Cuba

A few weeks ago I gave a talk at Cubaconf 2017 in Havana, Cuba. It’s certainly been an interesting experience, if only because of the Caribbean people, but also because of the food and the conditions the country has been run under for the last decades.

Before entering Cuba, I needed a tourist visa in the form of the tarjeta de turista. It bothered me more than it should have. I thought I’d have to go to the embassy or fly a certain airline in order to get hold of one of these cards. It turned out that you can simply buy these tourist cards at the Berlin airport from the TUI counter. Some claimed it was possible to buy one at immigration, but I couldn’t find any tourist visa for sale there, so be warned. Also, I read that you have to prove that you have health insurance, but nobody was interested in mine. That said, I think it’s extremely clever to have one…

Connecting to the Internet is a bit difficult in Cuba. I booked a place which had “WiFi” listed among its features and I naïvely thought that by booking the place I would also get to connect to the Internet. It turns out that’s not entirely correct. It’s not entirely wrong either, though. In my case, there was an access point in the apartment in which I rented a room. The owner needs to turn it on first and run some weird management software on his PC. That software then makes the AP connect to other already existing WiFis and bridges the connections. That other WiFi, in turn, does not have direct Internet access, but instead somehow goes through the ISP, which requires you to log in. The credentials for logging in can be bought in the ISP’s shops. You can buy credentials worth one hour of WiFi connection (note that I’m avoiding the term “Internet” here) for 3 USD or so from the dealer around the corner. You can get your fix from the legal dealer (i.e. the Internet office…) more cheaply, but that will probably involve waiting in queues. I often noticed people gathering somewhere on the street looking into their phones. That’s where some signal was. When talking to the local hacker community, I found out that they were using a small PCB with an ESP8266 which repeats the official WiFi signal. The hope is that someone will connect to their piece of electronics so that the device is authenticated and also connects the other clients associated with the fake hotspot. Quite clever.

The conference was surprisingly well attended. I reckon it was around a hundred people. I say surprisingly, because from all I could see the event was weirdly organised. I had close to zero communication with the organisers and it was pure luck for me to show up in time. But other people seemed to be in the know, so I guess I fell through the cracks somehow. Incidentally, you could only install the conference’s app from Google, because they didn’t want to offer a plain APK that you could install yourself. I also didn’t really know how long my talks should be and needed to prepare for anything between 15 and 60 minutes.

My first talk was on PrivacyScore.org, a Web scanner for privacy and security issues. As I’ve indicated, the conference was a bit messily organised. The person before me overran into my slot and then there was no cable to hook my laptop up to the projector. We ended up transferring my presentation to a different machine (via pen drives instead of some fancy distributed local p2p network) in order for me to give my presentation. And then I needed to rush through my content, because we were pressed to make it to lunch in time. Gnah. But I think a few people were still able to grasp the concepts and make them useful for themselves. My argument was that Web pages load much faster if you don’t have to load as many trackers and other external content. Also, these people don’t get updates in time, so they might rather want to visit Web sites which generally seem to care about their security. I was actually approached by a guy running StreetNet, the local DIY Internet. His idea is to run PrivacyScore against their network to see what is going on and to improve some aspects. Exciting.

My other talk was about GNOME and how I believe it makes for more secure operating systems. Here, my thinking was that many people don’t have expectations of how their system is supposed to look or even work. And being thrown into the current world, in which operating systems spy on you, could prime them to have low expectations of the security of the system. In the GNOME project, however, we believe that users must have confidence in their computing being safe and sound. To that end, Flatpak was a big thing, of course. People were quite interested. Mostly because they know everything about Docker. My trick to hook these people is to claim that Docker does it all wrong. Then they ask pesky questions, which gives me many opportunities to mention that for some applications squashfs is inferior to, say, OSTree, or that you’d probably want to hand out privileges only for a certain time rather than for the whole lifetime of an app. I was also able to make people look at Endless OS, which attempts to solve many problems I think Cubans have.

The first talk of the conference was given by Ismael and I was actually surprised to meet people I know. He talked about his hackerspace in Almería, I think. It was a bit hard for me to understand, because it was in Spanish. He was followed by Valessio Brito, who talked about putting a price on Open Source Software. He said he started working on Open Source Software at the age of 16. He wondered how you determine how much software, or your work on Open Source, should cost. His answer was that one of the determining factors was simply personal preference for the work to be performed. As an example he said that if you were vegan and didn’t like animals to be killed, you would likely not accept a job doing exactly that. At least, you’d be inclined to demand a higher price for your time. That’s pretty much all he could advise the audience on what to do. But it may also very well be that I did not understand everything, because it was half English and half Spanish and I never noticed quickly enough that he had switched to English.

An interesting talk was given by Christian, titled “Free Data and the Infrastructure of the Commons”. He began by saying that the early textile industry in Lyon, France made use of “software” in 1802 with (hard-wired) wires for the patterns to produce. With the rise of computers, software used to be a common good in the early 1960s, he said. It was exchanged freely, and the sharing of knowledge about software helped to get the industry going. At the end of the 1970s, software got privatised and had to be licensed from the manufacturer, which made the young hacker movement feel challenged. Eventually, the Free Software movement formed and hijacked copyright law in order to preserve the users’ freedoms, he said. He then compared the GPL with the French Revolution and basic human rights, in that the Free Software movement took a radical position and made the users’ rights explicit. Eventually, Free Software became successful, he said, mainly because software was becoming more successful in general. And, according to him, Free Software used to fill a gap that other software created in the 80s. Eventually, the last bastion to overcome was the desktop, he said, but then the Web happened, which changed the landscape. New struggles are software patents, DRM, and the privacy of the “bad services”. He, in my point of view rightfully so, said that all the proliferation of free and open source software has not led to less proprietary software, though. Also, he misses the original FOSS attitude and enthusiasm. Eventually he said that data is the new software. Data was not an issue back when software, or Free Software even, started. He said that 99% of the US growth is coming from the data-processing ad companies like Google or Facebook. Why does data have so much value, he asked. He said that actually living a human life is a lot of work. Now you’re doing that labour for Facebook by entering the data of your human life into their system. That, he said, is where the value is coming from. He made the point that Software Freedoms are irrelevant for data. He encouraged the hackers to think of information systems, not software. Although he left me wondering a bit how I could actually do that. All in all, a very inspiring talk. I’m happy that there is a (bad) recording online.

I visited probably the only private company in Cuba which doubles as a hackerspace. It’s interesting to see, because in my world, people go and work (on computer stuff) to make enough money to be free to become a singer, an author, or an artist. In Cuba it seems to be the other way around: people work in order to become computer professionals. My feeling is that many Cubans are quite artsy. There is music and dancing everywhere. Maybe it’s just the prospect of a rich life, though. The average Cuban seems to make about 30 USD a month. That’s surprising given that an hour of bad WiFi already costs 1 USD. A beer costs as much. I was told that everybody has their way of getting hold of some more money. Very interesting indeed. Anyway, the people in the hackerspace seemed happy to offer their work across the globe. Their customers can be very happy, because these Cubans are a dedicated bunch of people. And they have competitive prices. Even if these specialists made a hundred times as much as the average Cuban, they’d still be cheap in the so-called developed world.

After arriving back from Cuba, I went to the Rust Hackfest in Berlin. It was hosted by the nice Kinvolk folks and I enjoyed meeting all the hackers who care about making use of a safer language. I could continue my work on rustifying pixbuf loaders, which will hopefully make it much harder to exploit them. Funnily enough, I didn’t manage to write a single line of Rust during the hackfest. But I expected that, because we need to get the code ready to be transformed to Rust first. More precisely, restructure it a bit so that it has explicit error codes instead of magic numbers. And because we’re parsing stuff, there are many magic numbers. While digging through the code, other bugs popped up as well which we needed to eliminate as side challenges. I’m very much looking forward to writing an actual line of Rust soon! 😉

Talking at GI Tracking Workshop in Darmstadt, Germany

Uh, I almost forgot to blog about having talked at the GI Tracking Workshop in Darmstadt, Germany. The GI is, literally translated, the “informatics society” and sort of a union of academics in the field of computer science (oh boy, I’ll probably get beaten up for that description). Within that body several working groups exist, and one of these groups, working on privacy, organised this workshop about tracking on the Web.

I consider “workshop” a bit of a misnomer for this event, because it was mainly talks with a panel at the end. I was an invited panellist representing the Free Software movement, contrasting with a guy from affili.net, someone from eTracker.com, a lady from eyeo (the AdBlock Plus people), and professors representing academia. During the panel discussion I tried to focus on Free Software being the only tool enabling users to exercise control over what data is being sent, and thus to control tracking. Nobody really disagreed, which made the discussion a bit boring for me. Maybe I should have tried to find another, more controversial argument to make people say more interesting things. Then again, it’s probably more the job of the moderator to make the participants discuss heatedly. Anyway, we had a nice hour or so of talking about the future of tracking, not only on the Web, but in our lives.

One of the speakers was Lars Konzelmann, who works at Saxony’s data protection office. He talked about the legislative nature of data protection issues. The GDPR, although almost two years old, is a thing now. Several types of EU-wide regulations exist, he said. One is the “Regulation” and the other is the “Directive”. The GDPR has been designed as a Regulation, because the EU wanted to keep a minimum level of quality across the EU and prevent countries from implementing their own legislation with rather lax rules, he said. The GDPR favours “privacy by design”, but that has issues, he said, as the usability impacts are severe. So far, companies could get the user’s “informed consent” in order to do pretty much anything they wanted, although its usefulness is limited, because people generally don’t understand what they are consenting to. But with the GDPR, companies should implement privacy by design, which will probably obsolete the option for users to simply click “agree”, he said. So things will somehow get harder to agree to. That, in turn, may cause people to be unhappy and feel that they are being patronised and told what they should do, rather than expressing their free will with a simple click of a button.

Next up was a guy with his solution against tracking on the Web. They sell a little box which you use to surf the Web with, similar to what Pi-hole provides. It’s a Raspberry Pi with a (likely GPL-infringing) modification of Raspbian which you plug into your network and use as a gateway. I assume that the device then filters your network traffic to exclude known bad trackers. Anyway, he said that ads are only the tip of the iceberg. Below that is your more private, intimate sphere, which is being pried into by advertising companies bidding in real time for your screen estate. Why would that be a problem, you ask? He said that companies apply dynamic pricing depending on your profile and that you might well be interested in knowing that you are being treated worse than other people. Other examples include a worse credit or health rating depending on where you browse, or because your bank knows that you’re a gambler. In fact, micro-targeting allows for building up a political profile of yours or for making identity theft much easier. He then went on to explain how Web tracking actually works. He mentioned third-party cookies, “social” plugins (think: Like button), advertisements, and content providers like Google Maps, Twitter, YouTube, these kinds of things, as a means to track you. And that it’s possible to do non-invasive customer recognition which does not involve writing anything to the user’s disk, e.g. no cookies. In fact, such fingerprinting of the user’s browser is the new thing, he said. He probably knows, because he is also in the business of providing a tracker. That’s probably how he knows that “data management providers” (DMPs) merge data sets of different trackers to get a more complete picture of the entity behind a tracking code. DMPs enrich their profiles by trading them with other DMPs. In order to match IDs, a tracker sends some code that makes the user’s browser merge the tracking IDs, e.g. makes it send all IDs to all the trackers. He wasn’t really advertising his product, but during Q&A he was asked what we can do against that tracking behaviour, and then he was forced to praise his product…
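
To sketch that ID-merging a bit more concretely: a common pattern is “cookie syncing”, where one tracker redirects the browser to another, carrying its own user ID along. The exchange below is entirely hypothetical (domains, paths, and parameter names invented for illustration):

```
GET /pixel.gif HTTP/1.1              (third-party request on some page)
Host: tracker-a.example
Cookie: uid=AAA111

HTTP/1.1 302 Found                   (tracker A redirects to tracker B)
Location: https://tracker-b.example/sync?partner=a&partner_uid=AAA111

GET /sync?partner=a&partner_uid=AAA111 HTTP/1.1
Host: tracker-b.example
Cookie: uid=BBB222                   (B can now record AAA111 == BBB222)
```

After that round trip, both parties can trade profile data keyed by the other’s ID.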

eyeo’s legal counsel Judith Nink then talked about the juristic aspects of blocking advertisements. She explained why people use adblockers in the first place. I have commented on that before, claiming that using an adblocker improves your security. She did indeed mention privacy and security as reasons for people to run adblockers and explicitly mentioned malvertising. She said that the Jerusalem Post had ads which were actually malware. That in turn caused a bit of a stir in Germany, because it was framed as an attack on the German parliament… But other reasons for running an adblocker were data consumption and the speed of loading Web pages, she said. And, of course, the simple annoyance of certain advertisements. She presented some studies which showed that the typical Web site has 50+ or so trackers and that the cost of downloading advertising is significant compared to downloading the actual content. She then showed a statement by Edward Snowden saying that using an ad-blocker is not only a right but a duty.

Everybody should be running adblock software, if only from a safety perspective

Browser-based ad blockers need external filter lists, she said. The discussion then turned towards the legality of blocking ads. I wasn’t aware that it’s a thing that law people discuss. How can it possibly not be legal to control what my client does when being fed a bunch of HTML and JavaScript…? It turns out that it’s more about the entity offering these lists and a program to evaluate them *shrug*. Anyway, ad-blockers use either blocking or hiding of elements, she said, where “blocking” is to stop the browser from issuing the request in the first place while “hiding” is to issue the request, but to then hide the DOM element. Yeah, law people make exactly this distinction. She then turned to the question of how legal either of these behaviours is. To the non-German folks that question may seem silly, and I tend to agree. But apparently, you cannot simply distribute software which modifies a browser to either block requests or hide DOM elements without getting sued by publishers. Those, she said, argue that gratis content can only be delivered along with ads and that it’s part of the deal with the customer: that they also transfer ads along with the actual content. If you think that this is an insane argument, especially in light of the customer not having had the ability to review that deal before loading the page, you’re in good company. She argued that the simple act of loading a page cannot be a statement of consent, let alone a deal of some sort. In order to make it a deal, the publishers would have to show their terms of service first, before showing anything else, she said. Anyway, eyeo’s business is to provide those filter lists and a browser plugin to make use of those lists. If you pay them, however, they think twice before blocking your content and make exceptions. That feels a bit mafiaesque, and so they were sued for “aggressive geschäftliche Handlung”, an “aggressive commercial practice”. I found the history of cases interesting, but I’ll spare the reader the details here. You can follow that, and other cases, by looking at OLG Koeln 6U149/15.
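
To make the blocking-versus-hiding distinction concrete, here is what the two rule types look like in the Adblock Plus filter syntax (domains invented for illustration):

```
! Blocking rule: the browser never issues the request at all
||ads.example.com^

! Element hiding rule: the request may still happen,
! but the matching DOM element is hidden afterwards
example.com##.ad-banner
```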

Next up was Dominik Herrmann presenting PrivacyScore.org, a Web portal for scanning Web sites for security and privacy issues. It is similar to other scanners, he said, but the focus of PrivacyScore is publicity. By making results public, he hopes that a race to the top will occur. Web site operators might feel more inclined to implement certain privacy or security mechanisms if they know that they are the only Web site which doesn’t protect the privacy of its users. Similarly, users might opt to use a Web site providing a more privacy-friendly service. With the public portal you can create lists in order to create public benchmarks. I took the liberty of creating a list of Free Desktop environments. At the time of creation, GNOME fell behind many others, because the mail server did not implement TLS 1.2. I hope that is being taken as a motivational factor to make things more secure.

Talking at Kieler LinuxTage 2017 in Kiel, Germany

I was invited to present GNOME at the Kieler LinuxTage in Kiel, Germany.

Compared to other events, it’s a tiny happening with something between fifty and a hundred people or so. I presented on how I think GNOME pushes the envelope regarding making secure operating systems (slides, videos to follow). I gave three examples of how GNOME achieves its goal of providing a secure OS without compromising on usability. In fact, I claimed that the most successful security solutions must not involve the user. That sounds a bit counter-intuitive to people in the infosec world, because we’re trying to protect the user, so surely they must be involved in the process. But we had better not do that. This is not to say that we shouldn’t allow the user to change preferences regarding how the solutions behave, but rather that they should work without intervention. My talk was fairly well attended, I think, and we had a great discussion. I tend to like the discussion bit better than the actual presentation, because I see it as an indicator of how much the people care. I couldn’t attend many other presentations, because I only attended the second day. That’s why I couldn’t meet with Jim :-/

But I did watch Benni talking about hosting a secure Web site (slides). He started his show by mentioning DNS, which everybody can read. He introduced DNSSEC which, funnily enough, everybody can also read, but he failed to mention that. At least nobody can manipulate the response, though. Another issue is that you leak information about your host names with negative responses, because you tell the client that there is nothing between a.example.com and b.example.com. He continued with SSH for deploying your Web site and mentioned SSHFP, which is a mechanism for authenticating the host key. A similar mechanism exists for Web or mail servers, he said: DANE, DNS-based Authentication of Named Entities. It works via TLSA records, which encode either the certificate or the public key used. Another DNS-based mechanism is relatively young: CAA. It asserts that a certificate for a host name shall be signed by a certain entity, so you can hopefully prevent a CA that you’ve never heard of from creating a certificate for your hosts. All of these mechanisms try to make the key exchange in TLS a bit less shady. TLS ensures a secure channel, i.e. confidentiality, non-repudiation, and integrity. That is considered to be generally useful in the Web context. TLS tends to be a bit of a minefield, because of the version and configuration matrix. He recommended using at least TLS version 1.2, disabling compression due to inherent attacks on typical HTTP traffic (CRIME), and using “perfect forward secrecy” ciphers to protect the individual connections even after the main key has leaked. Within TLS you use X.509 certificates for authenticating the parties, most importantly in the Web world the server side. The certificate shall use a long enough RSA key, he said, and it shall not use the CN field to indicate the host name, but rather the SAN field. The signatures should be produced with “at least SHA-256”. He then mentioned OCSP, because life happens and keys get lost or stolen. However, with regular OCSP the clients expose the host names they visit, he said. Enter OCSP Stapling. In that case the Web server itself gets the OCSP response and hands it over to the client. Of course, this comes with its own challenges. But it may also happen that CAs issue certificates for a host name which doesn’t expect that new certificate. In that case, Certificate Transparency becomes useful. It’s composed of three components, he said: log servers which log all created certificates, monitors which pull the logs, and auditors which check the logs for host names. Again, your browser may want to check whether the given certificate is in the CT logs. This opens the same privacy issue as with OCSP and can be somewhat countered with signed log statements from a few trusted log servers.
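
For reference, here is roughly what the DNS records he mentioned look like in zone-file syntax (host names are placeholders and the hex digests are shortened dummies):

```
; SSHFP: publish the SSH host key fingerprint (4 = Ed25519, 2 = SHA-256)
example.com.               IN SSHFP 4 2 0123456789abcdef...

; TLSA (DANE) for HTTPS on port 443
; 3 = DANE-EE, 1 = match the SubjectPublicKeyInfo, 1 = SHA-256
_443._tcp.www.example.com. IN TLSA 3 1 1 fedcba9876543210...

; CAA: only the named CA may issue certificates for this name
example.com.               IN CAA 0 issue "letsencrypt.org"
```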

In any case, TLS is only useful, he said, if you are actually using it. Assuming you had a secure connection once, you can use the HTTP Strict Transport Security (HSTS) header. It tells the browser to use TLS in the future. Of course, if you didn’t have that first secure connection, you can have your Web app entered into the HSTS preload list, which is then baked into major browsers. Another mechanism is HTTP Public Key Pinning (HPKP), an HTTP header to tell the client which certificates or CAs shall be accepted. The header value is a simple list of hashed certificates. He mentioned the risk of someone hijacking your Web presence with an injected HPKP header. Once a TLS connection has eventually been established successfully, the HTTP layer gets interesting, he said. The X-Content-Type-Options header prevents Internet Explorer from sniffing content types, which might otherwise cause an image to be executed as JavaScript. Many Cross-Site Scripting attacks, he said, originate from the page being embedded in a frame. To prevent that, you can set the X-Frame-Options header. To activate Cross-Site Scripting protection mechanisms, the X-XSS-Protection header can be set. It’s probably turned off by default for compatibility reasons, he said. If you know exactly where your data is coming from, you can make use of a Content Security Policy, which is like SELinux for your browser. It’s a bit of a complicated mechanism, though. For protecting your Web app he mentioned Sub-Resource Integrity, which is essentially the hash of the script you expect. This prevents tampering with the foreign script, malicious or not.
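
A minimal sketch of what that header set could look like in a server response, plus an SRI-protected script tag in the HTML (all values are illustrative placeholders, not recommendations):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Public-Key-Pins: pin-sha256="AAAA...="; pin-sha256="BBBB...="; max-age=5184000
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com

<script src="https://cdn.example.com/lib.js"
        integrity="sha384-AAAA..."
        crossorigin="anonymous"></script>
```

Note that HPKP conventionally carries at least two pins, a live one and a backup, so a lost key doesn’t lock your users out for the whole max-age.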

I think that was one of the better talks in the schedule, with many interesting details to be discovered. I enjoyed it a lot. I did not enjoy their Web sites, though, which are close to being unusable. The interface for submitting talks gives you a flashback to the late ’90s. Anyway, it seems to have worked for many years now and I hope they will have many more years to come.

Talking at PET-CON 2017.2 in Hamburg, Germany

A few weeks ago, I was fortunate enough to talk at the 7th Privacy Enhancing Techniques Conference (PET-CON 2017.2) in Hamburg, Germany. It’s a teeny tiny academic event with a dozen or so experts in the field of privacy.

The talks were quite technical, involving things like machine learning over logs or secure multi-party computation. I talked about how I think that the best technical solution does not necessarily enable people to be more private, simply because they might not be able to make use of the tool properly. That concern is generally shared in the academic community. Yet the methodology for creating and assessing the effectiveness of a design is not very well developed. I guess we need to invest more brain power into creating models, metrics, and tools for enabling people to do safer computing.

So I’m happy to have gone and to have had the opportunity to discuss the issues I’m seeing. Likewise, I found it very interesting to see where people are currently heading.

Talked at mrmcd 2017 in Darmstadt, Germany

I attended this year’s MRMCD in Darmstadt, Germany. I have attended a few times in the past and I think this year’s edition was not as successful as the last ones. The venue changed this year, which probably contributed to some more chaos than usual and hence things not running as smoothly as they used to. I assume it will be better next year, when people know how to operate the venue. Although all tickets were sold during the presale phase, it felt smaller than in the last years. In fairness, though, the venue was also bigger this year. The schedule had some interesting talks, but I didn’t really get around to attending many, because I was busy preparing my own shows (yeah, I should’ve done that before…).

I had two talks at this conference. The first was on playing the children’s game “battleship” securely (video). That means with cryptography. Lennart and I explained how concepts such as commitment schemes, zero-knowledge proofs of knowledge, oblivious transfer, secure multiparty computation, and Yao’s protocol can be used to play that game without a trusted third party. The problem, in short, is to a) make sure that the other party’s ships are placed correctly and b) make sure the other party answers correctly. Of course, if you get hold of the placements of the ships, both problems are trivial. But your opponent doesn’t want you to know about the placements. A trusted third party would solve the problem trivially. But let’s assume we don’t have such a party. Also, we want to decentralise things, so let’s come up with a solution that involves two players only.

The second problem can be solved with a commitment. A commitment is a statement about something you’ve chosen that doesn’t reveal the choice itself, nor allows for changing one’s mind later. Think of a letter in a closed envelope that you hand over. The receiver doesn’t know what’s written in the letter and the sender cannot change the content anymore. Once the receiver is curious, they can open the envelope. This analogy isn’t the best and I’m sure there are better real-world concepts to compare commitment schemes to. Anyway, for battleship, you can make the other party commit to the placement of the ships. Then, when the battle starts, you have the other party open the commitment for the field that you’re shooting. You can easily check whether the commitment verifies correctly in order to determine whether you hit a ship or water.
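
As a toy illustration, here is a minimal hash-based commitment sketch in Python. It assumes SHA-256 with a random nonce is good enough for the purpose; a production scheme would be chosen with far more care regarding its hiding and binding properties:

```python
import hashlib, hmac, os

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to a value; return (commitment, nonce) where the nonce
    is the opening information kept secret until reveal time."""
    nonce = os.urandom(32)  # randomness hides the committed value
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Re-compute the hash and check the opened value matches."""
    return hmac.compare_digest(commitment,
                               hashlib.sha256(nonce + value).digest())

# One field of the grid is either a ship or water.
c, nonce = commit(b"ship")
# Later, when that field is shot at, the committer reveals value and nonce:
assert verify(c, b"ship", nonce)        # honest opening verifies
assert not verify(c, b"water", nonce)   # lying about the field fails
```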

The other problem is the correct placement of the ships, e.g. no ships shall be adjacent, exactly ten ships, exactly one five-field ship, etc. You could easily wait until the end of the game and then check whether everything was placed correctly. But that wouldn’t be (cryptographic) fun. Let’s assume one round of shooting is expensive and you want to make sure to only engage if the other party indeed follows the rules. Now it’s getting a bit crazy, because we need to perform a calculation without learning anything other than “the ships are correctly placed”. That’s a classic zero-knowledge problem. And I think it’s best explained with the story of the magic door in a cave.

Even worse, we need to somehow make sure that we cannot change our placement afterwards. There is a brain-melting concept called secure multi-party computation which allows you to do exactly that. You can execute a function without knowing what you’re doing. Crazy. I won’t be able to explain how it works in a single blog post and I also don’t intend to, because others are much better at explaining it than I could ever be. The gist of the protocol is that you model your functionality as a Boolean circuit and assign random values to represent “0” or “1” for each wire. You then build the truth table for each gate and replace the values of the table (zeros and ones) with an encryption under both the random value for the first input wire and the random value for the second input wire. The idea now is that the evaluator can only decrypt one value in the truth table given the input keys. There are many more details to care about, but eventually you have a series of encrypted, or garbled, gates and you need the relevant keys in order to evaluate them. You can’t tell from the keys you get whether they represent a “0” or a “1”. Hence you can evaluate without knowing the other party’s input.
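
To make that a little more tangible, here is a toy Python sketch of a single garbled AND gate. It is deliberately over-simplified: a plain hash serves as the “encryption”, and the evaluator cheats by comparing candidates against the known output labels, whereas a real construction would use authenticated encryption and a point-and-permute bit so the correct row can be recognised without that knowledge:

```python
import hashlib, os, random

def pad(k1: bytes, k2: bytes) -> bytes:
    """Derive a one-time pad for a table row from both input wire labels."""
    return hashlib.sha256(k1 + k2).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One random 32-byte label per possible wire value; a label alone
# reveals nothing about whether it stands for 0 or 1.
labels = {w: {bit: os.urandom(32) for bit in (0, 1)} for w in ("a", "b", "out")}

# Garble the AND gate: encrypt each output label under the pair of
# input labels that selects this row of the truth table.
table = [xor(labels["out"][x & y], pad(labels["a"][x], labels["b"][y]))
         for x in (0, 1) for y in (0, 1)]
random.shuffle(table)  # hide which row corresponds to which inputs

def evaluate(ka: bytes, kb: bytes) -> bytes:
    """Holding one label per input wire decrypts exactly one row."""
    p = pad(ka, kb)
    for row in table:
        candidate = xor(row, p)
        if candidate in labels["out"].values():  # the cheat, see above
            return candidate
    raise ValueError("no row decrypted")

# The evaluator holds the labels for a=1, b=0 and learns only an
# opaque output label -- which here stands for 1 AND 0 == 0.
assert evaluate(labels["a"][1], labels["b"][0]) == labels["out"][0]
```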

My other talk was about a probable successor of Return-Oriented Programming: Data-Oriented Programming (video). In Return-Oriented Programming (ROP) and its variants like JOP, the aim is to divert the original control flow in order to make the program execute the attacker’s functionality. This, however, can probably be thwarted by Control Flow Integrity (CFI). In its simplest form, it checks on every branch whether it is legit. Think of a database with a list of addresses which are allowed to transfer control to a list of other addresses. Of course, real-world implementations are more clever. Anyway, let’s assume that we’ll have a hard time exploiting our target with ROP, because we cannot change the CFG of the program. If our attack doesn’t change the CFG, though, it should be safe from anything that detects its modification. That’s the central idea of DOP.

Although I’m not super excited about this year’s edition, I’m looking forward to seeing the next year’s event. I hope it’s going to be a bit more organised; including myself 😉

GUADEC 2017

It’s summer and it’s GUADEC time! This year’s GUADEC took place in Manchester, England. It was surprisingly less bad than that location might suggest 😉 The organisers deserve a big round of applause for having pulled the event off. Having organised last year’s GUADEC, I have first-hand experience of running such an event. So a big “thank you” to the team from England 🙂

The venue was a big and modern university and the accommodation was conveniently located a few steps from the lecture hall. That’s especially nice given the typical English weather 😉 We got to live in the student dorms and I’m a bit jealous of today’s students being able to live in such a comfortable place.

I attended a few talks from the list, among them Christian Hergert reporting on The State of Builder, which was a bit scattered and not very well structured for beginners like me. I guess it was meant to be more of a showcase of new features than a structured walk through the design and thoughts behind the project. I knew the project existed but I never really got around to working with it, so I was a bit put off. But I took it as a good opportunity to install the latest Flatpaked application 🙂

I liked Simon’s talk on enabling users to modify the software they are running. Essentially, you can click a button in the application and it’ll fire up an IDE where you can change code and hit “play” to run the new version. Amazing. Software Freedom at its best. He demoed a prototype and I think it’s got potential. I really like the idea of the user being able to tinker around easily. Especially given that the status quo is JHBuild. That’s a nice tool, but it proves to be hard for people to make good use of. I hope we will see something like this being used in the future.

Federico told us about the efforts to make use of the Rust language in GNOME. The gist is, essentially, that you’d better start with the leaf functions of your app or library rather than a central function in your architecture. I then tried to find leaf functions with the help of the compiler, but I failed. I tried Egypt, but I wasn’t patient enough to make proper use of the generated dot file in order to identify leaf functions. Maybe I should give cflow a try next time.
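
For my own future reference, the Egypt workflow goes roughly like this (assuming GCC and Graphviz are installed; in the resulting graph, leaf functions are the nodes with no outgoing edges):

```sh
# Have GCC dump its RTL intermediate representation while compiling
gcc -fdump-rtl-expand -c *.c

# Turn the .expand dumps into a Graphviz call graph and render it
egypt *.expand > callgraph.dot
dot -Tsvg callgraph.dot > callgraph.svg

# cflow, alternatively, prints a textual call tree straight from the sources
cflow *.c
```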

I used the BoF days to dip a little bit into Rust. It’s always helpful to have a bunch of smart hackers around. That’s what I like about these kinds of events: you get to know and talk to very smart people. I also tried to catch up with my very talented student and discuss the changes we’d like to see.

Thanks to the GNOME Foundation for sponsoring my travel and to the local team for having organised a successful event!

GUADEC 2017 group photo

Attended FOSDEM 2017

Unsurprisingly, the biggest European Free Software event happened in Brussels, Belgium again. I’m talking about FOSDEM, of course. It’s a fixed entry in many people’s calendars and always a good excuse to visit Brussels 🙂

I’m a bit late to report on what talks I managed to see as others have already covered some of the talks, but I still want to add some observations.

Richard Brown from SUSE talked about dinosaurs and resurrecting them (video). It was more about containerised apps than actual dinosaurs, though. The general theme was repeating mistakes that we should have learned from in the past. He started by mentioning that the Windows DLL hell was a nightmare. You needed to test your application with each and every version combination of every possible library. The DLLs did not necessarily have ABI compatibility, so it was very cumbersome to test. Windows 2000 brought side-by-side assemblies, which are some form of DLL containerisation, he said. Each app and its DLLs use a separate memory space. Programs can ship “private” DLLs in their application directory, so you don’t necessarily break other apps with your DLL carrying the same name. This approach, however, still has issues: security-wise, each app needs to update its libraries itself rather than having them updated centrally. So each app needs to build and ship its own updater, which is not trivial to do. Legally it’s also interesting, he said, because bundling these DLLs may impose restrictions. Last but not least, you potentially have the same DLL on disk multiple times, because each app may ship it.

The contemporary software distribution model has its problems, too, he said. Compatibility with various distros is an issue, because each distro is slightly different. Each distribution also has their own pace of change which may be incompatible with the application in question, e.g. the distros may decide to ship an older version because they have tested it more. Different distributions have different libraries and versions thereof. Also, each distribution has different toolsets to package applications up for their environment. Application developers, however, don’t want to care about these details.

Containerised applications solve these issues. Maybe. He mentioned Flatpak, Snappy, and AppImage. The latter is the oldest technology, dating all the way back to 2003. The solutions have in common that they bundle the app and run it in some kind of container or sandbox. By his criteria, the compatibility issue is solved, because the libraries are in the bundles. Portability is solved, because all dependencies are shipped in the bundle. And the pace of change is up to the app developer.

The containerisation solutions, though, assume a common standard base provided by the distributions. According to him, such a common standard base does not exist in any practical sense. With containerised apps, he said, we might be repeating history. He explained that we might get a security nightmare, because each app needs to update its dependencies itself. The question also arises whether all the libraries can actually be bundled and shipped. App developers are picking up the responsibilities that distros used to have. You still have to test everything on each distro, just to be sure that your base dependencies still work correctly, he said. He sees distributions as part of the solution to these problems. He thinks that a rolling release might solve the issues we’re trying to solve with containerised apps. A rolling release can ship new releases of applications very quickly. The distribution still uses its tools for the common problems like maintenance, security, and legal stuff.

In a lightning talk, David talked about “practical TPM 2.0 usage”. He showed how to generate a signing key, sign a document with it, and verify the signature. He said that Microsoft mandated TPM 2.0 for Windows 10 Mobile and that it is a cryptographic processor rather than an accelerator. TPM 2.0 is different from TPM 1.2 in various ways, he said. For example, the 2.0 can do ECC (P-256 and BN-256) and SHA-256. But it’s also “algorithm agile”, which means that you can add algorithms without having to change the specification. He sees three main usages: platform integrity like secure boot and trusted boot, disk encryption where the TPM stores and controls access to the key, and Digital Restriction Management by verifying code signatures. In order to use the TPM you have two options, he said: IBM and Intel have each developed some tools. IBM doesn’t have a “resource manager” as described by the specification, which acts like a multiplexer. Intel does have such a resource manager and they are working on putting it into Linux. However, Intel has fewer tools, he said, although it wasn’t entirely clear to me what he was referring to. He mentioned that his employer, Facebook, uses TPMs for platform attestation.
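
I didn’t note his exact commands, but with Intel’s tpm2-tools the sign-and-verify flow he demonstrated looks roughly like this (a sketch from memory; the flags follow the 4.x-era tool syntax and differ between versions):

```sh
# Create a primary key under the owner hierarchy
tpm2_createprimary -C o -c primary.ctx

# Create an ECC signing key under it and load it into the TPM
tpm2_create -C primary.ctx -G ecc -u key.pub -r key.priv
tpm2_load -C primary.ctx -u key.pub -r key.priv -c key.ctx

# Sign a document and verify the signature
tpm2_sign -c key.ctx -g sha256 -o doc.sig doc.txt
tpm2_verifysignature -c key.ctx -g sha256 -m doc.txt -s doc.sig
```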

Hanno talked about security on the Linux desktop. He referred to the issues Chris Evans exposed a few weeks ago. He wanted to make the audience angry, he said. But not at him, I suppose, because he considers himself to be the messenger only. The basic problem is an unfortunate agglomeration of bugs or behaviours. It starts with the browser automatically downloading files into the user’s downloads folder, i.e. without asking the user. Then there is Tracker, which indexes files that you add to your home directory. Such as the downloads folder. And then there are buggy (read: vulnerable) implementations of file parsers.

He also referred to Carlos’ comment that the bugs are just bugs and that, apart from those bugs, no problem had been found. Hanno’s point, as far as I could make it out, was that a project of the size of Tracker, especially with that number of dependencies that you don’t control, cannot make sure that there won’t be yet another bug that can be exploited. That’s quite fatalistic, but probably not too far from reality. It’s not just a Tracker issue, though, he said. KDE has Baloo and everybody wants to have thumbnails of the files in your folders. He reiterated that automatic downloads AND automatic indexing create a huge attack surface. And that the indexers support a vast variety of file formats by using many libraries of varying quality. While Tracker quickly adopted sandboxing, he said, KDE hasn’t.

He mentioned other exploit mitigation techniques such as ASLR or CFI. With ASLR, he said, the idea is to load code and data at random addresses in memory. This mitigates exploits, because they cannot reliably target valid code in memory. At least that’s the idea. You need to compile the code with -fpic and -pie, he said. Linux distributions have been slow in adopting ASLR, though. Ubuntu introduced it with 16.10, Fedora with 23, and Debian is WIP. openSUSE has it for a few packages only. It should be the default, he said. Windows, on the other hand, has had it since Vista. They also explore and experiment with more modern mitigations like CFI. Yet another approach is to avoid the C language, because “[it] is full of memory corruptions”. Rust comes to mind as an alternative. GStreamer already supports plugins in Rust, he said. He concluded that fixing all these bugs, like Carlos seemed to want, is very hard. Not only because GStreamer is very prone to memory corruption due to the amount of complicated formats it parses. He mentioned fuzzing as a viable strategy to shake out bugs and said he found many bugs within a few days. He suggested that we should probably do more of that ourselves. I’m working on it. More to be posted separately.
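
For reference, building your own code with ASLR-friendly flags looks like this with GCC or Clang (recent distro compilers often enable PIE by default):

```sh
# Build a position-independent executable so the kernel can map it
# at a random base address on every run
gcc -fPIE -pie -o myapp myapp.c

# Shared libraries need position-independent code as well
gcc -fPIC -shared -o libfoo.so foo.c

# A PIE shows up as a shared object (Type: DYN) rather than EXEC
readelf -h myapp | grep Type
```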

The next talk was about testing TLS implementations. Over the last year or so I have begun investigating TLS issues myself and was wishing for a TLS testing framework. Now I have learned about an implementation. Hubert Kario introduced his “tlsfuzzer”, which is a bit of a misnomer, because it actually doesn’t perform any fuzzing. He said that TLS is complex and that it has 326 official ciphersuites, 4 PKI cryptosystems, 16 signature-hash pairs, and many more countable things that make the test matrix grow fast. There is a lot of state to be maintained, he said. He presented his tool, which takes care of the TLS specifics but allows you to define your own payloads and modifications to them. For example, with a few lines of code you can define a client that opens a TLS connection and uses a GCM ciphersuite for collecting the nonces. He claims to have found more than 20 issues in NSS, GnuTLS, and OpenSSL. I’m curious to play around with it and maybe hook it up with Scapy’s fuzzing facilities.

Another TLS-related talk was given by Fridolin, who showed us a TLS implementation as a Linux kernel module. The advantages are manifold, he said. Obviously, the connection should be cheaper in terms of computation, because the context does not need to be switched so often. Others are already using a kernel implementation of TLS, he said. He mentioned that Solaris has a kssl socket and that Netflix uses a modified sendfile() for TLS on BSD. His implementation has been evaluated by Facebook, he said. The implementation still leaves the handshake to user space and handles only the symmetric encryption.

Compared to other FOSDEMs, I was able to actually see a few talks, although I was impressed by the number of people I randomly bumped into and who kept me from attending more talks 😉 The size of FOSDEM is the cause of, and solution to, its problems. A good thing about it was that I could bribe someone to cook up a Debian package for GNOME Keysign so that, hopefully, 200 people don’t have to queue up and do weird things :o)

Talking at Def.camp 2016 in Bucharest, Romania

Just at the beginning of this month I was invited to go to Bucharest, Romania, to give a talk on GNOME at this year’s def.camp. The conference seems to be an established event in the Romanian security community and was organised quite well. As I said in my talk, I was happy to be there to tell those people about Free Software. I saw many people running around with their proprietary systems. It seems that certain parts of the security community still do not believe that the security of a system greatly increases when it’s based on Free Software. In fairness, the event seemed to be a bit on the suit-and-tie side, where Windows is probably much more common than people want.

Andrei Avădănei opened the conference by saying how happy he was that, even at that unholy hour (09:00 in the morning…), he counted 1100 people from 30 countries, and he expected that number to grow over the following hours. It didn’t feel that big, but the three halls were quite large indeed. One of those halls was the “hacking village”, in which participants could practise real-life “problem solving skills”. The hacking village was more of an expo where vendors had their booths, but there were also some interesting security challenges. My favourite booth was the Virtual Reality demo. Someone had brought an HTC VR system and people could play a simple game. I had tried an Oculus Rift before, in which I rode a roller coaster. With the HTC system, I also had some input devices, which really enhanced the experience. Very immersive.

Anyway, Andrei mentioned how happy he was to have the biggest security event in Romania be very grassroots- and community-driven. Unfortunately, he then let a representative of Orange, the main sponsor, talk. Of course, you cannot run a big event like that without enough financial backing. But giving the main stage, the prime opening spot, to the main sponsor does not leave the impression that they are community-driven… I expected the first talk after the opening to set the theme for the conference. In this case, it was a commercial. That doesn’t actually fit the conference too badly, because out of the 32 talks I counted 13 (or 40%) being delivered by sponsors. By sponsors I mean all companies listed on the homepage for their support. It may very well be that I am mistaking grassroots supporters for commercial sponsors.

The Orange CTO mentioned that connectivity is the new electricity, which shapes countries and communities. For a telco like them, ensuring connectivity means maintaining security, he said. The Internet of connected devices (IoT) is growing exponentially and so are the threats. Orange has to invest in order to maintain security for its clients. And they do, it seems. He showed a fancy-looking “threat map” which showed attacks in real time. Probably a Snort (or whatever IDS is currently en vogue) with a map showing arrows from GeoIP locations pointing towards Romania.

Next up was Jayson Street, who talked about how he failed doing his job. He was a blue team security guy, he said, and worked for a bank as information security officer. He was seen by the people as the bad guy making their lives dreadful. That was bad, he said, because he didn’t teach the people the values and usefulness of information security. Instead he taught them that they’d better not get to meet him. The better approach, he said, is trying to be part of a solution instead of looking for problems: empower the employees in what information security is doing or trying to do. It was a very entertaining presentation given by a very engaged speaker. I couldn’t take that much away from the content, though.

Vlad from Orange talked about their challenges in providing an open, easy-to-use, and yet secure WiFi infrastructure. He reported on user expectations and business requirements. Users expect to be able to just connect without much hassle. The business side wants to identify the user and authorise usage. It was mainly high-level, except for a few walkthroughs of authentication protocols. He mentioned EAP-SIM and EAP-AKA as more seamless authentication protocols compared to, say, a captive Web portal. I didn’t know that it’s possible to use the perfectly valid shared secret in your SIM for authentication. It makes perfect sense. Even more so for a telco such as Orange.

Mihai from Bitdefender talked about browser instrumentation for exploit analysis. That means, as I found out after the talk, harnessing the browser’s internals to analyse malicious payloads. He showed how your browser (well… Internet Explorer with Flash) is exploited nowadays. He ran a “Cerber” demo, exploiting an Internet Explorer with some exploit kit. He showed Fiddler and Process Explorer, which displayed the HTTP traffic and the spawned processes. After visiting a simple Web page, the malicious payload was delivered, exploited the IE, and finally crashed it. The traffic in Fiddler revealed that the malware was delivered via a crafted Flash program. He used a Flash decompiler to look at the files. But he didn’t really find the exploit itself, probably because of some obfuscation. So what is the actual exploit? In order to answer that properly, you need to inspect the memory during runtime, he said. That’s where browser instrumentation comes into play. I think he interposed several functions, such as document.write, eval, object parameters, Flash’s LoadBytes, etc., to analyse what goes in and out. All that information was then saved to disk in separate files, i.e. everything that went to document.write was written to c:\share\document.write, everything that Flash’s LoadBytes took was written to c:\shared\loadbytes. He showed another demo with the Sundown exploit delivery framework, which successfully exploited his browser. He then showed the file system containing the above-mentioned information, which made it easier to spot the actual exploit and shellcode. To prevent such exploits, he recommended using Windows 10 and browsers other than Internet Explorer. Also, he recommended using AdBlock to stop “malvertising”. That is in line with what I recommended several moons ago when analysing embedded JavaScripts vulnerable to DOM-based XSS. The method is also very similar to what I used back in the day when hacking on Chromium and V8, so I found the presentation quite good. Except for the delivery :-/ He often looked at his slides with his back to the audience and the audio wasn’t really good. I respect him for having shown multiple demos with virtual machine snapshots. I wouldn’t have done it, because demos usually fail! 😉

Inbar Raz talked about Tinder bots. He said he was surprised to find so many “matches” when he was in Sweden. He quickly noticed that he was being chatted up by bots, though, because he got sent the very same message from different profiles. These profiles also didn’t necessarily make sense. For example, the name and the age shown on the Tinder profile did not match the linked Instagram or Facebook profiles. The messages he received quickly included a link to a dodgy Web site. When he asked whois about the ownership, he found many more shady domains being used to drag people to porn sites. The technical details weren’t overly elaborate, but the talk was quite entertaining.

Raul Alvarez talked about reverse engineering polymorphic ransomware. I think he mentioned those Locky-type pieces of malware which lock your computer or files. Now you might want to know how that malware actually works. He mentioned OllyDbg, Immunity Debugger, and x64dbg as tools to use for reverse engineering your files. He said that malware typically includes an unpacker which you need to survive first before you’re able to see the actual malware. He mentioned on-demand polymorphic functions which are called during the unpacking stage. I guess that the unpacker decrypts or uncompresses to different bytes every time it’s run. The randomness comes from the RDTSC call, he said. The way I understand that mechanism, the unpacker only modifies a few bytes at a time and potentially modifies irrelevant bytes. Imagine code that jumps over a few bytes. These bytes could be anything, because they are never used, let alone executed. But I’m not sure whether this is indeed the gist of what he described in a rather complicated fashion. His recommendation for dealing with metamorphic code is to catch it right when it has finished decrypting the payload. I think everybody wishes to be able to do that indeed… He presented a general method for getting rid of malware once it has hit you: start in safe mode and remove suspicious registry entries for the “Run” key. That might not be interesting to Windows people, but now I, being very ignorant about Windows, have learned something 🙂

Chris went on to talk about securing a mobile cryptocoin wallet. If you ask me, he really meant how to deal with the limitations of the platform of his choice, the iPhone. He said that sometimes it is very hard to navigate the solution space, because businesses are not necessarily compatible with blockchains. He explained some currencies like Bitcoin, Stellar, Ripple, Zcash, and Ethereum, the latter being much more flexible in also encoding contracts like “in the event of X transfer Y amount of money to account Z”. Financial institutions want to keep their ledgers private, but blockchains were designed to run in public, he said. In addition, trust between financial institutions is low. Bitcoin is hard to use, he said, because the cryptography itself is hard to understand and to use. A balance has to be struck between usability and security. Secrets, he said, need to be kept secret. I guess he means that nobody, not even the user, may access the secret an application needs. I fundamentally object. I agree that secrets need to be kept as securely as possible. But secrets must not be known by anyone else but the users who are supposed to benefit from them. If some other entity controls my secret, I am not really in control. Anyway, he looked at existing Bitcoin wallet applications: Bither and Breadwallet. He thinks that the state of the art can be improved if you are willing to break the existing protocol. More precisely, he wants to leverage the “security hardware” present in current mobile devices, like biometric sensors or “enclaves” in modern CPUs, to perform the operations based on a secret unextractably stored in hardware. With such an enclave, he wants to generate a key there and use it to sign data without the key ever leaving the enclave. You need to change the protocol, he said, because Apple’s enclave uses secp256r1, but Bitcoin uses secp256k1.

My own talk went reasonably well, I think. I am not super happy, but happy enough. I’ve realised a few times now, though, that I left out things I wanted to mention or could have explained better. Then again, being perfect would be boring, so better leave some room for improvement 😉 I talked about how I think GNOME is a good vendor of security software. Its focus on user experience is its big advantage. The system should make informed decisions as much as possible and try to leave the user out as much as possible. Security should be an inherent feature, not something that you need to actively care about. I expected a more extreme reaction from the security-focused audience, but it seemed people mostly agreed. In my mind, “these security people” equate security with maximum control placed in the users’ hands, which has to manifest itself in being able to control each and every aspect of a solution. That view is not compatible with trying to leave the user out of the security equation. It may be that I am doing “these security people” an injustice. Or that they have changed. Or simply that the audience was not composed of the people I thought they were. I was hoping for developers creating security software, and I mentioned that GNOME libraries would perform great for their tasks. Let’s see whether anyone actually takes my word for it and complains to me 😉

Matt Suiche followed “the money of security companies, IPOs, and M&A”. In 2016, he said, the situation is not very different from the 90s: software still has bugs, bad configuration is still a problem, default passwords are still being used… The number of newly founded infosec companies reported by Crunchbase has risen a lot, he said. If you multiply that number by dollars, you can see 40 billion USD having been raised since 1998. What’s different nowadays, according to him, is that people in infosec are now more business-oriented rather than technical. We have more “cyber” now. He referred to buzzwords being spread. Also we have bug bounty programmes luring people into reporting vulnerabilities. JP Morgan, for example, is spending half a billion USD on cyber security, he said. Interestingly, he showed that the number of vulnerabilities, i.e. RCE CVEs, has increased, but the number of actual exploitations within the first 30 days after a patch has decreased. He concluded that Microsoft got more efficient at mitigating vulnerabilities. I think you can also conclude other things, like that people care less about exploitation or that detection of exploitation has gotten worse. He said that the cost of an exploit has increased. It wasn’t long ago that you could cook up an exploit within two weeks. Now you need several people for three months at least. It was a well-made talk, but a bit too fluffy for my taste.

Stefan and David from Kaspersky talked off the record (i.e. without recordings) about “real-world lessons about spies every security researcher should know”. They have been around the industry for more than a decade and they have started to notice patterns, they said. Patterns of weird things that happen which might not be easy to explain at first. It all begins with realising that we live in a world, whether we want it or not, in which we have a certain amount of control over the success of espionage attacks. Right now people reverse engineer malware, which means that other people’s operations are being disrupted. In fact, he claimed that they reverse engineer and identify the world’s most advanced persistent threats like Duqu, Flame, Hellsing, and many others, and that their company is getting better and better at identifying other people’s operations. For the first time in history, he said, we as geeks have an influence on espionage. That makes some entities not very happy, and they let certain people visit you. These people come in various types. The profile of the typical government employee is that they are very open and blunt about their desires. Mostly, they employ patriotism to persuade you. Another type is the impersonator, they said. That actor is not perfectly honest with you. He gave an example of meeting another person who introduced himself with the very same name as his. It got strange, he said, when he met that person on a different continent a few months later and was offered a highly paid training gig, supposedly only to build up a relationship. These people have enough budget to get close to you, they said. Another type of attacker is the “Banya Girl”. Geeks who have sat in front of the computer for most of their lives, they said, are easily attracted by girls, and girls find it easier to get into your room or your brain. His example took place one year ago: he was analysing a satellite-exploiting malware later known as Turla when he met this super beautiful girl in the hotel, who sat there every day when he went to the sauna. The day they released the results about Turla, they went for dinner together and she listened to a phone call he had with a reporter. The girl said something like “funny that you call it Turla. We call it Uroboros”. Then he got suspicious and asked her about who “they” are. She came up with stories he found weird and seemed convinced that she knew more than she was willing to reveal. In general, they said, asking for a selfie or a Facebook friend request can be an effective countermeasure against someone spying on you. You might very well ask what to do when you think you’re targeted. It’s probably best to do nothing, they said. It’s their game; you’d better not start playing it, even if you wake up in the middle of it. You can try to take care of your OpSec to protect against certain data being collected or exfiltrated. After all, people are being killed based on metadata. But you should also try not to get yourself into trouble. Sex and money are probably the oldest weapons people employ against you. They also encouraged people to define trust and boundaries for existing and upcoming relationships. If you become too paranoid, you’ve already lost the battle, they said. Keep going to conferences, keep meeting people, and don’t close yourself down.

Those were two busy days in Bucharest. I’m happy to have gone and I hope I will have another chance to visit the lovely city 🙂 By that time the links in this post will probably be broken 😉 I recommended using the “archive” URLs, i.e. https://def.camp/archives/2016/, already now, but nobody is listening to me… I also cannot link to the individual talks, because the schedule page is relatively click-intensive, i.e. not deep-linkable 🙁 …

First OpenPGP.conf 2016 in Cologne, Germany

Recently, I’ve attended the first ever OpenPGP conference in Cologne, Germany. It’s amazing how 25 years of OpenPGP have passed without any conference bringing the various OpenPGP people together. I attended rather spontaneously, but I’m happy to have gone. It’s been very insightful and I’m really glad to have met many interesting people.

Werner himself opened the conference with his talk on key discovery. He said that the problem of integrating GnuPG in MUAs is solved. I doubt that with a fair bit of confidence. Besides a few weird MUAs (mutt, gnus, alot, …) I only know KMail (which should maybe also go into the “weird” category 😉 ) which uses GnuPG through gpgme, which is how a MUA really should consume GnuPG functionality. GNOME’s Evolution, while technically working, supports GnuPG only badly; it should really be ported to gpgme. Anyway, Werner said that the problem of encryption has been solved, but now you need to obtain the key for the party you want to communicate with. How can you find the key of your target? He said that keyservers cannot map a mail address to a key. It was left a bit unclear what he meant, but he probably referred to the problem of someone uploading a key for your email address without your consent. Later, he mentioned the Web of Trust, which is meant for authenticating the other user’s key. But he disliked the fact that it’s “hard to explain”. He didn’t mention why, though. He did mention that the WoT exposes the global social graph, which is not a desirable feature. He also doubts that the Web of Trust scales, but he left the audience wondering why. To solve the mapping problem, you might imagine keyservers which verify your email address before accepting your key. These, he said, “harm the system”. The problem, he said, is that this system only works with one keyserver, which would harm the decentralised nature of the OpenPGP system and bring us back into the X.500 dark ages. While I agree with the conclusion, I don’t fully agree with the premise. I don’t think it’s clear that you cannot operate a verifying server network akin to how it’s currently done. For example, the pool of keyservers could only accept keys which were signed by one of the servers of the pool within the last, say, six months. Otherwise, the user has to enrol by following a challenge-response protocol. The devil may be in the details, but I don’t see how it’s strictly impossible.
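To sketch what I mean (this is my own toy model, not anything Werner proposed): the pool would only accept a key if one of its own servers has certified it recently, and fall back to challenge-response enrolment otherwise.

```python
# Toy model of the acceptance rule; all names and values are made up.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Certification:
    issuer_fpr: str     # fingerprint of the certifying key
    created: datetime

POOL_FPRS = {"AAAA1111", "BBBB2222"}   # made-up pool server fingerprints
MAX_AGE = timedelta(days=182)          # "within the last, say, six months"

def accept_key(certs: list, now: datetime) -> bool:
    if any(c.issuer_fpr in POOL_FPRS and now - c.created < MAX_AGE
           for c in certs):
        return True
    # Otherwise the user has to enrol via challenge-response, e.g. a
    # token mailed to the address in the key's user ID.
    return False
```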

However, in general, Werner likes the OpenSSH approach better: it opportunistically uses a key and detects when it changes. As with the Web of Trust, the key validation happens on your device only, rather than, say, having an external entity sell you the trust, as with X.509.

Back to the topic: how do you find the key of your partner now? DANE would be an option. It uses DNSSEC, which, he said, is impossible to implement a client for. It also requires collaboration from the mail provider. He said that Posteo and mailbox.org have this feature.
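For the curious, the look-up itself is simple; it’s the DNSSEC validation that is the hard part. A sketch of the RFC 7929 query using the dnspython package (my illustration, not anything shown in the talk):

```python
# OPENPGPKEY (RFC 7929) look-up sketch; DNSSEC validation is omitted,
# which is exactly the part Werner called hard to implement.
import hashlib
import dns.resolver  # third-party: dnspython

def openpgpkey_name(address: str) -> str:
    local, _, domain = address.partition("@")
    # SHA2-256 of the local part, truncated to 28 octets, in hex:
    digest = hashlib.sha256(local.encode()).digest()[:28].hex()
    return f"{digest}._openpgpkey.{domain}"

# e.g. for an address at Posteo, one of the providers he mentioned:
answer = dns.resolver.resolve(openpgpkey_name("someone@posteo.de"),
                              "OPENPGPKEY")
```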

Anyway, the user’s mail provider needs to provide the key, he said. Web Key Directory is a new proposal. It uses https for key look-up at a well-known name on the domain of the email provider. Think .well-known/openpgpkey/. It’s not as distributed as DNS, but as decentralised as email is, he said. It still requires collaboration of the email provider to set the Web service up. The proposal trusts the provider to return genuine keys instead of customised ones. But the system shall only be used for initial key discovery. Later, he mentioned handling revocation via the protocol™. For some reason, he went on to explain the protocol for submitting a key in much more detail, rather than expanding on the protocol for the actual key discovery: what happens when the key becomes invalid, when it expires, when it gets rolled over, etc.
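The look-up side of the proposal is pleasantly simple: hash the local part of the address with SHA-1, encode it with z-base-32, and fetch the result over https. Roughly like this (my reading of the draft, so treat it as a sketch):

```python
# Web Key Directory URL construction, as I understand the draft.
import hashlib

ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    # 20 SHA-1 bytes are 160 bits, i.e. exactly 32 five-bit groups.
    bits = int.from_bytes(data, "big")
    n = len(data) * 8
    return "".join(ZBASE32[(bits >> (n - 5 * (i + 1))) & 31]
                   for i in range(n // 5))

def wkd_url(address: str) -> str:
    local, _, domain = address.lower().partition("@")
    digest = hashlib.sha1(local.encode()).digest()
    return (f"https://{domain}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")

print(wkd_url("alice@example.org"))
```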
—-

Next up was Meskio, who talked about key management at LEAP, the LEAP Encryption Access Project. They try to provide a one-stop solution to encrypting all the things™. One of its features is to transparently encrypt emails. To achieve that, it runs a local MTA and an IMAPd which then communicate with the provider via a VPN. It thus builds on the idea of federation the same way current email protocols do, he said. For LEAP to provide the emails, they synchronise the mailbox across devices. Think of a big Dropbox share, but encrypted to all devices. They call it Soledad, and it is based on u1db.

They want to protect the user from the provider and the provider from the user. Their focus on ease of use manifests itself in puppet modules that make it easy to deploy the software. The client side is “bitmask”, a desktop application written in Qt which sets everything up. That also includes transparently getting the keys of other users. Currently, he said, we don’t have good ways of finding keys; everything assumes that there is user intervention. They want to change that and build something that encrypts emails even when the user does not do anything. That’s actually quite an admirable goal: security by default.

Regarding the key validation they intend to do, he mentioned that it’s much like TOFU, but with many, many exceptions, because there are many corner cases to handle in that scheme. Keys have different validation levels, and the key with the highest validation level is used. When a key roll-over happens, the new key must be signed by the old one, and the new key needs to have at least the validation level of the old one. Several other conditions need to hold as well. Quite an interesting approach, and I hope that they will get more exposure and users. It’s promising, because they don’t change “too” much. They still do SMTP, IMAP, and OpenPGP. Connecting to those services works differently, though, which may upset people.
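The roll-over rule, as I understood it, boils down to a small predicate. Sketched below with made-up types; the real scheme has all those corner cases on top:

```python
from dataclasses import dataclass, field

@dataclass
class Key:                       # made-up stand-in for a real key object
    fingerprint: str
    validation_level: int        # higher means better validated
    signed_by: set = field(default_factory=set)  # certifier fingerprints

def rollover_ok(old: Key, new: Key) -> bool:
    # The new key must be signed by the old one and be validated at
    # least as strongly as the key it replaces.
    return (old.fingerprint in new.signed_by
            and new.validation_level >= old.validation_level)
```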


More key management was presented by Comodo’s Phillip Hallam-Baker, who went on to talk about The Mathematical Mesh: Management of Keys. He also doesn’t want to change the user experience, except for simplifying everything. Every button to push is one too many, he said. And we must not write instructions. He noted that if every user had a key pair, we wouldn’t need passwords and every communication would be secured end-to-end. That is a strong requirement, of course. He wants to have a single trust model supporting every application, so that the user does not have to configure separate trust configurations for S/MIME, OpenPGP, SSH, etc. That again is a bit of a far-fetched dream, I think. Certainly worth working towards, but I don’t expect to experience such a thing in my lifetime. Except in a single closed system, of course. Currently, he said, fingerprints are used in two ways: either users enter them manually or they compare them to a string given by a secure source.

He presented “The Mesh”, which is a virtual store for configuration information. The idea is that you can use the Mesh to provision your devices with the data and keys they need to make encrypted communication happen. The Mesh is thus a bit of a synchronised storage which keeps encrypted data only. It would have been interesting to see him relate the Mesh to Soledad, which was presented earlier. Like Soledad, you need to sign up with a provider and connect your devices to it when using the Mesh. His scheme has a master signature key which only signs a to-be-created administration key. That, in turn, signs application and device keys. Each application can create as many keys as it needs. Each device has three device keys; unfortunately, he did not go into detail as to why these keys are needed. He also has an escrow method for getting the keys back when disaster strikes: the private keys are encrypted, secret-shared, and uploaded. Then you can use two out of three shares to get your key back. I wonder where those shares would be uploaded to, though, and how you would actually find your shares again.
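The escrow part sounds like the textbook two-out-of-three secret sharing construction. A minimal sketch (the standard Shamir scheme, not necessarily what the Mesh actually implements):

```python
# 2-of-3 secret sharing over a prime field: any two shares recover the
# secret, a single share reveals nothing.
import secrets

P = 2**521 - 1   # a prime comfortably larger than any symmetric key

def split(secret: int):
    slope = secrets.randbelow(P)                      # random line through the secret
    return [(x, (secret + slope * x) % P) for x in (1, 2, 3)]

def combine(s1, s2):
    (x1, y1), (x2, y2) = s1, s2
    slope = (y2 - y1) * pow(x2 - x1, -1, P) % P       # Lagrange, degree 1
    return (y1 - slope * x1) % P                      # evaluate at x = 0

shares = split(0xC0FFEE)
assert combine(shares[0], shares[2]) == 0xC0FFEE
```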

Then he started losing me when he mentioned that OpenPGP keyservers, if designed today, would use a “linked notary log (blockchain)”. He also brought (proxy) re-encryption into the mix, which I didn’t really understand. The purpose itself I think I understand: he wants the Mesh to cater for services that re-encrypt to the several keys that all of one entity’s devices have. But I didn’t really understand why it’s related to his Mesh at all. Altogether, the proposal is a bit opportunistic. But it’s great to have some visions…

Bernhard Reiter talked about getting more OpenPGP users by 2017. Although it was more about whitewashing the money he receives from the German administration… He is doing gpg4win, the Windows port of GnuPG. The question is, he said, how to get GnuPG to a large user base and to make them use it. Not surprisingly, he mentioned that we need to improve the user experience. Once the user gets in touch with cryptography and is exposed to making trust decisions, he said, the user is lost. I would argue otherwise, because people are heavily exposed to cryptography when using WhatsApp. Anyway, he then referred to an idea of his: “restricted documents”. He wants to build a military style of access control for documents. You can’t blame him; it’s probably what he makes his money off of.

He promised to present ideas for Android and the Web. Android applications, he said, run on devices that are ten times smaller and slower compared to “regular” machines. They did actually conduct a study to find this, and other things, out. Among the other things are key insights such as “the Android permission model allows for deploying end-to-end encryption”. Genius. They also found out that there is an OpenPGP implementation in Bouncy Castle which people use, and that it’s possible to wrap libgcrypt for Java. No shit!!1 They have also identified OpenKeychain and K9 Mail as applications using OpenPGP. Wow. As for the Web, their study found out that Webmail is a problem, but that an extension to a Web browser could make end-to-end encryption possible. Unbelievable. I am not necessarily disappointed, given that they are a software company and not a research institute. But I’m puzzled as to in what reality these results are interesting to the audience of OpenPGP.conf. In any case, his company conducted the study as part of the public tender they won, and their results may have been biased by his subcontractors, who are deeply involved in the respective projects (i.e. Mailvelope, OpenKeychain, …).

As for his idea regarding UX, his main plan is to implement the Web Key Directory (see Werner’s opening talk) discovery mechanism. While I think the approach is good, I still don’t think it is sufficient to make 2017 the year of OpenPGP. My concerns revolve around the UX in non-straightforward cases, like people revoking their keys. It’s also one thing to have a nice UX and another to actually have users going for it. Totally unrelated but potentially interesting: he said that the German Federal Office for Information Security (“BSI”) uses 500 workstations with GNU/Linux and a Plasma desktop, in addition to Windows.

—-

Holger Krekel then went on to talk about automatic end-to-end encrypted email. He is working on an EU-funded project called NEXTLEAP. He said that email is refusing to die in favour of Facebook and all the other new kids on the block. He stressed that email is the largest open social messaging system and that many others use it as an anchor of identity. However, many people use it for “SPAM and work” only, he said. He identified various usability problems with end-to-end encrypted email: distributing keys, preventing SPAM, managing secrets across devices, and handling device or key loss.

To tackle the key distribution problem, he mentioned CONIKS, Werner’s Webkey, Mailvelope, and DANE as projects to look into. With these, the respective providers add APIs to find public keys for a person. We know about Werner’s Webkey proposal. CONIKS, in short, is a key transparency approach which requires identity providers to publicly vouch for your key. Mailvelope automatically asks a verifying keyserver to provide the recipient’s key. DANE uses DNS with DNSSEC to distribute keys.

He proposed to have inline keys. That means attaching keys and cryptographic information to your emails. When you receive such a message, you parse the details and use them for encryption the next time you create a message. The features of such a scheme, he said, are that it is more private in the sense that there is no public keyserver which exposes your identity. Also, it’s simpler in the sense that you “only” need to get support from MUAs and you don’t need to care about extra infrastructure. He identified that we need to run a protocol over email if we ever want to support that scheme. I’m curious to see that, because I believe that it’s good if we support protocols via email, much like Outlook already does with its voting. SPAM prevention would follow naturally, he said: because the initial message is sent as plain text, you can detect SPAM. Only when you reply does the other party get your key, he said. I think it should be possible to get a person’s key out of band, but that doesn’t matter much, I guess. Anyway, I couldn’t really follow that SPAM argument, because it seems to imply that we can handle SPAM in the plain-text case. But if that were the case, then we wouldn’t have the SPAM problem today. For managing keys, he thinks of sharing your keys via IMAP, like in the Whiteout proposal.
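Schematically, I imagine the scheme like this (header name and storage are my invention; the actual proposal will certainly differ):

```python
# Sketch: ship your key with every mail, harvest keys from incoming mail.
from email import message_from_bytes
from email.message import EmailMessage

known_keys = {}   # address -> key material, filled as mail comes in

def attach_key(msg: EmailMessage, armored_key: str) -> None:
    # Invented header name; the point is only that the key travels inline.
    msg["X-Inline-Key"] = armored_key.replace("\n", " ")

def harvest(raw_mail: bytes) -> None:
    msg = message_from_bytes(raw_mail)
    if msg["X-Inline-Key"]:
        # Remember the sender's key and encrypt to it the next time
        # we compose a message to that address.
        known_keys[msg["From"]] = msg["X-Inline-Key"]
```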

—-

Stefan Marsiske then talked about his concerns regarding the future directions of GnuPG. He said he did some teaching regarding crypto and privacy-preserving tools and that he couldn’t really recommend GnuPG to anyone, because it could not be used by the people he was teaching. Matt Green and Schneier have both said that PGP is not able to secure email, or that email is “unsecurable”. This is in line with the list that secushare produced. The saltpack people also list some issues they see with OpenPGP. He basically evaluated gpg against the list of criteria established in the SoK paper on instant messaging, which is well worth a read.

Lutz Donnerhacke then gave a brief overview of the history of OpenPGP. He is one of the authors of the initial OpenPGP standard. In 1992, he said, he read about PGP on Usenet. He then cared about getting PGP 2.6.3(i)n out of the door to support keys larger than 1024 bits and to fix other bugs that annoyed him. ViaCrypt then sold PGP4, which was based on PGP2. PGP5 was eventually exported in books and scanned back in during HIP97 and CCCamp99, he said. Funnily enough, a bug lurked in there for about five years, he said: their get_random always returned 1…

Funnily enough, he uses a 20-year-old V3 key, so at least his key ID is trivially forgeable, but the fingerprint should also be easy to forge. He acknowledges it but doesn’t really care. Mainly because he “is a person from the last century”. I think that this mindset is present in many people’s heads…

The next day, Intevation’s Andre Heinecke talked about the “automated use of gpg through gpgme“. GPGME is the abbreviation of “GnuPG made easy” and is meant to be a higher-level abstraction for gpg. “gpg is a tool, not a library”, he said. For a library you can apply versioning, while the tool may change its output liberally, he said. He mentioned gpg’s machine interface with --with-colons and that changes to that format will break things. GPGME abstracts that away for you and tries to make the tool a library. There is a defined interface and “people should use it”. A selling point is that it works with all gpg versions.
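To see why this is brittle, consider what consuming the machine interface looks like without gpgme. A sketch; the field numbers come from gpg’s DETAILS document:

```python
# Parsing gpg's --with-colons output directly: short, but it breaks the
# moment the format shifts, which is exactly Andre's point.
import subprocess

out = subprocess.run(["gpg", "--with-colons", "--list-keys"],
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    fields = line.split(":")
    if fields[0] == "pub":
        print("key id: ", fields[4])    # field 5: key ID
    elif fields[0] == "uid":
        print("user id:", fields[9])    # field 10: user ID
```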

When I played around with gpgme, I found it convoluted and lacking basic operations. I think it’s convoluted because it is highly stateful and you need to be careful to call (many) functions in the correct order if you don’t want it to complain. It’s lacking because, for example, signing other people’s keys is a weird thing to do through its interface, which was not designed with that in mind. He also acknowledged that it is a fairly low-level API, in the sense that every option has to be set distinctly and that editing keys is especially hard. In gpgme, he said, operations are done based on contexts that you have to create. The context can be created for various gpg protocols. Surprisingly, that’s not only OpenPGP, but also CMS, GpgConf, and others.
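For comparison, here is what the context-based approach looks like through GPGME’s official Python bindings (the gpg module that ships with recent gpgme); a minimal sketch assuming a key matching the pattern exists:

```python
# The library way: a context carries the protocol and all state.
import gpg

ctx = gpg.Context(armor=True)        # defaults to the OpenPGP protocol
for key in ctx.keylist("alice@example.org"):
    print(key.fpr, key.uids[0].uid)

# Encrypting is one call; the triple holds ciphertext and result objects.
recipients = list(ctx.keylist("alice@example.org"))
ciphertext, result, sign_result = ctx.encrypt(
    b"hello", recipients=recipients, sign=False, always_trust=True)
```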

I don’t think GNOME’s software is ported to gpgme. At least Evolution and Seahorse call gpg directly rather than using gpgme. We should change that! Although gpgme is a bit of a weird thing. Normally™ you’d have a library and build a tool with it. With gpgme, you have a tool (gpg) and build a library around it. It feels wrong. I claim that if we had an OpenPGP library that reads and composes packets, we would be better off.

Vincent and Dominik came to talk about UX decisions in OpenKeychain, the Android OpenPGP implementation. It does key management, encryption and decryption of files, and other OpenPGP operations such as signing keys. The speakers said that they use Bouncy Castle for the crypto and OpenPGP serialisation. They are also working on K9, which will support PGP/MIME soon. They have a grant from the Open Tech Fund which finances that work. In general, they focused on the UX to make it easy for the consumer. They identified “workflows” users possibly want to carry out with their app, among them the discovery and exchange of keys, as well as confirming them (signing). They showed nice-looking screenshots of how they think they made the UI better. They probably did, but I found the foundations a bit lacking. Their design process seems to be a rather ad-hoc affair, and they themselves are their primary test subjects. While it’s good work, I don’t think it’s easily applicable to other projects.

An interesting thing happened (again): they deviate from the defaults that GnuPG uses. Unfortunately, the discussions revolving around that fact were not very elaborate. I find it hard to imagine that both tools should have, say, different default key lengths. Both tools try to prevent mass surveillance, so you would think that they use the same defaults to achieve their goal. It would have been interesting to find out which default value serves the desired purpose better.

Next up was Kristian Fiskerstrand, who gave an update on the SKS keyserver network. SKS is the software that we trust our public keys with. SKS is written in OCaml, which he likes, but about which, he said, people have different opinions. SKS is single-threaded, which is a problem, he said. So you need a reverse proxy to handle more than one client.

He was also talking about the Evil32 keys which caused some stir recently. In essence, the existing OpenPGP keys were duplicated, but with matching short key IDs. That means that if you look up a key by its short key ID, you’re screwed, because you get the wrong key. If you are using the name or email address instead, then you also have a problem. People were upset about getting the wrong key when they had asked the keyserver to deliver one.
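The underlying problem is that the short key ID is just the tail of the fingerprint, so a 32-bit collision is cheap to produce:

```python
# For a V4 key, the key ID is the low 64 bits of the SHA-1 fingerprint;
# the short ID is the low 32 bits. The fingerprint below is made up.
fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567"

long_id = fingerprint[-16:]    # 64 bits: not what Evil32 attacked
short_id = fingerprint[-8:]    # 32 bits: ~2**32 key generations to collide
print(long_id, short_id)
```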

He said that it’s absolutely no problem, because users should verify the keys anyway. I can only mildly agree. It’s true that users should do that. But I think we would live in a nicer world if we could still maintain a significantly high security level even when such rigorous verification does not happen. If he really maintains that point of view, then I wonder why he allows keys to be retrieved by name, email address, or anything other than the fingerprint in the first place.

—-

Volker Birk from pretty Easy privacy talked about their project, which aims at making encrypted email possible for the masses. They make extensive use of gpgme and GnuNet, he said. Their focus is “privacy by default”. Not security, he said. If security and privacy contradict each other in some cases, they go for privacy instead of security. For example, the Web of Trust is a good idea for security, but not for privacy, because it reveals the social graph. I really like that clear communication and the admission that security and privacy do not necessarily go well together. I also think that keyservers should be considered harmful, mainly because they learn who is attempting to communicate with whom. He said that everything should be decentralised and peer-to-peer. Likewise, a provider should not be responsible for binding an email address to a key. The time was limited, unfortunately, so the actual details of how it’s supposed to work were not discussed. They wouldn’t be the first ones to attempt a secure, or rather privacy-preserving, solution. In the limited time, however, he showed how to use their Python adapter to have it automatically locate a public key of a recipient and encrypt to it. They have bindings for various other languages, too.
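From memory, the demo boiled down to the following workflow. The function names here are entirely my invention, not the adapter’s actual API; I only want to show how little the user has to do:

```python
# Schematic only: invented names, not pEp's real API; just the workflow.
def locate_key(address: str) -> str:
    # In the demo this happened automatically, with no user interaction.
    return f"<public key of {address}>"

def encrypt_to(message: str, key: str) -> str:
    return f"<{message!r} encrypted to {key}>"

key = locate_key("bob@example.org")
print(encrypt_to("hi Bob", key))   # the user never had to see a key
```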

Interestingly, a keysigning “party” was scheduled for the first evening, but it didn’t take place. You would expect that if anybody cared about that, it would be the OpenPGP hardcore hackers, all of whom were present. But not a single person (as in nobody, zero, “0”, null) was interested. You can’t blame them. It was probably a cool thing when you were younger and GnuPG was all about preventing the most powerful targeted attacks. I think people have realised that you can’t have people mumble base16-encoded binary strings AND mass adoption. You need to at least bring cake to the party… Anyway, as you might be aware, we’re working towards a more pleasant keysigning experience 🙂 So stay tuned for updates.

Talking at mrmcds 2016 in Darmstadt, Germany

A couple of weeks ago, I attended the mrmcds in Darmstadt, Germany, just like I did in the years before. And like in the years before, the conference was nicely themed. This year, the theme was all things medical. So speakers were given doctors’ coats, conference staff were running around like surgeons, alcohol could be had intravenously…

mrmcd 2016 logo

The talk on medical device nightmares (video) showed some medical devices which display and record vital signs such as the pulse or blood pressure, but also fancier devices such as an MRI. Of course, he did not only show the devices themselves, but rather how they tested them on their external interfaces, i.e. the networking port. Speaking of the MRI: it exposed a few hundred open ports. The services listening on these ports crashed when nmap scanned the host… but at least they apparently recovered automatically. He also presented an anaesthetic monitoring device, which is supposed to show how alive a patient still is. The device has a telnet interface which you can log on to with default credentials. The telnet interface has, not surprisingly, a command injection vulnerability, which allowed them to take ownership of the device. The next step was then to hijack the framebuffer and to render whatever they wanted on it. For example, nice-looking vital data, as if the patient were still alive. Or, probably the more obvious thing to do: show Rick Astley.

It’s been an entertaining talk which makes you realise how complicated the whole area of pharmaceutical or medical appliances is. They need to go through a long and troublesome certification process, not unlike other businesses (say, car manufacturers). Patching the underlying Windows is simply not possible without losing the certification. You may well ask whether a certificate or an up-to-date OS is better for your health. And while I make it look a bit ridiculous now, I do appreciate that it’s a tough subject.

My own talk on GNOME (video) was well attended. I explained why I think GNOME is a good candidate for shipping security software to the masses. I said that GNOME cares about its users and goes the extra mile to support as many users as possible. That includes making certain decisions to provide a secure-by-default system. I gave two examples of how I think GNOME pushes the envelope when it comes to making security usable. One was the problem of OpenPGP keysigning. I mentioned that it’s a very geeky thing which mortals do not understand. Neither do many security people, to be honest. And you can’t even blame them, because it’s a messy thing to do. Doing it properly™ involves a metric ton of OpSec to protect the integrity of the key to be signed. I think that we can make the process much more usable than it is right now while maintaining its security. This year, I had Andrei working with me to make this happen.

The other example I gave was the problem of USB security. Do you know when you use your USB? And do you know when you don’t? And do you know when other people use your USB? I talked about the possibility of locking down your USB ports while you’re not in front of your computer. The argument goes that nobody can possibly insert anything if you’re away. Of course, there are certain cases to keep in mind, like not forbidding a keyboard to be plugged in, in case the old one breaks. But there is little reason to allow your USB camera to work unless you are actively using your machine. I presented how this could look by showing off the work George did last summer.
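For the mechanism-minded: the kernel already exposes the necessary knob via sysfs; the hard part is the policy around it, which is what George’s work addressed. A sketch (requires root, and is emphatically not the actual GNOME implementation):

```python
# Toggle whether newly plugged USB devices are authorised, per controller.
# /sys/bus/usb/devices/usb*/authorized_default is a real kernel interface.
import glob

def set_usb_default(authorized: bool) -> None:
    for knob in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
        with open(knob, "w") as f:
            f.write("1" if authorized else "0")

# e.g.: lock down on screensaver activation, re-enable on unlock.
# set_usb_default(False)   # screen locked
# set_usb_default(True)    # user is back
```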

My friend Jens talked about reverse engineering of applications. He started by explaining why you would do that in the first place. Analysing freshly received malware or finding weaknesses (think backdoors or bypasses) in your software are motivations, he said. But you might as well tinker with old software which has no developer anymore, or try to find the APIs of other software for interoperability purposes, he said. Let me note that with Free Software, you wouldn’t have to reverse engineer the binary 😉 But he also mentioned that industrial espionage is a reason for people to reverse engineer a compiled programme. The tool he uses the most is the “file” tool. He went on to explain the various executable formats for various machine flavours (think: x86, ELF, PE, JVM). To go practical, he showed a .NET application which only writes “hello, world!”, because malware, he said, is written in .NET nowadays. In order to decompile the binary, he recommended ILSpy as a one-stop suite for reverse engineering .NET applications. Next up were Android applications. He showed how to pull the APK off the device and how to decompose it into JAR classes. Then he recommended CFR for decompiling those into Java code. His clients, mostly banks, he said, try to hide secret keys in their apps, so the first thing he does when getting a new job is to grep for “secret”. In 80% of the cases, he said, that is successful. To make it harder for someone to reverse engineer the binary, obfuscators exist for Java, but also for C. He also mentioned some anti-debugging techniques, such as checking for the presence of certain DLLs or throwing certain interrupts to determine whether the application runs under a debugger. It was a very practical talk which certainly made it clear that the presented things are relevant today. Due to the limited time and the many examples, he could only scratch the surface, though.
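The decomposition step he showed is less magic than it sounds, because an APK is just a ZIP archive. A sketch, with dex2jar and CFR doing the actual decompilation afterwards:

```python
# Listing the contents of an APK: the classes.dex inside is what you
# feed to dex2jar, and the resulting JAR goes into CFR.
import zipfile

with zipfile.ZipFile("app.apk") as apk:
    for name in apk.namelist():
        print(name)             # classes.dex, AndroidManifest.xml, ...
    apk.extract("classes.dex")  # hand this file to dex2jar
```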

It’s been a nice conference with 400-ish attendees. I really like how they care about the details, also when it comes to making the speakers feel good. It’s a pity that it’s only one weekend. I’m looking forward to attending next year’s edition 🙂

The Hackocratic Oath

I swear by Eris, Goddess of Chaos, Discordia, and all other Gods above, making them my witnesses, that, according to my ability and judgement, I will keep this Oath and this contract:
To hold those who taught me this art equally dear to me as my friends, to share the Net with them, and to fulfill all their needs for data when required; to look upon their offspring as equals to my own friends, and to teach them this art, if they shall wish to learn it, without fee or contract.
By the set rules, lectures, cat content, and every other mode of instruction, I will impart a knowledge of the art to my own offspring, and those of my friends, and to students bound by this contract and having sworn this Oath to the Hacker Ethic, but to no others.
I will use those data regimens which will benefit my users according to my greatest ability and judgement, and I will do no harm or injustice to them. I will not give lethal malware to anyone if I am asked, nor will I advise such a plan; and similarly I will not give a computer a rootkit.
In purity and according to divine law will I carry out my life and my art.
I will not develop crypto algorithms, but I will leave this to those who are trained in this craft.
Into whatever networks I go, I will enter them for the benefit of the users, avoiding any voluntary act of impropriety or corruption, including hurtful comments towards users or hackers, whether they are online or offline.
Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or by way of leaks, which ought to be private,
I will keep faithfully secret.
So long as I maintain this Oath faithfully and without corruption, may it be granted to me to partake of life fully and the practice of my art, gaining the respect of all for all time. However, should I transgress this Oath and violate it, may the opposite be my fate.
www.mrmcd.net

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.