Talking at MRMCD 2017 in Darmstadt, Germany

I attended this year’s MRMCD in Darmstadt, Germany. I have been a few times in the past, and I think this year’s edition was not as successful as the previous ones. The venue changed this year, which probably contributed to more chaos than usual and hence to things not running as smoothly as they used to. I assume it will be better next year, once people know how to operate the venue. Although all tickets were sold during the presale phase, it felt smaller than in the last years. In fairness, though, the venue was also bigger this year. The schedule had some interesting talks, but I didn’t really get around to attending many, because I was busy preparing my own shows (yeah, should’ve done that before…).

I had two talks at this conference. The first was on playing the children’s game “battleship” securely (video). That means with cryptography. Lennart and I explained how concepts such as commitment schemes, zero-knowledge proofs of knowledge, oblivious transfer, secure multi-party computation, and Yao’s protocol can be used to play that game without a trusted third party. The problem, in short, is to a) make sure that the other party’s ships are placed correctly and b) make sure the other party answers correctly. Of course, if you get hold of the placements of the ships, these problems are trivial. But your opponent doesn’t want you to know the placements. A trusted third party would solve that problem trivially. But let’s assume we don’t have such a party. Also, we want to decentralise things, so let’s come up with a solution that involves two players only.

The second problem can be solved with a commitment. A commitment is a statement about something you’ve chosen that doesn’t reveal the choice itself, nor allows for changing one’s mind later. Think of a letter in a closed envelope that you hand over. The receiver doesn’t know what’s written in the letter and the sender cannot change the content anymore. Once the receiver is curious, they can open the envelope. This analogy isn’t the best and I’m sure there are better real-world concepts to compare commitment schemes to. Anyway, for battleship, you can make the other party commit to the placement of the ships. Then, when the battle starts, you have the other party open the commitment for the field that you’re shooting at. You can easily check whether the commitment verifies correctly in order to determine whether you hit a ship or water.
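To make the idea concrete, here is a minimal hash-based commitment in Python. It’s just an illustration of the concept, not the scheme from our talk; a random nonce provides the hiding, the hash provides the binding.

```python
import hashlib, os

def commit(value: bytes):
    """Commit to a value. Returns (commitment, opening nonce)."""
    nonce = os.urandom(32)                      # hides the value
    c = hashlib.sha256(nonce + value).digest()  # binds the sender to it
    return c, nonce

def verify(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Open the commitment: recompute the hash and compare."""
    return hashlib.sha256(nonce + value).digest() == commitment

# The opponent commits to every field before the game starts...
c, nonce = commit(b"ship")
# ...and when we shoot at that field, they reveal (value, nonce):
assert verify(c, b"ship", nonce)
```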

The other problem is the correct placement of the ships, e.g. no ships shall be adjacent, exactly ten ships, exactly one five-field ship, etc. You could easily wait until the end of the game and then check whether everything was placed correctly. But that wouldn’t be (cryptographic) fun. Let’s assume one round of shooting is expensive and you want to make sure to only engage if the other party indeed follows the rules. Now it’s getting a bit crazy, because we need to perform a calculation without learning anything other than “the ships are correctly placed”. That’s a classic zero-knowledge problem. And I think it’s best explained with the magic door in a cave.

Even worse, we need to somehow make sure that we cannot change our placement afterwards. There is a brain-melting concept called secure multi-party computation which allows you to do exactly that. You can execute a function without knowing what you’re doing. Crazy. I won’t be able to explain how it works in a single blog post and I also don’t intend to, because others are much better at doing that than I could ever be. The gist of the protocol is that you model your functionality as a Boolean circuit and assign random values to represent “0” or “1” for each wire. You then build the truth table for each gate and replace the values of the table (zeros and ones) with an encryption under both the random value for the first input wire and the random value for the second input wire. The idea now is that the evaluator can only decrypt one value in the truth table given the input keys. There are many more details to care about, but eventually you have a series of encrypted, or garbled, gates and you need the relevant keys in order to evaluate them. You can’t tell from the keys you get whether they represent a “0” or a “1”. Hence you can evaluate without knowing the other party’s input.
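For a feel of what that looks like, here is a toy garbled AND gate in Python. I’m using the `cryptography` package’s Fernet as the authenticated encryption so the evaluator notices which rows fail to decrypt; real constructions are more careful and far more efficient.

```python
import random
from cryptography.fernet import Fernet, InvalidToken

# One random label per wire and truth value: labels[wire][bit]
labels = {w: {0: Fernet.generate_key(), 1: Fernet.generate_key()}
          for w in ("a", "b", "out")}

# Garble the AND gate: encrypt each output label under both input labels.
table = []
for x in (0, 1):
    for y in (0, 1):
        inner = Fernet(labels["b"][y]).encrypt(labels["out"][x & y])
        table.append(Fernet(labels["a"][x]).encrypt(inner))
random.shuffle(table)  # hide which row belongs to which inputs

def evaluate(table, key_a, key_b):
    """Try every row; authenticated encryption makes wrong rows fail."""
    for row in table:
        try:
            return Fernet(key_b).decrypt(Fernet(key_a).decrypt(row))
        except InvalidToken:
            continue

# The evaluator holds one label per input wire without knowing the bits:
out = evaluate(table, labels["a"][1], labels["b"][0])
print(out == labels["out"][0])  # 1 AND 0 = 0, yet the label reveals nothing
```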

My other talk was about a probable successor of Return Oriented Programming: Data Oriented Programming (video). In Return Oriented Programming (ROP) and its variants like JOP, the aim is to divert the original control flow in order to make the program execute the attacker’s functionality. This, however, can probably be thwarted by Control Flow Integrity (CFI). In its simplest form, CFI checks on every branch whether it is legitimate. Think of a database listing which addresses are allowed to branch to which other addresses. Of course, real-world implementations are more clever. Anyway, let’s assume that we’ll have a hard time exploiting our target with ROP, because we cannot change the CFG of the program. If our attack doesn’t change the CFG, though, we should be safe from anything that detects its modification. That’s the central idea of DOP.
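That simplest form of CFI is easy to picture in code. A toy sketch (addresses and table made up, obviously):

```python
# Toy control-flow integrity check: a table of permitted branch targets
# per branch site. Real implementations are cleverer, but the principle
# is the same: refuse any indirect branch that is not in the table.
ALLOWED_TARGETS = {
    0x401000: {0x401200, 0x401450},  # call site -> legitimate callees
    0x401300: {0x401500},
}

def check_branch(site: int, target: int) -> None:
    if target not in ALLOWED_TARGETS.get(site, set()):
        raise RuntimeError(f"CFI violation: {site:#x} -> {target:#x}")

check_branch(0x401000, 0x401200)    # fine: stays within the CFG
# check_branch(0x401000, 0x402000)  # would raise: a ROP-style diversion
```

A DOP attack never trips such a check, because it only corrupts the data the program computes on, not the branch targets.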

Although I’m not super excited about this year’s edition, I’m looking forward to next year’s event. I hope it’s going to be a bit more organised; myself included ;-)

GUADEC 2017

It’s summer and it’s GUADEC time! This year’s GUADEC took place in Manchester, England. It was surprisingly not bad for that location ;-) The organisers deserve a big round of applause for having pulled the event off. After having organised last year’s GUADEC, I have first-hand experience in running such an event. So a big “thank you” to the team from England :)

The venue was a big and modern university and the accommodation was conveniently located a few steps from the lecture hall. That’s especially nice given the typical English weather ;-) We got to live in the student dorms and I’m a bit jealous of today’s students for being able to live in such a comfortable place.

I attended a few talks from the list, among them Christian Hergert reporting on The State of Builder, which was a bit scattered and not very well structured for beginners like me. I guess it was meant to be more of a showing off of new features than a structured walk through the design and thoughts behind the project. I knew the project existed, but I never really got around to working with it, so I was a bit put off. But I took it as a good opportunity to install the latest Flatpaked application :)

I liked Simon’s talk on enabling users to modify the software they are running. Essentially, you can click a button in the application and it’ll fire up an IDE where you can change code and hit “play” to run the new version. Amazing. Software Freedom at its best. He demoed a prototype and I think it’s got potential. I really like the idea of the user being able to tinker around easily. Especially given that the status quo is jhbuild. That’s a nice tool, but it proves to be hard for people to make good use of it. I hope we will see something like this being used in the future.

Federico told us about the efforts to make use of the Rust language in GNOME. The gist is, essentially, that you’d better start with the leaf functions of your app or library rather than a central function of your architecture. I then tried to find leaf functions with the help of the compiler, but I failed. I tried Egypt, but I wasn’t patient enough to make proper use of the generated dot file in order to identify leaf functions. Maybe I should give cflow a try next time.
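In case anyone wants to try: once you have the callgraph as a dot file, finding leaf functions is only a few lines of Python. A quick sketch, assuming Egypt’s simple `"caller" -> "callee"` edge syntax:

```python
import re, sys

# Print leaf functions (nodes with no outgoing calls) of a callgraph
# .dot file, e.g. one produced by Egypt from GCC's RTL dumps.
edge = re.compile(r'"([^"]+)"\s*->\s*"([^"]+)"')

callers, everyone = set(), set()
for line in open(sys.argv[1]):
    m = edge.search(line)
    if m:
        caller, callee = m.groups()
        callers.add(caller)
        everyone.update((caller, callee))

for func in sorted(everyone - callers):  # mentioned, but never calls out
    print(func)
```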

I used the BoF days to dip a little bit into Rust. It’s always helpful to have a bunch of smart hackers around. That’s what I like about these kinds of events. You get to know and talk to very smart people. I also tried to catch up with my very talented student and discuss the changes we’d like to see.

Thanks to the GNOME Foundation for sponsoring my travel and to the local team for having organised a successful event!

GUADEC 2017 group photo

Attended FOSDEM 2017

Unsurprisingly, the biggest European Free Software event happened in Brussels, Belgium again. I’m talking about FOSDEM, of course. It’s a fixed entry in many people’s calendars and always a good excuse to visit Brussels :-)

I’m a bit late to report on what talks I managed to see as others have already covered some of the talks, but I still want to add some observations.

Richard Brown from SUSE talked about dinosaurs and resurrecting them (video). It was more about containerised apps than actual dinosaurs, though. The general theme was about repeating mistakes that we should have learned from in the past. He started by mentioning that the Windows DLL Hell was a nightmare. You needed to test your application with each and every version combination of every possible library. The DLLs did not necessarily have ABI compatibility, so it was very cumbersome to test. Windows 2000 brought Side-by-Side assembly, which is some form of DLL containerisation, he said. It uses a separate memory space for each app and its DLLs. Programs can ship “private” DLLs in their application directory so you don’t necessarily break other apps with your DLL carrying the same name. This approach, however, still has issues: security-wise, each app needs to update its libraries itself rather than having them updated centrally. So each app needed to build and ship its own updater, which is not trivial to do. Legally it’s also interesting, he said, because bundling these DLLs may impose restrictions. Last but not least, you potentially have the same DLL on disk multiple times, because each app may ship it.

The contemporary software distribution model has its problems, too, he said. Compatibility with various distros is an issue, because each distro is slightly different. Each distribution also has their own pace of change which may be incompatible with the application in question, e.g. the distros may decide to ship an older version because they have tested it more. Different distributions have different libraries and versions thereof. Also, each distribution has different toolsets to package applications up for their environment. Application developers, however, don’t want to care about these details.

Containerised applications solve these issues. Maybe. He mentioned Flatpak, Snappy, and AppImage. The latter is the oldest technology, dating all the way back to 2003. The solutions have in common that they bundle the app and run it in some kind of container or sandbox. By his criteria, the compatibility issue is solved, because the libraries are in the bundles. Portability is solved, because all dependencies are shipped in the bundle. And the pace of change is up to the app developer.

The containerisations, though, make assumptions about a common standard base provided by the distributions. According to him, such a common standard base does not exist in any practical sense. With containerised apps, he said, we might be repeating history. He explained that we might get a security nightmare, because each app needs to update its dependencies itself. The question also arises whether all the libraries can actually be bundled and shipped. App developers are picking up the responsibilities that distros used to have. You still have to test everything on each distro just to be sure that your base dependencies still work correctly, he said. He sees distributions as part of the solution to these problems. He thinks that a rolling release might solve the issues we’re trying to solve with containerised apps. A rolling release can ship new releases of applications very quickly. The distribution still uses its tools for the common problems like maintenance, security, and legal stuff.

In a lightning talk, David talked about “practical TPM 2.0 usage”. He showed how to generate a signing key, sign a document with it, and verify the signature. He said that Microsoft mandated TPM 2.0 for Windows 10 Mobile and that it is a cryptographic processor rather than an accelerator. TPM 2.0 is different from TPM 1.2 in various ways, he said. For example, the 2.0 can do ECC (P-256 and BN-256) and SHA-256. But it’s also “algorithm agile”, which means that you can add algorithms without having to change the specification. He sees three main usages: platform integrity like secure boot and trusted boot, disk encryption where the TPM stores and controls access to the key, and Digital Restriction Management by verifying code signatures. In order to use the TPM you have two options, he said: IBM and Intel have both developed tools. IBM’s stack doesn’t have a “resource manager” as described by the specification, i.e. something like a multiplexer. Intel does have such a resource manager and they are working on putting it into Linux. However, Intel has fewer tools, he said, although it wasn’t entirely clear to me what he was referring to. He mentioned that his employer, Facebook, uses TPMs for platform attestation.

Hanno talked about security on the Linux desktop. He referred to the issues Chris Evans exposed a few weeks ago. He wanted to make the audience angry, he said. But not at him, I suppose, because he considers himself to be the messenger only. The basic problem is an unfortunate agglomeration of bugs or behaviours. It starts with the browser automatically downloading files into the user’s downloads folder, i.e. without asking the user. Then there is Tracker, which indexes files that you add to your home directory. Such as the downloads folder. And then there are buggy (read: vulnerable) implementations of file parsers.

He also referred to Carlos’ comment that these bugs are simply bugs, i.e. that no problem had been found beyond ordinary bugs. Hanno’s point, as far as I could make it out, was that a project of the size of Tracker, especially with that number of dependencies that you don’t control, cannot make sure that there won’t be yet another bug that can be exploited. That’s quite fatalistic, but probably not too far from reality. It’s not just a Tracker issue, though, he said. KDE has Baloo and everybody wants to have thumbnails of the files in your folders. He reiterated that automatic downloads AND automatic indexing create a huge attack surface. And the indexers support a vast variety of file formats by using many libraries of varying quality. While Tracker quickly adopted sandboxing, he said, KDE hasn’t.

He mentioned other exploit mitigation techniques such as ASLR or CFI. With ASLR, he said, the idea is to load code and data at random addresses in memory. This mitigates exploits, because they cannot reliably target valid code in memory. At least that’s the idea. You need to compile the code with -fpic and -pie, he said. Linux distributions have been slow in adopting ASLR, though. Ubuntu introduced it with 16.10, Fedora with 23, and Debian is WIP. OpenSuSE has it for a few packages only. It should be the default, he said. Windows, on the other hand, has had it since Vista. They also explore and experiment with more modern mitigations like CFI. Yet another approach is to avoid the C language, because “[it] is full of memory corruptions”. Rust comes to mind as an alternative. GStreamer already supports plugins in Rust, he said. He concluded that fixing all these bugs, like Carlos seemed to be wanting, is very hard. Not least because GStreamer is very prone to memory corruption due to the amount of complicated formats it parses. He mentioned fuzzing as a viable strategy to shake out bugs, and he found many bugs within a few days. He suggested that we should probably do more of that ourselves. I’m working on it. More to be posted separately.

The next talk was about testing TLS implementations. For the last year or so I have been investigating TLS issues myself and I was wishing for a TLS testing framework. Now I learned about an implementation. Hubert Kario introduced his “tlsfuzzer”, which is a bit of a misnomer, because it doesn’t actually perform any fuzzing. He said that TLS is complex and that it has 326 official ciphersuites, 4 PKI cryptosystems, 16 signature-hash pairs, and many more countable things that make the test matrix grow fast. There is a lot of state to be maintained, he said. He presented his tool, which takes care of the TLS specifics but allows you to define your own payloads and modifications to them. For example, with a few lines of code you can define a client to open a TLS connection and to use a GCM ciphersuite for collecting the nonces. He claims to have found more than 20 issues in NSS, GnuTLS, and OpenSSL. I’m curious to play around with it and maybe hook it up with Scapy’s fuzzing facilities.
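For a flavour, here is what such a test script looks like, written from memory in the style of tlsfuzzer’s example scripts (module and class names may differ between versions): a plain handshake that negotiates an AES-GCM ciphersuite.

```python
from tlsfuzzer.runner import Runner
from tlsfuzzer.messages import (Connect, ClientHelloGenerator,
                                ClientKeyExchangeGenerator,
                                ChangeCipherSpecGenerator, FinishedGenerator)
from tlsfuzzer.expect import (ExpectServerHello, ExpectCertificate,
                              ExpectServerHelloDone, ExpectChangeCipherSpec,
                              ExpectFinished)
from tlslite.constants import CipherSuite

conversation = Connect("localhost", 4433)
node = conversation
ciphers = [CipherSuite.TLS_RSA_WITH_AES_128_GCM_SHA256]
# Each node describes either a message we send or one we expect.
node = node.add_child(ClientHelloGenerator(ciphers))
node = node.add_child(ExpectServerHello())
node = node.add_child(ExpectCertificate())
node = node.add_child(ExpectServerHelloDone())
node = node.add_child(ClientKeyExchangeGenerator())
node = node.add_child(ChangeCipherSpecGenerator())
node = node.add_child(FinishedGenerator())
node = node.add_child(ExpectChangeCipherSpec())
node = node.add_child(ExpectFinished())

Runner(conversation).run()  # raises if the server deviates
```

From here you could record the GCM nonces the server sends and check them for reuse.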

Another TLS-related talk was given by Fridolin, who showed us a TLS Linux kernel module implementation. The advantages are manifold, he said. Obviously, establishing the connection should be cheaper in terms of computation, because the context does not need to be switched so often. Others are already using a kernel implementation of TLS, he said. He mentioned that Solaris has a kssl socket and that Netflix uses a modified sendfile() for TLS on BSD. His implementation has been evaluated by Facebook, he said. The implementation leaves the handshake to user space and cares about the symmetric encryption only.

Compared to other FOSDEMs, I was able to actually see a few talks, although I was impressed by the number of people I randomly bumped into and who kept me from attending more talks ;-) The size of FOSDEM is the cause of, and solution to, its problems. A good thing about it was that I could bribe somebody to cook up a Debian package for GNOME Keysign so that, hopefully, 200 people don’t have to queue up and do weird things :o)

Talking at Def.camp 2016 in Bucharest, Romania

At the beginning of this month I was invited to go to Bucharest, Romania, to give a talk on GNOME at this year’s def.camp. The conference seems to be an established event in the Romanian security community and was organised quite well. As I said in my talk, I was happy to be there to tell those people about Free Software. I saw many people running around with their proprietary systems. It seems that certain parts of the security community do not believe that the security of a system greatly increases when it’s based on Free Software. In fairness, the event seemed to be a bit on the suit-and-tie side, where Windows is probably much more common than people would like.

Andrei Avădănei opened the conference by saying how happy he was that, even at that unholy hour (09:00 in the morning…), he counted 1100 people from 30 countries, and he expected that number to grow over the following hours. It didn’t feel that big, but the three halls were quite large indeed. One of those halls was the “hacking village”, in which participants could practise real-life “problem solving skills”. The hacking village was more of an expo where vendors had their booths, but it also hosted some interesting security challenges. My favourite booth was the Virtual Reality demo. Someone had brought an HTC VR system and people could play a simple game. I’ve tried an Oculus Rift before, in which I rode a roller coaster. With the HTC system, I also had some input methods, which really enhanced the experience. Very immersive.

Anyway, Andrei mentioned how happy he was to have the biggest security event in Romania be very grassroots- and community-driven. Unfortunately, he then let a representative from Orange, the main sponsor, talk. Of course, you cannot run a big event like that without enough financial backing. But giving the main stage, the prime opening spot, to the main sponsor does not leave the impression that they are community-driven… I expected the first talk after the opening to set the theme for the conference. In this case, it was a commercial. That doesn’t actually fit the conference too badly, because out of the 32 talks I counted 13 (or 40%) being delivered by sponsors. By sponsors I mean all companies listed on the homepage for their support. It may very well be that I am mistaking grassroots supporters for commercial sponsors.

The Orange CTO mentioned that connectivity is the new electricity, which shapes countries and communities. For a telco like them, ensuring connectivity means maintaining security, he said. The Internet of connected devices (IoT) is growing exponentially and so are the threats. Orange has to invest in order to maintain security for its clients. And they do, it seems. He showed a fancy-looking “threat map” which showed attacks in real time. Probably a Snort (or whatever IDS is currently en vogue) with a map showing arrows from Geo-IP locations pointing towards Romania.

Next up was Jason Street, who talked about how he failed doing his job. He was a blue-team security guy, he said, and worked for a bank as security information officer. He was seen by the people as the bad guy who makes your life dreadful. That was bad, he said, because he didn’t teach the people the values and usefulness of information security. Instead, he taught them that they’d better not get to meet him. The better approach, he said, is trying to be part of the solution rather than looking for problems. Empower the employees in what information security is doing or trying to do. It was a very entertaining presentation given by a very engaged speaker. I couldn’t get that much from the content, though.

Vlad from Orange talked about their challenges in providing an open, easy-to-use, and yet secure WiFi infrastructure. He spoke about user expectations and business requirements. Users expect to be able to just connect without much hassle. The business wants to identify the user and authorise usage. It stayed mainly on a high level, except for a few walkthroughs of authentication protocols. He mentioned EAP-SIM and EAP-AKA as more seamless authentication protocols compared to, say, a captive Web portal. I didn’t know that it’s possible to use the perfectly valid shared secret on your SIM for authentication. It makes perfect sense. Even more so for a telco such as Orange.

Mihai from Bitdefender talked about browser instrumentation for exploit analysis. That means, as I found out after the talk, harnessing the browser’s internals to analyse malicious payloads. He showed how your browser (well… Internet Explorer with Flash) is exploited nowadays. He ran a “Cerber” demo, exploiting Internet Explorer with some exploit kit. He showed Fiddler and Process Explorer, which displayed the HTTP traffic and the spawned processes. After visiting a simple Web page, the malicious payload was delivered, exploited the IE, and finally crashed it. The traffic in Fiddler revealed that the malware was delivered via a crafted Flash program. He used a Flash decompiler to look at the files. But he didn’t really find the exploit itself, probably because of some obfuscation. So what is the actual exploit? In order to answer that properly, you need to inspect the memory during runtime, he said. That’s where browser instrumentation comes into play. I think he interposed several functions, such as document.write, eval, object parameters, Flash’s LoadBytes, etc., to analyse what goes in and out. All that information was then saved to disk in separate files, i.e. everything that went to document.write was written to c:\share\document.write, everything that Flash’s LoadBytes took was written to c:\shared\loadbytes. He showed another demo with the Sundown exploit delivery framework, which successfully exploited his browser. He then showed the filesystem containing the above-mentioned information, which made it easier to spot the actual exploit and shellcode. To prevent such exploits, he recommended using Windows 10 and browsers other than Internet Explorer. Also, he recommended using AdBlock to stop “malvertising”. That is in line with what I recommended several moons ago when analysing embedded JavaScripts vulnerable to DOM-based XSS. The method is also very similar to what I used back in the day when hacking on Chromium and V8, so I found the presentation quite good. Except for the speaker :-/ He often looked at his slides with his back to the audience, and the audio wasn’t really good. I respect him for having shown multiple demos with virtual machine snapshots. I wouldn’t have done it, because demos usually fail! ;-)
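The interposition idea itself is simple. Here it is in miniature, transplanted to Python (my own stand-in for illustration, not the talk’s IE/Flash tooling): wrap a function so that everything passed to it is teed to a per-function log file.

```python
import functools, pathlib

LOGDIR = pathlib.Path("hook-logs")  # stands in for the c:\share\... files
LOGDIR.mkdir(exist_ok=True)

def interpose(func):
    """Wrap func so every argument it receives is appended to a log file,
    mirroring how the instrumented browser teed document.write, eval,
    LoadBytes, etc. to separate files on disk."""
    log = LOGDIR / func.__name__
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with log.open("a") as f:
            f.write(repr((args, kwargs)) + "\n")
        return func(*args, **kwargs)
    return wrapper

@interpose
def document_write(payload):   # hypothetical sink under analysis
    pass

document_write("<script>alert(1)</script>")  # lands in hook-logs/document_write
```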

Inbar Raz talked about Tinder bots. He said he was surprised to find so many “matches” while being in Sweden. He quickly noticed that he was being chatted up by bots, though, because he got sent the very same message from different profiles. These profiles also didn’t necessarily make sense. For example, the name and the age shown on the Tinder profile did not match the linked Instagram or Facebook profiles. The messages he received quickly included a link to a dodgy Web site. When he asked whois about the ownership, he found many more shady domains being used for dragging people to porn sites. The technical details weren’t overly elaborate, but the talk was quite entertaining.

Raul Alvarez talked about reverse engineering polymorphic ransomware. I think he mentioned those Locky-type pieces of malware which lock your computer or files. Now you might want to know how that malware actually works. He mentioned OllyDbg, Immunity Debugger, and x64dbg as tools to use for reverse engineering your files. He said that malware typically includes an unpacker which you need to get past first before you’re able to see the actual malware. He mentioned on-demand polymorphic functions, which are being called during the unpacking stage. I guess the unpacker decrypts or uncompresses to different bytes every time it’s run. The randomness comes from the RDTSC call, he said. The way I understand that mechanism, the unpacker only modifies a few bytes at a time and potentially modifies irrelevant bytes. Imagine code that jumps over a few bytes. These bytes could be anything, because they are never used, let alone executed. But I’m not sure whether this is indeed the gist of what he described in a rather complicated fashion. His recommendation for dealing with metamorphic code is to catch it right when it has finished decrypting the payload. I think everybody wishes to be able to do that indeed… He presented a general method for getting rid of malware once it has hit you: start in safe mode and remove suspicious registry entries for the “run” key. That might not be interesting to Windows people, but now I, being very ignorant about Windows, have learned something :-)

Chris went on to talk about securing a mobile cryptocoin wallet. If you ask me, he really meant how to deal with the limitations of the platform of his choice, the iPhone. He said that sometimes it is very hard to navigate the solution space, because businesses are not necessarily compatible with blockchains. He explained some currencies like Bitcoin, Stellar, Ripple, Zcash, and Ethereum. The latter is much more flexible and can also encode contracts like “in the event of X transfer Y amount of money to account Z”. Financial institutions want to keep their ledgers private, but blockchains were designed to run in public, he said. In addition, trust between financial institutions is low. Bitcoin is hard to use, he said, because the cryptography itself is hard to understand and to use. A balance has to be struck between usability and security. Secrets, he said, need to be kept secret. I guess he means that nobody, not even the user, may access the secret an application needs. I fundamentally oppose that. I agree that secrets need to be kept as securely as possible. But secrets must not be known by anyone else but the users who are supposed to benefit from them. If some other entity controls my secret, I am not really in control. Anyway, he looked at existing Bitcoin wallet applications: Bither and Breadwallet. He thinks that the state of the art can be improved if you are willing to break the existing protocol. More precisely, he wants to leverage the “security hardware” present in current mobile devices, like biometric sensors or “enclaves” in modern CPUs, to perform the operations based on a secret unextractably stored in hardware. With such an enclave, he wants to generate a key there and use it to sign data without the key ever leaving the enclave. You need to change the protocol, he said, because Apple’s enclave uses secp256r1, but Bitcoin uses secp256k1.
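The curve mismatch is easy to demonstrate. A small sketch using Python’s `cryptography` package (my illustration, not his code):

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

enclave_key = ec.generate_private_key(ec.SECP256R1())  # what Apple's enclave offers
bitcoin_key = ec.generate_private_key(ec.SECP256K1())  # what Bitcoin expects

# The enclave can sign, but over the "wrong" curve: a secp256r1
# signature will never verify against a secp256k1 key, so a modified
# protocol would have to accept r1 signatures instead.
signature = enclave_key.sign(b"some transaction", ec.ECDSA(hashes.SHA256()))
```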


My own talk went reasonably well, I think. I am not super happy, but happy enough. I’ve realised a few times now that I left out things I wanted to mention or could have better explained what I wanted. Then again, being perfect would be boring, so better leave some room for improvement ;-) I talked about how I think GNOME is a good vendor of security software. Its focus on user experience is its big advantage. The system should make informed decisions as much as possible and try to leave the user out as much as possible. Security should be an inherent feature, not something that you need to actively care about. I expected a more extreme reaction from the security-focused audience, but it seemed people mostly agreed. In my mind, “these security people” equate security with maximum control placed in users’ hands, which has to manifest itself in being able to control each and every aspect of a solution. That view is not compatible with trying to leave the user out of the security equation. It may be that I am doing “these security people” an injustice. Or that they have changed. Or simply that the audience was not composed of the people I thought they were. I was hoping for developers creating security software, and I mentioned that GNOME libraries would perform great for their tasks. Let’s see whether anyone actually takes my word for it and complains to me ;-)

Matt Suiche followed “the money of security companies, IPOs, and M&A”. In 2016, he said, the situation is not very different from the 90s: software still has bugs, bad configuration is still a problem, default passwords are still being used… The number of newly founded infosec companies reported by Crunchbase has risen a lot, he said. If you multiply that number by dollars, you can see 40 billion USD having been raised since 1998. What’s different nowadays, according to him, is that people in infosec are now more business-oriented rather than technical. We have more “cyber” now. He referred to buzzwords being spread. Also, we have bug bounty programmes luring people into reporting vulnerabilities. JP Morgan, for example, is spending half a billion USD on cyber security, he said. Interestingly, he showed that the number of vulnerabilities, i.e. RCE CVEs, has increased, but the number of actual exploitations within the first 30 days after a patch has decreased. He concluded that Microsoft got more efficient at mitigating vulnerabilities. I think you can also conclude other things, like that people care less about exploitation or that detection of exploitation has gotten worse. He said that the cost of an exploit has increased. It wasn’t long ago that you could cook up an exploit within two weeks. Now you need several people for at least three months. It was a well-made talk, but a bit too fluffy for my taste.

Stefan and David from Kaspersky talked off the record (i.e. without recordings) about “real-world lessons about spies every security researcher should know”. They have been around the industry for more than a decade and have started to notice patterns, they said. Patterns of weird things that happen and might not be easy to explain at first. It all begins with realising that we live in a world, whether we want it or not, where we have a certain control over the success of espionage attacks. Right now, people reverse engineer malware, which means that other people’s operations are being disrupted. In fact, they claimed that they reverse engineer and identify the world’s most advanced persistent threats like Duqu, Flame, Hellsing, and many others, and that their company is getting better and better at identifying other people’s operations. For the first time in history, they said, we as geeks have an influence on espionage. That makes some entities not very happy, and they let certain people visit you. These people come in various types. The profile of a typical government employee is that they are very open and blunt about their desires. Mostly, they employ patriotism to persuade you. Another type is the impersonator, they said. That actor is not perfectly honest with you. One speaker gave an example of meeting another person who identified himself with the very same name as him. It got strange, he said, when he met that person on a different continent a few months later and got offered a highly paid training. Supposedly only to build up a relationship. These people have enough budget to get close to you, they said. Another type of attacker is the “Banya Girl”. Geeks who have sat in front of a computer for most of their lives are easily attracted by girls, they said, so these girls have an easier time getting into your room or brain. His example took place one year ago: he was analysing a satellite-exploiting malware later known as Turla when he met this super beautiful girl in the hotel who sat there every day when he went to the sauna. The day they released the results about Turla, they went for dinner together and she listened to a phone call he had with a reporter. The girl said something like “funny that you call it Turla. We call it Uroboros”. Then he got suspicious and asked her who “they” are. She came up with stories he found weird and seemed convinced that she knew more than she was willing to reveal. In general, they said, asking for a selfie or a Facebook friend request can be an effective countermeasure against someone spying on you. You might very well ask what to do when you think you’re being targeted. It’s probably best to do nothing, they said. It’s their game; you’d better not start playing it, even if you wake up in the middle of it. You can try to take care of your OpSec to protect against certain data being collected or exfiltrated. After all, people are being killed based on metadata. But you should also try not to get yourself into trouble. Sex and money are probably the oldest weapons people employ against you. They also encouraged people to define trust and boundaries for existing and upcoming relationships. If you become too paranoid, you’ve already lost the battle, they said. Keep going to conferences, keep meeting people, and don’t close yourself off.

Those were two busy days in Bucharest. I’m happy to have gone and I hope I will have another chance to visit the lovely city :-) By that time, the links in this post will probably be broken ;-) I recommended using the “archive” URLs, i.e. https://def.camp/archives/2016/, already now, but nobody is listening to me… I can also not link to the individual talks, because the schedule page is relatively click-intensive, i.e. not deep-linkable :-(

First OpenPGP.conf 2016 in Cologne, Germany

Recently, I’ve attended the first ever OpenPGP conference in Cologne, Germany. It’s amazing how 25 years of OpenPGP have passed without any conference for bringing various OpenPGP people together. I attended rather spontaneously, but I’m happy to have gone. It’s been very insightful and I’m really glad to have met many interesting people.

Werner himself opened the conference with his talk on key discovery. He said that the problem of integrating GnuPG in MUAs is solved. I doubt that with a fair bit of confidence. Besides a few weird MUAs (mutt, gnus, alot, …) I only know KMail (which should maybe also go into the “weird” category ;-) ) that uses GnuPG through gpgme, which is how a MUA really should consume GnuPG functionality. GNOME’s Evolution, while technically working, supports GnuPG only badly. It should really be ported to gpgme. Anyway, Werner said that the problem of encryption has been solved, but now you need to obtain the key for the party you want to communicate with. How can you find the key of your target? He said that keyservers cannot map a mail address to a key. It was left a bit unclear what he meant, but he probably referred to the problem of someone uploading a key for your email address without your consent. Later, he mentioned the Web of Trust, which is meant for authenticating the other user’s key. But he disliked the fact that it’s “hard to explain”. He didn’t mention why, though. He did mention that the WoT exposes the global social graph, which is not a desirable feature. He also doubts that the Web of Trust scales, but he left the audience wondering why. To solve the mapping problem, you might imagine keyservers which verify your email address before accepting your key. These, he said, “harm the system”. The problem, he said, is that such a system only works with one keyserver, which would harm the decentralised nature of the OpenPGP system and bring us back into the X.500 dark age. While I agree with the conclusion, I don’t fully agree with the premise. I don’t think it’s clear that you cannot operate a verifying server network akin to how it’s currently done. For example, the pool of keyservers could only accept keys which were signed by one of the servers of the pool within the last, say, six months. Otherwise, the user has to enrol by following a challenge-response protocol. The devil may be in the details, but I don’t see how it’s strictly impossible.
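In code, the acceptance rule I have in mind would look roughly like this (a sketch of my own idea; the key model and field names are made up):

```python
import datetime

POOL_FPRS = {"<fingerprint of pool key 1>", "<fingerprint of pool key 2>"}
MAX_AGE = datetime.timedelta(days=180)  # "within the last six months"

def accept_key(key) -> bool:
    """Accept an uploaded key iff a pool member vouched for it recently;
    otherwise the uploader must first enrol via challenge-response."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return any(sig.issuer_fpr in POOL_FPRS and now - sig.created < MAX_AGE
               for sig in key.certifications)
```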

However, in general, Werner likes the OpenSSH approach better. That is, it opportunistically uses a key and detects when it changes. As with the Web of Trust, the key validation happens on your device only, rather than, say, having an external entity sell you the trust, as with X.509.

Back to the topic. How do you find a key of your partner now? DANE would be an option. It uses DNSSEC which, he said, is impossible to implement a client for. It also needs collaboration of the mail provider. He said that Posteo and mailbox.org have this feature.

Anyway, the user’s mail provider needs to provide the key, he said. Web Key Directory is a new proposal. It uses HTTPS for key look-up at a well-known name on the domain of the email provider. Think .well-known/openpgpkey/. It’s not as distributed as DNS, but as decentralised as email is, he said. It still requires collaboration of the email provider to set the Web service up. The proposal trusts the provider to return genuine keys instead of customised ones. But the system shall only be used for initial key discovery. Later, he mentioned handling revocation via the protocol™. For some reason, he went on to explain a protocol for submitting a key in much more detail, rather than expanding on the protocol for the actual key discovery, what happens when the key becomes invalid, when it expires, when it gets rolled over, etc.
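The look-up itself is straightforward. A sketch of how a client derives the Web Key Directory URL, as I understand the proposal (the exact path and hash encoding are whatever the current draft says, so treat this as an approximation):

```python
import hashlib

ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"  # z-base-32 alphabet

def zbase32(data: bytes) -> str:
    """Minimal z-base-32 encoder (no padding)."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(ZB32[int(bits[i:i+5].ljust(5, "0"), 2)]
                   for i in range(0, len(bits), 5))

def wkd_url(address: str) -> str:
    """SHA-1 the lower-cased local part, z-base-32 encode it, and ask
    the provider's Web server at a well-known path."""
    local, domain = address.lower().split("@")
    digest = hashlib.sha1(local.encode()).digest()
    return f"https://{domain}/.well-known/openpgpkey/hu/{zbase32(digest)}"

print(wkd_url("alice@example.org"))
```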
----

Next up was Meskio, who talked about key management at LEAP, the LEAP Encryption Access Project. They try to provide a one-stop solution to encrypting all the things™. One of its features is to transparently encrypt emails. To achieve that, it opens a local MTA and an IMAPd which then communicate via a VPN with the provider. It thus builds on the idea of federation the same way current email protocols do, he said. For LEAP to provide the emails, they synchronise the mailbox across devices. Think of a big Dropbox share, but encrypted to all devices. They call it Soledad, which is based on u1db.

They want to protect the user from the provider and the provider from the user. Their focus on ease of use manifests itself in Puppet modules that make it easy to deploy the software. The client side is “Bitmask”, a desktop application written in Qt which sets everything up. That also includes transparently getting keys of other users. Currently, he said, we don’t have good ways of finding keys. Everything assumes that there is user intervention. They want to change that and build something that encrypts emails even when the user does not do anything. That’s actually quite an admirable goal. Security by default.

Regarding the key validation they intend to do, he mentioned that it’s much like TOFU, but with many, many exceptions, because there are many corner cases to handle in that scheme. Keys have different validation levels. The key with the highest validation level is used. When a key roll-over happens, the new key must be signed by the old one and the new key needs to be of at least the same validation level as the old one. Several other conditions need to hold as well. Quite an interesting approach, and I hope that they will get more exposure and users. It’s promising, because they don’t change “too” much. They still do SMTP, IMAP, and OpenPGP. Connecting to those services is different, though, which may upset people.
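The roll-over rule, as I understood it, condenses to a small check (field names made up for illustration):

```python
def accept_rollover(old_key, new_key) -> bool:
    """Accept new_key as the successor of old_key only if the old key
    signed it and the validation level did not drop."""
    signed_by_old = any(sig.issuer_fpr == old_key.fpr
                        for sig in new_key.certifications)
    return (signed_by_old
            and new_key.validation_level >= old_key.validation_level)
```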


More key management was covered by Comodo’s Phillip Hallam-Baker, who then went on to talk about The Mathematical Mesh: Management of Keys. He also doesn’t want to change the user experience, except for simplifying everything. Every button to push is one too many, he said. And we must not write instructions. He noted that if every user had a key pair, we wouldn’t need passwords and every communication would be secured end-to-end. That is a strong requirement, of course. He wants to have a single trust model supporting every application, so that the user does not have to configure separate trust configurations for S/MIME, OpenPGP, SSH, etc. That again is a bit of a far-fetched dream, I think. Certainly worth working towards, but I don’t believe I’ll experience such a thing in my lifetime. Except if we think of a single closed system, of course. Currently, he said, fingerprints are used in two ways: either users enter them manually or they compare them to a string given by a secure source.

He presented “The Mesh”, which is a virtual store for configuration information. The idea is that you can use the Mesh to provision your devices with the data and keys they need to make encrypted communication happen. The Mesh is thus a bit of a synchronised storage which keeps encrypted data only. It would have been interesting to see him relate the Mesh to Soledad, which was presented earlier. Like Soledad, you need to sign up with a provider and connect your devices to it when using the Mesh. His scheme has a master signature key which only signs an administration key created for that purpose. That in turn signs application and device keys. Each application can create as many keys as it needs. Each device has three device keys; he unfortunately did not go into detail on why these are needed. He also has an escrow method for getting the keys back when a disaster happens: the private keys are encrypted, secret-shared, and uploaded. Then, you can use two out of three shares to get your key back. I wonder where those shares are uploaded to, though, and how you’d actually find your shares again.
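Two-out-of-three secret sharing is a classic Shamir construction. A toy version in Python (toy field size; a real implementation would split a full-size key and scatter the shares across independent services):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime as the field modulus

def split(secret: int):
    """Pick a random line through (0, secret); shares are points on it.
    Any two points recover the line; a single point reveals nothing."""
    a = secrets.randbelow(P)
    return [(x, (secret + a * x) % P) for x in (1, 2, 3)]

def recover(share1, share2) -> int:
    (x1, y1), (x2, y2) = share1, share2
    slope = (y2 - y1) * pow(x2 - x1, -1, P) % P
    return (y1 - slope * x1) % P  # the line's value at x = 0

shares = split(424242)
assert recover(shares[0], shares[2]) == 424242
```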

Then he started losing me when he mentioned that OpenPGP keyservers, if designed today, would use a “linked notary log (blockchain)”. He also brought (proxy) re-encryption into the mix, which I didn’t really understand. The purpose itself I think I understood: he wants the Mesh to cater for services that re-encrypt to the several keys that all of one entity’s devices have. But I didn’t really understand why it’s related to his Mesh at all. Altogether, the proposal is a bit opportunistic. But it’s great to have some visions…

Bernhard Reiter talked about getting more OpenPGP users by 2017. Although it was more about whitewashing the money he receives from the German administration… He is doing Gpg4win, the Windows port of GnuPG. The question, he said, is how to get GnuPG to a large user base and make them use it. Not surprisingly, he mentioned that we need to improve the user experience. Once the user gets in touch with cryptography and is exposed to making trust decisions, he said, the user is lost. I would argue otherwise, because people are heavily exposed to cryptography when using WhatsApp. Anyway, he then referred to an idea of his: “restricted documents”. He wants to build a military style of access control for documents. You can’t blame him; it’s probably what he makes his money with.

He promised to present ideas for Android and the Web. Android applications, he said, run on devices that are ten times smaller and slower compared to “regular” machines. They actually conducted a study to find this, and other things, out. Among the other things are key insights such as “the Android permission model allows for deploying end to end encryption”. Genius. They also found out that there is an OpenPGP implementation in Bouncy Castle which people use, and that it’s possible to wrap libgcrypt for Java. No shit!!1 They have also identified OpenKeychain and K9 mail as applications using OpenPGP. Wow. As for the Web, their study found out that Webmail is a problem, but that an extension to a Web browser could make end to end encryption possible. Unbelievable. I am not necessarily disappointed, given that they are a software company and not a research institute. But I’m puzzled as to in what reality these results are interesting to the audience of OpenPGP.conf. In any case, his company conducted the study as part of the public tender they won, and their results may have been biased by his subcontractors, who are deeply involved in the respective projects (i.e. Mailvelope, OpenKeychain, …).

As for his idea regarding UX, his main idea is to implement the Web Key Directory (see Werner’s opening talk) discovery mechanism. While I think the approach is good, I still don’t think it is sufficient to make 2017 the year of OpenPGP. My concerns revolve around the UX in non-straightforward cases, like people revoking their keys. It’s also one thing to have a nice UX and another to actually have users going for it. Totally unrelated but potentially interesting: he said that the German Federal Office for Information Security (“BSI”) uses 500 workstations running GNU/Linux with a Plasma desktop, in addition to Windows.

----

Holger Krekel then went on to talk about automatic end-to-end encrypted email. He is working on an EU-funded project called NEXTLEAP. He said that email is refusing to die in favour of Facebook and all the other new kids on the block. He stressed that email is the largest open social messaging system and that many other systems use it as an anchor of identity. However, many people use it for “SPAM and work” only, he said. He identified various usability problems with end-to-end encrypted email: key distribution, preventing SPAM, managing secrets across devices, and handling device or key loss.

To tackle the key distribution problem, he mentioned CONIKS, Werner’s Webkey, Mailvelope, and DANE as projects to look into. With these, the respective providers add APIs to find public keys for a person. We know about Werner’s Webkey proposal. CONIKS, in short, is a key transparency approach which requires identity providers to publicly testify your key. Mailvelope automatically asks a verifying key server to provide the recipient’s key. DANE uses DNS with DNSSEC to distribute keys.

He proposed to have inline keys. That means attaching keys and cryptographic information to your emails. When you receive such a message, you would parse the details and use them for encryption the next time you create a message. The features of such a scheme, he said, are that it is more private in the sense that there is no public key server which exposes your identity. Also, it’s simpler in the sense that you “only” need to get support from MUAs and you don’t need to care about extra infrastructure. He identified that we need to run a protocol over email if we ever want to support that scheme. I’m curious to see that, because I believe that it’s good if we support protocols via email. Much like Outlook already does with its voting. SPAM prevention would follow naturally, he said: because the initial message is sent as plain text, you can detect SPAM. Only if you reply does the other party get your key, he said. I think it should be possible to get a person’s key out of band, but that doesn’t matter much, I guess. Anyway, I couldn’t really follow that SPAM argument, because it seems to imply that we can handle SPAM in the plain-text case. But if that were the case, then we wouldn’t have the SPAM problem today. For managing keys, he thinks of sharing your keys via IMAP, like in the Whiteout proposal.
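Attaching a key to an outgoing mail is simple enough; the interesting part is the convention for parsing it back out. A sketch of the sending side with Python’s email library (my illustration of the idea, not NEXTLEAP’s actual wire format):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "hello"
msg.set_content("First contact goes out in the clear.")

# Inline key: ship the public key along as an application/pgp-keys part.
with open("alice.asc", "rb") as f:
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pgp-keys", filename="alice.asc")

# A receiving MUA would parse the part out and encrypt to that key the
# next time it composes a message to alice@example.org.
```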

----

Stefan Marsiske then talked about his concerns regarding the future directions of GnuPG. He said he had done some teaching regarding crypto and privacy-preserving tools and that he couldn’t really recommend GnuPG to anyone, because it could not be used by the people he was teaching. Matt Green and Schneier have both said that PGP is not able to secure email, or that email is “unsecurable”. This is in line with the list that secushare produced. The saltpack people also list some issues they see with OpenPGP. He basically evaluated gpg against the list of criteria established in the SoK paper on instant messaging, which is well worth a read.

Lutz Donnerhacke then gave a brief overview of the history of OpenPGP. He is one of the authors of the initial OpenPGP standard. In 1992, he said, he read about PGP on Usenet. He then cared about getting PGP 2.6.3(i)n out of the door to support keys larger than 1024 bits and fix other bugs that annoyed him. ViaCrypt then sold PGP4, which was based on PGP2. PGP5 was eventually exported in books and scanned back in during HIP97 and CCCamp99, he said. Funnily enough, a bug lurked in there for about five years, he said: their get_random always returned 1…

Funnily enough, he uses a 20-year-old V3 key, so at least his key ID is trivially forgeable, and the fingerprint should also be easy to create. He acknowledges that but doesn’t really care. Mainly because he “is a person from the last century”. I think this mindset is present in many people’s heads…

The next day, Intevation’s Andre Heinecke talked about the “automated use of gpg through gpgme“. GPGME is the abbreviation of “GnuPG Made Easy” and is meant to be a higher-level abstraction for gpg. “gpg is a tool, not a library”, he said. For a library you can apply versioning, while the tool may change its output liberally, he said. He mentioned gpg’s machine interface with --with-colons and that changes to that format will break things. GPGME abstracts that for you and tries to make the tool a library. There is a defined interface and “people should use it”. A selling point is that it works with all gpg versions.

When I played around with gpgme, I found it convoluted and lacking basic operations. I think it’s convoluted because it is highly stateful and you need to be careful to call the (many) functions in the correct order if you don’t want it to complain. It’s lacking, because operations like signing other people’s keys are hard to do; the interface was not designed with that in mind. He also acknowledged that it is a fairly low-level API, in the sense that every option has to be set distinctly and that editing keys is especially hard. In gpgme, he said, operations are done based on contexts that you have to create. A context can be created for various gpg protocols. Surprisingly, that’s not only OpenPGP, but also CMS, GpgConf, and others.
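To give a taste of the context-based style, here is how encrypting to someone looks with GPGME’s official Python bindings (the `gpg` module); a sketch from memory, so details may vary between versions:

```python
import gpg

# Contexts carry the protocol and all the state between operations.
ctx = gpg.Context(armor=True)
recipients = list(ctx.keylist("alice@example.org"))
ciphertext, result, sign_result = ctx.encrypt(
    b"hello", recipients=recipients, sign=False, always_trust=True)
print(ciphertext.decode())
```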

I don’t think GNOME’s software has been ported to gpgme. At least Evolution and Seahorse call gpg directly rather than using gpgme. We should change that! Although gpgme is a bit of a weird thing. Normally™ you’d have a library and build a tool with it. With gpgme, you have a tool (gpg) and build a library around it. It feels wrong. I claim that if we had an OpenPGP library that reads and composes packets, we would be better off.

Vincent and Dominik came to talk about UX decisions in OpenKeychain, the Android OpenPGP implementation. It does key management, encryption and decryption of files, and other OpenPGP operations such as signing keys. The speakers said that they use Bouncy Castle for the crypto and OpenPGP serialisation. They are also working on K9, which will support PGP/MIME soon. They have funding from the Open Tech Fund for that work. In general, they focused on the UX to make it easy for the consumer. They identified “workflows” users possibly want to carry out with their app. Among them are the discovery and exchange of keys, as well as confirming them (signing). They showed nice-looking screenshots of how they think they made the UI better. They probably did, but I found the foundations a bit lacking. Their design process seems to be a rather ad-hoc affair and they themselves are their primary test subjects. While it’s good work, I don’t think it’s easily applicable to other projects.

An interesting thing happened (again): They deviate from the defaults that GnuPG uses. Unfortunately, the discussions revolving about that fact were not very elaborate. I find it hard to imagine that both tools have, say, different default key lengths. Both tools try to prevent mass surveillance so you would think that they try to use the same defaults to achieve their goal. It would have been interesting to find out what default value serves the desired purpose better.

Next up was Kristian Fiskerstrand, who gave an update on the SKS keyserver network. SKS is the software that we trust our public keys with. SKS is written in OCaml, which he likes, but of which he said that people have different opinions. SKS is single-threaded, which is a problem, he said. So you need to have a reverse proxy to handle more than one client.

He also talked about the Evil32 keys, which caused a bit of a stir recently. In essence, the existing OpenPGP keys were duplicated, but with matching short key IDs. That means that if you look up a key by its short key ID, you’re screwed, because you get the wrong key. If you are using the name or email address instead, you also have a problem. People were upset about getting the wrong key when they had asked the keyserver to deliver one.
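The underlying weakness is easy to see: the short key ID is nothing more than the last 32 bits of the fingerprint, so a forger only has to grind out keys until eight hex characters match.

```python
# An example V4 fingerprint (hex, 160 bits):
fpr = "EB85BB5FA33A75E15E944E63F231550C4F47E38E"

long_id  = fpr[-16:]  # 64-bit "long" key ID
short_id = fpr[-8:]   # 32-bit short key ID: only ~4 billion possibilities,
                      # which is exactly the space Evil32 exhausted
print(long_id, short_id)
```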

He said that it’s absolutely no problem, because users should verify the keys anyway. I can only mildly agree. It’s true that users should do that. But I think we would live in a nicer world if we could still maintain a significantly high security level even when such rigorous verification does not happen. If he really maintains that point of view, then I’m wondering why he allows keys to be retrieved by name, email address, or anything other than the fingerprint in the first place.

----

Volker Birk from pretty Easy privacy talked about their project, which aims at making encrypted email possible for the masses. They make extensive use of gpgme and GNUnet, he said. Their focus is “privacy by default”. Not security, he said. If security and privacy contradict each other in some cases, they go for privacy instead of security. For example, the Web of Trust is a good idea for security, but not for privacy, because it reveals the social graph. I really like that clear communication and the admission that security and privacy do not necessarily go well together. I also think that keyservers should be considered harmful, mainly because they learn who is attempting to communicate with whom. He said that everything should be decentralised and peer-to-peer. Likewise, a provider should not be responsible for binding an email address to a key. The time was limited, unfortunately, so the actual details of how it’s supposed to work were not discussed. They wouldn’t be the first ones to attempt a secure, or rather privacy-preserving, solution. In the limited time, however, he showed how to use their Python adapter to have it automatically locate a public key of a recipient and encrypt to it. They have bindings for various other languages, too.

Interestingly, a keysigning “party” was scheduled for the first evening, but it didn’t take place. You would expect that if anybody cared about that, it would be the OpenPGP hardcore hackers, all of whom were present. But not a single person (as in nobody, zero, “0”, null) was interested. You can’t blame them. It was probably a cool thing when you were younger and this GnuPG thing was all about preventing the most powerful targeted attacks. I think people have realised that you can’t have people mumble base16-encoded binary strings AND mass adoption. You need to at least bring cake to the party… Anyway, as you might be aware, we’re working towards a more pleasant key signing experience :) So stay tuned for updates.

Talking at mrmcds 2016 in Darmstadt, Germany

A couple of weeks ago, I attended the mrmcds in Darmstadt, Germany, just like I did in the last years. Like the years before, the conference was nicely themed. This year, the theme was all things medical. So speakers were given doctors’ coats, conference staff were running around like surgeons, alcohol could be had intravenously…

mrmcd 2016 logo

The talk on medical device nightmares (video) showed some medical devices, like those which show and record vital signs such as pulse or blood pressure, but also fancier devices such as an MRI. Of course, he did not only show the devices themselves, but rather how they tested them on their external interfaces, i.e. the networking port. Speaking of the MRI: it exposed a few hundred open ports. The services listening on these ports crashed when nmap scanned the host… But at least they apparently recovered automatically. He also presented an anaesthetic monitoring device, which is supposed to show how alive a patient still is. The device has a telnet interface which you can log on to with default credentials. The telnet interface has, not surprisingly, a command injection vulnerability, which allowed them to take ownership of the device. The next step was then to hijack the framebuffer and to render whatever they wanted on it. For example, nice-looking vital data, as if the patient was still alive. Or, probably the more obvious thing to do: show Rick Astley.

It’s been an entertaining talk which makes you realise how complicated the whole area of pharmaceutical or medical appliances is. They need to go through a long and troublesome certification process, not unlike other businesses (say, car manufacturers). Patching the underlying Windows is simply not possible without losing the certification. You may well ask whether a certificate or an up-to-date OS is better for your health. And while I make it look a bit ridiculous now, I do appreciate that it’s a tough subject.

My own talk on GNOME (video) was well attended. I explained why I think GNOME is a good candidate for shipping security software to the masses. I said that GNOME cares about its users and goes the extra mile to support as many users as possible. That includes making certain decisions to provide a secure by default system. I gave two examples of how I think GNOME pushes the envelope when it comes to making security usable. One was the problem of OpenPGP keysigning. I mentioned that it’s a very geeky thing which mortals do not understand. Neither do many security people, to be honest. And you can’t even blame them, because it’s a messy thing to do. Doing it properly™ involves a metric ton of OpSec to protect the integrity of the key to be signed. I think that we can make the process much more usable than it is right now while maintaining its security. This year, I had Andrei working with me to make this happen.

The other example I gave was the problem of USB security. Do you know when you use your USB? And do you know when you don’t? And do you know when other people use your USB? I talked about the possibility to lock down your USB ports while you’re not in front of your computer. The argument goes that you can’t possibly insert anything if you’re away. Of course, there are certain cases to keep in mind, like not forbidding a keyboard to be plugged in, in case the old one breaks. But there is little reason to allow your USB camera to work unless you are actively using your machine. I presented what this could look like by showing off the work George did last summer.
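To give an idea of the primitive underneath: the Linux kernel already exposes authorisation knobs for USB in sysfs. The following is merely my sketch of that mechanism, not George’s actual code; a real implementation would react to the session being locked and keep input devices allowed:

```python
import glob

def set_usb_authorization(allow: bool) -> None:
    value = "1" if allow else "0"
    # authorized_default controls whether newly plugged devices on a
    # root hub get authorized automatically; writing needs root.
    for path in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
        with open(path, "w") as f:
            f.write(value)

set_usb_authorization(False)  # e.g. when the screen locks
```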

My friend Jens talked about Reverse Engineering of applications. He started by explaining why you would do that in the first place. Analysing your freshly received malware or weaknesses (think backdoors or bypasses) in your software are motivations, he said. But you might as well tinker with old software which has no developer anymore or try to find APIs of other software for interoperability purposes, he said. Let me note that with Free Software, you wouldn’t have to reverse engineer the binary ;-) But he also mentioned that industrial espionage is a reason for people to reverse engineer a compiled programme. The tool he uses the most is the “file” tool. He went on to explain the various executable formats for various machine flavours (think: x86, ELF, PE, JVM). To go practical, he showed a .NET application which only writes “hello, world!”, because malware, he said, is written in .NET nowadays. In order to decompile the binary he recommended “iLspy” as a one-stop suite for reverse engineering .NET applications. Next up were Android applications. He showed how to pull the APK off the device and how to decompose it to JAR classes. Then he recommended CFR for decompiling those into Java code. His clients, mostly banks, he said, try to hide secret keys in their apps, so the first thing he does when having a new job is to grep for “secret”. In 80% of the cases, he said, it is successful. To make it harder for someone to reverse engineer the binary, obfuscators exist for Java, but also for C. He also mentioned some anti-debugging techniques such as checking for the presence of certain DLLs or throwing certain interrupts to determine whether the application runs under a debugger. It was a very practical talk which certainly made it clear that the presented things are relevant today. Due to the limited time and the many examples, he could only scratch the surface, though.
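The “grep for secret” step is simple enough to sketch. This is my own illustration of the idea, not his tooling, and the directory name is made up:

```python
import pathlib

def find_secrets(root: str, needle: str = "secret") -> None:
    # Walk the decompiled Java sources and flag suspicious lines.
    for path in pathlib.Path(root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if needle in line.lower():
                print(f"{path}:{lineno}: {line.strip()}")

find_secrets("decompiled-app/")
```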

It’s been a nice conference with 400ish attendees. I really like how they care about the details, also when it comes to making the speakers feel good. It’s a pity that it’s only one weekend. I’m looking forward to attending next year’s edition :-)

Talking at OpenSuSE Conference 2016 in Nuremberg

I was invited to this year’s OpenSuSE Conference in Nuremberg, Germany. I had been to that event two years ago in Dubrovnik which I enjoyed so much that I was eager to go again.


The venue was very easy to find due to posters hanging everywhere. The flow of information was good in general. That includes emails being sent every day which highlighted items in the schedule or restaurant recommendations for the evening.

I arrived just in time for my first show on GNOME Keysign. For better or worse, we were only very few people, so we could discuss matters deeply. It was good, because we found bugs and other user facing issues that need to be resolved. The first and most obvious one was GnuPG 2.1 support. Although still experimental, OpenSuSE ships 2.1 by default. The wrapping library we’re using to interact with GnuPG did not support calling the newer gpg, so we had to identify the issues, find a fix, and test. It eventually worked out :-)
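The kind of check the wrapper needed boils down to something like the following sketch; the version parsing is mine and not the library’s actual code:

```python
import re
import subprocess

def gpg_version(binary: str = "gpg") -> tuple:
    # The banner's first line looks like "gpg (GnuPG) 2.1.13".
    out = subprocess.run([binary, "--version"],
                         capture_output=True, text=True).stdout
    match = re.search(r"\(GnuPG\) (\d+)\.(\d+)\.(\d+)", out)
    return tuple(int(g) for g in match.groups()) if match else (0, 0, 0)

if gpg_version() >= (2, 1, 0):
    print("modern GnuPG 2.1 detected")
```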

I also had a talk called “Five years after 3.0” which, to my surprise, has been covered by reddit and omgubuntu. I was also surprised by the schedule which only gave me 30 minutes instead of the usual 45 or 60. I was eventually politely reminded that I had significantly exceeded my time *blush*. We thus needed to move discussions outside, which was fruitful. People at OpenSuSE Con are friendly and open-minded. It’s a pleasure to have arguments there :)

I didn’t actually see many talks myself, although the schedule was quite full with interesting topics! But knowing that the VoCCC people were running the video recordings, I could count on recordings being available after a few hours rather than days.

But I have had very interesting and enlightening discussions about distributions, containerised apps, Open Build Service, OpenQA, dragging more GNOME people towards OpenSuSE, fonts, and other issues. That’s the great thing about conferences: You get to know people with interesting stories. As for the fonts, for example, I was discussing the complexity involved in rendering glyphs and whether this could eventually lead to security problems. I think the attack surface of fonts has been undervalued and needs some investigation. I hope I can invest some time in looking at building and modifying fonts. I also found it interesting to discuss why I would not recommend OpenSuSE as a GNU/Linux distribution to anyone, mainly because I need to reflect and challenge myself. Turns out, I don’t have any good reason except that my habits simply don’t include using OpenSuSE myself and I am thus unable to give a recommendation. I think they have interesting infrastructure, though. I see the build service as a way to have people’s apps built and OpenQA as a way to have them tested. Both seem to be a little crude overall, but they could become the tools to use for distributing your flatsnappimgpack. An idea was circling around to have a freedesktop.org for those app image formats and execution environments, but in a somewhat more working state. I think the key to success of any such body is being lightweight and not ending up like OpenStack. Let’s hope we can bring people who work on various parts or even implementations of containerisation for desktop applications together. I also hope that the focus for containered desktop apps will be isolation from other apps rather than actually distributing the software, because I don’t think we have a big problem with getting Free Software into the user’s hands.

So a big “thank you” to this year’s organisers for this event. I hope I can attend one of the following conferences :)

Talking at FOSSASIA 2016 in Singapore

I was able to attend this year’s FOSSASIA in Singapore. It’s quite a decently sized event with more than 150 speakers and more than 1000 people attending. Given the number of speakers you can infer that there was an insane number of talks in the two and a half days of the conference. I’ve seen recordings being made, so I would expect those to show up at some stage, but I don’t have any details. The atmosphere was very friendly and the venue a-maze-ing. By that I mean that it was a fantastic and huge maze. We were hosted in Singapore’s Science Museum which exhibits various things around biology, physics, chemistry, and much more. It is a rather large building in which it was easy to get lost. But it was great being among those sciency exhibits and to exchange ideas and thoughts. Sometimes, we could see an experiment being performed as a show for the kids visiting the museum. These shows included a Tesla coil or a fire tornado. Quite impressive.

One of the first things I could see was Cat Altmann talking about her position in the field of marketing which, she says, engineers tend to not like. But as opposed to making people like things they don’t need or want, she is rather concerned with reaching out to people to get code open sourced. The Making and Science team within Google exists to work on things like sending kids from under-resourced schools on field trips. Science also plays a role in this year’s Summer of Code: 43+ out of 180 projects are related to science, she said.

Nikolai talked about the Nefertiti hack. In case you missed it, the Nefertiti bust was “cloned”. The bust is a 3000-year-old artefact which is housed in Berlin and is publicly available (for people who can travel to Berlin…). The high resolution data of the scanned object is, however, not available, along with much other data that the museums have about their objects. He compared that behaviour to colonisation; I couldn’t really follow why, though. Anyway, they managed to scan the bust themselves by sneaking into the museum, and now they’ve released the data. Their aim, as far as I could follow, was to empower people to decide what culture is and what is not. Currently, it is the administration which decides, he said. With the data (and I think with a printed copy of the bust) they travelled to Egypt to make an exhibition. But beforehand, they had produced a video which substantiates the claim of having found a second bust while digging for artefacts. The talk itself was interesting, but the presentation was bad. The speaker was lost a few times and didn’t know how to handle the technical side of the presentation. Anyway, I like it when such guerrilla art makes it into the news.

Lenny talked about systemd which uncovered some news that you may have missed if you’re not following its development too closely. He said that systemd has moved to GitHub and, while it’s attracting new contributors, it still has major issues which he didn’t mention, though. A component named networkd is now the default on both Fedora and Ubuntu. It’s a rather underwhelming piece of software though, because it has no runtime interface. The nspawn tool is used by CoreOS’s rkt, a Docker alternative. He also mentioned sd-bus which he claims is a replacement for the reference D-Bus implementation. Another interesting thing he mentioned is that systemd can not only do socket activation but also USB FunctionFS activation. So whenever you need to start your USB gadget only when the USB cable has been plugged in, systemd may be for you.
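For illustration, consuming a socket passed by systemd is pleasantly simple; this is a minimal sketch of the fd-passing convention (activated fds start at 3, their count is in LISTEN_FDS), not anything shown in the talk:

```python
import os
import socket

SD_LISTEN_FDS_START = 3

def listen_sockets() -> list:
    # systemd sets LISTEN_FDS to the number of sockets it passed.
    nfds = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(nfds)]

for sock in listen_sockets():
    conn, _ = sock.accept()
    conn.sendall(b"hello from a socket-activated service\n")
    conn.close()
```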

In another session, Lenny continued talking about systemd with regards to its container capabilities. Containers are all the rage, right…? He said that all the systemd tools work on containers, too, with the -M switch. systemd also just works inside a container, with the exception of Docker, he said. It is also possible to make systemd download, verify, and run full system images. Funnily enough, he said, Ubuntu images are properly signed but Fedora images are not.

I also had a talk and a workshop to give. The workshop was titled “Functionality, Security, Usability: Choose any two. Or GNOME.” which is a bit sensational, I admit. I’m not experienced with holding workshops and it was unclear to me what to expect. Workshop sounds to me like people come and they want to hack on something. The venue, however, was not necessarily equipped with a reliable Internet connection for the attendees. Also, the time was set to one hour. I don’t think you can do a meaningful workshop within one hour, so I didn’t really know how to prepare. I ended up ranting about OpenPGP, GnuPG, and SKS. Then I invited people to hack on GNOME Keysign, which was a bit difficult given the time constraints we had. Well, that I mainly had, because I was meant to give a proper talk shortly after.

During my talk, I gave a glimpse of what to expect from GNOME 3.20, codename Delhi. It was a day before the release, so it was the perfect timing for getting people excited. And I think it worked reasonably well. I would have loved to be able to show the release video, but it wasn’t finished until then. So I mainly showed screenshots of the changes and discussed on a high level what GNOME is and what it is not. People were quite engaged, although some still believe GNOME 3 was designed for tablets.

FOSSASIA used to be in Vietnam and it was actually co-hosted with GNOME.Asia Summit once. It smells like we could see such a double event in the future, but probably in Singapore. I think that’d be great, because FOSSASIA is a well organised event, albeit with a little chaos here and there. But who doesn’t have that… In fact, I nearly couldn’t make it to the conference, because GNOME did not react for two weeks, so the conference removed me from the schedule. Eventually, things worked out, so all is good, and I would like to thank the GNOME Foundation for contributing to the coverage of the costs.

Sponsored by GNOME!

FOSDEM 2016

It’s the beginning of the year and, surprise, FOSDEM happened :-) This year I even managed to get to see some talks and to meet people! Still not as many as I would have liked, but I’m getting there…

Lenny talked about systemd and what is going to be added in the near future. Among many things, he made DNSSEC stand out. I’m not sure yet whether I like it or not. On the one hand, you might get more confidence in your DNS results. Although, as he said, the benefits are small as authentication of your bank happens on a different layer.

Giovanni talked about the importance of FOSS in the surveillance era. He began by mentioning that France declared the state of emergency after the Paris attacks. That, however, is not in line with democratic thinking, he said. It’s a tool from a few dozen years ago, he said. With that emergency state, the government tries to weaken encryption and to ban any technology that may be used by so-called terrorists. That may very well include black Seat cars like the ones used by the Paris attackers. But you cannot ban simple tools like that, he said. He said that we should make our tools much more accessible by using standard FLOSS licenses. He alluded to OpenSSL’s weird license being the culprit that caused Heartbleed not to have been found earlier. He also urged the audience to develop simpler and better tools. He complained about GnuPG being too cumbersome to use. I think the talk was a mixed bag and got lost among the many topics at hand. Anyway, he concluded with an interesting interpretation of Franklin’s quote: If you sacrifice software freedom for security you deserve neither. I fully agree.

In terrible Frenglish, Ludovic presented on Python’s async and await keywords. He said you must not confuse asynchronous and parallel execution. With asynchronous execution, all tasks are started but only one task finishes at a time. With parallel execution, however, tasks can also finish at the same time. I don’t know yet whether that description convinces me. Anyway, you should use async, he said, when dealing with sending or receiving data over a (mobile) network. Compared to (p)threads, you work cooperatively on the scheduling as opposed to preemptive scheduling (compare time.sleep vs. asyncio.sleep).
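His time.sleep vs. asyncio.sleep point is easy to try out. A minimal example of my own: two tasks wait cooperatively and finish after about one second in total, whereas blocking sleeps would take two:

```python
import asyncio

async def worker(name: str, delay: float) -> None:
    await asyncio.sleep(delay)  # cooperative: yields to other tasks
    print(f"{name} finished after {delay}s")

async def main() -> None:
    await asyncio.gather(worker("a", 1.0), worker("b", 1.0))

asyncio.run(main())  # both finish after ~1s total, not 2s
```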

Aleksander was talking on the Tizen security model. I knew that they were using SMACK, but they also use a classic DAC system by simply separating users. Cynara is the new kid on the block. It is a userspace privilege checker. A service, like GPS, if accessed via some form of RPC, sends the credentials it received from the client to Cynara, which then decides whether access is allowed or not. So it seems to be an “inside out” broker. Instead of having something like a reference monitor which dispatches requests to a server only if you are allowed to, the server needs to check itself; a sketch of that pattern follows below. He went on talking about how applications integrate with Cynara, like where to store files and how to label them. The credentials which are passed around are a SMACK label to identify the application, the user ID which runs the application, and a privilege which represents the requested privilege. I suppose that the Cynara system only makes sense once you can safely identify an application which, I think, you can only do properly when you are using something like SMACK to assign labels during installation.
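To make that pattern concrete, here is a hypothetical sketch; the labels, privilege string, and the cynara_check stand-in are illustrative, not the real Cynara API:

```python
# Toy policy: (smack_label, uid, privilege) -> allowed?
POLICY = {("User::App::gps-client", "1000",
           "http://tizen.org/privilege/location"): True}

def cynara_check(smack_label: str, uid: str, privilege: str) -> bool:
    return POLICY.get((smack_label, uid, privilege), False)

def handle_gps_request(peer_smack_label: str, peer_uid: str):
    # The service itself forwards the peer credentials it got from
    # the RPC layer and only answers if the checker approves.
    if not cynara_check(peer_smack_label, peer_uid,
                        "http://tizen.org/privilege/location"):
        raise PermissionError("location privilege denied")
    return (52.52, 13.40)  # made-up coordinates
```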

Daniel was then talking about his USBGuard project. It’s basically a firewall for USB devices. I found that particularly interesting, because I have a history with USB security and I do know that random USB devices pose a problem. We are also working on integrating USB blocking capabilities with GNOME, so I was keen on meeting Daniel. He presented his program, what it does, and how to use it. I think it’s a good initiative and we should certainly continue exploring the realm of blocking USB devices. It’s unfortunate, though, that he has made some weird technological choices like using C++ or a weird IPC system. If it were using D-Bus, then we could make use of it easily :-/ The talk was actually followed by Krzysztof, whom I reported on last time, who built USB devices in software. As I always wanted to do that, I approached him and complained about my problems doing so ;-)

Chris from wolfSSL explained how they do testing for their TLS implementation. wolfSSL is 10 years old and secures over 1 billion endpoints, he said. Most interestingly, they have interoperability testing with other TLS implementations. He said they want to be the most well tested TLS library available, which I think is a very good goal! He was a very good speaker and I really enjoyed learning about their different testing strategies.

I didn’t really follow what Pam was saying about implicit trademark and patent licenses. But it seems to be an open question whether patents and trademarks are treated similarly when it comes to granting someone the right to use “the software”. I didn’t really understand why it would be a question, though, because I haven’t heard of a case in which it was argued that the right to the name of the software had also been transferred. But then again, I am not a lawyer and I don’t want to become one…

Jeremiah reported on safety-critical FOSS. Safety critical, he said, means functional safety: your device must limp back home at a lower gear if anything goes wrong. He mentioned several standards like IEC 61508, ISO 26262, and others. Some of these standards define “Safety Integrity Levels” which classify how likely risks are. Some GNU/Linux systems have gone through that certification process, he said. But I didn’t really understand what copylefted software has to do with it. The automotive industry seems to be an entirely different animal…

If you’ve missed this year’s FOSDEM, you may want to have a look at the recordings. It’s not VoCCC type quality like with the CCCongress, but still good. Also, you can look forward to next year’s FOSDEM! Brussels is nice, although they could improve the weather ;-) See you next year!

Talking on Searchable Encryption at 32C3 in Hamburg, Germany

This year again, I attended the Chaos Communication Congress. It’s a fabulous event. It has become much more popular than a couple of years ago. In fact, it’s so popular that the tickets (probably ~12000, certainly over 9000) were sold out a week or so after the sales opened. It’s gotten huge.

This year has been different from the years before. Not only could you use your educational leave for visiting the CCCongress, but I was also giving a talk. Together with my colleague Christian Forler, we presented on Searchable Encryption. We had the first slot on the last day. I think that’s pretty much the worst slot in the schedule you could get ;-) Not only because the people are in Zombie mode, but also because you have received all those viruses and bacteria yourself. I was lucky enough, but my colleague was indeed sick on day 4. It’s a common thing to be sick after the CCCongress :-/ Anyway, we have hopefully entertained the crowd with what I consider easy slides, compared to the usual™ Crypto talk. We used a lot of imagery and tried to allude to funny stuff. I hope people enjoyed it. If you have seen it, don’t forget to leave feedback! It was hard to decide on the appropriate technical level for the almost 1800 people. The feedback we’ve received so far is mixed, so I guess we’ve hit a good spot. The CCCongress was amazingly organised for speakers. They really did care for us and made sure everything was right. So everything was perfect except for pdfpc which crashed whenever it was meant to display a certain slide… I used Evince then and it worked…

The days at the CCCongress were intense, as you might be able to tell from the Fahrplan. It generally started at about 12:00 and ended at about 01:00. And that’s only the talks. You can’t avoid bumping into VIPs (very interesting people) and thus spend time in the hallway. And then you have these amazing parties. This year, they had motor-homes and lasers in the dance hall (last year it was a water cannon…). Very crazy atmosphere. It’s highly recommended to spend a night there.

Anyway, day 1 started for me with the keynote by Fatuma Musa Afrah. The speaker stretched her time a little, I felt. At the beginning I couldn’t really grasp what her topic was or what she wanted to tell us. She repeatedly told us that we had to “kill the time together”, which killed my sympathy to some extent. The conference’s motto was Gated Communities. She encouraged us to find ways to open these gates by educating people and helping them. She said that we have to respect each other irrespective of skin colour or social status. It was only later that she revealed being a refugee who came to Germany. Although she told us that it’s “newcomers”, not “refugees”. In fact, she grew up in Kenya where she herself was a newcomer; she fled to Kenya, so she fled twice. She told us stories about her arriving and living in Germany. I presume she is breaking open the gates which separate the communities she’s living in, but that’s speculation. In a sense she was connecting her refugee community with our hacker community. So the keynote was interesting from that perspective.

Joanna Rutkowska then talked about trustworthy laptops. The basic idea is to have no state on the laptop itself, i.e. no place where malware could be injected. The state should instead be kept on a personal storage medium, like an SD card or a pen drive. She said that laptops are inherently not trustworthy. Trust, she said, can be broken up into Trusted, Secure, and Trustworthy. Secure is resistant to attacks. Trusted is something we, as a security community, do not want to have, like a trusted third party. Trustworthy, she said, is something different, like the Intel Management Engine, which might be resistant to attacks, yet is not acting in the interest of the user. Application level security is meaningless, she said, when we cannot trust the operating system, because it is the trusted part. If it is compromised, then all other efforts are useless. Her project, Qubes OS, attempts to reduce the Trusted Computing Base. What the operating system is to the application, the hardware is to the operating system. The hardware, she said, has been assumed to be trusted. A single malicious peripheral, like a malicious wifi module, can compromise the whole personal computer, the whole digital life, she said. She was referring to problems with Intel x86 platforms. Present Intel processors integrate everything on the main chip. The motherboard has been made more or less only a holder for the CPU and the memory. The construction of those big chips is completely opaque. We have no control over what is inside. And we cannot look inside, even if we wanted to. The firmware is loaded during boot from a discrete element on the mainboard. We cannot, however, verify what firmware really is on the chip. One question is how to enforce read-only-ness of the system or how to upload your own firmware. For many years, she and others believed that TPM, TXT, or UEFI secure boot could solve that problem. But all of them have been shown to fail horribly, she said. Unfortunately, she didn’t mention how so. So as of today, there is no such thing as secure boot. Inside the processor is a management engine which is special, because it is the perfect entry point for backdooring and zombification of personal computing. By zombification, she means that the involvement of the apps (vs. OS vs. hardware) is decreasing heavily, making the hardware have much more of a say. She said that Intel wants to make the hardware fully control your computing by having much more logic in the management engine. The ME is, in a sense, a gated community, because you cannot, whatsoever, inspect it, tinker with it, or otherwise get in touch with it. She said that the war is lost on x86. Even if we didn’t have the management engine. Again, she didn’t say why. Her proposal is to move all those moving firmware parts out to a trusted storage. It was an interesting perspective on what I think is a “simple” Free Software problem. Because we allow proprietary software, we now have the problem of not seeing what is loaded into the hardware. With Free Software we’d still have backdoors in hardware, but assuming that most functionality is encoded in firmware, we could see and modify the existing firmware and build, run, and share our “better” firmware.

Ilja van Sprundel talked about Windows driver security, or rather their attack surface. I’m not necessarily interested in Windows per se, but getting some lower level knowledge sounded intriguing. He gave a rather high level overview of what to do and what not to do when doing driver development for Windows. The details, he said, matter. For example whether the IOManager probes a buffer in *that* instance. The Windows kernel is made of several managers, he said. The Windows Driver Model (WDM) is the standard model for how drivers are written. The IO Manager proxies requests from user space to WDM drivers. It may or may not validate arguments. Another central piece in the architecture are IO Request Packets (IRPs). They are delivered from the IO Manager to the driver and contain all the necessary information for the operation in question. He went through the architecture really fast and it was hard for a kernel newbie like me to follow all the concepts he mentioned. Interestingly though, the IO Manager seems to also care about transferring the correct amount of memory from userspace to kernel space (e.g. it makes sure data does not overflow) if you want it to, using METHOD_BUFFERED. But, as he said, most of the drivers use METHOD_NEITHER, which does not check anything at all and is the endless source of driver bugs. It seems as if KMDF is an alternative framework which makes it harder to have bugs. However, you seem to need to understand the old framework in order to use the new one properly. He then went on to talk about the actual attack surface of drivers. The bugs are either privilege escalation, denial of service, or information leaks. He said that you could avoid the problem of integer overflows by using the intsafe library. But you have to use it properly! Most importantly, you need to check the return type and use the actual values you want to have been made safe. During creation of a device, a driver can either call IoCreateDeviceSecure with an SDDL string or use an INF file to ACL the device. That is, however, done either rarely or wrongly, he said. You need to think about who needs to have access to your device. When you work with the IOManager, you need to check whether Irp->MdlAddress is NULL, which can happen, he said, if it’s a zero sized buffer. Similarly, when using the safer METHOD_BUFFERED mentioned earlier, Irp->AssociatedIrp.SystemBuffer can also be NULL. So avoid having that false sense of security when using that safe API. Another area of bugs is the cancellation of IRPs. The userland can cancel requests, which apparently is not handled gracefully by drivers and leads to deadlocks, memory leaks, race conditions, double frees, and other classes of bugs. When dealing with data from userland, you are supposed to “probe” the memory, which is basically checking whether the pointers are valid and in the expected range. If you don’t do that, it’ll lead to you writing to arbitrary kernel memory. If you do validate the data from userspace, make sure you don’t fetch it again from user space assuming that it hasn’t changed. There might be a race between your check and your usage (TOCTOU). So better capture, validate, and use the data. The same applies when using MDLs. That, however, is more tricky, because you have a double mapping and you are using a kernel pointer. So it is very subtle. When you do memory allocation you can either use ExAllocatePool or ExAllocatePoolWithQuota. The latter throws an exception instead of returning NULL. Your exception or NULL pointer handling needs to be double checked, he said. It was a very technical talk on Windows which was way out of my comfort zone. I only understood a tiny fraction of what he was presenting. But I liked it for the new insight on Windows drivers and the fact that the same old classes of bugs have not died yet.
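The capture-validate-use advice is not Windows specific. A toy illustration of my own of the double-fetch TOCTOU problem, in Python rather than driver code:

```python
import threading
import time

shared = {"size": 16}  # stands in for a user-controlled buffer header

def flipper() -> None:
    # A concurrent "userland" writer flips the value back and forth.
    while True:
        shared["size"] = 16
        shared["size"] = 2**30

def vulnerable() -> bytearray:
    if shared["size"] <= 4096:            # check
        time.sleep(0)                     # window for the race
        return bytearray(shared["size"])  # use: may have changed!

def safe() -> bytearray:
    size = shared["size"]                 # capture once
    if size <= 4096:                      # validate the captured copy
        return bytearray(size)            # use the very same copy

threading.Thread(target=flipper, daemon=True).start()
```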

High up on my list of awaited talks was the talk on train systems by the SCADA Strangelove people. Railways, he said, are the biggest system built by mankind. Their main components are signals and switches. Old switches are operated manually by pure force. Modern switches are interlocked with signals such that the signals display forbidden entry when switches are set in certain positions. On tracks, he said, signals are transmitted over the actual track by supplying it with AC or DC. The locomotive picks up the signals and supplies various systems with them. The Eurostar, they said, has about seven security systems on board, among them an “RPS”, a Reactor Protection System, which alludes to nuclear trains… They said that lately the “Bahn Automatisierungssystem (SIBAS)” has been updated to use much more modern and less proprietary soft- and hardware such as VxWorks and x86 with ELF binaries as well as XML over HTTP or SS7. In the threat model they identified, they see several attack vectors. Among them is making someone plug a malicious USB device into controlling machines in some operation center. He showed pictures from supposedly real operation centers. The physical security, he said, is terrible, with close to no access control. In one photograph, he showed a screenshot from a documentary aired on TV, which showed credentials sticking on the screen… Even if the security is quite good, like good physical security and formally proven programs, it’s still humans who write the software, he said, so there will be bugs to be exploited. For example, he showed screenshots of when he typed “railway” into Shodan and the result included a good number of railway stations. Another vector is GSM-R. If you jam the train’s GSM-R connection, the train will simply stop. Also, you might be able to inject SIM toolkit malware. Via that vector, you might make the modem identify as, e.g., a keyboard and then penetrate further into the systems. Overall an entertaining talk, but the claims were a bit over the top. So no real train hacking just yet.

The talk on memory corruption by Mathias Payer started off by saying that software is unsafe and insecure. Low level languages trade type safety and memory safety for performance. A large set of legacy applications, he said, are prone to memory vulnerabilities. There are, he continued, too many bugs to find and fix manually. So we need a runtime system to ensure security. An invalid dereference or an out of bounds pointer is the core of memory unsafety problems. But according to the C language, he claimed, it’s only a violation if the invalid pointer is read, written to, or freed. So during runtime, there are tons and tons of dangling pointers, which is perfectly fine. With such a vulnerability, a control-flow attack could be executed. Several defenses exist: Data Execution Prevention prevents code from actually being executed. Address Space Layout Randomisation scrambles the memory locations of executable code which makes it harder to successfully exploit a vulnerability. Stack canaries are special values which are supposed to detect overflowing writes. Safe exception handlers ensure that exception code paths follow predefined patterns. DEP can only work together with ASLR, he said. If you broke ASLR, you could re-use existing code; as it turns out, people do break ASLR every now and then. Two new mechanisms are Stack Integrity and Control Flow Integrity. Stack Integrity enforces returning to the actual caller by having a shadow stack. He didn’t mention how that actually works, though. I suppose you obtain a more secret stack address somewhere and switch the stack pointer before returning to check whether the return address is still correct; a toy sketch of that idea is below. Control Flow Integrity builds a control flow graph during compilation and, for every control flow change, checks at run time whether the target address is allowed. Apparently, many CFI implementations exist (eleven were shown). He said they’ve measured those, and IFCC and Lockdown performed rather badly. To show how all of the protection mechanisms fail, he presented printf-oriented programming. He said that printf is Turing complete and presented a domain specific language. They have built a brainfuck interpreter with snprintf calls. Another rather technical talk by a good speaker. I remember that I was already impressed last year when he presented on these new defense mechanisms.
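To make my speculation about the shadow stack concrete, here is a toy model of the idea, entirely mine and not from the talk:

```python
shadow_stack = []  # in reality: a separate, protected memory region

def do_call(return_addr: int) -> None:
    shadow_stack.append(return_addr)  # every call records its return address

def do_return(return_addr: int) -> None:
    if shadow_stack.pop() != return_addr:
        raise RuntimeError("return address smashed: control-flow hijack")

do_call(0x400123)
do_return(0x400123)  # fine; 0x41414141 here would abort the program
```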

DJB and Tanja Lange started their “late night show” by bashing TLS as a “gigantic clusterfuck”. They were presenting on quantum computing and cryptography. They started by mentioning that the D-Wave quantum computer exists, but it’s not useful, he said. It doesn’t do the basic things and can only do limited computations. It can especially not perform Shor’s algorithm. So there’s no “Shor monster coming”. They recommended the Timeline of Quantum Computing as a good reference for the massive research effort going into quantum computing. If there was a general quantum computer, pretty much every public key scheme deployed on the Internet today would be broken. But symmetric schemes are under attack too, due to Grover’s algorithm which speeds up brute force attacks significantly. The solution could be physical crypto like using strong (physical) locks. But, he said, the assumptions of those systems are already broken. While Quantum Key Distribution is secure under certain assumptions, those assumptions are off, he said. Secure schemes that survive the quantum era were the topic of their talk. The first workshop on that topic happened in 2006 and efforts are still being made, e.g. with EU projects on the topic. The time it takes for a crypto scheme to gain significant traction has been long, so far. They gave ECC as an example. It was introduced in the 1980s, but it’s only now that it’s taking over the deployed crypto on the Internet. So the time it takes is long. They gave recommendations on what to do to have connections that are secure “for at least the next hundred years”. These include at least 256 bit keys for symmetric encryption; McEliece with binary Goppa codes n=6960 k=5413 t=119; and hash based signatures with, e.g., XMSS or SPHINCS-256. An efficient implementation of such a code based scheme is McBits, she said. All you need for the hash based signatures is a proper hash function. The stuff they recommend for the next 100 years, like the McEliece system, are things from the distant past, she said. He said that Post Quantum Cryptography will be the standard in a couple of years from now, so he urged the cryptographers in the audience to “get used to this stuff”.
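To see why “all you need is a proper hash function”, consider a toy Lamport one-time signature, the simplest hash-based scheme. This sketch of mine signs a single message once; schemes like XMSS or SPHINCS-256 build practical many-time signatures out of related ideas:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # One secret pair per digest bit; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(s) for s in pair] for pair in sk]
    return sk, pk

def bits(msg: bytes) -> list:
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes) -> list:
    # Reveal one secret of each pair, chosen by the message digest bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig: list) -> bool:
    return all(H(s) == pk[i][b] for (i, b), s in zip(enumerate(bits(msg)), sig))

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, b"hello"))
```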

Next on my list was Markus’ talk on Landesverrat, which is the incident of Netzpolitik.org being investigated for treason after revealing secret documents. He reported on the history of the case and how it came about that they were suspected of revealing secret documents. He said that one of their beliefs is to publish their sources, even the secret ones. They want their work to be critically reviewed and they believe that is only possible if the readers can inspect the sources. The documents which led to the criminal investigations were about the finances of the introduction of the XKeyscore software. Then, the president of the “state security” filed a case against them because of the revelation of secret documents. They wanted to publish the investigation files, but they couldn’t see them, because those were considered to be more secret than the documents they had already published… From now on, he said, they are prepared for the police raiding their offices, which I suppose is good standard preparation. They were lucky, he said, because their case fell into the regular summer low of news, which let the case become quite popular in the media. A few weeks earlier or later and it would have been much less popular due to the refugees or Greece. During the press coverage, they had a second battleground where they threw out a Russian television team who entered their offices without having called or otherwise introduced themselves… For the future, he wants to see changes in what is considered to be a state secret. He doesn’t want the government to decide what such a secret is. He also wants much more protection for whistleblowers. Freedom of the press should also hold for people who do not blog as their “occupation”, but as hobbyists.

Vincent Haupert was then talking on app-based TAN online banking methods. It’s a classic two factor method: Not only username and password, but also a TAN. These TAN methods have since evolved in various ways. He went on to explain the general online banking process: You log in with your credentials, you create a new wire transfer and are then asked to provide a TAN. While ChipTAN would solve many problems, he said, the banking industry seems to want their customers to be able to transfer money everywhere™. So you get to have two “apps” on your mobile computer: the banking app and a TAN app. However, malware in “official” app stores is a reality, he said. The Google Play Store cannot protect against malware, as a colleague of his demonstrated during his bachelor thesis. This could also be seen with the “Brain Test” app which roots your device and then loads malware. Anyway, they hijacked the connection from the banking app to modify the recipient of the issued wire transfer and the TAN being pushed to the device. They looked at the apps and found that they “protected” their app with “Promon Shield”. That seems to be a strong obfuscation framework. Their attack involved tricking the root and hook detection. For the root detection, they check the file system for certain binaries. He could simply change the filenames and was good to go. For the hooks (Xposed) it was pretty much the same, with the exception of a few filenames which needed more work. With these modifications they could also “hack” the newer version 1.0.7. Essentially, the biggest part of the problem is that the two factors are on one device. If the attacker hijacks that one device, then both factors are compromised and the second factor is worth nothing.
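The naive root detection he described is easy to picture; a hypothetical sketch, with the usual su locations as stand-ins, which renaming the binary trivially defeats:

```python
import os

SU_PATHS = [
    "/system/bin/su", "/system/xbin/su",  # common locations on
    "/sbin/su", "/system/sd/xbin/su",     # rooted Android devices
]

def device_looks_rooted() -> bool:
    return any(os.path.exists(p) for p in SU_PATHS)
```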

The talk by Christian Schaffner on Quantum Cryptography introduced the audience to quantum mechanics. He said that a qubit can be imagined as the direction of a polarised photon. If you make the direction of the photons either horizontal or vertical, you can imagine that as representing 0 or 1. He was showing an actual physical experiment with a laser pointer and polarisation filters, showing how the red dot of the laser pointer is either filtered or very visible. He also showed how actually measuring the polarisation changes the state of the photons! So yet another filter made the point in the back brighter. That was a bit weird, but that’s quantum mechanics. He showed a quantum random number generator based on that technology. One important concept is the no-cloning theorem, which states that you cannot make a perfect copy of a quantum bit. He also compared current and “post quantum” crypto systems against efficient classical attackers and efficient quantum attackers. AES, SHA, RSA (or discrete logs) will be broken by quantum attacks. Hash-based signatures, McEliece, and lattice-based cryptography he considered to be resistant against quantum based attacks. He also mentioned that Quantum Key Distribution systems will be secure even against an exhaustive attacker who applies brute force. QKD is based on the no-cloning theorem, so an eavesdropper cannot see the same bits as the communicating parties do. Finally, he asked how you could prove that you have been at a certain location, to avoid the pizza delivery problem (i.e. to be certain about the place of delivery).

Fefe was talking on privileges. He said that software will be vulnerable. Various techniques should be applied, such as simply fixing all the bugs (haha…) or making exploitation harder by applying ASLR or ROP protection. Another idea is to put the program in a straitjacket and withdraw privileges. That sounds a lot like containerisation. Firstly, you can drop your privileges from superuser down to the least privileges you need, then do privilege separation. Another technique is the admin confining the app in a jail instead of the app confining itself. Also, you can implement access control via a broker service by splitting up your process into, say, a left half which opens and reads files and a right half which processes data. When doing privilege separation, the idea is to split up the process into several separately running programs. Jailing is like firewall rules for syscalls which, he said, is impossible to get right for complex programs. He gave Firefox as an example of a program it is impossible to write a rule set for. The app confining itself is like a werewolf chaining itself to the wall before midnight, he said. You restrict yourself from opening files, creating sockets, or attaching yourself as a debugger to other processes. The broker service is probably like a reference monitor. He went on showing how old-school privilege dropping works. You could do it as easily as seteuid(getuid()), but that’s not enough, because there is the saved UID, so you need setresuid and must not forget to check the return code, because the call can fail if, for example, the target UID has already been running too many processes for its quota. He said that you should fail the build if your target platform does not provide setresuid. However, dropping privileges is more than setting your UID. It’s also about freeing resources you don’t necessarily need. A common approach to jailing your process is to have a fake filesystem with only the necessary files, so your process cannot ever access anything that it shouldn’t. On Linux, that would probably be chroot. However, you can escape it using fchdir. Also, if you mount /proc into the chroot, information about the host is exposed. So you need to do more work than just calling chroot. The BSDs, he said, have Securelevel, which is a kernel mode level that can only increase and which withdraws certain privileges. They also have jails, which are a chroot on steroids, he said. They leak some information due to the PIDs, though, he said.
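Python conveniently exposes the very calls he mentioned, so the old-school routine can be sketched directly. A minimal sketch, assuming the process starts as root; the target IDs are just examples:

```python
import os

def drop_privileges(uid: int, gid: int) -> None:
    os.setgroups([])             # drop supplementary groups first
    os.setresgid(gid, gid, gid)
    os.setresuid(uid, uid, uid)  # raises OSError on failure, e.g. when
                                 # the target UID exhausted its quota
    try:                         # paranoia: verify we cannot regain root
        os.setuid(0)
    except PermissionError:
        pass                     # good: privileges are really gone
    else:
        raise RuntimeError("privilege drop failed")

drop_privileges(uid=65534, gid=65534)  # e.g. nobody/nogroup
```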

The next talk was on Shellphish, an automatic exploitation framework. This is really fascinating stuff. It’s been used for various Capture the Flag contests, which are basically about hacking other teams’ software services. In fact, the presenters were coming from UCSB, which hosts the famous iCTF. They went from solving security challenges to developing a system which solves security challenges. From a vulnerable binary, they (automatically) develop an exploit and a patched binary which is immune to the exploit but preserves the functionality of the program. They automatically find vulnerabilities, generate patches, and test both the exploits and the patches. For the automated vulnerability component, they presented Angr. It has a symbolic execution engine looking for memory accesses outside allocated regions and unconstrained instruction pointers, i.e. a jump controlled by user input (JMP eax). They have written a paper for NDSS about “Augmenting Fuzzing Through Selective Symbolic Execution”. Angr is a Python library and they showed how to use it for identifying the overhyped Back to 28 vulnerability. Actually, there is too much state for a regular symbolic executor to find this problem. Angr does “veritesting”. He showed that his Angr script found the vulnerability after he had excluded, with a few lines of code, many paths of execution that don’t really generate new state. He didn’t show, though, what those lines of code were and how he determined that the excluded states were not adding any new information.
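For flavour, driving Angr looks roughly like this. Note that this uses today’s API (the talk predates it), and the binary name and addresses are made up:

```python
import angr

proj = angr.Project("./vulnerable_bin", auto_load_libs=False)
simgr = proj.factory.simulation_manager(proj.factory.entry_state())

# Explore until some state reaches the interesting address while
# avoiding a boring one, pruning paths along the way.
simgr.explore(find=0x400B00, avoid=0x400C50)

if simgr.found:
    print(simgr.found[0].posix.dumps(0))  # stdin bytes that reach it
```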

The next talk, given by the people behind Intelexit, was about convincing NSA agents to stop their work and serve democracy instead. They rented a van with big mottoes printed on it, like “Listen to your heart, not to private phone calls”. They also stuck the constitution on the “constitution protection office”, which then got torn apart. Another action they did was to fly over the Dagger Complex and to release flyers about leaving the secret services. They want to have a foundation helping secret service agents to leave their job or to blow the whistle. They also want an anonymous call service where agents can call to talk about their job. I recommend browsing their photos.

Another artsy talk was on a cheap Facebook army. Actually it was on Instagram followers. The presenter is an artist himself and he’d buy Instagram followers for fellow artists “to make them all equal”. He dislikes the fact that society seems to measure the value or quality of art in followers or likes on social media.

Around the CCCongress there were also other artsy installations, like one called “machine learning”.

It’s been a fabulous event. I really admire the people organising this event each and every year. Thank you so much and see you next year, hopefully.