OpenSuSE Conference 2017 Nuremberg, Germany

As in the last year, I was honoured to be invited to the OpenSuSE Conference in Nuremberg, Germany.

The event has grown and I felt a relaxed yet productive atmosphere when entering the venue. Just a few minutes after I arrived I hooked up with interesting people and had even more interesting discussions. It was very nice to get together with all the Free Software friends I have made over the last years. It was also pleasant to see the event becoming bigger and bigger. I take that as a sign that our community grows, although it might also just be a consolidation of events.

The organisation team provided everything from Brazilian BBQ to perfect weather :) The schedule hosted interesting sessions, including mine, of course ;-) I held a small workshop on signing OpenPGP keys and I made people use GNOME Keysign. ;-) It was successful in the sense that we were able to shake out a few bugs and make the application more robust. I also realised that networks are not very permissive nowadays. More precisely, the WiFi blocked mDNS traffic, preventing the apps from finding each other :( One design goal of the app was to not have to rely on an Internet connection. But if the networks prevent clients from talking to each other, then I think we need to go via the Internet in order to transfer files locally :/ Fortunately, we are working on an Internet transport. Stay tuned for further posts on this issue.
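To illustrate what the network blocked here: peers find each other by announcing a service via mDNS/DNS-SD on the local network. GNOME Keysign itself goes through Avahi, but conceptually it boils down to something like this sketch using the python-zeroconf package, with a made-up service type and payload:

import socket
from zeroconf import ServiceInfo, Zeroconf

# Announce a service on the local network via mDNS/DNS-SD. The
# "_keysign-demo._tcp" type and the properties are made up for
# illustration, and the ServiceInfo signature varies between
# python-zeroconf versions.
zc = Zeroconf()
info = ServiceInfo(
    "_keysign-demo._tcp.local.",
    "mykey._keysign-demo._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.2")],
    port=9001,
    properties={"status": "available"},
)
zc.register_service(info)
# The other side browses for the same service type. If the WiFi
# filters multicast, neither the announcement nor the browsing ever
# sees any traffic, and the peers cannot find each other.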

Oh, and we even had a GNOME stand full of amazing stuff.

Talking at GPN 2017 in Karlsruhe, Germany

Although the GPN is an annual event, I haven't managed to go very often. My last visit has been a while ago. It's a pity, because the event is very cute. The location is just amazing, which makes being there really fun. It's a museum hosting many things from our digital world. If you visit only one thing in Karlsruhe, go and visit it. In fact, we tried to organise a small excursion during GUADEC last year. Bloomberg also has an article about the event.

I could only stay one day, but I opened the conference with my talk on building a more secure operating system without sacrificing usability which, of course, was a GNOME related talk. The room was packed and people had to sit on the floor. Based on the feedback, I think people liked being told which challenges need to be solved in order to ship more secure systems to more people. You can find the slides here. In case you want to practise your German, you can watch the video here.

The schedule had a few other gems, too. My favourite was Loeschi talking about the upcoming Smart Meter Gateway situation in Germany and how it compares to the rest of Europe. The talk about QR Codes was also nicely done and explained quite well how they work. I hope to be able to attend the event more often :), especially because I wish the Free Software and the “hacker” people would mingle a bit more.

GNOME Keysign 0.9 released

Oh boy, it's been a while since we released GNOME Keysign 0.9. We changed quite a few things since I last reported, but the most visible change is the new set of widgets, which I already announced last time. Now it should be much easier to make changes to the GUI and experiment with designs.

Other changes include less visible things like the ability to run the program in a VM. We use gtksink now, which not only reduces the amount of code we have to maintain but also makes it easier for us to maintain compatibility with different display servers. Similarly, we don't use the v4l2src anymore but rather an autovideosrc, hoping that it will be more compatible with other platforms.
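For the curious, such a pipeline is pleasantly small from Python. A minimal sketch, not the app's actual code:

import gi
gi.require_version("Gst", "1.0")
gi.require_version("Gtk", "3.0")
from gi.repository import Gst, Gtk

Gst.init(None)
# autovideosrc picks a suitable camera source for the platform,
# gtksink hands us a ready-made GtkWidget, so we don't deal with
# window handles of a specific display server (X11, Wayland, ...).
pipeline = Gst.parse_launch("autovideosrc ! videoconvert ! gtksink name=sink")
widget = pipeline.get_by_name("sink").get_property("widget")

window = Gtk.Window(title="camera preview")
window.add(widget)
window.connect("destroy", Gtk.main_quit)
window.show_all()

pipeline.set_state(Gst.State.PLAYING)
Gtk.main()
pipeline.set_state(Gst.State.NULL)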

If you want to try the new version, the instructions in the README should get you going:

pip install --user 'git+https://github.com/GNOME-Keysign/gnome-keysign.git#egg=gnome-keysign'

Alternatively, you may try the Debian or openSuSE package. The Flatpak is still work in progress as we still need to figure out how to work with GnuPG running on the host.

The future will bring exciting changes, too. I plan for i18n support and more Python 3 compatibility.

Attended FOSDEM 2017

Unsurprisingly, the biggest European Free Software event happened in Brussels, Belgium again. I'm talking about FOSDEM, of course. It's a fixed entry in many people's calendars and always a good excuse to visit Brussels :-)

I'm a bit late to report on the talks I managed to see, and others have already covered some of them, but I still want to add some observations.

Richard Brown from SuSE talked about dinosaurs and resurrecting them (video). It was more about containerised apps than actual dinosaurs, though. The general theme was about repeating mistakes that we might or should have learned from in the past. He started by mentioning that the Windows DLL Hell was a nightmare. You needed to test your application with each and every version combination of every possible library. The DLLs did not necessarily have ABI compatibility, so it was very cumbersome to test. Windows 2000 brought Side-by-Side assembly, which is some form of DLL containerisation, he said. It uses a separate memory space for each app and its DLLs. Programs can ship “private” DLLs in their application directory, so you don't necessarily break other apps with a DLL carrying the same name. This approach, however, still has issues: security-wise, each app needs to update its libraries itself rather than having them updated centrally. So each app needed to build and ship its own updater, which is not trivial to do. Legally it's also interesting, he said, because bundling these DLLs may impose restrictions. Last but not least, you potentially have the same DLL on disk multiple times, because each app may ship it.

The contemporary software distribution model has its problems, too, he said. Compatibility with various distros is an issue, because each distro is slightly different. Each distribution also has its own pace of change, which may be incompatible with the application in question; e.g. a distro may decide to ship an older version because they have tested it more. Different distributions have different libraries and versions thereof. Also, each distribution has a different toolset for packaging applications for its environment. Application developers, however, don't want to care about these details.

Containerised applications solve these issues. Maybe. He mentioned Flatpak, Snappy, and AppImage. The latter is the oldest technology, dating all the way back to 2003. The solutions have in common that they bundle the app and run it in some kind of container or sandbox. By his criteria, the compatibility issue is solved, because the libraries are in the bundle. Portability is solved, because all dependencies are shipped in the bundle. And the pace of change is up to the app developer.

The containerisations, though, make assumptions about a common standard base provided by the distributions. According to him, such a common standard base does not exist in a practical sense, though. With containerised apps, he said, we might be repeating history. He explained that we might get a security nightmare, because each app needs to update its dependencies itself. It is also questionable whether all the libraries can actually be bundled and shipped. App developers are picking up responsibilities that distros used to have. You still have to test everything on each distro just to be sure that your base dependencies still work correctly, he said. He sees distributions as part of the solution to these problems. He thinks that a rolling release might solve the issues we're trying to solve with containerised apps. A rolling release can ship new releases of applications very quickly. The distribution still uses its tools for the common problems like maintenance, security, and legal stuff.

In a lightning talk, David talked about “practical TPM 2.0 usage”. He showed how to generate a signing key, sign a document with it, and verify the signature. He said that Microsoft mandated TPM 2.0 for Windows 10 Mobile and that it is a cryptographic processor rather than an accelerator. TPM 2.0 is different from TPM 1.2 in various ways, he said. For example, the 2.0 can do ECC (P256 and BN256) and SHA-256. But it's also “algorithm agile”, which means that you can add algorithms without having to change the specification. He sees three main usages: platform integrity like secure boot and trusted boot, disk encryption where the TPM stores and controls access to the key, and Digital Restriction Management by verifying code signatures. In order to use the TPM you have two options, he said: IBM and Intel have each developed tools. IBM's stack doesn't have a “resource manager” as foreseen by the specification, i.e. something like a multiplexer. Intel does have such a resource manager and they are working on putting that into Linux. However, Intel has fewer tools, he said, although it wasn't entirely clear to me what he was referring to. He mentioned that his employer, Facebook, uses TPMs for platform attestation.

Hanno talked about security on the Linux desktop. He referred to the issues Chris Evans exposed a few weeks ago.
He wanted to make the audience angry, he said. But not at him, I suppose, because he considers himself to be the messenger only. The basic problem is an unfortunate agglomeration of bugs or behaviours. It starts with the browser automatically downloading files into the user's downloads folder, i.e. without asking the user. Then there is Tracker, which indexes files that you add to your home directory. Such as the downloads folder. And then there are buggy (read: vulnerable) implementations of file parsers.

He also referred to Carlos' comment that bugs are just bugs and that no problem had been found except, well, bugs. Hanno's point, as far as I could make it out, was that a project of the size of Tracker, especially with that number of dependencies that you don't control, cannot make sure that there won't be yet another bug that can be exploited. That's quite fatalistic, but probably not too far from reality. It's not just a Tracker issue, though, he said. KDE has Baloo and everybody wants to have thumbnails of the files in your folders. He reiterated that automatic downloads AND automatic indexing create a huge attack surface. And that the indexers support a vast variety of file formats by using many libraries of varying quality. While Tracker quickly adopted sandboxing, he said, KDE hasn't.

He mentioned other exploit mitigation techniques such as ASLR or CFI. With ASLR, he said, the idea is to load code and data at random addresses in memory. This mitigates exploits, because they cannot reliably target valid code in memory. At least that's the idea. You need to compile the code with -fpic and -pie, he said. Linux distributions have been slow in adopting ASLR, though. Ubuntu has introduced it with 16.10, Fedora with 23, and Debian is WIP. OpenSuSE has it for a few packages only. It should be the default, he said. Windows, on the other hand, has had it since Vista. They also explore and experiment with more modern mitigations like CFI. Yet another approach is to avoid the C language, because “[it] is full of memory corruptions”. Rust comes to mind as an alternative. GStreamer already supports plugins in Rust, he said. He concluded that fixing all these bugs, like Carlos seemed to be wanting, is very hard. Not least because GStreamer is very prone to memory corruption due to the amount of complicated formats it parses. He mentioned fuzzing as a viable strategy to shake out bugs, and he found many bugs within a few days. He suggested that we probably need to do more of that ourselves. I'm working on it. More to be posted separately.

The next talk was about testing TLS implementations. For the last year or so I have been investigating TLS issues myself and I was wishing for a TLS testing framework. Now I learned about an implementation. Hubert Kario introduced his “tlsfuzzer”, which is a bit of a misnomer, because it doesn't actually perform any fuzzing. He said that TLS is complex and that it has 326 official ciphersuites, 4 PKI cryptosystems, 16 signature-hash pairs, and many more countable things that make the test matrix grow fast. There is a lot of state to be maintained, he said. He presented his tool, which takes care of the TLS specifics but allows you to define your own payloads and modifications to them. For example, with a few lines of code you can define a client that opens a TLS connection and uses a GCM ciphersuite for collecting the nonces. He claims to have found more than 20 issues in NSS, GnuTLS, and OpenSSL. I'm curious to play around with it and maybe hook it up with Scapy's fuzzing facilities.
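From what I understood, a conversation script looks roughly like the following. This is a sketch from my reading of the tlsfuzzer examples, so treat the exact imports and class names as assumptions:

# A tlsfuzzer-style conversation: a full TLS handshake using a GCM
# ciphersuite. Each node describes either a message we send or one we
# expect to receive; names may differ between tlsfuzzer versions.
from tlslite.constants import CipherSuite
from tlsfuzzer.messages import Connect, ClientHelloGenerator, \
    ClientKeyExchangeGenerator, ChangeCipherSpecGenerator, FinishedGenerator
from tlsfuzzer.expect import ExpectServerHello, ExpectCertificate, \
    ExpectServerHelloDone, ExpectChangeCipherSpec, ExpectFinished
from tlsfuzzer.runner import Runner

conversation = Connect("localhost", 4433)
node = conversation
ciphers = [CipherSuite.TLS_RSA_WITH_AES_128_GCM_SHA256]
node = node.add_child(ClientHelloGenerator(ciphers))
node = node.add_child(ExpectServerHello())
node = node.add_child(ExpectCertificate())
node = node.add_child(ExpectServerHelloDone())
node = node.add_child(ClientKeyExchangeGenerator())
node = node.add_child(ChangeCipherSpecGenerator())
node = node.add_child(FinishedGenerator())
node = node.add_child(ExpectChangeCipherSpec())
node = node.add_child(ExpectFinished())

Runner(conversation).run()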

Another TLS-related talk was given by Fridolin, who showed us a TLS Linux kernel module implementation. The advantages are manifold, he said. Obviously, establishing the connection should be cheaper in terms of computation, because the context does not need to be switched so often. Others are already using a kernel implementation of TLS, he said. He mentioned that Solaris has a kssl socket and that Netflix uses a modified sendfile() for TLS on BSD. His implementation has been evaluated by Facebook, he said. The implementation leaves the handshake to user space and cares about the symmetric encryption only.

Compared to other FOSDEMs, I was able to actually see a few talks, although I was impressed by the number of people I randomly bumped into and who kept me from attending more talks ;-) The size of FOSDEM is both the cause of and the solution to its problems. A good thing about it was that I could bribe somebody to cook up a Debian package for GNOME Keysign so that, hopefully, 200 people don't have to queue up and do weird things :o)

GNOME Keysign 0.8

I've just released GNOME Keysign 0.8. It's an exciting step towards a more mature codebase with less cruft and pieces of code moved to places where they are more discoverable. To get the app, we have a tarball as usual, or an experimental Flatpak (see below). Also notice that the repository has changed. The new URL should be more discoverable and cause less confusion. I will take down the old URL soon. Also note that this release will not be compatible with older releases, so you cannot find older clients on the network.

One problem that existed was that when you selected a key and then pushed the “back” button, the UI would stall for an unpleasantly long time. The actual problem is Python's HTTPd implementation using select() with a relatively long interval instead of, say, doing things asynchronously. The interval is now shorter, which increases the number of times the polling loop is executed, but should make the UI more responsive. I wonder whether it makes sense to investigate hooking up the GLib Mainloop with Python's SocketServer…
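The mechanics, for the curious, with Python 3's standard library; a minimal sketch, not the app's actual code:

# Why shutting the server down can stall a UI: serve_forever() polls
# with select() at poll_interval, and shutdown() only returns once the
# polling loop notices the shutdown flag.
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

httpd = HTTPServer(("", 9001), SimpleHTTPRequestHandler)
thread = threading.Thread(
    # A shorter poll_interval makes shutdown() return more quickly,
    # at the cost of waking the polling loop more often.
    target=lambda: httpd.serve_forever(poll_interval=0.1))
thread.start()
# ... user presses "back" ...
httpd.shutdown()  # blocks for up to poll_interval
thread.join()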

Another fix went into the HTTP client side, which you could stall with a non-responding keyserver, i.e. when the HTTP request was simply not answered. Because the download is not done asynchronously as it should be, the UI waits for the completion of the download. The current mitigation is to let the HTTP request time out.

A new thing is a popup when an uncaught exception happens. It's copied and pasted from MyPaint and works by setting Python's sys.excepthook.
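The pattern itself is tiny. A minimal sketch of the idea with a GTK dialog; the actual code comes from MyPaint and differs in the details:

import sys
import traceback
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

def gtk_excepthook(etype, value, tb):
    # Show the traceback to the user instead of only printing it to a
    # terminal that nobody looks at.
    text = "".join(traceback.format_exception(etype, value, tb))
    dialog = Gtk.MessageDialog(
        message_type=Gtk.MessageType.ERROR,
        buttons=Gtk.ButtonsType.CLOSE,
        text="An unexpected error occurred")
    dialog.format_secondary_text(text)
    dialog.run()
    dialog.destroy()
    sys.__excepthook__(etype, value, tb)  # keep the console traceback

sys.excepthook = gtk_excepthook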

You can also now switch the screen on which the fullscreen barcode is being shown. Once you have selected a key, you get the barcode displayed. If you click it, it will cover your whole screen. If you are hooked up to a projector, you might want to make sure that the barcode is shown on the bigger screen. Now you can press the left or right key to “move” the barcode. I needed to work around a bug in GTK which seems to prevent gtk_window_fullscreen_on_monitor() from working.
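Conceptually, the key handler looks like the sketch below, written against the intended API rather than the workaround; the current_monitor attribute is made up for bookkeeping:

# Move a fullscreen window between monitors with the arrow keys.
# A conceptual sketch; the app needs a workaround for the GTK bug
# mentioned above.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk, Gdk

def on_key_press(window, event):
    screen = window.get_screen()
    n_monitors = screen.get_n_monitors()
    current = getattr(window, "current_monitor", 0)
    if event.keyval == Gdk.KEY_Right:
        current = (current + 1) % n_monitors
    elif event.keyval == Gdk.KEY_Left:
        current = (current - 1) % n_monitors
    else:
        return False  # not our key; let other handlers run
    window.current_monitor = current
    window.fullscreen_on_monitor(screen, current)
    return True

window = Gtk.Window()
window.connect("key-press-event", on_key_press)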

Finally, a new GPG abstraction consolidates all the required functionality into one module rather than having it spread around various modules. I named it “gpgmh” for “gpg made hard”, which is a pun on “gpgme”, “gpg made easy”. The new module will also allow using the real™ gpg module instead of the gpg executable wrapper provided by monkeysign. We cannot, however, switch to the library just yet, because it needs gpgme 1.8 which is too recent for current distros (well, Debian and Ubuntu). So we have to wait until we can depend on it.

If you want to try the application, you can now get the Flatpak from here. It should be possible to install the app with a command like flatpak --user install --from http://muelli.cryptobitch.de/tmp/2017-01-29-keysign.flatpakref. You can also grab the bundle if you want. Please note that the Flatpak is very experimental. It would be surprising if anything but showing the UI actually worked. There are several issues we still need to work out. One is sending an email from within the sandbox and the other is re-using an existing gpg agent from the existing user session inside the sandbox. Gpg is behaving a bit weirdly there. Just having the agent's socket available inside the sandbox does not seem to be enough to make it work. We need to investigate what's going on there.

The future brings other exciting changes, too. We have a new UI in preparation which should be much more appealing. Here is what it will look like:

GNOME Keysign 0.7

I keep forgetting to blog about the progress we're making with GNOME Keysign. Since I last reported, several cool new developments have happened. This 0.7 release fixes a few bugs and should increase compatibility with recent gpg versions.

The most noticeable change is probably a message when you don't have a private key. I tried to create something clickable so that the user would be presented with, say, Seahorse showing the relevant widgets that allow the user to quickly generate an OpenPGP key. But we currently don't seem to be able to do that. It's probably worth filing a bug against Seahorse.

You may also notice that the “Next” or “Back” button is now made sensitive according to whether you are at the ends of the notebook. That is a minor improvement in the UI.

In general, we should now be more Python 3 compatible, as Python-2-only code has been removed from various modules.

Another change is a hopefully more efficient barcode rendering. Instead of using mixed-case characters, the newer version tries to use the QR alphanumeric mode, which needs about 5.5 bits per character rather than 8. The barcode-reading side should also save some CPU cycles by activating zbar's cache.
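The trick is that the QR alphanumeric mode only knows 45 characters (digits, upper-case letters, and a few symbols), so upper-casing the payload lets the encoder pick the denser mode. A sketch with the python qrcode package; the OPENPGP4FPR prefix is an assumption about the payload format:

# QR alphanumeric mode covers only 0-9, A-Z and " $%*+-./:", so an
# upper-cased fingerprint fits and costs ~5.5 bits per character
# instead of 8. The OPENPGP4FPR prefix is an assumption here.
import qrcode

fingerprint = "0123456789ABCDEF0123456789ABCDEF01234567"
img = qrcode.make("OPENPGP4FPR:" + fingerprint.upper())
img.save("fingerprint.png")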

Talking at Def.camp 2016 in Bucharest, Romania

Just at the beginning of this month I was invited to go to Bucharest, Romania, to give a talk on GNOME at this year's def.camp. The conference seems to be an established event in the Romanian security community and was organised quite well. As I said in my talk, I was happy to be there to tell those people about Free Software. I saw many people running around with their proprietary systems. It seems that certain parts of the security community do not believe that the security of a system greatly increases when it's based on Free Software. In fairness, the event seemed to be a bit on the suit-and-tie side, where Windows is probably much more common than people want.

Andrei Avădănei opened the conference by saying how happy he was that, even at that unholy hour (09:00 in the morning…), he counted 1100 people from 30 countries, and he expected that number to grow over the following hours. It didn't feel that big, but the three halls were quite large indeed. One of those halls was the “hacking village”, in which participants could practise real-life “problem solving skills”. The hacking village was more of an expo where vendors had their booths, but it also offered some interesting security challenges. My favourite booth was the Virtual Reality demo. Someone brought an HTC VR system and people could play a simple game. I've tried an Oculus Rift before, on which I rode a roller coaster. With the HTC system, I also had some input methods which really enhanced the experience. Very immersive.

Anyway, Andrei mentioned how happy he was to have the biggest security event in Romania be very grassroots and community driven. Unfortunately, he then let a representative of Orange, the main sponsor, talk. Of course, you cannot run a big event like that without enough financial backing. But giving the main stage, the prime opening spot, to the main sponsor does not leave the impression that they are community driven… I expected the first talk after the opening to set the theme for the conference. In this case, it was a commercial. That doesn't actually fit the conference too badly, because out of the 32 talks I counted 13 (or 40%) being delivered by sponsors. By sponsors I mean all companies listed on the homepage for their support. It may very well be that I am mistaking grassroots supporters for commercial sponsors.

The Orange CTO mentioned that connectivity is the new electricity which shapes countries and communities. For them, a telco, ensuring connectivity means maintaining security, he said. The Internet of connected devices (IoT) is growing exponentially and so are the threats. Orange has to invest in order to maintain security for its clients. And they do, it seems. He showed a fancy-looking “threat map” which showed attacks in real time. Probably a Snort (or whatever IDS is currently en vogue) with a map showing arrows from Geo-IP locations pointing towards Romania.

Next up was Jason Street, who talked about how he failed at doing his job. He was a blue-team security guy, he said, and worked for a bank as information security officer. He was seen by the people as the bad guy making their lives dreadful. That was bad, he said, because he didn't teach the people the values and usefulness of information security. Instead, he taught them that they had better not get to meet him. The better approach, he said, is trying to be part of a solution instead of looking for problems. Empower the employees in what information security is doing or trying to do. It was a very entertaining presentation given by a very engaged speaker. I couldn't get so much from the content, though.

Vlad from Orange talked about their challenges in providing an open, easy-to-use, and yet secure WiFi infrastructure. He talked about the user expectations and the business requirements. Users expect to be able to just connect without much hassle. The business side wants to identify the user and authorise usage. It was mainly on a high level, except for a few walkthroughs of authentication protocols. He mentioned EAP-SIM and EAP-AKA as more seamless authentication protocols compared to, say, a captive Web portal. I didn't know that it's possible to use the perfectly valid shared secret in your SIM for authentication. It makes perfect sense. Even more so for a telco such as Orange.

Mihai from Bitdefender talked about browser instrumentation for exploit analysis. That means, as I found out after the talk, harnessing the browser's internals to analyse malicious payloads. He showed how your browser (well… Internet Explorer with Flash) is exploited nowadays. He ran a “Cerber” demo of exploiting an Internet Explorer with some exploit kit. He showed Fiddler and Process Explorer, which displayed the HTTP traffic and the spawned processes. After visiting a simple Web page, the malicious payload was delivered, exploited the IE, and finally crashed it. The traffic in Fiddler revealed that the malware was delivered via a crafted Flash program. He used a Flash decompiler to look at the files. But he didn't really find the exploit itself, probably because of some obfuscation. So what is the actual exploit? In order to answer that properly, you need to inspect the memory during runtime, he said. That's where browser instrumentation comes into play. I think he interposed several functions, such as document.write, eval, object parameters, Flash's LoadBytes, etc. to analyse what goes in and out. All that information was then saved to disk in separate files, i.e. everything that went to document.write was written to c:\share\document.write, everything that Flash's LoadBytes took was written to c:\shared\loadbytes. He showed another demo with the Sundown exploit delivery framework, which successfully exploited his browser. He then showed the file system containing the above-mentioned information, which made it easier to spot the actual exploit and shellcode. To prevent such exploits, he recommended using Windows 10 and browsers other than Internet Explorer. Also, he recommended using AdBlock to stop “malvertising”. That is in line with what I recommended several moons ago when analysing embedded JavaScripts being vulnerable to DOM-based XSS. The method is also very similar to what I used back in the day when hacking on Chromium and V8, so I found the presentation quite good. Except for the delivery :-/ The speaker was often looking at his slides with his back to the audience, and the audio wasn't really good. I respect him for having shown multiple demos with virtual machine snapshots. I wouldn't have done it, because demos usually fail! ;-)

Inbar Raz talked about Tinder bots. He said he was surprised to find so many “matches” while he was in Sweden. He quickly noticed that he was being chatted up by bots, though, because he got sent the very same message from different profiles. These profiles also didn't necessarily make sense. For example, the name and the age shown on the Tinder profile did not match the linked Instagram or Facebook profiles. The messages he received quickly included a link to a dodgy Web site. When asking whois about the ownership, he found many more shady domains being used for dragging people to porn sites. The technical details weren't overly elaborate, but the talk was quite entertaining.

Raul Alvarez talked about reverse engineering polymorphic ransomware. I think he mentioned those Locky-type pieces of malware which lock your computer or files. Now you might want to know how that malware actually works. He mentioned OllyDbg, Immunity Debugger, and x64dbg as tools to use for reverse engineering your samples. He said that malware typically includes an unpacker which you need to survive first before you're able to see the actual malware. He mentioned on-demand polymorphic functions which are being called during the unpacking stage. I guess the unpacker decrypts or uncompresses to different bytes every time it's run. The randomness comes from the RDTSC call, he said. The way I understand that mechanism, the unpacker only modifies a few bytes at a time and potentially modifies irrelevant bytes. Imagine code that jumps over a few bytes. These bytes could be anything, because they are never used, let alone executed. But I'm not sure whether this is indeed the gist of what he described in a rather complicated fashion. His recommendation for dealing with metamorphic code is to catch it right when it has finished decrypting the payload. I think everybody wishes to be able to do that indeed… He presented a general method for getting rid of malware once it has hit you: start in safe mode and remove suspicious registry entries for the “run” key. That might not be interesting to Windows people, but now I, being very ignorant about Windows, have learned something :-)

Chris went on to talk about securing a mobile cryptocoin wallet. If you ask me, he really meant how to deal with the limitations of the platform of his choice, the iPhone. He said that sometimes it is very hard to navigate the solution space, because businesses are not necessarily compatible with blockchains. He explained some currencies like Bitcoin, Stellar, Ripple, Zcash, or Ethereum, the latter being much more flexible in also encoding contracts like “in the event of X, transfer Y amount of money to account Z”. Financial institutions want to keep their ledgers private, but blockchains were designed to run in public, he said. In addition, trust between financial institutions is low. Bitcoin is hard to use, he said, because the cryptography itself is hard to understand and to use. A balance has to be struck between usability and security. Secrets, he said, need to be kept secret. I guess he means that nobody, not even the user, may access the secret an application needs. I fundamentally oppose that. I agree that secrets need to be kept as securely as possible. But secrets must not be known by anyone else but the users who are supposed to benefit from them. If some other entity controls my secret, I am not really in control. Anyway, he looked at existing Bitcoin wallet applications: Bither and Breadwallet. He thinks that the state of the art can be improved if you are willing to break the existing protocol. More precisely, he wants to leverage the “security hardware” present in current mobile devices, like biometric sensors or “enclaves” in modern CPUs, to perform the operations based on a secret unextractably stored in hardware. With such an enclave, he wants to generate a key there and use it to sign data without the key ever leaving the enclave. You need to change the protocol, he said, because Apple's enclave uses secp256r1, but Bitcoin uses secp256k1.


My own talk went reasonably well, I think. I am not super happy, but happy enough. I have realised a few times now that I left out things I wanted to mention or could have explained better. Then again, being perfect would be boring, so better leave some room for improvement ;-) I talked about how I think GNOME is a good vendor of security software. Its focus on user experience is its big advantage. The system should make informed decisions as much as possible and try to leave the user out as much as possible. Security should be an inherent feature, not something that you need to actively care about. I expected a more extreme reaction from the security-focused audience, but it seemed people mostly agreed. In my mind, “these security people” equate security with maximum control placed in users' hands, which has to manifest itself in being able to control each and every aspect of a solution. That view is not compatible with trying to leave the user out of the security equation. It may be that I am doing “these security people” wrong. Or that they have changed. Or simply that the audience was not composed of the people I thought they were. I was hoping for developers creating security software, and I mentioned that GNOME libraries would serve them well for their tasks. Let's see whether anyone actually takes my word for it and complains to me ;-)

Matt Suiche followed “the money of security companies, IPOs, and M&A”. In 2016, he said, the situation is not very different from the 90s: software still has bugs, bad configuration is still a problem, default passwords are still being used… The number of newly founded infosec companies reported by Crunchbase has risen a lot, he said. If you multiply that number with dollars, you can see 40 billion USD being raised since 1998. What's different nowadays, according to him, is that people in infosec are now more business oriented than technically oriented. We have more “cyber” now. He referred to buzzwords being spread. Also, we have bug bounty programmes luring people into reporting vulnerabilities. JP Morgan, for example, is spending half a billion USD on cyber security, he said. Interestingly, he showed that the number of vulnerabilities, i.e. RCE CVEs, has increased, but the number of actual exploitations within the first 30 days after a patch has decreased. He concluded that Microsoft got more efficient at mitigating vulnerabilities. I think you can also conclude other things, like that people care less about exploitation or that detection of exploitation has gotten worse. He said that the cost of an exploit has increased. It wasn't long ago that you could cook up an exploit within two weeks. Now you need several people for at least three months. It was a well-made talk, but a bit too fluffy for my taste.

Stefan and David from Kaspersky talked off the record (i.e. without recordings) about “real-world lessons about spies every security researcher should know”. They have been around the industry for more than a decade and have started to notice patterns, they said. Patterns of weird things that happen and might not be easy to explain at first. It all begins with the realisation that we live in a world, whether we want it or not, in which we have certain control over the success of espionage attacks. Right now, people reverse engineer malware, which means that other people's operations are being disrupted. In fact, they claimed that they reverse engineer and identify the world's most advanced persistent threats, like Duqu, Flame, Hellsing, and many others, and that their company is getting better and better at identifying other people's operations. For the first time in history, they said, we as geeks have an influence on espionage. That makes some entities not very happy and they let certain people visit you. These people come in various types. The profile of a typical government employee is that they are very open and blunt about their desires. Mostly, they employ patriotism to persuade you. Another type is the impersonator, they said. That actor is not perfectly honest with you. One of them gave an example of meeting another person who introduced himself with the very same name as his own. It got strange, he said, when he met that person on a different continent a few months later and was offered a highly paid training. Supposedly only to build up a relationship. These people have enough budget to get close to you, they said. Another type of attacker is the “Banya Girl”. Geeks, they said, who have sat in front of the computer for most of their lives, are easily attracted by girls, who have an easier time getting into your room or brain. His example took place one year ago: he was analysing a satellite-exploiting malware later known as Turla when he met this super beautiful girl in the hotel who sat there every day when he went to the sauna. The day they released the results about Turla, they went for dinner together and she listened to a phone call he had with a reporter. The girl said something like “funny that you call it Turla. We call it Uroboros”. Then he got suspicious and asked her who “they” are. She came up with stories he found weird and seemed to be convinced that she knew more than she was willing to reveal. In general, they said, asking for a selfie or a Facebook friend request can be an effective countermeasure against someone spying on you. You might very well ask what to do when you think you're targeted. It's probably best to do nothing, they said. It's their game; you better not start playing it, even if you wake up in the middle of it. You can try to take care of your OpSec to protect against certain data being collected or exfiltrated. After all, people are being killed based on metadata. But you should also try not to get yourself into trouble. Sex and money are probably the oldest weapons people employ against you. They also encouraged people to define trust and boundaries for existing and upcoming relationships. If you become too paranoid, you've already lost the battle, they said. Keep going to conferences, keep meeting people, and don't close yourself off.

Those were two busy days in Bucharest. I'm happy to have gone and I hope I will have another chance to visit the lovely city :-) By that time, the links here in this post will probably be broken ;-) I recommended using the “archive” URLs, i.e. https://def.camp/archives/2016/, already now, but nobody is listening to me… I can also not link to the individual talks, because the schedule page is relatively click-intensive, i.e. not deep-linkable :-(

Press release: GUADEC starts tomorrow in Karlsruhe

(cross-posted from http://www.guadec.org, which you should follow!)

On 21.09.2016, the next GNOME release (3.22) will be published under the name “Karlsruhe”. GNOME releases are named after the most recent host city of GUADEC (the GNOME Users And Developers European Conference). During this conference, around 200 Free Software enthusiasts meet every year for workshops, talks, and working group sessions, and work on GNOME, the software solution for everyone. This year, the meeting takes place from 11.08. to 17.08. at the KIT in Karlsruhe.

The event is aimed at developers as well as users and is meant to generate knowledge and pass it on. We are delighted to host this year's GUADEC in Karlsruhe. For us, Karlsruhe stands for a strong scientific and technical background combined with that certain something in creativity and design. This connection between technology and design suits the GNOME project very well, which has not only been advancing new standards on the technical side for years but also breaks new ground in the field of user experience design.

GNOME, the foundation with the foot as its logo, stands for three core points:

Humanity

GNOME unites not only volunteer and paid programmers but also companies and charitable organisations. We make GNOME 3, a computer system that is easy to use, has been translated into over 190 languages, and places great value on accessibility. We develop in the open and everyone can take part in GNOME. Our communication channels are open to everybody, and the source code can be freely downloaded, modified, and shared.

Technologies

GNOME is a flexible and powerful platform for desktop and mobile applications. We develop toolkits such as GTK+ or Clutter, the web browser engine WebKitGTK+, multimedia libraries such as GStreamer, the D-Bus messaging system, and the Pango text rendering library. GNOME developers also go further and modify essential core infrastructure such as the Linux kernel, systemd, or Wayland.

Popularity

GNOME 3 is available through many well-known GNU/Linux distributions such as Debian, Fedora, OpenSuSE, or Ubuntu. GNOME technologies power many companies such as Endless Mobile, Amazon, TiVo, Nokia, TouchTune, Garmin, or TomTom. Many other companies use GNOME components free of charge under our software's Free licence.

We gladly invite you to a meeting or an interview during our conference. For questions, Tobias Mueller (tobi@guadec.org or +4915153778790) is happy to help.

More information about GUADEC and GNOME is available at: http://2016.guadec.org

GNOME Keysign 0.6

It's been a while since I reported on GNOME Keysign. The last few releases have been exciting, because they introduced nice features which I had long been waiting to get around to implementing.

So GNOME Keysign is an application to help you in the OpenPGP key signing process. That process will eventually require you to get hold of an authentic copy of the OpenPGP key. In GNOME Keysign, this is done by establishing a TCP connection between two machines and exchanging the data via that channel. You may very well ask how we ensure that the key is authentic. The answer, for now, is that we transmit the OpenPGP fingerprint via a secure channel and use the fingerprint to authenticate the key in question. This achieves at least the same security as conventional key signing, where you get hold of the key either via a keyserver or via a third party who organised the “key signing party”. Although, admittedly, in very rare cases you transfer data directly, via a USB pendrive or so. Of course, this creates a whole new massive attack surface. I'm curious to see technologies like wormhole deployed for this use case.
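The authentication step boils down to something like this rough sketch with a throw-away GnuPG home directory; the app's actual code goes through its GnuPG wrapper and differs:

# Authenticate downloaded key material against a fingerprint obtained
# via the secure channel (the barcode). A rough sketch; not the app's
# actual code.
import subprocess
import tempfile

def fingerprint_of(keydata):
    with tempfile.TemporaryDirectory() as home:
        subprocess.run(["gpg", "--homedir", home, "--import"],
                       input=keydata, check=True,
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out = subprocess.run(
            ["gpg", "--homedir", home, "--with-colons", "--fingerprint",
             "--list-keys"],
            check=True, stdout=subprocess.PIPE).stdout.decode()
        for line in out.splitlines():
            if line.startswith("fpr:"):
                return line.split(":")[9]  # 10th field is the fingerprint
    raise ValueError("no key found in the transferred data")

def is_authentic(keydata, scanned_fingerprint):
    return fingerprint_of(keydata) == scanned_fingerprint.replace(" ", "")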

The security of going to the Internet to download the key is questionable, because not only do you leak that you're intending to communicate with a certain person, but you also expose yourself to attacks like someone dropping revocation certificates or UIDs onto the key of your interest. While the former issue is tackled by not going to the Internet in the first place, the latter had not been dealt with. But these days are over now.

As of 0.5, GNOME Keysign also generates an HMAC of the data to be transferred and encodes that in the QR Code. The receiving end can then verify whether the downloaded data matches the expected value. I am confident that a new-generation hash function would serve the same purpose, but I'm not entirely sure how easy it is to get Keccak or SipHash into the users' hands. The HMAC, while being cryptographic overkill, should be fine, though. But the construction leaves a bad taste, especially because a known key is currently used to generate the HMAC. But HMAC is a mechanism built into Python. However, I expect to replace that with something more sensible.
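In essence, it works like the following sketch; the fixed key stands in for the “known key” weakness just mentioned and is made up for illustration:

# HMAC over the key data, embedded in the barcode next to the
# fingerprint. A sketch with Python's standard hmac/hashlib modules;
# the fixed key below mirrors the "known key" weakness and is made up.
import hashlib
import hmac

KNOWN_KEY = b"gnome-keysign-demo"  # hypothetical placeholder

def mac_of(keydata):
    return hmac.new(KNOWN_KEY, keydata, hashlib.sha256).hexdigest()

# The sender embeds mac_of(keydata) in the barcode; the receiver
# recomputes it over the downloaded data and compares both values.
def matches(keydata, mac_from_barcode):
    return hmac.compare_digest(mac_of(keydata), mac_from_barcode)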

In security, we had better imagine a strong attacker who is capable of executing attacks which we think are not necessarily easy or even possible to mount. If we can defend against such a strong attacker, then we may trust the system to resist weaker attacks, too. One such difficult attack, I think, is to inject just one frame while, at the same time, controlling the network. The attack could then make the victim scan a rogue barcode which delivers a rogue MAC which in turn validates the wrong data. Such an attack should not go unnoticed and, as of 0.5, GNOME Keysign will display the frame that contained the barcode.

This is what it looked like before:

2016-keysign-before

And now you can see the frame that got decoded. This is mainly because the GStreamer zbar element also provides the frame.

2016-keysign-after

Another interesting feature is the availability of a separate tool for producing signatures for a given key in a file. The scenario is that you may have received a key from your friend via a (trusted, haha) pendrive, a secure network connection (like wormhole), or any other means you consider sufficiently integrity-preserving. In order to sign that key, you can now execute something like python -m keysign.gnome-keysign-sign-key to run all the signing logic without the whole key transfer stuff. This is a bit experimental, though, and I am not yet happy about the state that program is in, so it's not directly exposed to users by being installed as an executable.

GNOME Keysign is available in OpenSuSE now. I don't know the exact details of how to make it work, but rumour has it that you can just do a zypper install gnome-keysign. While getting there, we identified a few issues along the way. For example, the GStreamer zbar element needs to be present. But that was a problem, because the zbar element was not built as the zbar library was not available. So that needed to get in first. Then we realised that the most modern OpenSuSE ships a very recent GnuPG which the currently used GnuPG library does not handle so nicely. That caused a few headaches. Also, the firewall seems to be an issue which needs to be dealt with. So much to code, so little time! ;-)

Talking at OpenSuSE Conference 2016 in Nuremberg

I was invited to this year’s OpenSuSE Conference in Nuremberg, Germany. I had been to that event two years ago in Dubrovnik which I enjoyed so much that I was eager to go again.


The venue was very easy to find due to posters hanging everywhere. The flow of information was good in general. That includes emails being sent every day which highlighted items in the schedule or restaurant recommendations for the evening.

I arrived just in time for my first show on GNOME Keysign. For better or worse, we were only very few people, so we could discuss matters deeply. It was good, because we found bugs and other user-facing issues that need to be resolved. The first and most obvious one was GnuPG 2.1 support. Although still experimental, OpenSuSE ships 2.1 by default. The wrapping library we're using to interact with GnuPG did not support calling the newer gpg, so we had to identify the issues, find a fix, and test. It eventually worked out :-)

I also had a talk called “Five years after 3.0” which, to my surprise, has been covered by reddit and omgubuntu. I was also surprised by the schedule, which only gave me 30 minutes instead of the usual 45 or 60. I was eventually politely reminded that I had significantly exceeded my time *blush*. We thus needed to move the discussions outside, which was fruitful. People at OpenSuSE Con are friendly and open-minded. It's a pleasure to have arguments there :)

I didn't actually see many talks myself, although the schedule was quite full with interesting topics! But knowing that the VoCCC people were running the video recordings, I could count on recordings being available after a few days, if not hours.

But I have had very interesting and enlightening discussions about distributions, containerised apps, the Open Build Service, OpenQA, dragging more GNOME people towards OpenSuSE, fonts, and other issues. That's the great thing about conferences: you get to know people with interesting stories. As for the fonts, for example, I was discussing the complexity involved in rendering glyphs and whether this could eventually lead to security problems. I think the attack surface of fonts has been undervalued and needs some investigation. I hope I can invest some time in looking at building and modifying fonts. I also found it interesting to discuss why I would not recommend OpenSuSE as a GNU/Linux distribution to anyone, mainly because I need to reflect and challenge myself. Turns out, I don't have any good reason except that my habits simply don't include using OpenSuSE myself and I am thus unable to give a recommendation. I think they have interesting infrastructure, though. I see the Build Service for having people's apps built and OpenQA for having them tested. Both seem to be a little crude overall, but they could become the tools to use for distributing your flatsnappimgpack. An idea was circling around to have a freedesktop.org for those app image formats and execution environments, but in a somewhat more working state. I think the key to success of any such body is being lightweight and not ending up like OpenStack. Let's hope we can bring together the people who work on various parts or even implementations of containerisation for desktop applications. I also hope that the focus for containered desktop apps will be isolation from other apps rather than actually distributing the software, because I don't think we have a big problem with getting Free Software into users' hands.

So a big “thank you” to this year's organisers for this event. I hope I can attend one of the following conferences :)