GNOME Keysign 0.7

I keep forgetting to blog about the progress we’re making with GNOME Keysign. Since I last reported, several cool new developments have happened. This 0.7 release fixes a few bugs and should increase compatibility with recent gpg versions.

The most noticeable change is probably a message when you don’t have a private key. I tried to create something clickable that would present the user with, say, Seahorse showing the relevant widgets, so that the user could quickly generate an OpenPGP key. But we currently don’t seem to be able to do that. It’s probably worth filing a bug against Seahorse.

You may also notice that the “Next” and “Back” buttons are now sensitive to whether you are at the end of the notebook. That is a minor improvement in the UI.

In general, we should be more Python 3 compatible by removing python2-only code in various modules.

Another change is a hopefully more efficient barcode rendering. Instead of using mixed-case characters, the newer version tries to use the QR code’s alphanumeric mode, which should use about 5.5 bits per character rather than 8. The barcode reading side should also save some CPU cycles by activating zbar’s cache.
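If you want to get a feeling for the difference yourself, here is a rough illustration on the command line. It is not what Keysign does internally, and it assumes you have qrencode and zbar-tools installed; the fingerprint is made up:

# An all-uppercase payload fits QR's denser alphanumeric mode
FPR="OPENPGP4FPR:ADAB7FED3BCACE2AFD9B50E4A9A2A63D6D6EC32A"
qrencode -o alnum.png "$FPR"
# Lowercasing the same payload forces 8-bit byte mode and yields a larger code
qrencode -o byte.png "$(printf %s "$FPR" | tr 'A-Z' 'a-z')"
# Decode both and compare the symbol sizes
zbarimg alnum.png byte.png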

Talking at Def.camp 2016 in Bucharest, Romania

Just at the beginning of this month I was invited to go to Bucharest, Romania, to give a talk on GNOME at this year’s def.camp. The conference seems to be an established event in the Romanian security community and was organised quite well. As I said in my talk, I was happy to be there to tell those people about Free Software. I saw many people running around with their proprietary systems. It seems that certain parts of the security community do not believe that the security of a system greatly increases when it’s based on Free Software. In fairness, the event seemed to be a bit on the suit-and-tie side, where Windows is probably much more common than people would like.

Andrei Avădănei opened the conference by saying how happy he was that, even at that unholy hour (09:00 in the morning…), he counted 1100 people from 30 countries, and he expected that number to grow over the following hours. It didn’t feel that big, but the three halls were quite large indeed. One of those halls was the “hacking village”, in which participants could practise real-life “problem solving skills”. The hacking village was more of an expo where vendors had their booths, but there were also some interesting security challenges. My favourite booth was the Virtual Reality demo. Someone had brought an HTC VR system and people could play a simple game. I’ve tried an Oculus Rift before, on which I rode a roller coaster. With the HTC system, I also had some input methods, which really enhanced the experience. Very immersive.

Anyway, Andrei mentioned how happy he was to have the biggest security event in Romania be very grassroots- and community-driven. Unfortunately, he then let some representative from Orange, the main sponsor, talk. Of course, you cannot run a big event like that without enough financial backing. But giving the main stage, the prime opening spot, to the main sponsor does not leave the impression that they are community-driven… I expected the first talk after the opening to set the theme for the conference. In this case, it was a commercial. That doesn’t actually fit the conference too badly, because out of the 32 talks I counted 13 (or 40%) being delivered by sponsors. By sponsors I mean all companies listed on the homepage for their support. It may very well be that I am mistaking grassroots supporters for commercial sponsors.

The Orange CTO mentioned that connectivity is the new electricity, which shapes countries and communities. For them, a telco, ensuring connectivity means maintaining security, he said. The Internet of connected devices (IoT) is growing exponentially and so are the threats. Orange has to invest in order to maintain security for its clients. And they do, it seems. He showed a fancy-looking “threat map” which showed attacks in real time. Probably a Snort (or whatever IDS is currently en vogue) with a map showing arrows from Geo-IP locations pointing towards Romania.

Next up was Jason Street, who talked about how he failed at doing his job. He was a blue team security guy, he said, and worked for a bank as an information security officer. He was seen by the people as the bad guy who makes your life dreadful. That was bad, he said, because he didn’t teach the people the values and usefulness of information security. Instead he taught them that they had better not get to meet him. The better approach, he said, is trying to be part of a solution rather than looking for problems. Empower the employees in what information security is doing or trying to do. It was a very entertaining presentation given by a very engaged speaker. I couldn’t get that much out of the content, though.

Vlad from Orange talked about their challenges providing an open, easy-to-use, and yet secure WiFi infrastructure. He talked about the user expectations and the business requirements. Users expect to be able to just connect without much hassle. The business seems to want to identify the user and authorise usage. It stayed mainly on a high level, except for a few walkthroughs of authentication protocols. He mentioned EAP-SIM and EAP-AKA as more seamless authentication protocols compared to, say, a captive Web portal. I didn’t know that it’s possible to use the perfectly valid shared secret in your SIM for authentication. It makes perfect sense. Even more so for a telco such as Orange.

Mihai from Bitdefender talked about browser instrumentation for exploit analysis. That means, as I found out after the talk, harnessing the browser’s internals to analyse malicious payloads. He showed how your browser (well… Internet Explorer with Flash) is exploited nowadays. He ran a “Cerber” demo of exploiting Internet Explorer with some exploit kit. He showed Fiddler and Process Explorer, which displayed the HTTP traffic and the spawned processes. After visiting a simple Web page, the malicious payload was delivered, exploited IE, and finally crashed it. The traffic in Fiddler revealed that the malware was delivered via a crafted Flash program. He used a Flash decompiler to look at the files. But he didn’t really find the exploit itself, probably because of some obfuscation. So what is the actual exploit? In order to answer that properly, you need to inspect the memory during runtime, he said. That’s where browser instrumentation comes into play.

I think he interposed several functions, such as document.write, eval, object parameters, Flash’s LoadBytes, etc., to analyse what goes in and out. All that information was then saved to disk in separate files, i.e. everything that went to document.write was written to c:\share\document.write, and everything that Flash’s LoadBytes took was written to c:\shared\loadbytes. He showed another demo with the Sundown exploit delivery framework, which successfully exploited his browser. He then showed the filesystem containing the above-mentioned information, which made it easier to spot the actual exploit and shellcode. To prevent such exploits, he recommended using Windows 10 and browsers other than Internet Explorer. He also recommended using AdBlock to stop “malvertising”. That is in line with what I recommended several moons ago when analysing embedded JavaScripts being vulnerable to DOM-based XSS. The method is also very similar to what I used back in the day when hacking on Chromium and V8, so I found the presentation quite good. Except for the speaker :-/ He often looked at his slides with his back to the audience, and the audio wasn’t really good. I respect him for having shown multiple demos with virtual machine snapshots. I wouldn’t have done it, because demos usually fail! 😉

Inbar Raz talked about Tinder bots. He said he was surprised to find so many “matches” when he was in Sweden. He quickly noticed that he was being chatted up by bots, though, because he got sent the very same message from different profiles. These profiles also didn’t necessarily make sense. For example, the name and the age shown on the Tinder profile did not match the linked Instagram or Facebook profiles. The messages he received quickly included a link to a dodgy Web site. When he asked whois about the ownership, he found many more shady domains being used for dragging people to porn sites. The technical details weren’t overly elaborate, but the talk was quite entertaining.

Raul Alvarez talked about reverse engineering polymorphic ransomware. I think he mentioned those Locky-type pieces of malware which lock your computer or files. Now you might want to know how that malware actually works. He mentioned OllyDbg, Immunity Debugger, and x64dbg as tools to use for reverse engineering your files. He said that malware typically includes an unpacker which you need to get past first before you’re able to see the actual malware. He mentioned on-demand polymorphic functions which are being called during the unpacking stage. I guess that the unpacker decrypts or uncompresses to different bytes every time it’s run. The randomness comes from the RDTSC call, he said. The way I understand that mechanism, the unpacker only modifies a few bytes at a time and potentially modifies irrelevant bytes. Imagine code that jumps over a few bytes. These bytes could be anything, because they are never used, let alone executed. But I’m not sure whether this is indeed the gist of what he described in a rather complicated fashion. His recommendation for dealing with metamorphic code is to catch it right when it has finished decrypting the payload. I think everybody wishes to be able to do that indeed… He presented a general method for getting rid of malware once it has hit you: start in safe mode and remove suspicious registry entries for the “run” key. That might not be interesting to Windows people, but now I, being very ignorant about Windows, have learned something 🙂

Chris went on to talk about securing a mobile cryptocoin wallet. If you ask me, he really meant how to deal with the limitations of the platform of his choice, the iPhone. He said that sometimes it is very hard to navigate the solution space, because businesses are not necessarily compatible with blockchains. He explained some currencies like Bitcoin, Stellar, Ripple, Zcash, or Ethereum. The latter is much more flexible and can also encode contracts like “in the event of X transfer Y amount of money to account Z”. Financial institutions want to keep their ledgers private, but blockchains were designed to run in public, he said. In addition, trust between financial institutions is low. Bitcoin is hard to use, he said, because the cryptography itself is hard to understand and to use. A balance has to be struck between usability and security. Secrets, he said, need to be kept secret. I guess he means that nobody, not even the user, may access the secret an application needs. I fundamentally disagree. I agree that secrets need to be kept as securely as possible. But secrets must not be known by anyone else but the users who are supposed to benefit from them. If some other entity controls my secret, I am not really in control. Anyway, he looked at existing Bitcoin wallet applications: Bither and Breadwallet. He thinks that the state of the art can be improved if you are willing to break the existing protocol. More precisely, he wants to leverage the “security hardware” present in current mobile devices, like biometric sensors or “enclaves” in modern CPUs, to perform the operations based on a secret stored unextractably in hardware. With such an enclave, he wants to generate a key there and use it to sign data without the key ever leaving the enclave. You need to change the protocol, he said, because Apple’s enclave uses secp256r1, but Bitcoin uses secp256k1.


My own talk went reasonably well, I think. I am not super happy but happy enough. But I’ve realised a few times now that I left out things I wanted to mention or could have explained better what I wanted. Then again, being perfect would be boring, so better leave some room for improvement 😉 I talked about how I think GNOME is a good vendor of security software. Its focus on user experience is its big advantage. The system should make informed decisions as much as possible and try to leave the user out as much as possible. Security should be an inherent feature, not something that you need to actively care about. I expected a more extreme reaction from the security-focused audience, but it seemed people mostly agreed. In my mind, “these security people” equate security with maximum control placed in users’ hands, which has to manifest itself in being able to control each and every aspect of a solution. That view is not compatible with trying to leave the user out of the security equation. It may be that I am doing “these security people” an injustice. Or that they have changed. Or simply that the audience was not composed of the people I thought they were. I was hoping for developers creating security software, and I mentioned that GNOME libraries would serve them well for their tasks. Let’s see whether anyone actually takes my word for it and complains to me 😉

Matt Suiche followed “the money of security companies, IPOs, and M&A”. In 2016, he said, the situation is not very different from the 90s: software still has bugs, bad configuration is still a problem, default passwords are still being used… The number of newly founded infosec companies reported by Crunchbase has risen a lot, he said. If you multiply that number by dollars, you can see that 40 billion USD have been raised since 1998. What’s different nowadays, according to him, is that people in infosec are now more business-oriented rather than technically oriented. We have more “cyber” now. He referred to buzzwords being spread. We also have bug bounty programmes luring people into reporting vulnerabilities. For example, JP Morgan is spending half a billion USD on cyber security, he said. Interestingly, he showed that the number of vulnerabilities, i.e. RCE CVEs, has increased, but the number of actual exploitations within the first 30 days after a patch has decreased. He concluded that Microsoft got more efficient at mitigating vulnerabilities. I think you can also conclude other things, like that people care less about exploitation or that detection of exploitation has gotten worse. He said that the cost of an exploit has increased. It wasn’t long ago that you could cook up an exploit within two weeks. Now you need several people for three months at least. It was a well-made talk, but a bit too fluffy for my taste.

Stefan and David from Kaspersky talked off the record (i.e. without recordings) about “real-world lessons about spies every security researcher should know”. They have been around the industry for more than a decade and they have started to notice patterns, they said. Patterns of weird things that happen which might not be easy to explain at first. It all begins with realising that we live in a world, whether we want it or not, where we have a certain amount of control over the success of espionage attacks. Right now people reverse engineer malware, which means that other people’s operations are being disrupted. In fact, he claimed that they reverse engineer and identify the world’s most advanced persistent threats like Duqu, Flame, Hellsing, and many others, and that their company is getting better and better at identifying other people’s operations. For the first time in history, he said, we as geeks have an influence on espionage. That makes some entities not very happy, and they let certain people visit you.

These people come in various types. The profile of a typical government employee is that they are very open and blunt about their desires. Mostly, they employ patriotism to persuade you. Another type is the impersonator, they said. That actor is not perfectly honest with you. He gave an example of meeting another person who introduced himself with the very same name as him. It got strange, he said, when he met that person on a different continent a few months later and was offered a highly paid training gig. Supposedly only to build up a relationship. These people have enough budget to get closer to you, they said. Another type of attacker is the “Banya Girl”. Geeks, they said, who have sat in front of a computer for most of their lives are easily attracted by girls, who therefore have an easier time getting into your room or brain. His example took place one year ago: he was analysing a satellite-exploiting malware later known as Turla when he met this super beautiful girl in the hotel, who sat there every day when he went to the sauna. The day they released the results about Turla, they went for dinner together and she listened to a phone call he had with a reporter. The girl said something like “funny that you call it Turla. We call it Uroboros”. Then he got suspicious and asked her who “they” are. She came up with stories he found weird and seemed convinced that she knew more than she was willing to reveal.

In general, they said, asking for a selfie or a Facebook friend request can be an effective countermeasure against someone spying on you. You might very well ask what to do when you think you’re targeted. It’s probably best to do nothing, they said. It’s their game; you’d better not start playing it even if you wake up in the middle of it. You can try to take care of your OpSec to protect against certain data being collected or exfiltrated. After all, people are being killed based on metadata. But you should also try to not get yourself into trouble. Sex and money are probably the oldest weapons people employ against you. They also encouraged people to define trust and boundaries for existing and upcoming relationships. If you become too paranoid, you’ve already lost the battle, they said. Keep going to conferences, keep meeting people, and don’t close yourself off.

Those were two busy days in Bucharest. I’m happy to have gone and I hope I will have another chance to visit the lovely city 🙂 By that time, the links here in this post will probably be broken 😉 I recommended using the “archive” URLs, i.e. https://def.camp/archives/2016/, already now, but nobody is listening to me… I can also not link to the individual talks, because the schedule page is relatively click-intensive, i.e. not deep-linkable 🙁 …

First OpenPGP.conf 2016 in Cologne, Germany

Recently, I’ve attended the first ever OpenPGP conference in Cologne, Germany. It’s amazing how 25 years of OpenPGP have passed without any conference for bringing various OpenPGP people together. I attended rather spontaneously, but I’m happy to have gone. It’s been very insightful and I’m really glad to have met many interesting people.

Werner himself opened the conference with his talk on key discovery. He said that the problem of integrating GnuPG into MUAs is solved. I doubt that with a fair bit of confidence. Besides a few weird MUAs (mutt, gnus, alot, …), I only know of KMail (which should maybe also go into the “weird” category 😉) which uses GnuPG through gpgme, which is how a MUA really should consume GnuPG functionality. GNOME’s Evolution technically supports GnuPG, but only badly. It should really be ported to gpgme. Anyway, Werner said that the problem of encryption has been solved, but now you need to obtain the key for the party you want to communicate with. How can you find the key of your target? He said that keyservers cannot map a mail address to a key. It was left a bit unclear what he meant, but he probably referred to the problem of someone uploading a key for your email address without your consent. Later, he mentioned the Web of Trust, which is meant for authenticating the other user’s key. But he disliked the fact that it’s “hard to explain”. He didn’t mention why, though. He did mention that the WoT exposes the global social graph, which is not a desirable feature. He also doubts that the Web of Trust scales, but he left the audience wondering why. To solve the mapping problem, you might imagine keyservers which verify your email address before accepting your key. These, he said, “harm the system”. The problem, he said, is that this system only works with one keyserver, which would harm the decentralised nature of the OpenPGP system and bring us back into the X.500 dark age. While I agree with the conclusion, I don’t fully agree with the premise. I don’t think it’s clear that you cannot operate a verifying server network akin to how it’s currently done. For example, the pool of keyservers could only accept keys which were signed by one of the servers of the pool within the last, say, 6 months. Otherwise, the user has to enrol by following a challenge-response protocol. The devil may be in the details, but I don’t see how it’s strictly impossible.

However, in general, Werner likes the OpenSSH approach better. That is, it opportunistically uses a key and detects when it changes. As with the Web of Trust, the key validation happens on your device only, rather than, say, having an external entity sell the trust, as with X.509.

Back to the topic. How do you find a key of your partner now? DANE would be an option. It uses DNSSEC which, he said, is impossible to implement a client for. It also needs collaboration of the mail provider. He said that Posteo and mailbox.org have this feature.

Anyway, the user’s mail provider needs to provide the key, he said. Web Key Directory is a new proposal. It uses HTTPS for key look-up at a well-known name on the domain of the email provider. Think .well-known/openpgp/. It’s not as distributed as DNS, but as decentralised as email is, he said. It still requires collaboration from the email provider to set the Web service up. The proposal trusts the provider to return genuine keys instead of customised ones. But the system shall only be used for initial key discovery. Later, he mentioned handling revocation via the protocol™. For some reason, he went on to explain a protocol for submitting a key in much more detail rather than expanding on the protocol for the actual key discovery: what happens when the key becomes invalid, when it expires, when it gets rolled over, etc.
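For the curious: newer GnuPG versions can already do the client side of such a look-up. A minimal sketch, assuming a gpg with WKD support and a provider that has set up the directory; the address is made up:

# Ask gpg to locate the key via the Web Key Directory of the mail domain
gpg --auto-key-locate clear,wkd --locate-keys alice@example.org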
—-

Next up was Meskio, who talked about key management at LEAP, the LEAP Encryption Access Project. They try to provide a one-stop solution to encrypting all the things™. One of its features is to transparently encrypt emails. To achieve that, it runs a local MTA and an IMAPd which then communicate via a VPN with the provider. It thus builds on the idea of federation the same way current email protocols do, he said. For LEAP to provide the emails, they synchronise the mailbox across devices. Think of a big Dropbox share, but encrypted to all devices. They call it Soledad, which is based on u1db.

They want to protect the user from the provider and the provider from the user. Their focus on ease of use manifests itself in Puppet modules that make it easy to deploy the software. The client side is “Bitmask”, a desktop application written in Qt which sets everything up. That also includes transparently getting keys of other users. Currently, he said, we don’t have good ways of finding keys. Everything assumes that there is user intervention. They want to change that and build something that encrypts emails even when the user does not do anything. That’s actually quite an admirable goal. Security by default.

Regarding the key validation they intend to do, he mentioned that it’s much like TOFU, but with many, many exceptions, because there are many corner cases to handle in that scheme. Keys have different validation levels. The key with the highest validation level is used. When a key roll-over happens, the new key must be signed by the old one and the new key needs to be of at least the same validation level as the old one. Several other conditions need to hold as well. Quite an interesting approach, and I hope that they will get more exposure and users. It’s promising, because they don’t change “too” much. They still do SMTP, IMAP, and OpenPGP. Connecting to those services is different, though, which may upset people.


More key management was covered by Comodo’s Phillip Hallam-Baker, who then went on to talk about The Mathematical Mesh: Management of Keys. He also doesn’t want to change the user experience except for simplifying everything. Every button to push is one too many, he said. And we must not write instructions. He noted that if every user had a key pair, we wouldn’t need passwords and every communication would be secured end-to-end. That is a strong requirement, of course. He wants to have a single trust model supporting every application, so that the user does not have to configure separate trust configurations for S/MIME, OpenPGP, SSH, etc. That again is a bit of a far-fetched dream, I think. It is certainly worth working towards, but I don’t expect to experience such a thing in my lifetime. Except when we think of a single closed system, of course. Currently, he said, fingerprints are used in two ways: either users enter them manually or they compare them to a string given by a secure source.

He presented “The Mesh”, which is a virtual store for configuration information. The idea is that you can use the Mesh to provision your devices with the data and keys they need to make encrypted communication happen. The Mesh is thus a bit of a synchronised storage which keeps encrypted data only. It would have been interesting to see him relate the Mesh to Soledad, which was presented earlier. Like Soledad, you need to sign up with a provider and connect your devices to it when using the Mesh. His scheme has a master signature key which only signs a to-be-created administration key. That in turn signs application and device keys. Each application can create as many keys as it needs. Each device has three device keys; unfortunately he did not go into detail about why these keys are needed. He also has an escrow method for getting the keys back when a disaster happens: the private keys are encrypted, secret-shared, and uploaded. Then, you can use two out of three shares to get your key back. I wonder where to upload those shares, though, and how to actually find them again later.

Then he started losing me when he mentioned that OpenPGP keyservers, if designed today, would use a “linked notary log (blockchain)”. He also brought (proxy) re-encryption into the mix, which I didn’t really understand. The purpose itself I think I understand: he wants the Mesh to cater for services that re-encrypt to the several keys all of one entity’s devices have. But I didn’t really understand why it’s related to his Mesh at all. Altogether, the proposal is a bit opportunistic. But it’s great to have some visions…

Bernhard Reiter talked about getting more OpenPGP users by 2017. Although it was more about whitewashing the money he receives from the German administration… He is doing Gpg4win, the Windows port of GnuPG. The question is, he said, how to get GnuPG to a large user base and to make them use it. Not surprisingly, he mentioned that we need to improve the user experience. Once the user gets in touch with cryptography and is exposed to making trust decisions, he said, the user is lost. I would argue otherwise, because people are heavily exposed to cryptography when using WhatsApp. Anyway, he then referred to an idea of his: “restricted documents”. He wants to build a military style of access control for documents. You can’t blame him; it’s probably what he makes his money with.

He promised to present ideas for Android and the Web. Android applications, he said, run on devices that are ten times smaller and slower compared to “regular” machines. They actually conducted a study to find this, and other things, out. Among the other things are key insights such as “the Android permission model allows for deploying end to end encryption”. Genius. They also found out that there is an OpenPGP implementation in Bouncy Castle which people use and that it’s possible to wrap libgcrypt for Java. No shit!!1 They have also identified OpenKeychain and K-9 Mail as applications using OpenPGP. Wow. As for the Web, their study found out that Webmail is a problem, but that an extension to a Web browser could make end to end encryption possible. Unbelievable. I am not necessarily disappointed, given that they are a software company and not a research institute. But I’m puzzled as to in what reality these results are interesting to the audience of OpenPGP.conf. In any case, his company conducted the study as part of the public tender they won, and their results may have been biased by his subcontractors, who are deeply involved in the respective projects (i.e. Mailvelope, OpenKeychain, …).

As for his ideas regarding UX, his main one is to implement the Web Key Directory (see Werner’s opening talk) discovery mechanism. While I think the approach is good, I still don’t think it is sufficient to make 2017 the year of OpenPGP. My concerns revolve around the UX in non-straightforward cases, like people revoking their keys. It’s also one thing to have a nice UX and another to actually have users going for it. Totally unrelated but potentially interesting: he said that the German Federal Office for Information Security (“BSI”) uses 500 workstations running GNU/Linux with a Plasma desktop, in addition to Windows.

—-

Holger Krekel then went on to talk about automatic end-to-end encrypted email. He is working on an EU-funded project called NEXTLEAP. He said that email is refusing to die in favour of Facebook and all the other new kids on the block. He stressed that email is the largest open social messaging system and that many others use it as an anchor of identity. However, many people use it for “SPAM and work” only, he said. He identified various usability problems with end-to-end encrypted email: key distribution, preventing SPAM, managing secrets across devices, and handling device or key loss.

To tackle the key distribution problem, he mentioned CONIKS, Werner’s Webkey, Mailvelope, and DANE as projects to look into. With these, the respective providers add APIs to find public keys for a person. We know about Werner’s Webkey proposal. CONIKS, in short, is a key transparency approach which requires identity providers to publicly testify your key. Mailvelope automatically asks a verifying key server to provide the recipient’s key. DANE uses DNS with DNSSEC to distribute keys.

He proposed to have inline keys. That means attaching keys and cryptographic information to your emails. When you receive such a message, you would parse the details and use them for encryption the next time you create a message. The features of such a scheme, he said, are that it is more private in the sense that there is no public key server which exposes your identity. Also, it’s simpler in the sense that you “only” need to get support from MUAs and you don’t need to care about extra infrastructure. He identified that we need to run a protocol over email if we ever want to support that scheme. I’m curious to see that, because I believe that it’s good if we support protocols via email. Much like Outlook already does with its voting feature. SPAM prevention would follow naturally, he said. Because the initial message is sent as plain text, you can detect SPAM. Only if you reply does the other party get your key, he said. I think it should be possible to get a person’s key out of band, but that doesn’t matter much, I guess. Anyway, I couldn’t really follow that SPAM argument, because it seems to imply that we can handle SPAM in the plain-text case. But if that were the case, then we wouldn’t have the SPAM problem today. For managing keys, he thinks of sharing your keys via IMAP, like in the Whiteout proposal.

—-

Stefan Marsiske then talked about his concerns regarding the future directions of GnuPG. He said he did some teaching regarding crypto and privacy-preserving tools and that he couldn’t really recommend GnuPG to anyone, because it could not be used by the people he was teaching. Matt Green and Schneier have both said that PGP is not able to secure email or that email is “unsecurable”. This is in line with the list that secushare produced. The saltpack people also list some issues they see with OpenPGP. He basically evaluated gpg against the list of criteria established in the SoK paper on secure messaging, which is well worth a read.

Lutz Donnerhacke then gave a brief overview of the history of OpenPGP. He is one of the authors of the initial OpenPGP standard. In 1992, he said, he read about PGP on Usenet. He then cared about getting PGP 2.6.3(i)n out of the door to support keys larger than 1024 bits and to fix other bugs that annoyed him. ViaCrypt then sold PGP4, which was based on PGP2. PGP5 was eventually exported in books and scanned back in during HIP97 and CCCamp99, he said. Funnily enough, a bug lurked for about five years, he said. Their get_random always returned 1…

Funnily enough, he uses a 20-year-old V3 key, so at least his key ID is trivially forgeable, and the fingerprint should also be easy to forge. He acknowledges it but doesn’t really care. Mainly, because he “is a person from the last century”. I think that this mindset is present in many people’s heads…

The next day, Intevation’s Andre Heinecke talked about the “automated use of gpg through gpgme“. GPGME is the abbreviation of “GnuPG Made Easy” and is meant to be a higher-level abstraction for gpg. “gpg is a tool, not a library”, he said. For a library you can apply versioning, while the tool may change its output liberally, he said. He mentioned gpg’s machine interface with --with-colons and that changes to that format will break things. GPGME abstracts that for you and tries to make the tool a library. There is a defined interface and “people should use it”. A selling point is that it works with all gpg versions.
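To get an idea of what that machine interface looks like, you can ask gpg for colon-separated records yourself; the exact fields vary between versions, which is precisely what gpgme tries to hide from you:

# Machine-readable key listing: one record per line, fields separated by colons
gpg --with-colons --list-keys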

When I played around with gpgme, I found it convoluted and lacking basic operations. I think it’s convoluted because it is highly stateful and you need to be careful with calling (many) functions in the correct order if you don’t want it to complain. It’s lacking, because signing other people’s keys is a weird thing to do and the interface is not designed with that in mind. He also acknowledged that it is a fairly low-level API in the sense that every option has to be set distinctly and that editing keys is especially hard. In gpgme, he said, operations are done based on contexts that you have to create. A context can be created for various gpg protocols. Surprisingly, that’s not only OpenPGP, but also CMS, GpgConf, and others.

I don’t think much GNOME software is ported to gpgme. At least Evolution and Seahorse call gpg directly rather than using gpgme. We should change that! Although gpgme is a bit of a weird thing. Normally™ you’d have a library and build a tool with it. With gpgme, you have a tool (gpg) and build a library around it. It feels wrong. I claim that we would be better off if we had an OpenPGP library that reads and composes packets.

Vincent and Dominik came to talk about UX decisions in OpenKeychain, the Android OpenPGP implementation. It does key management, encryption and decryption of files, and other OpenPGP operations such as signing keys. The speakers said that they use Bouncy Castle for the crypto and OpenPGP serialisation. They are also working on K-9 Mail, which will support PGP/MIME soon. They have an Open Tech Fund grant which finances that work. In general, they focused on the UX to make it easy for the consumer. They identified “workflows” users possibly want to carry out with their app. Among them are the discovery and exchange of keys, as well as confirming them (signing). They showed nice-looking screenshots of how they think they made the UI better. They probably did, but I found the foundations a bit lacking. Their design process seems to be a rather ad-hoc affair and they themselves are their primary test subjects. While it’s good work, I don’t think it’s easily applicable to other projects.

An interesting thing happened (again): they deviate from the defaults that GnuPG uses. Unfortunately, the discussions revolving around that fact were not very elaborate. I find it hard to see why the two tools should have, say, different default key lengths. Both tools try to prevent mass surveillance, so you would think that they would use the same defaults to achieve their goal. It would have been interesting to find out which default values serve the desired purpose better.

Next up was Kristian Fiskerstrand, who gave an update on the SKS keyserver network. SKS is the software that we trust our public keys with. SKS is written in OCaml, which he likes, but about which people have different opinions, he said. SKS is single-threaded, which is a problem, he said. So you need to have a reverse proxy to handle more than one client.

He also talked about the Evil32 keys which caused some stir recently. In essence, existing OpenPGP keys were duplicated, but with matching short key IDs. That means that if you look up a key by its short key ID, you’re screwed, because you get the wrong key. If you use the name or email address instead, then you also have a problem. People were upset about getting the wrong key when they had asked the keyserver to deliver one.

He said that it’s absolutely no problem, because users should verify the keys anyway. I can only mildly agree. It’s true that users should do that. But I think we would live in a nicer world if we could still maintain a significantly high security level even when such rigorous verification does not happen. If he really maintains that point of view, then I’m wondering why he allows keys to be retrieved by name, email address, or anything other than the fingerprint in the first place.
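To make the short key ID problem concrete, compare these two ways of fetching a key (both IDs are made up); only the second is unambiguous:

# A 32-bit short ID -- Evil32 showed that colliding keys can be generated at will
gpg --recv-keys 0xDEADBEEF
# The full 160-bit fingerprint pins down exactly one key
gpg --recv-keys 0123456789ABCDEF0123456789ABCDEF01234567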

—-

Volker Birk from pretty Easy privacy talked about their project, which aims at making encrypted email possible for the masses. They make extensive use of gpgme and GnuNet, he said. Their focus is “privacy by default”. Not security, he said. If security and privacy contradict each other in some cases, they go for privacy instead of security. For example, the Web of Trust is a good idea for security, but not for privacy, because it reveals the social graph. I really like that clear communication and the admission that security and privacy do not necessarily go well together. I also think that keyservers should be considered harmful, mainly because they learn who is attempting to communicate with whom. He said that everything should be decentralised and peer-to-peer. Likewise, a provider should not be responsible for binding an email address to a key. The time was limited, unfortunately, so the actual details of how it’s supposed to work were not discussed. They wouldn’t be the first ones to attempt a secure, or rather privacy-preserving, solution. In the limited time, however, he showed how to use their Python adapter to have it automatically locate a public key of a recipient and encrypt to it. They have bindings for various other languages, too.

Interestingly, a keysigning “party” was scheduled for the first evening, but it didn’t take place. You would expect that if anybody cared about that sort of thing, it would be the OpenPGP hardcore hackers, all of whom were present. But not a single person (as in nobody, zero, “0”, null) was interested. You can’t blame them. It was probably a cool thing back when you were younger and this GnuPG thing was about preventing the most powerful targeted attacks. I think people have realised that you can’t have people mumble base16-encoded binary strings AND mass adoption. You need to bring at least cake to the party… Anyway, as you might be aware, we’re working towards a more pleasant key signing experience 🙂 So stay tuned for updates.

Talking at mrmcds 2016 in Darmstadt, Germany

A couple of weeks ago, I attended the mrmcds in Darmstadt, Germany, just like I did in previous years. Like the years before, the conference was nicely themed. This year, the theme was all things medical. So speakers were given doctors’ coats, conference staff were running around like surgeons, alcohol could be had intravenously…

mrmcd 2016 logo

The talk on medical device nightmares (video) showed some medical devices, like those which show and record vital signs such as the pulse or blood pressure, but also fancier devices such as an MRI. Of course, he did not only show the devices themselves, but rather how they tested them via their external interfaces, i.e. the networking port. Speaking of the MRI: it exposed a few hundred open ports. The services listening on these ports crashed when nmap scanned the host… But at least they apparently recovered automatically. He also presented an anaesthetic monitoring device, which is supposed to show how alive a patient still is. The device seems to have a telnet interface which you can log on to with default credentials. The telnet interface has, not surprisingly, a command injection vulnerability, which allowed them to take ownership of the device. The next step was then to hijack the framebuffer and to render whatever they wanted on it. For example nice-looking vital data, as if the patient were still alive. Or, probably the more obvious thing to do: show Rick Astley.

It’s been an entertaining talk which makes you realise how complicated the whole area of pharmaceutical or medical appliances is. They need to go through a long and troublesome certification process, not unlike other businesses (say, car manufacturers). Patching the underlying Windows is simply not possible without losing the certification. You may well ask whether a certificate or an up-to-date OS is better for your health. And while I make it look a bit ridiculous now, I do appreciate that it’s a tough subject.

My own talk on GNOME (video) was well visited. I explained why I think GNOME is a good candidate for shipping security software to the masses. I said that GNOME cares about its users and goes the extra mile to support as many users as possible. That includes making certain decisions to provide a secure by default system. I gave two examples of how I think GNOME pushes the envelope when it comes to making security usable. One was the problem of OpenPGP Keysigning. I mentioned that it’s a very geeky thing which mortals do not understand. Neither do many security people, to be honest. And you can’t even blame them because it’s a messy thing to do. Doing it properly™ involves a metric ton of OpSec to protect the integrity of the key to be signed. I think that we can make the process much more usable than it is right now while even maintaining security. This year, I had Andrei working with me to make this happen.

The other example I gave was the problem of USB security. Do you know when you use your USB ports? And do you know when you don’t? And do you know when other people use your USB ports? I talked about the possibility of locking down your USB ports while you’re not in front of your computer. The argument goes that you can’t possibly insert anything if you’re away. Of course, there are certain cases to keep in mind, like not forbidding a keyboard to be plugged in, in case the old one breaks. But there is little reason to allow your USB camera to work unless you are actively using your machine. I presented how this could look by showing off the work George did last summer.
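If you want to play with the general idea today, independently of the GNOME Shell work mentioned here, USBGuard implements a similar policy on the command line. A rough sketch; the paths, the service name, and the device number are assumptions about a typical setup:

# Allow everything that is currently plugged in, then enforce the policy
sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'
sudo systemctl restart usbguard
# List attached devices and block one of them by its numeric ID
usbguard list-devices
sudo usbguard block-device 7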

My friend Jens talked about reverse engineering applications. He started by explaining why you would do that in the first place. Analysing your freshly received malware or finding weaknesses (think backdoors or bypasses) in your software are motivations, he said. But you might as well tinker with old software which has no developer anymore, or try to find the APIs of other software for interoperability purposes, he said. Let me note that with Free Software, you wouldn’t have to reverse engineer the binary 😉 But he also mentioned that industrial espionage is a reason for people to reverse engineer a compiled programme. The tool he uses the most is the “file” tool. He went on to explain the various executable formats for various machine flavours (think: x86, ELF, PE, JVM). To get practical, he showed a .NET application which only writes “hello, world!”, because malware, he said, is written in .NET nowadays. In order to decompile the binary he recommended “ILSpy” as a one-stop suite for reverse engineering .NET applications. Next up were Android applications. He showed how to pull the APK off the device and how to decompose it into JAR classes. Then he recommended CFR for decompiling those into Java code. His clients, mostly banks, he said, try to hide secret keys in their apps, so the first thing he does when he gets a new job is to grep for “secret”. In 80% of the cases, he said, that is successful. To make it harder for someone to reverse engineer the binary, obfuscators exist for Java, but also for C. He also mentioned some anti-debugging techniques, such as checking for the presence of certain DLLs or throwing certain interrupts to determine whether the application runs under a debugger. It was a very practical talk which certainly made it clear that the presented things are relevant today. Due to the limited time and the many examples, he could only scratch the surface, though.

It’s been a nice conference with 400ish attendees. I really like how they care about the details, also when it comes to making the speakers feel good. It’s just too sad that it’s only one weekend. I’m looking forward to attending next year’s edition 🙂

The Hackocratic Oath

I swear by Eris, Goddess of Chaos, Discordia, and all other Gods above, making them my witnesses, that, according to my ability and judgement, I will keep this Oath and this contract:
To hold those who taught me this art equally dear to me as my friends, to share the Net with them, and to fulfill all their needs for data when required; to look upon their offspring as equals to my own friends, and to teach them this art, if they shall wish to learn it, without fee or contract.
By the set rules, lectures, cat content, and every other mode of instruction, I will impart a knowledge of the art to my own offspring, and those of my friends, and to students bound by this contract and having sworn this Oath to the Hacker Ethic, but to no others.
I will use those data regimens which will benefit my users according to my greatest ability and judgement, and I will do no harm or injustice to them. I will not give lethal malware to anyone if I am asked, nor will I advise such a plan; and similarly I will not give a computer a rootkit.
In purity and according to divine law will I carry out my life and my art.
I will not develop crypto algorithms, but I will leave this to those who are trained in this craft.
Into whatever networks I go, I will enter them for the benefit of the users, avoiding any voluntary act of impropriety or corruption, including hurtful comments towards users or hackers, whether they are online or offline.
Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or by way of leaks, which ought to be private,
I will keep faithfully secret.
So long as I maintain this Oath faithfully and without corruption, may it be granted to me to partake of life fully and the practice of my art, gaining the respect of all for all time. However, should I transgress this Oath and violate it, may the opposite be my fate.
www.mrmcd.net

(Re)mastering a custom Ubuntu auto-install ISO

Recently, I had to install GNU/Linux on a dozen or so machines. I didn’t want to install manually, mainly because I was too lazy, but also because the AC in the data centre is quite strong and I didn’t want to catch a cold… So I looked for some lightweight way of automatically installing an Ubuntu or so. Fortunately, I don’t seem to be the first person to be looking for a solution, although, retrospectively, I think the tooling is still poor.

I would describe my requirements as relatively simple. I want to turn one of the to-be-provisioned machines on, wait, and then be able to log in via SSH. Ideally, most of the software that I want to run would already be installed. I’m fine with software the distribution ships. The installation must not require the Internet and should just work™, i.e. it should wipe the disk and not require anything special from the network, which I have only little control over.

I looked at tools like Foreman, Cobbler, and Ubuntu’s MAAS. But I decided against them because they don’t necessarily feel lightweight. Actually, Cobbler doesn’t seem to work well when run on Ubuntu. It also fails (at least for me) when being behind an evil corporate proxy. Same for MAAS. Foreman seems to be more of a machine management framework rather than a hit-and-run style of tool.

So I went for an automated install using the official CD-ROMs. This is sub-optimal, as I need to be physically present at the machines and I would have preferred a non-touch solution. Fortunately, the method can be upgraded to delivering the installation medium via TFTP/PXE. But most of the documents describing the process insist on BIND, which I dislike. Also, producing an ISO is less error-prone, so making that work first should be easier; or so I thought.

Building an ISO

The first step is to mount the ISO and copy everything into a working directory. You could probably use something like isomaster, too.


mkdir iso.vanilla
sudo mount -o loop ubuntu.iso ./iso.vanilla
mkdir iso.new
# The trailing /. copies the directory contents including hidden files,
# without the .* glob accidentally matching . and ..
sudo cp -a ./iso.vanilla/. iso.new/

After you have made changes to your image, you probably want to generate a new ISO image that you can burn to CD later.


sudo mkisofs -J -l -b isolinux/isolinux.bin -no-emul-boot -boot-load-size 4 -boot-info-table -z -iso-level 4 -c isolinux/isolinux.cat -o /tmp/ubuntu-16.04-myowninstall-amd64.iso -joliet-long iso.new

You’d expect that image to work if you now dd it onto a pen drive, but of course it does not… At least it didn’t for me. After trying many USB creators, I eventually found that you need to call isohybrid.


sudo isohybrid /tmp/ubuntu-16.04-myowninstall-amd64.iso

Now you can test whether it boots with qemu:


qemu-img create -f qcow2 /tmp/ubuntu-nonet.qcow2 10G
qemu-system-x86_64 -m 1G -cdrom /tmp/ubuntu-16.04-myowninstall-amd64.iso -hda /tmp/ubuntu-nonet.qcow2

If you want to test whether a USB image would boot, try with -usb -usbdevice disk:/tmp/ubuntu-16.04-myowninstall-amd64.iso. If it doesn’t, then you might want to check whether you have assigned enough memory to the virtual machine. I needed to give -m 1G, because the default didn’t work with the following mysterious error.

Error when running with too little memory
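Spelled out, the USB boot test then looks something like this:

qemu-system-x86_64 -m 1G -hda /tmp/ubuntu-nonet.qcow2 -usb -usbdevice disk:/tmp/ubuntu-16.04-myowninstall-amd64.iso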

It should also be possible to create a pendrive with FAT32 and to boot it on EFI machines. But my success was limited…

Making Changes

Now what changes do you want to make to the image to get an automated installation?
First of all you want to get rid of the language selection. Rumor has it that


echo en | tee isolinux/lang

is sufficient, but that did not work for me. Replacing the timeout values in the files under isolinux/ with something strictly positive worked much better for me. So edit isolinux/isolinux.cfg.
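For example, something like the following should do it in one go (my assumption being that the stock files ship with “timeout 0”, which makes the boot loader wait forever):

# Give every isolinux config a short, strictly positive timeout (in 1/10 s)
sudo sed -i 's/^timeout .*/timeout 10/' iso.new/isolinux/*.cfg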

If the image boots now, you don’t want the installer to ask you questions. Unfortunately, there doesn’t seem to be a “fire and forget” mode which tries to install as aggressively as possible. But there are at least two mechanisms: kickstart and preseed. Ubuntu comes with a kickstart compatibility layer (kickseed).

Because I didn’t know whether I’d stick with Ubuntu, I opted for kickstart, which would, at least theoretically, allow me to use Fedora later. I installed system-config-kickstart, which provides a GUI for creating a kickstart file. You can then place the file in, e.g., /preseed/ks-custom.cfg next to the other preseed files (a rough sketch of such a file follows after the boot entry below). To make the installer load that file, reference it in the kernel command line in isolinux/txt.cfg, e.g.


default install
label install
menu label ^Install Custom Ubuntu Server
kernel /install/vmlinuz
append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz ks=cdrom:/preseed/ks-custom.cfg DEBCONF_DEBUG=5 cdrom-detect/try-usb=false usb_storage.blacklist=yes --

Ignore the last three options for now and remember them later when we talk about issues installing from a pen drive.
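As promised above, here is roughly what such a kickstart file can look like. This is a hypothetical, heavily trimmed sketch; my real file came out of system-config-kickstart, and kickseed only supports a subset of kickstart, so treat it as a starting point rather than a reference:

# /preseed/ks-custom.cfg -- hypothetical minimal example
install
cdrom
lang en_US
keyboard us
timezone --utc Etc/UTC
rootpw --disabled
user ubuntu --fullname "Ubuntu Admin" --password changeme
text
clearpart --all --initlabel
part / --fstype ext4 --size 4096 --grow
part swap --size 1024
bootloader --location=mbr
reboot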

When you boot now, you’d expect it to “just work”. But if you are me then you’ll run into the installer asking you questions. Let’s discuss these.

Multiple Network Interfaces

When you have multiple NICs, the installer apparently asks you which interface to use. That is, of course, not desirable when you want to install without interruption. The documentation suggests using


d-i netcfg/choose_interface select auto

That, however, seemed to crash the installer when I configured QEMU to use four NICs… I guess it’s this bug which, at least on my end, had been caused by my accidentally putting “eth0” instead of “auto”. It’s weird, because it worked fine with the single-NIC setup. The problem, it seems, is that eth0 does not exist! It’s 2016 and we have “predictable device names” now. Except that we still have /dev/sda for the first hard disk. I wonder whether there is a name for the first NIC. Anyway, if you do want to have the eth0 scheme back, it seems to be possible by setting biosdevname=0 as a kernel parameter when booting.

(Screenshot: the installer asking which network interface to use)
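For reference, that parameter goes onto the same append line as the kickstart reference above. My assumption of how it would look (net.ifnames=0 is an addition of mine that may be needed where systemd, rather than biosdevname, does the renaming):

append file=/cdrom/preseed/ubuntu-server.seed vga=788 initrd=/install/initrd.gz ks=cdrom:/preseed/ks-custom.cfg biosdevname=0 net.ifnames=0 DEBCONF_DEBUG=5 cdrom-detect/try-usb=false usb_storage.blacklist=yes --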

You can test with multiple NICs and QEMU like this:


sudo qemu-system-x86_64 -m 1G -boot menu=on -hda /tmp/ubuntu-nonet.qcow2 -runas $USER -usb -usbdevice disk:/tmp/ubuntu-16.04-myowninstall-amd64.iso -netdev user,id=network0 -device e1000,netdev=network0 -netdev user,id=network1 -device e1000,netdev=network1 -netdev user,id=network2 -device e1000,netdev=network2 -netdev user,id=network3 -device e1000,netdev=network3 -cdrom /tmp/ubuntu-16.04-myowninstall-amd64.iso

No Internet Access

When testing this with the real servers, I realised that my qemu testbed was still too ideal. The real machines can resolve names, but cannot connect to the Internet. I couldn’t build that scenario with qemu, but the following gets close:


sudo qemu-system-x86_64 -m 1G -boot menu=on -hda /tmp/ubuntu-nonet.qcow2 -runas $USER -usb -usbdevice disk:/tmp/ubuntu-16.04-myowninstall-amd64.iso -netdev user,id=network0,restrict=y -device e1000,netdev=network0 -netdev user,id=network1,restrict=y -device e1000,netdev=network1 -netdev user,id=network2,restrict=y -device e1000,netdev=network2 -netdev user,id=network3,restrict=y -device e1000,netdev=network3 -cdrom /tmp/ubuntu-16.04-myowninstall-amd64.iso

That, however, fails:

(Screenshot: the installer stalling because no default route was offered)

The qemu options seem to make the built-in DHCP server not hand out a default gateway. The installer seems to expect one, though, and thus stalls and waits for user input. According to the documentation, a netcfg/get_gateway value of "none" could be used to make it proceed. It’s not clear to me whether it’s a special none type, the string literal “none”, or the empty string. Another uncertainty is how to actually make it work from within the kickstart file, because this debconf syntax is for preseeding, not kickstarting. I tried several things:


preseed netcfg/get_gateway none
preseed netcfg/get_gateway string
preseed netcfg/get_gateway string 1.2.3.4
preseed netcfg/get_gateway string none
preseed netcfg/no_default_route boolean true

The latter two seemed to work better. You may wonder how I found that magic configuration variable. I searched for the string displayed when it stalled and found an anonymous pastebin which carries all the configurable items.

After getting over the gateway, it complained about missing nameservers. By putting


preseed netcfg/get_nameservers string 8.8.8.8

I could make it proceed automatically.

(Screenshot: the installer asking for nameservers)

Overwriting existing partitions

When playing around you eventually get to the point where you need to retry, because something just doesn’t work. Then you change your kickseed file and try again, on the same machine you’ve just left half-installed, with existing partitions and all. For a weird reason the installer mounts the partition(s), but cannot unmount them:

(Screenshot: the installer complaining that it cannot unmount the mounted partitions)

The documentation suggests that a line like


preseed partman/unmount_active boolean true

would be sufficient, but not so for me. And it seems to have been an issue since at least 2014. The workarounds in the bug do not work. Other sources suggested using partman/early_command string umount -l /media || true, partman/filter_mounted boolean false, or partman/unmount_active seen true. Because it’s not entirely clear to me who the “owner”, in terms of preseed, is, I’ve also experimented with setting, e.g., preseed --owner partman-base partman/unmount_active boolean true. It started to work when I set preseed partman/unmount_active DISKS /dev/sda and preseed --owner partman-base partman/unmount_active DISKS /dev/sda. I didn’t really believe my success and reordered the statements a bit to better understand what I was doing. I then removed the newly added statements and expected it to not work. However, it did. So I was confused. But I didn’t have the time nor the energy to follow up on what really was going on. I think part of the problem is also that it sometimes tries to mount the pen drive itself! Sometimes I’ve noticed how it actually installed the system onto the pen drive *sigh*. So I tried hard to make it not mount USB drives. The statements that seem to work for me are the above-mentioned boot parameters (i.e. cdrom-detect/try-usb=false usb_storage.blacklist=yes) in combination with:


preseed partman/unmount_active boolean true
preseed --owner partman-base partman/unmount_active boolean true
preseed partman/unmount_active seen true
preseed --owner partman-base partman/unmount_active seen true

preseed partman/unmount_active DISKS /dev/sda
preseed --owner partman-base partman/unmount_active DISKS /dev/sda

preseed partman/early_command string "umount -l /media || true"
preseed --owner partman-base partman/early_command string "umount -l /media || true"

How I found that, you may ask? Enter the joy of debugging.

Debugging debconf

When booting with DEBCONF_DEBUG=5, you can see a lot of information in /var/log/syslog. You can see what items are queried and what it thinks the answer is. It looks somewhat like this:

[Screenshot: 2016-debug]
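
Getting that flag onto the kernel command line is part of the ISO remastering, by the way. The following is only a rough sketch; it assumes the image boots via isolinux and that isolinux/txt.cfg is the file carrying the append line, which may well differ for your remaster:


# Sketch (file name is an assumption): tack DEBCONF_DEBUG=5 onto the
# kernel append line(s) so cdebconf logs every question and answer
# to /var/log/syslog during the installation.
sed -i '/^ *append /s/$/ DEBCONF_DEBUG=5/' iso.new/isolinux/txt.cfg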

You can query yourself with the debconf-get tool, e.g.


# debconf-get partman/unmount_active
true
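
There also seems to be a debconf-set counterpart, which the stock preseeding documentation shows being used from an early_command. I haven’t verified it in this kickstart setup, so treat it merely as a sketch:


# Sketch, untested here: set an item from the installer shell (or an
# early_command), mirroring the debconf-get query above.
debconf-set partman/unmount_active true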

The file /var/lib/cdebconf/questions.dat seems to hold all the possible items. In templates.dat you can see the types and the defaults. That, however, did not really enlighten me, but only wasted my time. Without knowing much about debconf, I noticed that you seem to be able to store not only values like true and false, but also flags like “seen”. By looking at the screenshot above I noticed that it forcefully sets partman/unmount_active seen false. According to the documentation mentioned above, some code really wants this flag to be reset, so that way was not going to be successful. I then noticed that the installer somehow sets the DISKS attribute on partman/unmount_active, so I tried putting the disk in question (/dev/sda) there and it seemed to work.
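
If you just want to peek at a single item instead of reading the whole file, grepping the cdebconf database from the installer shell should do. A small sketch (the Name: prefix is an assumption about the file layout):


# Sketch: show what cdebconf currently stores for the item, including
# flags such as "seen" and the owners, plus its type and default.
grep -A 5 'Name: partman/unmount_active' /var/lib/cdebconf/questions.dat
grep -A 5 'Name: partman/unmount_active' /var/lib/cdebconf/templates.dat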

Shipping More Software

I eventually wanted to install some packages along with the system, but not through the Internet. I thought that putting some more .debs into the ISO would be as easy as copying the files into a directory. But it’s not quite that easy, because you also need to create the index structure Debian requires. The following worked well enough for me:


cd iso.new
cd pool/extras
apt-get download squid-deb-proxy-client
cd ../..
sudo apt-ftparchive packages ./pool/extras/ | sudo tee dists/stable/extras/binary-i386/Packages

I was surprised by the i386 suffix. And although I can get over the additional apt-ftparchive call, I wish it weren’t necessary. Another source of annoyance is the dependencies: I couldn’t find a way to conveniently download all the dependencies of a given package.
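
One approach that might get close, although I haven’t verified that it catches everything (virtual packages in particular probably need manual care), is to let apt-cache expand the dependency tree and feed the resulting names to apt-get download:


# A possible sketch, not verified for this use case: expand the
# dependency tree with apt-cache and download everything into the
# extras pool of the remastered ISO.
cd pool/extras
apt-get download $(apt-cache depends --recurse --no-recommends --no-suggests \
    --no-conflicts --no-breaks --no-replaces --no-enhances \
    squid-deb-proxy-client | grep '^\w' | sort -u)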

These packages can then be installed with the %packages directive:


%packages
@ ubuntu-server
ubuntu-minimal
openssh-server
curl
wget
squid-deb-proxy-client
avahi-daemon
avahi-autoipd
telnet
nano
#build-essential
#htop

Or via a post-install script:


%post

apt-get install -y squid-deb-proxy-client
apt-get update
apt-get install -y htop
apt-get install -y glusterfs-client glusterfs-server
apt-get install -y screen
apt-get install -y qemu-kvm libvirt-bin

Unfortunately, I can’t run squid-deb-proxy-client in the installer itself, not only because I don’t know how to properly install the udeb, but also because it requires the D-Bus daemon to run inside the to-be-installed system, which proves to be difficult. I tried the following without success:


preseed anna/choose_modules string squid-deb-proxy-client-udeb

preseed preseed/early_command string apt-install /cdrom/pool/extras/squid-deb-proxy-client_0.8.14_all.deb

%pre
anna-install /cdrom/pool/extras/squid-deb-proxy-client-udeb_0.8.14_all.udeb

If you happen to know how to make it work, I’d be glad to know about it.

Final Thoughts

Having my machines installed automatically cost me much more time than installing them manually would have. I expected tangible results much more quickly than I actually got them. However, now I can re-install any machine within a few minutes, which may eventually amortise the investment.

I’m still surprised by the fact that there is no “install it, dammit!” option for people who don’t really care about the details and just want to get something up and running.

Unfortunately, it seems to be non-trivial to just save the diff between the vanilla and the modified ISO 🙁 The next Ubuntu release will thus require me to redo the modifications. Next time, however, I will probably not use the kickseed compatibility layer and will stick to pure preseeding.

Press release: GUADEC starts tomorrow in Karlsruhe

(cross-posted from http://www.guadec.org which you should follow!)

On 21 September 2016 the next GNOME release (3.22) will be published under the name “Karlsruhe”. A GNOME release is named after the most recent host city of GUADEC (the GNOME Users And Developers European Conference). At this conference, around 200 Free Software enthusiasts meet every year for workshops, talks, and working-group sessions and work on GNOME, the software solution for everyone. This year the gathering takes place from 11 to 17 August at KIT in Karlsruhe.

The event is aimed at developers as well as users and is meant to create knowledge and pass it on. We are delighted to host this year’s GUADEC in Karlsruhe. For us, Karlsruhe stands for a strong scientific and technical background combined with that certain something in creativity and design. This combination of technology and design suits the GNOME project very well, which has not only been pushing new standards on the technical side for years but also breaks new ground in user-experience design.

GNOME, the foundation with the foot as its logo, stands for three core points:

Humanity

GNOME unites not only volunteer and paid programmers but also companies and charitable organisations. We make GNOME 3, a computing system that is easy to use, has been translated into more than 190 languages, and places great value on accessibility. We develop in the open and anyone can take part in GNOME. Our communication channels are open to everyone, and the source code can be freely downloaded, modified, and passed on.

Technologies

GNOME is a flexible and powerful platform for desktop and mobile applications. We develop toolkits such as GTK+ and Clutter, the web browser engine WebKitGTK+, multimedia libraries such as GStreamer, the D-Bus messaging system, and the Pango text rendering library. GNOME developers also go further and work on essential core infrastructure such as the Linux kernel, systemd, and Wayland.

Popularity

GNOME 3 can be obtained through many well-known GNU/Linux distributions such as Debian, Fedora, OpenSuSE, and Ubuntu. GNOME technologies drive many companies such as Endless Mobile, Amazon, TiVo, Nokia, TouchTune, Garmin, and TomTom. Many other companies use GNOME components free of charge under the Free licence of our software.

We gladly invite you to a meeting or an interview during our conference. For questions, Tobias Mueller (tobi@guadec.org or +4915153778790) is happy to help.

More information about GUADEC and GNOME is available at: http://2016.guadec.org

GNOME Keysign 0.6

It’s been a while since I reported on GNOME Keysign. The last few releases have been exciting, because they introduced nice features which I had long been waiting to get around to implementing.

So GNOME Keysign is an application to help you in the OpenPGP Keysigning process. That process will eventually require you to get hold of an authentic copy of the OpenPGP Key. In GNOME Keysign this is done by establishing a TCP connection between two machines and by exchanging the data via that channel. You may very well ask how we ensure that the key is authentic. The answer for now has been that we transmit the OpenPGP fingerprint via a secure channel and that we use the fingerprint to authenticate the key in question. This achieves at least the same security as when doing conventional key signing, because you get hold of the key either via a keyserver or a third party who organised the “key signing party”. Although, admittedly, in very rare cases you transfer data directly via a USB pendrive or so. Of course, this creates a whole new massive attack surface. I’m curious to see technologies like wormhole deployed for this use case.

The security of going to the Internet to download the key is questionable, because not only do you leak that you intend to communicate with a certain person, but you also expose yourself to attacks like someone dropping revocation certificates or UIDs of the key you are interested in. While the former issue is tackled by not going to the Internet in the first place, the latter had not been dealt with. But those days are over now.

As of 0.5, GNOME Keysign also generates an HMAC of the data to be transferred and encodes that in the QR code. The receiving end can then verify whether the downloaded data matches the expected value. I am confident that a new-generation hash function would serve the same purpose, but I’m not entirely sure how easy it is to get Keccak or SipHash into the users’ hands. The HMAC, while being cryptographic overkill, should be fine, though. The construction leaves a bad taste, especially because a known key is currently used to generate the HMAC, but it’s a mechanism built into Python and I expect to replace it with something more sensible eventually.

In security, we had better imagine a strong attacker who is capable of executing attacks which we think are not necessarily easy or even possible to mount. If we can defend against such a strong attacker, then we may trust the system to resist weaker attacks, too. One such difficult attack, I think, is to inject just one frame while, at the same time, controlling the network. The attack could then make the victim scan a rogue barcode which delivers a rogue MAC which in turn validates the wrong data. Such an attack should not go unnoticed and, as of 0.5, GNOME Keysign will display the frame that contained the barcode.

This is what it looked like before:

[Screenshot: 2016-keysign-before]

And now you can see the frame that got decoded. This is mainly because the GStreamer zbar element also provides the frame.

[Screenshot: 2016-keysign-after]

Another interesting feature is the availability of a separate tool for producing signatures for a given key in a file. The scenario is that you may have received a key from your friend via a (trusted, haha) pendrive, a secure network connection (like wormhole), or any other means you consider sufficiently integrity-preserving. To sign that key you can now execute something like python -m keysign.gnome-keysign-sign-key, which runs all the signing logic but without the whole key transfer stuff. This is a bit experimental though, and I am not yet happy about the state that program is in, so it’s not directly exposed to users by installing it as an executable.

GNOME Keysign is now available in OpenSuSE. I don’t know the exact details of how to make it work, but rumour has it that you can just do a zypper install gnome-keysign. While getting there we identified a few issues along the way. For example, the GStreamer zbar element needs to be present. But that was a problem, because the zbar element was not built since the zbar library was not available, so that needed to get in first. Then we realised that the most modern OpenSuSE uses a very recent GnuPG which the GnuPG library we currently use does not handle so nicely. That caused a few headaches. Also, the firewall seems to be an issue which needs to be dealt with. So much to code, so little time! 😉

Talking at OpenSuSE Conference 2016 in Nuremberg

I was invited to this year’s OpenSuSE Conference in Nuremberg, Germany. I had been to that event two years ago in Dubrovnik which I enjoyed so much that I was eager to go again.

[Image: oscfinal]

The venue was very easy to find due to posters hanging everywhere. The flow of information was good in general. That includes daily emails which highlighted items in the schedule or restaurant recommendations for the evening.

I arrived just in time for my first show on GNOME Keysign. For better or worse we only had very few people, so we could discuss matters deeply. It was good, because we found bugs and other user-facing issues that need to be resolved. The first and most obvious one was GnuPG 2.1 support. Although still experimental, OpenSuSE ships 2.1 by default. The wrapping library we’re using to interact with GnuPG did not support calling the newer gpg, so we had to identify the issues, find a fix, and test. It eventually worked out 🙂

I also had a talk called “Five years after 3.0” which, to my surprise, has been covered by reddit and omgubuntu. I was also surprised by the schedule, which only gave me 30 minutes instead of the usual 45 or 60. I was eventually politely reminded that I had significantly exceeded my time *blush*. We thus needed to move discussions outside, which was fruitful. People at OpenSuSE Con are friendly and open-minded. It’s a pleasure to have arguments there 🙂

I didn’t actually see many talks myself, although the schedule was quite full of interesting topics! But knowing that the VoCCC people were running the video recordings, I could count on recordings being available after a few days, or rather hours.

But I have had very interesting and enlightening discussions about distributions, containerised apps, the Open Build Service, OpenQA, dragging more GNOME people towards OpenSuSE, fonts, and other issues. That’s the great thing about conferences: you get to know people with interesting stories. As for the fonts, for example, I was discussing the complexity involved in rendering glyphs and whether this could eventually lead to security problems. I think the attack surface of fonts has been undervalued and needs some investigation. I hope I can invest some time in looking at building and modifying fonts. I also found it interesting to discuss why I would not recommend OpenSuSE as a GNU/Linux distribution to anyone, mainly because I need to reflect and challenge myself. Turns out, I don’t have any good reason except that my habits simply don’t include using OpenSuSE myself and I am thus unable to give a recommendation.

I think they have interesting infrastructure, though. I see the build service for having people’s apps built and OpenQA for having them tested. Both seem to be a little crude overall, but they could become the tools to use for distributing your flatsnappimgpack. An idea was circling around to have a freedesktop.org for those app image formats and execution environments, but in a somewhat more working state. I think the key to success of any such body is being lightweight and not ending up like openstack. Let’s hope we can bring together people who work on the various parts, or even implementations, of containerisation for desktop applications. I also hope that the focus for containerised desktop apps will be isolation from other apps rather than actually distributing the software, because I don’t think we have a big problem with getting Free Software into the user’s hands.

So a big “thank you” to this year’s organisers for this event. I hope I can attend one of the following conferences 🙂

Talking at GNOME.Asia Summit 2016 in New Delhi, India

It’s spring time and that means it’s time for GNOME.Asia Summit! This year’s edition took place in New Delhi, India. This year marks five years since the initial GNOME 3.0 release. In fact, an important release-planning hackfest happened five years ago in India, so it’s a somewhat remarkable date.

The conference felt a little smaller than the last edition, although I guess the college we were hosted at tried hard to bring their students to the talks. That was especially noticeable in the opening slot, where everybody who felt sufficiently important had something to say. The big auditorium was filled with students, but I doubt they were really interested or listening. The opening was a bit weird for my taste, anyway. I have seen many conference openings, I would say, but the guy from the college who opened GNOME.Asia 2016 seemed to be a little bit confused, I have the feeling. He said that GNOME started in 2008 so that all the software you use can be had freely and so that you can upgrade your devices, like GPS satnavs. The opening ceremony (and yes, it’s really more of a ceremony than a short “welcome, good that you’re here” talk) seems to be quite a formal thing in this college. Everybody on the stage received a bouquet of flowers, and many people were greeted and saluted, which stretched everything to an enormous length and in turn made the schedule slip by two hours or so.

Cosimo keynoted the conference and presented his ideas for the future of the GNOME project. We’ve come a long way, he said, with GNOME 3, which was initially released five years ago. GNOME has aged well, he said: no wrinkles can be seen and GNOME is looking better than ever. He said that he likes to think of GNOME 2 as GNOME in the era of distributions, because you could plug together the modules that you liked, and everybody liked that. The pain point, he said, was that distributions chose which modules to plug together, which finally decided the user experience. Because module proliferation was felt to be impacting the project negatively, the new world of GNOME 3 was introduced. One of the most controversial but also most successful things GNOME 3 did, he said, was to put the responsibility of defining the user experience back in upstream’s hands by eliminating choices. While causing people to complain, it led to a less complicated test matrix which eventually made GNOME accessible to less technical people. He said GNOME 3 is the era of operating systems, so there are no longer distributions packaging GNOME but rather operating systems built on GNOME, like Endless, Mint, or Solus. The big elephant in the room is the role of applications, he said. If cohesive operating systems are built upon GNOME, how can applications work with different operating systems? Currently, you cannot, he said, run elementary applications on GNOME and vice versa. xdg-app will hopefully address that, he said. It’s a big transition for the GNOME project, and that transition is even bigger than the one from GNOME 2 to GNOME 3, he said. Unfortunately, the audience seemed to be a little tired from the length of the opening session and it felt like they were demanding a break by starting to chat with their neighbours…

Pravin then continued to talk about the state of Indian languages in GNOME. He mentioned that some Indian languages are well supported while some others have no support at all. He also showed that with Fedora 24 you get a text prediction engine. So you can type Latin characters for the word you want to enter in a different script. The Q&A revealed that the list of suggested words is sorted by frequency. Apparently they did some analysis of usage of words. I wonder whether it’s also able to learn from the user’s behaviour.

The talk on privacy given by Ankit Prateek showed how your typical Internet and Web usage leaves traces and what mitigations you could employ. He mentioned specific Web attacks like Super Cookies or Canvas fingerprinting. He recommended using NoScript, whose usefulness the audience immediately questioned. To my surprise, he didn’t mention one of my favourite plugins, Google Privacy, given that Google remembers what search results you click.

I got to talk about five years of GNOME 3. I conveyed the story of how the 3.0 release happened and what was part of it. For example, we had so many release parties with swag being sent around the world! But I also showed a few things that have changed since the initial 3.0.

Another talk I had was about security. I explained why I see GNOME being in the perfect position to design, develop, and deploy security systems for a wide range of users. First, I ranted about modal dialogues and prompts, and that they are not a good choice for making a security decision. Then, I explained how we could possibly defend against malicious USB devices. I think it’s work we, as developers of a Free Software desktop, have to do in order to serve our users. Technically, it’s not very hard; e.g. you block new USB devices that are plugged in while the screensaver is shown. We know how to do the blocking and unblocking of USB devices. More subtle issues involve the policies to apply and how to make the user aware of USB devices. Another pet peeve of mine is keysigning, so I also ranted about the state of the art and how we can and should improve things.

Thanks to the local organising people and the GNOME Foundation for flying me in and out.
Sponsored by GNOME!

Talking at FOSSASIA 2016 in Singapore

This year I was able to attend FOSSASIA in Singapore. It’s quite a decently sized event with more than 150 speakers and more than 1000 people attending. Given the number of speakers you can infer that there was an insane number of talks in the two and a half days of the conference. I’ve seen recordings being made, so I would expect those to show up at some stage, but I don’t have any details. The atmosphere was very friendly and the venue a-maze-ing. By that I mean that it was a fantastic and huge maze. We were hosted in Singapore’s Science Museum which exhibits various things around biology, physics, chemistry, and much more. It is a rather large building in which it was easy to get lost. But it was great being among those sciency exhibits and exchanging ideas and thoughts. Sometimes we could see an experiment being run as a show for the kids visiting the museum. These shows included a Tesla coil or a fire tornado. Quite impressive.

One of the first things I could see was Cat Altmann talking about her position being in the field of marketing, which, she says, engineers tend to not like. But as opposed to making people like things they don’t need or want, she is rather concerned with reaching out to people to open source code. The Making and Science team within Google exists to work on things like sending kids from under-resourced schools on field trips. Science also plays a role in this year’s Summer of Code: 43+ out of 180 projects are related to science, she said.

Nikolai talked about the Nefertiti hack. In case you missed it, the Nefertiti bust was “cloned”. The bust is a 3000-year-old artefact which is housed in Berlin and is publicly available (for people who can travel to Berlin…). The high-resolution data of the scanned object is, however, not available, along with much other data that the museums have about their objects. He compared that behaviour to colonisation; I couldn’t really follow why, though. Anyway, they managed to scan the bust themselves by sneaking into the museum and have now released the data. Their aim, as far as I could follow, was to empower people to decide what culture is and what is not. Currently, it is the administration which decides, he said. With the data (and I think with a printed copy of the bust) they travelled to Egypt to make an exhibition. But beforehand, they had produced a video which substantiates the claim of having found a second bust while digging for artefacts. The talk itself was interesting, but the presentation was bad. The speaker got lost a few times and didn’t know how to handle the technical side of the presentation. Anyway, I like it when such guerrilla art makes it into the news.

Lenny talked about systemd and covered some news that you may have missed if you’re not following its development too closely. He said that systemd has moved to GitHub and, while it’s attracting new contributors, it still has major issues, which he didn’t mention, though. A component named networkd is now the default on both Fedora and Ubuntu. It’s a rather underwhelming piece of software though, because it has no runtime interface. The nspawn tool is used by CoreOS’s rkt, a Docker alternative. He also mentioned sd-bus, which he claims is a replacement for the reference D-Bus implementation. Another interesting thing he mentioned is that systemd can do not only socket activation but also USB Function FS activation. So whenever you need to start your USB gadget only when the USB cable has been plugged in, systemd may be for you.

In another session, Lenny continued talking about systemd with regard to its container capabilities. Containers are all the rage, right..? He said that all the systemd tools work on containers, too, with the -M switch. systemd also just works inside a container, with the exception of Docker, he said. It is also possible to make systemd download, verify, and run full system images. Funnily enough, he said, Ubuntu images are properly signed but Fedora images are not.

I also had a talk and a workshop to give. The workshop was titled “Functionality, Security, Usability: Choose any two. Or GNOME.”, which is a bit sensational, I admit. I’m not experienced with holding workshops and it was unclear to me what to expect. Workshop sounds to me like people come because they want to hack on something. The venue, however, was not necessarily equipped with a reliable Internet connection for the attendees. Also, the time was set to one hour, and I don’t think you can do a meaningful workshop within one hour, so I didn’t really know how to prepare. I ended up ranting about OpenPGP, GnuPG, and SKS. Then I invited people to hack on GNOME Keysign, which was a bit difficult given the time constraints we had. Well, that I mainly had, because I was meant to give a proper talk shortly after.

During my talk, I gave a glimpse of what to expect from GNOME 3.20, codenamed Delhi. It was a day before the release, so it was the perfect timing for getting people excited. And I think it worked reasonably well. I would have loved to be able to show the release video, but it wasn’t finished by then. So I mainly showed screenshots of the changes and discussed on a high level what GNOME is and what it is not. People were quite engaged, yet they still believe GNOME 3 was designed for tablets.

FOSSASIA used to be in Vietnam and was actually co-hosted with GNOME.Asia Summit once. It smells like we could see such a double event again in the future, but probably in Singapore. I think that’d be great, because FOSSASIA is a well organised event, albeit with a little chaos here and there. But who doesn’t have that… In fact, I nearly couldn’t make it to the conference, because GNOME did not react for two weeks, so the conference removed me from the schedule. Eventually, things worked out, so all is good, and I would like to thank the GNOME Foundation for contributing to the coverage of the costs.

Sponsored by GNOME!

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.