Know your corridors – booking cheaper train tickets

In the past I showed you some interesting tricks to get cheaper travel fares. In a similar vein, I’d like to explore different train corridors with you.

Let’s consider a hypothetical route from Bremen to Jena. That’s from the north of Germany to somewhere in the middle of the east of the country. The “normal price” of such a ticket is anything between 85 and 137 EUR.

Search result for going from Bremen to Jena

“Why the difference?” you may ask. Good question. By looking at the details of the connections we can see that the transfer stations differ. The first connection goes north through Hamburg.

Search result of going through the north

Map of going through the north

The second connection seems smarter, going south through Hannover but then plowing through the east.

Search result for going through the east

Map of connection going east

The last connection is arguably the most natural one: going to Goettingen and then taking a local train to Jena.

Search results for going through Goettingen

map of connection going through Goettingen

More combinations exist. For example, going through Hamburg, then via the east route to Erfurt and then to Jena. That is probably also the most expensive route.

We can see those lines on the official map of long distance train lines.

Plan of long distance (IC) trains

If you are looking for cheap train tickets you should ask the “Sparpreisfinder” (or “cheap fare finder”). If we provide it with our intended journey a few days in advance, it finds tickets as cheap as 29.90 EUR. That’s already quite good. There’s no shame in stopping here and buying that ticket. After all, the Sparpreisfinder is advertised as finding the cheapest ticket, so it can’t get any better, can it?

Sparpreisfinder results

Notice how our connections so far have made use of local trains. According to the map, a connection using long distance trains only exists via another corridor. More specifically: through Leipzig. It’s a longer route but may be cheaper due to the complicated pricing model and the convoluted stack of stakeholders associated with the various trains and lines being operated. It’s not imperative to know that the Deutsche Bahn has three product categories, but it may help to understand the pricing system a little bit better. Product category “A” is for long distance ICE trains, “B” is for the less comfortable and slower IC trains, and “C” is for local trains.

We can force the search to find connections via that new corridor, Leipzig, and hope to find fewer product categories used:

Notice how both checkboxes at the bottom are unticked

Notice how both checkboxes at the bottom are unticked. This allows the search to return longer routes. And indeed, we find an even cheaper connection than the one the DB’s Sparpreisfinder was willing to give us.

cheapest fare not found by the sparpreisfinder

You may now say that restricting the journey to long distance trains can be achieved more easily by simply telling the search to only find those:

Notice the checkboxes in the bottom

But even then you wouldn’t get that 19.90 ticket:

no ticket for 19.90 although only long distance trains are being sought

So, lessons learned: Look at the map of train lines to see which connections exist. Check whether your journey can be performed with long distance trains only. Check your search results for the various corridors and note whether long distance train-only connections exist. If not, force the search to find you a connection through the corridor you have identified.

New OpenPGP key

PSA: I’ve rolled over my OpenPGP Key.

The old key F289F7BA977DF4143AE9FDFBF70A02906C301813 is considered too short by some, and it is old enough to be retired.

My new key is F98D03D7DC630399AAA6F43826B3F39189C397F6.

It’s been a while since I last did that. And GnuPG still makes it hard to use an expired key: I cannot sign this transition statement with both keys as suggested by this document. Also, I might consider using a service such as https://www.expirybot.com/ to tell me when it’s time to think of a strategy for the next roll-over. It’s a shame we don’t have such tooling in place for the desktop.
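
For reference, this is roughly the invocation a dual-signed transition statement would call for (the file name is hypothetical); with the old key expired, GnuPG refuses to produce the first of the two signatures:

gpg --local-user F289F7BA977DF4143AE9FDFBF70A02906C301813 \
    --local-user F98D03D7DC630399AAA6F43826B3F39189C397F6 \
    --clearsign transition-statement.txt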

Anyway, feel free to grab the new key from the WebPKI-protected resource here.

sec   dsa1024 2008-12-03 [SC] [expired: 2018-02-28]
      F289F7BA977DF4143AE9FDFBF70A02906C301813
uid           [ expired] Tobias Mueller 

sec   dsa3072 2018-03-17 [SC] [expires: 2023-03-16]
      F98D03D7DC630399AAA6F43826B3F39189C397F6
uid           [ultimate] Tobias Mueller 
ssb   elg3072 2018-03-17 [E] [expires: 2023-03-16]

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQGiBEk21OkRBAC5XDJFPbA2WhhvbKdu51ZOL3iPwMPSp8k1U5qEY0uChuI5eekL
VUHhsQJP+mNSwPMKJuyAZgMtdeG+YHEo06Rh9beOxCRtR1Y4Jzl1iP2jslHUu/r9
O0DLXUOdbsi0oSfNGJt4nxdZheIr/eH18w8Sp/P1l3YooRcyC3wa/lwR2wCgps+U
nBLN5JbQaz4NeoDXROhwDNkD/0mPTNX7WyPWJoEpFwSSKD1ZPaF46fFDhlK6pdyv
120TYoRAtQBYew3XkiW+ZoW4OZiqVJLTqqI8bADSeZ4AaJEgRyKb99tRrpyWFhpk
vMoxtfw2qXDGidcsaeJfqzSw3+qjfjQJMGEkADW5+7d8URTU2I4GURoUUNVYRr+O
lVdUA/4qdHe2E5pkYXlQZAuq8DjUqvqKtay7uCCMFv0NZtXWNn+sPTq108tMfoQ9
QdS74NlbS4Hh/ttMzTZi1z7AEI4Kf74qY77xBjyhPFPSyThxPRH8WjRQLqrZHe0I
G1MtoEFeRfixqCCDtQyTrhQomEgYgSx7phQUTMrCP3wC8uTB6LQkVG9iaWFzIE11
ZWxsZXIgPHRvYmlhc211ZUBnbm9tZS5vcmc+iGYEExECACYCGwMGCwkIBwMCBBUC
CAMEFgIDAQIeAQIXgAUCVtxYfQUJEV+2xwAKCRD3CgKQbDAYE+7uAJ9KEPe5Qqmb
uQBQND8lL89kjBaHZQCeL4lji4/Pxu1wZK+2M1HgXh9dvNK5BA0ESTbU6RAQAK2Q
cpYuL/ieNSzVcXOCLzPMs+jL+nx7GLINoRdTZkK1394bPu4ZUrcUfsfE/Ehm1209
kzD4kXSqbtMCZ0QLo5Ohvzq/TS/d+5kavYiVs0V9UCnjdu0zl/MHjOZ3aghvGKJo
YB+eQNIVoCTLmZIWSE3tHdy+ocoRCPYXLOLZJgewkRmtfOrfg9n+YisLwVryDO4V
kzNZGe+lXf8JKbEXXnL55g+P6gkfDp6IRPm8QetJQ7j6o7xzFGDA+0oJLHc2NgMV
QdZAVZuZwOUJtHkv+HuH88b2mUBAkVLUv1pgyEjEqV45I+OiFdcy9BluzTtXMw00
3oovmmHTLCOars/GGK+tBS6uFMNX59V2lb0QycLwzg6lnedShvl0ivr//khZ2kJ1
avyInTGUdMy5CGM4YB84X8p63EbhrapkM3JmqlZrR71t5GM4nk3IumGCZY1LzO/7
mVbegtYvBc6vcYJqPjwPSLlBdcquQjD49I2UlDI5EfO91rEwkpvVo9RKigKes4yZ
kEP4bzOaXbMrNYuC3ssrbS/zH9w+TPlMSW1w97D2V5mTHsjbaGEgjzPZY3zWklrq
RnHlGEThTsHL0OsdQ0OI7cWUiZn2fMqGSofdeqFAf/o2IAlHGcyAK6MCRNR802dC
8M5XDh1d67QDA2lWL/axMjqgxHKwIR/ufU4EDlTzAAMFEACYnYVIjmfG2eitMowG
21GhIbAyyKjj+xBWeCjB1BVw3ir04fw5I/xaEd5VP3NTY+yv7TZWPdf3myN1SbhT
xeu0YVyzEDs7y2ekpDe6YNiTSRAOoWeb1YObiY6UGTa/vvHtK++Kx+wq7hbdaCaK
AUW2NZ8T9M8p/wX0jxx+CyV/iIfJuhdl9Q08bqRdlnWH+zfHaxq1syUQBMVPVrU/
Gi+IOwo6qcQiW5BI9mZ9WRLQKE5gbvlbzLL49TKcFRq4bCnQXtj6gVCYUIT+wm7I
aNYXl3NcWNWL6GkmuvSxfoF8z/R5Iw+4LUlngSYscXE9c+rFVzp9gfOOqlCbyFRx
1WCHtyLhZfzZub2MmJHQ0+MuYbcP4bG7rGq2QqpYpKYTuqVhhHx8S2EpOOTlx0ys
vLbbGoRdF29i6L4UFl6Tfy1j2S+FSAmug/iv7KbYj4FVeGBh1Rrt+cF//sUXBO6p
OefLSbXRMt7kZNnIyhNfWrM62n03R2nKucTE8oLlFrwZtZ+VTQZicirvwMsC965a
3a3YSX5VD8Ix3/+nYsDeTC8QUTWgK3wmth3gLy1I6ElU5yiSUHMfbFwmI4RFVh5b
fDnoompIAiINvM6KXSxwFfQ1Zwwkxw0iOx7rQ2F/HgTTpXkVA5bISCPChkfIp/WF
S9L5aiwvjo8rXPhb2ChWm1GvpIhPBBgRAgAPAhsMBQJW3FiZBQkRX7bHAAoJEPcK
ApBsMBgTjcIAoJG4JYCcs0RtH8/khWE1nxZzfhgqAJ9DTOJGHUNVqVBg+GtULiuY
wE7HxpkErgRarZOvEQwAnl8+RVZd1Ly24jGPGPW+P57ASaYlwGMj0ifQTvVAfKOE
NCQjcW7njMywLbhFOgDUpR0OfCiK/TJBqLXIa1G2KN2yMQfCCJdrL6bOTT1catMK
vTw9yTjiRnguk6HbM0IuQSbIlTNsLPwUGDQ1NMe3KckcRuLEjdTpubBTRtpYDM50
4Uexua5DS99MTuc+Mgm1S0o9Zft6yu7rpFjf18klqOqqoP+87FM9BQFm1WctLR02
n0e4NY7akryVH7/W0uDJkV4n5Ye7j8F98VAagoxCSrk8lX3AhAaMBzrbFjzstxJw
o8Vlt3ZTC1d7L8wbgFTEZtErmBKruTRV3BqOEhu9aiao2uwPe2Jhb0D5rjuHCvcF
DqmIhODWsfxuoy5uHy0wL4oSjttEfF8x5jtB3DBVrEBN+tReed5asNMfeQAZQqSZ
fIJ9gXbVE6FTJz8MrLHsn0dculG3lQezddYbc0TMPxg4f0v8sj2xk/LV2lXdTV/o
eB4VbRqkwiT+J/+SHE+TAQDcQCw44rbOCuZx8rzyvwD1W+Jw0IUIIvNwnB7y2HBX
uQwAnLbEC6D8dvYXJc1ltV3eBYiCwneO4S/WX4C9Yp1y0kQLILybbRaOyF41L9OE
7xeFnmr9xnNfS7MgKjdVAm6P99xuZic1S6uC7r8oaL3/R69coO/nwOEN5eVMwpwp
x2tt/Nhig2QT8kyjlVIZe7oppDpf/PWk8SxAXZgwWSa23nawMtp4P9L4VaPbwnPw
kT0nY4RI60jAZmG00GC0cYjomPGnfkg1vfJoGnSMghz7f7+syXGOEIE6SZLCKE1m
hssJmh390x6gAPQjKohU62micEm0VVGEax+fUN0jTnCpp8BXYSWbY66FB1dL4Mtl
o+2i1+7O5hJvnylrLCfRheETazlW/xqxS4ZBk2olaC8FZF1Tqo66rBlFDPVOm8de
rlACppC/px+apDDTZju3UaWPhMIPlhP9M3vkgouJgHRhqvAH9ZgwtlFV9v1wvyOn
91cQFC1y0YKkptIn6X2mlgiZIoizq8no5X0Eq+RtffCJlqDsfj9WncKlk4Oh5Bk6
FFfJC/sGlv7BkhORO1kHZZTHD7arCI3vOSJu8b+W9FnbKLho1/So9wnjQoFe6PO/
Q6PbNvNJnTCSwCEvxm1L7XBLIJtO5uOrrKge5BXNT9a/+oUMem702UU9bydcTF2m
Ux4TbLlstnXa4C+9821aZltSb+pnGPkNBGwprpP6cOTx1TIb6rUmpPqKdVDTMpeg
kDVeSDltOJShTstXH1y3h39WwRuH9c/tnW+m7lpyGmz2yVL4itU7qadygO6u06/2
SuLiyg+WSNh/s6LBsS+Qx+6MwXqs69JXcB13/BK5uqP2ub1ilBN/7aqTIMoESK2l
GHO+g8m3q1Cn0JSZ/ohMC8HMqU8NaHcv4jAh5yRds+UQ7YkxuwIpoxNgawNkW2rk
22JKkuCQWAb04Iiq5sBQSijlfkJoPVLiDW4mMhvrUs7ru1bu+mFS5G3vV/xem7xC
dDxLvIYslX8onZsVqxuzHmZLEXFstz/rfVQMH03I6rqt82Ogf9boNGYxvPUi+WqB
EwmdoFe0JFRvYmlhcyBNdWVsbGVyIDx0b2JpYXNtdWVAZ25vbWUub3JnPoh/BBMR
CAAnBQJarZOvAhsDBQkJZgGABQsJCAcCBhUICQoLAgQWAgMBAh4BAheAAAoJECaz
85GJw5f2jcoBAIxZr3oM7QObsHgXE3Aawi3oqsC3KYKv+u431WPsmM1UAQCZpc2F
fVHEa+4234cTmCnQbbFrCXxIeB8k2myZ76Hg+rkDDQRarZOvEAwAkwSKzWkK6pYk
sa6LgZdRKjGghFcwCFprptDyAv9iLfs/s+8zfUhHD2stliStDNtuGvhYeN0/o7T/
F6TJ9JZX0QhI/+LqeQTWvRsZ9KoevSpTp0SpNfQa66b+WCfeGIw9BwKnNn+p5SNc
kxjP3DZvyH57CzctoA8yFiAXK0OTijNiBO89llyCLCdmMHR/BANwg/Lv1AWL/M0a
4oibQR6+GR8T+3ydzaZvkxn+hNdTE50YslWFzXXR0brivparCWijUUpnn3Prgaus
wKvOz8QFfe8Qz037ydd8XXd40S3IA4/GYnuVn+opaxGVMsjMNh1yhYp5AiYU1VD4
FTXdKLZRMXtDr+TTB1qSN6KCJ7EQ/O/tWY704ldIE3zLKQV4dYXDhau4jKx1j9yz
xkb4PpJD+b6NBvwNGoTvw4iBsRNXwRho0WMYo304Dh64edBhMC/h3lhj+wCQONR7
yYv0l3hY8pmJXY409uCGwWUoQ9yr+ynk+fM7vNfOeZA6mVoJCuQbAAMFC/98wKAL
VNTDpmsQvZNebGkYUB2QxEeGtcqUaRly/DcKveb3SpJ4CRrxTVBsZQVVeHYbOQPH
5cKOWprV5RUV08urNVxY5Dlu3PCZTMHT2L2otPGiInxk0CRFOWheo5lu7LRdQMSI
i8m+1iP5heQrfTVt+2vS/ogkMtWbjWRaj1gzHQ0wEEtz+bF8t4f6pjQSUKU7R35g
z4GqTbDG+BsW5yD5tMG+2+Ide/c0dl4aGSmBHwJdhL/78Zi//z2VNt1h89rk8zyL
pfZVl6sq78ukqy30or9LWRkDSYrl8WKrLGXWzQgWLgN97Srd7dXRVqRZqVG7/3mG
6PbQzUiYj+pcItJfhGCp8oVwOBpnhNnywxsAGk5f2G4B9XUWv21aE4ZzaQLi0DnA
3+kzTLULI6yd6LfdhjEJaj1TcW8gxWgRGkyU/5mva9pXvYr5yZzNGxXAyLFliuNU
QHfPbuokGlr/ERy9J8CMmT2LYY+eocCeMpN/oyaZX9j2C4pqei41/Ich5M2IZwQY
EQgADwUCWq2TrwIbDAUJCWYBgAAKCRAms/ORicOX9rrnAQDBKBceDhsxXKWZQvuR
Me/juPtunEHhxSiPQa1i61djCgD5ATxw0MjcM/bRHiPFj8JJmvKeRfrZLMZsdNCA
BkK1L5A=
=Aij5
-----END PGP PUBLIC KEY BLOCK-----

First OpenPGP.conf 2016 in Cologne, Germany

Recently, I’ve attended the first ever OpenPGP conference in Cologne, Germany. It’s amazing how 25 years of OpenPGP have passed without any conference for bringing various OpenPGP people together. I attended rather spontaneously, but I’m happy to have gone. It’s been very insightful and I’m really glad to have met many interesting people.

Werner himself opened the conference with his talk on key discovery. He said that the problem of integrating GnuPG in MUAs is solved. I doubt that with a fair bit of confidence. Besides a few weird MUAs (mutt, gnus, alot, …) I only know KMail (which should maybe also go into the “weird” category 😉 ) that uses GnuPG through gpgme, which is how a MUA really should consume GnuPG functionality. GNOME’s Evolution technically supports GnuPG, but only badly. It should really be ported to gpgme. Anyway, Werner said that the problem of encryption has been solved, but now you need to obtain the key for the party you want to communicate with. How can you find the key of your target? He said that keyservers cannot map a mail address to a key. It was left a bit unclear what he meant, but he probably referred to the problem of someone uploading a key for your email address without your consent. Later, he mentioned the Web of Trust, which is meant for authenticating the other user’s key. But he disliked the fact that it’s “hard to explain”. He didn’t mention why, though. He did mention that the WoT exposes the global social graph, which is not a desirable feature. He also doubts that the Web of Trust scales, but he left the audience wondering why. To solve the mapping problem, you might imagine keyservers which verify your email address before accepting your key. These, he said, “harm the system”. The problem, he said, is that such a system only works with one keyserver, which would harm the decentralised nature of the OpenPGP system and bring us back into the X.500 dark age. While I agree with the conclusion, I don’t fully agree with the premise. I don’t think it’s clear that you cannot operate a verifying server network akin to how it’s currently done. For example, the pool of keyservers could only accept keys which were signed by one of the servers of the pool within the last, say, 6 months. Otherwise, the user has to enrol by following a challenge-response protocol. The devil may be in the details, but I don’t see how it’s strictly impossible.

However, in general, Werner likes the OpenSSH approach better. That is, it opportunistically uses a key and detects when it changes. As with the Web of Trust, the key validation happens on your device only, rather than, say, having an external entity sell the trust as with X.509.

Back to the topic. How do you find the key of your partner now? DANE would be an option. It uses DNSSEC which, he said, is impossible to implement a client for. It also needs the collaboration of the mail provider. He said that Posteo and mailbox.org have this feature.

Anyway, the user’s mail provider needs to provide the key, he said. Web Key Directory is a new proposal. It uses https for key look-up on a well-known name on the domain of the email provider. Think .well-known/openpgp/. It’s not as distributed as DNS but as decentralised as eMail is, he said. It still requires the collaboration of the email provider to set the Web service up. The proposal trusts the provider to return genuine keys instead of customised ones. But the system shall only be used for initial key discovery. Later, he mentioned that revocation would be handled via the protocol™. For some reason, he went on to explain a protocol for submitting a key in much more detail rather than expanding on the actual key discovery protocol: what happens when the key becomes invalid, when it expires, when it gets rolled over, etc.
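
For what it’s worth, a sufficiently recent GnuPG 2.1 can already try this discovery mechanism from the command line; a minimal sketch with a made-up address:

# Ask gpg to locate the key via the Web Key Directory of the mail domain
gpg --auto-key-locate clear,wkd --locate-keys alice@example.org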
----

Next up was Meskio, who talked about key management at LEAP, the LEAP Encryption Access Project. They try to provide a one-stop solution to encrypting all the things™. One of its features is to transparently encrypt emails. To achieve that, it opens a local MTA and an IMAPd which then communicate via a VPN with the provider. It thus builds on the idea of federation the same way current email protocols do, he said. For LEAP to provide the emails, they synchronise the mailbox across devices. Think of a big Dropbox share, but encrypted to all devices. They call it Soledad, which is based on u1db.

They want to protect the user from the provider and the provider from the user. Their focus on ease of use manifests itself in puppet modules that make it easy to deploy the software. The client side is “bitmask“, a desktop application written in Qt which sets everything up. That also includes transparently getting keys of other users. Currently, he said, we don’t have good ways of finding keys. Everything assumes that there is user intervention. They want to change that and build something that encrypts emails even when the user does not do anything. That’s actually quite an adorable goal. Security by default.

Regarding the key validation they intend to do, he mentioned that it’s much like TOFU, but with many, many exceptions, because there are many corner cases to handle in that scheme. Keys have different validation levels. The key with the highest validation level is used. When a key roll-over happens, the new key must be signed by the old one and the new key needs to be of at least the same validation level as the old one. Several other conditions need to hold as well. Quite an interesting approach, and I hope that they will get more exposure and users. It’s promising, because they don’t change “too” much. They still do SMTP, IMAP, and OpenPGP. Connecting to those services is different, though, which may upset people.


More key management was covered by Comodo’s Phillip Hallam-Baker, who then went on to talk about The Mathematical Mesh: Management of Keys. He also doesn’t want to change the user experience except for simplifying everything. Every button to push is one too many, he said. And we must not write instructions. He noted that if every user had a key pair, we wouldn’t need passwords and every communication would be secured end-to-end. That is a strong requirement, of course. He wants to have a single trust model supporting every application, so that the user does not have to configure separate trust settings for S/MIME, OpenPGP, SSH, etc. That again is a bit of a far-fetched dream, I think. It’s certainly worth working towards, but I don’t expect to experience such a thing in my lifetime. Except when we think of a single closed system, of course. Currently, he said, fingerprints are used in two ways: either users enter them manually or they compare them to a string given by a secure source.

He presented “The Mesh”, which is a virtual store for configuration information. The idea is that you can use the Mesh to provision your devices with the data and keys they need to make encrypted communication happen. The Mesh is thus a bit of a synchronised storage which keeps encrypted data only. It would have been interesting to see him relate the Mesh to Soledad, which was presented earlier. Like Soledad, you need to sign up with a provider and connect your devices to it when using the Mesh. His scheme has a master signature key which only signs an administration key that is created later. That in turn signs application and device keys. Each application can create as many keys as it needs. Each device has three device keys; unfortunately, he did not go into detail on why these keys are needed. He also has an escrow method for getting the keys back when disaster happens: the private keys are encrypted, secret-shared, and uploaded. Then you can use two out of three shares to get your key back. I wonder where those shares would be uploaded to, though, and how you would actually find them again.
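
The two-out-of-three escrow idea itself is easy to play with using off-the-shelf tooling; a rough sketch using the ssss implementation of Shamir’s secret sharing (splitting, say, the passphrase of an encrypted key backup; the tool and parameters are my choice, not his):

# Split a secret into 3 shares, any 2 of which suffice to recover it
ssss-split -t 2 -n 3

# Later, recover the secret from any two of the shares
ssss-combine -t 2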

Then he started losing me when he mentioned that OpenPGP keyservers, if designed today, would use a “linked notary log (blockchain)”. He also brought (proxy) re-encryption into the mix, which I didn’t really understand. The purpose itself I think I understand: he wants the Mesh to cater for services that re-encrypt to the several keys that all of one entity’s devices have. But I didn’t really understand why it’s related to his Mesh at all. Altogether, the proposal is a bit opportunistic. But it’s great to have some visions…

Bernhard Reiter talked about getting more OpenPGP users by 2017. Although it was more about whitewashing the money he receives from the German administration… He is doing gpg4win, the Windows port of GnuPG. The question is, he said, how to get GnuPG to a large user base and to make them use it. Not surprisingly, he mentioned that we need to improve the user experience. Once the user gets in touch with cryptography and is exposed to making trust decisions, he said, the user is lost. I would argue otherwise, because people are heavily exposed to cryptography when using WhatsApp. Anyway, he then referred to an idea of his: “restricted documents”. He wants to build a military style of access control for documents. You can’t blame him; it’s probably what he makes money off.

He promised to present ideas for Android and the Web. Android applications, he said, run on devices that are ten times smaller and slower compared to “regular” machines. They did actually conduct a study to find this, and other things, out. Among the other things are key insights such as “the Android permission model allows for deploying end to end encryption”. Genius. They also found out that there is an OpenPGP implementation in Bouncy Castle which people use and that it’s possible to wrap libgcrypt for Java. No shit!!1 They have also identified OpenKeychain and K9 mail as applications using OpenPGP. Wow. As for the Web, their study found out that Webmail is a problem, but that an extension to a Web browser could make end to end encryption possible. Unbelievable. I am not necessarily disappointed given that they are a software company and not a research institute. But I’m puzzled in what reality these results are interesting to the audience of OpenPGP.conf. In any case, his company conducted the study as part of the public tender they won and their results may have been biased by his subcontractors who are deeply involved in the respective projects (i.e. Mailvelope, OpenKeychain, …).

As for UX, his main idea is to implement the Web Key Directory discovery mechanism (see Werner’s opening talk). While I think the approach is good, I still don’t think it is sufficient to make 2017 the year of OpenPGP. My concerns revolve around the UX in non-straightforward cases like people revoking their keys. It’s also one thing to have a nice UX and another to actually have users going for it. Totally unrelated but potentially interesting: he said that the German Federal Office for Information Security (“BSI”) uses 500 workstations with GNU/Linux and a Plasma desktop in addition to Windows.

----

Holger Krekel then went on to talk about automatic end-to-end encrypted emails. He is working on an EU-funded project called NEXTLEAP. He said that email is refusing to die in favour of Facebook and all the other new kids on the block. He stressed that email is the largest open social messaging system and that many other systems use it as an anchor of identity. However, many people use it for “SPAM and work” only, he said. He identified various usability problems with end-to-end encrypted email: key distribution, preventing SPAM, managing secrets across devices, and handling device or key loss.

To tackle the key distribution problem, he mentioned CONIKS, Werner’s Webkey, Mailvelope, and DANE as projects to look into. With these, the respective providers add APIs to find public keys for a person. We know about Werner’s Webkey proposal. CONIKS, in short, is a key transparency approach which requires identity providers to publicly testify your key. Mailvelope automatically asks a verifying key server to provide the recipient’s key. DANE uses DNS with DNSSEC to distribute keys.

He proposed to have inline keys. That means attaching keys and cryptographic information to your emails. When you receive such a message, you would parse the details and use them for encryption the next time you create a message. The features of such a scheme, he said, are that it is more private in the sense that there is no public key server which exposes your identity. Also, it’s simpler in the sense that you “only” need to get support from MUAs and you don’t need to care about extra infrastructure. He identified that we need to run a protocol over email if we ever want to support that scheme. I’m curious to see that, because I believe it’s good if we support protocols via email, much like Outlook already does with its voting. SPAM prevention would follow naturally, he said. Because the initial message is sent as plain text, you can detect SPAM. Only when you reply does the other party get your key, he said. I think it should be possible to get a person’s key out of band, but that doesn’t matter much, I guess. Anyway, I couldn’t really follow that SPAM argument, because it seems to imply that we can handle SPAM in the plain-text case. But if that were the case, then we wouldn’t have the SPAM problem today. For managing keys, he thinks of sharing your keys via IMAP, like in the whiteout proposal.
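
You can approximate the inline-keys idea by hand today; a small sketch (addresses and file names are made up):

# Sender: export the public key and attach it to the plain-text mail
gpg --armor --export alice@example.org > alice-pubkey.asc

# Recipient: import the attached key and use it for the encrypted reply
gpg --import alice-pubkey.asc
gpg --armor --encrypt --recipient alice@example.org --output reply.txt.asc reply.txt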

----

Stefan Marsiske then talked about his concerns regarding the future directions of GnuPG. He said he did some teaching regarding crypto and privacy-preserving tools and that he couldn’t really recommend GnuPG to anyone, because it could not be used by the people he was teaching. Matt Green and Schneier both said that PGP is not able to secure email or that email is “unsecurable”. This is in line with the list that secushare produced. The saltpack people also list some issues they see with OpenPGP. He basically evaluated gpg against the list of criteria established in the SoK paper on instant messaging, which is well worth a read.

Lutz Donnerhacke then gave a brief overview of the history of OpenPGP. He is one of the authors of the initial OpenPGP standard. In 1992, he said, he read about PGP on the UseNet. He then cared about getting PGP 2.6.3(i)n out of the door to support keys larger than 1024 bits and to fix other bugs that annoyed him. Viacrypt then sold PGP4, which was based on PGP2. PGP5 was eventually exported in books and was scanned back in during HIP97 and CCCamp99, he said. Funnily enough, a bug lurked for about five years, he said. Their get_random always returned 1…

Funnily enough, he uses a 20-year-old V3 key, so at least his key ID is trivially forgeable, and the fingerprint should also be easy to forge. He acknowledges it but doesn’t really care, mainly because he “is a person from the last century”. I think that this mindset is present in many people’s heads…

The next day, Intevation’s Andre Heinecke talked about the “automated use of gpg through gpgme“. GPGME is the abbreviation of “GnuPG made easy” and is meant to be a higher-level abstraction for gpg. “gpg is a tool, not a library”, he said. For a library you can apply versioning, while the tool may change its output liberally, he said. He mentioned gpg’s machine interface with --with-colons and that changes to that format will break things. GPGME abstracts that for you and tries to make the tool a library. There is a defined interface and “people should use it”. A selling point is that it works with all gpg versions.
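
To see why an abstraction layer is welcome, consider the raw machine interface; a small sketch of scraping fingerprints out of it, which is exactly the kind of ad-hoc parsing that breaks when the format changes:

# "fpr" records carry the fingerprint in field 10 of the colon-separated output
gpg --batch --with-colons --list-keys | awk -F: '$1 == "fpr" { print $10 }'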

When I played around with gpgme, I found it convoluted and lacking basic operations. I think it’s convoluted because it is highly stateful and you need to be careful to call (many) functions in the correct order if you don’t want it to complain. It’s lacking because signing other people’s keys is apparently considered a weird thing to do: the interface is not designed with that in mind. He also acknowledged that it is a fairly low-level API in the sense that every option has to be set distinctly and that editing keys is especially hard. In gpgme, he said, operations are done based on contexts that you have to create. A context can be created for various gpg protocols. Surprisingly, that’s not only OpenPGP, but also CMS, GpgConf, and others.

I don’t think GNOME software is ported to gpgme. At least Evolution and Seahorse call gpg directly rather than using gpgme. We should change that! Although gpgme is a bit of a weird thing. Normally™ you’d have a library and build a tool with it. With gpgme, you have a tool (gpg) and build a library around it. It feels wrong. I claim that if we had an OpenPGP library that reads and composes packets, we would be better off.

Vincent and Dominik came to talk about UX decisions in OpenKeychain, the Android OpenPGP implementation. It does key management, encryption and decryption of files, and other OpenPGP operations such as signing keys. The speakers said that they use Bouncy Castle for the crypto and OpenPGP serialisation. They are also working on K9, which will support PGP/MIME soon. They have funding from the Open Tech Fund which finances that work. In general, they focused on the UX to make it easy for the consumer. They identified “workflows” users possibly want to carry out with their app. Among them are the discovery and exchange of keys, as well as confirming them (signing). They showed nice-looking screenshots of how they think they made the UI better. They probably did, but I found the foundations a bit lacking. Their design process seems to be a rather ad-hoc affair and they themselves are their primary test subjects. While it’s good work, I don’t think it’s easily applicable to other projects.

An interesting thing happened (again): They deviate from the defaults that GnuPG uses. Unfortunately, the discussions revolving about that fact were not very elaborate. I find it hard to imagine that both tools have, say, different default key lengths. Both tools try to prevent mass surveillance so you would think that they try to use the same defaults to achieve their goal. It would have been interesting to find out what default value serves the desired purpose better.

Next up was Kristian Fiskerstrand, who gave an update on the SKS keyserver network. SKS is the software that we trust our public keys with. SKS is written in OCaml, which he likes, but about which, he said, people have different opinions. SKS is single-threaded, which is a problem, he said. So you need to have a reverse proxy to handle more than one client.

He also talked about the Evil32 keys which caused some stir recently. In essence, the existing OpenPGP keys were duplicated but with matching short key IDs. That means that if you look up a key by its short key ID, you’re screwed, because you get the wrong key. If you are using the name or email address instead, then you also have a problem. People were upset about getting the wrong key when they had asked the keyserver to deliver one.

He said that it’s absolutely no problem because users should verify the keys anyway. I can only mildly agree. It’s true that users should do that. But I think we would live in a nicer world if we could still maintain a significantly high security level even when such rigorous verification does not happen. If he really maintains that point of view, then I wonder why he allows keys to be retrieved by name, email address, or anything other than the fingerprint in the first place.
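
In practice that means never fetching by short key ID; a sketch using my own key from further up this page as an example (the keyserver is just one option):

# Ambiguous and Evil32-forgeable: a 32-bit short key ID
gpg --recv-keys 89C397F6

# Unambiguous: the full 160-bit fingerprint
gpg --keyserver hkps://hkps.pool.sks-keyservers.net \
    --recv-keys F98D03D7DC630399AAA6F43826B3F39189C397F6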

----

Volker Birk from pretty Easy privacy talked about their project which aims at making encrypted email possible for the masses.
They make extensive use of gpgme and GNUnet, he said. Their focus is “privacy by default”, not security, he said. If security and privacy contradict each other in some cases, they go for privacy instead of security. For example, the Web of Trust is a good idea for security, but not for privacy, because it reveals the social graph. I really like that clear communication and the admission that security and privacy do not necessarily go well together. I also think that keyservers should be considered harmful, mainly because they learn who is attempting to communicate with whom. He said that everything should be decentralised and peer-to-peer. Likewise, a provider should not be responsible for binding an email address to a key. The time was limited, unfortunately, so the actual details of how it’s supposed to work were not discussed. They wouldn’t be the first ones to attempt a secure, or rather privacy-preserving, solution. In the limited time, however, he showed how to use their Python adapter to have it automatically locate a public key of a recipient and encrypt to it. They have bindings for various other languages, too.

Interestingly, a keysigning “party” was scheduled for the first evening but didn’t take place. You would expect that if anybody cared about that, it would be the OpenPGP hardcore hackers, all of whom were present. But not a single person (as in nobody, zero “0”, null) was interested. You can’t blame them. It was probably a cool thing when you were younger and this whole GnuPG thing was about preventing the most powerful targeted attacks. I think people have realised that you can’t have people mumble base16-encoded binary strings AND mass adoption. You need to bring at least cake to the party… Anyway, as you might be aware, we’re working towards a more pleasant keysigning experience 🙂 So stay tuned for updates.

LinuxCon Europe 2015 in Dublin

sponsor

The second day was opened by Leigh Honeywell, who was talking about how to secure an Open Future. An interesting case study, she said, was Heartbleed. Researchers found that vulnerability and went through the appropriate vulnerability disclosure channels, but the information leaked although there was an embargo in place. In fact, the bug proved to have been exploited for a couple of months already. Microsoft, her former employer, had about ten years of a head start in developing a secure development life-cycle. The trick, she said, is to have plans in place in case of security vulnerabilities. You throw half of your plan away anyway, but it’s good to have that practice of knowing who to talk to and all. She gave a few recommendations which she thinks will enable us to write secure code. Coders should review, learn, and speak up if they feel uncomfortable with a piece of code. Managers could pick up on what she called “smells”, when people tend to be fearful about their code. Of course, Microsoft’s SDL also contains many good practices. Her minimal set of practices is to have a self-assessment in place to determine if something needs security review, to have up-front threat modelling that is kept up to date as things evolve, to have a security checklist like Mozilla’s or OWASP’s, and to have security analysis built into the CI process.

Honeywell

The container panel was led by Joe “Zonker” Brockmeier, who started the discussion by stating that we’ve passed the cloud hype and containers are all the rage now. The first question he shot at the panellists was whether containers were ready at all to be used for production. The panellists were, of course, all in agreement that they are, although the road ahead is still a bit bumpy. One issue they identified was image distribution. There are, apparently, two types of containers: application containers and system containers. System containers are what containers used to be, a lightweight VM with a full Linux system; application containers, on the other hand, only run your database instance. They see application containers as replacing apps in the future. Other services like databases are thus not necessarily the task of application containers. One of the panellists was embracing Docker Hub as a means similar to RPM or .deb packages for distributing software, but, he said, we need to solve the problem of signing and trusting. He was comparing the trust issue with packages he had installed on his laptop. When he installed a package, he didn’t check what was inside the packages his OS downloaded. Well, I guess he missed that people put trust in the distribution instead of random people on the Internet who put up an image for everybody to download. Anyway, he wanted Docker to be a form of trusted entity like Google or Apple are for the app stores through which they distribute applications. I don’t know how they could have missed dependency resolution and the problem of updating lower-level libraries, maybe that problem has been solved already…

Container Panel

Intel’s Mark was talking on how Open Source was fuelling the Internet of Things. He said that trust was an essential aspect of devices that have access to personal or sensitive data, like access to your house. He sees potential for IoT around vaccines, which is a connection I didn’t think of, but it makes some sense. He explained that vaccines are quite sensitive to temperature. In developing countries, up to 30% of the vaccines spoil, he said, and what’s worse is that you can’t tell whether a vaccine is still good. The IoT could provide sensors on vaccines which monitor the conditions. In general, he sees that integrating the diverse functionality and capabilities of IoT devices will need new development efforts. He didn’t mention what those would be, though. Another big issue, he said, was updateability. Even with smaller devices, updates must not be neglected. Also, the ability of these devices to communicate is a crucial component, he said. It must not be that two different light bulbs cannot talk to their controller. That sounds like this rant.

IoT opps

Next, Bradley talked about GPL compliance. He mentioned the ThinkPenguin products as a pristine example of a good GPL-compliant “complete corresponding source”. He pointed the audience to the Compliance.guide. He said that it’s best to avoid the offer for source. It’s better to include the source with the product, he said, because the offer itself creates ongoing obligations. For example, your call centre needs to handle those requests for the next three years, which you are probably not set up to do. Also, products typically have a short lifespan. CCS requires good instructions on how to build. It’s not only about automated build tools (think configure, make, make install); you should rather think of a script in the sense of a movie or play script, as sketched below. The test to use on your potential CCS is to give your source release to a developer from some other department and see whether that person can build the code with your instructions. In any case, make install usually does not work on embedded devices anyway, because you need to flash the code. So make sure to include instructions on how to get the software onto the device. It’s usually not required to ship the tool-chain as long as you give instructions as to what compiler to use (and how it was configured). If you do include a compiler, you might end up having more obligations, because GCC, for example, is itself GPL licensed. An interesting question came up regarding specialised hardware needed to build or flash the software. You do not need to include anything “tool-chain-like” as long as you include instructions about what the user needs to obtain.
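
To make the play-script point concrete, a CCS build recipe might read like the following rough sketch (paths, versions, and the flashing helper are invented for illustration):

# 1. Unpack the kernel sources we ship and apply our patches
tar xf linux-3.10.tar.xz && cd linux-3.10
patch -p1 < ../vendor-fixes.patch

# 2. Use the exact configuration of the shipped device
cp ../config-router-v2 .config

# 3. Build with the documented cross compiler (arm-linux-gnueabi, gcc 4.8)
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- zImage modules

# 4. Flash the resulting image over the serial bootloader
../flash-helper --port /dev/ttyUSB0 --image arch/arm/boot/zImage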

Bradley

Samsung’s Krzysztof was talking about USB in Linux. He said it is the most common external interface in the world. It’s like the Internet in the sense that it provides services in a client-server architecture; USB also provides services. After he explained what USB actually is and how the host interacts with devices, he went on to explain the plug-and-play aspect of USB. While he provided some rather low-level details of the protocol, it was still rather high level in the sense that it covered only the very basic USB protocol. He didn’t talk much about how exactly a driver is selected, for example. He went on to explain the BadUSB attack. He said that the vulnerability basically results from the lack of user interaction when plugging in a device and loading its driver. One of his suggestions was to not connect “unknown devices”, which is hard because you actually don’t know what “services” the device is implementing. He also suggested limiting the number of input sources to X11. Most importantly, though, he said that we’d better be using device authorisation to explicitly allow devices before activating them. That’s good news, because we are working on it! There are, he said, patches available for allowing certain interfaces, instead of the whole device, but they haven’t been merged yet.
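
The device authorisation he referred to is already exposed by the kernel via sysfs; a small sketch of driving it by hand (bus and port numbers are examples):

# Refuse to activate newly plugged devices on bus 1 by default
echo 0 > /sys/bus/usb/devices/usb1/authorized_default

# After inspecting a device you decide to trust, authorise it explicitly
echo 1 > /sys/bus/usb/devices/1-2/authorized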

USB

Jeff was talking about applying Open Source Principles to hardware. He began by pointing out how many processors you don’t get to see, for example in your hard disk, your touchpad controller, or the display controller. These processors potentially exfiltrate information but you don’t really know what they do. Actually, these processors are about owning the owner, the consumer, to then sell them stuff based on that exfiltrated big data, rather than to serve the owner, he said. He’s got a project running to build devices that you not only own, but control. He mentioned IoT as a new battleground where OpenHardware could make an interesting contestant. FPGAs are lego for hardware which can be used easily to build your functionality in hardware, he said. He mentioned that the SuperH patents have now expired. I think he wants to build the “J-Core CPU” in software such that you can use those for your computations. He also mentioned that open hardware can now be what Linux has been to the industry, a default toolkit for your computations. Let’s see where his efforts will lead us. It would certainly be a nice thing to have our hardware based on publicly reviewed designs.

Open Hardware

The next keynote was reserved for David Mohally from Huawei. He said he has a lab in which they investigate what customers will be doing in five to ten years. He thinks that the area of network slicing will be key, because different business needs require different network service levels. Think of your temperature sensor, which sends small amounts of data in a bursty fashion, while your HD video drone produces rather high volumes and probably requires low latency. As far as I understood, they have network slices with smart meters in a very large deployment. He never mentioned what a network slice actually is, though. The management of the slices shall be opened up to the application layer on top, for third parties to implement their own management. The landscape, he said, is changing dramatically from what he called legacy closed source applications to open source. Let’s hope he’s right.

Huawei

It was announced that the next LinuxCon will happen in Berlin, Germany. So again in Germany. Let’s hope it’ll be an event as nice as this one.

Intel Booth

HP Booth

LinuxCon Europe – Day 1

attendee registration

The conference was opened by the Linux Foundation’s executive Jim Zemlin. He thanked the FSF for their 30 years of work. I was a little surprised to hear that, given the differences between Open Source and Free Software. He continued by mentioning the 5 Billion Dollar report, which calculates how much “value” the projects hosted at the Linux Foundation have generated over the last five years. He said that a typical product contains 80%, 90%, or even more Free and Open Source Software. He also extended the list of projects with the Real Time Collaborative project which, as far as I understood, effectively means hiring Thomas Gleixner to work on the Real Time Linux patches.

world without Linux

The next, very interesting, presentation was given by Sean Gourley, the founder of Quid, a business intelligence analytics company. He talked about the limits of human cognition and how algorithms help to exploit these limits. The limit is the speed of your thinking. He mentioned studies that measured the blood flow across the brain when making decisions and found differences depending on how proficient you are at a given task. They also found that you cannot be quicker than a certain limit, say, 650ms. He continued that the global financial market is dominated by algorithms and that a fibre cable from New York to London costs 300 million dollars to save 5 milliseconds. He then said that these algorithms make decisions at a speed we are unable to catch up with. In fact, the flash crash of 2:45 is inexplicable until today. Nobody knows what happened that caused a loss of trillions of dollars. Another example he gave was the crash of Knight Capital, which caused a loss of 440 million dollars in 45 minutes, only because they updated their trading algorithms. So algorithms are indeed controlling our lives, which he underlined by saying that 61% of the traffic on the Internet is not generated by humans. He suggested that bots would not only control the financial markets, but also news reading and even the writing of news. As an example he showed a Google patent for auto-generating social status updates and how Mexican and Chinese propaganda bots would have higher tweet volumes than humans. So the responsibilities are shifting and we’d be either working with an algorithm or for one. Quite an interesting thought indeed.

man vs. machine

Next up was IBM on Transforming for the Digital Economy with Open Technology, which was essentially a gigantic sales pitch for their new Power architecture. The most interesting bit of that presentation was that “IBM is committed to open”. This, she said, is visible through IBM’s portfolio and through its initiatives like the IBM Academic Initiative. The OpenPower Foundation is another one of those. It takes the open development model of software and extends it to everything related to the Power architecture (e.g. chip design), she said. They are so serious about being open that they even trademarked “Open by Design”…

IBM sales pitch

Then, the DroneCode people presented on their drone project. They said that they’ve come a long way since 2008 and that the next years are going to fundamentally change the drone scene, as many companies are involved now. Their project, DroneCode, is a stack ranging from open hardware to flight control, and the next bigger thing will be CAN support, which is already used in cars, planes, and other vehicles. The talk then moved to ROS, the Robot Operating System. It is the lingua franca for robotics in academia.

Drones

Matthew Garrett talked on securing containers. He mentioned seccomp and what type of features you can deprive processes of. Nowadays, you can also reason about the arguments of the system call in question, so it might be more useful to people. Although, he said, writing a good seccomp policy is hard. Another mechanism to deprive processes of privileges is to set capabilities. It allows you to limit privileges in a more coarse-grained way, and the behaviour is not very well defined. The combination of capabilities and seccomp might have surprising results. For example, you might be allowing the mknod() call, but then you don’t have the capability to actually execute it, or vice versa. SELinux was next on his list as a mechanism to secure your containers. He said that writing SELinux policy is not the most fun thing in the world. Another option is to run your container in a virtual machine, but you then lose some benefits such as introspection and fine-grained control over the processes. But you get the advantage of more isolation. Eventually, he asked the question of when to use what technology. The performance overhead of seccomp, SELinux, and capabilities is basically negligible, he said. Fully virtualising is usually more secure, he said, but the problem is that you have more complex infrastructure, which tends to attract bugs. He also mentioned GRSecurity as a means of protecting your Linux kernel. Let’s hope it’ll be merged some day.
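
To get a feel for how these mechanisms are combined in practice, here is a hedged sketch using Docker’s knobs (image name and profile path are placeholders):

# Drop all capabilities, add back only what the service needs,
# and constrain the allowed system calls with a seccomp profile
docker run --rm \
    --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
    --security-opt seccomp=./my-seccomp-profile.json \
    my-web-image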

Containers

Canonical’s Daniel Watkins then talked on cloud-init. He said it runs in three stages: init, config, and final. Init sets up networking, config does the actual configuration of your services, and final is for the things that eventually need to be done. The cloud-init architecture is apparently quite flexible and versatile. You can load your own configuration and user-data modules so that you can set up your cloud images as you like. cloud-init allows you to get rid of custom images, such that you can have confidence in your base image working as intended. In fact, it works not only with BSDs but also with Windows images. He said it is somewhat similar to tools like Ansible, so if you are already happily using one of those, you’re good.
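
One way of feeding those stages is plain user-data; cloud-init happily executes a shell script handed to it at boot, so a tiny sketch could look like this (package choice and file contents are made up):

#!/bin/sh
# Minimal user-data script, run by cloud-init late in the boot of the instance
apt-get update && apt-get install -y nginx
echo "served from a stock image, configured by cloud-init" > /var/www/html/index.html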

cloud-init

An entertaining talk was given by Florian Haas on LXC and containers. He talked about tricks for managing your application containers and showed a problem with using a naive chroot, which is that you get to see the host’s processes and networking information through the proc filesystem. With LXC, that problem is dealt with, he said. But then you have a problem when you update the host, i.e. you have to take down the container while the upgrade is running. With two nodes, he said, you can build a replication setup which takes care of failing over the node while it is upgrading. He argued that this is interesting for security reasons, because you can upgrade your software to not be vulnerable against “the latest SSL hack” without losing uptime. Or much of it, at least… But you’d need twice the infrastructure to run production. The future, he said, might be systemd with its nspawn tool. If you use systemd all the way, then you can use fleet to manage the instances. I didn’t take much away, personally, but I guess managing containers is all the rage right now.
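
The /proc leak he mentioned is easy to reproduce with a naive chroot; a rough sketch (the rootfs path is made up and Debian’s debootstrap is assumed to be available):

# Build a minimal rootfs and enter it the naive way
debootstrap stable /srv/naive-rootfs
mount -t proc proc /srv/naive-rootfs/proc
chroot /srv/naive-rootfs /bin/bash

# Inside the chroot: the host's processes are in plain sight
ps aux | head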

LXC

Next up was Michael Hausenblas on Filesystems, SQL and NoSQL with Apache Mesos. I had briefly heard of Mesos, but I really didn’t know what it was. Not that I’m an expert now, but I guess I know that it’s a scheduler you can use for your infrastructure. Especially your Apache stack. Mesos addresses the problem of allocating resources to jobs. Imagine you have several different jobs to execute, e.g. a Web server, a caching layer, and some number crunching computation framework. Now suppose you want to increase the number crunching after hours when the Web traffic wears off. Then you can tell Mesos what type of resources you have and when you need that. Mesos would then go off and manage your machines. The alternative, he said, was to manually SSH into the machines and reprovision them. He explained some existing and upcoming features of Mesos. So again, a talk about managing containers, machines, or infrastructure in general.

Mesos

The following kernel panel didn’t provide much information to me. The moderation felt a bit stiff and the discussions weren’t really engaged. The topics mainly circled around maintainership, growth, and community.

Kernel Panel

SuSE’s Ralf was then talking on DevOps. He described his DevOps needs based on a cycle of planning, coding, building, testing, releasing, deploying, operating, monitoring, and then back to planning. When bringing together multiple projects, he said, they need to bring two independent integration loops together. Regarding doing DevOps with a customer, he mentioned some companies who themselves provide services to their customers. In order to be successful when doing DevOps, he said, smart tools, process automation, open APIs, freedom of choice, and quality control are necessary. So I guess he was pitching for people to use “standards”, whatever that exactly means.

SuSE DevOps

I had been awaiting the next talk on patents and patent non-aggression. Keith Bergelt from the OIN talked about ten years of the Open Invention Network. He said that ten years ago Microsoft sued Linux companies to hinder Linux distribution. Their network was founded to embrace patent non-aggression in the community. A snarky question would have been why it would not simply be enough to use GPLv3, but no questions were admitted. He said that the OIN has about 1750 licensees now, with over a million patents being shared. That’s actually quite impressive, and I hope that small companies are being protected from the patent threats of big players…

OIN

That concluded the first day. It was a lot of talks and talking in the hallway. Video recordings are said to be made available in a couple of weeks. So keep watching the conference page.

Sponsors

IBM Booth

Unboxing a Siswoo C55

For a couple of days now, I have been the owner of a Siswoo Longbow C55. It’s a 5.5″ Chinese smartphone with an interesting set of specs for the 130 EUR it costs. For one, it has a removable 3300mAh battery. That powers the phone for two days, which I consider to be quite good. A removable battery is harder and harder to get these days :-/ But I absolutely want to be able to replace the battery in case it’s worn out, hard reboot the phone when it locks up, or simply make sure that it’s off. It also has 802.11a WiFi, which seems to be rare for phones in that price range. Another very rare thing these days is an IR interface. The Android 5.1 based firmware also comes with a remote control app to control various TVs, aircons, DVRs, etc. The new Android version is refreshing and fun to use. I don’t count on getting updates, though, although the maker seems to be open about it.

The phone does not have NFC, but something called HotKnot. The feature is described as being similar to NFC, but it works with induction on the screen. So when you want to connect two devices, you need to make the screens touch. I haven’t tried that out yet, simply because I haven’t seen anyone with that technology yet. It also does not have illuminated lower buttons, so if you depend on those, the phone does not work for you. A minor annoyance for me is the missing notification LED. I do wonder why such a cheap part is not being built into those cheap Chinese phones. I think it’s a very handy indicator, and it annoys me to have to power on the screen only to see whether I have received a message.

I was curious whether the firmware on the phone matches the official firmware offered on the web site. So I got hold of a GNU/Linux version of the flash tool, which is a Qt-based BLOB. Still better than running Windows… The tool started but couldn’t make contact with the phone. I was pulling my hair out trying to find out why it wouldn’t work. Eventually, I took care of ModemManager, i.e. systemctl disable ModemManager, or do something like sudo mv /usr/share/dbus-1/system-services/org.freedesktop.ModemManager1.service{,.bak} and kill modem-manager. Apparently it got in the way when the flash tool was trying to establish a connection. I have yet to find out whether this

/etc/udev/rules.d/21-android-ignore-modemmanager.rules

works for me:

ACTION!="add|change|move", GOTO="mm_custom_blacklist_end"
SUBSYSTEM!="usb", GOTO="mm_custom_blacklist_end"
ENV{DEVTYPE}!="usb_device", GOTO="mm_custom_blacklist_end"
ATTR{idVendor}=="0e8d", ATTR{idProduct}=="2000", ENV{ID_MM_DEVICE_IGNORE}="1"
LABEL="mm_custom_blacklist_end"
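
If you try the same, the rules need to be reloaded (and the phone re-plugged) before they take effect; something along these lines should do:

sudo udevadm control --reload-rules
# then unplug the phone and plug it back in while in flash mode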

I “downloaded” the firmware off the phone and compared it with the official firmware. At first I was concerned because they didn’t hash to the same value, but it turns out that the flash tool can only download full blocks and the official images do not seem to be aligned to full blocks. Once I took as many bytes of the phone’s firmware as the original firmware images had, the hash sums matched. I haven’t found a way yet to get full privileges on that Android 5.1, but given that flashing firmware works (sic!) it should only be a matter of messing with the system partition. If you have any experience doing that, let me know.
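
In case you want to repeat that comparison, a small sketch of the truncate-and-compare step (file names are made up):

# Hash only as many bytes of the dump as the official image contains
head -c "$(stat -c %s official-system.img)" dumped-system.img | sha256sum
sha256sum official-system.img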

The device performs sufficiently well. The battery life is good, and the 2GB of RAM make it unlikely for the OOM killer to stop applications. What is annoying, though, is the sheer size of the device. I found 5.0″ to be too big already, so 5.5″ is simply too much for my hands. Using the phone single-handedly barely works. I wonder why there are so many such huge devices out there now. Another minor annoyance is that some applications simply crash. I guess they don’t handle the 64-bit architecture well or have problems with the Android 5.1 APIs.

FWIW: I bought from one of those Chinese shops with a European warehouse and their support seems to be comparatively good. My interaction with them was limited, but their English was perfect and, so far, they have kept what they promised. I pre-ordered the phone and it was sent a day earlier than they said it would be. The promise was that they take care of the customs and all and they did. So there was absolutely no hassle on my side, except that shipping took seven days, instead of, say, two. At least for my order, they used SFBest as shipping company.

Do you have any experience with (cheap) Chinese smartphones or those shops?

DFN Workshop 2015

As in the last few years, the DFN Workshop happened in Hamburg, Germany.

The conference was keynoted by Stevens Le Blond, who talked about targeted attacks, e.g. against dissidents. He mentioned that he had already presented the content at the USENIX Security conference, which some people consider to be excellent. He first showed how he used Skype to look up the IP address of his boss and how similarly targeted attacks were executed in the past. Think Stuxnet. His main focus was attacks on NGOs, though. He focussed on an attacker sending malicious emails to the victim.

In order to find out what attack vectors were used, they contacted over 100 NGOs to ask whether they had been attacked. Two NGOs affiliated with the WUC, which represents the Uyghur minority in China, received 1500 malicious emails, out of which 1100 were carrying malware. He showed examples of those emails and some of them were indeed very targeted. They contained a personalised message with enough context to look genuine. However, the mail also had a malicious DOC file attached. Interestingly enough, though, the infrastructure used by the attacker for the targeted attacks was re-used for several victims. You would have expected the attacker to keep their infrastructure separate for the various victims, especially when carrying out targeted attacks.

They also investigated how quickly the attacker exploited publicly known vulnerabilities. They measured the time the malicious email was sent minus the release date of the vulnerability. They found that some of the attacks were launched on day 0, meaning that as soon as a vulnerability was publicly disclosed, an NGO was attacked with a relevant exploit. Perhaps interestingly, they did not find any 0-day exploits being launched. They also measured how the security precautions taken by Adobe for their Acrobat Reader and by Microsoft for their Office product (think sandboxing) affected the frequency of attacks. It turned out that it does help to make your software more secure!

To defend against targeted attacks based on spoofed emails he proposed to detect whether the writing style of an email corresponds to that of previously seen emails of the presumed contact. In fact, their research shows that they are able to tell whether the writing style matches that of previous emails with very high probability.

The following talk assessed end-to-end email solutions. It was interesting because the speakers created a taxonomy for 36 existing projects and assessed qualities such as their compatibility, the trust model used, and the platform they run on.
The 36 solutions they identified were (don’t hold your breath, wall of links coming): Neomailbox, Countermail, salusafe, Tutanota, Shazzlemail, Safe-Mail, Enlocked, Lockbin, virtru, APG, gpg4o, gpg4win, Enigmail, Jumble Mail, opaqueMail, Scramble.io, whiteout.io, Mailpile, Bitmail, Mailvelope, pEp, openKeychain, Shwyz, Lavaboom, ProtonMail, StartMail, PrivateSky, Lavabit, FreedomBox, Parley, Mega, Dark Mail, opencom, okTurtles, End-to-End, kinko.me, and LEAP (Bitmask).

Many of them could be discarded right away because they were not production-ready. The list could be further reduced by discarding solutions which do not use open standards such as OpenPGP but rather proprietary message formats. After applying more filters, such as requiring that the private key must not leave the realm of the user, the list could be condensed to seven projects: APG, Enigmail, gpg4o, Mailvelope, pEp, Scramble.io, and whiteout.io.

Interestingly, the latter two were not compatible with the rest. The speakers attributed that to the use of PGP/MIME vs. PGP/Inline, and they favoured the latter. I don’t think that’s a good idea, though. The authors attest that pEp has a lot of potential, and they do seem to have interesting ideas. For example, they offer to sign another person’s key by reading “safe words” over a secure channel. While this is not a silver bullet for the keysigning problem, it appears to be much easier to use.

While we are on the topic of keysigning: I have placed an article in the conference proceedings about GNOME Keysign. The paper’s title is “Welcome to the 2000s: Enabling casual two-party key signing”, which I think reflects the era in which the current OpenPGP infrastructure is stuck. The mindsets of the people involved are still a bit stuck in the old days, when dealing with computing machines was a thing for those with long white beards. The target group of users for secure communication protocols has inevitably grown much larger than it used to be. While this sounds trivial, the interface to GnuPG has not significantly changed since then. It also still makes it hard for others to build higher-level tools by making bad default decisions, by demanding to be in control of “trust” decisions, and by requiring certain environmental conditions (e.g. a filesystem to be used). GnuPG is not a mere library; it seems to understand itself as a complete crypto suite. Anyway, in the paper I explained how I think contemporary keysigning protocols work, why that is not a good thing, and how to make it better.

I propose to further decentralise OpenPGP by enabling people to have very small keysigning “parties”. Currently, the setup cost of a keysigning party is very high. This is, amongst other things, due to the fact that an organiser is required to collect all the keys, compile a list of participants, and make the keys available for download. Then, depending on the size of the event, the participants queue up for several hours, only to then tick checkboxes on pieces of paper. A gigantic secops fail. The smarter people sign every box they tick so that an attacker cannot “inject” a maliciously ticked box onto the paper sheet. That’s not fun. The not-so-smart people don’t even bring their sheets of paper, or have them printed by a random person who happens to also be at the conference and, surprise, has access to a printer. What a gigantic attack surface. I think this is bad. Let’s try to reduce that surface by reducing the size of the events.

In order to enable people to have very small events, i.e. two people keysigning, I propose to make most of the actions of a keysigning protocol automatic. So instead of requiring the user to manually compare the fingerprint, I propose that we securely transfer the key to be signed. You might rightfully ask how to do that. My answer is that we have passed the 2000s and that we carry devices which are capable of opening a TCP connection on a link-local network, e.g. over WiFi. I know this is not necessarily a given, but let’s just assume, for the sake of simplicity, that one of the devices we carry along can actually do WiFi (and that the network does not block connections between machines). This also prevents certain attacks that users of current Best Practices are still vulnerable to, namely using short key IDs or leaking who you are communicating with.
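
To make the idea a bit more concrete, here is a simplified sketch in Python. It is not the actual GNOME Keysign protocol, merely an illustration: one side serves its exported public key over HTTP on the local network, the other side fetches it and checks it against a digest obtained over a second, out-of-band channel (read aloud, scanned as a barcode, or similar). The file name and port are made up.

import hashlib
import http.server
import socketserver
import threading
import urllib.request

KEY_FILE = "mykey.asc"  # hypothetical: output of `gpg --armor --export <fpr>`
PORT = 18000            # arbitrary port on the link-local network

# Sender: compute the digest that is announced over the secure out-of-band
# channel, then serve the key file from the current directory.
with open(KEY_FILE, "rb") as f:
    out_of_band_digest = hashlib.sha256(f.read()).hexdigest()

httpd = socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=httpd.handle_request, daemon=True).start()

# Receiver (same machine here; a link-local address in the real scenario):
data = urllib.request.urlopen(f"http://127.0.0.1:{PORT}/{KEY_FILE}").read()
if hashlib.sha256(data).hexdigest() != out_of_band_digest:
    raise RuntimeError("transferred key does not match the announced digest")
print("received", len(data), "bytes; digest matches")
httpd.server_close()

Because the full key material travels over the local link and is verified against the out-of-band digest, there is no need to rely on short key IDs, and no keyserver learns whose key you just fetched.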

Another step that needs to be automated is signing the key. It sounds easy, right? But it’s not just a mere gpg --sign-key. The first problem is that you don’t want the key to be signed to pollute your keyring. That can be fixed by using --homedir or the GNUPGHOME environment variable. But then you also want to sign each UID on the key separately, and this is where things get a bit more interesting. Anyway, to make a long story short: we are not able to do that with plain GnuPG (as of now) in a sane manner. And I think it’s a shame.
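
To illustrate the --homedir part, this is roughly what it looks like when driven from Python; the file names and the fingerprint are placeholders. Note that this signs the key as a whole and the signing step is still interactive; doing it per UID and without prompts is exactly the part that plain GnuPG makes painful.

import subprocess
import tempfile

PEER_KEY = "theirkey.asc"   # hypothetical: the key to be signed
MY_SECRET = "mysecret.asc"  # hypothetical: a copy of the signing (secret) key
FPR = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder fingerprint

with tempfile.TemporaryDirectory() as homedir:
    gpg = ["gpg", "--homedir", homedir]
    # Work in a throwaway keyring so the user's own keyring stays untouched.
    subprocess.run(gpg + ["--import", MY_SECRET], check=True)
    subprocess.run(gpg + ["--import", PEER_KEY], check=True)
    subprocess.run(gpg + ["--sign-key", FPR], check=True)  # prompts the user
    signed = subprocess.run(gpg + ["--armor", "--export", FPR],
                            check=True, capture_output=True).stdout

with open("theirkey-signed.asc", "wb") as out:
    out.write(signed)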

Lastly, sending the key needs to be as “zero-click” as possible, too. I propose to simply reuse the user’s current MUA. That sounds easy, but unfortunately it’s only 2015 and we cannot interact with, say, Evolution or Thunderbird in a standardised manner. There is xdg-email, but it has annoying bugs and doesn’t seem to be maintained. I’m waiting for a sane email API. I mean, email has been around for some time now; let’s try to actually use it. I hope to be able to make another, more formal announcement on GNOME Keysign soon.
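
For completeness, handing the signed key over to whatever mail client is configured would, with xdg-email, look roughly like this; the address and file name are made up, and, as said above, xdg-email itself is flaky, so treat this as an illustration only.

import subprocess

# Ask the desktop's default mail client to compose a message with the
# signed key attached (hypothetical recipient and file name).
subprocess.run([
    "xdg-email",
    "--subject", "Your signed OpenPGP key",
    "--body", "Please find your freshly signed key attached.",
    "--attach", "theirkey-signed.asc",
    "alice@example.com",
], check=True)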

the userbase for strong cryptography declines by half with every additional keystroke or mouseclick required to make it work

— attributed to Ellison.

Anyway, the event was good and I am happy to have attended. I hope to be able to make it there again next year.

Attending the DANTE Tagung in Karlsruhe

Much to my surprise, the DANTE Tagung took place in Karlsruhe, Germany. It appears to be the main gathering of the LaTeX (and related) community.

Besides pub-based events in the evenings, they also had talks. I knew some people on the program by name and was eager to finally see them IRL. One of them was Markus Kohm, of KOMA-Script fame. He presented new or lesser-used features. One of those was scrlayer, which is capable of adding layers to a page, i.e. background or foreground layers. With it you can add, e.g., a logo or a document version to every page, more or less like this:

\DeclareNewLayer[
    background,
    topmargin,
    contents={\hfill
        \includegraphics[width=3cm, height=2cm]
                        {example-image}%
    }%
]{Logo}
\AddLayersToPageStyle{@everystyle@}{Logo}

You could do that with fancyhdr’s \fancyhead, but then you’d only get the logo depending on your page style. The scrlayer solution is applied always. And it’s more KOMAesque, I guess.

The next talk I attended was given by Uwe Ziegenhagen on new or exciting CTAN packages.
Among the packages he presented was ctable, which can be used to typeset tables and figures. It uses a favourite package of mine, tabularx. Its main advantage seems to be that you can use footnotes in tables, which is otherwise hard to achieve.

He also presented easy-todo, which provides “to-do notes throughout a document, and will provide an index of things to do”. I usually use todonotes, which seems similar enough, so I don’t really plan on changing that. The difference seems to be that easy-todo offers more fine-grained control over what goes into the printed list of todos.

The flowchart package seems to make drawing flowcharts with TikZ easier, especially ones following the “IBM Flowcharting Template”. The flowcharts I have drawn so far were easy enough, and I don’t think this package would have helped me, but the whole process of drawing with TikZ certainly needs to be made much easier…

Herbert Voß went on to talk about ConTeXt, which I had already discovered and was pleased by. From my naïve understanding, it is a “different” macro set for the TeX engine. So it’s not pdfTeX, LuaLaTeX, or XeTeX, but ConTeXt. It is distributed with your favourite TeX Live distribution, so it should be deployed on quite a few installations. However, the best way to get ConTeXt, he said, was to fire up the following command:

rsync -rlpt rsync://contextgarden.net/minimals/setup/.../bin .

Wow. rsync. For binary software distribution. Is that the pinnacle of apps? In 2014? rsync?! What is this, 1997? Quite an effective method, but I doubt it’s the most efficient, let alone the most secure.

Overall, ConTeXt is described as being a bit of an alien in the TeX world. The relationship with TeXLive is complicated, at best, and conventions are not congruent which causes a multitude of complications when trying to install, run, extend, or maintain both LaTeX and ConTeXt.


The next gathering will take place in the very north of Germany. A lovely place, but I doubt that I’ll be attending. The crowd is nice, but it probably won’t be interesting for me, talk-wise. I attribute that partly to my inability to enjoy coding TeX or LaTeX, but also to the arrogance I felt from the community. For example, people were mocking use cases others had, disregarding them as irrelevant. So you might not be able to talk TeX with those people, but they are nice anyway.

Getting cheaper Bahn fares via external services

Imagine you want to go from some random place in Germany to the capital, maybe because LinuxTag is taking place there. We learned that you can try to apply international fares. In the case of Berlin, the Netzplan for Berlin indicates that several candidate train stations exist: Rzepin, Kostrzyn, or Szczecin. However, we’re not going to explore that now.

Instead, let’s have a look at other (third-party) offers. Firstly, you can always get a Veranstaltungsticket. It is rated at 99 EUR for a return trip; the flexible version costs 139 EUR and allows you to take any train instead of fixed ones. Is that a good price? Let’s check the regular price for the route Karlsruhe ←→ Berlin.

The regular price is 142 EUR. Per leg. So the return trip would cost a whopping 284 EUR. Let’s assume you have a BahnCard 50. It costs 255 EUR, and before you get one you had better do the math on whether it’s worth it. Anyway, if you have that card the price halves, and we have to pay 71 EUR per leg, or 142 EUR for the return trip. That ticket is fully flexible, so any train can be taken. The equivalent Veranstaltungsticket costs 139 EUR, so that is a saving of 3 EUR, or about 2%.

Where do you get that Veranstaltungsticket, you ask? Well, it turns out LinuxTag offered it itself. You call the Bahn’s phone number and state your “code”; in the LinuxTag case it was “STATION Berlin”. It probably restricts your destination options to Berlin. More general codes are easily found on the Web. Try “Finanz Informatik”, “TMF”, or “DOAG”.

I don’t expect you to be impressed by saving 2%. Another option is to use bus search engines, such as busliniensuche.de, fernbusse.de, or fromatob.de. You need to be a bit lucky, though, as only a few of those tickets are available. However, it’s worth a shot, as they cost only 29 EUR.

That saves you 80% compared to the original 142 EUR, or 60% compared to the 71 EUR with the BC 50. That’s quite nice already. But we can do better: there is the “Fernweh-Ticket”, which is only available from LTUR. It costs 26 EUR, and you need to poll their Web interface every so often to get a chance of finding one. I intended to write a crawler, but I have not gotten around to doing it yet…

With such a ticket you save almost 82% compared to the regular price, or 63% compared to the BahnCard 50 price. Sweet! Have I missed any offer worth mentioning?
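
If you want to double-check those percentages, the arithmetic is simple (a quick sketch, prices in EUR):

def saving(cheap, reference):
    """Percentage saved when paying `cheap` instead of `reference`."""
    return (reference - cheap) / reference * 100

print(f"Veranstaltungsticket vs. BC 50 return: {saving(139, 142):.1f}%")  # 2.1
print(f"bus vs. regular leg:                   {saving(29, 142):.1f}%")   # 79.6
print(f"bus vs. BC 50 leg:                     {saving(29, 71):.1f}%")    # 59.2
print(f"Fernweh-Ticket vs. regular leg:        {saving(26, 142):.1f}%")   # 81.7
print(f"Fernweh-Ticket vs. BC 50 leg:          {saving(26, 71):.1f}%")    # 63.4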

Finding (more) cheap flights with Kayak

People who know me know about my weakness when it comes to travel itineraries. I spend hours and hours, sometimes days or even weeks, finding the optimal itinerary. So when I was looking for flights to GNOME.Asia Summit, I had an argument over the cheapest and most comfortable flight. When I was told that a cheaper and better flight existed that I hadn’t found, I refused to accept it, as I saw my pride endangered. As it turned out, there were more flights than I knew of.

Kayak seems to give you different results depending on what site you actually open. I was surprised to learn that.

Here is the evidence: (you probably have to open that with a wide monitor or scroll within the image)
Kayak per country

In the screenshot, you can see that on the left-hand side kayak.de found 1085 flights. It also found the cheapest one, rated at 614 EUR. That flight, marked with the purple “1”, was also found by kayak.com and kayak.ie at different, albeit similar, prices. In any case, that flight has a very long layover. The next best flight kayak.de returned was rated at 687 EUR. The other two Kayaks have that flight, marked with the green “3”, at around 730 EUR, roughly 6% more than on the German site. The German Kayak does not have the Etihad flight, marked with the blueish “2”, at 629 EUR as the Irish one does! The American Kayak has that flight at 731 EUR, a whopping 16% difference. I haven’t actually checked whether the price difference persists when booking the flights. However, I couldn’t even have booked the Etihad flight if I hadn’t checked other Kayak versions.

Lessons learnt: Checking one Kayak is not enough to find all good flights.

In addition to Kayak, I like to use the ITA Travel Matrix, as it allows you to customise the queries extensively. It also has a much saner interface than Kayak. The prices are not very accurate, though, as far as I could tell from my experiments. But it can give you an idea of which connections are cheap, and you can then feed that information into, e.g., Kayak. Or into the other Web site that I use: Skyscanner. It allows you to list flights for a whole month, or for a whole country instead of a specific airport.

What tools do you use to check for flights?

This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.