DFN Workshop 2015

As in the last few years, the DFN Workshop happened in Hamburg, Germany.

The conference was keynoted by Steven Le Blond, who talked about targeted attacks, e.g. against dissidents. He mentioned that he had already presented the content at the USENIX Security conference, which some people hold in very high regard. He first showed how he used Skype to look up the IP address of his boss and how similarly targeted attacks were executed in the past. Think Stuxnet. His main focus, though, was attacks on NGOs, in particular an attacker sending malicious emails to the victim.

In order to find out what attack vectors were used, they contacted over 100 NGOs to ask whether they had been attacked. Two NGOs affiliated with the WUC, which represents the Uyghur minority in China, received 1500 malicious emails, out of which 1100 were carrying malware. He showed examples of those emails and some of them were indeed very targeted: they contained a personalised message with enough context to look genuine. However, the mail also had a malicious DOC file attached. Interestingly enough, the infrastructure used by the attacker for the targeted attacks was re-used for several victims. You would have expected the attacker to keep their infrastructure separated for the various victims, especially when carrying out targeted attacks.

They also investigated how quickly the attacker exploited publicly known vulnerabilities. They measured the time between the public release of a vulnerability and the sending of the malicious email exploiting it. They found that some of the attacks were launched on day 0, meaning that as soon as a vulnerability was publicly disclosed, an NGO was attacked with a relevant exploit. Perhaps interestingly, they did not find any 0-day exploits being launched. They also measured how the security precautions taken by Adobe for Acrobat Reader and by Microsoft for their Office products (think sandboxing) affected the frequency of attacks. It turned out that it does help to make your software more secure!

To defend against targeted attacks based on spoofed emails, he proposed detecting whether the writing style of an email corresponds to that of previously seen emails from the presumed contact. In fact, their research shows that they are able to tell with very high probability whether the writing style matches that of previous emails.
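To make the idea a bit more concrete, here is a toy sketch of such a style check. This is emphatically not their method; it is just a naive character n-gram comparison that I am using for illustration.

from collections import Counter
from math import sqrt

def ngrams(text, n=3):
    # Character n-gram frequencies as a crude proxy for writing style.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def style_similarity(known_mails, new_mail):
    # Cosine similarity between the contact's past mails and the new one.
    a, b = ngrams(" ".join(known_mails)), ngrams(new_mail)
    dot = sum(a[g] * b[g] for g in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A suspiciously low score for a mail that claims to come from a known
# contact could then be used to flag the message for closer inspection.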

The following talk assessed end-to-end email solutions. It was interesting because they created a taxonomy of 36 existing projects and assessed qualities such as their compatibility, the trust model used, or the platform they run on.
The 36 solutions they identified were (don’t hold your breath, wall of links coming): Neomailbox, Countermail, salusafe, Tutanota, Shazzlemail, Safe-Mail, Enlocked, Lockbin, virtru, APG, gpg4o, gpg4win, Enigmail, Jumble Mail, opaqueMail, Scramble.io, whiteout.io, Mailpile, Bitmail, Mailvelope, pEp, openKeychain, Shwyz, Lavaboom, ProtonMail, StartMail, PrivateSky, Lavabit, FreedomBox, Parley, Mega, Dark Mail, opencom, okTurtles, End-to-End, kinko.me, and LEAP (Bitmask).

Many of them could be discarded right away because they were not production-ready. The list could be further reduced by discarding solutions which do not use open standards such as OpenPGP but rather proprietary message formats. After applying more filters, such as requiring that the private key never leave the realm of the user, the list could be condensed to seven projects. Those were: APG, Enigmail, gpg4o, Mailvelope, pEp, Scramble.io, and whiteout.io.

Interestingly, the latter two were not compatible with the rest. The speakers attributed that to the use of GPG/MIME vs. GPG/Inline, and they favoured the latter. I don’t think that’s a good idea, though. The authors attest that pEp has a lot of potential, and they do seem to have interesting ideas. For example, it offers to sign another person’s key by reading “safe words” over a secure channel. While this is not a silver bullet for the keysigning problem, it appears to be much easier to use.

While we are on the topic of keysigning: I have placed an article in the conference proceedings. It’s about GNOME Keysign. The paper’s title is “Welcome to the 2000s: Enabling casual two-party key signing”, which I think reflects the era in which the current OpenPGP infrastructure is stuck. The mindsets of the people involved are still somewhat left in the old days, when dealing with computing machines was a thing for those with long white beards. The target group of users for secure communication protocols has inevitably grown much larger than it used to be. While this sounds trivial, the interface to GnuPG has not significantly changed since. It also still makes it hard for others to build higher-level tools, by making bad default decisions, by demanding to be in control of “trust” decisions, and by requiring certain environmental conditions (i.e. the filesystem to be used). GnuPG is not a mere library; it seems to understand itself as a complete crypto suite. Anyway, in the paper, I explained how I think contemporary keysigning protocols work, why that is not a good thing, and how to make it better.

I propose to further decentralise OpenPGP by enabling people to have very small keysigning “parties”. Currently, the setup cost of a keysigning party is very high. This is, amongst other things, due to the fact that an organiser is required to collect all the keys, to compile a list of participants, and to make the keys available for download. Then, depending on the size of the event, the participants queue up for several hours, only to then tick checkboxes on pieces of paper. A gigantic SecOps fail. The smarter people sign every box they tick so that an attacker cannot “inject” a maliciously ticked box onto the paper sheet. That’s not fun. The not-so-smart people don’t even bring their sheets of paper, or have them printed by a random person who happens to also be at the conference and, surprise, has access to a printer. What a gigantic attack surface. I think this is bad. Let’s try to reduce that surface by reducing the size of the events.

In order to enable people to have very small events, i.e. two people keysigning, I propose to make most of the actions of a keysigning protocol automatic. So instead of requiring the user to manually compare the fingerprint, I propose that we securely transfer the key to be signed. You might rightfully ask how to do that. My answer is that we’ve passed the 2000s and that we carry devices which are capable of opening a TCP connection on a link-local network, e.g. WiFi. I know this is not necessarily a given, but let’s just assume for the sake of simplicity that one of the devices we carry along can actually do WiFi (and that the network does not block connections between machines). This also prevents certain attacks that users of current Best Practices are still vulnerable to, namely using short key IDs or leaking who you are communicating with.
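A minimal sketch of that transfer step, assuming the public key has already been exported to key.asc and both machines sit on the same link-local network. GNOME Keysign does considerably more (e.g. authenticating the transfer via the fingerprint); the function names and the port are made up.

import socket

def serve_key(path='key.asc', port=9001):
    # Offer the exported public key to a peer on the local network.
    with open(path, 'rb') as f:
        data = f.read()
    srv = socket.socket()
    srv.bind(('', port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(data)
    conn.close()

def fetch_key(host, port=9001):
    # The peer fetches the key and then verifies its fingerprint against
    # one obtained out of band, e.g. scanned from a barcode.
    chunks = []
    with socket.create_connection((host, port)) as sock:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b''.join(chunks)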

Another step that needs to be automated is signing the key. It sounds easy, right? But it’s not just a mere gpg --sign-key. The first problem is that you don’t want the key to be signed to pollute your keyring. That can be fixed by using --homedir or the GNUPGHOME environment variable. But then you also want to sign each UID on the key separately. And this is where things get a bit more interesting. Anyway, to make a long story short: we’re not able to do that with plain GnuPG (as of now) in a sane manner. And I think it’s a shame.
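For the keyring pollution part, a hedged sketch of what I mean could look like this. The per-UID signing is exactly the part that still requires driving gpg --edit-key interactively, which is why it is not shown here; the function name is made up.

import os
import subprocess
import tempfile

def sign_in_scratch_home(keyfile, keyid):
    # Import the key into a throwaway GNUPGHOME so the user's own
    # keyring is not polluted by the key that is about to be signed.
    with tempfile.TemporaryDirectory() as home:
        env = dict(os.environ, GNUPGHOME=home)
        subprocess.run(['gpg', '--batch', '--import', keyfile],
                       env=env, check=True)
        # The signer's secret key would also have to be made available
        # in this scratch home before the following call can succeed.
        subprocess.run(['gpg', '--batch', '--yes', '--sign-key', keyid],
                       env=env, check=True)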

Lastly, sending the key needs to be as “zero-click” as possible, too. I propose to simply reuse the current MUA of the user. That sounds easy, but unfortunately, it’s only 2015 and we cannot interact with, say, Evolution or Thunderbird in a standardised manner. There is xdg-email, but it has annoying bugs and doesn’t seem to be maintained. I’m waiting for a sane email API. I mean, email has been around for some time now; let’s try to actually use it. I hope to be able to make another, more formal announcement on GNOME Keysign soon.
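For illustration, the kind of call I have in mind, with all the caveats about xdg-email mentioned above; the subject line and file name are purely illustrative.

import subprocess

def compose_key_mail(recipient, attachment='signed-key.asc'):
    # Hand the composed mail over to whatever MUA the user has configured.
    subprocess.run(['xdg-email',
                    '--subject', 'Your signed OpenPGP key',
                    '--attach', attachment,
                    recipient], check=True)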

the userbase for strong cryptography declines by half with every additional keystroke or mouseclick required to make it work

— attributed to Ellison.

Anyway, the event was good, I am happy to have attended. I hope to be able to make it there next year again.

FOSDEM 2015

It’s winter again, and it was clear that FOSDEM was coming. However, preparation fell through the cracks, at least for me, mainly because my personal life is fast-paced at the moment. We had a table again, and our EventsBox, which is filled with goodness to demo GNOME, made its way from Gothenburg, where I had actually carried it a couple of months ago.

Unfortunately though, we didn’t have t-shirts to sell. We do have boxes of t-shirts left, but they didn’t make it to FOSDEM :-\ So this FOSDEM didn’t generate nearly as much revenue as in the last years. It’s a pity that this year’s preparation was suboptimal. I hope we can improve next year. We were able to get rid of other people’s things, though 😉 Like last year, the SuSE people brought beer, but it was different this time. Better, even 😉

Because there wasn’t as much action at our booth as in the last years, I could actually attend talks. I was able to see Sri and Pam talking about the Groupon incident that shook us up a couple of months ago. It was really nice to see her, because I wanted to shake hands and say thanks. She did an amazing job. Interestingly enough, she praised us, the GNOME Foundation’s Board of Directors, for working very professionally. Much better than any client she has worked with. I was surprised, because I didn’t really have the feeling we were acting as promptly as we could. You know, we’re volunteers, after all. Also, we didn’t really prepare as much as we could have, which led to some things being done rather spontaneously. Anyway, I take that as a compliment and I guess that our work can’t be all too bad. The talk itself showed our side of things and, if you ask me, painted things in too bright a light. Sure, we were successful, but I attribute much of that success to network effects and a bit of luck. I don’t think we could replicate that success easily.

GNOME’s presence at FOSDEM was not too bad though, despite the lack of shirts. We had a packed beer event and more talks by GNOMEy people. The list includes Karen’s keynote, Benzo’s talk on SDAPDS, and Sri’s talk on GNOME’s impact on the Free Software ecosystem. You can find more here.

A talk that I did see was on improving the keysigning situation. I really mean to write about this some more. For now, let me just say that I am pleased to see people working on solutions. Solutions to a problem I’m not sure many people see, and which I want to devote some time to explaining in a separate post. The gist is that contemporary “keysigning parties” come with non-negligible costs for both the organiser and the participants. The KeySigningPartyTools were presented, which intend to improve the way things are currently done. That’s already quite good, as it’ll reduce the number of errors people typically make when attending such a party.

However, I think that we need to rethink keysigning. Mostly because the state of the art is a massive SecOps fail. There are about a gazillion traps to be avoided and many things don’t actually make much sense. For example, I am unable to comprehend why we mutter a base16-encoded version of a 160-bit fingerprint to each other. Or why we must queue outside in the cold without being able to jump the queue if a single person is a bit slow, because then everybody would be terribly confused and the whole thing would take even longer. Or why we need to do everything on paper (well, I know the arguments: your computer can be hacked, be social, yadda yadda). I did actually give a talk on rethinking the keysigning problem (slides). It’s about a project that I have only briefly mentioned here and which I should really write about in the near future. GNOME Keysign intends to be less of a SecOps fail by letting the user scan a barcode and click “next”. The rest will be operations known to the user, such as sending an email. No more manually comparing fingerprints. No more leaking data to the Internet about who you want to contact. No more MITM attacks against your OpenPGP installation. No more short key IDs that you use accidentally or because you mistyped a letter of the fingerprint. No more editing raw Perl in order to configure your keysigning tool. The talk went surprisingly well. I actually expected the people in the security devroom to be mad at someone like me taking their Perl and their command line away. I received good questions and interesting feedback. I’ll follow up here with another post once real life lets me get to it.

Brussels itself is a very nice city. We were lucky, I guess, because we had some sunshine while we were walking around the city. I love the plethora of restaurants. And I like that Brussels is very open and cultural. Unfortunately, the makerspace was deserted when we arrived, but that was somewhat expected as it was daytime… I hope to return and check it out during the night 😉

On Academia…

A paper that I have authored was published a while ago, but I’ve put this post off for a long time now. Before talking about the paper itself, I want to talk about Academia, as I have the feeling that I need to defend myself for playing their game™. The following may sound overly pessimistic, and while a few bright spots are going to be mentioned, many have been left out for ranting reasons. Keep that in mind when reading this somewhat unstructured rant…

Published papers are the currency in Academia. The more you have, the more respected you are. The quantity is the main metric. No wonder, given that quality control measures are not very well deployed. Pretty much the only mechanism to ensure quality is peer review. The holy grail.

Although the more papers you have at “better” conferences or journals, the better you are, the quality of the conference or journal and the quality of the paper are rarely questioned after publication. Again, I don’t have proper proof for the statements I make, as this is supposed to be a more general rant on current practices in Academia. I can only tell from experience: from listening to people talking about fellow academics, from observing key metrics in various web portals, or from seeing people applying for academic positions. Those people usually have an enumeration of their publications. Maybe it’s a “selection”. But I’ve never seen people put a “ranking” of the quality of the publisher or the publication itself. And it wouldn’t make sense, because we don’t have metrics for that anyway. Sure, there are some people or companies trying to come up with something meaningful. But metrics such as “rejection rate”, “number of citations”, or “h-index” are inherently flawed. For many reasons. Mainly because the data is proprietary. You have to rely on the conference or the journal providing you with correct data. You cannot know whether it is correct, as there is no right for you to know. Secondly, the metric might suffer from chilling effects, in that people think the quality of their publication-to-be is too weak to be published at a “highly ranked” conference. So they don’t even bother to submit. Other metrics, like the average citation count after five years, resemble a stochastic experiment more than they reflect the quality of the publications (Ike Antkare anyone?). Again, you have the effect of people wanting to cite some paper from a “highly ranked” conference, because that is what people will cite in the future. And in order to be found more easily in the future via backwards citation searches, you’d rather cite publications you think will be cited more often in the future (cf.).

Talking about quality…

You have to trust the peer review of the conference or journal but you actually cannot because you don’t even know who the peers were. It’s good to have an informed opinion and it’s a good thing to be able to rely on an informed judgement. But it’s not good having to rely on that. If, for whatever reason, a peer fails to provide appropriate reviews, one should be able to make a decision oneself. Some studies have indeed shown that the peer review process is no better than flipping a coin. So there seems to be some need to review the peer review.

Once again to be clear: I don’t mind peer review. I think it’s good. Blindly publishing without ensuring that there is indeed an advancement of world’s knowledge wouldn’t be good. And peer review could be a tool to control that. But it doesn’t do it right now. I don’t have any concrete proposal. But I think if the reviews themselves and the reviewers were known, then we could make better decisions as to whether to “trust” a publication or not.

Another proposal is to no longer have “journals” as physical hard copies. It is 2015, we have the Web, we have some cool technologies. But we don’t make use of any of that. Instead, we are maintaining the status from 20, or rather 200, years ago. We still subscribe to one-off bundles of printed and stapled paper. And we pay loads for that. And not only do we pay loads for receiving it; if you want to publish in one of those journals (or conferences), you have to pay, too. In fairness, it’s not only the printing and stapling that costs money, but the services around it. Things like proofreading (has anyone ever actually gotten copy-editing?), the peer review (has any peer ever gotten any reimbursement?), or the maintenance of an online database (why is it so damn hard to use any of these web databases?) are things we pay money for. I doubt that we need journals in their current form. We probably do need entities (call them “publishers”), who in turn will need to earn some money, to make sure everything is going smoothly. But we don’t need print-and-forget style publishing. If we could add things like comments, annotations, links, reviews, supplementary material, or a varying level of detail to a paper, even after a few years or even decades, we could move to a “permanently peer reviewed” model. A publication would be reviewed all the time. Ideally by the general public. We could model our current workflow by delegating some form of trust to a group of people, say “reviewers of Journal X”, and only see what these people have vouched for. We could then selectively exclude people from that group of trustees, much like the web of trust. We could, if a paper makes an assumption which is falsified in the future, render a warning when opening the publication. We could decentralise the data such that everyone could build their own index, search mechanism, or interface.

On interfaces

Right now, if you wanted to, say, re-conduct the experiments done in published papers and share your results, you would have to create a publication (which is expected, but right now you would likely have to pay for that) and cite the papers whose results you are trying to reproduce. That’s okay. But if I then wanted to see when and how successfully people tried to redo the experiments, I’d have to rely on the database I’m using to provide a reverse citation search and to have the correct data (which, for some databases, seems to mean the ability to do OCR on the PDF…). That’s not how things should work nowadays, right? We’d expect something more interactive, with tags, open data, something wikiesque. While the ability to reverse-search citations, to highlight some key references, or to link to a key contribution that followed a paper at hand would indeed be nice, we probably have to step back and make existing functionality somewhat usable first. I’m not talking about advanced stuff like exporting search results in a standardised format or deep-linking to a result set from a query. That would need treatment after we’ve solved actually searching for multiple keywords, excluding some conferences or journals, or joining or intersecting queries. All of that only works to some extent, and it’s depressing that we cannot do anything about it, because we don’t have the relevant access or data. Don’t believe me? Well, you shouldn’t. But I’ll provide a table, probably in another post, showing what works with which database and what does not.

On experiments

As I was referring to reproducing results: it is pretty much impossible to reproduce any result, at least in my field, computer science. You don’t get the raw data, let alone the programs to run to get the results. You could argue that it is too complicated to maintain a program that can be run on any platform. Fair enough. I don’t have a solution. But the situation right now is not a good status quo. Right now you don’t get anything. So even if you had the very same setup as the authors of some publication, you would not be able to redo the experiments. It’s likely to be similar in other disciplines. I imagine that rocket scientists do experiments with self-made devices or with some utterly expensive appliance (think LHC). Nobody will be able to reproduce those results, simply because there is just that one LHC out there… But fortunately, we have many digital things which are easy to archive and distribute. We, computer scientists, should make use of that. Why not require authors to submit a virtual appliance in some openly specified format? Obviously, source code would be nice, but even in academia there doesn’t seem to be a culture of sharing code freely, so I’m not even suggesting that.

Phew. After having criticised Academia and having made some half-baked proposals, I forgot what I actually wanted to do: be a good academic (not caring about the public perception of “good” in terms of quantity of publications) and discuss a few things around the paper that we paid a couple of hundred dollars for to get published. But I’ll leave that for another rant post.

In what ways do you think is Academia broken?

WideOpenId – woid.cryptobitch.de

Uh, I meant to blog about this a while ago, but somehow it got lost… Anyway, inspired by http://openid.aliz.es and intrigued by OpenID, I set out to find an implementation that comes with an acceptable level of effort to set up and run.

While the idea of federated authentication sounds nice, the concept gets a bit flawed if everybody uses Google or Stackexchange as their identity provider. Also, you might not really want to provide your very own OpenID, for good reasons. Pretty much as with email, which is why you could make use of mailinator, yopmail, or others.

There is a list of server software on the OpenID page, but none of them really looked like low effort. I wouldn’t want to install Django or any other web framework. But I’d go with a bad Python solution before even looking at PHP.

There is an “official” OpenID example server, which is not WSGI-aware and thus requires more effort than I am willing to invest. Anyway, I took an existing OpenID server and adapted it such that anyone can log in. Always. When developing and deploying, I noticed that mod_wsgi‘s support for virtualenv is really bad. For example, the PYTHONPATH cannot be set inside Apache’s VirtualHost declaration, so you need a custom WSGI file which hard-codes the Python version. It appears that there is also no helper on the Python level to “load” a virtual env. Weird.
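A shim of the kind I am complaining about could look roughly like this. All paths and the module name are placeholders; the hard-coded python2.7 in the site-packages path is exactly the annoyance.

# woid.wsgi -- sketch of the shim mod_wsgi needs; all paths are placeholders.
import site
import sys

VENV = '/srv/woid/venv'
# The Python version has to be spelled out by hand here.
site.addsitedir(VENV + '/lib/python2.7/site-packages')
sys.path.insert(0, '/srv/woid/app')

# Assumed entry point: mod_wsgi looks for a callable named "application".
from openid_server import application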

woid server in action

Anyway, you can now enjoy OpenID by providing http://woid.cryptobitch.de/your-id-here as your identity provider. The service will happily tell anyone that any ID is valid. So you can log in under any name you want. A bit like mailinator for OpenID.

To test whether the OpenID provider actually works, you can download the example consumer and start it.

Collecting Bahn Bonus Points


The Bahn currently has a Web-based game for you to win some of their loyalty points. It’s not a very exciting game, but you get up to 500 points which is half a free ride across the country. (You get the other half when signing up for their program.)

In order to get these 500 points you need to play for an hour or so. Or you observe the Web traffic your browser generates and look closely. You’ll see that the Flash applet fetches a token from the server and sends your result, along with the token and some hash, to the server. How do you get the correct hash, you ask? Worry not, you will get the correct hash from the server if you don’t send the correct one. You can resend your request with the hash the server sent you, and your POST will be accepted. Neat.

I don’t know why they send the “correct_hash”, but it’s obviously a bad idea.
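A rough sketch of that replay, with made-up endpoint and field names (the real ones can simply be read off the browser’s network log); only correct_hash is taken from the actual response.

import requests

# Hypothetical URLs and parameters, for illustration only.
TOKEN_URL = 'https://spiel.bahn.example/token'
SUBMIT_URL = 'https://spiel.bahn.example/submit'

token = requests.get(TOKEN_URL).json()['token']
payload = {'token': token, 'score': 500, 'hash': 'wrong'}

first = requests.post(SUBMIT_URL, data=payload)
# The rejection conveniently contains the hash the server expected.
payload['hash'] = first.json()['correct_hash']

second = requests.post(SUBMIT_URL, data=payload)  # accepted this time
print(second.status_code)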

PS: It seems that Kazam has trouble recording my mouse pointer position correctly.

On GNOME and Groupon

You may have noticed that we ran a campaign calling for support regarding the Groupon trademark issue. Fortunately, everything was over much quicker than everybody expected.

It is fair to say that we were surprised by our campaign and the amount of support we had. And so were they (Groupon). As Bradley said, the campaign could have failed miserably. It was a pure gamble. And I was anything but excited and full of expectations when we launched the campaign. We didn’t know how it would go, and our preparation was… simple, at best. I don’t mean to discredit any of the great work the volunteers around us (and we ourselves) did. But it’s true that we’re not experts and that we didn’t have all the things in place you could have expected us to have. For example, we didn’t really have a progress bar of the money raised on the web page. In fact, that information was only available to a limited extent. It’s mainly my fault, but I also blame the fact that we only had mockups of the page, and no real code, until just hours before the launch. Personally, my thinking was that we’d have days, if not weeks, to slowly fix things up.

Fortunately, things went differently. The coverage was amazing. I didn’t expect our very simple page to generate so much traffic. It’s hard to come up with an exact timeline of events as everything happened quickly and, in fairness, a bit chaotically. It may have been OMGUbuntu or Reddit who reported first on our fundraiser. Other sites, such as Phoronix or Hackernews, followed quickly. I was told that the latter was exceptional, because it ranked very high for a rather long time.

We also had international coverage, i.e. on Heise (with somewhat interesting discussions), Golem (with two more articles!), Computerbase, and others… The usual suspects, such as Slashdot or, of course, LWN, had an article as well. Even Arstechnica and Reuters covered our case. And that although we failed to send press releases to most of those sites. Sorry for that :-/

A quick check of Google News tells me I haven’t found all the articles on the subject, but so far this article is the only one I’ve found which was not in favour of our move. I think this was surprising to most, if not all, of us.

Over the course of the day, this image was floating around, showing Brian’s LinkedIn profile, which some people found hilarious. Some other pictures were floating around and comments were made. Some of them were in unacceptable language, but most of them were just expressing concerns regarding Groupon’s behaviour. Some people cancelled all their accounts with Groupon while others started a petition.

We had close to one retweet per second and money was pouring in. The average amount donated was about 20 USD, and the rate at which people donated was about 75 USD per minute. Every single minute. This can indeed be considered a success. I think I realised that this was going to be big when Freenode sent a message to all of its 80,000 connected users asking them to support our case. “This is bigger than GNOME“, they said. Very correctly so. And it’s a shame, too. Not only for Groupon, because they needed to pull the emergency brake here, but for the system at large. It shouldn’t be the case that you need money in order to defend yourself against someone misusing your name.

timeline

Dear Internet, thanks. I am overwhelmed. We did not expect that amount of feedback to our recent trademark campaign, let alone the financial contributions. Our campaign was very successful. It was too successful, at least from a technical point of view. We are using a self-made, very rudimentary Makefile for the business logic. We are still busy verifying the incoming transactions with PayPal… During the campaign, our servers were very busy handling the incoming requests.

I didn’t expect Groupon to be that cooperative, given the behaviour I had observed over the last few months. It might have been Engadget which was the first to report that Groupon backed down. Other news sites followed suit. All of that happened so quickly that some news sites couldn’t even report on the case itself and could only report on Groupon abandoning their marks. That was probably Groupon’s strategy and, I guess, it was a wise choice. They retired their marks, but the app and their page are still online. They also still have a Gnome job posted. But I have no doubt that this will cease to exist.

Again: thanks to everyone involved. This could just as well have been the end of the GNOME Foundation, given that defending the GNOME marks is one of its main reasons for existing. A special thanks to all of you who have spread the word and made this campaign successful. Let’s hope we do not need such a campaign in the future.

For those of you who are interested in some pretty graphs (thanks benzo!), here is another one showing the transaction sizes and their volume. You can see that we had many, many small contributions. This is so amazing. I am very grateful and happy to see our community standing together so closely.


GNOME at FSCONS14 in Gothenburg, Sweden

I was glad to be invited to FSCONS 2014 in Gothenburg, Sweden. Remember that this is also the place for next year’s GUADEC! This year’s FSCONS was attended by around 150 people or so; I guess it was a bit fewer. That might not sound like a lot, but it’s a very cool event with many interesting people and talks.

We, GNOME, had a presence at the event because I brought the EventsBox and T-shirts to Gothenburg. It was quite a trip, especially with those heavy boxes…

The first keynote of the conference was given by Karl Fogel. He declared the end of copyright in 1993. He imagined copyright as a tree whose bottom has been chopped off, but the top hasn’t noticed that just yet. He put copyright on a timeline and drew a strong relation to the printing press. He claimed that in the United Kingdom, a monopoly used to control who printed and distributed books, and that it then transferred to a differently shaped monopoly which involved the actual authors. These could then transfer their rights to printers. He went on to rant about the fact that nowadays you cannot tip the author for their (free) work. He appealed to the authors of F-Droid or the Firefox mobile app market to integrate such functionality. Overall it was an interesting talk with many aspects. He is a talented speaker.

The second keynote was given by Leigh Honeywell. She talked about communities and community building. She said that she got most of the ideas presented in her talk from Sumana Harihareswara‘s “Models we use to change the world”. During her talk she referred to her experiences when she founded the HackLabTO hackerspace after having attended the CCCamp 2007. She basically shared models for understanding communities and their behaviour. The Q&A session was inspiring and informative. Many questions about managing a community were asked and answered.

Another interesting talk was given by Guilhem Moulin, who talked about Fripost. It is a democratic email service provider from Sweden. He gave a bit of insight into current email usage on today’s Internet. He claimed that we have 2.7 billion Internet users and that the top three email service providers accumulate roughly a third of this population. His numbers were 425 million for GMail, 420 million for Hotmail, and 280 million for Yahoo. All these companies are part of PRISM, he said, which worried him enough to engage with Fripost. In fact, he became a board member after having been a user and a sysadmin. As someone who operates a mail server for myself and others with similar needs, I was quite interested in seeing concentrated efforts like this. Fripost’s governance seems to be interesting. It’s a democratic body, and I wonder how they thwart malicious subversion. Anyway, the talk was about technical details as to how to create your own fripost.org. So I can only encourage you to run your own infrastructure and to found structures that care about a working ecosystem. A memorable quote he provided to underpin this appeal is attributed to Schneier: “We were safer when our email was at 10,000 ISPs than it was at 10“.

My talk went sufficiently well. I guess I preached to the choir regarding Free Software. I don’t think I needed to convince the people that Free Software is a good thing. As for convincing the audience that GNOME is a good thing, I think I faced a bigger challenge. Some of the attendees didn’t seem to be very enthusiastic about their current desktop, which is great. But some others were more in what I would call the old-school category, using lynx, xautoscreenlock, and all that stuff from the 90s. Anyway, we had a great session with so many questions from the audience that I couldn’t even get through my slides.

I gave a lightning talk about signing OpenPGP keys using GNOME Keysign. I probably need to write a separate blog post for that. In short, I mentioned that short key IDs are evil, but that long key IDs are also problematic. Actually, using keyservers is inherently problematic and should be avoided. To do so, I showed how I transfer a key securely and sign it following best practices (thanks to Andrei for an initial version!). Bastian was nice enough to do the demo with me. We needed to cheat a little though, as currently the key is transferred using the WiFi network you are on. The WiFi, however, didn’t allow us to create a TCP connection to each other. We thus opened a WiFi hotspot and used that. I think this would be a useful feature.

The last talk of the conference was given by Hans Lysglimt from Norway. He is, among other things, a politician, an activist, and an entrepreneur who founded an email service. His Runbox has around 1,000,000 accounts and 30,000 paid subscriptions, so it’s fairly big, compared to Fripost at least. Again, running email services myself, I found it interesting to listen to the stories he had to tell. His story was that he received a gag order for running his commercial email service. It remained unclear whether it was sent because of his interview with Julian Assange or not.

Interestingly, he didn’t seem to have received many correct subpoenas in the sense that they were Norwegian court orders. However, in one case the American authorities went through the Norwegian legal system, which he found funny in itself because the two legal systems are not very similar. He eventually mentioned that every email service provider has at least one gag order, either an implicit or an explicit one. Ultimately, he concluded that you cannot trust a corporation.

FSCONS is an interesting event. Their manifesto is certainly impressive. I am glad to have visited and I am looking forward to visiting again. It is very atmospheric, very relaxed, and friendly. A very nice place to be.

mrmcd14 in Darmstadt – DOM-based XSS

After last year’s fabulous event, I was really looking forward to this year’s mrmcd in Darmstadt, Germany. It outgrew last year’s edition and had probably around 250 to 300 people attending. Maybe even more. In fact, 450 clients generated 423 GB of traffic during the conference, which lasted 60 hours or so. That’s around 2 MB/s. That’s megabytes. Per second. Every second. I find that quite impressive. Especially as the outdoor area was very inviting to just hang around, grab a beer, and chat with your fellow hackers. So some people must have had an amazing demand for … updates…

This year’s theme was construction sites, as IT, and especially security, is a major, never-ending, and dangerous construction site. It was well done, with a lot of warning tape, people wearing helmets, hi-vis vests, safety boots, etc. It couldn’t quite surpass last year’s aviation theme, but that had set the bar extremely high. Anyway, the speakers received cool gadgets, like a tool set, a level, and other very well done gadgets. The talks were opened by Unicorn who, as you can see, was wearing proper safety gear. We were given instructions as to how to behave in case of fire, flood, or lack of alcohol. A nifty feature of this event is the availability of carbohydrates in the form of various foodstuffs. It’s very cool to always be able to walk up to the buffet and refill your energy reserves.

The keynote was involuntarily given by dodger, who did not miss the opportunity to show us various construction sites, such as the Utah Data Center. Ultimately (now I am maybe over-interpreting things), it’s also hackers like us who make those possible. We usually decide for ourselves where to go and what to do. It was a good round-up on how we as a community work or should work. Also with some political references, which I think is important, as I have the feeling that many people lose that focus too easily.

An interesting series of talks was given by Ange Albertini, who first presented the PDF file format. It was interesting to see what the format actually looks like. I already knew a little, but I had never really cared about the details. This was a very interesting and visually appealing talk. Pretty much like his other presentations, which were again on file formats and on crypto.

My own talk was scheduled after the second night. I was positively surprised to see a half-filled room on a Sunday morning, after two nights of demanding partying… Anyway, I had an interested crowd which I think I could entertain. You can find my slides here. I was talking about DOM-based Cross-site Scripting. I presented a modified Chrome browser which is able to stop all identified DOM-based XSSs. I will need a separate post to cover the details. As a brief summary: both WebKit and V8 were modified to track taint, that is, to annotate strings with information about their source. Such a source could be document.URL or window.name. This taint information is evaluated whenever a string is about to be compiled to code. The simple approach of blocking every tainted string from being compiled is not taken, as it breaks the Web. Instead, the compiler notices which token is about to be generated and only allows generation if the string is untainted or the token is of a data type (String, Boolean, Number). If the tainted token is, for example, a function call, an assignment, or pretty much anything else, then it is replaced with an illegal token in order to abort compilation.
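To illustrate the decision logic, here is a simplified Python rendering of the idea; this is not the actual WebKit/V8 patch, and the names are made up.

from collections import namedtuple

Token = namedtuple('Token', ['type', 'text', 'tainted'])
ILLEGAL = Token('ILLEGAL', '', False)

# Tokens that merely carry data are harmless even when tainted.
DATA_TYPES = {'STRING', 'NUMBER', 'BOOLEAN'}

def filter_token(token):
    # Untainted input compiles as usual; tainted input may only yield
    # data literals. Anything else (function calls, assignments, ...)
    # is turned into an illegal token, aborting compilation.
    if not token.tainted or token.type in DATA_TYPES:
        return token
    return ILLEGAL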

There is a video of the talk here. And while we are on the topic of videos: the video team is just plain amazing. They released the videos of the event pretty much right after the talks finished, and in a quality that is hard to surpass. You should check out the videos of this conference, but also those of other events. You may find some gems that are well worth watching. Be aware though, some talks are also very much on the vapourware side of things… I guess I don’t need to point to specific talks, as it should be easy to identify them…

I am already looking forward to next year’s event. The bar has, again, been set high and I expect next year’s edition to raise it further. But I hope it will be able to stay small enough not to lose the cosy and comfy feeling. Maybe I shouldn’t blog about this fantastic event so as not to generate too much attention 😉

LibreOffice Con in Bern, Switzerland

I was invited to give a talk in Bern, Switzerland, at the LibreOffice Conference. The LibreOffice people are a nice crowd with diverse backgrounds. I talked to design people, coders doing rather low-level GL things, marketing folks, some people new to Free Software, and some old farts. It sounds like a lot of people, and one is inclined to expect boatloads of people attending the conference when keeping the community statistics in mind. But it was a very cosy event, with fewer than a hundred people. I found that surprising, but not necessarily in a bad way.

I couldn’t make it to many talks, because the conference took place on weekdays. But judging from the schedule there were many interesting talks. The only thing I didn’t like about the schedule was the weird formatting. Seriously, who makes the track’s name more visible than the talk’s title…? Also, grouping by room and not by time is a bit weird.

Anyway, my talk went well although it was in the first slot after the free beer party 😉 You can find my slides in the collection. I was talking about GNOME in general, but with a twist for those who migrate from proprietary software to Free Software. I hope I could convey that the GNOME desktop might be a viable alternative to proprietary products.

As this was a great, comfortable conference, I’m looking forward to visiting next year’s event.

Attending the DANTE Tagung in Karlsruhe

Much to my surprise, the DANTE Tagung took place in Karlsruhe, Germany. It appears to be the main gathering of the LaTeX (and related) community.

Besides pub-based events in the evenings, they also had talks. I knew some people on the programme by name and was eager to finally meet them in real life. One of those was Markus Kohm, of KOMA-Script fame. He presented new or lesser-used features. One of those was scrlayer, which is capable of adding layers to a page, i.e. background or foreground layers. So you can add, e.g., a logo or a document version to every page, more or less like this:

\DeclareNewLayer[
    background,
    topmargin,
    contents={\hfill
        \includegraphics[width=3cm, height=2cm]
                                  {example-image}
    }%
]{Logo}
\AddLayersToPageStyle{@everystyle@}{Logo}

You could do that with fancyhead, but then you’d only get the logo depending on your page style. The scrlayer solution will always be applied. And it’s more KOMAesque, I guess.

The next talk I attended was given by Uwe Ziegenhagen on new or exciting CTAN packages.
Among the packages he presented was ctable. It can be used to typeset tables and figures. It uses a favourite package of mine, tabularx. The main advantage seems to be the ability to use footnotes, which is otherwise hard to achieve.

He also presented easy-todo, which provides “to-do notes throughout a document, and will provide an index of things to do”. I usually use todonotes, which seems similar enough, so I don’t really plan on changing that. The difference seems to be that easy-todo offers more fine-grained control over what goes into the list of todos to be printed.

The flowchart package seems to allow drawing flowcharts with TikZ more easily, especially following the “IBM Flowcharting Template”. The flowcharts I have drawn so far were easy enough and I don’t think this package would have helped me, but it is certain that the whole process of drawing with TikZ needs to be made much easier…

Herbert Voß went on to talk about ConTeXt, which I had already discovered and been pleased by. From my naïve understanding, it is a “different” macro set for the TeX engine. So it’s not PDFTeX, LuaLaTeX, or XeTeX, but ConTeXt. It is distributed with your favourite TeXLive distribution, so it should be deployed on quite a few installations. However, the best way to get ConTeXt, he said, was to fire up the following command:

rsync -rlpt rsync://contextgarden.net/minimals/setup/.../bin .

Wow. rsync. For binary software distribution. Is that the pinnacle of apps? In 2014? rsync?! What is this? 1997? Quite an effective method, but I doubt it’s the most efficient. Let alone security-wise.

Overall, ConTeXt is described as being a bit of an alien in the TeX world. The relationship with TeXLive is complicated, at best, and conventions are not congruent which causes a multitude of complications when trying to install, run, extend, or maintain both LaTeX and ConTeXt.


The next gathering will take place in the very north of Germany. A lovely place, but I doubt that I’ll be attending. The crowd is nice, but it probably won’t be interesting for me, talk-wise. I attribute that partly to my inability to enjoy coding TeX or LaTeX, but also to the arrogance I felt from the community. For example, people were mocking use cases others had, dismissing them as irrelevant. So you might not be able to talk TeX with those people, but they are nice anyway.

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported.