19th DFN Workshop 2012

The 19th DFN Workshop happened again *yay* and I was lucky enough to be able to take part 🙂

Having been there last year, we all knew the venue, and it’s great. The hotel is very professional and the receptions are very good. The conference room itself is very spacious and well equipped for hosting a couple of hundred people.

So after a first caffeine infusion the conference started and the first guy gave the keynote. Tom Vogt (from Calitarus GmbH) talked about security and usability, and he made some interesting points. He doesn’t want more “Security Awareness” but more “User Awareness”. He claims that users are indeed aware of security issues but need to be properly communicated with. He gave Facebook as an example: if you get your login wrong a couple of times, Facebook will send you an email apologising for the trouble *you* had while logging in. As opposed to the “if the question is stupid, the helpdesk will set you on fire” attitude.

So instead of writing security policies with a lot of rules, he wants us to write policies that take the user’s view into account and make sense to the average user. He also brought up passwords and password policy. Instead of requiring at least 8 characters (which will be read as “exactly 8 characters” by the user anyway), one should encourage a more sensible strategy, e.g. the XKCD one: a few random common words instead of one short, convoluted string.
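To make that strategy concrete, here is a minimal sketch of an XKCD-style passphrase generator (my illustration, not from the talk; the word-list path is an assumption and varies between systems):

```python
import secrets

# A sketch of the XKCD approach: a few random common words instead of
# one short, convoluted string. The word-list path is an assumption
# and varies between systems.
WORDLIST = "/usr/share/dict/words"

def xkcd_passphrase(n_words=4):
    with open(WORDLIST) as f:
        # keep plain alphabetic words, drop entries with apostrophes etc.
        words = [w.strip().lower() for w in f if w.strip().isalpha()]
    # secrets.choice draws from the OS CSPRNG, unlike random.choice
    return " ".join(secrets.choice(words) for _ in range(n_words))

print(xkcd_passphrase())
```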

He also disliked the metaphors we’re using all the time, e.g. when we talk about documents or crypto keys. A document is something static that you hold in your hand; it can’t do any harm. But a Word “document” is indeed something different, because there are macros and whatnot. And it’s not a big problem to temporarily give away physical keys, but in the crypto world it is. And people, he claimed, make exactly those associations when confronted with these terms. Unfortunately, he didn’t have a fix for those long-established metaphors, but he said extra caution needs to be applied when talking in these terms.

Dissonance was another big thing. He claimed that it’s problematic that starting a program and opening a file are the very same action in modern operating systems. If opening a document were triggered differently, the user could see whether the file they received was indeed a text document or just some binary gibberish.

And well, as the talk was titled “Usability”, user interfaces were criticised, too. He mentioned that dialogue boxes are very rude: popping one up is the equivalent of physically holding someone until they answer your question. That trains users to avoid and escape dialogues as quickly as possible without even reading them, totally destroying the whole point of a dialogue. So we should only use them in “life or death” situations where it would be okay to physically hold someone. And well, “user errors are interface errors”.

My favourite usability bug is the whole keysigning story. It’s broken from beginning to end. I think that if we came up with a nice and clean design for a procedure to sign each other’s keys, the Web of Trust model would be used more and more. Right now, it’s an utterly complex process involving different media, and all of that is doomed to be broken.

After that, a guy from the Leibniz-Rechenzentrum talked about internal perpetrators in university data centres. They basically introduced Login IDS, a tool to scrub your logs and make them more administration friendly. He said that they didn’t watch their logs because there was way too much data. They had around 800 logins per day on their two SSH and two Citrix servers, and nobody really checked who was logging in when. To reduce the amount of log data, they check the SSHd log and fire different events, e.g. if someone is logging in for the very first time, or if a user hasn’t logged in at that time of day or from that IP address before. That, he claimed, reduced the log volume to 10% of the original. Unfortunately, the git repo shows a single big and scary Perl file with no license at all 😐
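The underlying idea is simple enough to sketch. This is not their tool (which is that Perl file), just a toy illustration of the event-based approach, with the log path and format assumed to be a stock Debian-style auth.log:

```python
import re
from collections import defaultdict

# Toy illustration of the Login IDS idea: keep a baseline of
# (user, source IP) pairs seen in the sshd log and only report
# logins that deviate from it.
LOGIN_RE = re.compile(
    r"sshd\[\d+\]: Accepted \S+ for (?P<user>\S+) from (?P<ip>\S+)")

def novel_logins(lines):
    seen = defaultdict(set)  # user -> set of source IPs seen so far
    for line in lines:
        m = LOGIN_RE.search(line)
        if not m:
            continue
        user, ip = m.group("user"), m.group("ip")
        if user not in seen:
            yield f"first ever login for {user} (from {ip})"
        elif ip not in seen[user]:
            yield f"{user} logging in from new address {ip}"
        seen[user].add(ip)

with open("/var/log/auth.log") as log:  # path assumed, distro-specific
    for event in novel_logins(log):
        print(event)
```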

Another somewhat technical talk followed, given by Michael Weiser. He talked about security requirements for modern high-performance computing environments, and I couldn’t really follow all the way through. But from what I understood, he wants long-running batch jobs to have all the necessary Kerberos or AFS tokens, because you don’t know how long you’ll have to wait until your data gets processed. And well, he showed some solutions (S4U2self) and proposed another one, which I didn’t really understand. But apparently everything needs to be very complex, because you cannot get a ticket that’s valid long enough, and instead you get a Ticket-Granting Ticket which empowers you to get all the tickets you want for a basically unlimited amount of time…?

The break came up at just the right time, so the caffeine stock could be replenished. It did get used up quite quickly 😉

The first talk after the break introduced HoneypotMe, a technology that enables you to put honeypots on your production machines without risking their compromise. It basically creates tunnels for the ports that are open on the honeypot but not on the production machine, so an attacker would not detect the honeypot that easily. Although it’s kinda nonsensical for a Linux machine to have the MSSQL port open. Interesting technology, although I don’t quite understand why they put the honeypot behind the production machine (network-topology-wise), so that you have to modify the TCP stack on the production machine in order to relay connections to the actual honeypot. Instead, one could put the honeypot in front and relay connections to the production machine. That way, one would probably avoid plumbing the TCP layer on the machine that’s meant to serve production purposes.
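The relaying idea itself is straightforward. Here is a toy user-space stand-in (my sketch, not how HoneypotMe actually works — it hooks the TCP stack instead — and all addresses are made up): accept connections on a port the production host does not actually serve and pipe the traffic to a real honeypot elsewhere.

```python
import socket
import threading

LISTEN_PORT = 1433               # e.g. MSSQL, not served by the real host
HONEYPOT = ("192.0.2.10", 1433)  # assumed honeypot, documentation address

def pipe(src, dst):
    # shuttle bytes one way until the sender hangs up
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", LISTEN_PORT))
srv.listen(5)
while True:
    client, _ = srv.accept()
    upstream = socket.create_connection(HONEYPOT)
    # relay in both directions; the attacker talks to the honeypot
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```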

Another really technical talk was given by a guy from Forschungszentrum Jülich. It was so technical that I couldn’t follow. Jesus Christ, were the slides packed. The topic was quite interesting, though; unfortunate that it was such an exhausting presentation. He tried to tell us how to manage IPv6, or well, how to damn well manage it, because otherwise you’d have loads of trouble in your network. He was referring a lot to the very interesting IPv6 toolkit by THC. He claimed that those attacks were not easy to defend against. But it doesn’t even need an attacker, he said: Windows would be enough to screw up your network, e.g. by somehow configuring Internet Connection Sharing it would send out weird Router Advertisements. But I might have gotten that wrong, because he was throwing lots of words and acronyms at us. NDPMon. Rafixd. Ramond. WTF. Fortunately, it was the last talk and we could head off to have some proper beer.
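For flavour, this is roughly what those rogue-RA monitors do. A tiny scapy sketch (mine, not one of the tools he listed; needs root, and the legitimate router’s link-local address is an assumption) that flags Router Advertisements from anyone but the router you expect:

```python
from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6ND_RA

LEGIT_ROUTER = "fe80::1"  # assumed link-local address of the real router

def check_ra(pkt):
    # every RA not coming from the known router is suspicious
    src = pkt[IPv6].src
    if src != LEGIT_ROUTER:
        print(f"rogue Router Advertisement from {src}!")

# watch the wire for Router Advertisements (requires root privileges)
sniff(lfilter=lambda p: ICMPv6ND_RA in p, prn=check_ra)
```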

After way too little sleep and ridiculous amounts of very good food, the second day started off with a great talk by a guy from RedTeam Pentesting. He had done very interesting research involving URL shortening services and presented his results, some of which are quite scary. If you’re remotely interested in this topic, you should have a look at the paper once it is available. There is a slightly different version here.

So the basic problem was described as follows: a user wants to send a link to a friend, but the URL is too long, so email clients break it (well, he didn’t mention which ones, though) or Twitter would simply not accept it… We kinda have to assume that Twitter is a useful thing that people actually use to transmit links. Anyway, to shorten links, people may use a service that translates the long URL into a short one. And now the problems start.

First of all, the obvious tracking issues arise. The service provider can see who clicks on which links, and even worse: set cookies so that users are identifiable even much later. Apparently, almost all of these services make use of tracking cookies which last for a couple of years. Interestingly, Google is reported not to make use of tracking technologies in their URL shortening service.
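You can check such claims yourself quite easily: fetch a shortened link without following the redirect and look at what cookies the service tries to set. A small sketch (host and path of the short link are placeholders):

```python
import http.client

# http.client does not follow redirects, so we see the shortener's
# own response rather than the link target's.
conn = http.client.HTTPConnection("example.com")  # placeholder host
conn.request("GET", "/abc123")                    # placeholder path
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))
for name, value in resp.getheaders():
    if name.lower() == "set-cookie":
        print(value)  # check the Expires / Max-Age attributes
```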

Secondly, you eventually leak a secret that is encoded in the URL you are shortening. And that’s apparently what people do: they use Google Docs or other sensitive web apps that encode important access tokens in the URL, and then throw that URL with both hands at the service provider. He claimed to have found many interesting documents, ranging from “obviously very private photos” over balance sheets from some company to a list of addresses of kindergarten kids. He got a good percentage of private documents, which was really interesting to see.

But it gets worse. He set up a brand-new web server listening on a brand-new domain (fd0.me) and created URLs which he then shortened using the services. The page his web server delivered contained a password which no search engine knew back then. The question was: do URL shortening services leak their data to search engines? Or worse: do they scan the database for interesting-looking URLs themselves? Turns out: yes and yes. He found his password on search engines, and curious administrators in his web server logs.
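The experiment boils down to planting a canary. A sketch of the idea (the domain is from the talk; the token scheme is my assumption):

```python
import secrets

# Plant a canary: an unguessable URL whose only purpose is to show up
# in your own access log if anyone but you fetches it.
token = secrets.token_hex(16)
print("shorten this and wait:", f"http://fd0.me/{token}")
# afterwards, grep for the token in the web server's access log;
# every hit that is not you is a crawler or a curious admin.
```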

Other obvious problems include loss of the URL. Apparently people use shortened URLs in long-lasting things like books. And well, URL shortening services are not necessarily known for being long-lived. Fun fact: his university used to have such a service, but they shut it down…

Another technical issue is speed. Because of the indirection, you pay an overhead in time on every click. Google is the winner here again: their service serves the redirects fastest.
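That overhead is easy to measure yourself: time how long the shortener takes to answer with its redirect, before the real target is even contacted. A rough sketch (the short link is a placeholder):

```python
import http.client
import time

HOST, PATH = "example.com", "/abc123"  # placeholder short link

t0 = time.perf_counter()
conn = http.client.HTTPConnection(HOST)
conn.request("GET", PATH)
resp = conn.getresponse()  # the redirect itself; it is not followed
elapsed = time.perf_counter() - t0
print(f"{resp.status} -> {resp.getheader('Location')} "
      f"in {elapsed * 1000:.1f} ms")
```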

So yeah that was a very interesting talk which clearly showed the practical risks of such services.

An electronic ID card was introduced in Germany rather recently, and the next guy did some research (sponsored by the Ministry of the Interior) to explore the “eID Online Authentication Network Threat Model, Attacks and Implications”. Nobody in the audience actually used the eID, so he had to tell us what you are supposed to do with it. It is used to authenticate data like your name, address and birthday, or just the fact that you are of legal age. It’s heavily focussed on browser stuff, so the scenarios are a bank or a web shop. After the website requests eID functions, the browser speaks to the local eID daemon, which then wants to read your eID and communicates with the servers. Turns out that everything seems to be quite well designed, except, well, the browsers. So he claims it is possible to man-in-the-middle a connection if one can make a browser terminate a successfully opened connection, i.e. after all the TLS handshakes have finished, one would terminate the connection, intercept it, and then no further verification would be done. A valid attack scenario, though not necessarily an easy position to get into.


There were tiny talks as well. My favourite was Martin Johns from SAP talking about cross-domain policies. Apparently, standards exist to “enhance” the same-origin policy and enable JavaScript in browsers to talk to different domains. He scanned the internet™ and found 3% of the domains to have wildcard policies. 50% of those had somehow sensitive web apps, i.e. ones with authentication. He closed by recommending CORS for doing cross-domain stuff.
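The gist of that recommendation, as a minimal sketch (mine, not his; the trusted origin is made up): instead of a wildcard that lets every site read your responses, name the one origin you actually trust.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

TRUSTED_ORIGIN = "https://app.example.com"  # assumed trusted origin

class CorsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # scoped to one origin, unlike Access-Control-Allow-Origin: *
        self.send_header("Access-Control-Allow-Origin", TRUSTED_ORIGIN)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello cross-origin world\n")

HTTPServer(("", 8000), CorsHandler).serve_forever()
```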

The last two talks were quite interesting. The first one talked about XML Signature Wrapping, a technique that I hadn’t heard of before, mostly because I’m not into XML at all. But it seems that you can sign parts of an XML document, and well, because XML is utterly complex, libraries fail to handle that properly. There are several attacks, including simply duplicating the signed XML subtree with different contents and hoping that the verifier checks the correct tree while the application works on the other. Simple, huh? He claimed to have found CVE-2011-1411, a vulnerability in an interesting user of XML: SAML, an authentication protocol based on XML.
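To make the trick tangible, here is a toy illustration (made-up document, not a real verifier): the signature reference resolves an Id and finds the untampered subtree, while the application naively takes the first matching element.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<Envelope>
  <Body Id="attack">            <!-- injected by the attacker -->
    <Order account="evil"/>
  </Body>
  <Wrapper>
    <Body Id="signed">          <!-- the subtree the signature covers -->
      <Order account="alice"/>
    </Body>
  </Wrapper>
</Envelope>
""")

# the "verifier" resolves the reference Id and checks the right tree...
signed = doc.find(".//Body[@Id='signed']")
print("verified:", ET.tostring(signed, encoding="unicode").strip())

# ...but the application just grabs the first <Body> and acts on it
used = doc.find(".//Body")
print("processed:", ET.tostring(used, encoding="unicode").strip())
```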

Afterwards, I was surprised to see an old tool I was playing with some time ago: Volatility. It has gained better Linux support, and the speaker showed off some features and explained how to make it support your Linux version. Quite interesting to see that people focus on bringing memory forensics to Linux.

So if you are more interested in the topics, feel free to browse or buy the book which includes all the papers.

This year’s DFN Workshop was much more interesting content-wise, and I am glad that it managed to present interesting topics. Again, the setting and the catering were very nice, and I hope to be able to attend many more DFN Workshops in the future 🙂
