Talking on PrivacyScore at DFN Security Conference 2018 in Hamburg, Germany

I seem to have skipped last year, but otherwise I have been to the DFN Workshop regularly. While I had a publication at this venue before, it’s only this year that I got to give a talk at the conference.

I cannot comment much on the other talks, because I could not attend too many 🙁 But our talk (slides) was well attended and I think people appreciated the presentation being a bit lighter than the previous one about the upcoming GDPR.

I talked about PrivacyScore.org and how we’ve measured German universities. The paper is here. Our results were mixed. As for TLS deployment, with a lot of imagination we can see a line dividing Germany. The West seems to have fewer problems with their TLS deployment than the East. The more red an area is, the worse its TLS support is. That ranges from not offering TLS at all to having an invalid certificate or using broken parameters.

As for tracking, we had the hypothesis that privately run institutions have a higher interest in tracking their users than publicly run institutions do. The following graphic reflects the geographic distribution of trackers on German universities’ Web sites.
That hypothesis can be confirmed by looking at the PrivacyScore list that separates those institutions.

We found data that was very likely not meant to be there, such as database dumps or Git repositories of the Web site’s code (including passwords for their staging environments, etc.). We tried to report these issues to the Web site operators, but it was difficult to get hold of the responsible people. For the 21 leaks we found I have 93 emails in my mailbox. Ideally, the 21 I sent off would have been enough. But even sending those emails is hard, because people don’t respect RFC 2142 and don’t have a security@ address. Eventually, we made the Internet a tiny bit more secure by having those Web site operators remove the leaks from their sites, but there are still some pages which have (supposedly) unwanted information such as their visitors’ IP addresses online. The graph below shows that most of the operators who reacted did so in the first few days. So management of security incidents seems to be an area for improvement.

I hope to be able to return next year, if only for the catering 😉 Then I’d better attend some more talks and chat with the other guests.

DFN Workshop 2015

As in the last few years, the DFN Workshop happened in Hamburg, Germany.

The conference was keynoted by Steven Le Blond, who talked about targeted attacks, e.g. against dissidents. He mentioned that he had already presented the content at the USENIX Security conference, which some people hold in very high regard. He first showed how he used Skype to look up the IP addresses of his boss and how similarly targeted attacks were executed in the past. Think Stuxnet. His main focus was attacks on NGOs, though, particularly an attacker sending malicious emails to the victim.

In order to find out what attack vectors were used, they contacted over 100 NGOs to ask whether they had been attacked. Two NGOs affiliated with the WUC, which represents the Uyghur minority in China, had received 1500 malicious emails, out of which 1100 were carrying malware. He showed examples of those emails and some of them were indeed very targeted. They contained a personalised message with enough context to look genuine. However, the mails also had a malicious DOC file attached. Interestingly enough, the infrastructure used for the targeted attacks was re-used for several victims. You would have expected the attacker to keep their infrastructure separated for the various victims, especially when carrying out targeted attacks.

They also investigated how quickly the attacker exploited publicly known vulnerabilities. They measured the time between the release date of a vulnerability and the sending of a malicious email exploiting it. They found that some of the attacks were launched on day 0, meaning that as soon as a vulnerability was publicly disclosed, an NGO was attacked with a relevant exploit. Perhaps interestingly, they did not find any 0-day exploits launched. They also measured how the security precautions taken by Adobe for their Acrobat Reader and Microsoft for their Office product (think sandboxing) affected the frequency of attacks. It turned out that it does help to make your software more secure!

To defend against targeted attacks based on spoofed emails he proposed to detect whether the writing style of an email corresponds to that of previously seen emails of the presumed contact. In fact, their research shows that they are able to tell whether the writing style matches that of previous emails with very high probability.

The following talk assessed end-to-end email solutions. It was interesting, because they created a taxonomy for 36 existing projects and assessed qualities such as their compatibility, the trust model used, or the platforms they run on.
The 36 solutions they identified were (don’t hold your breath, wall of links coming): Neomailbox, Countermail, salusafe, Tutanota, Shazzlemail, Safe-Mail, Enlocked, Lockbin, virtru, APG, gpg4o, gpg4win, Enigmail, Jumble Mail, opaqueMail, Scramble.io, whiteout.io, Mailpile, Bitmail, Mailvelope, pEp, openKeychain, Shwyz, Lavaboom, ProtonMail, StartMail, PrivateSky, Lavabit, FreedomBox, Parley, Mega, Dark Mail, opencom, okTurtles, End-to-End, kinko.me, and LEAP (Bitmask).

Many of them could be discarded right away, because they were not production ready. The list could be further reduced by discarding solutions which do not use open standards such as OpenPGP, but rather proprietary message formats. After applying more filters, such as that the private key must not leave the realm of the user, the list could be condensed to seven projects. Those were: APG, Enigmail, gpg4o, Mailvelope, pEp, Scramble.io, and whiteout.io.

Interestingly, the latter two were not compatible with the rest. The speakers attributed that to the use of PGP/MIME vs. PGP/Inline, and they favoured the latter. I don’t think that’s a good idea, though. The authors attest that pEp has a lot of potential and they do seem to have interesting ideas. For example, they offer to sign another person’s key by reading “safe words” over a secure channel. While this is not a silver bullet for the keysigning problem, it appears to be much easier to use.

While we are on keysigning: I have placed an article in the conference proceedings. It’s about GNOME Keysign. The paper’s title is “Welcome to the 2000s: Enabling casual two-party key signing”, which I think reflects in what era the current OpenPGP infrastructure is stuck. The mindsets of the people involved are still a bit stuck in the old days when dealing with computation machines was a thing for those with long and white beards. The target group of users for secure communication protocols has inevitably grown much larger than it used to be. While this sounds trivial, the interface to GnuPG has not significantly changed since. It also still makes it hard for others to build higher-level tools by making bad default decisions, demanding to be in control of “trust” decisions, and by requiring certain environmental conditions (i.e. the filesystem to be used). GnuPG is not a mere library. It seems it understands itself as a complete crypto suite. Anyway, in the paper, I explain how I think contemporary keysigning protocols work, why that’s not a good thing, and how to make it better.

I propose to further decentralise OpenPGP by enabling people to have very small keysigning “parties”. Currently, the setup cost of a keysigning party is very high. This is, amongst other things, due to the fact that an organiser is required to collect all the keys, to compile a list of participants, and to make the keys available for download. Then, depending on the size of the event, the participants queue up for several hours, only to then tick checkboxes on pieces of paper. A gigantic secops fail. The smarter people sign every box they tick so that an attacker cannot “inject” a maliciously ticked box onto the paper sheet. That’s not fun. The not so smart people don’t even bring their sheets of paper, or have them printed by a random person who happens to also be at the conference and, surprise, has access to a printer. What a gigantic attack surface. I think this is bad. Let’s try to reduce that surface by reducing the size of the events.

In order to enable people to have very small events, i.e. two people keysigning, I propose to make most of the actions of a keysigning protocol automatic. So instead of requiring the user to manually compare the fingerprint, I propose that we securely transfer the key to be signed. You might rightfully ask how to do that. My answer is that we’ve passed the 2000s and that we carry devices which are capable of opening a TCP connection on a link-local network, e.g. WiFi. I know, this is not necessarily a given, but let’s just assume for the sake of simplicity that one of the devices we carry along can actually do WiFi (and that the network does not block connections between machines). This also prevents certain attacks that users of current Best Practices are still vulnerable to, namely using short key IDs or leaking who you are communicating with.
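
To make the transfer concrete, here is a minimal sketch (my illustration, not the protocol from the paper; the file name, port, and checksum-based verification are assumptions of mine):

```python
#!/usr/bin/env python3
"""Sketch: offer an exported OpenPGP key over link-local HTTP.
Not the actual GNOME Keysign protocol; names and port are made up."""
import hashlib
from http.server import HTTPServer, SimpleHTTPRequestHandler

KEY_FILE = "alice.asc"  # assumed: exported with gpg --export --armor

# Publish the key; the digest stands in for whatever out-of-band
# verification the real protocol uses to authenticate the transfer.
with open(KEY_FILE, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(f"Peer fetches http://<link-local-ip>:8000/{KEY_FILE}")
print(f"and verifies SHA-256 {digest} out of band.")

HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```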

Another step that needs to be automated is signing the key. It sounds easy, right? But it’s not just a mere gpg --sign-key. The first problem is that you don’t want the key to be signed to pollute your keyring. That can be fixed by using --homedir or the GNUPGHOME environment variable. But then you also want to sign each UID on the key separately. And this is where things get a bit more interesting. Anyway, to make a long story short: we’re not able to do that with plain GnuPG (as of now) in a sane manner. And I think it’s a shame.
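
The keyring-pollution part, at least, is easy to picture. A rough sketch under my own assumptions (placeholder file names and fingerprint, no passphrase/pinentry handling, and no exporting of the resulting signature):

```python
#!/usr/bin/env python3
"""Sketch: import and sign in a throw-away GNUPGHOME so the user's
own keyring stays clean. Illustration only."""
import os
import subprocess
import tempfile

KEY_FILE = "alice.asc"              # key to be signed (placeholder)
FINGERPRINT = "0123456789ABCDEF"    # placeholder fingerprint

with tempfile.TemporaryDirectory() as scratch:
    env = dict(os.environ, GNUPGHOME=scratch)
    # Our own secret key and the key to be signed both go into the
    # scratch keyring, not into ~/.gnupg.
    subprocess.run(["gpg", "--batch", "--import", "my-secret.asc"],
                   env=env, check=True)
    subprocess.run(["gpg", "--batch", "--import", KEY_FILE],
                   env=env, check=True)
    # This signs *all* UIDs at once; GnuPG offers no sane batch way
    # to sign a single UID, which is exactly the complaint above.
    subprocess.run(["gpg", "--batch", "--yes", "--sign-key", FINGERPRINT],
                   env=env, check=True)
```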

Lastly, sending the key needs to be as “zero-click” as possible, too. I propose to simply reuse the current MUA of the user. That sounds easy, but unfortunately, it’s only 2015 and we cannot interact with, say, Evolution and Thunderbird in a standardised manner. There is xdg-email, but it has annoying bugs and doesn’t seem to be maintained. I’m waiting for a sane Email-API. I mean, Email has been around for some time now, let’s now try to actually use it. I hope to be able to make another more formal announcement on GNOME Keysign, soon.
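
For completeness, the MUA hand-over via xdg-email could look like the following sketch (file name and address are made up; whether --attach behaves depends on the MUA and the xdg-utils version, which is precisely the complaint above):

```python
#!/usr/bin/env python3
"""Sketch: hand the signed key back via the user's MUA using
xdg-email. Flaky in practice, as lamented above."""
import subprocess

subprocess.run([
    "xdg-email",
    "--subject", "Your signed OpenPGP key",
    "--body", "Attached is your freshly signed key.",
    "--attach", "alice.signed.asc",   # assumed output of the signing step
    "alice@example.com",
], check=True)
```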

the userbase for strong cryptography declines by half with every additional keystroke or mouseclick required to make it work

— attributed to Carl Ellison.

Anyway, the event was good, I am happy to have attended. I hope to be able to make it there next year again.

20th DFN CERT Workshop

I was fortunate enough to be able to attend this year’s DFN Workshop, which happened to be an anniversary as the event turned 20. Needless to say, I haven’t made all 20 😉 Well, I did attend a few anyway.

The keynote was surprisingly political. Marcus J. Ranum (Tenable Network Security) talked about Cyberwar – A Matter of Logistics and Privilege and made witty and thoughtful points. He asked questions such as whether Stuxnet was an act of terrorism and whether its victims could sue the US to get their damages reimbursed. Highly interesting subject, highly interesting speaker.

Jan Ole Malchow presented “distPaste”, an HTML5-based webapp that uses visitors’ browsers to store data, effectively a distributed storage. Might be related to the fun project FillDisk.com.

Jens Liebchen from the awesome RedTeam Pentesting again gave a nice presentation this year. They had got a new “Multi Function Printer”, a Canon C5051i (so a huge thing…), and had certain requirements regarding its security. He presented a threat model and shared some insights he gained while dealing with the vendor and, more importantly, after having analysed the machine himself. It turns out that the device has a regular hard drive and runs some flavour of Linux with a big BLOB for their services. However, data was found to be spread over the partitions even though they had bought a licence for “secure deletion” of data. They, rightfully, did not expect to find traces of their print or scan jobs. He mentioned that the security properties of such devices had not been assessed yet. So there are loads of toys to play with.

Also funny was the work of Benjamin Kahler and Steffen Wendzel, who did “Wardriving against building automation“. Basically, the question was how easy it is to break into a network and remotely control the building, i.e. open doors and windows. Turns out there are standard products which are not well secured, and the deployment is usually not done properly either, so that network boundaries either don’t exist or can be crossed easily.

The security of Android apps’ SSL/TLS usage was presented by Matthew Smith. They examined many, many apps, decompiled them, and statically analysed how well they handle various conditions when setting up a TLS connection. Apparently, many programs just do not care about the security properties of their TLS connection, to the point that they simply disable the verification of the certificate chain. The model is said to be too complex and too burdensome to set up during development. They also recommended introducing a new permission for sending data unencrypted, so that a user could demand that an application must not transfer data as plain text.
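
The talk’s examples were Android/Java, but the bug class transposes directly; a minimal sketch of what “just disable verification” amounts to, in Python:

```python
#!/usr/bin/env python3
"""The bug class described above, transposed from Android/Java to
Python: disabling certificate-chain verification makes the TLS
connection trivially interceptable."""
import ssl
import urllib.request

# What the careless apps effectively do: accept any certificate.
broken = ssl.create_default_context()
broken.check_hostname = False
broken.verify_mode = ssl.CERT_NONE   # the fatal line

# What they should do: keep the default, verifying context.
sane = ssl.create_default_context()

urllib.request.urlopen("https://example.com/", context=sane)
```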

Besides listening to the talks and chatting to people, I tried to get on the wireless in the hotel. Turns out they interfere with your traffic, i.e. they block everything and redirect your web traffic to present you a landing page from which you are supposed to log in to the gratis wireless. The credentials to be entered are the room number and the last name of a guest of that room. Well, given the speakers and attendees list (or some knowledge of popular names in the region), it seems easy enough to just poke some data in and hope for the best. Or, instead of doing that manually, have a program do it for you. Voilà, I present to you “petitelysee”: a simple Python script to try to log in to a landing page. As I’ve said, it’s the result of three hours or so of work, so it’s not very nicely done, and I obviously didn’t try it out. It has just been coded in a way that I *think* might work.
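
The idea is simple enough to sketch (this is my reconstruction, not petitelysee itself; the portal URL, form field names, and success heuristic are all made up and would need adapting):

```python
#!/usr/bin/env python3
"""Sketch of the petitelysee idea: try room-number/surname pairs
against a captive-portal login form. URL, field names and the
success check are assumptions; a real portal will differ."""
import itertools
import urllib.parse
import urllib.request

PORTAL = "http://192.168.1.1/login"          # assumed landing page
SURNAMES = ["Mueller", "Schmidt", "Meyer"]   # e.g. from the attendee list

for room, name in itertools.product(range(100, 500), SURNAMES):
    data = urllib.parse.urlencode({"room": room, "name": name}).encode()
    with urllib.request.urlopen(PORTAL, data=data) as resp:
        if b"welcome" in resp.read().lower():   # naive success heuristic
            print(f"worked: room {room}, name {name}")
            break
```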

19th DFN Workshop 2012

The 19th DFN Workshop happened again *yay* and I was lucky enough to be able to take part 🙂

After last year, we all knew the venue, and it’s great. The hotel is very professional and the receptions are very good. The conference room itself is very spacious and well equipped for having a couple of hundred people there.

So after a first caffeine infusion the conference started and the first guy gave the keynote. Tom Vogt (from Calitarus GmbH) talked about Security and Usability, and he made some interesting points. He doesn’t want to have more “Security Awareness” but more “User Awareness”. He claims that users are indeed aware of security issues but need to be properly communicated with. He gave Facebook as an example: if you get your login wrong a couple of times, Facebook will send you an email, excusing themselves for the trouble *you* have while logging in. As opposed to the “if the question is stupid, the helpdesk will set you on fire” attitude.

So instead of writing security policies with a lot of rules, he wants us to write policies that take the user’s view into account and make sense for the average user. He also brought up passwords and password policy. Instead of requiring at least 8 characters (which users will read as “exactly 8 characters” anyway), one should encourage a more sensible strategy, i.e. the XKCD one.
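
The arithmetic behind the XKCD argument is quickly written down (my back-of-the-envelope numbers, not the speaker’s):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope entropy behind the XKCD passphrase argument.
These figures are mine, not the speaker's."""
from math import log2

# A truly random 8-character password over ~72 printable symbols:
print(f"random 8 chars:    {8 * log2(72):.0f} bits")    # ~49 bits
# Human-chosen 'Tr0ub4dor&3'-style passwords are far weaker; the
# comic estimates roughly 28 bits.

# Four words drawn at random from a 2048-word list:
print(f"four random words: {4 * log2(2048):.0f} bits")  # 44 bits
```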

He also disliked the metaphors we’re using all the time, i.e. talking about documents or crypto keys. A document is something static that you hold in your hand. It can’t do any harm. But a Word “document” is indeed something different, because there are macros and whatnot. And it’s not a big problem to temporarily give away physical keys. But in the crypto world, it is. And people, he claimed, make those associations when confronted with these terms. Unfortunately, he didn’t have a fix for those long-established metaphors, but he said extra caution needs to be applied when talking in these terms.

Dissonance was another big thing. He claimed that it’s problematic that starting a program and opening a file are the very same action in modern operating systems. If opening a document were triggered differently, then the user could see whether the document they received was indeed a text file or some binary gibberish.

And well, as the talk was titled “Usability”, user interfaces were criticised, too. He mentioned that dialogues are very rude and that showing one is equivalent to physically holding someone until they answer a question. That trains users to avoid and escape dialogues as quickly as possible without even reading them, totally destroying the whole point of a dialogue. So we should only use them in a “life or death” situation where it would be okay to physically hold someone. And well, “user errors are interface errors”.

My favourite usability bug is the whole keysigning story. It’s broken from beginning to end. I think that if we come up with a nice and clean design for a procedure to sign each other’s keys, the Web of Trust model will be used more and more. Right now, it’s an utterly complex process involving different media, and all of that is doomed to be broken.

After that, a guy from the Leibniz-Rechenzentrum talked about internal perpetrators in university data centres. They basically introduced Login IDS, a tool to scrub your logs and make them more administrator friendly. He said that they didn’t watch their logs because it was way too much data. They had around 800 logins per day on their two SSH and two Citrix servers, and nobody really checked who was logging in when. To reduce the amount of log, they check the SSHd log and fire different events, e.g. if someone is logging in for the very first time, or if a user hasn’t logged in at that time of day or from that IP before. That, he claimed, reduced their log volume to 10% of the original. Unfortunately, the Git repo shows a single big and scary Perl file with no license at all 😐
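
A toy reconstruction of the “only log what is novel for this user” idea (mine, not Login IDS, which is a single Perl script):

```python
#!/usr/bin/env python3
"""Toy reconstruction of the novelty heuristics described above
(not Login IDS itself): emit events only for logins that are new
for that user."""
from collections import defaultdict

seen = defaultdict(lambda: {"ips": set(), "hours": set()})

def check(user, ip, hour):
    events = []
    s = seen[user]
    if not s["ips"]:
        events.append(f"{user}: first login ever")
    else:
        if ip not in s["ips"]:
            events.append(f"{user}: first login from {ip}")
        if hour not in s["hours"]:
            events.append(f"{user}: first login at hour {hour}")
    s["ips"].add(ip)
    s["hours"].add(hour)
    return events

# Toy usage: the second, identical login generates no event at all.
for entry in [("alice", "10.0.0.5", 9),
              ("alice", "10.0.0.5", 9),
              ("alice", "203.0.113.7", 3)]:
    for event in check(*entry):
        print(event)
```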

Another somewhat technical talk followed, by Michael Weiser. He talked about security requirements for modern high-performance computing environments, and I couldn’t really follow all the way through. But from what I understood, he wants to be able to execute big jobs and have all the necessary Kerberos or AFS tokens available, because you don’t know how long you’ll have to wait until you can process your data. And well, he showed some solutions (S4U2self) and proposed another one which I didn’t really understand. But apparently everything needs to be very complex, because you cannot get a ticket that’s valid long enough. And instead you get a ticket-granting ticket which empowers you to get all the tickets you want for a basically unlimited amount of time…?

The break was just coming up at the right time so that the caffeine stock could be replenished. It did get used up quite quickly 😉

The first talk after the break introduced HoneypotMe, a technology that enables you to put honeypots on your production machines without risking having them compromised. They basically create tunnels for the ports that are open on the honeypot but not on the production machine, so an attacker would not detect the honeypot that easily. Although it’s kinda nonsensical for a Linux machine to have the MSSQL port open. Interesting technology, although I don’t quite understand why they put the honeypot behind the production machine (network-topology-wise), so that you have to modify the TCP stack on the production machine in order to relay connections to the actual honeypot. Instead, one could put the honeypot in front and relay connections to the production machine. That way, one would probably reduce the plumbing of the TCP layer on the machine that’s meant to serve production purposes.
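
In user space (ignoring the TCP-stack plumbing the real system does), the relaying boils down to something like this sketch (addresses and port are my assumptions):

```python
#!/usr/bin/env python3
"""User-space sketch of the relaying idea (my reading of the talk,
not the HoneypotMe implementation): accept connections on a port
the production machine doesn't serve and pipe the bytes to the
real honeypot."""
import socket
import threading

HONEYPOT = ("10.0.0.99", 1433)   # assumed honeypot address
LISTEN_PORT = 1433               # e.g. MSSQL, not served locally

def pump(src, dst):
    while data := src.recv(4096):
        dst.sendall(data)

server = socket.create_server(("", LISTEN_PORT))
while True:
    client, _ = server.accept()
    upstream = socket.create_connection(HONEYPOT)
    # Shovel bytes in both directions so the attacker talks to the
    # honeypot while believing they talk to the production machine.
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
```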

Another really technical talk was given by a guy from the Research Centre Jülich. It was so technical that I couldn’t follow. Jesus Christ, were the slides packed. The topic was quite interesting though; unfortunate that it was a rather exhausting presentation. He tried to tell us how to manage IPv6, or well, to better damn manage it, because otherwise you’d have loads of trouble in your network. He was referring a lot to the very interesting IPv6 toolkit by THC. He claimed that those attacks were not easy to defend against. But it doesn’t even need an attacker, he said. Windows would be enough to screw up your network, i.e. by somehow configuring Internet Connection Sharing it would send weird Router Advertisements. But I might have gotten that wrong, because he was throwing lots of words and acronyms at us. NDPMon. Rafixd. Ramond. WTF. Fortunately, it was the last talk and we could head off to have some proper beer.

After way too little sleep and ridiculous amounts of very good food, the second day started off with a great talk by a guy from RedTeam Pentesting. He did very interesting research involving URL shortening services and presented us his results, some of which are quite scary. If you’re remotely interested in this topic, you should have a look at the paper once it is available. There is a slightly different version here.

So the basic problem was described as follows: a user wants to send a link to a friend, but the URL is too long, so that email clients break it (well, he didn’t mention which ones, though) or Twitter would simply not accept it… We kinda have to assume that Twitter is a useful thing that people do actually use to transmit links. Anyway, to shorten links, people may use a service that translates the long URL into a short one. And now the problems start.

First of all, the obvious tracking issues arise. The service provider can see who clicks on which links and, even worse, set cookies so that users are identifiable even much later. Apparently, almost all of these services do make use of tracking cookies which last for a couple of years. Interestingly, Google is reported not to make use of tracking technologies in their URL shortening service.

Secondly, you eventually leak a secret which is encoded in the URL you are shortening. And that’s apparently what people do. They use Google Docs or other sensitive webapps that encode important access tokens in the URL, which you are then throwing with both hands at the service provider. He claimed to have found many interesting documents, ranging from “obviously very private photos” through balance sheets of some company to a list of addresses of kindergarten kids. He got a good percentage of private documents, which was really interesting to see.

But it gets worse. He set up a brand new web server listening on a brand new domain (fd0.me) and created URLs which he then shortened using the services. On the page his webserver delivered was a password which no search engine knew back then. The question was: Do URL shortening services leak their data to search engines? Or worse: Do they scan the database for interesting looking URLs themselves? Turns out: Yes and yes. He found his password on search engines and curious administrators in his webserver log.

Other obvious problems include loss of the URL. Apparently people do use shortened URLs in long-lasting things like books. And well, URL shortening services are not necessarily known for being long-lived. Fun fact: his university used to have such a service, but they shut it down…

Another technical issue is speed. Because of the indirection, you have an overhead in time. Google are the winner here again. They serve the fastest.

So yeah that was a very interesting talk which clearly showed the practical risks of such services.

An electronic ID card was introduced in Germany rather recently, and the next guy did some research (sponsored by the Ministry of the Interior) to explore the “eID Online Authentication Network Threat Model, Attacks and Implications”. Nobody in the audience actually used the eID, so he had to tell us what you are supposed to do with it. It is used to authenticate data like your name, address, birthday, or just the fact that you are of legal age. It’s heavily focussed on browser use, so the scenarios are a bank or a web shop. After the website requests eID functions, the browser speaks to the local eID daemon, which then wants to read your eID and communicates with the servers. Turns out that everything seems to be quite well designed, except, well, the browsers. He claims it is possible to man-in-the-middle a connection if one can make a browser terminate a successfully opened connection: after all the TLS handshakes have finished, one would terminate the connection and intercept it, and no further verification is done. A valid attack scenario, though not necessarily easy to get into that position.


There were tiny talks as well. My favourite was Martin John from SAP talking about cross-domain policies. Apparently, standards exist to “enhance” the same-origin policy and enable JavaScript in browsers to talk to different domains. He scanned the internet^tm and found 3% of the domains to have wildcard policies. 50% of those had somewhat sensitive webapps, e.g. with authentication. He closed by recommending CORS for doing cross-domain business.
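
To picture what a “wildcard policy” is: the Flash crossdomain.xml mechanism, for instance, grants read access to every origin when it contains an <allow-access-from domain="*"/> rule. The CORS alternative he recommended can be sketched in a few lines (my example, not the talk’s; the trusted origin and port are made up):

```python
#!/usr/bin/env python3
"""Sketch of the recommended alternative: answer with an explicit
CORS header so only a named origin, not "*", may read the response
cross-domain. My example, not one from the talk."""
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Allow exactly one trusted origin instead of a wildcard:
        self.send_header("Access-Control-Allow-Origin",
                         "https://trusted.example.com")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello cross-domain world\n")

HTTPServer(("", 8080), Handler).serve_forever()
```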

The last two talks were quite interesting. The first one talked about XML Signature Wrapping, a technique that I hadn’t heard of before, mostly because I’m not into XML at all. But it seems that you can sign parts of an XML document, and well, because XML is utterly complex, libraries fail to handle that properly. There are several attacks, including simply reproducing the signed XML tree with different properties and hoping that the parser verifies the correct tree but works on the other one. Simple, huh? And he claimed to have found CVE-2011-1411, a vulnerability in an interesting user of XML: SAML, an authentication protocol based on XML.
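
The shape of such a wrapping attack, as I understand it (a schematic of my own, not an example from the talk; the element names are simplified SAML-ish placeholders):

```xml
<!-- Schematic XML Signature Wrapping (my illustration). The
     Signature references the original assertion by Id, so
     verification succeeds, but a naive consumer processes the
     first assertion it encounters: the attacker's. -->
<Response>
  <Assertion ID="attacker">        <!-- unsigned, gets processed -->
    <Subject>admin</Subject>
  </Assertion>
  <Wrapper>                        <!-- verifier finds this via Id -->
    <Assertion ID="original">      <!-- signed, verified, ignored -->
      <Subject>alice</Subject>
      <Signature>…reference to #original…</Signature>
    </Assertion>
  </Wrapper>
</Response>
```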

Afterwards, I was surprised to see an old tool I was playing with some time ago: Volatility. It gained better Linux support and the speaker showed off some features and explained how to make it support your Linux version. Quite interesting to see that people focus on bringing memory forensics to Linux.

So if you are more interested in the topics, feel free to browse or buy the book which includes all the papers.

This year’s DFN Workshop was much more interesting content-wise and I am glad that it managed to present interesting topics. Again, the setting and the catering were very nice and I hope to be able to attend many more DFN Workshops in the future 🙂

DFN Workshop 2011

I had the opportunity to attend the 18th DFN Workshop (I wonder what that link will look like next year) and since it’s a great event I don’t want you to miss out. Hence I’ll try to sum up the talks and the happenings.

It was the second year for the conference to take place in Hotel Grand Elysee in Hamburg, Germany. I was unable to attend last year, so I didn’t know the venue. But I am impressed. It is very spacious, friendly and well maintained. The technical equipment seems to be great and everything worked really well. I am not too sure whether this is the work of the Hotel or the Linux Magazin though.

After a welcome reception which provided a stock of caffeine that should last all day long, the first talk was given by Dirk Kollberg from Sophos. Actually his boss was supposed to give the talk but cancelled it on short notice so he had to jump in. He basically talked about Scareware and that it was a big business.

He claimed that it used to be cyber graffiti but has nowadays turned into cyber war, and Stuxnet would be a good indicator of that. The newest trend, he said, was that a binary would not only be compressed or encrypted by a packer, but that the packer itself used special techniques like OpenGL functions. That is a problem for the emulators commonly used in antivirus products.

He investigated a big Ukrainian company (Innovative Marketing) that produced a lot of scareware and was in fact very well organised. But apparently not from a security point of view, because he claimed to have retrieved a lot of information via unauthenticated HTTP. And I mean a lot. From the company’s employee address book, through ER diagrams of internal databases, to holiday pictures of the employees. Almost unbelievable. He also discovered a server that malware was distributed from and was able to retrieve the statistics page, which showed how much traffic the page made and which clients with which IPs were connecting. He claimed to have periodically scraped the page to then compile a map with IPs per country. The animation was shown for about 90 scraped days. I was really wondering why he didn’t contact the ISP to shut that thing down. So I asked during Q&A, and he answered that it would have been bad for Sophos because they wouldn’t have been able to gain more insight. That is obviously very selfish: instead of doing good for the whole Internet community, they only care about themselves.

The presentation style was a bit weird indeed. He showed and commented on a pre-made video which lasted for 30 minutes of his 50 minutes of presentation time. I found that rather bold. What’s next? A pre-spoken video which he’ll just play while standing on the stage? Really sad. But the worst part was when he showed private photos of the guy from that Ukrainian company, which he had found by accident. I also told him that I found it disgusting that he pilloried that guy in public and showed off his private life. The people in the audience applauded.

A coffee break made us calm down.

The second talk, about the Smart Grid, was given by Klaus Mueller. Apparently Smart Grids are supposed to be the next big thing in urban power networks. It’s supposed to be a power *and* communications network in which the household or every device in it would be able to communicate, i.e. to report or adapt its power consumption.

He depicted several attack vectors and drew multiple catastrophic scenarios, i.e. what happens if that Smart Grid system is remotely controllable (which it is by design) and also remotely exploitable, so that you could turn off the power supply for a home or a house?
The heart of the Smart Grid system seems to be so-called Smart Meters, which would ultimately replace traditional, mechanical power consumption measuring devices. These Smart Meters would of course be designed to be remotely controllable, because you will have an electric car which you only want to charge when power is at its cheapest price, i.e. at night. Hence, the power supplier would need to tell you when to turn on the car charger, the dishwasher or the washing machine.

Very scary if you ask me. And even worse: apparently you can already get Smart Meters right now! For some weird reason, he didn’t look into them. I would have thought that if he was interested in that, he would buy such a device and open it up. He didn’t even have a good excuse, e.g. no time or legal reasons. He gave a talk about attack scenarios on a system which is already partly deployed, but without actually having had a look at the rolled-out thing. That’s funny…

The next guy talked about Smart Grids as well, but this time more from a privacy point of view, although I was not really convinced. He proposed a scheme to anonymously submit power consumption data. The problem is that the Smart Meter submits power consumption data *very* regularly, i.e. every 15 minutes, and the power supplier must not know exactly how much power was consumed in each and every interval. I follow and highly appreciate that. After all, you can tell exactly when somebody comes back home, turns the TV on, puts something in the fridge, makes food, turns the computer on and off, and goes to bed. Those kinds of profiles are dangerous, albeit very useful for the supplier. Anyway, he committed to submitting aggregated usage data to the supplier, but pulled self-made protocols out of thin air instead of looking into the huge body of cryptographic protocols designed for anonymous or pseudonymous communication. During Q&A I told him that I had the impression the proposed protocols and the crypto had been designed on a Sunday evening in front of the telly, and asked whether he actually had a look at any well-reviewed cryptographic protocols. He hadn’t. Not at all. Instead he made up some random protocols which he thought were sufficient. But of course they were not, which became clearly understood during the Q&A. How can you submit a talk about privacy and propose a protocol without actually looking at existing crypto protocols beforehand?! Weird dude.
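
For contrast, even a textbook building block would have been a start. A minimal sketch (mine, not anything proposed in the talk) of additive masking, where pairwise random masks cancel in the sum, so the supplier learns the aggregate but no individual reading:

```python
#!/usr/bin/env python3
"""Minimal sketch of additive masking for aggregate metering (a
standard textbook idea, not the speaker's protocol). Each pair of
meters shares a random mask; one adds it, the other subtracts it,
so all masks cancel in the sum."""
import secrets

readings = {"meter1": 3, "meter2": 7, "meter3": 5}   # kWh, toy data
meters = list(readings)

masked = dict(readings)
for i, a in enumerate(meters):
    for b in meters[i + 1:]:
        mask = secrets.randbelow(10**6)   # pairwise shared secret
        masked[a] += mask
        masked[b] -= mask

print("masked individual values:", masked)   # reveal nothing useful
print("aggregate:", sum(masked.values()))    # still equals 15
```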

The second-to-last speaker was a bit off, too. He had interesting ideas, though, and I think he was technically competent. But he first talked about home routers getting hacked and becoming part of a botnet, then switched to the PCs behind the router becoming part of a botnet, to then talk about installing an IDS on every home router which not only tells the ISP about potential intrusions but is also controllable by the ISP, i.e. “you look like you’re infected with a bot, let’s throttle your bandwidth”. I didn’t really get the connection between those topics.

But both ideas are a bit weird anyway. Firstly, your ISP will see the exact traffic it’s routing to you anyway. Hence there is no need to install an IDS on your home router, because the ISP will have the information regardless. Plus, their IDS will be much more reliable than some crap IDS deployed on a crap Linux running on crappy hardware. Secondly, having an ISP which is able to control your home router to shape, shut down or otherwise influence your traffic is really off the wall. At least it is today. If he assumes the home router and the PCs behind it to be vulnerable, he can’t trust the home router to deliver proper IDS results anyway. Why would we want the ISP to then act upon potentially malicious data coming from a potentially compromised home router? And well, at least in the paper he submitted he tried to do an authenticated boot (in userspace?!) so that no hacked firmware could be booted, but that would require the software in the firmware to be secure in the first place, otherwise the brilliantly booted device would be hacked during runtime as per the first assumption.

But I was so confused about him talking about different things that the best question I could have asked would have been what he was talking about.

Finally somebody with practical experience talked: Stefan Metzger presented how they handle security incidents at the Leibniz Rechenzentrum. He showed us their formal steps and how they were implemented. At the heart of their system is OSSIM, which aggregates several IDSs and provides a neat interface to search and filter. It wasn’t all too interesting though, mainly because he talked very sleepily.

The day ended with a lot of food, beer and interesting conversations 🙂

The next day started with Joerg Voelker talking about iPhone security. Being interested in mobile security myself, I really looked forward to that talk. However, I was really disappointed. He showed off the more or less cool stuff he could do with his phone, i.e. setting an alarm or reading email… Since it was so cool, everybody had one. Also, he told us what important data is on such a phone. After he had built up his motivation, which lasted very long and showed many pictures of supposedly cool applications, he showed us which security features the iPhone allegedly has, i.e. code signing, hardware and file encryption, or a sandbox for the processes. He read out the list without indicating any problems with those technologies, but eventually said that pretty much everything was broken. It appears that you can jailbreak the thing to make it run unsigned binaries, get a dump of the disk with dd without having to provide the encryption key, or use other methods that render the protection mechanisms useless. But he suffered from a massive cognitive dissonance, because he kept praising the iPhone and how cool it was.
When he mentioned the sandbox, I got suspicious, because I had never heard of such a thing on the iPhone. So I asked him whether he could provide details on that. But he couldn’t. It appears that it’s a policy thing and that your application can very well read and write data outside the directory it is supposed to stay in. Apple just rejects applications when they see them accessing files they shouldn’t.
I also asked him which of the protection mechanisms shipped by Apple actually work. He claimed that, with the exception of the file encryption, none does. I told him that the file encryption is proprietary code and that it appears to be a designed user experience that the user does not need to provide a password for syncing files; hence a master key would decrypt files while syncing.

That leaves me with the impression that an enthusiastic Apple fanboy needed to justify his iPhone usage (hey, it’s cool) without actually having had a deeper look at how stuff works.

A refreshing talk on physical security was given by Liebchen. He presented ways and methods to get into buildings using very simple tools. He is part of the RedTeam Pentesting team and apparently gets hired to break into buildings in order to get hold of machines, data or the network. He told funny stories about how they broke in. Their tools included a “Keilformgleiter“ (a wedge-shaped shim), “Tuerfallennadeln“ (latch needles) and a “Tuerklinkenangel“ (a rod for fishing for door handles from outside).
Once you’re in, you might encounter glass offices, which have the advantage that, since passwords are commonly written on Post-its and stuck to the monitor, you can snoop the passwords by using a big lens!

Peter Sakal presented a so-called “Rapid In-Depth Security Framework” which he developed (or so). He gave an introduction to secure software development and the steps to take in order to have a reasonably secure product. But all of that was very high-level and wasn’t really useful in real life. I think his main point was that he had classified around 300 fuzzers, and if you needed one, you could call him and ask. I expected way more, because he teased us with a framework and introduced the whole fuzzing thing, but didn’t actually deliver any framework. I really wonder how the term “framework” even made it into the title of his talk. Poor guy. He also presented softscheck.com on every slide, which now makes a good entry in my AdBlock list…

Fortunately, Christoph Wegener was a good speaker. He talked about “Cloud Security 2.0” and started off with an introduction to cloud computing. He explained that several different types exist: “Infrastructure as a Service” (IaaS), e.g. EC2 or Dropbox; “Platform as a Service” (PaaS), e.g. AppEngine; or “Software as a Service” (SaaS), e.g. GMail or Twitter. He drew several attack scenarios and kept claiming that you needed to trust the provider if you wanted to do serious stuff. Hence, that was the unspoken conclusion, you must not use cloud services.

Lastly, Sven Gabriel gave a presentation about grid security. Apparently, he supervises boatloads of nodes in a grid, and he showed how he and his team manage to do so. Since I don’t operate 200k nodes myself, I didn’t find it all that relevant, albeit interesting.

To conclude the DFN Workshop: it’s a nice conference with a lot of nice people, but it needs to improve content-wise.

16th DFN CERT Workshop 2009

Again, I had the great pleasure to attend the annual DFN Workshop, which takes place in the Conference Center Hamburg (ever wondered why they haven’t called it “Konferenz Zentrum”?).


It’s more “tieish” than a Chaos Communication Congress, but it’s still comfortable being there. Most people have a strong academic background, so they are used to jeans and pullovers as well 😉

The first person to speak was Dr. Neil Long from Team Cymru, and he spoke about the underground economy. They claim to research and investigate that area and make deals with the criminals. He showed IRC logs most of the time and it was quite funny to see how the people interact with each other. They actually do speak 1337, and even I had a tough time reading their conversations 😉 He explained in great detail how the underground is organised. He claimed that there are specialists for everything, everywhere: programmers, exploit writers, web hosts, credit card stealers, yadda yadda. Everything has its price, and that price is paid through various online money transferring systems.

The next guy talked about Exploit Toolkits for the Web. He named various kits, like MPack, IcePack, NeoSploit, FirePack or UniquePack. They basically allow you to create a drive-by download site and deploy a given payload. The programs themselves are split into two parts: a server part, which actually exploits a browser and makes it download and execute a loader program, which in turn downloads the second stage – the real malware to be run on the victim’s machine. The other part is a binary to create that first-stage program. I spent some time searching for those toolkits and downloaded some of them. That required me to learn some Russian 😉
This first-stage part opens an interesting attack vector against the wannabe hackers: many Web Exploit Toolkits were infected with malware themselves. Because you have to run a strange-smelling binary to create your first-stage executable, you might run foreign malware yourself. I actually don’t understand why this loader thing is such a big issue. I assume you could deploy your malware in the first place without having it loaded through a staging program.

The next interesting talk was given by the smart guys from RedTeam Pentesting, which is a pretty interesting company actually. Former students founded the company and they do professional pentesting. I have to admit that I envy them a little. It must be a great job with a lot of interesting stuff to see. Anyway, they talked about JBoss insecurities. It seems that JBoss comes with a development configuration and people don’t change it to production values, but blindly expose their server to the network. It turns out that you can get shell access in nearly a handful of ways, even if a smart administrator has locked some of them down. Also, many corporate or governmental sites are driven by a JBoss server and, which is the interesting part, have a weak configuration. They have an interesting statistic that shows that only 8% of the JBoss servers out there are reasonably secure.

How secure is the JBoss Web?

I was actually bored by just one talk. It was about grid firewalls. While the topic is interesting in general, the guy made me fall asleep 🙁 That’s a pity, because I believe he knew what he was talking about and had valuable information to deliver, especially given his strong emphasis on practical problems. Maybe he can get his talk accepted next year and improve his presentation skills.

After the first day, we visited the Groeninger Braukeller, which was a real blast! They have one of the finest beers I know of. Also, the food in there is delicious. It’s a perfect atmosphere to get together and discuss the talks you’ve just listened to. I also took the chance to meet old friends whom I hadn’t seen for a while.

Probably due to the massive amount of food and beer, I couldn’t sleep well that night, and I was thus very tired on the second day. I listened to the talks, but I couldn’t make it to the ModSecurity workshop 🙁 It’s really annoying, because I actually wanted to attend that session! I use ModSecurity on some projects and I think it’s a good tool. A real-life-relevant workshop would have been great.

So, if you have nothing else to do on 2009-02-09, consider coming to Hamburg and enjoy the 17th DFN Workshop!

This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.