Archive for the ‘life’ Category

Attending the DANTE Tagung in Karlsruhe

Sunday, September 21st, 2014

Much to my surprise, the DANTE Tagung (the meeting of the German-speaking TeX users group) took place in Karlsruhe, Germany. It appears to be the main gathering of the LaTeX (and related) community.

Besides pub-based events in the evenings, they also had talks. I knew some people on the program by name and was eager to finally see them IRL. One of those was Markus Kohm, of KOMA-Script fame. He presented new or lesser-used features. One of those was scrlayer, which is capable of adding layers to a page, i.e. background or foreground layers. So you can add, e.g., a logo or a document version to every page, more or less like this:

\DeclareNewLayer[
    background,
    topmargin,
    contents={\hfill
        \includegraphics[width=3cm, height=2cm]
                        {example-image}}%
]{Logo}
\AddLayersToPageStyle{@everystyle@}{Logo}

You could do that with fancyhdr, but then you’d only get the logo depending on your page style. The scrlayer solution is applied always. And it’s more KOMAesque, I guess.

The next talk I attended was given by Uwe Ziegenhagen on new or exciting CTAN packages.
Among the packages he presented was ctable. It can be used to typeset tables and figures. It uses a favourite package of mine, tabularx. The main advantage seems to be that you can use footnotes in tables, which is otherwise hard to achieve.

He also presented easy-todo which provides “to-do notes throughout a document, and will provide an index of things to do”. I usually use todonotes which seems similar enough, so I don’t really plan on changing that. The difference seems to be that easy-todo offers more fine-grained control over what goes into the list of todos to be printed out.

The flowchart package seems to allow drawing flowcharts with TikZ more easily, especially following the “IBM Flowcharting Template”. The flowcharts I have drawn so far were easy enough and I don’t think this package would have helped me, but it is certain that the whole process of drawing with TikZ needs to be made much easier…

Herbert Voß went on to talk about ConTeXt, which I had already discovered and was pleased by. From my naïve understanding, it is a “different” macro set for the TeX engine. So it’s not PDFTeX, LuaLaTeX, or XeTeX, but ConTeXt. It is distributed with your favourite TeXLive distribution, so it should be deployed on quite a few installations. However, the best way to get ConTeXt, he said, was to fire up the following command:

rsync -rlpt rsync://contextgarden.net/minimals/setup/.../bin .

Wow. rsync. For binary software distribution. Is that the pinnacle of apps? In 2014? Rsync?! What is this? 1997? Quite an effective method, but I doubt it’s the most efficient. Let alone security-wise.

Overall, ConTeXt is described as being a bit of an alien in the TeX world. The relationship with TeXLive is complicated, at best, and conventions are not congruent which causes a multitude of complications when trying to install, run, extend, or maintain both LaTeX and ConTeXt.


The next gathering will take place in the very north of Germany. A lovely place, but I doubt that I’ll be attending. The crowd is nice, but it probably won’t be interesting for me, talk-wise. I attribute that partly to my inability to enjoy coding TeX or LaTeX, but also to the arrogance I felt from the community. For example, people were mocking others’ use cases, disregarding them as irrelevant. So you might not be able to talk TeX with those people, but they are nice anyway.

Getting cheaper Bahn fares via external services

Thursday, July 17th, 2014

Imagine you want to go from some random place in Germany to the capital. Maybe because it is LinuxTag. We learned that you can try to apply international fares. In the case of Berlin, the Netzplan for Berlin indicates that several candidate train stations exist: Rzepin, Kostrzyn, or Szczecin. However, we’re not going to explore that now.

Instead, we have a look at other (third-party) offers. Firstly, you can always get a Veranstaltungsticket. It’s a ticket priced at 99 EUR for a return trip. The flexible version costs 139 EUR and allows you to take any train, instead of fixed ones. Is that a good price? Let’s check the regular price for the route Karlsruhe ←→ Berlin.

The regular price is 142 EUR. Per leg. So the return trip would cost a whopping 284 EUR. Let’s assume you have a BahnCard 50. It costs 255 EUR, and before you get it, you’d better do the math on whether it’s worth it. Anyway, if you have that card, the price halves, and we have to pay 71 EUR per leg or 142 EUR for the return trip. That ticket is fully flexible, so any train can be taken. The equivalent Veranstaltungsticket costs 139 EUR, so that’s a saving of 3 EUR, or 2%.

Where to get that Veranstaltungsticket, you ask? Well, it turns out LinuxTag offered it itself. You call the Bahn’s phone number and state your “code”. In the LinuxTag case it was “STATION Berlin”. It probably restricts your destination options to Berlin. More general codes are easily found on the Web. Try “Finanz Informatik”, “TMF”, or “DOAG”.

I don’t expect you to be impressed by saving 2%. Another option is to use bus search engines, such as busliniensuche.de, fernbusse.de, or fromatob.de. You need to be a bit lucky, though, as only a few of those tickets are available. However, it’s worth a shot as they cost only 29 EUR.

That saves you 80% compared to the original 142 EUR, or 60% compared to the 71 EUR with the BC 50. That’s quite nice already. But we can do better. There is the “Fernweh-Ticket” which is only available from LTUR. It costs 26 EUR and you need to poll their Web interface every so often to get a chance to find a ticket. I intended to write a crawler, but I have not gotten around to it yet…

With such a ticket you save almost 82% compared to the regular price, or 63% compared to the BahnCard 50 price. Sweet! Have I missed any offer that’s worth mentioning?

Finding (more) cheap flights with Kayak

Monday, July 7th, 2014

People who know me know about my weakness when it comes to travel itineraries. I spend hours and hours, sometimes days or even weeks, finding the optimal itinerary. As such, when I was looking for flights to the GNOME.Asia Summit, I had an argument over the cheapest and most comfortable flight. When I was told that a cheaper and better flight existed that I hadn’t found, I refused to accept it, as I saw my pride endangered. As it turned out, there were more flights than I knew of.

Kayak seems to give you different results depending on what site you actually open. I was surprised to learn that.

Here is the evidence: (you probably have to open that with a wide monitor or scroll within the image)
Kayak per country

In the screenshot, you can see that on the left-hand side kayak.de found 1085 flights. It also found the cheapest one, rated at 614 EUR. That flight, marked with the purple “1”, was also found by kayak.com and kayak.ie at different, albeit similar, prices. In any case, that flight has a very long layover. The next best flight kayak.de returned was rated at 687 EUR. The other two Kayaks have that flight, marked with the green “3”, at around 730 EUR, almost 7% more than on the German site. The German Kayak does not have the Etihad flight, marked with the blueish “2”, at 629 EUR as the Irish one does! The American Kayak has that flight at 731 EUR, a whopping 17% difference. I actually haven’t checked whether the price difference persists when actually booking the flights. However, I couldn’t even have booked the Etihad flight if I hadn’t checked other Kayak versions.

Lessons learnt: Checking one Kayak is not enough to find all good flights.

In addition to Kayak, I like to use the ITA Travel Matrix as it allows me to greatly customise the queries. It also has a much saner interface than Kayak. The prices are not very accurate, though, as far as I could tell from my experiments. But it can give you an idea of which connections are cheap, so you can feed that information into, e.g., Kayak. Or into that other Web site that I use: Skyscanner. It allows you to list flights for a whole month or for a whole country instead of a specific airport.

What tools do you use to check for flights?

Applying international Bahn travel tricks to save money for tickets

Thursday, November 21st, 2013

Suppose you are sick of Tanzverbot and you want to go from Karlsruhe to Hamburg. As a proper German you’d think of the Bahn first, although Germany has started to allow long-distance travel by bus, which is cheap and surprisingly comfortable. My favourite bus search engine is busliniensuche.de.

Anyway, you opted for the Bahn and you search for a connection; the result is a one-way trip for 40 Euro. Not too bad:
bahn-ka-hh-40

But maybe we can do better. If we travel from Switzerland, we can save a whopping 0.05 Euro!
bahn-basel-hh-40
Amazing, right? Basel SBB is the first station after the German border and it allows for international fares to be applied. Interestingly, special offers exist which apparently make the same travel, plus a considerable chunk on top, cheaper.

But we can do better. Instead of travelling from Switzerland to Germany, we can travel from Germany to Denmark. To determine the first station after the German border, use the Netzplan for the IC routes and then check the local map, i.e. Schleswig-Holstein. You will find Padborg as the first non-German station. If you travel from Karlsruhe to Padborg, you save 17.5%:
bahn-ka-padborg-33

Sometimes you can save by taking a global ticket crossing two borders. This is, however, not the case for us:
bahn-basel-padborg-49

In case you were wondering whether it’s the very same train and route all the time: Yes it is. Feel free to look up the CNL 472.
db-cnl-472

I hope you can use these tips to book a cheaper travel.
Do you know any ways to “optimise” your Bahn ticket?

RIP Atul Chitnis

Tuesday, June 4th, 2013

Atul

I am sad to read that Atul Chitnis passed away at the age of 51. I met him several times during FOSS.in and it was a pleasure to meet the driving force behind that conference. While certainly being a controversial figure in the Free Software world, he did a lot of good things for our communities and ecosystems. Let’s hope the FOSS.in team takes the heritage and continues to make great events for India.

Talks at FOSS.in 2012

Tuesday, January 15th, 2013

Let me recap the talks held at FOSS.in a bit. It’s a bit late, I’m sorry for that, but the festive season was a bit demanding, timewise.

FOSS.IN

The conference started off smoothly with a nice Indian breakfast, coffee and good chats. The introductory talk by Atul went well and was not nearly as long as we expected it to be. Atul was obviously not as energetic as he used to be. I think he has grown old and visibly suffers from his illness. So a big round of applause and a bigger bucket of respect for pulling this event off nonetheless.

The first talk of the day was given by Gopal, who talked about “Big Data”. He started off with a definition and with the claim that what is considered big data now is likely not to be considered big data in the future. Think of 1GB of RAM: everybody has 1GB or more in their laptop now, but 10 years ago that would not have been the case. The only concept, he said, that survived was “Divide and Conquer”: break a problem up into smaller subproblems which can then be run on many processing units in parallel. Hence distributed data and distributed processing are very important.

The prime example of big data was calculating the count of unique items in a large set, i.e. comparing the vocabulary of two books. You split the books up into words to find the single words and then count each of them to find out how often it is present. You could also preprocess the words with a “stemming filter” to get rid of forms and flexions. If your data is big enough, “sort | uniq” won’t do it, because “sort” would use up all your memory. To do it successfully anyway, you can split your data up, do the sorting and then merge the sort results. He then explained how to split up various operations and merge them together. Basically, it is important to split and merge every operation possible in order to scale well. And that is exactly what “Hadoop” does. In fact, it has several components that facilitate dealing with all that: “splitter”, “mapper”, “combiner”, “partitioner”, “shuffle fetch” and a “reducer”. However, getting data into Hadoop was painful, he said.
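The split/sort/merge idea he described can be mimicked in plain shell, which may make it more tangible. The file names and the tiny sample text below are made up for the example:

```shell
# Divide and conquer on the command line: split the word list, sort the
# chunks independently (this is the part that could run in parallel),
# then merge the sorted chunks and count. A one-line stand-in for a book:
printf 'to be or not to be\n' > book.txt
tr -s '[:space:]' '\n' < book.txt | grep -v '^$' > words.txt  # one word per line
split -l 2 words.txt chunk.                   # divide into chunks
for c in chunk.*; do sort "$c" -o "$c"; done  # conquer: sort each chunk
sort -m chunk.* | uniq -c | sort -rn > counts.txt  # merge, then count
cat counts.txt
```

With real data, the per-chunk sorts would run on different machines; `sort -m` only merges already-sorted inputs, so no single step needs the whole data set in memory.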

Lydia from KDE talked about “Wikidata – The foundation to build your apps on“. She introduced her talk with a problem: “Which drugs are approved for pregnancy in the US?”. She said that Wikipedia couldn’t really answer this question easily, because maintaining such a list would be manual labour which is not really fascinating. One would have to walk through every article about a drug, try to find the information whether it was approved or not, and then condense it into a list. She was aiming at, I guess, the fact that Wikipedia doesn’t really store semantic data.

Wikidata wants to be similar to Wikimedia Commons, but for the data of the world’s knowledge. It seems to be that missing semantic storage, one which can also store information about the sources that confirm the data’s correctness. Something like the GDP of a country or the length of a river would be a prime example of a use case for Wikidata. Eventually this will increase the number of editors, because the barrier to contributing will be lowered significantly. Also, every Wikipedia language version can profit immediately, because it can be hooked up easily.

I just had a quick peek at Drepper’s workshop on C++11, because it was very packed. Surprisingly many people wanted to listen to what he had to say about the new C++. Since I was not really present I can’t really provide details on the contents.

Lenny talked about politics in Free Software projects. As the title was “Pushing Big Changes“, the talk revolved around issues around acquiring and convincing people to share your vision and have your project accepted by the general public. He claimed that the Internet is full of haters and that one needed a thick skin to survive the flames on the Internet. Very thick in fact.

An interesting point he made was that connections matter, i.e. personal relationships with relevant people and being able to influence them. And he didn’t like that. That, and the talk in general, was interesting, because I haven’t really heard anyone talk about this so openly. Usually everybody praises Free Software communities as being very open, egalitarian and what not. But not only rumour has it that this is rarely the case. Anyway, the bigger part of the talk was quite systemd-centric, and I don’t think it’s applicable to many other projects.

A somewhat unusual talk was given by Ben & Daniel, talking about how to really use Puppet. They do it at Mozilla at a very large scale and wanted to share some wisdom they gained.

They had a few points to make. Firstly: do not store business data (as opposed to business logic) in Puppet modules. Secondly: put data in “PuppetDB” or use “Hiera”. Thirdly: reuse modules from either the “PuppetForge” or GitHub. About writing your own modules, they recommended writing sufficiently generic code with parametrised classes to support many more configurations. Also, they want you to stick to the syntax style guide.

Sebastian, of KDE fame, talked about KDE Plasma and how to make it succeed on mobile targets such as mobile phones or tablets. Not knowing “Plasma” at all, I was interested to learn that Plasma is “a technology that makes it easy to build modern user interfaces”. He briefly mentioned some challenges, such as running on multiple devices with or without touchscreens. He imagines the operating system to be provided by Mer, with Plasma running on top. He said that there is a range of devices that are supported at the moment. The developer story is also quite good with “Plasma Quick” and the Mer SDK.

He tried to have devices manufactured by Chinese companies and told some stories about the problems involved. One of them being that “Freedom” (probably as in Software Freedom) was not in their vocabulary. So getting free drivers was a difficult, if not impossible, task. Another issue was the size of orders: you can’t demand anything with an order of 10,000 units, he said. But they seem to be able to pull it off anyway! I’m very eager to see their devices.

The last talk, which was the day’s keynote, went quite well and basically brought art and code together. The speaker introduced us to Processing, an interesting programming IDE for producing mainly visual arts. He praised how Free Software (although he referred to it as Open Source) makes everybody more creative and how the availability of art has transformed the art landscape. It was interesting to see how he uses computers to express his creativity; unfortunately, his time was up quite quickly.

Drepper, giving quite a few talks, also gave one about parallel programming. The genesis of the problem was the introduction of multiple processors into a machine. It got worse when threads were introduced, which share the address space. That allows for easy data sharing between threads, but also makes corrupting other threads very, very easy, and in subtle ways that you would not anticipate: for example, all threads share one working directory, and if one thread changes it, it is changed for all threads of the process. Interestingly, he said that threads are not something the end user shall use, but rather a tool for the system to exploit parallelism. The system shall provide better means for the user to use parallelism.

He praised Haskell for providing very good means for using threads. It is absolutely side-effect free, and even stateful stuff is modelled in a side-effect-free way. So he claimed that it is a good research tool, but not as efficient as C or C++. He also praised futures (with OpenMP), where the user doesn’t have to care about the details of the threading but leaves it up to the system. You only specify what can run in parallel and the system does it for you. Finally, he introduced the C++11 features that help with parallelism. There are various constructs in the language that make it easy to use futures, including anonymous functions and modelling thread dependencies. I didn’t like them all too much, but I think it’s cool that the language allows you to use these features.

There was another talk from Mozilla’s IT, given by Shyam, who talked about DNSSEC. He started with a nice introduction to DNSSEC. It was a bit too much, I feel, but it’s quite a complicated topic, so I appreciate all the effort he made. The main point I took away was not to push the DS record too soon, because if you don’t have signed zones yet, resolvers won’t trust your answers and your domain is offline.

Olivier talked about GStreamer 1.0. He introduced the GStreamer technology: its concept revolves around elements, which are put into bins, and elements have source and sink pads that you connect. New challenges were DSPs and different processing units like GPUs. The new 1.0 includes various new features, such as better locking support that makes things easier for languages like Python, and better memory management with GstBufferPool.

I couldn’t really follow the rest of the talks as I was giving one myself and was busy talking to people afterwards. It’s really amazing how interested people are and to see the angle they ask questions from.

OpenPGP Key Rollover from D3492A2A to 1BF98D6D

Monday, September 24th, 2012

Public Service Announcement: I am deprecating my old key 0xD3492A2A in favour of a newly generated key 0x1BF98D6D. I have uploaded a copy here. It is signed with my old key, too. FTR: it involved exporting the old secret key and the new public key to a temporary directory, changing the expiry date of the old key, signing the new key and importing the new, signed key *sigh*. It’s only been 11 years since --allow-expired-keys was discussed.

The new fingerprint is:

$ gpg --fingerprint --list-key 1BF98D6D
pub   3072D/1BF98D6D 2012-05-10 [expires: 2017-05-09]
      Key fingerprint = FF52 DA33 C025 B1E0 B910  92FC 1C34 19BF 1BF9 8D6D
uid                  Tobias Mueller <tobias.mueller2@mail.dcu.ie>
uid                  Tobias Mueller <4tmuelle@informatik.uni-hamburg.de>
sub   3072g/3B76E8B3 2012-05-10 [expires: 2017-05-09]
$

It’s 2012 already and apparently there ain’t such a thing as best practices for rolling over your OpenPGP key. I’m thinking about something that discusses whether or how to

  1. create a new key
  2. add the old UIDs to the new key
  3. sign the new key with the old one
  4. sign the old key with the new one
  5. probably sign the new key with other secret keys in your keyring
  6. prepare a small text file stating the rollover
  7. sign that, so that you can upload it to the public
  8. inform people who have signed your old key that a new one is in place

I do think the steps mentioned make sense and should be implemented to ease the key transition. I started with something very simple; you can find the code here. You are welcome to discuss what’s needed in order to properly move from one key to another.
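For illustration, steps 1, 3, 4, 6 and 7 can be sketched with a modern GnuPG (2.1 or later). The names, key parameters and file names are all made up for the example, and a throw-away GNUPGHOME is used so the real keyring is untouched:

```shell
# Toy walk-through in a throw-away keyring; everything here is illustrative.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"

# The "old" key stands in for the key being deprecated; then step 1:
# create the new key.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Old Key <old@example.org>" rsa3072 sign 5y
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "New Key <new@example.org>" rsa3072 sign 5y
old=$(gpg --with-colons --list-keys old@example.org | awk -F: '/^fpr/{print $10; exit}')
new=$(gpg --with-colons --list-keys new@example.org | awk -F: '/^fpr/{print $10; exit}')

# Steps 3 and 4: cross-sign the two keys.
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --default-key "$old" --quick-sign-key "$new"
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --default-key "$new" --quick-sign-key "$old"

# Steps 6 and 7: clear-sign a transition statement with both keys.
echo "I am rolling over from $old to $new." > transition.txt
gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    -u "$old" -u "$new" --clearsign transition.txt

# Export the new, now cross-signed key; step 8 is mailing it to the
# people who signed the old key (and pushing it to the keyservers).
gpg --armor --export "$new" > new-key.asc
```
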

19th DFN Workshop 2012

Thursday, March 1st, 2012

The 19th DFN Workshop happened again *yay* and I was lucky enough to be able to take part :)

After last year we all knew the venue and it’s great. The hotel is very professional and the receptions are very good. The conference room itself is very spacious and well equipped for having a couple of hundred people there.

So after a first caffeine infusion the conference started and the first guy gave the keynote. Tom Vogt (from Calitarus GmbH) talked about Security and Usability and he made some interesting points. He doesn’t want to have more “Security Awareness” but more “User Awareness”. He claims that users are indeed aware of security issues but need to be properly communicated with. He gave Facebook as an example: If you log in wrongly a couple of times, Facebook will send you an email, excusing themselves for the troubles *you* have while logging in. As opposed to the “if the question is stupid, the helpdesk will set you on fire” attitude.

So instead of writing security policies with a lot of rules, he wants us to write policies that take the user’s view into account and make sense for the average user. He also brought up passwords and password policy. Instead of requiring at least 8 characters (which the user will read as “exactly 8 characters” anyway), one should encourage a more sensible strategy, i.e. the XKCD one.

He also disliked the metaphors we’re using all the time, i.e. we’re talking about documents or crypto keys. A document is something static that you hold in your hand. It can’t do any harm. But a Word-“document” is indeed something different, because there are macros and whatnot. And it’s not a big problem to temporarily give away physical keys. But in the crypto world, it is. And people, he claimed, would make those associations when confronted with these terms. Unfortunately, he didn’t have a fix for those long-term used metaphors but he said extra caution needed to be applied when talking in these terms.

Dissonance was another big thing. He claimed that it’s problematic that starting a program and opening a file are the very same action in modern operating systems. If opening a document were triggered differently, then the user could see whether the document they received was indeed a text file or some binary gibberish.

And well, as the talk was titled “Usability” user interfaces were criticised, too. He mentioned that dialogues were very rude and that it was equal to holding someone until they answer a question. That trained the user to avoid and escape the dialogue as quickly as possible without even reading them, totally destroying the whole point of a dialogue. So we should only use them in a “life or death” situation where it would be okay to physically hold someone. And well, “user errors are interface errors”.

My favourite usability bug is the whole Keysigning story. It’s broken from beginning to end. I think that if we come up with a nice and clean design of a procedure to sign each others keys, the Web of Trust model will be used more and more. Right now, it’s an utterly complex process involving different media and all that is doomed to be broken.

After that, a guy from the Leibniz-Rechenzentrum talked about internal perpetrators in university data centres. They basically introduced Login IDS, a tool to scrub your logs and make them more administration-friendly. He said that they didn’t watch their logs because it was way too much data. They had around 800 logins per day on their two SSH and two Citrix servers and nobody really checked when somebody was logging in. To reduce the amount of log data, they check the SSHd log and fire different events, e.g. if someone logs in for the very first time, or if a user hasn’t logged in at that time of day or from that IP before. That, he claimed, reduced their log volume to 10% of the original. Unfortunately, the git repo shows a single big and scary Perl file with no license at all :-|
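The core idea, reduced to a toy, fits in a few lines of awk: only report a login when its user/IP pair has not been seen before. The log lines below are made up and much simpler than what Login IDS actually handles:

```shell
# Fake sshd log: three accepted logins, two distinct user/IP pairs.
cat > auth.log <<'EOF'
Jan  1 10:00:00 host sshd[1]: Accepted password for alice from 10.0.0.1 port 4242 ssh2
Jan  1 11:00:00 host sshd[2]: Accepted password for alice from 10.0.0.1 port 4243 ssh2
Jan  1 12:00:00 host sshd[3]: Accepted password for alice from 10.9.9.9 port 4244 ssh2
EOF
# $9 is the user, $11 the source IP; print each pair only on first sight.
awk '/Accepted/ { pair = $9 ":" $11
      if (!(pair in seen)) { seen[pair] = 1; print "new login:", pair } }' \
    auth.log > new-logins.txt
cat new-logins.txt
```

Three logins collapse to two events here; at 800 logins a day, that kind of deduplication is where the claimed 90% reduction would come from.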

Another somewhat technical talk followed, by Michael Weiser. He talked about security requirements for modern high-performance computing environments, and I couldn’t really follow all the way through. But from what I’ve understood, he wants to be able to execute big jobs and have all the necessary Kerberos or AFS tokens, because you don’t know how long you’ll have to wait until you can process your data. And well, he showed some solutions (S4U2self) and proposed another one which I didn’t really understand. But apparently everything needs to be very complex, because you cannot get a ticket that’s valid long enough; instead you get a ticket-granting ticket which empowers you to get all the tickets you want for a basically unlimited amount of time…?

The break was just coming up at the right time so that the caffeine stock could be replenished. It did get used up quite quickly ;-)

The first talk after the break introduced HoneypotMe, a technology that enables you to put honeypots on your production machines without risking having them compromised. It basically creates a tunnel for the ports that are open on the honeypot but not on the production machine. So an attacker would not detect the honeypot that easily, although it’s kinda nonsensical for a Linux machine to have the MSSQL port open. Interesting technology. I don’t quite understand, though, why they put the honeypot behind the production machine (network-topology-wise), so that you have to modify the TCP stack on the production machine in order to relay connections to the actual honeypot. Instead, one could put the honeypot in front and relay connections to the production machine. That way, one would probably avoid plumbing the TCP layer on the machine that’s meant to serve production purposes.

Another really technical talk was given by a guy from Forschungszentrum Jülich. It was so technical that I couldn’t follow. Jesus christ, were the slides packed. The topic was quite interesting though; unfortunate that it was a rather exhausting presentation. He tried to tell us how to manage IPv6, or well, to better damn manage it, because otherwise you’ll have loads of trouble in your network. He was referring a lot to the very interesting IPv6 toolkit by THC. He claimed that those attacks were not easy to defend against. But it doesn’t even need an attacker, he said: Windows would be enough to screw up your network, e.g. by somehow configuring Internet Connection Sharing so that it sends weird Router Advertisements. But I might have gotten that wrong, because he was throwing lots of words and acronyms at us. NDPMon. Rafixd. Ramond. WTF. Fortunately, it was the last talk and we could head off to have some proper beer.

After way too little sleep and ridiculous amounts of very good food, the second day started off with a great talk by a guy from RedTeam Pentesting. He did very interesting research involving URL shortening services and presented us his results, some of which are quite scary. If you’re remotely interested in this topic, you should have a look at the paper once it is available. There is a slightly different version here.

So the basic problem was described as follows: a user wants to send a link to a friend, but the URL is too long, so that email clients break it (well, he didn’t mention which, though) or Twitter would simply not accept it… We kinda have to assume that Twitter is a useful thing that people actually use to transmit links. Anyway, to shorten links, people may use a service that translates the long URL into a short one. And now the problems start.

First of all, the obvious tracking issues arise. The service provider can see who clicks on which links and, even worse, set cookies so that users are identifiable even much later. Apparently, almost all of these services make use of tracking cookies which last for a couple of years. Interestingly, Google is reported not to make use of tracking technologies in their URL shortening service.

Secondly, you eventually leak a secret which is encoded in the URL you are shortening. And that’s apparently what people do. They use Google Docs or other sensitive webapps that encode important access tokens in the URL, which you are throwing with both hands at the service provider. He claimed to have found many interesting documents, ranging from “obviously very private photos” through balance sheets of some company to a list of addresses of kindergarten kids. He got a good percentage of private documents, which was really interesting to see.

But it gets worse. He set up a brand new web server listening on a brand new domain (fd0.me) and created URLs which he then shortened using the services. On the page his webserver delivered was a password which no search engine knew back then. The question was: Do URL shortening services leak their data to search engines? Or worse: Do they scan the database for interesting looking URLs themselves? Turns out: Yes and yes. He found his password on search engines and curious administrators in his webserver log.

Other obvious problems include loss of URLs. Apparently people use shortened URLs in long-lasting things like books. And well, URL shortening services are not necessarily known for being long-lived. Fun fact: his university used to have such a service, but they shut it down…

Another technical issue is speed. Because of the indirection, you have a time overhead. Google is the winner here again: they serve the fastest.

So yeah that was a very interesting talk which clearly showed the practical risks of such services.

An electronic ID card was introduced in Germany rather recently, and the next speaker did some research (sponsored by the Ministry of the Interior) to explore the “eID Online Authentication Network Threat Model, Attacks and Implications”. Nobody in the audience actually used the eID, so he had to tell us what you are supposed to do with it. It is used to authenticate data like your name, address, birthday, or just the fact that you are of legal age. It’s heavily focussed on browser use, so the scenarios are a bank or a web shop. After the website requests eID functions, the browser speaks to the local eID daemon, which then wants to read your eID and communicates with the servers. It turns out that everything seems to be quite well designed, except, well, the browsers. So he claims it is possible to man-in-the-middle a connection if one can make a browser terminate a successfully opened connection: after all the TLS handshakes are finished, one would terminate the connection and intercept it, and no further verification is done. A valid attack scenario, though not necessarily an easy position to be in.


There were tiny talks as well. My favourite was Martin John from SAP talking about cross-domain policies. Apparently, standards exist to “enhance” the same-origin policy and enable JavaScript in browsers to talk to different domains. He scanned the Internet™ and found 3% of the domains to have wildcard policies. 50% of those had somewhat sensitive webapps, i.e. authentication. He closed by recommending CORS for cross-domain stuff.
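The recommendation boils down to replacing a wildcard policy with an explicit origin allowlist. A hedged sketch of the server-side check (the origins are made up, and real frameworks ship this as middleware):

```python
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    """Return CORS response headers: echo the origin back only if it
    is on the allowlist; never answer with a blanket wildcard."""
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            # caches must not reuse this response for a different origin
            "Vary": "Origin",
        }
    return {}  # no CORS headers: the browser blocks the cross-origin read

print(cors_headers("https://app.example.com"))
print(cors_headers("https://evil.example.net"))
```

The important property is that an unknown origin gets no `Access-Control-Allow-Origin` header at all, so the browser refuses to hand the response to the foreign script.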

The last two talks were quite interesting. The first one was about XML Signature Wrapping, a technique I hadn't heard of before, mostly because I'm not into XML at all. But it seems that you can sign parts of an XML document and, well, because XML is utterly complex, libraries fail to handle that properly. There are several attacks, including simply duplicating the signed XML subtree with different properties and hoping that the parser verifies the correct tree but operates on the other. Simple, huh? He claimed to have found CVE-2011-1411, a vulnerability in an interesting user of XML: SAML, an authentication protocol based on XML.
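As I understand it, the core of the attack looks roughly like this hypothetical, heavily simplified SAML-style document: the signature still verifies, because it points at the untouched original by ID, while the application logic processes the unsigned copy.

```xml
<!-- Simplified sketch, element names are illustrative only. -->
<Response>
  <!-- The attacker moves the signed original into a wrapper element
       that the application logic ignores... -->
  <Wrapper>
    <Assertion Id="orig"><user>alice</user></Assertion>
  </Wrapper>
  <!-- ...and adds an unsigned evil twin that the application
       actually acts upon. -->
  <Assertion Id="evil"><user>admin</user></Assertion>
  <Signature>
    <!-- Still verifies: the reference resolves to the untouched copy. -->
    <Reference URI="#orig"/>
  </Signature>
</Response>
```

The mismatch arises because signature verification and business logic locate "the" assertion by different rules (ID reference vs. document position), and nothing forces them to agree.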

Afterwards, I was surprised to see an old tool I was playing with some time ago: Volatility. It gained better Linux support and the speaker showed off some features and explained how to make it support your Linux version. Quite interesting to see that people focus on bringing memory forensics to Linux.

So if you are more interested in the topics, feel free to browse or buy the book which includes all the papers.

This year’s DFN Workshop was much more interesting content-wise, and I am glad that it managed to present interesting topics. Again, the setting and the catering were very nice and I hope to be able to attend many more DFN Workshops in the future :-)

Ekoparty 2011

Tuesday, November 8th, 2011

I was invited to Ekoparty in Buenos Aires, Argentina. It all went very quickly: when I was accepted for my talk on Virtualised USB Fuzzing using QEMU and Scapy, I couldn’t read email very well. I was abroad and had only a replacement laptop at hand (which we got at the MeeGo Summit in Dublin) because my laptop had broken down :-( And on top of that I wasn’t very well connected. Anyway, I got notice exactly two weeks before the conference, and I actually had other plans anyway. But since it was in Argentina and I hadn’t been there yet, I was very eager to go.

I flew from Hamburg via Amsterdam and São Paulo to Buenos Aires, and back from Buenos Aires via Charles de Gaulle to Berlin. After my first flight I had a good break at Schiphol, but when I wanted to board the next flight, I was denied at first. After a couple of minutes, some officials came and I was interrogated, because my itinerary looked suspicious, they said. So I was questioned and searched, and the information I gave was promptly checked by the woman and her smartphone. Weird stuff. The next flights and airports were fortunately much better.

The very first day of the conference was reserved for the keynote and workshops. Unfortunately, the workshops were held in Spanish only, so I couldn’t really follow anything. But I still attended some folks playing around with a USRP. It was interesting enough despite the Spanish. They decoded normal FM radio, pager messages and other (analogue) radio signals flying through the ether. The keynote was held in Spanish, too, but two translators simultaneously translated the talk into English. It’s the first time that *I* was the one needing a translation device ;-) I didn’t fully get the keynote because there was a lot of noise on the radio channel, on top of the Spanglish :-/

The first talk, by Agustin Gianni from Immunity, was about Attacking the WebKit Heap and was, well, very technical. A bit too detailed for me, as I don’t have much desire to exploit memory issues in WebKit, but it’s good to know that there are people looking into that. Just after that, there was a talk about the security of SAP products. The message I got was to read the SAP advisories and documentation, because he was showing exploits that used vulnerabilities that were either known and fixed or documented. It was still a bit interesting for me, as I didn’t know much about SAP systems and could see what it’s actually about.

I don’t have much to say about the iOS forensic talk, because you can find the things he mentioned with a one liner: find / -name '*.db'.
Ryan McArthur talked about Model-Specific Registers, which I didn’t even know existed. Apparently CPUs have special registers that you usually don’t use, and these have special capabilities, such as offering debug facilities. With Intel it’s called Last Branch Recording. Also, you can issue a simple instruction to detect whether you are in a virtual machine or not. That sounds damn interesting. He implemented something that would be able to trace programs like Skype. I wonder, though, what the difference to PaiMei is. An implementation using these facilities apparently exists for Linux as well.
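The “simple instruction” is presumably CPUID: leaf 1 sets ECX bit 31 (the hypervisor-present bit) under most hypervisors, and Linux exposes it as the `hypervisor` flag in /proc/cpuinfo. A sketch that checks for it; it only parses cpuinfo-style text, so it also works on a saved dump:

```python
def in_vm(cpuinfo_text):
    """True if the 'hypervisor' CPU flag (CPUID leaf 1, ECX bit 31,
    set by most hypervisors) appears in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

# On a live Linux box you would feed it the real file:
# with open("/proc/cpuinfo") as f:
#     print(in_vm(f.read()))
print(in_vm("flags\t\t: fpu vme hypervisor lahf_lm"))  # sample from a VM
print(in_vm("flags\t\t: fpu vme lahf_lm"))             # sample from bare metal
```

Note the caveat: the bit is merely reserved on bare metal and hypervisors are free to hide it, so this is a hint, not proof.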

A bit off the wall was Marcos Nieto talking about making money with Facebook. He realised that he could send the AJAX requests, which some Flash game sends to the game server, himself. He didn’t think about writing a bot to play the game for him, though. Instead, he used a proxy to capture the HTTP traffic his Flash player was generating and replayed that traffic with the proxy software. The money part would then be to sell the account with all the experience points on eBay. I hope it was just the translation and the crappy quality of the radio that made it seem so lame.

As for my presentation, I wasn’t too lucky with the MeeGo laptop I used, because it only has an Atom processor, which doesn’t have KVM support. That is very bad if you want to do something with QEMU :-( But I tried to prepare my things well enough not to have many problems. What happened then was really embarrassing. I prepared demos, and I did that very thoroughly. I even recorded videos as a second line of defence in case something failed. But I didn’t expect anything to fail, because my demos were simple enough, just a few copy&paste jobs. That’s what I thought, and Murphy proved me wrong. I hate him. So my demos did not work, of course. I still don’t really know why, but I guess I left a QEMU instance running due to the nervousness, and that instance was still messing around with the pipes I was using. Lesson learnt: whenever you think it’s simple enough, think harder.

Demo video. If it doesn’t play inline (stupid WordPress), please download it.

The rest of the conference was relaxed and the talks were much better than the day before. I feel that the second day was saved for the big things, while the first was thought of as a buffer for people to arrive. There was the SSL talk which caught a lot of attention in international media even before the conference. For reference: the issue was assigned CVE-2011-3389. I was astonished, really, to hear *the* talk being held in Spanish; I absolutely expected that thing to go off in English. Unfortunately, I couldn’t understand much of what was said. It took me quite a while to understand that the “navigator” the translator was constantly referring to was actually the browser… So I was disappointed by that talk, but the expectations were high, so it was easy to be disappointed.

http://www.youtube.com/watch?v=lauFlKi56aM

So all in all it went fine. It’s a nice enough conference, really relaxed, maybe even too relaxed. Given that there was only one track, it didn’t really matter that the schedule slipped by two hours. I felt that things generally went off the radar of the organising folks, most likely because organising a conference is very stressful ;-) But well, it would still have been nice if they had actually provided the facilities they promised for giving a talk, like a USB cable or a demo laptop ;-) I barely got a T-shirt :D

CHIST-ERA conference 2011 in Cork

Wednesday, October 26th, 2011

While in Ireland, I had the great opportunity of attending the CHIST-ERA strategic conference 2011 in Cork. Never heard of it? Neither had I. It’s a conference of European academic funding bodies to project and discuss future work and the direction of the work to be funded. Hence, it had many academic and industrial research people who talked about their vision for the next few years. If I understood it correctly, the funding bodies wanted some input on their new “Call”, which is the next big pile of money they throw at research.

The two broad topics were “Green ICT” and “From Data to Knowledge”. And both subjects were actually interesting. But due to the nature of the conference, many talks were quite high-level and a bit too, say, visionary for my taste. There were some technical talks which I think were out of place, given by poor post-docs who needed a presentation on their record to impress their supervisor or funding body.

CHIST-ERA Flower
However, for the Green ICT part, almost all the speakers highlighted how important it was to aim for “Zero Power ICT”, because the energy consumption of electronic devices would shoot up as it did over the last decade or so. But it hadn’t necessarily been much of a problem, because Moore’s Law would save us a bit: we knew that every couple of years, we could place the same logic onto half the chip area, which would then, according to the experts, use half the energy. However, that wouldn’t hold anymore in a decade or two, because we would reach a physical limit and would need new solutions to the problem.

Some proposed to focus on specialised ICs that are very efficient or can be turned off; others proposed to build probabilistic architectures, because most of the time a perfectly correct result doesn’t matter, or to focus research on new materials like nanotubes and nanowires. The most interesting suggestion was to exploit very new non-volatile memory technologies using spintronic elements. The weirdest approach was to save energy by eliminating routers on the Internet and having a non-routing Internet. The same guy proposed to cache content at the provider, as if ISPs weren’t doing that already.

After the first day, we had a very nice trip to the old Jameson Distillery in Midleton. It started off with a movie telling us the story of Jameson coming to Ireland and making whiskey. It didn’t forget to mention that Irish whiskey is older and of course better than the Scottish, and the tour around the old buildings told us what makes Irish whiskey way better than the Scottish. Funnily enough, they didn’t tell us that the Jameson guy was actually Scottish ;-) I do have to admit that I like Irish whiskey though :-) The evening was completed with a very nice and fancy meal in a nice restaurant called Ballymaloe. I think I have never dined with so many pieces of cutlery in front of me…

CHIST-ERA D2K visualisation
The second day was about “From Data to Knowledge” and unfortunately, I couldn’t attend every lecture, so I probably missed the big trends. When I heard that Natural Language Processing and Automatic Speech Recognition were advanced enough to transcribe a spoken TV or radio news show with a 5% error rate, I was quite interested. Because in my world, I can’t even have the texts I write corrected, because I need to use ispell, which doesn’t deal well with markup and other stuff. Apparently, there is a big discrepancy between the bleeding edge of academic research and freely available tools :-( I hope we can close this gap first, before tackling the next simultaneous translation tool from Urdu to Low German…