Applying international Bahn travel tricks to save money on tickets

Suppose you are sick of Tanzverbot and you want to go from Karlsruhe to Hamburg. As a proper German you’d think of the Bahn first, although Germany has recently started to allow long-distance travel by bus, which is cheap and surprisingly comfortable. My favourite bus search engine is busliniensuche.de.

Anyway, you opted for the Bahn and searched for a connection; the result is a one-way trip for 40 Euro. Not too bad:
[Screenshot: Bahn search Karlsruhe → Hamburg, 40 Euro]

But maybe we can do better. If we travel from Switzerland, we can save a whopping 0.05 Euro!
[Screenshot: Bahn search Basel SBB → Hamburg]
Amazing, right? Basel SBB is the first station after the German border and it allows international fares to be applied. Interestingly, special offers exist which apparently make the same journey, plus a considerable extra leg, cheaper.

But we can do better. Instead of travelling from Switzerland to Germany, we can travel from Germany to Denmark. To determine the first station after the German border, use the Netzplan for the IC routes and then check the local map, in this case Schleswig-Holstein. You will find Padborg as the first non-German station. If you travel from Karlsruhe to Padborg, you save 17.5%:
[Screenshot: Bahn search Karlsruhe → Padborg, 33 Euro]

Sometimes you can save by taking a Global ticket that crosses two borders. This is, however, not the case for us:
[Screenshot: Bahn search Basel SBB → Padborg, 49 Euro]

In case you were wondering whether it’s the very same train and route all the time: Yes it is. Feel free to look up the CNL 472.
[Screenshot: DB timetable for the CNL 472]

I hope you can use these tips to book cheaper travel.
Do you know any ways to “optimise” your Bahn ticket?

RIP Atul Chitnis

[Photo: Atul Chitnis]

I am sad to read that Atul Chitnis passed away at the age of 51. I met him several times during FOSS.in and it was a pleasure to meet the driving force behind that conference. While certainly being a controversial figure in the Free Software world, he did a lot of good things for our communities and ecosystems. Let’s hope the FOSS.in team carries on his legacy and continues to make great events for India.

Talks at FOSS.in 2012

Let me recap the talks held at FOSS.in a bit. It’s a bit late, I’m sorry for that, but the festive season was a bit demanding, timewise.

[Image: FOSS.IN]

The conference started off smoothly with a nice Indian breakfast, coffee and good chats. The introductory talk by Atul went well and was by far not as long as we expected it to be. Atul was obviously not as energetic as he used to be. I think he has grown older and visibly suffers from his illness. So a big round of applause and a bigger bucket of respect for pulling this event off nonetheless.

The first talk of the day was given by Gopal and he talked about “Big Data”. He started off with a definition and by claiming that what is considered big data now is likely not to be considered big data in the future. Think of 1GB of RAM: nowadays everybody runs 1GB or more in their laptop, but 10 years ago that would not have been the case. The only concept, he said, that survives is “Divide and Conquer”: break a problem up into smaller sub-problems which can then be run on many processing units in parallel. Hence distributed data and distributed processing are very important.

The prime example of big data was to count the unique items in a large set, e.g. to compare the vocabulary of two books. You split the books up into words to find the individual words and then count each of them to find out how often it appears. You could also preprocess the words with a “stemming filter” to get rid of forms and inflections. If your data is big enough, “sort | uniq” won’t do it, because “sort” will use up all your memory. To do it successfully anyway, you can split your data up, do the sorting and then merge the sorted results. He then explained how to split up various operations and merge them together. Basically, it is important to split and merge every operation possible in order to scale well. And that is exactly what “Hadoop” does. In fact, it’s got several components that facilitate dealing with all that: “splitter”, “mapper”, “combiner”, “partitioner”, “shuffle fetch” and a “reducer”. However, getting data into Hadoop was painful, he said.
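
To make the divide-and-conquer idea concrete, here is a tiny split/count/merge word count in plain Python. This is only my illustration of the concept, not anything Gopal showed, and the file name is just a placeholder:

# Minimal sketch of the split/count/merge idea -- not Hadoop, just Python's
# multiprocessing to illustrate "divide and conquer".
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    # "mapper": count word occurrences in one chunk of the text
    return Counter(chunk.split())

def word_frequencies(text, jobs=4):
    # "splitter": naively cut the text into roughly equal pieces on line boundaries
    lines = text.splitlines()
    step = max(1, len(lines) // jobs)
    chunks = ["\n".join(lines[i:i + step]) for i in range(0, len(lines), step)]
    with Pool(jobs) as pool:
        partial = pool.map(count_words, chunks)   # run the "mappers" in parallel
    return sum(partial, Counter())                # "reducer": merge the partial counts

if __name__ == "__main__":
    print(word_frequencies(open("book.txt").read()).most_common(10))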

Lydia from KDE talked about “Wikidata – The foundation to build your apps on“. She introduced her talk with a problem: “Which drugs are approved for pregnancy in the US?”. She said that Wikipedia can’t really answer this question easily, because maintaining such a list would be manual labour which is not really fascinating. One would have to walk through every article about a drug, try to find the information whether it was approved or not, and then condense it into a list. She was aiming at, I guess, Wikipedia not really storing semantic data.

Wikidata wants to be similar to Wikimedia Commons, but for the data of the world’s knowledge. It seems to be that missing semantic storage, which is also able to store information about the sources that confirm the data’s correctness. Something like the GDP of a country or the length of a river would be prime examples of use cases for Wikidata. Eventually this will increase the number of editors, because the barrier to contributing will be lowered significantly. Also, every Wikipedia language can profit immediately, because it can easily be hooked up.

I just had a quick peek at Drepper’s workshop on C++11, because it was very packed. Surprisingly many people wanted to listen to what he had to say about the new C++. Since I was not really present I can’t really provide details on the contents.

Lenny talked about politics in Free Software projects. As the title was “Pushing Big Changes“, the talk revolved around acquiring and convincing people to share your vision and getting your project accepted by the general public. He claimed that the Internet is full of haters and that one needs a thick skin to survive the flames on the Internet. Very thick in fact.

An interesting point he made was that connections matter, like personal relationships with relevant people and being able to influence them. And he didn’t like it. That, and the talk in general, was interesting, because I haven’t really heard anyone talk about that so openly. Usually, everybody praises Free Software communities as being very open, egalitarian and whatnot. But it’s not only rumour that this is rarely the case. Anyway, the bigger part of the talk was quite systemd-centric though and I don’t think it’s applicable to many other projects.

A somewhat unusual talk was given by Ben & Daniel, talking about how to really use Puppet. They do it at Mozilla at a very large scale and wanted to share some wisdom they gained.

They had a few points to make. Firstly: Do not store business data (as opposed to business logic) in Puppet modules. Secondly: Put data in “PuppetDB” or use “Hiera”. Thirdly: Reuse modules from either the “PuppetForge” or Github. As for writing your own modules, they recommended writing code that is generic enough, with parametrised classes, to support many more configurations. Also, they want you to stick to the syntax style guide.

Sebastian of KDE fame talked about KDE Plasma and how to make it succeed on mobile targets such as phones or tablets. Not knowing “Plasma” at all, I was interested to learn that Plasma is “a technology that makes it easy to build modern user interfaces”. He briefly mentioned some challenges such as running on multiple devices with or without touchscreens. He imagines the operating system to be provided by Mer, with Plasma running on top. He said that there is a range of devices that are supported at the moment. The developer story is also quite good with “Plasma Quick” and the Mer SDK.

He tried to have devices manufactured by Chinese companies and told some stories about the problems involved. One of them being that “Freedom” (probably as in Software Freedom) is not in their vocabulary, so getting free drivers was a difficult, if not impossible, task. Another issue was the size of orders: you can’t demand anything with an order of 10000 units, he said. But they seem to be able to pull it off anyway! I’m very eager to see their devices.

The last talk, which was the day’s keynote, went quite well and basically brought art and code together. He introduced us to Processing, an interesting programming environment for producing mainly visual arts. He praised how Free Software (although he referred to it as Open Source) made everybody more creative and how the availability of art transformed the art landscape. It was interesting to see how he used computers to express his creativity and unfortunately, his time was up quite quickly.

Drepper, giving quite a few talks, also gave a talk about parallel programming. The genesis of the problem was the introduction of multiple processors into a machine. It got worse when threads were introduced, which share the address space. That allowed for easy data sharing between threads but also made corrupting other threads very, very easy, and in subtle ways you would not anticipate: for example, all threads share one working directory, and if one thread changes it, it is changed for all threads of the process. Interestingly, he said that threads are not something that the end user shall use, but rather a tool for the system to exploit parallelism. The system shall provide better means for the user to use parallelism.

He praised Haskell for providing very good means for using threads. It is absolutely side-effect free and even stateful stuff is modelled side-effect free. So he claimed that it is a good research tool, but that it is not as efficient as C or C++. He also praised futures (with OpenMP), where the user doesn’t have to care about the details of the threading but leaves it up to the system. You only specify what can run in parallel and the system does it for you. Finally, he introduced C++11 features that help with using parallelism. There are various constructs in the language that make it easy to use futures, including anonymous functions and modelling thread dependencies. I didn’t like them all too much, but I think it’s cool that the language allows you to use these features.
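
The futures idea is easy to sketch outside of C++ as well. Here is a rough Python analogue using concurrent.futures; it is only my illustration of “say what may run in parallel and let the system do it”, not Drepper’s C++11 code:

# Rough illustration of the futures idea in Python (the talk used C++11's
# std::async and lambdas; this is just an analogue, not the original code).
from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    return sum(i * i for i in range(n))

with ThreadPoolExecutor() as pool:
    # You only say *what* may run in parallel; the pool decides how.
    futures = [pool.submit(expensive, n) for n in (10**5, 10**6, 10**7)]
    results = [f.result() for f in futures]   # blocks until each value is ready
    print(results)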

There was another talk from Mozilla’s IT given by Shyam and he talked about DNSSEC. He started with a nice introduction to DNSSEC. It was a bit too much, I feel, but it’s a quite complicated topic, so I appreciate all the effort he made. The main point that I took away was to not push the DS record to the parent too soon, because if you don’t serve signed zones yet, resolvers won’t trust your answers and your domain is effectively offline.
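
A quick sanity check along those lines could look like the following sketch, which uses the dnspython library to verify that a zone actually serves DNSKEY records before you hand the DS to the parent. This is my illustration, not anything from the talk, and the zone name is a placeholder:

# Sketch: before publishing a DS record at the parent, make sure the zone
# actually serves signed data. Uses the dnspython library; placeholder zone.
import dns.resolver

zone = "example.org"
try:
    answer = dns.resolver.resolve(zone, "DNSKEY")
    print("%s serves %d DNSKEY record(s); a DS at the parent can point to one of them"
          % (zone, len(answer.rrset)))
except dns.resolver.NoAnswer:
    print("%s is not signed yet -- publishing a DS now would cut off validating resolvers" % zone)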

Olivier talked about GStreamer 1.0. He introduced the GStreamer technology: its concept revolves around elements, which are put into bins, and elements have source and sink pads that you connect. New challenges are DSPs and different processing units like GPUs. The new 1.0 includes various new features, e.g. better locking support that makes it easier for languages like Python, and better memory management with GstBufferPool.
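
For illustration, building and running such a pipeline from Python looks roughly like this; a minimal sketch of the elements/bins/pads model, not Olivier’s demo code:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# parse_launch builds a bin for us and links the elements' source and sink pads
pipeline = Gst.parse_launch("audiotestsrc num-buffers=100 ! audioconvert ! autoaudiosink")
pipeline.set_state(Gst.State.PLAYING)
# wait until the stream finishes or an error occurs
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)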

I couldn’t really follow the rest of the talks as I was giving one myself and was busy talking to people afterwards. It’s really amazing how interested people are and to see the angle they ask questions from.

OpenPGP Key Rollover from D3492A2A to 1BF98D6D

Public Service Announcement: I am deprecating my old key 0xD3492A2A in favour of a newly generated key 0x1BF98D6D. I have uploaded a copy here. It is signed with my old key, too. FTR: It involved exporting the old secret key and the new public key to a temporary directory, changing the expiry date of the old key, signing the new key and importing the newly signed key *sigh*. It’s been only 11 years since --allow-expired-keys was discussed.

The new fingerprint is:

$ gpg --fingerprint --list-key 1BF98D6D
pub   3072D/1BF98D6D 2012-05-10 [expires: 2017-05-09]
      Key fingerprint = FF52 DA33 C025 B1E0 B910  92FC 1C34 19BF 1BF9 8D6D
uid                  Tobias Mueller tobias.mueller2  mail.dcu.ie
uid                  Tobias Mueller 4tmuelle  informatik.uni-hamburg.de
sub   3072g/3B76E8B3 2012-05-10 [expires: 2017-05-09]
$

It’s 2012 already and apparently there ain’t such a thing as best practices for rolling over your OpenPGP key. I’m thinking about something that discusses whether or how to

  1. create a new key
  2. add old UIDs to the new key
  3. sign the new key with the old one
  4. sign the old key with the new one
  5. probably sign the new key with other secret keys in your keyring
  6. prepare a small text file stating the rollover
  7. sign that so that you can upload it to the public
  8. inform people that have signed your old key that a new one is in place

I do think the steps mentioned make sense and should be implemented to ease the key transition. I started with something very simple; you can find the code here. You are welcome to discuss what’s needed in order to properly move from one key to another.
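
For the record, the core of those steps can be scripted roughly along the following lines. This is only a minimal sketch using plain gpg invocations (run it interactively, since gpg will ask for confirmations and passphrases); the key IDs are of course mine, and the script linked above does a bit more:

# Minimal sketch of a key transition via plain gpg calls; an illustration of
# steps 3, 4, 6 and 7 above, not the script linked in the post.
import subprocess

OLD, NEW = "D3492A2A", "1BF98D6D"

def gpg(*args):
    subprocess.run(["gpg"] + list(args), check=True)

# step 3: sign the new key with the old one
gpg("--default-key", OLD, "--sign-key", NEW)
# step 4: sign the old key with the new one
gpg("--default-key", NEW, "--sign-key", OLD)
# steps 6 and 7: write a short transition statement and clearsign it with both keys
with open("key-transition.txt", "w") as f:
    f.write("I am transitioning from OpenPGP key %s to %s.\n" % (OLD, NEW))
gpg("--local-user", OLD, "--local-user", NEW, "--clearsign", "key-transition.txt")
# step 8 is manual: mail key-transition.txt.asc to everyone who signed the old key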

19th DFN Workshop 2012

The 19th DFN Workshop happened again *yay* and I was lucky enough to be able to take part 🙂

After last year we all knew the venue and it’s great. The hotel is very professional and the receptions are very good. The conference room itself is very spacious and well equipped for having a couple of hundred people there.

So after a first caffeine infusion the conference started and the first guy gave the keynote. Tom Vogt (from Calitarus GmbH) talked about Security and Usability and he made some interesting points. He doesn’t want to have more “Security Awareness” but more “User Awareness”. He claims that users are indeed aware of security issues but need to be properly communicated with. He gave Facebook as an example: If you get your login wrong a couple of times, Facebook will send you an email apologising for the trouble *you* have while logging in. As opposed to the “if the question is stupid, the helpdesk will set you on fire” attitude.

So instead of writing security policies with a lot of rules, he wants us to write policies that take the user’s view into account and make sense for the average user. He also brought up passwords and password policy. Instead of requiring at least 8 characters (which the user will read as “exactly 8 characters” anyway), one should encourage a more sensible strategy, i.e. the XKCD one.
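
For what it’s worth, the XKCD-style strategy is trivial to implement; here is a small sketch of my own (not from the talk; the wordlist path is just the usual Unix one):

import secrets

def passphrase(words=4, wordlist="/usr/share/dict/words"):
    candidates = [w.strip() for w in open(wordlist) if w.strip().isalpha()]
    # pick a handful of random common words instead of "Tr0ub4dor&3"-style passwords
    return " ".join(secrets.choice(candidates) for _ in range(words))

print(passphrase())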

He also disliked the metaphors we’re using all the time, e.g. when we’re talking about documents or crypto keys. A document is something static that you hold in your hand. It can’t do any harm. But a Word “document” is indeed something different, because there are macros and whatnot. And it’s not a big problem to temporarily give away physical keys; in the crypto world, it is. And people, he claimed, make those associations when confronted with these terms. Unfortunately, he didn’t have a fix for those long-established metaphors, but he said extra caution needs to be applied when talking in these terms.

Dissonance was another big thing. He claimed that it’s problematic that starting a program and opening a file are the very same action in modern operating systems. If opening a document were triggered differently, the user could see whether the document they received was indeed a text file or some binary gibberish.

And well, as the talk was titled “Usability”, user interfaces were criticised, too. He mentioned that dialogues are very rude and that showing one is equivalent to physically holding someone until they answer a question. That trains the user to avoid and escape the dialogue as quickly as possible without even reading it, totally defeating the whole point of a dialogue. So we should only use them in a “life or death” situation where it would be okay to physically hold someone. And well, “user errors are interface errors”.

My favourite usability bug is the whole keysigning story. It’s broken from beginning to end. I think that if we come up with a nice and clean design for a procedure to sign each other’s keys, the Web of Trust model will be used more and more. Right now, it’s an utterly complex process involving different media, and all of that is doomed to be broken.

After that, a guy from the Leibniz-Rechenzentrum talked about internal perpetrators at university data centres. They basically introduced Login IDS, a tool to scrub your logs and make them more administration-friendly. He said that they didn’t watch their logs because it was way too much data. They had around 800 logins per day on their two SSH and two Citrix servers and nobody really checked who was logging in. To reduce the amount of log, they check the SSHd log and fire different events, e.g. if someone logs in for the very first time, or if a user hasn’t logged in at that time of day before, or from the IP she is currently using. That, he claimed, reduced their amount of log to 10% of the original volume. Unfortunately, the git repo shows a single big and scary Perl file with no license at all 😐
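
The underlying idea of only surfacing unusual logins is simple enough to sketch. The following is merely my illustration of it (feed it an sshd auth log on stdin), not the actual Login IDS code:

import re, sys

seen = set()
pattern = re.compile(r"Accepted \S+ for (?P<user>\S+) from (?P<ip>\S+)")

for line in sys.stdin:
    m = pattern.search(line)
    if not m:
        continue
    key = (m.group("user"), m.group("ip"))
    if key not in seen:          # only report combinations we have never seen before
        seen.add(key)
        print("first login of %s from %s: %s" % (key[0], key[1], line.strip()))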

Another somewhat technical talk followed, given by Michael Weiser. He talked about security requirements for modern high-performance computing environments and I couldn’t really follow all the way through. But from what I’ve understood, he wants to be able to execute big jobs and have all the necessary Kerberos or AFS tokens, because you don’t know how long you’ll have to wait until you can process your data. And well, he showed some solutions (S4U2self) and proposed another one which I didn’t really understand. But apparently everything needs to be very complex because you cannot get a ticket that’s valid long enough, and instead you get a Ticket-Granting Ticket which empowers you to get all the tickets you want for a basically unlimited amount of time…?

The break came up just at the right time so that the caffeine stock could be replenished. It did get used up quite quickly 😉

The first talk after the break introduced HoneypotMe, a technology that enables you to put honeypots on your production machines without risking having them compromised. They basically create tunnels for the ports that are open on the honeypot but not on the production machine, so an attacker would not detect the honeypot that easily. Although it’s kinda nonsensical for a Linux machine to have the MSSQL port open. Interesting technology, although I don’t quite understand why they put the honeypot behind the production machine (network-topology wise), so that you have to modify the TCP stack on the production machine in order to relay connections to the actual honeypot. Instead, one could put the honeypot in front and relay connections to the production machine. That way, one would probably reduce the plumbing of the TCP layer on the machine that’s meant to serve production purposes.
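
The tunnelling part is conceptually just a small TCP relay: a port that would otherwise be closed on the production box is answered locally and the traffic is piped to the honeypot. Here is a toy sketch of that idea (my illustration only; HoneypotMe itself works differently, as described above, and the honeypot address is a placeholder):

import socket, threading

LISTEN_PORT = 1433                  # e.g. the MSSQL port nobody expects on a Linux box
HONEYPOT = ("192.0.2.10", 1433)     # placeholder address of the actual honeypot

def pump(src, dst):
    # copy bytes in one direction until the connection closes
    data = src.recv(4096)
    while data:
        dst.sendall(data)
        data = src.recv(4096)

def handle(client):
    upstream = socket.create_connection(HONEYPOT)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", LISTEN_PORT))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()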

Another really technical talk was given by a guy from the research centre in Jülich. It was so technical that I couldn’t follow. Jesus Christ, were the slides packed. The topic was quite interesting though; unfortunate that it was a rather exhausting presentation. He tried to tell us how to manage IPv6, or well, to better damn manage it, because otherwise you’d have loads of trouble in your network. He was referring a lot to the very interesting IPv6 toolkit by THC. He claimed that those attacks are not easy to defend against. But it doesn’t even need an attacker, he said: Windows is enough to screw up your network, e.g. by somehow configuring Internet Connection Sharing it will send weird Router Advertisements. But I might have gotten that wrong because he was throwing lots of words and acronyms at us. NDPMON. RAPIXD. RAMOND. WTF. Fortunately, it was the last talk and we could head off to have some proper beer.

After far too little sleep and ridiculous amounts of very good food, the second day started off with a really great talk by a guy from RedTeam Pentesting. He did very interesting research involving URL shortening services and presented his results to us, some of which are quite scary. If you’re remotely interested in this topic, you should have a look at the paper once it is available. There is a slightly different version here.

So the basic problem was described as follows: A user wants to send a link to a friend but the URL is too long so that email clients break it (well, he didn’t mention which though) or Twitter would simply not accept it… We kinda have to assume that Twitter is a useful thing that people do actually use to transmit links. Anyway, to shorten links, people may use a service that translates the long URL into a short one. And now the problems start.

First of all, the obvious tracking issues arise. The service provider can see who clicks on which links and, even worse, set cookies so that users are identifiable even much later. Apparently, almost all of these services make use of tracking cookies which last for a couple of years. Interestingly, Google is reported not to make use of tracking technologies in their URL shortening service.

Secondly, you eventually leak a secret which is encoded in the URL you are shortening. And that’s apparently what people do. They use Google Docs or other sensitive webapps that encode important access tokens in the URL, which you are then throwing with both hands at the service provider. He claimed to have found many interesting documents, ranging from “obviously very private photos” through balance sheets of some company to a list of addresses of kindergarten kids. He got a good percentage of private documents, which was really interesting to see.

But it gets worse. He set up a brand new web server listening on a brand new domain (fd0.me) and created URLs which he then shortened using the services. On the page his webserver delivered was a password which no search engine knew back then. The question was: Do URL shortening services leak their data to search engines? Or worse: Do they scan the database for interesting looking URLs themselves? Turns out: Yes and yes. He found his password on search engines and curious administrators in his webserver log.

Other obvious problems include loss of the URL. Apparently people do use shortened URLs in long-lasting things like books. And well, URL shortening services are not necessarily known for being long-lived. Fun fact: His university used to have such a service, but they shut it down…

Another technical issue is speed. Because of the indirection, you have an overhead in time. Google are the winner here again. They serve the fastest.

So yeah that was a very interesting talk which clearly showed the practical risks of such services.

An electronic ID card was introduced in Germany rather recently and the next guy did some research (sponsored by the Ministry of the Interior) to explore the “eID Online Authentication Network Threat Model, Attacks and Implications”. Nobody in the audience actually used the eID, so he had to tell us what you are supposed to do with it. It is used to authenticate data like your name, address, birthday or just the fact that you are of legal age. It’s heavily focussed on browser use, so the scenarios are a bank or a web shop. After the website requests eID functions, the browser speaks to the local eID daemon which then wants to read your eID and communicates with the servers. Turns out that everything seems to be quite well designed, except, well, the browsers. So he claims it is possible to man-in-the-middle a connection if one can make a browser terminate a successfully opened connection, i.e. after all the TLS handshakes are finished, one terminates the connection, intercepts it and then no further verification is done. A valid attack scenario, though not necessarily an easy position to be in.


There were tiny talks as well. My favourite was Martin John from SAP talking about Cross-Domain Policies. Apparently, standards exist to “enhance” the same-origin policy and enable JavaScript in browsers to talk to different domains. He scanned the internet™ and found 3% of domains to have wildcard policies. 50% of those ran somehow sensitive webapps, e.g. authentication. He closed by recommending the use of CORS for cross-domain stuff.
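
A minimal example of that recommendation, sketched with Python’s standard http.server (my illustration; the origin below is a placeholder): instead of a wildcard, the response names the one origin that is allowed to read it.

from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example.org"   # the single site that may call this API

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # CORS: grant access to one named origin, not "Access-Control-Allow-Origin: *"
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

HTTPServer(("", 8080), Handler).serve_forever()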

The last two talks were quite interesting. The first one was about XML Signature Wrapping, a technique that I hadn’t heard of before, mostly because I’m not into XML at all. But it seems that you can sign parts of an XML document and, well, because XML is utterly complex, libraries fail to handle that properly. There are several attacks, including simply reproducing the signed XML tree with different properties and hoping that the parser verifies the correct tree but works on the other. Simple, huh? He claimed to have found CVE-2011-1411 that way, a vulnerability in an interesting user of XML: SAML, an authentication protocol based on XML.
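
To give a rough idea of what such a wrapping looks like, here is a toy example (my own sketch; real attacks depend on the concrete XML Signature and SAML library in use):

# The signature references the assertion with ID "signed-1". In the wrapped
# document that assertion still exists (so verification succeeds), but a naive
# consumer that simply takes the first <Assertion> acts on the attacker's one.
wrapped = """
<Response>
  <Assertion ID="evil"><Subject>admin</Subject></Assertion>
  <Wrapper>
    <Assertion ID="signed-1"><Subject>alice</Subject></Assertion>
  </Wrapper>
  <Signature><Reference URI="#signed-1"/>...</Signature>
</Response>
"""
print(wrapped)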

Afterwards, I was surprised to see an old tool I was playing with some time ago: Volatility. It gained better Linux support and the speaker showed off some features and explained how to make it support your Linux version. Quite interesting to see that people focus on bringing memory forensics to Linux.

So if you are more interested in the topics, feel free to browse or buy the book which includes all the papers.

This year’s DFN Workshop was much more interesting content-wise and I am glad that it managed to present interesting topics. Again, the setting and the catering were very nice and I hope to be able to attend many more DFN Workshops in the future 🙂

Ekoparty 2011

I was invited to Ekoparty in Buenos Aires, Argentina. It all went very quickly, because when I was accepted for my talk on Virtualised USB Fuzzing using QEMU and Scapy, I couldn’t read email very well. I was abroad and had only a replacement laptop (which we got at the MeeGo Summit in Dublin) at hand because my laptop had broken down 🙁 And on top of that I wasn’t very well connected. Anyway, I got notice exactly two weeks before the conference and actually I had other plans. But since it was in Argentina and I hadn’t been there yet, I was very eager to go.

I was going from Hamburg via Amsterdam and Sao Paulo to Buenos Aires, and back from Buenos Aires via Charles de Gaulle to Berlin. After my first flight I had a good break at Schiphol, but when I wanted to board the next flight, I was denied at first. After a couple of minutes, some officials came and I was interrogated, because my itinerary looked suspicious, they said. So I was questioned and searched, and the information I gave was promptly checked by the woman and her smartphone. Weird stuff. The next flights and airports were fortunately much better.

The very first day of the conference was reserved for the keynote and workshops. Unfortunately, the workshops were held in Spanish only, so I couldn’t really follow anything. But I still attended some folks playing around with a USRP. It was interesting enough despite the Spanish. They decoded normal FM radio, pager messages and other (analogue) radio messages flying through the ether. The keynote was held in Spanish, too, but two translators simultaneously translated the talk into English. It’s the first time that *I* am the one needing a translation device 😉 I didn’t fully get the keynote because there was a lot of noise on the translation radio and the Spanglish didn’t help :-/

The first talk, by Agustin Gianni from Immunity, was about Attacking the Webkit Heap and was, well, very technical. A bit too detailed for me as I don’t have much desire to exploit memory issues in Webkit, but it’s good to know that there are people looking into that. Just after that, there was a talk about the security of SAP products. The message I got was to read the SAP advisories and documentation, because he was showing exploits that used vulnerabilities that were either known and fixed, or documented. It was still a bit interesting for me as I didn’t know much about SAP systems and could see what it’s actually about.

I don’t have much to say about the iOS forensic talk, because you can find the things he mentioned with a one liner: find / -name '*.db'.
Ryan McArthur talked about Model-Specific Registers, which I didn’t even know existed. Apparently CPUs have special registers that you usually don’t use, and these have special capabilities such as offering debug facilities. Also, you can issue a simple instruction to detect whether you are in a virtual machine or not. That sounds damn interesting. With Intel it’s called Last Branch Recording. And he implemented something that would be able to trace programs like Skype. I wonder though what the difference to PaiMei is. An implementation using these facilities apparently exists for Linux as well.
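
To give a feeling for what these registers are, here is a tiny sketch that reads one MSR through Linux’s msr driver (modprobe msr, needs root). This is just my illustration of the mechanism, not the speaker’s tooling; 0x10 is the time stamp counter:

import os, struct

MSR_TSC = 0x10   # IA32_TIME_STAMP_COUNTER

def read_msr(register, cpu=0):
    # the msr driver exposes MSRs as an 8-byte read at offset == register number
    fd = os.open("/dev/cpu/%d/msr" % cpu, os.O_RDONLY)
    try:
        os.lseek(fd, register, os.SEEK_SET)
        return struct.unpack("<Q", os.read(fd, 8))[0]
    finally:
        os.close(fd)

print(hex(read_msr(MSR_TSC)))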

A bit off the wall was Marcos Nieto, talking about making money with Facebook. So he realised that he could send the AJAX requests, which some Flash game sends to the game server, himself. He didn’t think about writing a bot to play the game for him though. Instead, he used a proxy to capture the HTTP traffic his Flash player was generating and replayed that traffic with the proxy software. And the money part would then be to sell the account with all the experience points on eBay. I hope it was just the translation and the crappy quality of the radio that made it seem so lame.

As for my presentation, I wasn’t too lucky with the MeeGo laptop I used, because it only has an Atom processor which doesn’t have KVM support. That is very bad if you want to do something with QEMU 🙁 But I tried to prepare my things well enough to not have many problems. What happened then was really embarrassing. I had prepared demos, and I did that very thoroughly. I even recorded some videos as a second line of defence in case something failed. But I didn’t expect anything to fail because my demos were simple enough, just a few copy&paste jobs. That’s what I thought, and Murphy proved me wrong. I hate him. So my demos did not work, of course. I still don’t really know why, but I guess that I had left a QEMU instance running due to the nervousness, and that instance was still messing around with the pipes that I was using. So lesson learnt: Whenever you think it’s simple enough, think harder.

Demo video: if it doesn’t play inline (stupid wordpress), please download it yourself.

The rest of the conference was relaxed and the talks were much better than the day before. I feel that the second day was saved for the big things while the first was thought of as a buffer for people to arrive. There was the SSL talk which caught a lot of attention in international media even before the conference. For reference: the issue was assigned CVE-2011-3389. I was astonished, really, to hear *the* talk being held in Spanish; I absolutely expected that thing to go off in English. Unfortunately, I couldn’t understand much of what was said. It took me quite a while to understand that the “navigator” the translator was constantly referring to is actually the browser… So I was disappointed by that talk, but the expectations were high, so it was easy to be disappointed.

http://www.youtube.com/watch?v=lauFlKi56aM

So all in all it went fine. It’s a nice enough conference, really relaxed, maybe even too relaxed. Given that there was only one track, it didn’t really matter that things slipped the schedule by two hours. I felt that some things generally went off the radar of the organising folks, most likely because organising a conference is very stressful 😉 But well, it would still have been nice if they had actually provided the facilities they promised for giving a talk, like a USB cable or a demo laptop 😉 I barely got a T-shirt 😀

CHIST-ERA conference 2011 in Cork

While being in Ireland, I had the great opportunity of attending the CHIST-ERA strategic conference 2011 in Cork. Never heard of it? Neither have I. It’s a conference of European academic funding bodies to plan and discuss future work and the direction of the work to be funded. Hence, it had many academics and industrial research people who talked about their vision for the next few years. If I got it correctly, the funding bodies wanted some input for their new “Call”, which is the next big pile of money they throw at research.

The two broad topics were “Green ICT” and “From Data to Knowledge“. Both subjects were actually interesting, but due to the nature of the conference, many talks were quite high level and a bit too, say, visionary for my taste. There were some technical talks, too, which I think were out of place, given by poor post-docs who needed a presentation on their record to impress their supervisor or funding body.

[Image: CHIST-ERA flower]
However, for the Green ICT part, almost all the speakers highlighted how important it was to aim for “Zero Power ICT”, because the energy consumption of electronic devices would otherwise shoot up as it did over the last decade or so. It hadn’t necessarily been much of a problem so far, because Moore’s Law would save us a bit: we knew that in a couple of months we could place the same logic onto half the chip, which would then, according to the experts, use half the energy. However, that wouldn’t hold anymore in a decade or two, because we would reach a physical limit and we needed new solutions to the problem.

Some proposed to focus on specialised ICs that are very efficient or can be turned off; others proposed to build probabilistic architectures, because most of the time a perfectly correct result doesn’t matter, or to focus research on new materials like nanotubes and nanowires. The most interesting suggestion was to exploit very new non-volatile memory technologies using spintronic elements. The weirdest approach was to save energy by eliminating routers on the Internet and having a non-routing Internet. The same guy proposed to cache content at the provider, as if that weren’t done already by ISPs.

After the first day, we had a very nice trip to the old Jameson Distillery in Midleton. It started off with a movie telling us the story of Jameson coming to Ireland and making Whiskey. It didn’t forget to mention that Irish Whiskey is older and of course better than the Scottish, and the tour around the old buildings told us what makes Irish Whiskey way better than the Scottish. Funnily enough, they didn’t tell us that the Jameson guy was actually Scottish 😉 I do have to admit that I like the Irish Whiskey though 🙂 The evening was completed with a very nice and fancy meal in a nice restaurant called Ballymaloe. I think I never dined with so many pieces of cutlery in front of me…

[Image: CHIST-ERA “From Data to Knowledge” visualisation]
The second day was about “From Data to Knowledge” and unfortunately, I couldn’t attend every lecture, so I probably missed the big trends. When I heard that Natural Language Processing and Automatic Speech Recognition were so advanced as to be able to transcribe a spoken TV or radio news show with a 5% error rate, I was quite interested. Because in my world, I can’t even have the texts that I write corrected, because I need to use ispell which doesn’t do well with markup or other stuff. Apparently, there is a big discrepancy between the bleeding edge of academic research and freely available tools 🙁 I hope we can close this gap first, before tackling the next simultaneous translation tool from Urdu to Low German…

Spare Thinkpad x60, x60s, x61 or x61s anybody?

Dear Lazyweb,

my beloved laptop broke down 🙁 It’s an x61s and its backlight is not working anymore. I replaced the inverter card and the LCD cable to no avail. It can now only be the last and most expensive part: the LCD panel.

Hence my question: Do you know where to get hold of a spare x60, x60s, x61 or x61s with a working LCD panel? If so, please contact me.

Thanks.

My new book: Lorem Ipsum

Lenny already posted the news, so it’s about time and a real pleasure for me to present my new book: Lorem Ipsum.

It was a long ride for me and I want to thank all my supporters for allowing me to work through nights and weekends, potentially neglecting my friends and family for a while. But now it’s finally done and I’m very happy for the book to hit the (electronic and real-life-bookstore) bookshelves.

You can get it on Amazon.com or, if you prefer, on Amazon.de. But you get a bigger discount if you buy it on Lulu. So get it while it’s hot!

Product Details

ISBN 978-1-257-04887-8
Copyright Tobias Mueller
Published April 19, 2011
Language Latin
Pages 112
Binding Hardcover (casewrap)
Interior Ink Black & white
Dimensions (cm) 15.2 wide × 22.9 tall

Since the exterior contributes a lot to a proper reading experience, care was taken about nice looks and well-proportioned dimensions. Obviously, it’s a hardcover as well and no cheap paperback. So don’t judge only by the content, but also by the looks. Also, if you look close enough, you will notice a few easter eggs that I’ve hidden in the book.

So have a lot of fun enjoying the book 🙂

As a courtesy, I’ll provide the table of contents and a first page for reading.

An audio book is almost done as well; you can have a peek at half of the first chapter here.

Your browser does not support the audio element. Or this stupid wordpress instance filters out the audio tags :-\

“Schuelerbotendienst” on a rip-off tour in Hamburg

I just got back from the city centre with a buddy. There, two young people, maybe just 20 years old, approached us and asked whether we knew the “Schuelerbotendienst”. We said no, and it was explained to us that it is a social project in which kids from Hartz IV families can earn a little extra money by delivering newspapers. But first, the kids supposedly have to be checked for reliability. And for that, they need volunteers who have a free subscription sent to them and confirm that it is delivered correctly. After two weeks (or so) the subscription would end, but if you wanted, you could extend it.

It didn’t seem outright absurd. In fact, I was almost willing to go along with it. But I didn’t want to sign anything on the street. I wanted to call them back once I had looked into it. But the young man couldn’t give me any phone number for his Schuelerbotendienst. Very fishy. So I went home with a blank copy of the form and studied the information. The text to be signed mentioned neither the “Schuelerbotendienst” nor anything about it being free of charge. On the contrary: you would receive the subscription for two weeks without having to hand over your bank details, merely on invoice. After that, the subscription would simply extend itself by a year (or so).

So the scepticism was justified, and the scam involving the so-called “Schuelerbotendienst” doesn’t seem to be new, either.

The subscriptions the fraudsters are trying to push on people are from the VSR Verlag, which has apparently been struggling with dubious sales agents for quite a while.

So keep your eyes open and your wits sharp when faced with a strange sales pitch on the street. If you did sign something, make use of the 14-day withdrawal period right away and cancel any contracts.

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported.