LinuxCon Europe 2015 in Dublin

sponsor

The second day was opened by Leigh Honeywell, who talked about how to secure an Open Future. An interesting case study, she said, was Heartbleed. Researchers found that vulnerability and went through the appropriate vulnerability disclosure channels, but the information leaked even though an embargo was in place. In fact, the bug proved to have been exploited for a couple of months already. Microsoft, her former employer, had about a ten-year head start in developing a secure development life-cycle. The trick, she said, is to have plans in place in case of security vulnerabilities. You throw half of your plan away anyway, but it's good to have the practice of knowing whom to talk to and so on. She gave a few recommendations which she thinks will enable us to write secure code. Coders should review, learn, and speak up if they feel uncomfortable with a piece of code. Managers could pick up on what she called "smells", i.e. when people tend to be fearful about their code. Of course, Microsoft's SDL also contains many good practices. Her minimal set of practices is to have a self-assessment in place to determine whether something needs security review, to do up-front threat modelling that is kept up to date as things evolve, to have a security checklist like Mozilla's or OWASP's, and to have security analysis built into the CI process.

Honeywell

The container panel was led by Joe "Zonker" Brockmeier, who started the discussion by stating that we've passed the cloud hype and containers are all the rage now. The first question he shot at the panellists was whether containers were ready at all to be used for production. The panellists were, of course, all in agreement that they are, although the road ahead is still a bit bumpy. One issue they identified was image distribution. There are, apparently, two types of containers: application containers and system containers. System containers are what containers used to be, a lightweight VM with a full Linux system. Application containers, on the other hand, only run, say, your database instance. The panellists see application containers as replacing apps in the future; other services like databases are thus not necessarily the task of application containers. One of the panellists was embracing Docker Hub as a means of distributing software similar to RPM or .deb packages, but, he said, we need to solve the problem of signing and trusting. He compared the trust issue with packages he had installed on his laptop: when he installed a package, he didn't check what was inside the packages his OS downloaded. Well, I guess he missed that people put trust in the distribution instead of random people on the Internet who put up an image for everybody to download. Anyway, he wanted Docker to be a trusted entity in the way Google or Apple are for the app stores through which they distribute applications. I don't know how they could have missed dependency resolution and the problem of updating lower-level libraries; maybe that problem has been solved already…

Container Panel

Intel's Mark was talking about how Open Source is fuelling the Internet of Things. He said that trust is an essential aspect of devices that have access to personal or sensitive data, like access to your house. He sees potential in IoT around vaccines, which is a connection I hadn't thought of, but it makes some sense. He explained that vaccines are quite sensitive to temperature. In developing countries, up to 30% of vaccines spoil, he said, and what's worse is that you can't tell whether a vaccine is still good. The IoT could provide sensors on vaccines which monitor the conditions. In general, he sees that integrating the diverse functionality and capabilities of IoT devices will need new development efforts. He didn't mention what those would be, though. Another big issue, he said, was updatability: even with smaller devices, updates must not be neglected. The ability of these devices to communicate is a crucial component, too, he said. It must not be that two different light bulbs cannot talk to their controller. That sounds like this rant.

IoT opps

Next, Bradley talked about GPL compliance. He mentioned the ThinkPenguin products as a pristine example of a good, GPL-compliant "complete corresponding source" (CCS). He pointed the audience to the compliance guide. He said that it's best to avoid the offer for source; it's better to include the source with the product, because the offer itself creates ongoing obligations. For example, your call centre needs to handle those requests for the next three years, which you are probably not set up to do. Also, products typically have a short lifespan. CCS requires good instructions on how to build; it's not only about automated build tools (think configure, make, make install). You should rather think of "script" as in a movie or play script. The test to use on your potential CCS is to give your source release to a developer from some other department and see whether that person can build the code with your instructions. Note that make install usually does not work on embedded devices anyway, because you need to flash the code, so make sure to include instructions on how to get the software onto the device. It's usually not required to ship the tool-chain as long as you give instructions as to what compiler to use (and how it was configured). If you do include a compiler, you might end up with more obligations, because GCC, for example, is itself GPL-licensed. An interesting question came up regarding specialised hardware needed to build or flash the software: you do not need to include anything "tool-chain-like" as long as you have instructions about the requirements the user needs to obtain.

Bradley

Samsung's Krzysztof was talking about USB in Linux. It is, he said, the most common external interface in the world. It's like the Internet in the sense that it provides services in a client-server architecture. After he explained what USB actually is and how the host interacts with devices, he went on to explain the plug-and-play aspect of USB. While he provided some rather low-level details of the protocol, it stayed at a rather high level in the sense that it covered the very basic USB protocol; he didn't talk much about how exactly a driver is selected, for example. He went on to explain the BadUSB attack. He said that the vulnerability basically results from the lack of user interaction when plugging in a device and loading its driver. One of his suggestions was to not connect "unknown devices", which is hard, because you actually don't know what "services" a device implements. He also suggested limiting the number of input sources to X11. Most importantly, though, he said that we'd better be using device authorisation to explicitly allow devices before activating them. That's good news, because we are working on it! There are, he said, patches available for authorising certain interfaces instead of the whole device, but they haven't been merged yet.
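
For the curious: the kernel already exposes a device authorisation knob via sysfs, and a rough sketch of using it looks like this (the device path 1-4 is just an example and differs per port and machine):

# demand explicit authorisation for devices newly plugged into bus 1
echo 0 | sudo tee /sys/bus/usb/devices/usb1/authorized_default
# once you decide to trust a particular device (here 1-4), let the
# kernel configure it and bind its drivers
echo 1 | sudo tee /sys/bus/usb/devices/1-4/authorized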

USB

Jeff was talking about applying Open Source principles to hardware. He began by pointing out how many processors you don't get to see, for example in your hard disk, your touchpad controller, or the display controller. These processors potentially exfiltrate information, but you don't really know what they do. Actually, these processors are about owning the owner, the consumer, to then sell them stuff based on that exfiltrated big data, rather than about serving the owner, he said. He's got a project running to build devices that you not only own, but control. He mentioned IoT as a new battleground where open hardware could make an interesting contestant. FPGAs are Lego for hardware and can be used to easily build your functionality in hardware, he said. He mentioned that the SuperH patents have now expired; I think he wants to provide the "J-Core CPU" as an open design you can put on such FPGAs and then use for your computations. He also mentioned that open hardware can now be what Linux has been to the industry: a default toolkit for your computations. Let's see where his efforts will lead us. It would certainly be a nice thing to have our hardware based on publicly reviewed designs.

Open Hardware

The next keynote was reserved for David Mohally from Huawei. He said he has a lab in which they investigate what customers will be doing in five to ten years. He thinks that the area of network slicing will be key, because different business needs require different network service levels. Think of a temperature sensor which sends small amounts of data in a bursty fashion, while an HD video drone has rather high volume and probably requires low latency. As far as I understood, they are running network slices with smart meters in a very large deployment. He never mentioned what a network slice actually is, though. The management of the slices shall be opened up to the application layer on top, for third parties to implement their own management. The landscape, he said, is changing dramatically from what he called legacy closed source applications to open source. Let's hope he's right.

Huawei

It was announced that the next LinuxCon will happen in Berlin, Germany. So again in Germany. Let’s hope it’ll be an event as nice as this one.

Intel Booth

HP Booth

LinuxCon Europe – Day 1

attendee registration

The conference was opened by the Linux Foundation's Executive Director Jim Zemlin. He thanked the FSF for their 30 years of work. I was a little surprised to hear that, given the differences between Open Source and Free Software. He continued by mentioning the 5 Billion Dollar report, which calculates how much "value" the projects hosted at the Linux Foundation have generated over the last five years. He said that a typical product contains 80%, 90%, or even more Free and Open Source Software. He also extended the list of projects with the Real-Time Linux collaborative project which, as far as I understood, effectively means hiring Thomas Gleixner to work on the real-time Linux patches.

world without Linux

The next, very interesting, presentation was given by Sean Gourley, the founder of Quid, a business intelligence analytics company. He talked about the limits of human cognition and how algorithms exploit these limits. The limit is the speed of your thinking. He mentioned studies that measured blood flow across the brain when making decisions and found differences depending on how proficient you are at a given task. They also found that you cannot be quicker than a certain limit, say 650ms. He continued that the global financial market is dominated by algorithms and that a fibre cable from New York to London costs 300 million dollars just to save 5 milliseconds. He then said that these algorithms make decisions at a speed we are unable to catch up with. In fact, the flash crash of 2:45 (in May 2010) is inexplicable to this day; nobody knows what happened that caused a loss of trillions of dollars. Another example he gave was the crash of Knight Capital, which caused a loss of 440 million dollars in 45 minutes, only because they had updated their trading algorithms. So algorithms are indeed controlling our lives, which he underlined by saying that 61% of the traffic on the Internet is not generated by humans. He suggested that bots would not only control the financial markets, but also the reading and even the writing of news. As examples he showed a Google patent for auto-generating social status updates and how Mexican and Chinese propaganda bots tweet at higher volume than humans. So the responsibilities are shifting and we'd be either working with an algorithm or for one. Quite an interesting thought indeed.

man vs. machine

Next up was IBM on Transforming for the Digital Economy with Open Technology, which was essentially a gigantic sales pitch for their new Power architecture. The most interesting bit of that presentation was that "IBM is committed to open". This, she said, is visible through IBM's portfolio and through its initiatives like the IBM Academic Initiative. The OpenPOWER Foundation is another one of those: it takes the open development model of software and extends it to everything related to the Power architecture (e.g. chip design), she said. They are so serious about being open that they even trademarked "Open by Design"…

IBM sales pitch

Then the Dronecode people presented on their drone project. They said that they've come a long way since 2008 and that the next years are going to fundamentally change the drone scene, as many companies are involved now. Their project, Dronecode, is a stack from open hardware to flight control, and the next bigger thing will be CAN support, which is already used in cars, planes, and other vehicles. The talk then moved to ROS, the Robot Operating System, which is the lingua franca for robotics in academia.

Drones

Matthew Garrett talked about securing containers. He mentioned seccomp and what type of features you can deprive processes of. Nowadays you can also reason about the arguments to the system call in question, so it might be more useful to people, although, he said, writing a good seccomp policy is hard. Another mechanism for depriving processes of privileges is capabilities. They let you limit privileges in a more coarse-grained way, and their behaviour is not always well defined. The combination of capabilities and seccomp might have surprising results. For example, you might be allowing the mknod() call, but then not have the capability to actually make use of it, or vice versa. SELinux was next on his list as a mechanism to secure your containers. He said that writing SELinux policy is not the most fun thing in the world. Another option is to run your container in a virtual machine, but then you lose some benefits, such as introspection and fine-grained control over the processes, in exchange for more isolation. Eventually, he asked the question of when to use which technology. The performance overhead of seccomp, SELinux, and capabilities is basically negligible, he said. Full virtualisation is usually more secure, he said, but the problem is that you have more complex infrastructure, which tends to attract bugs. He also mentioned grsecurity as a means of protecting your Linux kernel. Let's hope it'll be merged some day.
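
His mknod() example is easy to reproduce with plain capabilities; a minimal sketch using capsh from libcap (the device numbers are simply those of /dev/null):

# even as root, mknod fails with EPERM once CAP_MKNOD is dropped from the
# bounding set, regardless of whether a seccomp policy would allow the syscall
sudo capsh --drop=cap_mknod -- -c 'mknod /tmp/node c 1 3'

# for comparison, the unrestricted call succeeds
sudo mknod /tmp/node2 c 1 3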

Containers

Canonical's Daniel Watkins then talked about cloud-init. He said it runs in three stages: init, config, and final. Init sets up networking, config does the actual configuration of your services, and final is for the things that need to be done at the very end. The cloud-init architecture is apparently quite flexible and versatile: you can load your own configuration and user-data modules so that you can set up your cloud images as you like. cloud-init allows you to get rid of custom images, so that you can have confidence in your base image working as intended. In fact, it works not only with BSDs but also with Windows images. He said it is somewhat similar to tools like Ansible, so if you are already happily using one of those, you're good.
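
For illustration, a minimal sketch of what the user-data for such an image could look like (the package and the file content are made up):

#cloud-config
package_update: true
packages:
  - nginx
write_files:
  - path: /etc/motd
    content: "provisioned by cloud-init\n"
runcmd:
  - [ systemctl, restart, nginx ]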

cloud-init

An entertaining talk was given by Florian Haas on LXC and containers. He talked about tricks for managing your application containers and showed a problem with a naive chroot: you get to see the host's processes and networking information through the proc filesystem. With LXC, that problem is dealt with, he said. But then you have a problem when you update the host, i.e. you have to take down the container while the upgrade is running. With two nodes, he said, you can build a replication setup which takes care of failing over the node while it is upgrading. He argued that this is interesting for security reasons, because you can upgrade your software to not be vulnerable to "the latest SSL hack" without losing uptime. Or much of it, at least… But you'd need twice the infrastructure to run production. The future, he said, might be systemd with its nspawn tool. If you use systemd all the way, then you can use fleet to manage the instances. I didn't take much away, personally, but I guess managing containers is all the rage right now.
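
The proc problem is easy to demonstrate; a quick sketch, assuming a chroot-able root filesystem (with sh, mount, and ps in it) lives in /srv/guest:

# in a plain chroot, mounting proc exposes all of the host's processes
sudo chroot /srv/guest sh -c 'mount -t proc proc /proc && ps -e | wc -l'

# in a fresh PID namespace (which LXC sets up for you),
# only the container's own processes are visible
sudo unshare --pid --fork --mount-proc sh -c 'ps -e'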

LXC

Next up was Michael Hausenblas on Filesystems, SQL and NoSQL with Apache Mesos. I had briefly heard of Mesos, but I really didn't know what it was. Not that I'm an expert now, but I guess I know that it's a scheduler you can use for your infrastructure, especially your Apache stack. Mesos addresses the problem of allocating resources to jobs. Imagine you have several different jobs to execute, e.g. a Web server, a caching layer, and some number-crunching computation framework. Now suppose you want to increase the number crunching after hours when the Web traffic wears off. Then you can tell Mesos what type of resources you have and when you need them, and Mesos will go off and manage your machines. The alternative, he said, was to manually SSH into the machines and reprovision them. He explained some existing and upcoming features of Mesos. So again, a talk about managing containers, machines, or infrastructure in general.

Mesos

The following kernel panel didn't provide much information to me. The moderation felt a bit stiff and the discussions weren't really engaged. The topics mainly circled around maintainership, growth, and community.

Kernel Panel

SuSE's Ralf then talked about DevOps. He described his DevOps needs based on a cycle of planning, coding, building, testing, releasing, deploying, operating, monitoring, and then back to planning. When bringing multiple projects together, he said, you need to bring two independent integration loops together. When talking about doing DevOps with customers, he mentioned some companies who themselves provide services to their customers. In order to be successful at DevOps, he said, smart tools, process automation, open APIs, freedom of choice, and quality control are necessary. So I guess he was pitching for people to use "standards", whatever that exactly means.

SuSE DevOps

I had been awaiting the next talk on patents and patent non-aggression. Keith Bergelt from the OIN talked about ten years of the Open Invention Network. He said that ten years ago Microsoft sued Linux companies to hinder Linux distribution, and that their network was founded to embrace patent non-aggression in the community. A snarky question would have been why it would not simply be enough to use GPLv3, but no questions were admitted. He said that the OIN has about 1750 licensees now, with over a million patents being shared. That's actually quite impressive, and I hope that small companies are being protected from patent threats of big players…

OIN

That concluded the first day. It was a lot of talks and talking in the hallway. Video recordings are said to be made available in a couple of weeks. So keep watching the conference page.

Sponsors

IBM Booth

Unboxing a Siswoo C55

For a couple of days now, I have been the owner of a Siswoo Longbow C55. It's a 5.5″ Chinese smartphone with an interesting set of specs for the 130 EUR it costs. For one, it has a removable 3300mAh battery, which powers the phone for two days; I consider that to be quite good. A removable battery is harder and harder to get these days :-/ But I absolutely want to be able to replace the battery in case it's worn out, hard-reboot the phone when it locks up, or simply make sure that it's off. It also has 802.11a WiFi, which seems to be rare for phones in that price range. Another very rare thing these days is an IR interface. The Android 5.1 based firmware also comes with a remote control app to control various TVs, aircons, DVRs, etc. The new Android version is refreshing and fun to use. I don't count on getting updates, though, although the maker seems to be open about it.

The phone does not have NFC, but something called HotKnot. The feature is described as being similar to NFC, but it works with induction through the screen, so when you want to connect two devices, you need to make the screens touch. I haven't tried that out yet, simply because I haven't seen anyone with that technology yet. It also does not have illuminated lower buttons, so if you depend on that, the phone does not work for you. A minor annoyance for me is the missing notification LED. I do wonder why such a cheap part is not built into those cheap Chinese phones. I think it's a very handy indicator and it annoys me to have to power on the screen only to see whether I have received a message.

I was curious whether the firmware on the phone matches the official firmware offered on the web site. So I got hold of a GNU/Linux version of the flash tool, which is a Qt-based blob. Still better than running Windows… That tool started but couldn't make contact with the phone. I was pulling my hair out to find out why it wouldn't work. Eventually, I took care of ModemManager, i.e. systemctl disable ModemManager, or do something like sudo mv /usr/share/dbus-1/system-services/org.freedesktop.ModemManager1.service{,.bak} and kill the running ModemManager process. Apparently it got in the way when the flash tool was trying to establish a connection. I have yet to find out whether this

/etc/udev/rules.d/21-android-ignore-modemmanager.rules

works for me:

ACTION!="add|change|move", GOTO="mm_custom_blacklist_end"
SUBSYSTEM!="usb", GOTO="mm_custom_blacklist_end"
ENV{DEVTYPE}!="usb_device", GOTO="mm_custom_blacklist_end"
ATTR{idVendor}=="0e8d", ATTR{idProduct}=="2000", ENV{ID_MM_DEVICE_IGNORE}="1"
LABEL="mm_custom_blacklist_end"

I “downloaded” the firmware off the phone and compared it with the official firmware. At first I was concerned because they didn’t hash to the same value, but it turns out that the flash tool can only download full blocks and the official images do not seem to be aligned to full blocks. Once I took as many bytes of the phone’s firmware as the original firmware images had, the hash sums matched. I haven’t found a way yet to get full privileges on that Android 5.1, but given that flashing firmware works (sic!) it should only be a matter of messing with the system partition. If you have any experience doing that, let me know.
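
In case you want to repeat that comparison, a rough sketch of what I did (the file names are made up):

# truncate the dump to the size of the official image, then compare hashes
size=$(stat -c%s official-system.img)
head -c "$size" dumped-system.img | sha1sum
sha1sum official-system.img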

The device performs sufficiently well. The battery life is good, and the 2GB of RAM make it unlikely for the OOM killer to stop applications. What is annoying, though, is the sheer size of the device. I found 5.0″ to be too big already, so 5.5″ is simply too much for my hands. Using the phone with one hand barely works. I wonder why there are so many huge devices out there now. Another minor annoyance is that some applications simply crash. I guess they don't handle the 64-bit architecture well or have problems with the Android 5.1 APIs.

FWIW: I bought from one of those Chinese shops with a European warehouse and their support seems to be comparatively good. My interaction with them was limited, but their English was perfect and, so far, they have kept what they promised. I pre-ordered the phone and it was sent a day earlier than they said it would be. The promise was that they take care of the customs and all and they did. So there was absolutely no hassle on my side, except that shipping took seven days, instead of, say, two. At least for my order, they used SFBest as shipping company.

Do you have any experience with (cheap) Chinese smartphones or those shops?

DFN Workshop 2015

As in the last few years, the DFN Workshop happened in Hamburg, Germany.

The conference was keynoted by Stevens Le Blond, who talked about targeted attacks, e.g. against dissidents. He mentioned that he had already presented the content at the USENIX Security conference, which some people consider to be excellent. He first showed how he used Skype to look up the IP address of his boss, and how similarly targeted attacks were executed in the past (think Stuxnet). His main focus, though, was attacks on NGOs, and in particular an attacker sending malicious emails to the victim.

In order to find out what attack vectors were used, they contacted over 100 NGOs to ask whether they had been attacked. Two NGOs affiliated with the WUC, which represents the Uyghur minority in China, had received 1500 malicious emails, out of which 1100 were carrying malware. He showed examples of those emails and some of them were indeed very targeted: they contained a personalised message with enough context to look genuine. However, the mail also had a malicious DOC file attached. Interestingly enough, the infrastructure used by the attacker for the targeted attacks was re-used for several victims. You would have expected the attacker to keep their infrastructure separated for the various victims, especially when carrying out targeted attacks.

They also investigated how quickly the attacker exploited publicly known vulnerabilities. They measured the time between the release date of a vulnerability and the sending of a malicious email exploiting it. They found that some of the attacks were launched on day 0, meaning that as soon as a vulnerability was publicly disclosed, an NGO was attacked with a relevant exploit. Perhaps interestingly, they did not find any 0-day exploits being launched. They also measured how the security precautions taken by Adobe for their Acrobat Reader and by Microsoft for their Office product (think sandboxing) affected the frequency of attacks. It turned out that it does help to make your software more secure!

To defend against targeted attacks based on spoofed emails he proposed to detect whether the writing style of an email corresponds to that of previously seen emails of the presumed contact. In fact, their research shows that they are able to tell whether the writing style matches that of previous emails with very high probability.

The following talk assessed end-to-end email solutions. It was interesting, because they created a taxonomy for 36 existing projects and assessed qualities such as their compatibility, the trust-model used, or the platform it runs on.
The 36 solutions they identified were (don’t hold your breath, wall of links coming): Neomailbox, Countermail, salusafe, Tutanota, Shazzlemail, Safe-Mail, Enlocked, Lockbin, virtru, APG, gpg4o, gpg4win, Enigmail, Jumble Mail, opaqueMail, Scramble.io, whiteout.io, Mailpile, Bitmail, Mailvelope, pEp, openKeychain, Shwyz, Lavaboom, ProtonMail, StartMail, PrivateSky, Lavabit, FreedomBox, Parley, Mega, Dark Mail, opencom, okTurtles, End-to-End, kinko.me, and LEAP (Bitmask).

Many of them could be discarded right away, because they were not production ready. The list could be further reduced by discarding solutions which do not use open standards such as OpenPGP, but rather proprietary message formats. After applying more filters, such as that the private key must not leave the realm of the user, the list could be condensed to seven projects. Those were: APG, Enigmail, gpg4o, Mailvelope, pEp, Scramble.io, and whiteout.io.

Interestingly, the latter two were not compatible with the rest. The speakers attributed that to the use of PGP/MIME vs. PGP/Inline, and they favoured the latter. I don't think that's a good idea, though. The authors attest that pEp has a lot of potential and they seem to have indeed interesting ideas; for example, they offer to sign another person's key by reading "safe words" over a secure channel. While this is not a silver bullet for the keysigning problem, it appears to be much easier to use.

While we are on the topic of keysigning: I have placed an article in the conference proceedings. It's about GNOME Keysign. The paper's title is "Welcome to the 2000s: Enabling casual two-party key signing", which I think reflects in what era the current OpenPGP infrastructure is stuck. The mindsets of the people involved are still a bit stuck in the old days when dealing with computation machines was a thing for those with long white beards. The target group of users for secure communication protocols has inevitably grown much larger than it used to be. While this sounds trivial, the interface to GnuPG has not significantly changed since. It also still makes it hard for others to build higher-level tools by making bad default decisions, demanding to be in control of "trust" decisions, and requiring certain environmental conditions (i.e. the filesystem to be used). GnuPG is not a mere library; it seems to understand itself as a complete crypto suite. Anyway, in the paper I explained how I think contemporary keysigning protocols work, why that is not a good thing, and how to make it better.

I propose to further decentralise OpenPGP by enabling people to have very small keysigning "parties". Currently, the setup cost of a keysigning party is very high. This is, amongst other things, due to the fact that an organiser is required to collect all the keys, to compile a list of participants, and to make the keys available for download. Then, depending on the size of the event, the participants queue up for several hours, only to then tick checkboxes on pieces of paper. A gigantic secops fail. The smarter people sign every box they tick so that an attacker cannot "inject" a maliciously ticked box onto the paper sheet. That's not fun. The not-so-smart people don't even bring their sheets of paper, or have them printed by a random person who happens to also be at the conference and, surprise, has access to a printer. What a gigantic attack surface. I think this is bad. Let's try to reduce that surface by reducing the size of the events.

In order to enable people to have very small events, i.e. two people keysigning, I propose to make most of the actions of a keysigning protocol automatic. So instead of requiring the user to manually compare the fingerprint, I propose that we securely transfer the key to be signed. You might rightfully ask how to do that. My answer is that we've passed the 2000s and that we carry devices which are capable of opening a TCP connection on a link-local network, e.g. WiFi. I know, this is not necessarily a given, but let's just assume for the sake of simplicity that one of the devices we carry along can actually do WiFi (and that the network does not block connections between machines). This also prevents certain attacks that users of current best practices are still vulnerable to, namely relying on short key IDs or leaking whom you are communicating with.

Another step that needs to be automated is signing the key. It sounds easy, right? But it's not just a mere gpg --sign-key. The first problem is that you don't want the key to be signed to pollute your keyring. That can be fixed by using --homedir or the GNUPGHOME environment variable. But then you also want to sign each UID on the key separately, and this is where things get a bit more interesting. Anyway, to make a long story short: we're not able to do that with plain GnuPG (as of now) in a sane manner. And I think it's a shame.
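
To illustrate the first part, a minimal sketch with a throwaway keyring (the key file and the key ID are made up; note that this signs all UIDs at once, which is exactly what I want to avoid):

# import the key to be signed into a temporary keyring
export GNUPGHOME=$(mktemp -d)
gpg --import alice.asc

# sign it; per-UID signing requires an interactive --edit-key session
gpg --sign-key 0123456789ABCDEF

# export the certification so it can be sent back to its owner
gpg --export --armor 0123456789ABCDEF > alice.signed.asc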

Lastly, sending the key needs to be as “zero-click” as possible, too. I propose to simply reuse the current MUA of the user. That sounds easy, but unfortunately, it’s only 2015 and we cannot interact with, say, Evolution and Thunderbird in a standardised manner. There is xdg-email, but it has annoying bugs and doesn’t seem to be maintained. I’m waiting for a sane Email-API. I mean, Email has been around for some time now, let’s now try to actually use it. I hope to be able to make another more formal announcement on GNOME Keysign, soon.
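
In principle it boils down to something as simple as the following (address and path made up), if only it worked reliably across MUAs:

xdg-email --subject "Your signed OpenPGP key" \
          --body "Please find your signed key attached." \
          --attach /tmp/alice.signed.asc \
          alice@example.com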

the userbase for strong cryptography declines by half with every additional keystroke or mouseclick required to make it work

— attributed to Carl Ellison.

Anyway, the event was good, I am happy to have attended. I hope to be able to make it there next year again.

Attending the DANTE Tagung in Karlsruhe

Much to my surprise, the DANTE Tagung took place in Karlsruhe, Germany. It appears to be the main gathering of the LaTeX (and related) community.

Besides pub-based events in the evenings, they also had talks. I knew some people on the programme by name and was eager to finally see them IRL. One of those was Markus Kohm, of KOMA-Script fame. He presented new or lesser-used features. One of those is scrlayer, which is capable of adding layers to a page, i.e. background or foreground layers. So you can add, e.g., a logo or a document version to every page, more or less like this:

\DeclareNewLayer[
    background,
    topmargin,
    contents={\hfill
        \includegraphics[width=3cm, height=2cm]
                        {example-image}%
    }
]{Logo}
\AddLayersToPageStyle{@everystyle@}{Logo}

You could do that with fancyhdr's \fancyhead, but then you'd only get the logo depending on your page style. The scrlayer solution is always applied, and it's more KOMAesque, I guess.

The next talk I attended was given by Uwe Ziegenhagen on new or exciting CTAN packages.
Among the packages he presented was ctable, which can be used to typeset tables and figures. It uses a favourite package of mine, tabularx. Its main advantage seems to be that you can use footnotes within tables, which is otherwise hard to achieve.

He also presented easy-todo, which provides "to-do notes throughout a document, and will provide an index of things to do". I usually use todonotes, which seems similar enough, so I don't really plan on changing that. The difference seems to be that easy-todo offers more fine-grained control over what goes into the list of todos to be printed out.

The flowchart package seems to make drawing flowcharts with TikZ easier, especially when following the "IBM Flowcharting Template". The flowcharts I have drawn so far were easy enough and I don't think this package would have helped me, but it is certain that the whole process of drawing with TikZ needs to be made much easier…

Herbert Voß went on to talk about ConTeXt, which I had already discovered, but was pleased by. From my naïve understanding, it is a “different” macro set for the TeX engine. So it’s not PDFTeX, LuaLaTeX, or XeTeX, but ConTeXt. It is distributed with your favourite TeXLive distribution, so it should be deployed on quite a few installations. However, the best way to get ConTeXt, he said, was to fire up the following command:

rsync -rlpt rsync://contextgarden.net/minimals/setup/.../bin .

wow. rsync. For binary software distribution. Is that the pinnacle of apps? In 2014? Rsync?! What is this? 1997? Quite an effective method, but I doubt it’s the most efficient. Let alone security wise.

Overall, ConTeXt is described as being a bit of an alien in the TeX world. The relationship with TeXLive is complicated, at best, and conventions are not congruent which causes a multitude of complications when trying to install, run, extend, or maintain both LaTeX and ConTeXt.


The next gathering will take place in the very north of Germany. A lovely place, but I doubt that I'll be attending. The crowd is nice, but it probably won't be interesting for me, talk-wise. I attribute that partly to my inability to enjoy coding TeX or LaTeX, but also to the arrogance I felt from the community. For example, people were mocking use cases others had, dismissing them as irrelevant. So you might not be able to talk TeX with those people, but they are nice anyway.

Getting cheaper Bahn fares via external services

Imagine you want to go from some random place in Germany to the capital. Maybe because it is LinuxTag. We learned that you can try to apply international fares. In the case of Berlin, the Netzplan for Berlin indicates that several candidate train stations exist: Rzepin, Kostrzyn, or Szczecin. However, we’re not going to explore that now.

Instead, we have a look at other (third party) offers. Firstly, you can always get a Veranstaltungsticket. It’s a ticket rated at 99 EUR for a return trip. The flexible ticket costs 139 EUR and allows you to take any train, instead of fixed ones. Is that a good price? Let’s check the regular price for the route Karlsruhe ←→ Berlin.

The regular price is 142 EUR. Per leg. So the return trip would cost a whopping 284 EUR. Let’s assume you have a BahnCard 50. It costs 255 EUR and before you get it, you better do the math whether it’s worth it. Anyway, if you have that card, the price halves and we have to pay 71 EUR for a leg or 142 for the return trip. That ticket is fully flexible, so any train can be taken. The equivalent Veranstaltungsticket costs 139, so a saving of 3 EUR, or 2%.

Where to get that Veranstaltungsticket, you ask? Well, it turns out LinuxTag offered it itself. You call the phone number of the Bahn and state your "code"; in the LinuxTag case it was "STATION Berlin". It probably restricts your destination options to Berlin. More general codes are easily found on the Web. Try "Finanz Informatik", "TMF", or "DOAG".

I don’t expect you to be impressed by saving 2%. Another option is to use bus search engines, such as busliniensuche.de, fernbusse.de, or fromatob.de. You need to be a bit lucky though as only a few of those tickets are available. However, it’s worth a shot as they cost 29 EUR only.

That saves you 80% compared to the original 142 EUR, or 60% compared to the 71 EUR with the BC 50. That's quite nice already. But we can do better: there is the "Fernweh-Ticket", which is only available from LTUR. It costs 26 EUR and you need to poll their Web interface every so often to get a chance at finding a ticket. I intended to write a crawler, but I have not gotten around to doing it yet…

With such a ticket you save almost 82% or 63% compared to the regular price. Sweet! Have I missed any offer that worth mentioning?

Finding (more) cheap flights with Kayak

People knowing me know about my weakness when it comes to travel itineraries. I spend hours and hours, sometimes days or even weeks with finding the optimal itinerary. As such, when I was looking for flights to GNOME.Asia Summit, I had an argument over the cheapest and most comfortable flight. When I was told that a cheaper and better flight existed that I didn’t find, I refused to accept it as I saw my pride endangered. As it turned out, there were more flights than I knew of.

Kayak seems to give you different results depending on what site you actually open. I was surprised to learn that.

Here is the evidence: (you probably have to open that with a wide monitor or scroll within the image)
Kayak per country

In the screenshot, you can see that on the left-hand side kayak.de found 1085 flights. It also found the cheapest one, rated at 614 EUR. That flight, marked with the purple "1", was also found by kayak.com and kayak.ie at different, albeit similar, prices. In any case, that flight has a very long layover. The next best flight kayak.de returned was rated at 687 EUR. The other two Kayaks have that flight, marked with the green "3", at around 730 EUR, almost 7% more than on the German site. The German Kayak does not have the Etihad flight, marked with the blueish "2", at 629 EUR as the Irish one does! The American Kayak has that flight at 731 EUR, which is a whopping 17% difference. I actually haven't checked whether the price difference persists when actually booking the flights. However, I couldn't even have booked the Etihad flight if I hadn't checked other Kayak versions.

Lessons learnt: Checking one Kayak is not enough to find all good flights.

In addition to Kayak, I like to use the ITA Matrix, as it allows you to greatly customise the queries. It also has a much saner interface than Kayak. The prices are not very accurate, though, as far as I could tell from my experiments, but it can give you an idea of which connections are cheap, so you can use that information in, e.g., Kayak. Or in that other Web site that I use: Skyscanner. It allows you to list flights for a whole month or for a whole country instead of a specific airport.

What tools do you use to check for flights?

Applying international Bahn travel tricks to save money for tickets

Suppose you are sick of Tanzverbot (the German ban on dancing on certain public holidays) and you want to go from Karlsruhe to Hamburg. As a proper German you'd think of the Bahn first, although Germany has started to allow long-distance travel by bus, which is cheap and surprisingly comfortable. My favourite bus search engine is busliniensuche.de.

Anyway, you opted for the Bahn and you search for a connection; the result is a one-way trip for 40 Euro. Not too bad:
bahn-ka-hh-40

But maybe we can do better. If we travel from Switzerland, we can save a whopping 0.05 Euro!
bahn-basel-hh-40
Amazing, right? Basel SBB is the first station after the German border and it allows international fares to be applied. Interestingly, special offers exist which apparently make the same journey, plus a considerable chunk on top, cheaper.

But we can do better. Instead of travelling from Switzerland to Germany, we can travel from Germany to Denmark. To determine the first station after the German border, use the Netzplan for the IC routes and then check the local map, i.e. Schleswig-Holstein. You will find Padborg as the first non-German station. If you travel from Karlsruhe to Padborg, you save 17.5%:
bahn-ka-padborg-33

Sometimes you can save by taking a global ticket crossing two borders. This is, however, not the case for us:
bahn-basel-padborg-49

In case you were wondering whether it’s the very same train and route all the time: Yes it is. Feel free to look up the CNL 472.
db-cnl-472

I hope you can use these tips to book a cheaper travel.
Do you know any ways to “optimise” your Bahn ticket?

Talks at FOSS.in 2012

Let me recap the talks held at FOSS.in a bit. It’s a bit late, I’m sorry for that, but the festive season was a bit demanding, timewise.

FOSS.IN

The conference started off smoothly with a nice Indian breakfast, coffee and good chats. The introductory talk by Atul went well and was by far not as long as we expected it to be. Atul was obviously not as energetic as he used to be; I think he has grown older and visibly suffers from his illness. So a big round of applause and an even bigger bucket of respect for pulling this event off nonetheless.

The first talk of the day was given by Gopal and he talked about "Big Data". He started off with a definition and by claiming that what is considered to be big data now is likely not to be considered big data in the future. Think about the 1GB of RAM in our laptops now: everybody runs 1GB or more in their laptop, but 10 years ago that would not have been the case. The only concept, he said, that survived is "divide and conquer": break a problem up into smaller sub-problems which can then be run on many processing units in parallel. Hence distributed data and distributed processing are very important.

The prime example of big data is to calculate the count of unique items in a large set, e.g. comparing the vocabulary of two books. You split the books up into words and then count each one of them to find out how often it is present. You could also preprocess the words with a "stemming filter" to get rid of forms and flexions. If your data is big enough, "sort | uniq" won't do it, because "sort" would use up all your memory. To do it successfully anyway, you can split your data up, sort the parts, and then merge the sort results. He then explained how to split up and merge various operations. Basically, it is important to split and merge every operation possible in order to scale well, and that is exactly what "Hadoop" does. In fact, it has several components that facilitate dealing with all that: "splitter", "mapper", "combiner", "partitioner", "shuffle fetch" and a "reducer". However, getting data into Hadoop was painful, he said.
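
The word count example translates nicely into shell commands, and the divide-and-conquer trick is visible in sort's merge mode; a rough sketch (book.txt is made up, and a real job would distribute the chunks across machines):

# the naive version: fine for small inputs, but sort may exhaust memory
tr -cs '[:alpha:]' '\n' < book.txt | sort | uniq -c

# divide and conquer: split the word list, sort the chunks separately,
# then merge the pre-sorted results and count
tr -cs '[:alpha:]' '\n' < book.txt | split -l 1000000 - chunk.
for f in chunk.*; do sort "$f" > "$f.sorted"; done
sort -m chunk.*.sorted | uniq -c | sort -rn | head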

Lydia from KDE talked about "Wikidata – The foundation to build your apps on". She introduced her talk with a problem: "Which drugs are approved for pregnancy in the US?" She said that Wikipedia cannot really answer this question easily, because maintaining such a list would be manual labour which is not really fascinating: one would have to walk through every article about a drug, try to find the information on whether it is approved or not, and then condense it into a list. She was aiming at, I guess, Wikipedia not really storing semantic data.

Wikidata wants to be similar to Wikimedia Commons, but for the data of the world's knowledge. It seems to be that missing semantic storage, one that can also record the sources which confirm the correctness of the information. Something like the GDP of a country or the length of a river would be a prime example of a use case for Wikidata. Eventually this will increase the number of editors, because the barrier to contributing will be lowered significantly. Also, every Wikipedia language can profit immediately, because it can be easily hooked up.

I just had a quick peek at Drepper’s workshop on C++11, because it was very packed. Surprisingly many people wanted to listen to what he had to say about the new C++. Since I was not really present I can’t really provide details on the contents.

Lenny talked about politics in Free Software projects. As the title was “Pushing Big Changes“, the talk revolved around issues around acquiring and convincing people to share your vision and have your project accepted by the general public. He claimed that the Internet is full of haters and that one needed a thick skin to survive the flames on the Internet. Very thick in fact.

An interesting point he made was that connections matter, as in personal relationships with relevant people and being able to influence them. And he didn't like that. That, and the talk in general, was interesting, because I haven't really heard anyone talk about this so openly. Usually everybody praises Free Software communities as being very open, egalitarian and what not. But rumour (and not only rumour) has it that this is rarely the case. Anyway, the bigger part of the talk was quite systemd-centric and I don't think it's applicable to many other projects.

A somewhat unusual talk was given by Ben & Daniel, talking about how to really use Puppet. They do it at Mozilla at a very large scale and wanted to share some wisdom they gained.

They had a few points to make. Firstly: do not store business data (as opposed to business logic) in Puppet modules. Secondly: put data in "PuppetDB" or use "Hiera". Thirdly: reuse modules from either the Puppet Forge or GitHub. As for writing your own modules, they recommended writing generic enough code, with parametrised classes, to support many more configurations. Also, they want you to stick to the syntax style guide.

Sebastian, of KDE fame, talked about KDE Plasma and how to make it succeed on mobile targets such as mobile phones or tablets. Not knowing "Plasma" at all, I was interested to learn that Plasma is "a technology that makes it easy to build modern user interfaces". He briefly mentioned some challenges, such as running on multiple devices with or without touchscreens. He imagines the operating system to be provided by Mer, with Plasma running on top. He said that there was a range of devices supported at the moment. The developer story also looks quite good with "Plasma Quick" and the Mer SDK.

He has tried to have devices manufactured by Chinese companies and told some stories about the problems involved. One of them being that "Freedom" (probably as in Software Freedom) is not in their vocabulary, so getting free drivers is a difficult, if not impossible, task. Another issue is the size of orders: you can't demand anything with an order of 10000 units, he said. But they seem to be able to pull it off anyway! I'm very eager to see their devices.

The last talk, which was the day's keynote, went quite well and basically brought art and code together. He introduced us to Processing, an interesting programming environment for producing mainly visual art. He praised how Free Software (although he referred to it as Open Source) makes everybody more creative and how the availability of art has transformed the art landscape. It was interesting to see how he uses computers to express his creativity; unfortunately, his time was up quite quickly.

Drepper, giving quite a few talks, also gave one about parallel programming. The genesis of the problem was the introduction of multiple processors into a machine. It got worse when threads were introduced, which share the address space. That allowed for easy data sharing between threads, but also made corrupting other threads very easy, and in subtle ways that you would not anticipate: for example, all threads share one working directory, and if one thread changes it, it changes for all threads of the process. Interestingly, he said that threads are not something the end user should use, but rather a tool for the system to exploit parallelism; the system should provide better means for the user to make use of parallelism.

He praised Haskell for providing very good means of using threads. It is absolutely side-effect free, and even stateful stuff is modelled side-effect free. So he claimed that it is a good research tool, but that it is not as efficient as C or C++. He also praised futures (with OpenMP), where the user doesn't have to care about the details of the threading but leaves it up to the system: you only specify what can run in parallel and the system does it for you. Finally, he introduced the C++11 features that help with using parallelism. There are various constructs in the language that make it easy to use futures, including anonymous functions and ways of modelling thread dependencies. I didn't like them all too much, but I think it's cool that the language allows you to use these features.

There was another talk from Mozilla's IT, given by Shyam, who talked about DNSSEC. He started with a nice introduction to DNSSEC. It was a bit too much, I feel, but it's quite a complicated topic, so I appreciate all the effort he made. The main point that I took away was not to publish the DS record too soon: if your zone is not signed yet, validating resolvers won't trust your answers and your domain is effectively offline.
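
A quick way to sanity-check that order of operations before touching the parent zone, sketched with dig (the domain is made up):

# does the zone already serve signatures? (look for RRSIG records in the answer)
dig +dnssec SOA example.org @ns1.example.org

# what DS record, if any, does the parent currently publish?
dig +short DS example.org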

Olivier talked about GStreamer 1.0. He introduced the GStreamer technology by explaining that its concept revolves around elements, which are put into bins, and that elements have source and sink pads that you connect. New challenges are DSPs and different processing units like GPUs. The new 1.0 includes various new features, such as better locking support, which makes it easier for languages like Python, and better memory management with GstBufferPool.
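
The element and pad model is easy to get a feel for with gst-launch-1.0, which links elements on the command line much as you would connect their pads in code:

# a test video source, a converter, and an automatically chosen video sink
gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink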

I couldn't really follow the rest of the talks, as I was giving one myself and was busy talking to people afterwards. It's really amazing how interested people are, and to see the angles they ask questions from.