Open Source Hong Kong 2015

Recently, I’ve been to Hong Kong for Open Source Hong Kong 2015, which is a legacy of the GNOME.Asia Summit 2012 we had in Hong Kong. The organisers apparently liked their experience of organising GNOME.Asia Summit in 2012 and have continued to organise Free Software events. When I talked to the organisers, they said that more than 1000 people had registered for the gratis event. Not all 1000 were present, though; half of that is a more realistic estimate.

Olivier Klein from Amazon Web Services opened the conference with his keynote on Big Data and Open Source. He began with a quote from RMS about the “Free” in Free Software referring to freedom, not price. He followed with the question of how Big Data fits into the spirit of Free Software. He answered shortly afterwards by saying that technologies like Hadoop allow you to mess around with large data sets on commodity hardware rather than requiring you to build a heavy data center first. Although he said it would not, the talk then turned into a subtle sales pitch for AWS. So we learned about AWS’ Global Infrastructure, like how well located the AWS servers are, how the AWS architecture helps you to perform your tasks, how everything in AWS is an API, etc. I wasn’t all too impressed, but then he demoed how he uses various Amazon services to analyse Twitter for certain keywords. Of course, analysing Twitter is not that impressive, but being able to do that within a few seconds with relatively few lines of code impressed me. I was also impressed by his demoing skills. Of course, one part of his demo failed, but he reacted very professionally, e.g. he quickly opened a WiFi hotspot on his phone to use as an alternative uplink. He also quickly grasped what was going on on his remote Amazon machine by glancing over netstat and ps output.

The next talk I attended was on trans-compiling, given by Andi Li. He was talking about Haxe and how it compiles to various other languages. Think Clojure, Scala, and Groovy, which all compile to Java bytecode, but on steroids: Haxe compiles to source code in another language. So Haxe is, in a sense, like Emscripten or Vala, but a much more generic source-to-source compiler. He talked about the advantages and disadvantages of Haxe, but he lost me when he said that more abstraction is better. The examples he gave were quite impressive. I still don’t think trans-compiling is particularly useful outside the realm of academic experiments, but I’m still intrigued by the fact that you can make use of Haxe’s own language features to conveniently write programs in languages that don’t provide those features. That seems to be the origin of the tool: Flash. So if you already have a proper language with a proper stdlib, you don’t need Haxe…

From the six parallel tracks, I chose to attend the one on BDD in Mediawiki by Baochuan Lu. He started out by providing the motivation for his work. He loves Free/Libre and Open Source software, because it provides a life-long learning environment as well as a very supportive community. He is also a teacher and makes his students contribute to Free Software projects in order to get real-life experience with software development. As a professor, he said, one of his fears when starting these projects was being considered the expert™ although he doesn’t know much about Free Software development. This fear, he said, is shared by many professors, which is why they would not consider entering the public realm of contributing to Free Software projects. But he reached out to the (Mediawiki) community and got amazing responses and an awful lot of help.
He continued by introducing Mediawiki, which, he said, is a platform that powers many Wikimedia Foundation projects such as Wikipedia, Wikibooks, Wikiversity, and others. One of the strategies for testing Mediawiki is to use Selenium and Cucumber for automated tests. He introduced the basic concepts of Behaviour Driven Development (BDD), such as being short and concise in your test cases or being iterative in the test design phase. Afterwards, he showed us what his tests look like and how they run.

The after-lunch talk, titled Data Transformation in Camel Style, was given by Red Hat’s Roger Hui and was concerned with Apache Camel, an “Enterprise Integration” software. I had never heard of it and I am not much smarter now. From what I understood, Camel allows you to program message workflows. So depending on the content of a message, you can make it go certain ways, e.g. to a file or to an ActiveMQ queue. The second important part is data transformation. For example, if you want to change the data format from XML to JSON, you can use their tooling with a nice clicky-pointy GUI to drag your messages around and route them through various translators.

From the next talk, by Thomas Kuiper, I learned a lot about Gandi, the domain registrar. But they do much more than registering domains. And you can do it all with a command line interface! So they are very tech savvy and enjoy having such customers, too. They really seem to be a cool company with an appropriate attitude.

The next day began with Jon’s Kernel Report. If you’re reading LWN, then you haven’t missed anything. He said that the kernel grows and grows. The upcoming 4.2 kernel, probably going to be released on August 23rd, might very well be the busiest we’ve seen, with the most changesets so far. The trend seems to be unstoppable. The length of the development cycle is getting shorter and shorter, currently being at around 63 days. The only thing that can delay a kernel release is Linus’ vacation… The rate of volunteer contribution is dropping, from 20% as seen for 2.6.26 to about 12% in 3.10, and that trend is also continuing. Another analysis he did was to look at the patches and their timezones. He found that a third of the code comes from the Americas, that Europe contributes another third, and so does Australasia.

As for Linux itself, he explained new system calls and other features of the kernel that have been added over the last year. While many things go well and probably will continue to do so, he worries about the real time Linux project. Real time, he said, means the system reacting to an external event within a bounded time. No company is supporting real time Linux currently, he said. According to him, being a real time general purpose kernel makes Linux very attractive, and we should leverage that potential.

Security is another area of concern. 2014 was the year of high profile security incidents, like various Bash and OpenSSL bugs, and he expects that 2015 will be no less interesting. Also because the kernel carries lots of old and unmaintained code: three million lines of code haven’t been touched in at least ten years. Shellshock, he said, was in code more than 20 years old. Also, we have a long list of motivated attackers while not having people working on making the kernel more secure, although “our users are relying on us to keep them safe in a world full of threats”.

The next presentation was given by a speaker from Microsoft on .NET going Open Source. She presented the .NET stack which Microsoft open-sourced at the end of last year, as well as Visual Studio. Their vision, she said, is that Visual Studio is a general purpose IDE for every app and every developer, so they have good Python and Android support. A “free cross platform code editor” named Visual Studio Code exists now, which is a bit more than an editor: it understands some languages and can help you while debugging. I tried to get more information on the accompanying Patent Grant, but she couldn’t help me much.

There was also a talk on Luwrain by Michael Pozhidaev, which is GPLv3 software for blind people. It is not a screen reader but more of a framework for writing software for blind people. They provide an API that guarantees that your program will be accessible without the application programmer needing to have knowledge of accessibility technology. They haven’t had a stable release just yet, but one is expected for the end of 2015. The demo unveiled a text oriented desktop which reads out the text on the screen. Several applications already exist, including a file editor and a Twitter client. The user is able to scroll through the text by word or character, which reminded me of the ChorusText I had seen at GNOME.Asia Summit earlier this year.

I had the keynote slot, which allowed me to throw out my ideas for the future of the Free Software movement. I presented on GNOME and how I see that security and privacy can become a distinguishing feature of Free Software. We had an interesting discussion afterwards as to how to enable users to make security decisions without prompts. I concluded that people do care about creating usable secure software, which I found very refreshing.

Both the conference and Hong Kong were great. The local team did their job pretty well and I am proud that the GNOME.Asia Summit in Hong Kong inspired them to continue doing Free Software events. I hope I can be back soon :-)

GNOME.Asia Summit 2015 in Depok, Indonesia

I have just returned from the GNOME.Asia Summit 2015 in Depok, Indonesia.

The most interesting talk I saw, I think, was the one by Iwan S. Tahari, the manager of a local shoe producer, who also sponsored GNOME shoes!

“Open Source Software in Shoes Industry” was the title, and he talked about how his company, FANS Shoes, est. 2001, uses “Open Source”. They are also a BlankOn Linux partner, which seems to be a rather big thing in Indonesia. In fact, the keynote presentation earlier was on that distribution and mentioned how they try to make it easier for people of their culture to contribute to Free Software.
Anyway, the speaker went on to claim that Indonesia has 82 million Internet users, 69 million of whom use Facebook. But few use “Open Source”, he asserted. The machines sold there ship with either Windows or DOS, he said. He said that FANS preferred FOSS because it increased their productivity, not only because of viruses (he mentioned BRONTOK.A as a pretty annoying example), but also because of the re-installation time. Re-installing Windows takes about 90 minutes, he said, whereas the average time to install BlankOn (on an SSD) was 15 minutes. According to him, the install time is especially annoying for them, because they don’t have IT people on staff. He liked BlankOn Linux because it comes with “all the apps” and there is not much to install afterwards.

Another advantage he mentioned is the cost. He estimated the cost of their IT landscape going Windows at 136.57 million Rupiah (about 12,000 USD). With BlankOn, it comes down to zero, he said. That money he can now spend on a van and a transport scooter instead. Another feature of his GNU/Linux based system, he said, was the ability to cut the power at will without things breaking; Indonesia, he said, is known for frequent power cuts. He explicitly mentioned printer support as a major pain point for them.

When they bootstrapped their Free Software usage, they first tried dual boot for their 5 employees. But it was not worth the effort, because everybody selected Windows on boot anyway. They then migrated the accounting manager to a GNU/Linux based operating system, and that laptop still runs the Linux Mint 13 they installed… He mentioned that you have to migrate top down, never from bottom to top, so senior management needs to go first. Later Q&A revealed that this is due to cultural issues: the leaders need to set an example, and the workers will not change unless their superiors do. Only their R&D department was hard to migrate, he said, because they need to be compatible with Corel Draw. With the help of an Indonesian Inkscape book, though, they managed to move to Inkscape. The areas where they lack support are CAD (think AutoCAD), statistics (think SPSS), Kanban information systems (like iceScrum), and integration with “Computer Aided Machinery”. He also identified the lack of documentation as a problem, not only for them, but for the general uptake of Free Software in Indonesia. In order to remedy the situation, they provide gifts for people writing documentation or books!

All in all, it was quite interesting to see an actual (non-computer) business running exclusively on Free Software. I had a chat with Iwan afterwards and maybe we can get GNOME shaped flip-flops in the future :-)

The next talk, on GNOME on an Android TV Dongle, was given by Ahmad Haris. He brought GNOME to those 30 USD TV sticks that can turn your TV into a “smart” device. He showed various commands and parameters which enable you to run Linux on these devices. As for the reasons to put GNOME on those devices, he said that it has a comparatively small memory footprint. I didn’t really understand the motivation, but I blame mostly myself, because I don’t even have a TV… Anyway, bringing GNOME to more platforms is good, of course, and I was happy to see that people are actively working on bringing GNOME to various hardware.

Similarly, Running GNOME on a Nexus 7 by Bin Li presented how he tried to make his Android tablet run GNOME. There is previous work done by Vadim Rutkovsky.

He gave instructions on how to create a custom kernel for the Nexus 7 device. He also encountered some problems, such as compilation errors, and showed how he fixed them. After building the kernel, he installed Arch Linux with the help of some scripts. This, however, turned out not to be successful, so he couldn’t run his custom Arch Linux with GNOME.
He wanted a tool like “ubuntu-device-flash” so that hacking on the device would be much easier. Also, downloading and flashing a working image is too hard for casually hacking on it, he said.

A presentation I was not impressed by was “In-memory computing on GNU/Linux”. More and more companies, the speaker said, are using in-memory computing on a general purpose operating system. Examples of products which use in-memory computing were GridGain, SAP HANA, IBM DB2, and Oracle 12c. These products, he said, allow better and faster decision making and help to avoid risks. He also pointed out that you won’t have hard drives breaking down, and that you use less energy. While in-memory is blazingly fast, all your data is lost when you have a power failure. The users of big data, according to him, are businesses, academics, governments, or software developers. The last one surprised me, but he didn’t go into detail as to why it is useful for an ordinary developer. The benchmarks he showed were impressive: up to hundred-fold improvements for various tests were recorded in the in-memory setting compared to the traditional on-disk setting. The methodology wasn’t comprehensive, so I am not yet convinced that the convoluted charts show anything useful. But the speaker is an academic, so I guess he’s got at least compelling arguments for his test setup. In order to build a Linux suitable for in-memory computation, they installed a regular GNU/Linux on a drive and modified the boot scripts such that the disk is copied into a tmpfs. I am wondering, though: wouldn’t it be enough to set up a very aggressive disk cache…?
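Just to illustrate the approach he described, a very rough sketch of my own (not the speaker’s actual scripts; in practice this would live in an initramfs hook or an early boot script rather than be run by hand):

# Create a big tmpfs and copy the installed system into it;
# the modified boot scripts would then switch_root into /mnt/ram
# instead of mounting the on-disk root.
mkdir -p /mnt/ram
mount -t tmpfs -o size=16G tmpfs /mnt/ram
cp -a /bin /etc /lib /lib64 /sbin /usr /var /mnt/ram/
mkdir -p /mnt/ram/proc /mnt/ram/sys /mnt/ram/dev /mnt/ram/run /mnt/ram/tmp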

I was impressed by David’s work on ChorusText. I couldn’t follow the talk, because my Indonesian wasn’t good enough. But I talked to him privately and he showed me his device, which, as far as I understand, is an assistive screen reader. It has various sliders with tactile feedback to help you navigate through text with the screen reader. Apparently, he has low vision himself, so he’s way better suited to tell whether this device is useful. For now, I think it’s great and I hope that it helps more people and that we can integrate it nicely into GNOME.

My own keynote went fairly well. I spent my time explaining what I think GNOME is, why it’s good, and what it should become in the future. If you know GNOME, me, and my interests, then it doesn’t come as a surprise that I talked about the history of GNOME, how it tries to bring Free computing to everyone, and how I think security and privacy are going to matter in the future. I tried to set the tone for the conference, hoping that discussions about GNOME’s future would spark in the coffee breaks. I had some people discussing with me afterwards, so I think it was successful enough.

When I went home, I saw that the Jakarta airport runs GNOME 3, but probably hasn’t been doing so for very long, because the airport’s UX is terrible. In fact, it is one of the worst ones I’ve seen so far. I arrived at the domestic terminal, but I didn’t know which one it was, i.e. its number. There were no signs or indications telling you which terminal you are in, let alone where you need to go to catch your international flight. Their self-service information system couldn’t deliver. The information desk was able to help, though. The transfer to the international terminal requires you to take a bus (fair enough), but whatever the drivers yell when they stop is not comprehensible. Once you are lucky enough to get out at the right terminal, you need to have a printed version of your ticket. I think the last time I saw that was about ten years ago in Mumbai. The airport itself is big and bulky with no clear indications as to where to go. Worst of all, it doesn’t have any air conditioning.

I was not sure whether I had to pay the 150,000 Rupiah departure tax, but again, the guy at the information desk was able to help, although I was disappointed to learn that they wouldn’t take a credit card, only cash. So I withdrew the money from the next ATM that wasn’t broken (I only needed three attempts). But it was good to find the non-broken ATM, because the shops wouldn’t take my credit card either, so I already knew where to get cash from. The WiFi’s performance matches the rest of the airport’s infrastructure well: it’s quite dirty. Because it turned out that the information the guy gave me was wrong, I invested my spare hundred-something thousand Rupiah in doughnuts to help me wait for my flight, which was delayed by 2.5 hours. But I couldn’t really enjoy the food, because the moment I sat on any bench, cockroaches began to invade the place. I think the airport hosts the dirtiest benches in all of Indonesia. The good thing is that they have toilets. With no drinkable water, but at least you can wash your hands. Fortunately, my flight ended up being only two hours late, so I could escape relatively quickly. I’m looking forward to going back, but maybe not via CGK ;-)

All in all, many kudos to the organisers. I think this year’s edition was quite successful.

Sponsored by GNOME!

FOSDEM 2015

It’s winter again and it was clear that FOSDEM was coming. However, preparation fell through the cracks, at least for me, mainly because my personal life is fast-paced at the moment. We had a table again, and our EventsBox, which is filled with goodness for demoing GNOME, made its way from Gothenburg, where I had actually carried it a couple of months ago.

Unfortunately though, we didn’t have t-shirts to sell. We do have boxes of t-shirts left, but they didn’t make it to FOSDEM :-\ So this FOSDEM didn’t generate nearly as much revenue as in the last years. It’s a pity that this year’s preparation was suboptimal. I hope we can improve next year. We were able to get rid of other people’s things, though ;-) Like last year, the SuSE people brought beer, but it was different this time. Better, even ;-)

Since there wasn’t as much action at our booth as in previous years, I could actually attend talks. I was able to see Sri and Pam talking about the Groupon incident that shook us up a couple of months ago. It was really nice to see her, because I wanted to shake hands and say thanks. She did an amazing job. Interestingly enough, she praised us, the GNOME Foundation’s Board of Directors, for working very professionally, much better than any client she has worked with. I am surprised, because I didn’t really have the feeling we were acting as promptly as we could. You know, we’re volunteers, after all. Also, we didn’t really prepare as much as we could have, which led to some things being done rather spontaneously. Anyway, I take that as a compliment and I guess that our work can’t be all too bad. The talk itself showed our side of things and, if you ask me, painted things in too bright a light. Sure, we were successful, but I attribute much of that success to network effects and a bit of luck. I don’t think we could replicate that success easily.

GNOME’s presence at FOSDEM was not too bad though, despite the lack of shirts. We had a packed beer event and more talks by GNOMEy people. The list includes Karen’s keynote, Benzo’s talk on SDAPDS, and Sri’s talk on GNOME’s impact on the Free Software ecosystem. You can find more here.

A talk that I did see was on improving the keysigning situation. I really mean to write about this some more. For now, let me just say that I am pleased to see people working on solutions. Solutions to a problem I’m not sure many people see, and one that I want to devote some time to explaining in a separate post. The gist is that contemporary “keysigning parties” come with non-negligible costs for both the organiser and the participant. The KeySigningPartyTools were presented, which intend to improve the way things are currently done. That’s already quite good, as it’ll reduce the number of errors people typically make when attending such a party.

However, I think that we need to rethink keysigning. Mostly because the state of the art is a massive SecOps fail. There are about a gazillion traps to be avoided and many things don’t actually make much sense. For example, I am unable to comprehend why we mutter base16 encoded versions of our 160 bit fingerprints to each other. Or why we must queue outside in the cold without being able to jump the queue if a single person is a bit slow, because then everybody will be terribly confused and the whole thing takes even longer. Or why we need to do everything on paper (well, I know the arguments: your computer can be hacked, be social, yadda yadda). I did actually give a talk on rethinking the keysigning problem (slides). It’s about a project that I have only briefly mentioned here and which I should really write about in the near future. GNOME Keysign intends to be less of a SecOps fail by letting the user scan a barcode and click “next”. The rest consists of operations known to the user, such as sending an email. No more manually comparing fingerprints. No more leaking data to the Internet about who you want to contact. No more MITM attacks against your OpenPGP installation. No more short key IDs that you use accidentally or because you mistyped a letter of the fingerprint. No more editing raw Perl in order to configure your keysigning tool. The talk went surprisingly well. I actually expected the people in the security devroom to be mad at someone like me for taking their Perl and their command line away. I received good questions and interesting feedback. I’ll follow up here with another post once real life lets me get to it.

Brussels itself is a very nice city. We were lucky, I guess, because we had some sunshine while we were walking around the city. I love the plethora of restaurants. And I like that Brussels is very open and cultural. Unfortunately, the makerspace was deserted when we arrived, but that was somewhat expected as it was daytime… I hope to return and check it out during the night ;-)

mrmcd14 in Darmstadt – DOM-based XSS

After last year’s fabulous event, I was really looking forward to this year’s mrmcd in Darmstadt, Germany. It outgrew last year’s edition and had probably around 250 to 300 people attending, maybe even more. In fact, 450 clients generated 423 GB of traffic during the conference, which lasted 60 hours or so. That’s around 2 MB/s. Megabytes. Per second. Every second. I find that quite impressive, especially as the outdoor area was very inviting to just hang around, grab a beer, and chat with your fellow hackers. So some people must have had an amazing demand for … updates…

This year’s theme was construction sites, as IT, and especially security, is a major, never-ending, and dangerous construction site. It was well done, with a lot of warning tape, people wearing helmets, hi-vis vests, some safety boots, etc. It couldn’t surpass last year’s aviation theme, but that one had set the bar extremely high. Anyway, the speakers received cool gadgets, like a tool set, a level, and other very well made items. The talks were opened by Unicorn who, as you can see, was wearing proper safety gear. We were given instructions as to how to behave in case of fire, flood, or lack of alcohol. A nifty feature of this event is the availability of carbohydrates in the form of various foodstuffs. It’s very cool to always be able to walk up to the buffet and refill your energy reserves.

The keynote was involuntarily given by dodger, who did not miss the opportunity to show us various construction sites, such as the Utah Data Center. Ultimately (now I am maybe over-interpreting things), it’s also hackers like us who make those possible. We usually decide for ourselves where to go and what to do. It was a good round-up of how we as a community work, or should work, with some political references, which I think is important, as I have the feeling that many people lose that focus too easily.

An interesting series of talks was given by Ange Albertini, who first presented the PDF file format. It was interesting to see what the format actually looks like. I already knew a little, but I had never really cared about the details. This was a very interesting and visually appealing talk, pretty much like his other presentations, which were again on file formats and on crypto.

My own talk was scheduled after the second night. I was positively surprised to see a half-filled room on a Sunday morning, after two nights of demanding partying… Anyway, I had an interested crowd which I think I could entertain. You can find my slides here. I was talking on DOM-based Cross-site Scripting. I presented a modified Chrome browser which is able to stop all identified DOM-based XSSs. I will need a separate post to cover the details. As a brief summary: both WebKit and V8 were modified to track taint, that is, to annotate strings with information about their source. Such a source could be document.URL or window.name. This taint information is evaluated whenever a string is about to be compiled to code. The simple approach of blocking every tainted string from being compiled is not taken, as it breaks the Web. Instead, the compiler notices which token is about to be generated and only allows generation if the string is untainted or the resulting token is of a data type (String, Boolean, Number). If the tainted token is, for example, a function call, an assignment, or pretty much anything else, it is replaced with an illegal token in order to abort compilation. There is a video of the talk here:

Speaking of videos, the video team is just plain amazing. It released videos of the event pretty much right after the talks finished, and in a quality that is hard to surpass. You should check out the videos of this conference, but also of others. You may find some gems that are well worth watching. Be aware though, some talks are also very much on the vapor-ware side of things… I guess I don’t need to point to specific talks as they should be easy to identify…

I am already looking forward to next year’s event. The bar has, again, been set high and I expect next year’s edition to raise it further. But I hope it will be able to stay small enough not to lose the cosy and comfy feeling. Maybe I shouldn’t blog about this fantastic event, so as not to generate too much attention ;-)

LibreOffice Con in Bern, Switzerland

I was invited to give a talk in Bern, Switzerland, for the LibreOffice Conference. The LibreOffice people are a nice crowd with diverse backgrounds. I talked to design people, coders doing rather low-level GL things, marketing folks, some people new to Free Software, and some old farts. It sounds like a lot of people, and with the community statistics in mind one is inclined to expect boatloads of people attending the conference. But it was a very cosy event, with fewer than a hundred people. I found that surprising, but not necessarily in a bad way.

I couldn’t make it to many talks, because the conference took place on week days. But judging from the schedule there were many interesting talks. The only thing I didn’t like about the schedule was the weird formatting. Seriously, who makes the track’s name more visible than the talk’s title..? Also grouping by room and not by time is a bit weird.

Anyway, my talk went well although it was in the first slot after the free beer party ;-) You can find my slides in the collection. I was talking about GNOME in general, but with a twist for those who migrate from proprietary software to Free Software. I hope I could convey that the GNOME desktop might be a viable alternative to proprietary products.

As this was a great, comfortable conference, I’m looking forward to visiting next year’s event.

Reverse sshuttle tunnel to connect to separate networks

I had to solve the split-horizon DNS problem in order to find my way out to the Internet. The complementary problem is how to access the internal network from the Internet. The scenario is, for example, your home network being protected by a very angry firewall that you don’t necessarily control. However, it’d be quite handy to be able to SSH into your machines at home, use the printer, or connect to the internal messaging system.

However, everything is pretty much firewalled such that no incoming connections are possible. Fortunately, outgoing connections to an SSH server are possible. With the RemoteForward option of OpenSSH we can create a reverse tunnel to connect to the separate network. All it requires is an SSH server that you can connect to from both sides, i.e. the Internet and the separate network, and some configuration, maybe like this on the machine within the network: ssh -o 'RemoteForward=localhost:23 localhost:22' root@remotehost and this for the internet machine:

Host dialin
    User toor
    HostName my.server
    Port 23

It then looks almost like this:

      
+---------------------------------------+                       
|Internet                               |                       
+---------------------------------------+
|  +-----------+                        |                       
|  |My machine | +------------+         |                       
|  +-----------+              |         |                       
|                             |         |                       
|                  +----------v--+      |                       
|                  |             |      |                       
|                  | SSH Server  |      |                       
|                  |             |      |                       
|                  +----------+--+      |                       
|                         ^   |         |                       
+------------------------ |   | --------+                       
                          |   |                                 
+------------------------ |   | --------+                       
|XXXXXXXX   Firewall  XX  |   | XXXXXXXX|                       
+------------------------ |   | --------+                       
                          |   |                                 
+------------------------ |   | --------+
| ACME.corp  10/8         |   |         |                       
+------------------------ |   | --------+
|                         |   |         |                       
|               +---------+---|------+  |                       
|   XMPP  <-+   |             |      |  |                       
|           |   |             |      |  |                       
|           |   |             v      |  |                       
|   Print <----------+ ssh -R        |  |                       
|           |   |      via corkscrew |  |                       
|           |   |                    |  |                       
|   VCS   <-+   +--------------------+  |                       
|               |  My machine        |  |                       
|               +--------------------+  |                       
|                                       |                       
+---------------------------------------+                       
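With that in place, reaching the machine inside the network is just an SSH hop away. A sketch, assuming port 23 on my.server is reachable from where you are, e.g. because you run this on my.server itself or have GatewayPorts enabled there:

# Log in to the machine inside the network through the reverse tunnel
ssh dialin

# Or forward a local port to an internal service
# (printer.acme.corp:631 is a made-up example)
ssh -L 6631:printer.acme.corp:631 dialin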

“But…” I hear you say. What about the firewall? How would we connect in the first place? Sure, we can use corkscrew, as we’ve learned. That will then look a bit more convoluted, maybe like this:


ssh -o ProxyCommand="corkscrew proxy.acme.corp 80 ssh.my.server 443" -o 'RemoteForward=localhost:23 localhost:22' root@lolcathost
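If you use this often, the same thing fits into ~/.ssh/config on the machine inside the network. A sketch using the host names from above (the alias dialout is made up):

Host dialout
    User root
    HostName ssh.my.server
    Port 443
    ProxyCommand corkscrew proxy.acme.corp 80 %h %p
    RemoteForward localhost:23 localhost:22

Then a plain ssh dialout establishes the reverse tunnel in one go.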

What? You don’t have corkscrew installed? Gnah, it’s dangerous to go alone, take this:

cd
# Fetch and build corkscrew into $HOME/corkscrew (the binary ends up in $HOME/corkscrew/bin)
wget --continue http://www.agroman.net/corkscrew/corkscrew-2.0.tar.gz
tar xvf corkscrew*.tar*
cd corkscrew*
./configure --prefix="$HOME/corkscrew" && make && make install

# Generate a password-less SSH key (the piped "y" answers the prompt to overwrite an existing key)
echo -e 'y\n' | ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa

# Print the public key with the restrictions prepended, ready to be pasted on the server
(echo -n 'command="read",no-X11-forwarding,no-agent-forwarding '; cat ~/.ssh/id_rsa.pub; echo; echo EOF)

As a bonus, you get an SSH public key which you can add on the server side, i.e. cat >> ~root/.ssh/authorized_keys <<EOF. Have you noticed? When logging in with that key, only the read command will be executed.
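On the server that could look like this (a sketch; the key line is whatever the command above printed, abbreviated here):

cat >> ~root/.ssh/authorized_keys <<EOF
command="read",no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA… user@host
EOF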

That’s already quite helpful. But how do you then connect? Via the SSH server, of course. But it’s a bit of a hassle to first connect there and then somehow port forward via SSH and all. Also, in order to resolve internal names, you’d have to first SSH into the separate machine to issue DNS queries. That’s all painful and not fun. How about an automatic pseudo VPN that allows you to use the internal nameserver and transparently connects you to your internal network?

Again, sshuttle to the rescue. With the same additions to /etc/NetworkManager/dnsmasq.d/corp-tld as before, namely

# resolve names for both .corp and .acme
server=/acme.corp/10.2.3.4
server=/corp.acme/10.3.4.5

you can make use of that lovely patch for dns hosts. In the following example, we have a few nameservers defined, just in case: 10.2.3.4, 10.3.4.5, 10.4.5.6, and 10.5.6.7. It also excludes some networks that you may not want to have transparently routed. A few of them are actually standard local networks and should probably never be routed. Finally, the internal network is defined. In the example, the networks are 10.1.2.3/8, 123.1.2.3/8, and 321.456.0.0/16.


sshuttle --dns-hosts 10.2.3.4,10.3.4.5,10.4.5.6,10.5.6.7 -vvr dialin 10.1.2.3/8 123.1.2.3/8 321.456.0.0/16 \
--exclude 10.0.2.1/24 \
--exclude 10.183.252.224/24 \
--exclude 127.0.1.1/8 \
--exclude 224.0.0.1/8 \
--exclude 232.0.0.1/8 \
--exclude 233.252.0.0/14 \
--exclude 234.0.0.0/8

This setup allows you to simply execute that command and enjoy all of your networks. Including name resolution.
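A quick sanity check could look like this (the hostname is a made-up internal example):

# Internal names should now resolve via the internal nameservers…
getent hosts vcs.acme.corp
# …and internal hosts should be reachable directly.
ssh vcs.acme.corp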

GUADEC 2014 in Strasbourg

This year, GUADEC took place in the lovely Strasbourg in France. It was really nice to attend the conference and to hang around with people who care about Free Software. In fact, the venue itself ran Debian which was nice to see :-)

Unfortunately, I wasn’t able to attend many of the great talks as I wasn’t available for all days. And when I was, I was busy meeting people. Although it felt smaller than the last GUADEC, I think I’ve never met so many people who I wanted to talk to.

The conference offered a two-track program, and interestingly many of the talks were looking towards the future of GNOME. John Stowers gave one of the more important talks, I think. He described the situation in academia: Python is very popular in the scientific computing space, he said. He was not satisfied with JavaScript being the new “default” language for GNOME applications, because the competitors are numerous and powerful. So we would compete at least against the Web and Qt, the former apparently being nice on other platforms such as Windows. GNOME’s bindings, however, are very good, he said. The technological foundation is excellent and we should leverage that potential and make people use it. However, GNOME’s story on Windows is not all too good, he said. GTK+ is becoming more and more irrelevant and even Wx appears to be as popular as Gtk. I also heard others claiming that the Windows situation is a problem. What I don’t understand is whether there are technical problems blocking easy-to-use ports. Apparently introspected GNOME libraries for Python on Windows exist, but I don’t understand why that doesn’t do the job.

Another talk related to the future of GNOME was given by Allan Day. In order for GNOME to be successful, amongst other things, a focus on quality must be established, he said. Various ways to improve the current release process were mentioned and the audience engaged in a vivid discussion. I don’t remember the details, so I hope this will be followed up and discussed more broadly in the GNOME community.

“Why do we do desktop?”, asked Matthew Garrett in his presentation. When I read that title for the first time, I thought the question of the desktop becoming irrelevant was being picked up. But that was not the case. Instead, he wanted GNOME to differentiate itself from the existing desktops which, as he claimed, continue to be simple multiplexers for running several programs (such as clocks) at the same time. In contrast to existing desktops, GNOME should become the secure desktop. Other desktops, he said, only exist in order to sell more things to the user, i.e. to tie the user to an existing ecosystem. An advantage of GNOME is that it is free from corporate control. Decisions are made very transparently, which enables it to focus on bringing privacy and security to the user, even if the user is not aligned with our core values and principles. As such, every user deserves as much privacy and security as we can possibly provide.

Many thanks to the local team for having organised the conference. I hope next year in Gothenburg will be at least as good.

Sponsored by GNOME!

Getting cheaper Bahn fares via external services

Imagine you want to go from some random place in Germany to the capital. Maybe because it is LinuxTag. We learned that you can try to apply international fares. In the case of Berlin, the Netzplan for Berlin indicates that several candidate train stations exist: Rzepin, Kostrzyn, or Szczecin. However, we’re not going to explore that now.

Instead, we have a look at other (third party) offers. Firstly, you can always get a Veranstaltungsticket. It’s a ticket rated at 99 EUR for a return trip. The flexible ticket costs 139 EUR and allows you to take any train, instead of fixed ones. Is that a good price? Let’s check the regular price for the route Karlsruhe ←→ Berlin.

The regular price is 142 EUR. Per leg. So the return trip would cost a whopping 284 EUR. Let’s assume you have a BahnCard 50. It costs 255 EUR, and before you get it, you’d better do the math on whether it’s worth it. Anyway, if you have that card, the price halves and we have to pay 71 EUR per leg, or 142 EUR for the return trip. That ticket is fully flexible, so any train can be taken. The equivalent Veranstaltungsticket costs 139 EUR, so that’s a saving of 3 EUR, or 2%.

Where to get that Veranstaltungsticket, you ask? Well, it turns out LinuxTag offered it itself. You call the Bahn’s phone number and state your “code”. In the LinuxTag case it was “STATION Berlin”. It probably restricts your destination options to Berlin. More general codes are easily found on the Web. Try “Finanz Informatik”, “TMF”, or “DOAG”.

I don’t expect you to be impressed by saving 2%. Another option is to use bus search engines, such as busliniensuche.de, fernbusse.de, or fromatob.de. You need to be a bit lucky though, as only a few of those tickets are available. However, it’s worth a shot, as they cost only 29 EUR.

That saves you 80% compared to the original 142 EUR, or 60% compared to the 71 EUR with the BC 50. That’s quite nice already. But we can do better. There is the “Fernweh-Ticket”, which is only available from LTUR. It costs 26 EUR and you need to poll their Web interface every so often to get a chance of finding one. I intended to write a crawler, but I have not gotten around to doing it yet…
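Such a crawler would not have to be fancy; here is a rough sketch of the polling idea (the URL and the match string are made up, LTUR’s real interface will differ):

# Poll the search page every ten minutes and shout when a ticket shows up
while sleep 600; do
    curl -s 'https://ltur.example/fernweh?from=Karlsruhe&to=Berlin' \
        | grep -q 'Fernweh-Ticket' && notify-send 'Fernweh-Ticket available'
done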

With the Fernweh-Ticket you save almost 82% compared to the regular price, or 63% compared to the BC 50 price. Sweet! Have I missed any offer worth mentioning?

Finding (more) cheap flights with Kayak

People who know me know about my weakness when it comes to travel itineraries. I spend hours and hours, sometimes days or even weeks, finding the optimal itinerary. So when I was looking for flights to GNOME.Asia Summit, I had an argument over the cheapest and most comfortable flight. When I was told that a cheaper and better flight existed that I hadn’t found, I refused to accept it, as I saw my pride endangered. As it turned out, there were more flights than I knew of.

Kayak seems to give you different results depending on what site you actually open. I was surprised to learn that.

Here is the evidence: (you probably have to open that with a wide monitor or scroll within the image)
Kayak per country

In the screenshot, you can see that on the left hand side kayak.de found 1085 flights. It also found the cheapest one, rated at 614 EUR. That flight, marked with the purple “1”, was also found by kayak.com and kayak.ie at different, albeit similar, prices. In any case, that flight has a very long layover. The next best flight kayak.de returned was rated at 687 EUR. The other two Kayaks have that flight, marked with the green “3”, at around 730 EUR, almost 7% more than on the German site. The German Kayak does not have the Etihad flight, marked with the blueish “2”, at 629 EUR, as the Irish one does! The American Kayak has that flight at 731 EUR, which is a whopping 17% difference. I haven’t actually checked whether the price differences persist when booking the flights. However, I couldn’t even have booked the Etihad flight if I hadn’t checked the other Kayak versions.

Lessons learnt: Checking one Kayak is not enough to find all good flights.

In addition to Kayak, I like to use the ITA Travel Matrix, as it allows you to greatly customise the queries. It also has a much saner interface than Kayak. The prices are not very accurate though, as far as I could tell from my experiments. It can give you an idea of which connections are cheap, so you can use that information for, e.g., Kayak. Or for that other Web site that I use: Skyscanner. It allows you to list flights for a whole month or for a whole country instead of a specific airport.

What tools do you use to check for flights?

GPN 2014 in Karlsruhe

The Gulasch Programmier Nacht (GPN) took place in Karlsruhe, Germany. The local chapter of the Chaos Computer Club organised the event, which apparently took place for the 14th time. So far I hadn’t been able to attend, but this time I made it.

It’s a 200 to 300 people event, focussed on hacking, making, and talks around that. It’s very cosy and somewhat similar to the mrmcds. Most of the talks were held in German, a few in English, but I think that could easily change if there is demand.

The conference was keynoted by tante, who talked about the political aspects of code and the responsibility every developer has. It was good to hear someone say that you do create reality for people with the software you write, and that you are indeed responsible for the view of the world the users of your software have. There were a few other interesting thoughts, and I think I agree with the results of his analysis to a great extent. But I think a few areas are not well covered. For example, he said that you limit people with your software. I don’t think that’s necessarily true. If you provide your users with enough freedoms, e.g. by choosing a Free Software license, then I don’t think his argument is valid anymore.

On the funnier side, a chemist taught us about chemistry based on the stories of Walter White. It was a funny talk with many interludes from the TV series. She explained what the people in the episodes were doing and how close that is to reality. It turns out it is quite close, and at least no stupid mistakes were made.

We also learned about Perl 6. If you think Perl is ugly, he said, then it’s not modern Perl you are thinking of. The new and shiny Perl 6 allows you to write short code that still looks nice, he said. He showed some features that make it easy to write command line tools. You can simply declare an argument to your main function and Perl will expose that to the user, e.g. by presenting a help screen. It will also detect the types provided and do some fancy magic, like checking whether the provided argument is an existing (or empty) file.

A very interesting talk was given on the Enigma, the German crypto machine. The speaker showed the machine that broke its encryption, which now stands in Bletchley Park, and told stories about the development and operation of that machine. Very interesting indeed. The talk was also well done on a technical level; the slides were really nicely made.

I was invited to give a talk on GNOME. As you can see in the video, my battery didn’t even last the full 90 minute slot I was assigned. Something is certainly wrong, either with this Linux thing or with my battery. Anyway, the talk itself went very well, and it was particularly well attended for that early slot. I was also positively surprised by the audience asking many questions, and while I specifically asked for flames, I didn’t get that many.