Reverse sshuttle tunnel to connect to separate networks

I had to solve the split horizon DNS problem in order to find my way out to the Internet. The complementary problem is how to access the internal network from the Internet. The scenario being, for example, your home network being protected by a very angry firewall that you don’t necessarily control. However, it’d be quite handy to be able to SSH into your machines at home, use the printer, or connect to the internal messaging system.

However, everything is pretty much firewalled such that no incoming connections are possible. Fortunately, outgoing connections to an SSH server are possible. With the RemoteForward option of OpenSSH we can create a reverse tunnel to connect to the separate network. All it requires is an SSH server that you can reach from both sides, i.e. from the Internet and from the separate network, and some configuration. On the machine within the network, maybe something like this:

ssh -o 'RemoteForward=localhost:23 localhost:22' root@remotehost

and this for the internet machine:

Host dialin
    User toor
    HostName my.server
    Port 23
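
Firewalls tend to drop idle connections and the tunnel will die eventually, so it helps to keep it alive and to respawn it when it breaks. A minimal sketch, assuming autossh is available on the machine within the network:

autossh -M 0 -N \
    -o 'ServerAliveInterval=30' -o 'ServerAliveCountMax=3' \
    -o 'RemoteForward=localhost:23 localhost:22' root@remotehost

From the internet machine, a plain ssh dialin then drops you onto the machine at home.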

It then looks almost like this:

+---------------------------------------+                       
|Internet                               |                       
+---------------------------------------+
|  +-----------+                        |                       
|  |My machine | +------------+         |                       
|  +-----------+              |         |                       
|                             |         |                       
|                  +----------v--+      |                       
|                  |             |      |                       
|                  | SSH Server  |      |                       
|                  |             |      |                       
|                  +----------+--+      |                       
|                         ^   |         |                       
+------------------------ |   | --------+                       
                          |   |                                 
+------------------------ |   | --------+                       
|XXXXXXXX   Firewall  XX  |   | XXXXXXXX|                       
+------------------------ |   | --------+                       
                          |   |                                 
+------------------------ |   | --------+
| ACME.corp  10/8         |   |         |                       
+------------------------ |   | --------+
|                         |   |         |                       
|               +---------+---|------+  |                       
|   XMPP  <-+   |             |      |  |                       
|           |   |             |      |  |                       
|           |   |             v      |  |                       
|   Print <----------+ ssh -R        |  |                       
|           |   |      via corkscrew |  |                       
|           |   |                    |  |                       
|   VCS   <-+   +--------------------+  |                       
|               |  My machine        |  |                       
|               +--------------------+  |                       
|                                       |                       
+---------------------------------------+                       

“But…” I hear you say. What about the firewall? How would we connect in the first place? Sure, we can use corkscrew, as we’ve learned. That will then look a bit more convoluted, maybe like this:


ssh -o ProxyCommand="corkscrew proxy.acme.corp 80 ssh.my.server 443" -o 'RemoteForward=localhost:23 localhost:22' root@lolcathost
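
If you need that more often, the whole thing can live in your ~/.ssh/config instead; a sketch, with the host alias dialout being made up:

Host dialout
    User root
    HostName ssh.my.server
    Port 443
    ProxyCommand corkscrew proxy.acme.corp 80 %h %p
    RemoteForward localhost:23 localhost:22

A plain ssh dialout then brings the tunnel up.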

What? You don’t have corkscrew installed? Gnah, it’s dangerous to go alone, take this:

cd
wget --continue http://www.agroman.net/corkscrew/corkscrew-2.0.tar.gz
tar xvf corkscrew*.tar*
cd corkscrew*
./configure --prefix=~/corkscrew; make; make install
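
Note that with that prefix the binary lands in ~/corkscrew/bin, which is probably not in your PATH. Either add it, or point the ProxyCommand at the full path:

export PATH="$HOME/corkscrew/bin:$PATH"
# or reference the binary directly:
ssh -o ProxyCommand="$HOME/corkscrew/bin/corkscrew proxy.acme.corp 80 ssh.my.server 443" root@lolcathost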

echo -e  'y\n'|ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa

(echo -n 'command="read",no-X11-forwarding,no-agent-forwarding '; cat ~/.ssh/id_rsa.pub ;echo;echo EOF)

As a bonus, you get an SSH public key which you can add on the server side, i.e. cat >> ~root/.ssh/authorized_keys <<EOF. Have you noticed? When logging in with that key, only the read command will be executed.
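
For the record, the whole thing on the server might look roughly like this, with the actual key material elided:

cat >> ~root/.ssh/authorized_keys <<EOF
command="read",no-X11-forwarding,no-agent-forwarding ssh-rsa AAAA... user@machine
EOF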

That’s already quite helpful. But how do you then connect? Via the SSH server, of course. But it’s a bit of a hassle to first connect there and then somehow port forward via SSH and all. Also, in order to resolve internal names, you’d have to first SSH into the separate machine to issue DNS queries. That’s all painful and not fun. How about an automatic pseudo VPN that allows you to use the internal nameserver and transparently connects you to your internal network?

Again, sshuttle to the rescue. With the same configuration applied to /etc/NetworkManager/dnsmasq.d/corp-tld as before, namely

# resolve names for both acme.corp and corp.acme
server=/acme.corp/10.2.3.4
server=/corp.acme/10.3.4.5

you can make use of that lovely --dns-hosts patch. In the following example, we have a few nameservers defined, just in case: 10.2.3.4, 10.3.4.5, 10.4.5.6, and 10.5.6.7. The command also excludes some networks that you may not want to have transparently routed. A few of them are actually standard local networks and should probably never be routed. Finally, the internal networks are defined. In the example, they are 10.1.2.3/8, 123.1.2.3/8, and 321.456.0.0/16.


sshuttle --dns-hosts 10.2.3.4,10.3.4.5,10.4.5.6,10.5.6.7 -vvr dialin 10.1.2.3/8 123.1.2.3/8 321.456.0.0/16 \
--exclude 10.0.2.1/24 \
--exclude 10.183.252.224/24 \
--exclude 127.0.1.1/8 \
--exclude 224.0.0.1/8 \
--exclude 232.0.0.1/8 \
--exclude 233.252.0.0/14 \
--exclude 234.0.0.0/8

This setup allows you to simply execute that command and enjoy all of your networks. Including name resolution.

GUADEC 2014 in Strasbourg

This year, GUADEC took place in lovely Strasbourg in France. It was really nice to attend the conference and to hang around with people who care about Free Software. In fact, the venue itself ran Debian, which was nice to see 🙂

Unfortunately, I wasn’t able to attend many of the great talks as I wasn’t available for all days. And when I was, I was busy meeting people. Although it felt smaller than the last GUADEC, I think I’ve never met so many people who I wanted to talk to.

The conference offered a two-track program, and interestingly, many of the talks dealt with the future of GNOME. John Stowers gave one of the more important talks, I think. He described the situation in academia. Python is very popular in the scientific computing space, he said. He was not satisfied with JavaScript being the new “default” language for GNOME applications, because the contestants are numerous and powerful. So we would compete at least against the Web and Qt, the former apparently being nice on other platforms such as Windows. GNOME’s bindings, however, were very good, he said. The technological foundation is excellent and we should leverage that potential and make people use it. However, GNOME’s story on Windows is not all too good, he said. GTK+ is becoming more and more irrelevant and even Wx appears to be as popular as GTK+. I also heard others claiming that the Windows situation is a problem. What I don’t understand is whether there are technical problems blocking easy-to-use ports. Apparently introspected GNOME libraries for Python on Windows exist, but I don’t understand why that doesn’t do the job.

Another talk related to the future of GNOME was given by Allan Day. In order for GNOME to be successful, amongst other things, a focus on quality must be established, he said. Various ways to improve the current release process were mentioned and the audience engaged in a vivid discussion. I don’t remember the details, so I hope this will be followed up and discussed more broadly in the GNOME community.

“Why do we do desktop?”, asked Matthew Garrett in his presentation. When I read that title for the first time, I thought the question of the desktop becoming irrelevant was being picked up. But that was not the case. Instead, he wanted GNOME to differentiate itself from the existing desktops which, as he claimed, continue to be simple multiplexors for running several programs (such as clocks) at the same time. In contrast to existing desktops, GNOME should become the secure desktop. Other desktops, he said, only exist in order to sell more things to the user, i.e. to tie the user to an existing ecosystem. An advantage of GNOME is that it is free from corporate control. Decisions are made very transparently, which enables it to focus on bringing privacy and security to the user, even if the user is not aligned with our core values and principles. As such, every user deserves as much privacy and security as we can possibly provide.

Many thanks to the local team for having organised the conference. I hope next year in Gothenburg will be at least as good.

Sponsored by GNOME!

Getting cheaper Bahn fares via external services

Imagine you want to go from some random place in Germany to the capital. Maybe because it is LinuxTag. We learned that you can try to apply international fares. In the case of Berlin, the Netzplan for Berlin indicates that several candidate train stations exist: Rzepin, Kostrzyn, or Szczecin. However, we’re not going to explore that now.

Instead, we have a look at other (third party) offers. Firstly, you can always get a Veranstaltungsticket. It’s a ticket rated at 99 EUR for a return trip. The flexible ticket costs 139 EUR and allows you to take any train, instead of fixed ones. Is that a good price? Let’s check the regular price for the route Karlsruhe ↔ Berlin.

The regular price is 142 EUR. Per leg. So the return trip would cost a whopping 284 EUR. Let’s assume you have a BahnCard 50. It costs 255 EUR and before you get it, you better do the math whether it’s worth it. Anyway, if you have that card, the price halves and we have to pay 71 EUR for a leg or 142 for the return trip. That ticket is fully flexible, so any train can be taken. The equivalent Veranstaltungsticket costs 139, so a saving of 3 EUR, or 2%.

Where do you get that Veranstaltungsticket, you ask? Well, it turns out LinuxTag offered it itself. You call the phone number of the Bahn and state your “code”. In the LinuxTag case it was “STATION Berlin”. It probably restricts your destination options to Berlin. More general codes are easily found on the Web. Try “Finanz Informatik”, “TMF”, or “DOAG”.

I don’t expect you to be impressed by saving 2%. Another option is to use bus search engines, such as busliniensuche.de, fernbusse.de, or fromatob.de. You need to be a bit lucky though, as only a few of those tickets are available. However, it’s worth a shot, as they cost only 29 EUR.

That saves you 80% compared to the original 142 EUR, or 60% compared to the 71 EUR with the BC 50. That’s quite nice already. But we can do better: there is the “Fernweh-Ticket”, which is only available from LTUR. It costs 26 EUR and you need to poll their Web interface every so often to get a chance of finding one. I intended to write a crawler, but I have not gotten around to doing it yet…
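
If you want to double check those percentages, the arithmetic is simple enough:

echo 'scale=1; (142-29)*100/142' | bc   # bus ticket vs. regular fare: 79.5
echo 'scale=1; (71-29)*100/71'  | bc    # bus ticket vs. BC 50 fare: 59.1
echo 'scale=1; (142-26)*100/142' | bc   # Fernweh-Ticket vs. regular fare: 81.6
echo 'scale=1; (71-26)*100/71'  | bc    # Fernweh-Ticket vs. BC 50 fare: 63.3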

With such a ticket you save almost 82% compared to the regular price, or 63% compared to the BC 50 price. Sweet! Have I missed any offer that’s worth mentioning?

Finding (more) cheap flights with Kayak

People who know me know about my weakness when it comes to travel itineraries. I spend hours and hours, sometimes days or even weeks, finding the optimal itinerary. As such, when I was looking for flights to GNOME.Asia Summit, I had an argument over the cheapest and most comfortable flight. When I was told that a cheaper and better flight existed that I hadn’t found, I refused to accept it, as I saw my pride endangered. As it turned out, there were more flights than I knew of.

Kayak seems to give you different results depending on what site you actually open. I was surprised to learn that.

Here is the evidence: (you probably have to open that with a wide monitor or scroll within the image)
Kayak per country

In the screenshot, you can see that on the left hand side kayak.de found 1085 flights. It also found the cheapest one, rated at 614 EUR. That flight, marked with the purple “1”, was also found by kayak.com and kayak.ie at different, albeit similar, prices. In any case, that flight has a very long layover. The next best flight kayak.de returned was rated at 687 EUR. The other two Kayaks have that flight, marked with the green “3”, at around 730 EUR, almost 7% more than on the German site. The German Kayak does not have the Etihad flight, marked with the blueish “2”, at 629 EUR as the Irish one does! The American Kayak has that flight at 731 EUR, which is a whopping 17% of a difference. I actually haven’t checked whether the price difference persists when actually booking the flights. However, I couldn’t even have booked the Etihad flight if I hadn’t checked other Kayak versions.

Lessons learnt: Checking one Kayak is not enough to find all good flights.

In addition to Kayak, I like to use the ITA Travel Matrix, as it allows you to greatly customise the queries. It also has a much saner interface than Kayak. The prices are not very accurate though, as far as I could tell from my experiments. It can give you an idea of which connections are cheap, so you can use that information for, e.g., Kayak. Or for that other Web site that I use: Skyscanner. It allows you to list flights for a whole month or for a whole country instead of a specific airport.

What tools do you use to check for flights?

GPN 2014 in Karlsruhe

The Gulash Programmier Nacht (GPN) took place in Karlsruhe, Germany. The local subsidiary of the Chaos Computer Club organised the event, which apparently took place for the 14th time. So far I hadn’t been able to attend, but this time I made it.

It’s a 200 to 300 people event focussed on hacking, making, and talks around that. It’s very cosy and somewhat similar to the mrmcds. Most of the talks were held in German, a few in English, but I think that could easily change if there were demand.

The conference was keynoted by tante, who talked about the political aspects of code and the responsibility every developer has. It was good to hear someone saying that you do create reality for people with the software you write and that you are indeed responsible for the view on the world the users of your software have. There were a few other interesting thoughts and I think I agree with the results of the analysis conducted to a great extent. But I think a few areas are not well covered. For example, he said that you limit the people with your software. I don’t think that’s necessarily true. If you provide your users with enough freedoms, i.e. by choosing a Free Software license, then I don’t think his argument is valid anymore.

On the funnier side, a chemist taught us about chemistry based on the stories of Walter White. It was a funny talk with many interludes from the TV series. She explained what the people in the episodes were doing and how close that is to reality. Turns out, it is quite close and at least no stupid mistakes were made.

We also learned about Perl 6. If you think Perl is ugly, he said, it’s not modern Perl. The new and shiny Perl 6 allows you to write short code while looking nice, he said. He showed some features that make it easy to write command line tools. You can simply declare an argument to your main function and Perl would expose that to the user, e.g. by presenting a help screen. It would also detect the types provided and do some magic fancy stuff like checking whether the provided argument is an existing (or empty) file.

A very interesting talk was given on the Enigma, the German crypto machine. The speaker showed the machine that broke the crypto, which now stands in Bletchley Park. He told stories about the development and operation of that machine. Very interesting indeed. The talk was also well done on a technical level; the slides were really polished.

I was invited to give a talk on GNOME. As you can see in the video, my battery didn’t even last the full 90-minute slot I was assigned. Something is certainly wrong, either with this Linux thing or with my battery. Anyway, the talk itself went very well, and it was particularly well attended for that early slot. I was also positively surprised by the audience asking many questions, and while I specifically asked for flames, I didn’t get that many.

GNOME.Asia Summit 2014

I was fortunate to be able to attend this year’s GNOME Asia Summit in Beijing, China.

It was co-hosted with FUDCon, the Fedora Users and Developers Conference. We had many attendees and the venue provided good facilities to talk about Free Software and the Free Desktop.

Fudcon Beijing Logo

The venue was Beihai University, somewhat north of Beijing. Being a Chinese university, it was massive in size. So we had loads of space, anyway 😉 The first day was reserved for trainings, and attendees could get their feet wet with things like developing a GNOME application. I took part myself and was happy to learn new GNOME APIs. I think the audience was interested and I hope we could inspire a few attendees to create their next application using GNOME technologies.

I was invited to keynote the conference. It was my first time doing such a thing, and I chose to give a talk that I would expect from a keynote, namely something that leads the conference and gives a vision and ideas about what to discuss during the conference. I talked about GNOME, GNOME 3, and GNOME 3.12. I tried to promote the ideas of GNOME and of Free Software. Unfortunately, I prepared for 60 minutes rather than 45, so I needed to cut off a good chunk of my talk :-/ Anyway, I am happy with how it went and especially happy with the fact that I wasn’t preaching to the choir only, as we had e.g. Fedora people in the audience, too.

We had RMS explaining Free Software to the audience and I think people enjoyed his talk. I certainly did, although I think it doesn’t address the problems we face nowadays. People have needs, as the discussion with the audience revealed. Apparently, people do want the functionality that Facebook or Skype offer. I think that addressing these needs with the warning “you must not fall for the convenience trap” is too short-sighted. We, the Free Software community, need to find better answers.

The event was full of talks and workshops on a diverse range of topics, which is a good thing for this conference. Of course, co-hosting with FUDCon helped that. The event is probably less technical than GUADEC and attendees can learn a lot from listening and talking to other people. I hope we can attract more Asian people to Free Software this way.

I am not entirely sure we need to have the same setup as with GUADEC, though. With GUADEC, we change the country every year. But Asia is about ten times larger than Europe; in fact, China alone is larger than all of Europe. It makes it somewhat hard for me to justify the moving around. We do need more presence in Asia, so trying to cover as much ground as possible might be an approach to attract more people. But I think we should investigate other approaches, such as focussing on an annual event in one location to actually create a strong Free Software location in Asia, before moving on. I wouldn’t know how to define “strong” right now, but we have absolutely no measure of success at the moment, anyway. That makes it a bit frustrating for me to pour money into Asia without actually seeing anything in return.

Anyway, Beijing is fun. We went to see the Great Wall and enjoyed the subway 😉

I would like to thank the organisers for having provided a great place for us, the Free Software community, to spread the word about the benefits of free computing. I would also like to thank the GNOME Foundation for enabling people like me to attend the event.

Sponsored by GNOME!

[Update: Here is the recording of the talk]

Installing OpenSuSE 13.1 on a Lenovo Ideapad S10-3t

I tried to install the most recent OpenSuSE image I received when I attended the OpenSuSE Conference. We were given pendrives with a live image, so I was interested in how smooth the OpenSuSE installation is compared to installing Fedora. The test machine is a three-to-four-year-old Lenovo Ideapad S10-3t, which I received from Intel a while ago. It’s certainly not the most powerful machine, but it’s got some dual-core CPU, a gigabyte of RAM, and a widescreen touch display.

The initial boot took a while. Apparently it changed something on the pendrive itself, to expand to its full size, or so. The installation was a bit painful and, at the end of the day, not successful. The first error I received was about my username being wrong. It told me that it must only contain letters, digits, and other things. It did not tell me what was actually wrong, and I doubt it could, because my username was very legit. I clicked away the dialogue and tried again. Then it worked…

When I was asked about my partitioning scheme I was moderately confused. The window didn’t present any “next” button. I clicked the only three available buttons to no avail, until it occurred to me that the machine has a widescreen display, so the vertical space was not sufficient to display everything. And yeah, after moving the window up, I could proceed.

While I was positively surprised to see that it offered full disk encryption, I wasn’t too impressed with the buttons. They were very tiny on the bottom of the screen, barely clickable.

Anyway, I found my way to proceed, but when attempting to install, YaST received “system error code -1014” and failed to partition the disk. The disk could be at fault, but I have reasons to believe it was not the disk’s fault:

Apparently something ate all the memory so that I couldn’t even start a terminal. I guess GNOME’s system requirements are higher than I expected.

LinuxTag 2014

I attended LinuxTag 2014 in Berlin. The event reinvented itself again, after it had lost attraction in recent years. We, GNOME, couldn’t even get enough volunteers to have a presence there. In Berlin. In perfect spring time. Other projects were struggling, too. This year, the organisers teamed up with re:publica and DroidCon. The venue changed and the new format of the event made it more attractive and made a good number of people attend.

The venue was “Die Station”, apparently used by those Web people for their Web conference for a couple of years now. It has much more character than the expo in the west where LinuxTag used to be located. But it’s also a bit too unpolished to host a proper conference. It’s very nice for the fair or expo part of LinuxTag, but not so nice for the conference part. The problem is the rooms. The infrastructure does not really allow for a proper conference feeling. Many plastic chairs made up the seating for the audience, and the rooms were right next to each other and not soundproof, so you could hear the talk from the other room. Some lecture halls were not really separated from the corridor, so people walking by made noise. As for the audio: except for the two big stages, it was really bad. I can’t really tell why, but I guess nobody actually tested whether the microphones would sound alright…

While I was grateful to be invited to give a talk on GNOME, I think someone in the organisation team didn’t like me 😉 The conference party started at 18:00 that day and my talk was scheduled for the last slot at 21:30. So I had to compete with the beer outside and with other talks in that slot which I wanted to see myself. At least I only had very motivated people in the audience 😉

The LinuxTag deserves its name, as it’s unusually kernel-focussed for a “normal user” event. As in-depth kernel sessions do not necessarily make sense for the everyday computer user, teaming up with DroidCon seemed promising. But the two events were too separated. I actually have not seen any schedule for the DroidCon, and I couldn’t find a joint schedule anywhere on the Internet nor in the venue itself. I don’t think it was bad intentions, though; it’s probably due to lacking resources to pull it off. A big thank you to the organisers. I think it’s an important event that connects many people, especially those from the industry with the community. Keep rocking, guys.

OpenSuSE Conference 14 in Dubrovnik, Croatia

I had the pleasure to be invited to the 2014 edition of the OpenSuSE Conference in Dubrovnik, Croatia. That event was flying under my radar for a long time and I am glad that I finally found out about it.

The first thing that impressed me was Dubrovnik. A lovely city with a walled old town. Even a (rather high) watch tower is still there. The city manages to create an inspiring atmosphere despite all the crowds moving through the narrow streets. It’s clean and controlled, yet busy and wild. There are so many small cafés, pubs, and restaurants, so many walls and corners, and so many friendly people. It’s an amazing place for an amazing conference.

The conference itself featured three tracks, which is quite busy already. But in addition, an unconference was held as a fourth track. The talks varied in topic, from community management to MySQL deployment and, of course, GNOME. I presented the latest and greatest GNOME 3.12. Despite the many tracks, the hallway track was the most interesting one. I didn’t know too many faces and, as it’s a GNU/Linux distribution conference which I had never attended before, many of the people I met had an interesting background that I was not familiar with. It was fun meeting new people who do exciting things. I hope to be able to stay in touch with many of them.

The conference was opened by the OpenSuSE Board. I actually don’t really know how OpenSuSE is governed and whether there is any legal entity behind it. But the Board seems to be somehow elected by the community and was there to announce a few changes to OpenSuSE. The title of the conference was “The Strength to Change”, which is indeed inviting to announce radical changes. For better or worse, both the number and severity of the changes announced were limited.

First and foremost, the handling of marketing materials is about to change. A new budget was put in place to allow for new materials to be generated in order to have a much bigger presence in the world. The materials are created by SuSE’s designers on staff, so they are considered to be rather high quality. To get more contributors, a formalised sponsorship program was introduced for people to attend conferences to present OpenSuSE. I don’t know what the difference to their Travel Support Program is, though. They will also reimburse locally produced marketing materials, which cannot be shipped around the world, to encourage more people to spread the word about OpenSuSE. A new process will be put in place which will enable local contributors to produce materials worth up to 200 USD from a budget of 2000 USD per quarter.

Something that will change, but not just yet, is the development and release model. Andrew Wafaa said that OpenSuSE was a victim of its own success. He mentioned the number of 7500 packages, which should probably indicate that it is a lot for them to handle. The current release cycle of 8 months is to be discussed. There is a strong question of whether something new shall be tried. Maybe annual releases, or even longer cycles to have more time for polish. Or maybe no regular releases at all, like rolling releases, or just taking as long as it takes. A decision is expected after the next release, which will happen as normal at the end of this year.

There was an agreement that OpenSuSE wants to be easy to contribute to. The purpose of this conference is to grow the participants’ knowledge and connections in and about the FLOSS environment.

The next talk was Protect your MySQL Server by Georgi Kodinov. Having been with MySQL since 2006, he talked about the security of MySQL in OpenSuSE. The first point he made was about the post-installation situation on OpenSuSE 13.1. It ships version 5.6.12, which is not too bad because it is only 5 updates behind what upstream released. Other distros are much further away from that, he said. Version 5.6 introduced cool security-related features like expiring passwords, password strength policies, and SHA256 support. He urged the audience to stop using passwords on the command line and look into the 5.6 documentation instead. He didn’t make it any more concrete, though, but mentioned “login paths” later. He also liked that the server was not turned on by default, which encourages you to use your self-made configuration instead of a default one. He also liked the fact that there is no pre-packaged database, as such a database would come with users that are not very well protected. Finally, he pointed out that he is pleased to see that no remote access is configured in the default configuration.

However, he did not like that OpenSuSE does not ship the latest version. The newest upstream version, 5.6.15, not only fixes around 25 security problems but also adds advanced AES functionality such as keys bigger than 128 bits. He also disliked that the mysql_secure_installation script is not run after installation. That script would put random passwords on the root account, disallow anonymous access, and do away with empty default passwords. Another regret he had was that mysql_config_editor is not packaged. That tool helps to get rid of passwords in scripts using MySQL by storing credentials in encrypted files. That way you have to protect only one file, not a lot of scripts. For some reason OpenSuSE activates the “federated plugin”, which is disabled upstream. Another weird plugin is the archive plugin which, he said, is not needed. In fact, it is not even available, so that the starting server throws errors… Also, authentication plugins which should only be used for testing are enabled by default, which can be a problem as it could allow someone to log in as any user. After he explained how this was a threat, the actual attack seemed to be a bit esoteric. Anyway, he concluded that you get a development installation when you install MySQL in OpenSuSE, rather than an installation suited for production use.

He went on to explain how to harden the installation afterwards. He proposed to run mysql_secure_installation, as it won’t cause any harm even if run multiple times. He also recommended making the server listen on specific interfaces only, instead of on all interfaces, which it does by default. He also wants you to generate SSL keys and certificates to allow for encrypted communication over the network.

Even more security can be achieved when turning off TCP access altogether, so you should do it if the environment allows it. If you do use TCP, he recommended to use SSL even if there is no PKI. An interesting advice was to use external authentication such as PAM or LDAP. He didn’t go into details how to actually do it, though. The most urgent tip he gave was to set secure_file_priv to a certain directory as it will restrict the paths MySQL can write to.

As for new changes coming with MySQL 5.7, which is the current development version accumulating changes over 18 months of development, he mentioned the option to log to syslog. Interestingly, a --ssl option on the client is basically a no-op (sic!) but will actually enforce SSL in the upcoming version. The new version also adds more crypto functions such as RANDOM_BYTES(), which interface with the SSL libraries. He concluded his talk with a quote: “Security is like plastic surgery. The more you invest, the prettier it gets.”

Michael Meeks talked next on the history of the Document Foundation. He explained how it used to be in the StarOffice days. Apparently, they were very process-driven and believed that more processes with even more steps would help the quality of the software they produced. He didn’t really share that view. The mindset was, he said, that people would go into a shop and buy a box with the software. He sees that behaviour declining steeply. So then hackers came and branched StarOffice into OpenOffice, which had a much shorter release cycle than the original product and incorporated fixes and features of the future version. Everyone shipped that instead of the original thing. The 18-month cycle of the original product was a bit of a long thing in the free software world, he said. He quoted someone saying “StarDivision: a problem for every solution.”

He went on to rant about Contributor License Agreements and showed a graph of Fedora contributions which spiked after they dropped the requirement of a CLA. The graph was impressive, but it really showed the number of active accounts in an unspecified system. He claimed that by now they have around the same magnitude of contributions as the kernel does and that they set a new record with 3000 commits in February 2014. The dominating body of contributors is volunteers, which is quite different compared to the kernel. He talked about various aspects of the Document Foundation, like the governance or the fact that they want to make it as easy as possible to contribute to the project.

The next talk was given on bcache by Oliver Neukum. Bcache is a disk cache, probably primarily used to cache rotational disks with SSDs. He first talked about the principles of caching, like write-back, write-through, and write-around. That is, respectively: the cache buffers writes and flushes them to the backing store later; writes go to both the cache and the backing store at the same time; or writes go to the backing store only, bypassing the cache. Subsequently, he explained how to actually use bcache. A demo given later revealed that it’s not foolproof and that you do need to get your commands straight in order to make it work properly. As to when to actually use bcache, he explained that SSDs are cool as they are fast, but they are small and expensive. Fast, he continued, can either mean throughput or latency. SSDs are good with regards to latency, but not necessarily with throughput. Another, probably similar option to bcache is dm-cache, but it does not support safe writes; I guess that means you cannot use it if you have the requirement of a write-through or write-around scenario. A different alternative is EnhanceIO, written originally by Facebook, which keeps a hash structure of the data to be cached in RAM. Bcache, on the other hand, stores a b-tree on the SSD instead of in RAM. It works on block devices, so anything goes: tape drives, RAIDs, … It places a special superblock to indicate that the partition is a bcache partition. A second block is created to indicate what the backing store is. Currently, the kernel does not auto-detect these caches, hence making it work with the root filesystem is a bit tricky. He did a proper evaluation of the effects of the cache, so his statements were well founded, which I liked a lot.
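
For the record, the basic flow of setting bcache up looks roughly like this. This is a sketch only; the device names /dev/sdb for the backing disk and /dev/sdc for the SSD are made up, so double check them before you overwrite anything:

make-bcache -B /dev/sdb                 # format the backing device
make-bcache -C /dev/sdc                 # format the caching device (the SSD)
# register the devices if they do not show up automatically:
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/sdc > /sys/fs/bcache/register
# attach the cache set to the backing device using the cache set UUID:
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0                  # then use /dev/bcache0 like any other disk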

It was announced that next year’s conference, oSC15, will take place in The Hague, Netherlands; the city we once had our GUADEC in. If you have some time in spring, probably in April, consider going.

Split DNS Resolution

For the beginning of the year, I couldn’t make resolutions. The DNS server that the DHCP server gave me only resolves names from the local domain, i.e. acme.corp. Every connection to the outside world needs to go through a corporate HTTP proxy which then does the name resolution itself.

But that only works as long as the HTTP proxy is happy, i.e. with the destination port. It wouldn’t allow me to CONNECT to any other port than 80 (HTTP) or 443 (HTTPS). The proxy is thus almost useless for me. No IRC, no XMPP, no IMAP(s), no SSH, etc.

Fortunately, I have an SSH server running on port 443 and using the HTTP proxy to CONNECT to that machine works easily, i.e. using corkscrew with the following in ~/.ssh/config:

Host myserver443
  User remote-user-name
  HostName ssh443.example.com
  ProxyCommand corkscrew proxy.acme.corp 8080 %h %p
  Port 443

And with that SSH connection, I could easily tunnel TCP packets using the DynamicForward switch. That would give me a SOCKS proxy and I would only need to configure my programs or use tsocks. But as I need a destination IP address in order to assemble TCP packets, I need to have DNS working first. While a SOCKS proxy could do the name resolution, the one provided by OpenSSH cannot (correct me if I am wrong). Obviously, I need to somehow get onto the Internet in order to resolve names, as I don’t have any local nameserver that would do that for me. So I need to tunnel. Somehow.
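
For completeness, such a SOCKS proxy is a one-liner with the config above; port 1080 and the IRC client are arbitrary examples:

ssh -N -D 1080 myserver443
# point applications at the SOCKS proxy on localhost:1080, or wrap
# them with tsocks after setting server/port in /etc/tsocks.conf:
tsocks irssi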

Most of the problem is solved by using sshuttle, which is half a VPN, half a tunnelling solution. It recognises your local machine sending packets (using iptables), does its magic to transport these to a remote host under your control (using a small Python program to get the packets from iptables), and sends the packets from that remote host (using a small daemon on the server side). It also collects and forwards the answers. Your local machine doesn’t really notice that it is not connecting itself.

As the name implies it uses SSH as a transport for the packets and it works very well, not only for TCP, but also for UDP packets you send to the nameserver of your choice. So external name resolution is done, as well as sending TCP packets to any host. You may now think that the quest is solved. But as sshuttle intercepts *all* queries to the (local) nameserver, you don’t use that (local nameserver) anymore and internal name resolution thus breaks (because the external nameserver cannot resolve printing.acme.corp). That’s almost what I wanted. Except that I also want to resolve the local domain names…

To clarify my setup, marvel at this awesome diagram of the scenario. You can see my machine being inside the corporate network with the proxy being the only way out. sshuttle intercepts every packet sent to the outside world, including DNS traffic. The local nameserver is not used as it cannot resolve external names. Local names, such as printing.acme.corp, can thus not be resolved.



  +-----------------------------------------+
  | ACME.corp                               |
  |-----------------------------------------|
  |                                         |
  |                                         |
  | +----------------+        +-----------+ |
  | |My machine      |        | DNS Server| |
  | |----------------|        +-----------+ |
  | |                |                      |
  | |sshuttle        |        +-----------+ |
  | |       corkscrew+------->| HTTP Proxy| |
  | +----------------+        +-----+-----+ |
  |                                 |       |
  +---------------------------------|-------+
                                    |
  +-----------------------------------------+
  | Internet                        |       |
  |-----------------------------------------|
  |                                 v       |
  |       +----------+        +----------+  |
  |       |DNS Server|<-------+SSH Server|  |
  |       +----------+        +----------+  |
  |                            +  +  +  +   |
  |                            |  |  |  |   |
  |                            v  v  v  v   |
  +-----------------------------------------+

To solve that problem I need to selectively ask either the internal or the external nameserver and force sshuttle to not block traffic to the internal one. Fortunately, there is a patch for sshuttle to specify the IP address of the (external) nameserver. It lets traffic designated for your local nameserver pass and only intercept packets for your external nameserver. Awesome.

But how do you make the system select the nameserver to be used? Just entering two nameservers in /etc/resolv.conf doesn’t work, of course. One solution to that problem is dnsmasq, which NetworkManager fortunately runs anyway. A single line added to the configuration in /etc/NetworkManager/dnsmasq.d/corp-tld makes it aware of a nameserver dedicated to a domain:

server=/acme.corp/10.1.1.2

With that setup, the system uses a public DNS server as its main nameserver, dnsmasq resolves local domain names, and sshuttle intercepts only the requests to the public nameserver. That solves my problem and enables me to work again.

~/sshuttle/sshuttle --dns-hosts 8.8.8.8 -vvr myserver443 0/0 \
	--exclude 10.0.2.15/8 \
	--exclude 127.0.1.1/8 \
	--exclude 224.0.0.1/8 \
	--exclude 232.0.0.1/8 \
	--exclude 233.252.0.0/14 \
	--exclude 234.0.0.0/8
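
Once sshuttle is up, a quick sanity check with dig should show both halves working, with names as in the example above:

dig +short printing.acme.corp   # resolved by the corporate nameserver via dnsmasq
dig +short gnome.org            # resolved by 8.8.8.8, tunnelled through sshuttle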

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.