I attended LinuxTag 2014 in Berlin. The event reinvented itself again after it had lost attraction in recent years. We, GNOME, couldn’t even get enough volunteers to have a presence there. In Berlin. In perfect spring time. Other projects were struggling, too. For this year, LinuxTag teamed up with re:publica and droidcon. The venue changed, and the new format made the event attractive enough to draw a good number of people.
The venue was “Die Station”, apparently used by those Web people for their Web conference for a couple of years now. It has much more character than the expo in the west where LinuxTag used to be located. But it’s also a bit too unpolished for a proper conference. It’s very nice for the fair or expo part of LinuxTag, but not so nice for the conference part. The problem is the rooms: the infrastructure does not really allow for a conference feeling. The audience sat on plastic chairs, and the rooms were right next to each other and not soundproof, so you could hear the talk from the neighbouring room. Some lecture halls were not really separated from the corridor, so people walking by made noise. Speaking of noise: except for the two big stages, the audio was really bad. I can’t really tell why, but I guess nobody actually tested whether the microphones sounded alright…
While I was grateful to be invited to give a talk on GNOME, I think someone on the organisation team didn’t like me 😉 The conference party started at 18:00 that day and my talk was scheduled for the last slot at 21:30. So I had to compete with the beer outside and with other talks in the slot that I wanted to see myself. At least the audience consisted only of very motivated people 😉
LinuxTag deserves its name, as it is unusually kernel-focussed for a “normal user” event. As in-depth kernel sessions do not necessarily make sense for the everyday computer user, teaming up with droidcon seemed promising. But the two events were too separated. I have not actually seen any schedule for droidcon, and I couldn’t find a joint schedule anywhere, neither on the Internet nor in the venue itself. I don’t think it’s bad intentions, though; it’s probably due to a lack of resources. A big thank you to the organisers. I think it’s an important event that connects many people, especially those from the industry with the community. Keep rocking, guys.
The first thing that impressed me was Dubrovnik. A lovely city with a walled old town. Even a (rather high) watchtower is still there. The city manages to create an inspiring atmosphere despite all the crowds moving through the narrow streets. It’s clean and controlled, yet busy and wild. There are so many small cafés, pubs, and restaurants, so many walls and corners, and so many friendly people. It’s an amazing place for an amazing conference.
The conference itself featured three tracks, which is quite busy already. But in addition, an unconference was held as a fourth track. The talks varied in topic, from community management to MySQL deployment and, of course, GNOME. I presented the latest and greatest GNOME 3.12. Despite the many tracks, the hallway track was the most interesting one. I didn’t know too many faces, and as it was a GNU/Linux distribution conference, which I had never attended before, many of the people I met had interesting backgrounds that I was not familiar with. It was fun meeting new people who do exciting things. I hope to be able to stay in touch with many of them.
The conference was opened by the OpenSuSE Board. I actually don’t really know how OpenSuSE is governed and whether there is any legal entity behind it. But the Board seems to be somehow elected by the community, and it was to announce a few changes. The title of the conference was “The Strength to Change”, which indeed invites announcing radical changes. For better or worse, both the number and the severity of the changes announced were limited.

First and foremost, the handling of marketing materials is about to change. A new budget was put in place to allow for new materials to be produced, to have a much bigger presence in the world. The materials were created by SuSE’s designers on staff, so they are considered to be rather high quality. To attract more contributors, a formalised sponsorship programme was introduced for people attending conferences to present OpenSuSE; I don’t know how it differs from their Travel Support Program, though. They will also reimburse locally produced marketing materials, which cannot be shipped around the world, to encourage more people to spread the word about OpenSuSE. A new process will enable local contributors to produce materials worth up to 200 USD from a budget of 2000 USD per quarter.

Something that will change, but not just yet, is the development and release model. Andrew Wafaa said that OpenSuSE was a victim of its own success. He mentioned the number of 7500 packages, which should probably indicate that it is a lot for them to handle. The current release cycle of eight months is up for discussion, and there is a strong question of whether something new shall be tried: annual releases, or even longer cycles to have more time for polish, or no regular releases at all, like rolling releases, or just taking as long as it takes. A decision is expected after the next release, which will happen as normal at the end of this year.

There was an agreement that OpenSuSE wants to be easy to contribute to. The purpose of this conference is to grow the participants’ knowledge of, and connections in, the FLOSS environment.
The next talk was “Protect your MySQL Server” by Georgi Kodinov. Having been with MySQL since 2006, he talked about the security of MySQL on OpenSuSE. The first point he made was about the post-installation situation on OpenSuSE 13.1. It ships version 5.6.12, which is not too bad, because it is only five updates behind what upstream released. Other distros are much further behind, he said. Version 5.6 introduced cool security-related features like expiring passwords, password strength policies, and SHA256 support. He urged the audience to stop using passwords on the command line and to look into the 5.6 documentation instead. He didn’t make that any more concrete, but mentioned “login paths” later. He liked that the server is not turned on by default, which encourages you to use your self-made configuration instead of a default one. He also liked that there is no pre-packaged database, which would otherwise configure users that are not very well protected. Finally, he was pleased to see that no remote access is configured in the default configuration. However, he did not like that OpenSuSE does not ship the latest version. The newest upstream version, 5.6.15, not only fixes around 25 security problems but also adds advanced AES functionality such as keys bigger than 128 bits. He also disliked that the mysql_secure_installation script is not run after installation. That script sets a random password on the root account, disallows anonymous access, and does away with empty default passwords. Another regret of his was that mysql_config_editor is not packaged. That tool helps to get rid of passwords in scripts using MySQL by storing credentials in encrypted files, so you only have to protect one file, not a lot of scripts. For some reason OpenSuSE activates the “federated plugin”, which is disabled upstream.
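He didn’t demo the tool, but it is simple enough; a minimal sketch of how login paths replace command-line passwords (the login-path name “local” is my choice, not his):

mysql_config_editor set --login-path=local --host=localhost --user=root --password
# credentials now live encrypted in ~/.mylogin.cnf; later invocations need no password on the command line:
mysql --login-path=local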
Another weird plugin is the archive plugin which, he said, is not needed. In fact, it is not even available, so the server throws errors on startup… Also, authentication plugins which should only be used for testing are enabled by default, which can be a problem as it could allow someone to log in as any user. Although he explained how this was a threat, the actual attack seemed a bit esoteric to me. Anyway, he concluded that when you install MySQL on OpenSuSE, you get a development installation rather than one suited for production use.
He went on to explain how to harden the installation afterwards. He proposed running mysql_secure_installation, as it doesn’t cause any harm even if run multiple times. He also recommended making the server listen on specific interfaces only, instead of on all interfaces, which it does by default. And he wants you to generate SSL keys and certificates to allow for encrypted communication over the network.
Even more security can be achieved by turning off TCP access altogether, so you should do that if your environment allows it. If you do use TCP, he recommended using SSL even if there is no PKI. An interesting piece of advice was to use external authentication such as PAM or LDAP, although he didn’t go into detail on how to actually do that. The most urgent tip he gave was to set secure_file_priv to a certain directory, as that restricts the paths MySQL can write to.
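Pulling his advice together, a minimal my.cnf sketch might look like this; the interface address and file paths are placeholders of my choosing, not values he gave:

[mysqld]
bind-address = 10.1.1.5                    # listen on one specific interface only
# skip-networking                          # or turn off TCP access altogether where possible
secure_file_priv = /var/lib/mysql-files    # restrict the paths MySQL may read from and write to
ssl-ca = /etc/mysql/ca.pem                 # enable SSL for connections over the network
ssl-cert = /etc/mysql/server-cert.pem
ssl-key = /etc/mysql/server-key.pem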
As for the changes coming with MySQL 5.7, the current development version accumulating about 18 months of work, he mentioned the option to log to syslog. Interestingly, a --ssl option on the client is basically a no-op (sic!) today, but will actually enforce SSL in the upcoming version. The new version also adds more crypto functions, such as RANDOM_BYTES(), which interface with the SSL libraries. He concluded his talk with a quote: “Security is like plastic surgery. The more you invest, the prettier it gets.”
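For illustration, once you run a 5.7 server, the new function can be poked at from the shell; a one-liner assuming the login path set up above:

mysql --login-path=local -e 'SELECT HEX(RANDOM_BYTES(16));'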
Michael Meeks talked next on the history of the Document Foundation. He explained how things used to be in the StarOffice days. Apparently they were very process driven and believed that more processes with even more steps would help the quality of the software they produced. He didn’t really share that view. The mindset, he said, was that people would go into a shop and buy a box with the software; he sees that behaviour declining steeply. So hackers came and branched StarOffice into OpenOffice, which had a much shorter release cycle than the original product and incorporated fixes and features of the future version. Everyone shipped that instead of the original thing. The original product’s 18-month cycle was a bit of a long thing in the free software world, he said. He quoted someone saying: “StarDivision: a problem for every solution.”
He went on to rant about Contributor License Agreements and showed a graph of Fedora contributions which spiked when they dropped the requirement of a CLA. The graph was impressive, but it really showed the number of active accounts in an unspecified system. He claimed that by now they have around the same magnitude of contributions as the kernel does, and that they set a new record with 3000 commits in February 2014. The body of contributors is dominated by volunteers, which is quite different from the kernel. He also talked about various aspects of the Document Foundation, like its governance and the fact that they want to make contributing to the project as easy as possible.
The next talk was given on bcache by Oliver Neukum. Bcache is a disk cache, probably primarily used to cache rotational disks with SSDs. He first talked about the principles of caching: write-back, write-through, and write-around. That is, respectively: the cache is responsible for writing to the backing store; the cache places the data to be written in its buffer; or the write goes to the backing storage, but not into the cache. Subsequently, he explained how to actually use bcache. A demo given later revealed that it’s not foolproof and that you do need to get your commands straight to make it work properly.

As to when to actually use bcache, he explained that SSDs are cool as they are fast, but they are small and expensive. Fast, he continued, can mean either throughput or latency; SSDs are good with regard to latency, but not necessarily with throughput. A probably similar option to bcache is dm-cache, but it does not support safe writes. I guess you cannot use it if you require a write-through or write-around scenario. A different alternative is EnhanceIO, originally written by Facebook, which keeps a hash structure of the data to be cached in RAM. Bcache, on the other hand, stores a b-tree on the SSD instead of in RAM. It works on block devices, so anything goes: tape drives, RAIDs, … It places a special superblock to indicate that the partition is a bcache partition, and a second block is created to indicate what the backing store is. Currently, the kernel does not auto-detect these caches, hence making it work with the root filesystem is a bit tricky. He did a proper evaluation of the effects of the cache, so his statements were well founded, which I liked a lot.
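He showed the setup on stage; from memory and the bcache documentation, the basic procedure looks roughly like this (device names are placeholders, and your mileage may vary):

make-bcache -B /dev/sdb1                  # format the rotational backing device
make-bcache -C /dev/sdc1                  # format the SSD as the cache device
echo /dev/sdb1 > /sys/fs/bcache/register  # register both devices with the kernel
echo /dev/sdc1 > /sys/fs/bcache/register
bcache-super-show /dev/sdc1               # shows the cache set UUID needed below
echo <cset-uuid> > /sys/block/bcache0/bcache/attach  # attach the cache to the backing device
mkfs.ext4 /dev/bcache0                    # then use /dev/bcache0 like any block device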
It was announced that next year’s conference, oSC15, will be in The Hague, Netherlands: the city where we once had our GUADEC. If you have some time in spring, probably in April, consider going.
At the beginning of the year, I couldn’t make resolutions. The DNS server that the DHCP server handed me only resolves names from the local domain, i.e. acme.corp. Every connection to the outside world needs to go through a corporate HTTP proxy, which then does the name resolution itself.
But that only works as long as the HTTP proxy is happy, i.e. with the destination port. It wouldn’t allow me to CONNECT to any port other than 80 (HTTP) or 443 (HTTPS). The proxy is thus almost useless for me. No IRC, no XMPP, no IMAP(S), no SSH, etc.
Fortunately, I have an SSH server running on port 443 and using the HTTP proxy to CONNECT to that machine works easily, i.e. using corkscrew with the following in ~/.ssh/config:
Host myserver443
    User remote-user-name
    HostName ssh443.example.com
    ProxyCommand corkscrew proxy.acme.corp 8080 %h %p
    Port 443
And with that SSH connection, I can easily tunnel TCP packets using the DynamicForward switch, which gives me a SOCKS proxy; I then only need to configure my programs to use it, or use tsocks. But in order to assemble TCP packets, I need a destination IP address, so DNS has to work first. While a SOCKS proxy could do the resolution, the one provided by OpenSSH cannot (correct me if I am wrong). Obviously, I need to somehow get onto the Internet to resolve names, as I don’t have any local nameserver that would do that for me. So I need to tunnel. Somehow.
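For reference, the SOCKS proxy part is a single additional line in the Host block above; the port number is arbitrary:

DynamicForward 1080

or, equivalently, ssh -D 1080 myserver443 on the command line.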
Most of the problem is solved by using sshuttle, which is half a VPN, half a tunnelling solution. It recognises your local machine sending packets (using iptables), does its magic to transport these to a remote host under your control (using a small Python program to pick the packets up from iptables), and sends the packets out from that remote host (using a small daemon on the server side). It also collects and forwards the answers. Your local machine doesn’t really notice that it is not connecting directly.
As the name implies, it uses SSH as a transport for the packets, and it works very well, not only for TCP, but also for the UDP packets you send to the nameserver of your choice. So external name resolution works, as does sending TCP packets to any host. You may now think the quest is over. But as sshuttle intercepts *all* queries to the nameserver, the local nameserver is no longer used, and internal name resolution breaks (because the external nameserver cannot resolve printing.acme.corp). That’s almost what I wanted. Except that I also want to resolve the local domain names…
To clarify my setup, marvel at this awesome diagram of the scenario. You can see my machine being inside the corporate network with the proxy being the only way out. sshuttle intercepts every packet sent to the outside world, including DNS traffic. The local nameserver is not used as it cannot resolve external names. Local names, such as printing.acme.corp, can thus not be resolved.
To solve that problem I need to selectively ask either the internal or the external nameserver, and force sshuttle not to block traffic to the internal one. Fortunately, there is a patch for sshuttle that lets you specify the IP address of the (external) nameserver. It lets traffic destined for your local nameserver pass and only intercepts packets for your external nameserver. Awesome.
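A stock invocation, tunnelling all IPv4 traffic plus DNS through the SSH host configured above, looks like this (I leave out the patch’s option for picking the external nameserver, as I don’t recall its exact name):

sshuttle --dns -r myserver443 0/0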
But how to make the system select the nameserver to use? Just entering two nameservers in /etc/resolv.conf doesn’t work, of course. One solution is dnsmasq, which, fortunately, NetworkManager runs anyway. A single line added to the configuration in /etc/NetworkManager/dnsmasq.d/corp-tld makes it aware of a nameserver dedicated to a domain:
server=/acme.corp/10.1.1.2
With that setup, the system uses a public DNS server as its main nameserver, dnsmasq resolves local domain names, and sshuttle intercepts only the requests to the public nameserver. That solves my problem and enables me to work again.
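A quick way to verify the split resolution is to query one name of each kind; the first should be answered by 10.1.1.2, the second by the public nameserver through the tunnel:

dig +short printing.acme.corp
dig +short example.com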
I am at the LGM in Leipzig. The venue, the University of Leipzig, is amazing. The infrastructure is optimal and the rooms are spacious enough. The organisers have also made sure that the weather is great 😉
I’ve never attended an LGM and I regret not having visited one earlier. It’s a cosy event with around 200 people from various parts of the world and from various projects. I am glad to have met a few great minds that I could exchange ideas with.
One of the highlights for me, so far, was the open movie night, which showed movies either created using Free Software or at least licensed freely. Everybody knows Tears of Steel or Big Buck Bunny. I was surprised by the long list of movies I didn’t know. Many of them were really good! So good that I can’t even pick a favourite. My personal top three starts, however, with Mortys, which I consider to be a good mix of drama and comedy:
The Mac n Cheese movies are definitely more on the action side of things, and well worth watching:
These movies are, as far as I am aware, licensed under CC-BY-NC-ND. So very restrictive. Much more liberally licensed videos are the Caminandes videos.
I saw many more great videos, but I’ll just link to them here: Parigot, which I’d say is an action comedy. The Forest is definitely worth watching, and as artsy as the Palmipedarium. Camanchango is also interesting, more dramaesque, and well animated. Happy Hour has some interesting effects and is more on the humorous side.
There are so many great free movies. Is there any database like web site that lists and ranks free movies?
If you know, or if you just want to talk about GNOME, come and find me at LGM 🙂
For our booth at FOSDEM we had some hardware to show off the latest and greatest GNOME. I brought the tablet I got from the Desktop Summit. To prepare it, I installed Fedora 20, which comes with a nice and shiny installer. I found a few issues and glitches and will present them in the course of this post.
Some inconsistencies exist. One of them is that the installer shows the “Next” button on the bottom right, which is what I’d expect. But sometimes it also asks the user to press a button on the upper left. I didn’t even remotely expect having to press a button in the “back” area of the screen in order to continue the installation.
It was very nice, though, that it seems to offer installation alongside an existing operating system *and* full disk encryption. The Ubuntu installer can nicely install you a fully encrypted system, but only if you give Ubuntu the whole disk. The Fedora installer seems to manage both.
As seems to be normal nowadays, installation starts even though you haven’t provided all the necessary information yet. That is very convenient and makes for a much faster installation experience.
Another consistency issue is the user dialogue. I can actually guess what the thinking was when designing these menus: you have this “overview” screen as seen above, and then you “dive” into the sub-menus. I expected a more linear sequence of menus instead. Why would I need to return to the overview menu at all? I claim that it is much easier to just continue than to go back and forth… Anyway, a real bug was visible: mnemonics are not formatted properly.
While we are at it: the user dialogue seemed to force me to enter a strong password. I just wanted to install a system for a demo machine. Probably not the usual use case, but annoying enough if it doesn’t work smoothly. I think I found out later that I needed to press the “Next” button twice (it is labelled “Done” and placed in the area of the screen where I’d expect “back” buttons to be).
It turns out the same thing happened with the root password, which really annoyed me, especially as the soft keyboard doesn’t really allow for convenient input of complicated characters.
But then I discovered something: at the very bottom there was something weirdly coloured. It was a notification for the current menu. Why on earth complain at the very bottom about the password I’ve entered in a widget at the very top? That was surprising and confusing. Plus, the warning itself was not very visible because the on-screen keyboard obstructed the view. So I guess it’d be smarter not to place the warnings there.
Anyway, I was pleasantly surprised by how smooth the installation experience was. It could, of course, have been better, but all in all it was quite good. I finished in less than half an hour. Too bad I didn’t notice that neither Eye of GNOME nor Epiphany were installed by default.
It is this time of the year again *yay*. The biggest and greatest Free Software conference took place in Brussels, Belgium. It’s good to see all those interested and passionate people who care about Free Software. I hope that the (intellectual) gravity of the people gets more people interested and strengthens our communities. In fact, I feel it was one of the better FOSDEMs so far. Maybe even the best. We, GNOME, had a hand full (not kidding) of new community members staffing the booth or just being available. I was very pleased to see new faces and to identify them as people who are very committed to Free Software and GNOME.
As indicated, we, GNOME, had a booth and a fun time entertaining people stopping by. With the help of many volunteers, we presented our most recent GNOME release, sold some t-shirts, and discussed our future ideas. It’s not necessarily a venue to convince people to use Free Software, or even GNOME, but I have the feeling we managed to get both messages across. Bar one case, in which an unlucky fella was angry about everything, and especially that this “Linux 20” we had installed wouldn’t ship Emacs by default. Other than that, we showed people how cool the GNOME Shell extensions are, how to quickly launch applications, or how to access the notification area quickly. Or, yes of course, how to suspend. Or to shut down…
I also had the pleasure of being interviewed by an Irish dude who produced episodes for Hacker Public Radio. I didn’t know about that but it seems to be a cool project. I don’t know when it will go live or whether it actually has been published already.
We also had a panel with the governing bodies of GNOME and KDE. The intention was to debunk some myths and to make the work more visible. I was on the panel on behalf of GNOME, together with Kat (also from GNOME) and Lydia from KDE. Lydia was joined by Cornelius, who has served on the KDE board for more than nine years. We talked about various aspects of our work, such as where the money comes from, where it goes, and what the processes for spending it are. But also why we do this work, why we think it is important, and what achievements we are proud of. Our host, Paul, was a nice and fun guy and did his job very well. I think it was a successful event. It could probably have been better in the sense that we could have focussed more on the audience and on making them want to step up and take over responsibilities. But the way it went, and the participation of the audience, make me happy nonetheless.
Oh, I almost forgot to report on this year’s CCCongress, 30C3: the thirtieth CCCongress. It has grown considerably over the last few years. We reached over 9000 visitors, whereas we had 4000 a couple of years ago. The new venue in Hamburg is amazing. Despite the impressive number of attendees, it didn’t feel crowded at all. So many nice details made the venue just awesome. It really felt like *the* place to be. A rather big detail was the installation of a letter chute. Yes, a real pneumatic postal delivery system. With routing and all. Just amazing.
That’s pretty much all I have to say. It was, of course, nice to meet so many old friends and people. I couldn’t even say hi to all of the ones I wanted to meet. What follows is a bit of a rundown of some of the talks that I’ve actually seen, hoping you can evaluate whether you want to see any of that yourself.
I was a bit late for the conference; probably one of the first talks I saw was DJB on, guess what, crypto. It even has a reference to Poettering (whom I was also able to meet 🙂 )!
http://www.youtube.com/watch?v=HJB1mYEZPPA
Funnily enough, Nate from the EFF mentioned DJB in his talk on disclosure dos and don’ts. He said that it would be smart to think about how much fuss one wants to make about a vulnerability at hand. Sure enough, the title needs to be catchy enough for people to notice. If you were DJB, the lecture hall would be filled even if the title was “DJB has something to say”.
http://www.youtube.com/watch?v=oSi6PxVBOx4
Something that stirred up the community was Assange’s talk. Apparently sabotaged, the Skype connection wasn’t all too good. But it was also not very interesting. The gist: sysadmins, go to the three-letter agencies and carry documents out to become the next Snowden. Good advice.
http://www.youtube.com/watch?v=hzhtGvSflEk
As for carried-out documents, Jacob Appelbaum presented the NSA’s shopping cart, which includes all sorts of scary techniques and technologies. If you only have time to watch one video, make it this one. That’s probably even safer than sitting in the audience: just after he showed the reconnaissance tools that let investigators combine various data sources, undoubtedly including cell phone location and the people around you, he switched on his cell phone so that the audience would have a connection with him, the one who knows he is being spied on. It was a very emotional talk, too.
http://www.youtube.com/watch?v=vILAlhwUgIU
Another depressing talk was Jöran’s, about the missed (digital) opportunities in education. The most noticeable thing he said was that Apple products are consuming devices only. But the reality is that they work 93% of the time as opposed to 90%, and that difference makes teachers use them…
http://www.youtube.com/watch?v=a90Tto1b4eo
Scarier still was the presentation on the exploration and exploitation of SD card controllers. You’re basically screwed: you have close to no idea what is running on the microcontroller in your SD card, or on the various other controllers you carry around. The presenters got themselves access to the chip and were able to flash their own firmware. That doesn’t sound all too exciting, but it is an eye-opener that your stupid, almost invisible SD card can spy on you.
http://www.youtube.com/watch?v=CPEzLNh5YIo
A strange talk was the one on digital bank robberies. There were so many weird details. The presenters claim to have been called in to investigate malware found on ATMs in Brazil. The weirdest thing for me was that the physical damage done to the ATMs went unnoticed. The gangsters needed to install a pendrive, so they had to break the case, which apparently isn’t all too secure. And then they had to make the ATM reboot to boot off the pendrive, without having to press a key. It is unclear to me whether they could leave the pendrive in or not. Apparently they could remove it, because if they couldn’t, the malware would have been found much earlier. But given that the ATMs reboot so easily, it would make sense to install the malware on the ATM’s hard drive, in which case it could have been spotted rather easily. Anyway, the presenters were not Brazilian. Why would such a sensitive Brazilian investigation be undertaken by foreigners?
http://www.youtube.com/watch?v=0c08EYv4N5A
Another interesting, although weirdly presented, talk on X security was given by Ilja van Sprundel. He looked at X code and identified a good number of easily exploitable bugs. No wonder, given that the code is 30 years old… He also mentioned libraries on top of X, such as GTK+ or Qt, and explained how GNOME’s security story was very different from Qt’s. Essentially: the GNOME guys understood security; Qt didn’t.
http://www.youtube.com/watch?v=2l7ixRE3OCw
On the more fun side, the guys from Ztohoven presented their recent work. They are probably best known for their manipulated video which ran during morning TV shows (IIRC).
In their presentation they talked about a performance for which they obtained the phone numbers of parliamentarians and sent them text messages during a session that was aired live. Quite funny, actually. And the technical details are also interesting.
http://www.youtube.com/watch?v=hBxeSmBBdfg
Another artsy piece is “Do You Think That’s Funny?” (program link), in which the speaker describes the troubles their artistic group went through during or after their performances. They did things like vote auction (WP), Alanohof, or AnuScan, and their intention is to make surveillance visible and to show how it makes activists censor themselves.
Suppose you are sick of Tanzverbot and you want to go from Karlsruhe to Hamburg. As a proper German, you’d think of the Bahn first, although Germany has started to allow long-distance travel by bus, which is cheap and surprisingly comfortable. My favourite bus search engine is busliniensuche.de.
Anyway, suppose you opted for the Bahn and you search for a connection; the result is a one-way trip for 40 Euro. Not too bad:
But maybe we can do better. If we travel from Switzerland, we can save a whopping 0.05 Euro!
Amazing, right? Basel SBB is the first station after the German border, and it allows international fares to be applied. Interestingly, special offers exist which apparently make the same trip, plus a considerable chunk on top, cheaper.
But we can do better. Instead of travelling from Switzerland to Germany, we can travel from Germany to Denmark. To determine the first station after the German border, use the Netzplan for the IC routes and then check the local map, i.e. Schleswig-Holstein. You will find Padborg as the first non-German station. If you travel from Karlsruhe to Padborg, you save 17.5%:
Sometimes you can save by taking a Global ticket, crossing two borders. This is, however, not the case for us:
In case you were wondering whether it’s the very same train and route all the time: Yes it is. Feel free to look up the CNL 472.
I hope you can use these tips to book cheaper travel.
Do you know any ways to “optimise” your Bahn ticket?
I am working with a virtual GNU/Linux system, because the machine I’m working on must run Windows on bare metal.
I thought I wanted to give raw disk access to the guest, but it turns out that it is not very easy in Windows to give permanent permissions to a regular user. You can inspect (and set) permissions using the semi-official subinacl; the following displays the current ACL as SDDL:
C:\WINDOWS\system32>"C:\Program Files (x86)\Windows Resource Kits\Tools\subinacl.exe" /noverbose /file \\.\physicaldrive1 /display=sddl
I needed to figure out that the language the ACLs are written in is SDDL, and how to give my current user or group all permissions. I failed doing that, so I opted for giving all access to every entity known to the system… You can see the relevant SDDL in the listing above.
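The set command got lost from my notes, so here is a hedged reconstruction; instead of hand-crafting SDDL, subinacl’s /grant shorthand achieves the everyone-gets-everything result (f meaning full access):

C:\WINDOWS\system32>"C:\Program Files (x86)\Windows Resource Kits\Tools\subinacl.exe" /file \\.\physicaldrive1 /grant=everyone=f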
But that change only survives until the next reboot. To make it permanent, an unofficial tool called dskacl can theoretically be used. Apparently it tries to write special values to the Windows registry. Although I found my way through the documentation, I couldn’t make it work. It actually failed so hard on me that even Windows could not see the drive itself anymore. So make a good contingency plan before even trying it. It’s not really meant for internal disks anyway, but rather for external disks attached via USB. So I thought I’d have to redo the above-mentioned step on every boot.
The question I had was whether it’s actually worth it. I assumed it would be a speed-up to write directly to the hard drive without having to go through Windows and VirtualBox’s VDI layer before hitting the disk.
So I measured my typical workload: compiling Chromium. As it is what I’m working on, I want compilations to happen as quickly as possible.
My technique was the following:
export CCACHE_DISABLE=1
rm -rf out
./build/gyp_chromium
time ninja -C out/Debug chrome
I did a handful of runs to compensate for irregularities that might happen on the host (or, in fact, on the guest…).
When writing to an Ext4 filesystem straight on the disk, I get the following results:
real 61m59.895s
user 322m52.832s
sys 46m49.268s
real 61m25.565s
user 318m40.680s
sys 46m7.608s
real 58m59.939s
user 320m36.500s
sys 46m28.336s
Having an Ext4 filesystem in a VDI container on an NTFS partition yields the following results:
real 60m50.360s
user 322m18.184s
sys 47m3.588s
real 57m30.324s
user 318m48.752s
sys 46m52.016s
real 63m29.179s
user 328m55.004s
sys 48m4.692s
I couldn’t test shared folders, because either NTFS or in fact vboxfs doesn’t support operations needed for the compilation. I guess it’s VirtualBox’s fault, though…
My interpretation of the results:
Writing directly to the disk seems to be marginally slower than going through the VDI container: averaging the three runs, the raw-disk setup took about 60.8 minutes of wall-clock time versus about 60.6 minutes through the container. At best, there is no significant difference between the two. So I decided that it’s not worth writing straight to disk, especially considering the risks involved, such as Windows not being able to see the drive at all. Going through the VDI container and through Windows is fine.
I acknowledge that my data is a bit flawed. It is likely that you cannot generalise my findings to other workloads. The method is also questionable, as I didn’t flush caches or take care of other things disturbing my measurements. If you do measure yourself, I’m interested in the results.
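If you want cleaner numbers than mine, dropping the guest’s page cache between runs would be a start; note this only affects the Linux side, while the Windows and VirtualBox caches remain warm:

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches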
It’s been a while since I attended the mrmcds. In 2011 the event did not take place, and I couldn’t make it the year after. Fortunately, 2013 allowed me to participate, and I was thoroughly surprised by the quality of everything: the (newish) location, the people, the catering, the atmosphere, …
The event itself is relatively small. I don’t have numbers, but I felt like being surrounded by 100 people, although the stats about connected devices suggest there were at least twice or thrice as many present.
The talks were good, a refreshing mix of technical and non-technical content, with an audience generally inclined to discuss things. That allowed for livelier sessions which created new insights, also for the speakers. My favourite was Akiko talking about her job as an air traffic controller. I learned a lot about how the aviation industry is organised and how the various pieces fit together.
Fukami keynoted the conference and tried to make us aware of our ethics. Surveillance was made by hackers, he said. People like you and me. The exercise for the audience was to think further and conclude that if we didn’t help implement and deploy surveillance infrastructure, things wouldn’t have gotten that bad. While the talk itself wasn’t too bad, I wonder who the target audience was. If it was meant to wake up young hackers who have not yet adjusted their moral compass, it was too weak; the talk didn’t really give advice on how to handle dubious situations. If it was not meant for those hackers, then why talk about the issue in a very basic way and not ask hard questions? Anyway, I enjoyed seeing the issue of people’s responsibility come up and create a discussion among the hackers.
Stef’s and my talk went well, although it was in the very last slot of the conference, after two long party nights. I barely made it to the talk myself 😀 We presented new ideas on how to guide the user when it comes to security-critical questions. If you have been to GUADEC, then you haven’t missed much; the talk had a slightly new angle, though. In case you are interested in the slides, you can find them here.
The design of the conference was very impressive. The theme was aviation, and not only did we have an impressive talk monitor as seen above, we also had trolleys with drinks and food, as well as the local time of various interesting locations on display. We also received amazing gadgets, like a laser-engraved belt made from a typical airplane seatbelt.
As always, parties were had, with our own DJs, a light show, beer straight from the tap, cool people, and music. To summarise: I’m glad to have visited a very enjoyable event. It’s a pleasure to be around all those smart hackers and to have inspiring discussions. I’m looking forward to next year.