Ouf, long time no blog. Sorry about that. Life is… busy these days. One of my occupations has been to defend against USB-borne attacks, as mentioned before. Probably the best known is the BadUSB attack, in which a device masquerades as, say, a mass storage device while also acting as a keyboard. The device then injects keystrokes which hack your machine. This class of attacks is difficult to defend against, because the operating system can hardly determine whether the user did indeed want to plug in a device with those capabilities. There is another class of attacks, though, which is a bit easier to defend against, I think. That other class is based on buggy drivers which the malicious USB device pulls in. So once you attach your device, the host’s kernel will look for the appropriate driver and let it speak with the device. That’s very convenient because it makes devices just work. However, certain drivers might not be of the same quality as others, and there have indeed been cases in which a malicious USB device interacted with a bit-rotten driver, with fatal consequences.
I have been thinking a lot about how to defend against either class of attacks. You can easily come up with various solutions based on pop-ups interrupting the user and authorising the device before it will be ready for use. Or with a policy-based solution that requires you to generate a firewall-like description of what the machine is allowed to do. Or a mixture of those two approaches. And that’s what has been done already. Arguably, none of these solutions has been successful, as I am not aware of any built-in protection scheme for any major operating system. The reason, I believe, is that users expect things to just work, and once you make them not work, users get grumpy. So the challenge is to unfold protection capabilities without changing the users’ experience.
The short version of our approach is that we are trying to be smart about the user’s intent. That is, if the screen is locked, then we block the device. If a new keyboard is present and it tries to perform “dangerous” actions, we block them. Of course, you may very well expect that device to work when the screen is locked, or the new keyboard to perform actions deemed dangerous. This is why we make sure you have a way to opt out of the mechanism and continue to enjoy your GNOME experience. Almost all credit goes to Ludovico for coming up with a set of patches as well as following up to make sure we can get it merged. Our slides are here and the video of our presentation is here:
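To make the underlying mechanism a bit more concrete, here is a minimal sketch, explicitly not the actual patches, of the kernel facility such a protection can build on: every USB device exposes an `authorized` attribute in sysfs, and writing “0” to it cuts the device off before any driver binds to it. The lock-screen check and the device path are placeholders for illustration; this would have to run as root.

```python
#!/usr/bin/env python3
import os

def screen_is_locked():
    # Placeholder: a real implementation would ask the session,
    # e.g. via the org.gnome.ScreenSaver D-Bus interface.
    return True

def block_device(sysfs_path):
    # Writing "0" deauthorizes the device: the kernel tears down
    # its interfaces and no driver will talk to it until it is
    # re-authorized by writing "1".
    with open(os.path.join(sysfs_path, "authorized"), "w") as f:
        f.write("0")

# Imagine this runs from a udev "add" event for a freshly attached
# device; the path below is a made-up example.
new_device = "/sys/bus/usb/devices/2-1"
if screen_is_locked() and os.path.exists(new_device):
    block_device(new_device)
```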
But I wanted to write more about GUADEC… This year’s GUADEC was in Thessaloniki, Greece, and I had the pleasure of talking about the above-mentioned protection. It was the end of the summer, so the city was nicely warm and comfy. The coffee, juices, pastries, and other food and drinks in small shops on the streets were amazingly fresh and yummy. Arriving in Thessaloniki was okay. I’ve had better airport transfers in my life, but since there were only two buses it was hard to get lost. I needed to pay attention to the GPS, though, to find the right stop. It’s been a long time since I’ve slept in a bunk bed, but because we’re all GNOME people we had a good time.
Meeting friends, old and new, is really good and I have had a fantastically efficient time talking to people. It’s so much better to meet in-person and talk directly rather than via email or bug tracker.
This year I spoke at FOSDEM again. It became sort of a tradition to visit Brussels in winter and although I was tempted to break with the tradition, I came again.
I had two talks at this year’s FOSDEM, both in the Security track. One on my work with Ludovico on protecting against rogue USB devices and another one on tracking users with core Internet protocols. We got a bigger room this year, but it was still packed. Despite the projector issues, which seem to be appearing more often recently, the talks went well. The audience was very engaged and we had a lively discussion in the hallway. In fact, the discussion was extremely fruitful because we were told about work in similar areas which we ought to check out.
For our USB talk I thought I’d set the mindset first and explain how GNOME thinks it should interact with the user. That is, the less interaction is required, the better. Especially for a security system where the user may not know what to do. In fact, we try to just make it work™ without the user having to do anything. That is vastly different from what other projects are doing. In particular, Kaspersky wants you to enter a PIN when attaching a new keyboard, and the USBGuard dialogue is not necessarily suitable for our users.
In the talk on Internet protocols I mainly showed that latency optimisations need to be balanced against the privacy needs of the users. In order to reduce latency you usually share state with the other end, which tends to be indicated through some form of token or cookie. And because you have this shared state, the server can single you out and correlate your visits. What you can try to do is to not send the token or cookie in the first place. Of course, then you lose the optimisation. It turns out, however, that TLS 1.3 can be just as fast, i.e. one round trip, and that the latency is neither better nor worse if you resume a previous session. Note how I talk about latency only and ignore other aspects such as CPU cycles spent on connection establishment. Another strategy is to not send the token in the clear. With TLS 1.2 the Session Ticket is sent without any form of encryption, which enables a network-based attacker to see your token and correlate your requests. The same is true for other optimisations such as TCP Fast Open. I also presented our approach to balancing privacy and latency, namely a patched WolfSSL and Linux. With these patched versions we send the TCP Fast Open cookie via TLS so that the attacker cannot see it when we request it.
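For reference, this is roughly what a stock TCP Fast Open client looks like on Linux (where `socket.MSG_FASTOPEN` is available and `net.ipv4.tcp_fastopen` is enabled); the host is a placeholder, and this has no relation to our patched WolfSSL. The point is that the TFO cookie the kernel attaches to the SYN travels in the clear, which is exactly the correlation vector discussed above:

```python
import socket

payload = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# sendto() with MSG_FASTOPEN connects and, on a repeat visit, puts
# both the cached (cleartext!) cookie and the payload into the SYN.
s.sendto(payload, socket.MSG_FASTOPEN, ("example.com", 80))
print(s.recv(4096))
s.close()
```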
The conference was super busy and I was super busy talking to people. It’s amazing how fast time flies when you are engaged in interesting discussions. I bumped into one person after another and then it was already time for dinner. The one talk I did see was by my colleague on preventing cryptographic misuse of libraries. More precisely, an attempt to provide sane APIs which make shooting yourself in the foot very hard.
I was fortunate enough to be invited to Kyiv to keynote (video) the local Open Source Developer Network conference. Actually, I had two presentations. The opening keynote was on building a more secure operating system with fewer active security measures. I presented a few case studies showing why I believe that GNOME is well positioned to deliver a nice and secure user experience. The second talk was on PrivacyScore and how I believe it makes the world a little bit better by making the security and privacy properties of Web sites transparent.
The audience was super engaged which made it very nice to be on stage. The questions, also in the hallway track, were surprisingly technical. In fact, most of the conference was around Kernel stuff. At least in the English speaking track. There is certainly a lot of potential for Free Software communities. I hope we can recruit these excellent people for writing Free Software.
Lennart eventually talked about casync and how you can use it to ship your images. I’m especially interested in the cryptography involved to defend against certain attacks. We also talked about how to protect the integrity of the files on the offline disk, e.g. when your machine is off and someone can access the (encrypted) drive. Currently, LUKS does not use authenticated encryption, which means an attacker can flip some bits in the disk image you later read.
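To illustrate what bit flipping against unauthenticated encryption looks like, here is a toy sketch using a recent version of the Python `cryptography` package. LUKS typically uses XTS rather than CBC, but CBC makes the malleability easy to show: flipping a bit in ciphertext block n garbles plaintext block n and flips the very same bit in plaintext block n+1, and nothing detects the change:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
pt = b"header-block-16B" + b"amount=0001 EUR;"  # two 16-byte blocks

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = bytearray(enc.update(pt) + enc.finalize())

# Attacker flips one bit in ciphertext block 0 ("1" and "9" differ
# in exactly one bit)...
ct[10] ^= ord("1") ^ ord("9")

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
print(dec.update(bytes(ct)) + dec.finalize())
# Block 0 decrypts to garbage, but block 1 now reads
# "amount=0009 EUR;". Authenticated encryption would reject this.
```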
Canonical’s Christian Brauner talked about mounting in user namespaces which, historically, seems to have been a contentious topic. I found that interesting, because I think we currently have a problem: filesystem drivers are not meant for dealing with maliciously crafted images. Let that sink in for a moment. Your kernel cannot deal with arbitrary data on the pen drive you’ve found on the street and are now inserting into your system. So yeah, I think we should work on allowing for the insertion of random images without risking a crash of the system. One approach might be libguestfs, but launching a full VM every time might be a bit too much. Also you might somehow want to promote drives to being trusted enough to get the benefit of higher bandwidth and lower latency. So yeah, so much work left to be done. Ouf.
Then, Tycho Andersen talked about forwarding syscalls to userspace. Pretty exciting and potentially related to the disk image problem mentioned above. His opening example was the loading of a kernel module from within a container. This is scary, of course, and you shouldn’t be able to do it. But you may very well want that if you have to deal with (proprietary) legacy code like Cisco, his employer, does. Eventually, they provide a special seccomp filter which forwards all the syscall details back to userspace.
As I’ve already mentioned, the conference was highly technical and kernel focussed. That’s very good, because I could have enlightening discussions which hopefully get me forward in solving a few of my problems. One of those I was able to discuss with Jakob on the days around the conference: the capabilities of USB keyboards. Eventually, you wouldn’t want your machine to be hijacked by a malicious device posing as a security key like the YubiKey. I have some ideas there involving modifying the USB descriptor to remove the capability of sending funny keys. Stay tuned.
Anyway, we’ve visited the city and the country before and after the event and it’s certainly worth a visit. I was especially surprised by the coffee that was readily available in high quality and large quantities.
I hadn’t taken notice of the Linux Foundation announcing CHAOSS, an attempt to bundle various efforts regarding measuring and creating metrics of Open Source projects. The CHAOSS community is thus a bunch of formerly separate projects now under one umbrella.
OpenStack’s Ildiko Vancsa opened the conference by saying that metrics are what drive our understanding of communities and that we’re all interested in numbers. They help us understand how projects work and make a more educated guess at how healthy a project currently is and, more importantly, what needs to be done to make it more sustainable. She also said that two teams exist within the CHAOSS project: the Metrics and the Software team. The Metrics team cares about what information should be extracted and how it can be presented in an informative manner. The Software team implements the extraction parts and does the analytics. She pointed the audience to the Wiki, which hosts more information.
Georg Link from the Metrics team then continued, saying that health cannot be determined universally, as every project is different and needs a different perspective. The Metrics team does not aim to answer the health question for each and every project, but rather enables such conclusions to be drawn by providing the necessary infrastructure. They want to provide facts, not opinions.
I think that we in the GNOME community can use data to make more informed decisions. For example, right now we’re fading out our Bugzilla instance and we don’t really have any way to measure how successful we are. In fact, we don’t even know what it would mean to be successful. But by looking at data we might get a better feeling of what we are interested in and which metrics we need to refine to better express what we want to know. Then we can evaluate measures by looking at the development of the metrics over time. Off the top of my head, I can think of these relatively simple questions: How much review do our patches get? How many stale wiki links do we have? How soon are security issues being dealt with? Do people contribute to the wiki, documentation, or translations before creating code? Where do people contribute when coding stalls?
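As a purely hypothetical sketch of what answering the first question could look like: GNOME is moving to GitLab, which exposes merge requests and their discussion via the standard v4 API. The project ID is a placeholder and I haven’t run this against a real instance:

```python
import requests

BASE = "https://gitlab.gnome.org/api/v4"
PROJECT = 1234  # hypothetical project ID

# Fetch recently merged merge requests and count the human review
# comments on each (GitLab marks automated entries with system=true).
mrs = requests.get(f"{BASE}/projects/{PROJECT}/merge_requests",
                   params={"state": "merged", "per_page": 50}).json()
for mr in mrs:
    notes = requests.get(
        f"{BASE}/projects/{PROJECT}/merge_requests/{mr['iid']}/notes").json()
    human = [n for n in notes if not n["system"]]
    print(f"!{mr['iid']}: {len(human)} review comments")
```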
Bitergia’s Daniel reported on Diversity and Inclusion in CHAOSS, and he said he is building a bridge between the Metrics and the Software team. He tried to produce data on how many women were contributing, and what. Especially, whether they would do any technical work. Questions they want to answer include whether minorities take more time to contribute or what impact programs like the GNOME Outreach Program for Women have. They still need to code up the relevant metrics but intend to be ready for the next OpenStack gender diversity report.
Bitergia’s CEO talked about the state of the GrimoireLab suite. It’s a software development analytics toolkit written largely in Python, using ElasticSearch and Kibana. One year ago it was still complicated to run the stack, he said. Now it’s easy and organisations like the Document Foundation run a public instance. Also because they want to be as transparent as possible, he said.
Yousef from Mozilla’s Open Innovation team then showed how they make use of Grimoire to investigate the state of their community. They ingest data from GitHub, Bugzilla, newsgroups, meetups, Discourse, IRC, Stack Overflow, their wiki, Rust crates, and a few other things, reaching back as far as 20 years. Quite impressive. One of the graphs he found interesting showed commits by time zone. He commented that it was not as diverse as he had hoped, as there were still many US time zones and much fewer Asian ones.
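That particular metric is easy to approximate yourself; here is a quick sketch of mine (not Mozilla’s tooling) that counts commits per UTC offset in a local git checkout, using the offset that `git log` records with each committer date:

```python
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--format=%ci"],  # e.g. "2018-03-07 14:02:11 +0100"
    capture_output=True, text=True, check=True).stdout

# The last space-separated field of each line is the UTC offset.
offsets = Counter(line.rsplit(" ", 1)[1] for line in log.splitlines())
for offset, count in offsets.most_common():
    print(offset, count)
```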
Raymond from the Linux Foundation talked about metrics in Open Source communities: what they are measuring and what they do with the data. Measuring things is not too complicated, he said, but then you actually need to do something with it. Certain things are simply hard to measure, he said; as an example he gave the level of user or community support people give. Another interesting aspect he mentioned is that it may be a very good thing when numbers go down, because projects may follow a hype cycle, too, and a drop in the numbers may just mean the project is entering a more mature phase. He closed with a quote he liked, noting that he’s not necessarily making fun of senior management: not everything that can be counted counts, and not everything that counts can be counted.
Boris then talked about Crossminer, a European-funded research project. They aim to improve the management of software projects by providing in-context recommendations and analytics. It’s a continuation of the Ossmeter project. Such projects usually die after the funding runs out, he said, but Crossminer wants to be sustainable and survive the post-funding phase by building an actual community around the software the project is developing. He presented a rather high-level overview of what they are doing and what their software tries to achieve. Essentially, it’s an Eclipse plugin which gives you recommendations. The time was too short to go into the details of how they actually do it, I suppose.
Eleni talked about merging identities. When tapping various data sources, you have to deal with people having identities in different domains. You may want to merge the identities belonging to the same person, she said. She gave a few examples of what can go wrong when trying to merge identities. One of them is that some identities do not represent humans but rather bots. Commonly used labels are another problem, she said. She referred to email address prefixes which may very well be the same for different people, think j.wright@apple.com, j.wright@gmail.com, j.wright@amazon.com. They have at least 13 different problems, she said, and the impact of wrongly merging identities can be to either underestimate or overestimate the number of community members. Manual inspection is required, at least so far, she said.
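A toy illustration of the pitfall she described, merging naively by the local part of the email address, which lumps distinct people (and bots) together; the addresses are the made-up examples from above:

```python
from collections import defaultdict

identities = [
    "j.wright@apple.com", "j.wright@gmail.com", "j.wright@amazon.com",
    "ci-bot@gnome.org", "ci-bot@example.com",
]

# Naive merge: everything with the same local part is "one person".
merged = defaultdict(list)
for email in identities:
    merged[email.split("@")[0]].append(email)

for local, emails in merged.items():
    if len(emails) > 1:
        print(f"naively merged '{local}': {emails}")
# Three distinct j.wrights collapse into one "person", and two bots
# merge too, skewing community-size estimates in both directions.
```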
The next two days were then dedicated to FOSDEM, which had a Privacy devroom. There I had a talk on PrivacyScore.org (slides). I had 25 minutes, which I overran a little. I’m not used to these rather short slots. You just warm up talking and then the time is already up. Anyway, we had very interesting discussions afterwards, with a few suggestions regarding new tests. For example, someone mentioned that detecting a CDN might be worthwhile, given that CloudFlare allegedly terminates 10% of today’s Web traffic.
When sitting with friends we noticed that FOSDEM felt a bit like Christmas for us: Nobody really cares a lot about Christmas itself, but rather about the people coming together to spend time with each other. The younger people are excited about the presents (or the talks, in this case), but it’s just a matter of time for that to change.
It’s been an intense yet refreshing weekend and I’m looking very much forward to coming back next time. For some reason it feels really good to see so many people caring about Free Software.
This season’s CCCongress, the 34C3 (well, the 2017 one), moved from Hamburg to Leipzig. That was planned, in the sense that everybody knew, even before the event moved to Hamburg, that that location would only be available for a few years.
My own CCCongress experience in the new location is limited, because I could not roam around as much as I wanted to. But I did notice that it was much easier to get lost in Hamburg than in the new venue. I liked getting lost, though.
We had a talk on anonymisation networks (slides, video) scheduled with three people, but my experience with making a show with several people on stage is not so good. So we turned it into a one-man show, which I think was good enough. Plus, I had private commitments that prevented me from attending the CCCongress as much as I would have wanted to.
I could attend a few talks myself, but I’ll watch most of them later. CCCongress is becoming less about the talks and more about meeting people you haven’t seen in a while. And it’s great to have such a nice event to cater for the desire to catch up with fellow hackers, exchange ideas and visions.
That said, I’ll happily come back next year, hopefully with a bit more time and preparation to get the most out of the visit. Although it hasn’t been announced yet, I would be surprised if it did not take place there again. So you might as well book your accommodation already 🙂
I attended this year’s MRMCD in Darmstadt, Germany. I have attended a few times in the past and I think this year’s edition was not as successful as the previous ones. The venue changed this year, which probably contributed to some more chaos than usual and hence things not running as smoothly as they used to. I assume it will be better next year, when people know how to operate the venue. Although all tickets were sold during the presale phase, it felt smaller than in previous years. In fairness, though, the venue was also bigger this year. The schedule had some interesting talks, but I didn’t really get around to attending many, because I was busy preparing my own shows (yeah, should’ve done that before…).
I had two talks at this conference. The first was on playing the children’s game “battleship” securely (video), that is, with cryptography. Lennart and I explained how concepts such as commitment schemes, zero-knowledge proofs of knowledge, oblivious transfer, secure multi-party computation, and Yao’s protocol can be used to play that game without a trusted third party. The problem, in short, is to a) make sure that the other party’s ships are placed correctly and b) make sure the other party answers correctly. Of course, if you get hold of the placements of the ships, these problems are trivial, but your opponent doesn’t want you to know the placements. A trusted third party would solve both problems trivially, but let’s assume we don’t have such a party; we want to decentralise things, so let’s come up with a solution that involves the two players only.
The second problem can be solved with a commitment. A commitment is a statement about something you’ve chosen that neither reveals the choice itself nor allows you to change your mind later. Think of a letter in a closed envelope that you hand over. The receiver doesn’t know what’s written in the letter and the sender cannot change the content anymore. Once the receiver is curious, they can open the envelope. This analogy isn’t the best and I’m sure there are better real-world concepts to compare commitment schemes to. Anyway, for battleship, you can make the other party commit to the placement of the ships. Then, when the battle starts, you have the other party open the commitment for the field that you’re shooting at. You can easily check whether the commitment verifies correctly in order to determine whether you hit a ship or water.
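Here is a minimal hash-based sketch of such a commitment for a single battleship cell, with SHA-256 playing the envelope; a real scheme needs more care (encoding, domain separation, etc.), but it captures the idea:

```python
import hashlib
import secrets

def commit(value):
    # Fresh randomness hides the value (you can't brute-force
    # "ship"/"water" from the hash without the nonce).
    nonce = secrets.token_bytes(32)
    c = hashlib.sha256(nonce + value).digest()
    return c, nonce  # publish c, keep nonce secret until opening

def verify(c, nonce, value):
    return hashlib.sha256(nonce + value).digest() == c

# Player A commits to cell (3, 5) holding a ship:
c, nonce = commit(b"cell:3,5=ship")
# Later, when B shoots at (3, 5), A opens the commitment and B checks:
assert verify(c, nonce, b"cell:3,5=ship")
```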
The other problem is the correct placement of the ships, e.g. no ships shall be adjacent, exactly ten ships, exactly one five-field ship, etc. You could easily wait until the end of the game and then check whether everything was placed correctly. But that wouldn’t be (cryptographic) fun. Let’s assume one round of shooting is expensive and you want to make sure to only engage if the other party indeed follows the rules. Now it’s getting a bit crazy, because we need to perform a calculation without learning anything else than “the ships are correctly placed”. That’s a classic zero knowledge problem. And I think it’s best explained with the magic door in a cave.
Even worse, we need to somehow make sure that we cannot change our placement afterwards. There is a brain-melting concept called secure multi-party computation which allows you to do exactly that. You can execute a function without knowing what you’re doing. Crazy. I won’t be able to explain how it works in a single blog post and I also don’t intend to, because others are much better at doing that than I could ever be. The gist of the protocol is that you model your functionality as a Boolean circuit and assign random values to represent “0” or “1” for each wire. You then build the truth table for each gate and replace the values of the table (zeros and ones) with an encryption under both the random value for the first input wire and the random value for the second input wire. The idea now is that the evaluator can only decrypt one value in the truth table given the input keys. There are many more details to care about, but eventually you have a series of encrypted, or garbled, gates and you need the relevant keys in order to evaluate them. You can’t tell from the keys you get whether they represent a “0” or a “1”. Hence you can evaluate without knowing the other party’s input.
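To make that description concrete, here is a heavily simplified sketch of garbling a single AND gate. Real protocols add point-and-permute, oblivious transfer for the input keys, and much more, so treat this purely as an illustration of the truth-table trick described above (the zero-padding tag marking the decryptable entry is a simplification of my own choosing):

```python
import hashlib
import secrets

def H(k1, k2):
    return hashlib.sha256(k1 + k2).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One random 16-byte key per wire and per truth value.
keys = {w: (secrets.token_bytes(16), secrets.token_bytes(16))
        for w in ("a", "b", "out")}

# Garble: encrypt the output key of each row under both input keys.
# A block of trailing zeros marks the one entry that decrypts cleanly.
table = []
for bit_a in (0, 1):
    for bit_b in (0, 1):
        out_key = keys["out"][bit_a & bit_b]  # the AND truth table
        pad = H(keys["a"][bit_a], keys["b"][bit_b])
        table.append(xor(pad, out_key + b"\x00" * 16))
secrets.SystemRandom().shuffle(table)  # hide the row order

# Evaluate with the keys for a=1, b=1: only one row decrypts, and the
# evaluator learns the output key without learning what it stands for.
ka, kb = keys["a"][1], keys["b"][1]
for entry in table:
    plain = xor(H(ka, kb), entry)
    if plain.endswith(b"\x00" * 16):
        out = plain[:16]
print(out == keys["out"][1])  # True: it is the key for "1"
```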
My other talk was about a probable successor of Return Oriented Programming: Data Oriented Programming (video). In Return Oriented Programming (ROP) and its variants like JOP, the aim is to divert the original control flow in order to make the program execute the attacker’s functionality. This, however, can probably be thwarted by Control Flow Integrity (CFI). In its simplest form, CFI checks on every branch whether it is legitimate. Think of a database with a list of addresses which are allowed to branch to a list of other addresses. Of course, real-world implementations are more clever. Anyway, let’s assume that we’ll have a hard time exploiting our target with ROP, because we cannot change the control flow graph (CFG) of the program. If our attack doesn’t change the CFG, though, it will evade anything that detects modifications of it. That’s the central idea of DOP.
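A toy model of that idea (in Python for brevity; real DOP corrupts memory in C programs): the code below has a fixed dispatch loop whose branches all exist legitimately, so CFI sees only valid edges, yet an attacker who can corrupt nothing but the data it consumes drives the computation:

```python
memory = {"balance": 100, "is_admin": 0}

def run(ops):
    # Fixed control flow: every run takes this same loop and these
    # same branches, so the CFG never changes.
    for op, key, val in ops:
        if op == "read":
            print(key, "=", memory[key])
        elif op == "write":
            memory[key] = val

# Benign data:
run([("read", "balance", None)])
# Attacker-corrupted *data*: same code path, new effect.
run([("write", "is_admin", 1), ("read", "is_admin", None)])
```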
Although I’m not super excited about this year’s edition, I’m looking forward to seeing next year’s event. I hope it’s going to be a bit more organised; including myself 🙂
I attended my first FrOSCon in St. Augustin, Germany. It’s one of the bigger Free Software events in Germany. Supposedly, the Chemnitzer LinuxTage is one of the few events which are bigger than FrOSCon. I thought it was time for me to attend this event, so I went.
I was scheduled for two talks. One in the very first slot and one in the very last slot. So, to some extent, I was opening and closing the conference 🙂 But the official keynote was, to my surprise, performed by Karen. She keynoted the conference with her “big heart” talk. She told the story of wanting to find out what software her pacemaker runs. Of course, it was an endless quest with no success. She described herself as a cyborg because of the machinery that is linked up to her body. She researched the security of devices such as pacemakers and found devastating results. In fact, software is deployed in many critical parts with people having no clue what the impact will be if the software is attacked. She described the honeymoon effect and projected it onto the security aspects of deployed software. She described it as a time in which no vulnerabilities are known. But once a vulnerability has been found, the number of known vulnerabilities increases exponentially. She found a study which shows that Free Software responds better to found vulnerabilities than proprietary systems do. She said she went from thinking “Open Source was cool” to “Open Source is essential” because it responds much better in case of security breaches. She cautioned us to be careful with the Internet of Things™, because it will lead to people being connected without even knowing it. All software has bugs, she said, but with Software Freedom we are able to do something about the situation. It’s been an enjoyable talk and I recommend watching the video.
Another interesting talk was given by Raffa about open data in public transport. Open data, especially in trip planning, can give us better results, he said, because personal preferences can be respected better. But competition will also become tougher if the data is free, which might lead to better products. My personal argument in favour of open data is that it would allow offline routing rather than having to connect to the Internet. Some public transportation companies have freed their data, like the companies in Berlin, Ulm, Rhein Neckar, and Rhein Sieg, the last of which is, funnily enough, the local company responsible for the public transport in the area of the event. However, some companies are still hesitant. The reasons are manifold. One is that they don’t want to deal with complaints about wrongly displayed data or simply outdated data that the third party didn’t bother to update. Abuse is also a concern. What would abuse even mean in this context? Well, some companies are afraid that the data is not only being used for trip planning but for finding out how the companies work or what their financial situation is, e.g. by inferring information from the data.
Andreas Schreiber talked about the complications of Open Source in science. He works at DLR, a publicly funded research institute. Software is important to the DLR: 1500 people develop software which costs around 150 million EUR per year, which probably makes them the biggest software house in Germany, he said. As they are producing and releasing software, they got into trouble with licensing issues. For example, they released software which was not open source although they thought it was. They also used software themselves which they may not have been entitled to use. Their CIO eventually issued a warning regarding the use and release of Open Source, which made the speaker offer workshops and knowledge databases for issues around open source. They created a brochure which they intend to distribute to other institutes, too, because they tend to get more requests for this kind of information from outside than from inside their organisation. I found it interesting that the problems, according to the participants of their workshops, are that monetising won’t work, that building a community is hard, and that doing “open source” costs more time than not doing it, which is demotivating. It’s been interesting to learn about the issues involved in both consuming and producing open source software.
As I’ve mentioned, I was booked for two events, a talk and a workshop. My workshop was about signing OpenPGP keys. I held a small presentation and ranted, sometimes a bit unfairly, about the current state of affairs. I showed how people do it as of now and how I think we can do better. It’s been the first slot of this conference and the audience was small, albeit larger than expected. We even got to suggest improvements to Gentoo’s packaging, so I consider it a success. My talk (slides) was about how GNOME advances the security of desktop systems. The audience was super engaged and I felt I couldn’t focus so much on the other things I only touched upon. But the discussion showed that people do care about a usable desktop. We were talking a lot about dialogues and modal prompts and how they do not contribute to the security of a system. I claimed that they exist because they were cheap for the app developer to implement. But we at GNOME, I said, try, or at least should try, to avoid those as much as possible, and we find other ways of enabling the app to capture the user’s intent. I’m surprised that we had such a lively discussion in the very last slot of the conference.
I’m happy to have attended the event and to meet surprisingly many GNOME people! It’s surprisingly close to Frankfurt and Cologne both of which have good connections via plane or train. With around 1800 attendees it’s quite big although the many tracks and rooms make it feel less crowded.
The venue was very easy to find due to posters hanging everywhere. The flow of information was good in general. That includes emails being sent every day which highlighted items in the schedule or restaurant recommendations for the evening.
I arrived just in time for my first show on GNOME Keysign. For better or worse we were only very few people, so we could discuss matters deeply. It was good, because we found bugs and other user-facing issues that need to be resolved. The first and most obvious one was GnuPG 2.1 support. Although still experimental, OpenSuSE ships 2.1 by default. The wrapping library we’re using to interact with GnuPG did not support calling the newer gpg, so we had to identify the issues, find a fix, and test. It eventually worked out 🙂
I also had a talk called “Five years after 3.0” which, to my surprise, has been covered by reddit and omgubuntu. I was also surprised by the schedule, which only gave me 30 minutes instead of the usual 45 or 60. I was eventually politely reminded that I had significantly exceeded my time *blush*. We thus needed to move discussions outside, which was fruitful. People at OpenSuSE Con are friendly and open-minded. It’s a pleasure to have arguments there 🙂
But I have had very interesting and enlightening discussions about distributions, containerised apps, Open Build Service, OpenQA, dragging more GNOME people towards OpenSuSE, fonts, and other issues. That’s the great thing about conferences: you get to know people with interesting stories. As for the fonts, for example, I was discussing the complexity involved in rendering glyphs and whether this could eventually lead to security problems. I think the attack surface of fonts has been undervalued and needs some investigation. I hope I can invest some time in looking at building and modifying fonts. I also found it interesting to discuss why I would not recommend OpenSuSE as a GNU/Linux distribution to anyone, mainly because I need to reflect and challenge myself. Turns out, I don’t have any good reason except that my habits simply don’t include using OpenSuSE myself, and I am thus unable to give a recommendation. I think they have interesting infrastructure, though. I see the build service for having people’s apps built and OpenQA for having them tested. Both seem to be a little crude overall, but they could become the tools to use for distributing your flatsnappimgpack. An idea was circling around to have a freedesktop.org for those app image formats and execution environments, but in a somewhat more working state. I think the key to success of any such body is being lightweight and not ending up like OpenStack. Let’s hope we can bring people who work on various parts or even implementations of containerisation for desktop applications together. I also hope that the focus for containerised desktop apps will be isolation from other apps rather than actually distributing the software, because I don’t think we have a big problem with getting Free Software into the user’s hands.
So a big “thank you” to this year’s organisers for this event. I hope I can attend one of the following conferences 🙂
The conference itself featured three tracks, which is quite busy already. But in addition, an unconference was held as a fourth track. The talks were varying in topic, from community management, to MySQL deployment, and of course, GNOME. I presented the latest and greatest GNOME 3.12. Despite the many tracks, the hallway track was the most interesting one. I didn’t know too many faces and as it’s a GNU/Linux distribution conference which I have never attended before, many of the people I met had an interesting background which I was not familiar with. It was fun meeting new people who do exciting things. I hope to be able to stay in touch with many of them.
The conference was opened by the OpenSuSE Board. I actually don’t really know how OpenSuSE is governed and whether there is any legal entity behind it, but the Board seems to be somehow elected by the community, and it was to announce a few changes to OpenSuSE. The title of the conference was “The Strength to Change”, which is indeed inviting to announce radical changes. For better or worse, both the number and severity of the changes announced were limited. First and foremost, the handling of marketing materials is about to change. A new budget was put in place to allow for new materials to be generated, aiming for a much bigger presence in the world. The materials are created by SuSE’s designers on staff, so they are considered to be rather high quality. To get more contributors, a formalised sponsorship program was introduced for people to attend conferences to present OpenSuSE. I don’t know what the difference to their Travel Support Program is, though. They will also reimburse locally produced marketing materials, which cannot be shipped around the world, to encourage more people to spread the word about OpenSuSE. A new process will be put in place which will enable local contributors to produce materials of up to 200 USD from a budget of 2000 USD per quarter.

Something that will change, but not just yet, is the development and release model. Andrew Wafaa said that OpenSuSE was a victim of its own success. He mentioned the number of 7500 packages, which is probably meant to indicate that it is a lot for them to handle. The current release cycle of 8 months is up for discussion, and there is a strong question of whether something new shall be tried. Maybe annual releases, or even longer to have more time for polish. Or maybe no regular releases at all, like rolling releases, or just taking as long as it takes. A decision is expected after the next release, which will happen as normal at the end of this year. There was an agreement that OpenSuSE wants to be easy to contribute to. The purpose of this conference is to grow the participants’ knowledge of, and connections in, the FLOSS environment.
The next talk was “Protect your MySQL Server” by Georgi Kodinov. Having been with MySQL since 2006, he talked about the security of MySQL on OpenSuSE. The first point he made was about the post-installation situation on OpenSuSE 13.1. It ships version 5.6.12, which is not too bad, because it is only 5 updates behind what upstream released. Other distros are much further away from that, he said. Version 5.6 introduced cool security-related features like expiring passwords, password strength policies, and SHA-256 support. He urged the audience to stop using passwords on the command line and look into the 5.6 documentation instead. He didn’t make it any more concrete, though, but mentioned “login paths” later. He also liked that the server is not turned on by default, which encourages you to use your self-made configuration instead of a default one. He liked the fact that there is no pre-packaged database as well, because such a database would come with preconfigured users that are not very well protected. Finally, he pointed out that he is pleased to see that no remote access is configured in the default configuration. However, he did not like that OpenSuSE does not ship the latest version. The newest upstream version, 5.6.15, not only fixes around 25 security problems but also adds advanced AES functionality such as keys bigger than 128 bits. He also disliked that the mysql_secure_installation script is not run after installation. That script would set random passwords for the root account, disallow anonymous access, and do away with empty default passwords. Another regret he had was that mysql_config_editor is not packaged. That tool helps to get rid of passwords in scripts using MySQL by storing credentials in encrypted files. That way you have to protect only one file, not a lot of scripts. For some reason OpenSuSE activates the “federated” plugin, which is disabled upstream.
Another weird plugin is the archive plugin which, he said, is not needed. In fact, it is not even available, so the starting server throws errors… Also, authentication plugins which should only be used for testing are enabled by default, which can be a problem as they could allow someone to log in as any user. After he explained how this was a threat, the actual attack seemed to be a bit esoteric. Anyway, he concluded that you get a development installation when you install MySQL on OpenSuSE, rather than an installation suited for production use.
He went on to explain how to harden the installation after the fact. He proposed running mysql_secure_installation, as it won’t cause any harm even if run multiple times. He also recommended making the server listen on specific interfaces only, instead of all interfaces, which it does by default. He also wants you to generate SSL keys and certificates to allow for encrypted communication over the network.
Even more security can be achieved by turning off TCP access altogether, so you should do that if the environment allows it. If you do use TCP, he recommended using SSL even if there is no PKI. An interesting piece of advice was to use external authentication such as PAM or LDAP; he didn’t go into the details of how to actually do it, though. The most urgent tip he gave was to set secure_file_priv to a specific directory, as this restricts the paths MySQL can write to.
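Condensed into a my.cnf sketch, the advice so far might look roughly like this (option names as documented in the MySQL 5.6 manual; the paths are placeholders, not a tested configuration):

```ini
[mysqld]
bind-address     = 127.0.0.1              # listen on one interface only
# skip-networking                         # or drop TCP access altogether
ssl-ca           = /etc/mysql/ca.pem      # enable encrypted connections
ssl-cert         = /etc/mysql/server-cert.pem
ssl-key          = /etc/mysql/server-key.pem
secure_file_priv = /var/lib/mysql-files   # restrict where MySQL may write
```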
As for new changes coming with MySQL 5.7, which is the current development version accumulating changes over 18 months of development, he mentioned the option to log to syslog. Interestingly, a --ssl option on the client is basically a no-op (sic!) but will actually enforce SSL in the upcoming version. The new version also adds more crypto functions, such as RANDOM_BYTES(), which interface with the SSL libraries. He concluded his talk with a quote: “Security is like plastic surgery. The more you invest, the prettier it gets.”
Michael Meeks talked next on the history of the Document Foundation. He explained how it used to be in the StarOffice days. Apparently, they were very process-driven and believed that more processes with even more steps would help the quality of the software they produced. He didn’t really share that view. The mindset was, he said, that people would go into a shop and buy a box with the software, a behaviour he sees declining steeply. So then hackers came and branched StarOffice into OpenOffice, which had a much shorter release cycle than the original product and incorporated fixes and features of the future version. Everyone shipped that instead of the original thing. The 18-month cycle of the original product was a bit of a long thing in the free software world, he said. He quoted someone saying “StarDivision: a problem for every solution.”
He went on to rant about Contributor License Agreements and showed a graph of Fedora contributions which spiked after they dropped the requirement of a CLA. The graph was impressive, but really showed the number of active accounts in an unspecified system. He claimed that by now they have around the same magnitude of contributions as the kernel does, and that they set a new record with 3000 commits in February 2014. The dominating body of contributors is volunteers, which is quite different compared to the kernel. He talked about various aspects of the Document Foundation, like the governance or the fact that they want to make it as easy as possible to contribute to the project.
The next talk was given on bcache by Oliver Neukum. Bcache is a disk cache which is probably primarily used to cache rotational disks with SSDs. He first talked about the principles of caching: write-back, write-through, and write-around. That is, the cache buffers writes and is responsible for writing them to the backing store later, the cache and the backing store are written at the same time, or data is written to the backing store but not the cache, respectively. Subsequently, he explained how to actually use bcache. A demo given later revealed that it’s not foolproof and that you do need to get your commands straight in order to make it work properly. As for when to actually use bcache, he explained that SSDs are cool as they are fast, but they are small and expensive. Fast, as he continued, can either mean throughput or latency. SSDs are good with regard to latency, but not necessarily with throughput. Another, probably similar, option to bcache is dm-cache, but it does not support safe writes. I guess that you cannot use it if you have the requirement of a write-through or write-around scenario. A different alternative is EnhanceIO, originally written by Facebook, which keeps a hash structure of the data to be cached in RAM. Bcache, on the other hand, stores a b-tree on the SSD instead of in RAM. It works on block devices, so anything goes: tape drives, RAIDs, … It places a special superblock to indicate that the partition is a bcache partition. A second block is created to indicate what the backing store is. Currently, the kernel does not auto-detect these caches, hence making it work with the root filesystem is a bit tricky. He did a proper evaluation of the effects of the cache, so his statements were well founded, which I liked a lot.
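If the three write policies sound abstract, here is a toy model of their semantics, entirely my own sketch with dicts standing in for block devices; real caches like bcache obviously work on blocks, not keys:

```python
class Cache:
    def __init__(self, policy):
        self.policy = policy
        self.cache, self.backing, self.dirty = {}, {}, set()

    def write(self, key, value):
        if self.policy == "write-back":
            self.cache[key] = value      # buffered; the backing store
            self.dirty.add(key)          # is only updated on flush
        elif self.policy == "write-through":
            self.cache[key] = value      # both are updated at once
            self.backing[key] = value
        elif self.policy == "write-around":
            self.backing[key] = value    # bypass the cache entirely

    def flush(self):
        for key in self.dirty:
            self.backing[key] = self.cache[key]
        self.dirty.clear()

c = Cache("write-back")
c.write("sector-7", b"data")
assert "sector-7" not in c.backing  # lost if we crash before flush()
c.flush()
```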
It was announced that next year’s conference, oSC15, will be in The Hague, Netherlands, the city where we once had our GUADEC. If you have some time in spring, probably in April, consider going.
Attending last year’s CCCongress was a great pleasure. Although there were great lectures, the spirit is the best part of the conference. Meeting all these nice hacker people, hanging around, talking, discussing, hacking is just brilliant. You’ve got all those smart hackers around you and it just can’t get boring.
A good way of socialising is, of course, visiting the various parties that take place. The Phenoelit party was awesome. Thanks FX for the invites 🙂
Besides drinking, I spent time on some crypto problems and tried to investigate the magnetic-stripe-card authentication in hotels and hostels. I found out that all our cards for one room were identical, but a card obtained later was not. The data on a card is just ~100 bits and I tried to find timestamps and room numbers in it, but I failed. I blame my dataset for being too small. I’ll launch more advanced experiments next year. If you happen to have insider knowledge of magnetic-stripe locks, drop me a line.
I want to highlight two things about the last CCCongress. Firstly, Friend Tickets were available, and the concept is just awesome: basically, you can propose a friend of yours who you think would benefit from attending the CCCongress but has no way to cover the expenses. The organisers then decide whether you can get a discount (which will, of course, be apportioned across every regularly paying attendee). I like to see this solidarity among hackers. Unfortunately, no stats are available to show how many people were enabled to come through this method. I hope these friend tickets will be considered next year again. So if you wanted to come to the CCCongress but feared the expenses, consider asking for a discount. Just for the record: the prices are at rock bottom anyway. 80 Euros for a 4-day conference of this kind is amazingly cheap. Thanks to all the angels! 🙂
The second noteworthy concept was the attempt to distribute the CCCongress as widely as possible (called Dragons Everywhere). The idea is fantastic: increase the number of attendees as much as possible by building mini conferences and streaming the most important things. It would be even better if the gatherings had a feedback channel, e.g. a webcam. Hopefully, it’ll be better next year, i.e. better and more reliable streaming services and more places, especially in Berlin, because many people were sent away as the conference was already sold out 🙁
If you want to get a feeling of what the CCCongress is like, you might want to have a look at the recordings. If you organise a public viewing, make sure you show these videos 🙂 Based on the feedback, the best talks were: