Speaking at FOSDEM 2019 in Brussels, Belgium

This year I spoke at FOSDEM again. It has become something of a tradition to visit Brussels in winter, and although I was tempted to break with it, I came again.

I had two talks at this year’s FOSDEM, both in the Security track: one on my work with Ludovico on protecting against rogue USB devices and another on tracking users with core Internet protocols. We got a bigger room this year, but it was still packed. Despite the projector issues, which seem to be appearing more often recently, the talks went well. The audience was very engaged and we had a lively discussion in the hallway. In fact, the discussion was extremely fruitful, because we were told about work in similar areas which we ought to check out.

For our USB talk I thought I’d set the scene first and explain how GNOME thinks it should interact with the user: the less interaction is required, the better, especially for a security system where the user may not know what to do. In fact, we try to just make it work™ without the user having to do anything. That is vastly different from what other projects are doing. Kaspersky, for example, wants you to enter a PIN when attaching a new keyboard, and the USBGuard dialogue is not necessarily suitable for our users.


In the talk on Internet protocols I mainly showed that latency optimisations need to be balanced against the privacy needs of the users. In order to reduce latency you usually share state with the other end, and that state tends to be represented by some form of token or cookie. Because of that shared state, the server can tell you apart from other clients. What you can try to do is to not send the token or cookie in the first place. Of course, then you lose the optimisation. It turns out, however, that TLS 1.3 can be just as fast, i.e. one round trip, and that the latency is neither better nor worse if you resume a previous session. Note how I talk about latency only and ignore other aspects such as CPU cycles spent on connection establishment. Another strategy is to not send the token in the clear. With TLS 1.2 the Session Ticket is sent without any form of encryption, which enables a network-based attacker to see your token and correlate your requests. The same is true for other optimisations such as TCP Fast Open. I also presented our approach to balancing privacy and latency, namely a patched WolfSSL and a patched Linux kernel. With these patched versions we send the TCP Fast Open cookie via TLS so that the attacker cannot see it when we request it.
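
For context, here is a minimal sketch of how a client uses stock TCP Fast Open on Linux today, i.e. without our patches. The hostname and payload are placeholders, and it assumes a kernel with client-side TFO enabled; the point is that the cached cookie travels in a cleartext TCP option, which is exactly the correlation vector described above.

import socket

# Stock TCP Fast Open client (Linux). Hostname and payload are placeholders.
# Fall back to the raw flag value if this Python build doesn't expose it.
MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)

payload = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# sendto() with MSG_FASTOPEN connects and sends in one call. On first
# contact the kernel does a normal handshake and caches the server's TFO
# cookie; on later connections the payload and the cached cookie are sent
# in the SYN -- the cookie as a cleartext TCP option visible on the wire.
s.sendto(payload, MSG_FASTOPEN, ("example.com", 80))
print(s.recv(4096))
s.close()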

The conference was super busy and so was I, talking to people. It’s amazing how fast time flies when you are engaged in interesting discussions. I bumped into one person after another, and then it was already time for dinner. The one talk I did see was by a colleague of mine, on preventing cryptographic misuse of libraries; more precisely, an attempt to provide sane APIs which make shooting yourself in the foot very hard.

Speaking at FOSDEM 2018 in Brussels, Belgium

As in the last ten (or so) years, I attended FOSDEM, the biggest European Free Software event. This year, though, I went a day earlier to attend one of the fringe events, the CHAOSSCon.

I hadn’t taken notice of the Linux Foundation announcing CHAOSS, an attempt to bundle various efforts around measuring and creating metrics for Open Source projects. The CHAOSS community is thus a bunch of formerly separate projects now under one umbrella.

OpenStack’s Ildiko Vancsa opened the conference by saying that metrics are what drive our understanding of communities and that we’re all interested in numbers. They help us understand how projects work and make a more educated guess about how healthy a project currently is and, more importantly, about what needs to be done to make it more sustainable. She also said that two communities exist within the CHAOSS project: the Metrics and the Software team. The Metrics team cares about what information should be extracted and how it can be presented in an informative manner. The Software team implements the extraction and the analytics. She pointed the audience to the wiki, which hosts more information.

Georg Link from the metrics team then continued saying that health cannot universally be determined as every project is different and needs a different perspective. The metrics team does not work at answering the health question for each and every project, but rather enables such conclusions to be drawn by providing the necessary infrastructure. They want to provide facts, not opinions.

Jesus from Bitergia and Harish from Red Hat were talking on behalf of the technical team. Their idea is to build a platform to understand how software is developed. The core projects are prospector, cregit, ghdata, and grimoire, they said.

I think that we in the GNOME community can use data to make more informed decisions. For example, right now we’re phasing out our Bugzilla instance and we don’t really have any way to measure how successful we are. In fact, we don’t even know what it would mean to be successful. But by looking at data we might get a better feeling for what we are interested in and which metrics we need to refine to express better what we want to know. Then we can evaluate measures by looking at how the metrics develop over time. Spontaneously, I can think of these relatively simple questions: How much review do our patches get? How many stale wiki links do we have? How soon are security issues being dealt with? Do people contribute to the wiki, documentation, or translations before creating code? Where do people contribute when coding stalls?

Bitergia’s Daniel reported on Diversity and Inclusion in CHAOSS; he said he is building a bridge between the metrics and the software team. He tried to produce data on how many women were contributing, and what. Especially whether they would do any technical work. Questions they want to answer include whether minorities take more time to contribute or what impact programs like the GNOME Outreach Program for Women have. They still need to code up the relevant metrics but intend to be ready for the next OpenStack gender diversity report.

Bitergia’s CEO talked about the state of the GrimoireLab suite.
It’s a software development analysis toolkit built largely on Python, ElasticSearch, and Kibana. One year ago it was still complicated to run the stack, he said. Now it’s easy, and organisations like the Document Foundation run a public instance. Also because they want to be as transparent as possible, he said.
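
To get a feeling for the kind of data GrimoireLab works with, here is a minimal sketch using its Perceval component to pull raw commit metadata from a Git repository. The repository URL is just an example and Perceval needs to be installed separately.

from perceval.backends.core.git import Git

# Clone (or update) the repository into gitpath and yield one dict per commit.
repo = Git(uri="https://gitlab.gnome.org/GNOME/gtk.git", gitpath="/tmp/gtk.git")

for item in repo.fetch():
    commit = item["data"]
    # Raw per-commit metadata; analytics and dashboards build on top of this.
    print(commit["AuthorDate"], commit["Author"])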

Yousef from Mozilla’s Open Innovation team then showed how they make use of Grimoire to investigate the state of their community. They ingest data from GitHub, Bugzilla, newsgroups, meetups, Discourse, IRC, Stack Overflow, their wiki, Rust crates, and a few other things, reaching back as far as 20 years. Quite impressive. One of the graphs he found interesting was one showing commits by time zone. He commented that it was not as diverse as he had hoped, as there were still many US time zones and far fewer Asian ones.

Raymond from the Linux Foundation talked about Metrics in Open Source Communities: what they are measuring and what they do with the data. Measuring things is not too complicated, he said, but then you actually need to do something with it. Certain things are simply hard to measure, he said; as an example he gave the level of user or community support people provide. Another interesting aspect he mentioned is that it may be a very good thing when numbers go down, because projects may follow a hype cycle, too; if your numbers drop, the project may simply be settling into a more mature phase, he said. He closed with a quote he liked, noting that he’s not necessarily making fun of senior management: not everything that can be counted counts, and not everything that counts can be counted.

Boris then talked about Crossminer, which is a European-funded research project. They aim to improve the management of software projects by providing in-context recommendations and analytics. It’s a continuation of the Ossmeter project. He said that such projects usually die after the funding runs out, but that Crossminer wants to be sustainable and survive past the funding period by building an actual community around the software the project is developing. He presented a rather high-level overview of what they are doing and what their software tries to achieve. Essentially, it’s an Eclipse plugin which gives you recommendations. The time was too short for going into the details of how they actually do it, I suppose.

Eleni talked about merging identities. When tapping various data sources, you have to deal with people having identities in different domains, and you may want to merge the identities belonging to the same person, she said. She gave a few examples of what can go wrong when trying to merge identities. One of them is that some identities do not represent humans but rather bots. Commonly used labels are another problem, she said: she referred to email address prefixes which may very well be the same for different people, think j.wright@apple.com, j.wright@gmail.com, j.wright@amazon.com. They have at least 13 different problems, she said, and the impact of wrongly merging identities can be to either underestimate or overestimate the number of community members. Manual inspection is required, at least so far, she said.
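
As a small illustration of the email-prefix problem she mentioned: the naive merge below groups addresses by local part. The addresses are made up; the point is that three different J. Wrights collapse into one “contributor”, and the bot account is counted like a person.

from collections import defaultdict

identities = [
    "j.wright@apple.com",
    "j.wright@gmail.com",
    "j.wright@amazon.com",
    "release-bot@gnome.org",
]

# Naive merge: group addresses by the local part before the '@'.
merged = defaultdict(list)
for addr in identities:
    merged[addr.split("@", 1)[0]].append(addr)

for label, addrs in merged.items():
    print(label, addrs)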

The next two days were then dedicated to FOSDEM, which had a Privacy Devroom. There I gave a talk on PrivacyScore.org (slides). I had 25 minutes, which I overran a little; I’m not used to these rather short slots. You just warm up talking and then the time is already up. Anyway, we had very interesting discussions afterwards with a few suggestions regarding new tests. For example, someone mentioned that detecting a CDN might be worthwhile given that CloudFlare allegedly terminates 10% of today’s Web traffic.

When sitting with friends we noticed that FOSDEM felt a bit like Christmas for us: Nobody really cares a lot about Christmas itself, but rather about the people coming together to spend time with each other. The younger people are excited about the presents (or the talks, in this case), but it’s just a matter of time for that to change.

It’s been an intense yet refreshing weekend and I’m very much looking forward to coming back next time. For some reason it feels really good to see so many people caring about Free Software.

Attended FOSDEM 2017

Unsurprisingly, the biggest European Free Software event happened in Brussels, Belgium again. I’m talking about FOSDEM, of course. It’s a fixed entry in many people’s calendars and always a good excuse to visit Brussels 🙂

I’m a bit late to report on what talks I managed to see as others have already covered some of the talks, but I still want to add some observations.

Richard Brown from SuSE talked about dinosaurs and resurrecting them (video). It was more about containerised apps than actual dinosaurs, though. The general theme was about repeating mistakes that we might or should have learned from in the past. He started by mentioning that the Windows DLL Hell was a nightmare: you needed to test your application with each and every version combination of every possible library, and the DLLs did not necessarily have ABI compatibility, so it was very cumbersome to test. Windows 2000 brought Side-by-Side assembly, which is some form of DLL containerisation, he said. It uses separate memory space for each app and its DLLs. Programs can ship “private” DLLs in their application directory so you don’t necessarily break other apps with your DLL carrying the same name. This approach, however, still has issues: security-wise, each app needs to update its libraries itself rather than have them updated centrally, so each app needed to build and ship its own updater, which is not trivial to do. Legally it’s also interesting, he said, because bundling these DLLs may impose restrictions. Last but not least, you potentially have the same DLL multiple times on disk, because each app may ship the same DLL.

The contemporary software distribution model has its problems, too, he said. Compatibility with various distros is an issue, because each distro is slightly different. Each distribution also has their own pace of change which may be incompatible with the application in question, e.g. the distros may decide to ship an older version because they have tested it more. Different distributions have different libraries and versions thereof. Also, each distribution has different toolsets to package applications up for their environment. Application developers, however, don’t want to care about these details.

Containerised applications solve these issues. Maybe. He mentioned Flatpak, snappy, and AppImage. The latter is the oldest technology, dating all the way back to 2003. The solutions have in common that they bundle the app and run it in some kind of container or sandbox. By his criteria, the compatibility issue is solved, because the libraries are in the bundles. Portability is solved, because all dependencies are shipped in the bundle. And the pace of change is up to the app developer.

The containerisations, though, assume a common standard base provided by the distributions. According to him, such a common standard base does not exist in any practical sense. With containerised apps, he said, we might be repeating history. He explained that we might get a security nightmare because each app needs to update its dependencies itself. The question also arises whether all the libraries can actually be bundled and shipped. App developers are picking up the responsibilities that distros used to have. You still have to test everything on each distro just to be sure that your base dependencies still work correctly, he said. He sees distributions as part of the solution to these problems. He thinks that a rolling release might solve the issues we’re trying to solve with containerised apps: a rolling release can ship new releases of applications very quickly, while the distribution still uses its tools for the common problems like maintenance, security, and legal stuff.

In a lightning talk, David talked about “practical TPM 2.0 usage”. He showed how to generate a signing key, sign a document with it, and verify the signature. He said that Microsoft mandated TPM 2.0 for Windows 10 Mobile and that it is a cryptographic processor rather than an accelerator. TPM 2.0 is different from TPM 1.2 in various ways, he said. For example, 2.0 can do ECC (P256 and BN256) and SHA-256. But it’s also “algorithm agile”, which means that you can add algorithms without having to change the specification. He sees three main usages: platform integrity like secure boot and trusted boot, disk encryption where the TPM stores and controls access to the key, and Digital Restriction Management by verifying code signatures. In order to use the TPM you have two options, he said: tools developed by IBM or by Intel. IBM doesn’t have a “resource manager” as described by the specification, i.e. something like a multiplexer. Intel does have such a resource manager and they are working on putting it into Linux. However, Intel has fewer tools, he said, although it wasn’t entirely clear to me what he was referring to. He mentioned that his employer, Facebook, uses TPMs for platform attestation.

Hanno talked about security on the Linux desktop. He referred to the issues Chris Evans exposed a few weeks ago.
He wanted to make the audience angry, he said. But not at him, I suppose, because he considers himself to be only the messenger. The basic problem is an unfortunate agglomeration of bugs or behaviours. It starts with the browser automatically downloading files into the user’s downloads folder, i.e. without asking the user. Then there is Tracker, which indexes files that you add to your home directory, such as the downloads folder. And then there are buggy (read: vulnerable) implementations of file parsers.

He also referred to Carlos’ comment that these bugs are just bugs and that no problem had been found beyond those bugs. Hanno’s point, as far as I could make it out, was that a project of the size of Tracker, especially with that number of dependencies that you don’t control, cannot make sure that there won’t be yet another bug that can be exploited. That’s quite fatalistic, but probably not too far from reality. It’s not just a Tracker issue, though, he said. KDE has Baloo and everybody wants to have thumbnails of the files in your folders. He reiterated that automatic downloads AND automatic indexing create a huge attack surface, and that the indexers support a vast variety of file formats by using many libraries of varying quality. While Tracker quickly adopted sandboxing, he said, KDE hasn’t.

He mentioned other exploit mitigation techniques such as ASLR or CFI. With ASLR, he said, the idea is to load code and data at random addresses in memory. This mitigates exploits, because they cannot reliably target valid code in memory. At least that’s the idea. You need to compile the code with -fpic and -pie, he said. Linux distributions have been slow in adopting ASLR, though. Ubuntu introduced it with 16.10, Fedora with 23, and Debian is a work in progress. OpenSuSE has it for a few packages only. It should be the default, he said. Windows, on the other hand, has had it since Vista. They also explore and experiment with more modern mitigations like CFI. Yet another approach is to avoid the C language, because “[it] is full of memory corruptions”. Rust comes to mind as an alternative. GStreamer already supports plugins in Rust, he said. He concluded that fixing all these bugs, as Carlos seemed to want, is very hard, not least because GStreamer is very prone to memory corruption due to the number of complicated formats it parses. He mentioned fuzzing as a viable strategy to shake out bugs; he found many bugs within a few days. He probably mentioned that to make us do more of that ourselves. I’m working on it. More to be posted separately.
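
As a small aside on the -fpic/-pie point, here is a rough check (my own sketch, not from the talk) of whether a binary was built as a position-independent executable. It assumes little-endian ELF files, the path is just an example, and ET_DYN also covers plain shared objects, so it is only an indicator.

import struct

def elf_type(path):
    # e_type sits at offset 16 of the ELF header (little-endian ELF assumed).
    with open(path, "rb") as f:
        header = f.read(18)
    if header[:4] != b"\x7fELF":
        return None
    e_type = struct.unpack_from("<H", header, 16)[0]
    return {2: "EXEC (no PIE, fixed load address)",
            3: "DYN (PIE or shared object, can be randomised)"}.get(e_type, "other")

print(elf_type("/bin/ls"))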

The next talk was about testing TLS implementations. Over the last year or so I had begun investigating TLS issues myself and had been wishing for a TLS testing framework, so I was glad to learn about an existing implementation. Hubert Kario introduced his “tlsfuzzer”, which is a bit of a misnomer, because it doesn’t actually perform any fuzzing. He said that TLS is complex and that it has 326 official ciphersuites, 4 PKI cryptosystems, 16 signature-hash pairs, and many more countable things that make the test matrix grow fast. There is a lot of state to be maintained, he said. He presented his tool, which takes care of the TLS specifics but allows you to define your own payloads and modifications to them. For example, with a few lines of code you can define a client that opens a TLS connection and uses a GCM ciphersuite for collecting the nonces. He claims to have found more than 20 issues in NSS, GnuTLS, and OpenSSL. I’m curious to play around with it and maybe hook it up with Scapy’s fuzzing facilities.
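
This is not tlsfuzzer’s API, just a plain-ssl sketch of the kind of probe such a framework automates: connect, restrict the client to an AES-GCM suite, and record what the server actually negotiated. The hostname is a placeholder, and capping the version at TLS 1.2 keeps the cipher restriction effective.

import socket
import ssl

ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # TLS 1.3 suites ignore set_ciphers()
ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        # Record which protocol version and ciphersuite were negotiated.
        print(tls.version(), tls.cipher())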

Another TLS-related talk was given by Fridolin, who showed us a TLS Linux kernel module implementation. The advantages are manifold, he said. Obviously, establishing the connection should be cheaper in terms of computation, because the context does not need to be switched so often. Others are already using a kernel implementation of TLS, he said: Solaris has a kssl socket, and Netflix uses a modified sendfile() for TLS on BSD. His implementation has been evaluated by Facebook, he said. It still leaves the handshake to user space and only handles the symmetric encryption.

Compared to other FOSDEMs, I was able to actually see a few talks, although I was impressed by the number of people I randomly bumped into and who kept me from attending more talks 😉 The size of FOSDEM is both the cause of and the solution to its problems. A good thing about it was that I could bribe somebody to cook up a Debian package for GNOME Keysign so that, hopefully, 200 people don’t have to queue up and do weird things :o)

FOSDEM 2016

It’s the beginning of the year and, surprise, FOSDEM happened 🙂 This year I even managed to see some talks and to meet people! Still not as many as I would have liked, but I’m getting there…

Lenny talked about systemd and what is going to be added in the near future. Among many things, he made DNSSEC stand out. I’m not sure yet whether I like it or not. On the one hand, you might get more confidence in your DNS results; on the other hand, as he said, the benefits are small because authentication of your bank happens on a different layer.

Giovanni talked about the importance of FOSS in the surveillance era. He began by mentioning that France declared a state of emergency after the Paris attacks. That, however, is not in line with democratic thinking, he said; it’s a tool from a few dozen years ago. With that emergency state, the government tries to weaken encryption and to ban any technology that may be used by so-called terrorists. That may very well include black Seat cars like the ones used by the Paris attackers. But you cannot ban simple tools like that, he said. He said that we should make our tools much more accessible by using standard FLOSS licenses. He alluded to OpenSSL’s weird license being the culprit that caused Heartbleed not to be found earlier. He also urged the audience to develop simpler and better tools. He complained about GnuPG being too cumbersome to use. I think the talk was a mixed bag and got lost among the many topics at hand. Anyway, he concluded with an interesting interpretation of Franklin’s quote: if you sacrifice software freedom for security you deserve neither. I fully agree.

In terrible Frenglish, Ludovic presented on Python’s async and await keywords. He said you must not confuse asynchronous and parallel execution: with asynchronous execution, all tasks are started but only one task finishes at a time; with parallel execution, tasks can also finish at the same time. I don’t know yet whether that description convinces me. Anyway, you should use async, he said, when sending or receiving data over a (mobile) network. Compared to (p)threads, you schedule cooperatively rather than preemptively (compare time.sleep vs. asyncio.sleep).
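
A minimal example of the cooperative scheduling he described (my own sketch): three tasks sleep for one second each, yet the whole run takes about one second because they interleave on a single thread. Replacing asyncio.sleep() with time.sleep() would block the loop and take three seconds.

import asyncio
import time

async def fetch(name, delay):
    # asyncio.sleep() yields control to the event loop so other tasks can
    # run in the meantime; time.sleep() here would block every task.
    await asyncio.sleep(delay)
    print(name, "done")

async def main():
    # Started together, finished one at a time: asynchronous, not parallel.
    await asyncio.gather(fetch("a", 1), fetch("b", 1), fetch("c", 1))

start = time.time()
# On Python 3.7+ asyncio.run(main()) does the same thing.
asyncio.get_event_loop().run_until_complete(main())
print("elapsed:", round(time.time() - start, 1), "seconds")  # ~1, not 3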

Aleksander was talking about the Tizen security model. I knew that they were using SMACK, but they also use a classic DAC system by simply separating users. Cynara is the new kid on the block. It is a userspace privilege checker. A service like GPS, if accessed via some form of RPC, sends the credentials it received from the client to Cynara, which then decides whether access is allowed or not. So it seems to be an “inside out” broker: instead of having something like a reference monitor which dispatches requests to a server only if you are allowed to make them, the server needs to check itself. He went on talking about how applications integrate with Cynara, like where to store files and how to label them. The credentials which are passed around are a SMACK label to identify the application, the user id under which the application runs, and a privilege which represents the requested access. I suppose that the Cynara system only makes sense once you can safely identify an application, which, I think, you can only do properly when you are using something like SMACK to assign labels during installation.
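
This is not the real Cynara API, just a sketch of the “inside out” broker pattern as I understood it: the GPS service itself takes the credentials it received over RPC and asks a policy checker whether the requested privilege is granted. The labels, uid, and privilege string are made up for illustration.

POLICY = {
    # (smack_label, uid, privilege) -> allowed?
    ("User::App::org.example.maps", 5001, "http://tizen.org/privilege/location"): True,
}

def cynara_like_check(smack_label, uid, privilege):
    # The checker only answers allow/deny; it never dispatches requests itself.
    return POLICY.get((smack_label, uid, privilege), False)

def handle_gps_request(smack_label, uid):
    # The service enforces the decision before handing out any data.
    if not cynara_like_check(smack_label, uid, "http://tizen.org/privilege/location"):
        raise PermissionError("location privilege denied")
    return {"lat": 50.85, "lon": 4.35}  # Brussels, roughly

print(handle_gps_request("User::App::org.example.maps", 5001))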

Daniel was then talking about his USBGuard project. It’s basically a firewall for USB devices. I found that particularly interesting, because I have a history with USB security and I know that random USB devices pose a problem. We are also working on integrating USB blocking capabilities with GNOME, so I was keen on meeting Daniel. He presented his program, what it does, and how to use it. I think it’s a good initiative and we should certainly continue exploring the realm of blocking USB devices. It’s unfortunate, though, that he has made some weird technological choices like using C++ or a weird IPC system. If it were using D-Bus then we could make use of it easily :-/ The talk was actually followed by Krzysztof, whom I reported on last time and who builds USB devices in software. As I always wanted to do that, I approached him and complained about my problems doing so 😉

Chris from wolfSSL explained how they do testing for their TLS implementation. wolfSSL is 10 years old and secures over 1 billion endpoints, he said. Most interestingly, they have interoperability testing with other TLS implementations. He said they want to be the most well tested TLS library available which I think is a very good goal! He was a very good speaker and I really enjoyed learning about their different testing strategies.

I didn’t really follow Pam’s talk about implicit trademark and patent licenses. It seems to be an open question whether patents and trademarks are treated similarly when it comes to granting someone the right to use “the software”. But I didn’t really understand why it would be a question, because I haven’t heard about a case in which it was argued that the right to the name of the software had also been transferred. But then again, I am not a lawyer and I don’t want to become one…

Jeremiah reported on safety-critical FOSS. Safety-critical, he said, means functional safety, which means that your device must limp back home in a lower gear if anything goes wrong. He mentioned several standards like IEC 61508, ISO 26262, and others. Some of these standards define “Safety Integrity Levels” which describe how likely risks are. Some GNU/Linux systems have gone through that certification process, he said. But I didn’t really understand what copylefted software has to do with it. The automotive industry seems to be an entirely different animal…

If you’ve missed this year’s FOSDEM, you may want to have a look at the recordings. It’s not VoCCC type quality like with the CCCongress, but still good. Also, you can look forward to next year’s FOSDEM! Brussels is nice, although they could improve the weather 😉 See you next year!

FOSDEM 2015

It’s winter again and it was clear that FOSDEM was coming. However, preparation fell through the cracks, at least for me, mainly because my personal life is fast-paced at the moment. We had a table again, and our EventsBox, which is filled with goodness to demo GNOME, made its way from Gothenburg, where I had actually carried it a couple of months ago.

Unfortunately though, we didn’t have t-shirts to sell. We do have boxes of t-shirts left, but they didn’t make it to FOSDEM :-\ So this FOSDEM didn’t generate nearly as much revenue as in previous years. It’s a pity that this year’s preparation was suboptimal. I hope we can improve next year. We were able to get rid of other people’s things, though 😉 Like last year, the SuSE people brought beer, but it was different this time. Better, even 😉

Because there wasn’t as much action at our booth as in previous years, I could actually attend talks. I was able to see Sri and Pam talking about the Groupon incident that shook us up a couple of months ago. It was really nice to see her, because I wanted to shake hands and say thanks. She did an amazing job. Interestingly enough, she praised us, the GNOME Foundation’s Board of Directors, for working very professionally. Much better than any client she had worked with. I am surprised, because I didn’t really have the feeling we were acting as promptly as we could. You know, we’re volunteers, after all. Also, we didn’t really prepare as much as we could have, which led to some things being done rather spontaneously. Anyway, I take that as a compliment and I guess that our work can’t be all too bad. The talk itself showed our side of things and, if you ask me, painted things in too bright a light. Sure, we were successful, but I attribute much of that success to network effects and a bit of luck. I don’t think we could replicate it easily.

GNOME’s presence at FOSDEM was not too bad though, despite the lack of shirts. We had a packed beer event and more talks by GNOMEy people. The list includes Karen‘s keynote, Benzo‘s talk on SDAPDS, and Sri‘s talk on GNOME’s impact on the Free Software ecosystem. You can find more here.

A talk that I did see was on improving the keysigning situation. I really mean to write about this some more. For now, let me just say that I am pleased to see people working on solutions. Solutions to a problem I’m not sure many people see, and one that I want to devote some time to explaining, i.e. in a separate post. The gist is that contemporary “keysigning parties” come with non-negligible costs for both the organiser and the participant. KeySigningPartyTools were presented which intend to improve the way things are currently done. That’s already quite good, as it’ll reduce the number of errors people typically make when attending such a party.

However, I think that we need to rethink keysigning. Mostly because the state of the art is a massive SecOps fail. There are about a gazillion traps to be avoided and many things don’t actually make much sense. For example, I am unable to comprehend why we mutter a base16-encoded version of a 160-bit fingerprint to each other. Or why we must queue outside in the cold without being able to jump the queue if a single person is a bit slow, because then everybody will be terribly confused and the whole thing takes even longer. Or why we need to do everything on paper (well, I know the arguments: your computer can be hacked, be social, yadda yadda). I did actually give a talk on rethinking the keysigning problem (slides). It’s about a project that I have only briefly mentioned here and which I should really write about in the near future. GNOME Keysign intends to be less of a SecOps fail by letting the user scan a barcode and click “next”. The rest will be operations known to the user, such as sending an email. No more manually comparing fingerprints. No more leaking data to the Internet about who you want to contact. No more MITM attacks against your OpenPGP installation. No more short key IDs that you use accidentally or because you mistyped a letter of the fingerprint. No more editing raw Perl in order to configure your keysigning tool. The talk went surprisingly well. I actually expected the people in the security devroom to be mad at someone like me for taking their Perl and their command line away. I received good questions and interesting feedback. I’ll follow up here with another post once real life lets me get to it.

Brussels itself is a very nice city. We were lucky, I guess, because we had some sunshine when we were walking around the city. I love the plethora of restaurants. And I like that Brussels is very open and cultural. Unfortunately, the makerspace was deserted when we arrived, but that was somewhat expected as it was daytime… I hope to return again and check it out during the night 😉

GNOME@FOSDEM 2014 – Stand and Panel

It is that time of the year again *yay*. The biggest and greatest Free Software conference took place in Brussels, Belgium. It’s good to see all those interested and passionate people care about Free Software. I hope that the (intellectual) gravity of the people gets more people interested and strengthens our communities. In fact, I feel it was one of the better FOSDEMs so far. Maybe even the best. We, GNOME, had a handful (not kidding) of new members of our community staffing the booth or just being around. I was very pleased to see new faces and to identify them as people who were very committed to Free Software and GNOME.

As indicated, we, GNOME, had a booth and a fun time entertaining people stopping by. With the help of many volunteers, we presented our most recent GNOME release, sold some t-shirts, and discussed our future ideas. It’s not necessarily a venue to convince people to use Free Software, or even to use GNOME. But I have the feeling we managed to get both messages across. Bar one case in which an unlucky fellah was angry about everything, and especially that this Linux 20 we had installed wouldn’t ship Emacs by default. Other than that we showed people how cool the GNOME Shell extensions are, how to quickly launch applications, or how to access the notification area quickly. Or, yes of course, how to suspend. Or to shut down…

I also had the pleasure of being interviewed by an Irish dude who produced episodes for Hacker Public Radio. I didn’t know about that but it seems to be a cool project. I don’t know when it will go live or whether it actually has been published already.

We also had a panel with the governing bodies of GNOME and KDE. The intention was to debunk some myths and to make the work more visible. I was on the panel (on behalf of GNOME) with Kat (from GNOME…) and Lydia from KDE. She was joined by Cornelius, who has served on the KDE board for more than nine years. We were lamenting about various aspects of our work, such as where the money comes from, where it goes, and what the processes for getting rid of it are. But also why we were doing that, why we think it is important, and what achievements we are proud of. Our host, Paul, was a nice and fun guy and did his job very well. I think it was a successful event. It could probably have been better in the sense that we could have focused more on the audience and on making them want to step up and take on responsibilities. But the way it went and the participation of the audience make me happy nonetheless.

Update: The interviews have been posted: http://hackerpublicradio.org/eps.php?id=1452

GNOME @ FOSDEM 2013

Phew, I’m excited about FOSDEM and also exhausted. We had a nice GNOME presence with a lovely booth, many helpers and nice shirts. Thanks to everyone involved who made it such a success.

Our current T-shirt was designed last minute by Andreas, printed last second by an awesome printing shop, and I like it very much. The girly shirts in particular have a nice colour. The shirt accompanies our current Friends of GNOME campaign about Privacy and Security.

In case you haven’t heard: GNOME is raising money to make GNOME more privacy aware, i.e. to allow you to use your computer anonymously or to leave as few traces behind as possible. Security is a vital part of that, too, so maybe the money will be spent on enabling encrypted file transfers in the chat or on better OpenPGP integration in GNOME. If you want to support these goals, consider becoming a Friend of GNOME. Also, if you only want one of those shirts, become a Friend of GNOME, because at a certain level you will be eligible to get hold of one of those t-shirts 🙂

Unfortunately, our donation process depends heavily on PayPal and is quite US-centric. That’s not very nice, given that the majority of donations does not come from the US. In fact, many donations come from Europe.

Anyway, I couldn’t attend a single talk at FOSDEM, because I was so busy with the booth and with maintaining relationships with friends from other Free Software projects, e.g. OpenSuSE. They had, again, a very nice presence and “The Old Toad”, a nice German beer, which is really needed since the Belgian beer is barely drinkable 😉

As for the GNOME night out, the GNOME Beer Event, it was seriously crowded. While we occupied the upper floor of a bar last year, we had two floors this year. We did advertise it. Well enough, it seems. We went through the building we had our booth in and taped loads of paper onto the walls and pillars. Not only beer event ads but also posters about the GNOME Outreach Program for Women or the fact that we had T-Shirts on sale.

Our stand was probably the second most beautiful after the OpenSuSE one. Our T-Shirts were lined up nicely and we sold quite a few of them. Preliminary statistics suggest that we managed to convince people to buy somewhere between 100 and 150 t-shirts. Next time we’d better provide more girly shirts in larger sizes, as they ran out quickly. The KDE folks did have many girly shirts, but overall their booth didn’t seem to be as well run as in other years.

While the booth generally went well, our interaction story with the people isn’t great. So far, we have a demo machine in the middle of the table, which makes it really hard to do stuff together or to show off things, because you can’t really look at what the person is doing, nor can you easily show them stuff. So maybe putting the machine at one edge of the table would help.

I’m very much looking forward to next year’s FOSDEM, hoping that we will have, again, a great set of people willing to spend their time standing there for GNOME.

GNOME @ FOSDEM2012

It’s that time of the year again and FOSDEM happened in Brussels, Belgium. Probably the biggest gathering of Free Software people, it was a lot colder than last year. Everything was covered in snow. So badly, in fact, that we had big trouble getting into Brussels. It took us almost twice as long as usual to arrive. The streets were packed with cars suffering from the severe conditions.

But all that didn’t stop us (GNOME, that is) from having a nice presence. If you know FOSDEM you’d expect the booth to be in a packed and smelly area, because all the people try to move along the tiny hallway. But this year was different, because we got a spacious place in a new building. That was cool, because it gave us much more room to move than usual, but unfortunately it made the conference feel much more disconnected as there was yet another building involved. I didn’t even try to visit each and everything.

Anyway, from what I’ve seen, we had quite a good stand. Our friends from OpenSuSE received the most attention though. Rightfully so. Not only because they had almost free beer and nearly free other goodies, but because they had nice hardware, nice demos and nice people to present. Fortunately, we were located just next to our friends from KDE which enabled us to chit chat with well known people and to plan conspiracies for upcoming conferences in 2013. So stay tuned for that.

With the help of local GNOME people, we had our EventsBox, which is well equipped. But since we had only one box, we weren’t as well equipped as last time at LinuxTag. We had loads of T-Shirts from the Desktop Summit, though, which we tried to sell. I especially liked the name tags we had. Somebody just got them printed, so we looked much more inviting, I guess. Also cool were the posters that we were provided with, so we could pimp the glass wall behind us. It’s cool that we have people who provide such things just like that. We didn’t have the appropriate tools to handle the posters well, though. We used regular adhesive tape (which we ran out of in the middle of the day), which kinda destroys the posters. From our KDE friends we got some “blue-tac” or “patafix”, which was really, really great. Apparently it’s well known in the western end of Europe. I didn’t know it at all, but I now know that we definitely want to have this for the next time.

We also didn’t have blank sheets of paper to write stuff on which was a bit annoying. But well, we didn’t have markers either so we needed to get both first before being able to inform the people about the t-shirt prices.

Needless to say, our demo machine got upgraded to the latest Fedora and that broke at least the web-cam. I mean, it was supported in the last Fedora version, so it would have been boring if it were supported now, too. But the Fedora people had a nice gimmick to give away: a cheat cube, which is just a well-cut piece of paper that you can fold up to build a cube. You’re supposed to put it on your desk and use it to retrieve information quickly. I was wondering whether we could make something like that for GNOME Shell. Oh, and while we’re at it: many folks had roll-up displays which look very nice. They are around 1m wide and 2m high and you can have your big design on them. It doesn’t cost all too much, but we’d need a proper motif first. So if you have any idea, feel free to discuss it in the wiki. Another thing that was annoying was our flyers. While it’s good that we had some, they were quite outdated. So we badly need some flyer material. Again, the wiki is the place to show up.

So a big big thanks to the folks that helped out at the booth to make it rock. I hope we can make it work next year again.

There was also, again, a massive keysigning going on and I have to drop a quick rant about all that mess. caff on Fedora is kinda weird. It seems like the defaults in the man page don’t match the code, i.e. the keyserver defaults to a different server than the man page states. And very annoyingly: it’s also different from the GPG settings! So while trying to use caff, it failed to download the keys. I guess the server just hit a timeout or blocked my request altogether because it was so many keys (/.-)
After having that sorted out, gpg asked me to hit “y” all the time to confirm that I really wanted to sign the keys. Goddamnit. It’s about 100 keys and I sign with 4 private keys or so. Now I have to press 800 keys to get the fork()ing keys signed and mailed. Jesus Christ. It’s fricking 2012 and not 1972 anymore. I just want to conveniently sign the whole damn thing and not buy a new keyboard after each FOSDEM. Not only because I have to type so many keys but also because I feel the urge to smash it into someone’s face. But not only did I need some shell-fu to get the keys imported, I also needed to fiddle with the fingerprints from the official key list because caff wouldn’t accept them. The format, though, is the format gpg uses to display fingerprints… So I had to do something like

cat /tmp/ksp.txt | tr -d ' ' | tr '\n' ' '

to get the proper format… And yeah, I’ll patch everything.. tomorrow…

Although I haven’t seen much of Brussels this time, I liked it being covered in snow and ice. I hope to be able to get more out of Brussels next time, especially improve my French 😉 So yeah, I’m looking forward to next year.

GNOME @ FOSDEM 2011

I am very excited about having attended this year’s FOSDEM. Unfortunately, times were a bit busy, so I am a bit late reporting on it, but I still want to state a couple of things.

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting (I wonder how that image will look in 2012 😉 )

First of all, I am very happy that our GNOME booth went very well. Thanks to Frederic Peters and Frederic Crozat for manning the booth almost all the time. I tried to organise everything remotely and I’d say I partly succeeded. We got stickers, t-shirts and staff for the booth. We lacked presentation material and instructions for the booth, though. But it still worked out quite well. For the next time, I’d try to communicate more clearly who is doing what, to prevent duplicate work and ensure that people know who is responsible for what.

Secondly, I’d like to thank Canonical for their generosity in sponsoring a GNOME Event Box. After the original one went missing, Canonical put together stuff like a PC, a projector, a monitor and lots of other things for us to be able to show off GNOME 3. The old Box, however, turns out to be back again *yay*!

Sadly, we will not represent GNOME at the upcoming CeBIT. But we will at LinuxTag, at the latest.

Anyway, during FOSDEM, we got a lot of questions about GNOME 3 and Ubuntu, i.e. whether it will be easily possible to run GNOME 3 on Ubuntu. I hope we can make it possible to have a smooth transition from Unity to GNOME Shell. Interestingly enough, there isn’t a gnome-shell package in the official natty repositories yet 🙁

It was especially nice to see and talk to old GNOME farts. And I enjoyed socialising with all the other GNOME and non-GNOME people as well. Sadly, I didn’t like the GNOME Beer Event very much because it was very hot in the bar so I left very quickly.

So FOSDEM was a success for GNOME I’d say. Let’s hope that future events will work at least as well and that we’ll have a strong GNOME representation even after the GNOME 3 release.

FOSDEM 2010

This year’s FOSDEM involved meeting familiar and new people as well as a lot of beer 😉 I can’t understand why the Belgians are so proud of their beer though :> Anyway, I got way too little sleep and spent too much money…
I wish I had connected with more new people, but I was terribly busy catching up with all the faces I hadn’t seen in a while. Hopefully, I can meet more new people next time.

Although I was scheduled as the very first talk in the morning after the official Beer Event (thx teuf…), my talk in the GNOME devroom went well and I hope I represented GNOME’s Bugsquad well. At least two people wanted to help out 🙂 I hope I was inviting and clear enough. I definitely need to try to keep those people engaged by at least writing to bugsquad-list. I hope I get around to doing that, but I also have a huge backlog that wants to be processed. On the todo list are a new Bugsquad meeting as well as a Membership Committee meeting, so if you are interested, watch out for mails 🙂

If you happen to have seen my talk at FOSDEM and want to look over the slides, please find them here. If you have been there and want to join the Bugsquad fun: Awesome! Join the mailing list now and wait for the next meeting to be organized. Don’t hesitate to push for it 😉
If you haven’t been there but you want to help the Free Software movement or GNOME in particular: Awesome! Consider subscribing to the mailing list or joining the IRC channel, and make sure that you’ve read our awesome TriageGuide 🙂

Talks that I enjoyed at FOSDEM include Maemo 6 Platform Security by Elena, because Nokia is about to build yet another security framework for Linux to meet their needs. Apparently the new Maemo devices will come with a TPM to allow DRM-like scenarios. But encrypting data on the device will also be possible using an API which in turn uses the built-in keys. These turn out to be recoverable nowadays. If I read this correctly, then the “Open Mode” will not make use of the TPM keys. This means that if your contacts, images, texts, etc., were encrypted using the above-mentioned API, then you couldn’t get hold of this data in Open Mode 🙁 I thus reckon that stuff like Contacts will not be stored encrypted. Hence you would leak all your data when losing the device. So I don’t expect a real advantage, but we’ll see.
Another not very informative yet entertaining talk was given by Greg Kroah-Hartman and dealt with creating a patch for Linux. It actually motivated me, so I put “fixing some random driver in staging” on my todo list 😉

Note to self for the next FOSDEM: book accommodation early. Very early! Also, Charleroi might not be worth it, because the bus from Brussels to Charleroi is 13 Euro, 21 return.

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported.