GNOME @ FOSDEM 2013

Phew, I’m excited about FOSDEM and also exhausted. We had a nice GNOME presence with a lovely booth, many helpers and nice shirts. Thanks to everyone involved who made it such a success.

Our current T-shirt was designed at the last minute by Andreas and printed at the last second by an awesome printing shop, and I like it very much. The girly shirts in particular have a nice colour. The shirt accompanies our current Friends of GNOME campaign about Privacy and Security.

In case you haven’t heard: GNOME is raising money to make GNOME more privacy aware, i.e. to allow you to use your computer anonymously or to leave as few traces behind as possible. Security is a vital part of that, too, so the money might be spent on encrypted file transfers in the chat client or on better OpenPGP integration in GNOME. If you want to support these goals, consider becoming a Friend of GNOME. And if you simply want one of those shirts, become a Friend of GNOME anyway, because at a certain level you become eligible to get hold of one of those t-shirts 🙂

Unfortunately, our donation process depends heavily on PayPal and is quite US-centric. That’s not very nice, since the majority of donations does not come from the US; in fact, many donations come from Europe.

Anyway, I couldn’t attend a single talk at FOSDEM, because I was so busy with the booth and with maintaining relationships with friends from other Free Software projects, e.g. OpenSuSE. They had, again, a very nice presence and “The Old Toad”, a nice German beer, which is really needed since the Belgian beer is barely drinkable 😉

As for the GNOME night out, the GNOME Beer Event: it was seriously crowded. While we occupied only the upper floor of a bar last year, we had two floors this year. We did advertise it, well enough it seems. We went through the building our booth was in and taped loads of paper onto the walls and pillars: not only beer event ads, but also posters about the GNOME Outreach Program for Women and the fact that we had T-shirts on sale.

Our stand was probably the second most beautiful after the OpenSuSE one. Our T-shirts were laid out nicely and we sold quite a few of them. Preliminary statistics suggest that we managed to convince people to buy somewhere between 100 and 150 t-shirts. Next time we had better provide more girly shirts in larger sizes, as they ran out quickly. The KDE folks did have many girly shirts, but overall their booth didn’t seem to be as well run as in other years.

While the booth generally went well, our story for interacting with people isn’t great. So far, we have a demo machine in the middle of the table, which makes it really hard to do things together or to show anything off, because you can’t really see what the other person is doing, nor can you easily show things yourself. So maybe putting the machine on one edge of the table would help.

I’m very much looking forward to next year’s FOSDEM, hoping that we will again have a great set of people willing to spend their time standing there for GNOME.

Talks at FOSS.in 2012

Let me recap the talks held at FOSS.in a bit. I’m sorry it’s a bit late, but the festive season was rather demanding, time-wise.

FOSS.IN

The conference started off smoothly with a nice Indian breakfast, coffee and good chats. The introductory talk by Atul went well and was by far not as long as we expected it to be. Atul was obviously not as energetic as he used to be; he has grown older and visibly suffers from his illness. So a big round of applause and an even bigger bucket of respect for pulling this event off nonetheless.

The first talk of the day was given by Gopal and he talked about “Big Data”. He started off with a definition and with the claim that what is considered big data now is likely not to be considered big data in the future. Think of 1GB of RAM: everybody runs 1GB or more in their laptops these days, but 10 years ago that would not have been the case. The only concept, he said, that survived was “Divide and Conquer”: break a problem up into smaller sub-problems which can then be run on many processing units in parallel. Hence distributed data and distributed processing are very important.

The prime example of big data was counting the unique items in a large set, e.g. comparing the vocabulary of two books. You split the books up into words to find the individual words and then count each one of them to find out how often it occurs. You could also preprocess the words with a “stemming filter” to get rid of forms and inflections. If your data is big enough, “sort | uniq” won’t do it, because “sort” would use up all your memory. To do it successfully anyway, you can split your data up, sort the pieces and then merge the sorted results. He then explained how to split up various operations and merge them together. Basically, it is important to split and merge every operation possible in order to scale well, and that is exactly what “Hadoop” does. In fact, it has several components that facilitate dealing with all that: “splitter”, “mapper”, “combiner”, “partitioner”, “shuffle fetch” and a “reducer”. However, getting data into Hadoop was painful, he said.
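
To illustrate the idea (my own sketch, not his code, and obviously not Hadoop), the split/count/merge pattern for word counting can be written in a few lines of Python, with the chunks standing in for the pieces a real system would distribute across machines:

from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    # "map" step: count the words in one piece of the input
    return Counter(chunk.split())

def merge(partial_counts):
    # "merge"/"reduce" step: combine the partial counts into one result
    total = Counter()
    for c in partial_counts:
        total += c
    return total

if __name__ == '__main__':
    # pretend these are the splits of a much bigger input
    chunks = ["to be or not to be", "that is the question"]
    with Pool() as pool:
        print(merge(pool.map(count_words, chunks)))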

Lydia from KDE talked about “Wikidata – The foundation to build your apps on”. She introduced her talk with a problem: “Which drugs are approved for pregnancy in the US?”. She said that Wikipedia couldn’t really answer this question easily, because maintaining such a list would be manual labour which is not really fascinating. One would have to walk through every article about a drug, try to find the information on whether it was approved or not, and then condense that into a list. Her point, I guess, was that Wikipedia doesn’t really store semantic data.

Wikidata wants to be similar to Wikimedia Commons, but for the data of the world’s knowledge. It aims to be that missing semantic storage, one that can also record the sources which confirm the correctness of the information. Something like the GDP of a country or the length of a river would be a prime example of a use case for Wikidata. Eventually this will increase the number of editors, because the barrier to contributing will be lowered significantly. Also, every Wikipedia language can profit immediately, because it can easily be hooked up.

I only had a quick peek at Drepper’s workshop on C++11, because it was very packed. Surprisingly many people wanted to listen to what he had to say about the new C++. Since I was not really present, I can’t provide details on the contents.

Lenny talked about politics in Free Software projects. As the title was “Pushing Big Changes”, the talk revolved around acquiring and convincing people to share your vision and getting your project accepted by the general public. He claimed that the Internet is full of haters and that one needs a thick skin to survive the flames. Very thick, in fact.

An interesting point he made was that connections matter: personal relationships with relevant people, and being able to influence them. And he didn’t like that. That, and the talk in general, was interesting, because I haven’t really heard anyone talk about this so openly. Usually, everybody praises Free Software communities as being very open, egalitarian and what not. But rumour (and not only rumour) has it that this is rarely the case. Anyway, the bigger part of the talk was quite systemd-centric and I don’t think it’s applicable to many other projects.

A somewhat unusual talk was given by Ben & Daniel, who talked about how to really use Puppet. They do it at Mozilla at a very large scale and wanted to share some of the wisdom they had gained.

They had a few points to make. Firstly: do not store business data (as opposed to business logic) in Puppet modules. Secondly: put data in “PuppetDB” or use “Hiera”. Thirdly: reuse modules from either the “PuppetForge” or GitHub. For writing your own modules, they recommended writing code generic enough, with parametrised classes, to support many more configurations. Also, they want you to stick to the syntax style guide.

Sebastian, of KDE fame, talked about KDE Plasma and how to make it succeed on mobile targets such as mobile phones or tablets. Not knowing “Plasma” at all, I was interested to learn that Plasma is “a technology that makes it easy to build modern user interfaces”. He briefly mentioned some challenges, such as running on multiple devices with or without touchscreens. He imagines the operating system being provided by Mer, with Plasma running on top. He said that a range of devices is supported at the moment. The developer story is also quite good, with “Plasma Quick” and the Mer SDK.

He tried to have devices manufactured by Chinese companies and told some stories about the problems involved. One of them being that “Freedom” (probably as in Software Freedom) is not in their vocabulary, so getting free drivers is a difficult, if not impossible, task. Another issue is the size of orders: you can’t demand anything with an order of 10000 units, he said. But they seem to be able to pull it off anyway! I’m very eager to see their devices.

The last talk, which was the day’s keynote, went quite well and basically brought art and code together. The speaker introduced us to Processing, an interesting programming IDE for producing mainly visual art. He praised how Free Software (although he referred to it as Open Source) made everybody more creative and how the availability of art has transformed the art landscape. It was interesting to see how he uses computers to express his creativity, and unfortunately his time was up quite quickly.

Drepper, who gave quite a few talks, also gave one about parallel programming. The genesis of the problem was the introduction of multiple processors into a machine. It got worse when threads were introduced, which share the address space. That allows easy data sharing between threads, but also makes corrupting other threads very, very easy, and in subtle ways that you would not anticipate: all threads share one working directory, for example, and if one thread changes it, it is changed for all the threads of the process. Interestingly, he said that threads are not something the end user should use directly, but rather a tool for the system to exploit parallelism. The system should provide better means for the user to use parallelism.
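
To make the working-directory point concrete, here is a tiny Python sketch of my own (assuming /tmp exists, as on any normal Linux box): the chdir() in the main thread is immediately visible in the worker thread, because the current directory belongs to the process, not to a single thread.

import os
import threading
import time

def worker():
    time.sleep(0.5)
    # prints /tmp, although this thread never called chdir() itself
    print("worker sees cwd:", os.getcwd())

t = threading.Thread(target=worker)
t.start()
os.chdir("/tmp")   # changes the working directory for every thread in the process
t.join()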

He praised Haskell for providing very good means of using threads. It is absolutely side-effect free, and even stateful stuff is modelled in a side-effect-free way. So he claimed that it is a good research tool, but that it is not as efficient as C or C++. He also praised futures (with OpenMP), where the user doesn’t have to care about the threading details but leaves them up to the system: you only specify what can run in parallel and the system does it for you. Finally, he introduced C++11 features that help with parallelism. There are various constructs in the language that make it easy to use futures, including anonymous functions and modelling thread dependencies. I didn’t like them all too much, but I think it’s cool that the language allows you to use these features.
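
The talk was about OpenMP and C++11, but the futures idea translates directly; here is a minimal Python sketch of my own using concurrent.futures, where you only declare which pieces may run in parallel and leave the scheduling to the runtime:

from concurrent.futures import ProcessPoolExecutor

def work(chunk):
    # one independent, CPU-bound piece of the overall problem
    return sum(x * x for x in chunk)

if __name__ == '__main__':
    chunks = [range(i, i + 1_000_000) for i in range(0, 4_000_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(work, c) for c in chunks]  # declare the parallel parts
        total = sum(f.result() for f in futures)          # block only when results are needed
    print(total)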

There was another talk from Mozilla’s IT, given by Shyam, who talked about DNSSEC. He started with a nice introduction to DNSSEC. It was a bit too much, I feel, but it’s a quite complicated topic, so I appreciate all the effort he made. The main point I took away was not to publish the DS record too soon, because if your zones aren’t signed yet, validating resolvers won’t trust your answers and your domain is effectively offline.

Olivier talked about GStreamer 1.0. He introduced the GStreamer technology: its concept revolves around elements, which are put into bins, and elements have source and sink pads that you connect. New challenges are DSPs and different processing units like GPUs. The new 1.0 includes various new features, such as better locking support, which makes things easier for languages like Python, and better memory management with GstBufferPool.
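
To illustrate the elements/bins/pads concept, a minimal GStreamer 1.0 pipeline driven from Python (my sketch, assuming the PyGObject bindings and the basic GStreamer plugins are installed; videotestsrc and autovideosink are just stand-in elements) could look like this:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# parse_launch builds a bin containing the two elements, linked via their pads
pipeline = Gst.parse_launch('videotestsrc num-buffers=100 ! autovideosink')
pipeline.set_state(Gst.State.PLAYING)

# wait for the end of the stream (or an error), then shut the pipeline down
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)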

I couldn’t really follow the rest of the talks as I was giving one myself and was busy talking to people afterwards. It’s really amazing how interested people are, and to see the angles they ask questions from.

29C3 – Not My Department

Just a quick note: 29C3 rocked. Awesome location, awesome people, awesome talks. Very nice indeed.

Very brief thumbs up: videos were available almost right after the talks, in stunning quality. There were live streams, too. How many conferences do you know that do that?

Also, I consider this to be particularly interesting.

Sorry to all those I couldn’t talk to long enough or at all. Hope to see you again next year!

Talking at FOSS.in 2012, Bangalore, India

As reported, FOSS.in took place this year, in Bangalore, India. I was fortunate enough to be invited again to this leading Free Software event in India, if not Asia.

Queueing people trying to get in to FOSS.in

The event hosted many very good people and it was a real pleasure to be surrounded by smart folks who love Free Software. It’s a real honour to be invited and to speak on the same stage as these people. And it’s an honour to be able to talk about Free Software in a so-called developing country and to help form the next generation of Free Software hackers.

There were many talks and I think I will follow up with a separate post about that.

My first talk went really well, I think (others seem to think so, too). The audience seemed to be genuinely interested and I enjoyed being on stage. At some point I need to revamp my slides though. I usually go with TeXed slides, but for the GNOME ones I keep using LibreOffice. One of the minor problems is that I want to play videos from within the presentation. I can do that (more or less) with LibreOffice, and PDF can do it too, but it is not working with my version of Evince :-\

Anyway, thanks to hasgeek.tv, we have recordings of FOSS.in (Day1, Day2, Day3)! And here is my first talk live on tape:

The second talk was a surprise for me, because I was told just a few hours in advance that I needed to give another one. Apparently someone couldn’t come and the slot needed to be filled. I jumped in and did my show. I was still a bit hung over from the night before, but it went off well, except for the fact that my laptop fell off the presenter desk. It’s still a bit shaky, so if you happen to have a spare machine that’s decent enough, let me know. Anyway, I have to say that I dislike being told only a few hours in advance that I have to give another talk, but I appreciated being considered the one who could entertain the people the most. Also very interesting was that I sat on a panel that Lenny moderated. I remember well when Lenny was asked to do that for the first time last year in Japan. He does it well and, again, I felt very honoured to be invited to sit next to all those important people, eventually being considered one of them. However, it appears that there are no videos of it yet.

As for the rest of the trip, we went to Sri Lanka and did a round trip there. An interesting country indeed. Very developed. Not as affordable as expected but still very good value for us whities.

Panorama from Sigiriya Rock

I hope that the FOSS.in team manages to pull it off again next year. I really believe that the event impacts the development of Free Software in the region. And without such an event, great opportunities are lost.

As usual, thanks to FOSS.in and the GNOME Foundation for supporting me to go there.

GUADEC 2012 in A Corunha


As so many people did, I attended GUADEC in A Corunha *yay*. Overall, the conference was well organised. The local team was really committed and helped us a lot with all our matters. Little details like providing fruit, sweets and chocolate for the hacking areas made everything just nice.

They also were very careful about keeping the news updated and the GUADEC website interesting. So they published interviews, photos and announcements regularly so one had an incentive to browse the website often. Very well and smartly done.

While I didn’t attend that many talks, I do think that the first keynote stood out. Jake Appelbaum gave a really inspiring talk about Tor and GNOME. He explained Tor and why it is important to provide anonymous internet access, not only for wrongdoers but even more so for regular people! For example, he mentioned that he had to use Tor at the venue because the WiFi would block SSH; so to get uncensored access to the network, he would use Tor. Another example was not telling Google where you are: you authenticate with your credentials, but not from your own IP, so you only share your location if you really want to. He had very clear proposals for GNOME and I hope to be able to share the list soon. I, personally, would like to see us communicate very clearly why we spy on our website users using Piwik.

The second keynote was a bit annoying, as she was referring to “open source” all the time although she really meant Free Software. Anyway, at the end of the day, I think her message was that other people exist that want a Free society and that we should not feel alone.

Between the talks, one could have a great time talking to people, especially during lunch. And when not talking, one could rely on the WiFi, which worked pretty well all the time. Quite amazing, actually. I am also amazed by the effort people put into things for GNOME. The locals, for example, put GNOME feet stickers on the ground and hung a daily sheet on the wall to indicate the day’s timetable. Daniel created an awesome yearbook for the GSoC and OPW students and Andreas created an annual report. Thanks for working so hard on cool GNOME things!

It also happened that we had our first in-person board meeting and I was very excited about that. We were quite productive during the rather long meeting, but afterwards I was quite exhausted; I guess it was the same for everyone involved. I am also quite happy to see two strong proposals for next year’s GUADEC. It will be great.

Also thanks to the GNOME Foundation for sponsoring my travel to this year’s GUADEC!

I realised again, though, that I don’t like Madrid airport and Iberia all too much. It’s a huge airport with no clear signage, too few benches and power outlets, and annoyingly loud and pointless passenger announcements. But well, it seems to be the cheapest in Spain…

Another huge round of “thank-yous” must go to the i18n team. It is just incredible how they manage to cater for so many languages, usually in close to no time. I have met many people at conferences or exhibitions who mentioned that if there was one success story in GNOME, it would be the translations. And the very fact that we get mails and bug reports in non-English languages shows the success of the team, namely giving a very native feel to the users. To show our appreciation, we went for dinner and had a very good evening with discussions, food and wine. Again: thanks!

PS: Here is the wishlist:

- Empathy should support OTR and it should be enabled by default (like Adium) – I heard this so many times, I nearly stopped asking for feedback at all!
- ZRTP/SRTP/TLS for all VoIP services (forward secrecy and strong crypto)
- Tor controller extension for gnome-shell – why settle for only having Vidalia?
- What if we could contextually launch applications anonymously? A 'Launch Torified' context for applications (perhaps with torsocks?)
- NAT? Who cares? How about 'single-click file sharing over hidden services'?
- Decentralized instant messaging – resist traffic analysis (Federated XMPP HS? For extra fun, add decentralized and anonymous offline message queuing.)
- network-manager improvements:
  - Ability to configure wireless networks before connecting to them
  - The VPN 'automatically connect' checkbox should work and no traffic should leak before the VPN comes up
  - VPN connections must fail closed
  - Ability to override DNS settings for all connections
  - macchanger support in network-manager
  - Random MAC addresses per connection or per if-up
  - Ability to use a Tor DNS resolver on an unprivileged port
  - Normal modem support
  - Full Tor support in NetworkManager – think of it as a free VPN
- Full Guest mode in GNOME/GDM that uses Tor by default for all network traffic – don't just refuse to write data to the disk, refuse to write information to the bare network too

FOSS.in 2012 \o/

FOSS.IN

After it didn’t happen last year, it will this year! I’m talking about FOSS.in, the premier Free Software conference in India, if not Asia. I’m very pleased to see that the team managed to pull the event off again. Also, everything seems to be very much on time this year, so I expect things to go down smoothly.

If you have something cool to share and want to attract a highly motivated audience that doesn’t only want to listen, but also wants to do something, then you should consider submitting a proposal. FOSS.in 2012 takes place, again, in Bangalore, India, from 2012-11-29 until 2012-12-01.

FOSS.IN

The Call for Papers is closing soon, so hurry up!

GNOME.Asia 2012 in Hong Kong

I had the great pleasure to be invited to GNOME.Asia taking place in Hong Kong and to give a talk there.

The first day started off with a very nice introduction by the local organizing committee. It is amazing how much energy they invest in Free Software, especially in GNOME. I think it’s outstanding, given that I don’t see that many contributors from eastern Asia and that I was told several times that the attitude in Free Software communities is discomforting, at best, to people from eastern Asian cultures. But maybe it’s because of GNOME’s rather friendly community that these people feel comfortable in GNOME. Let’s keep it that way.

The organizers greeting us

The main talks were given by westerners and I hope we (the westerners) could encourage the audience to believe in themselves and in GNOME. My old GNOME friend Andre Klapper and I talked about how to start contributing to GNOME as a member of the Bugsquad; we had already given a talk together a couple of GUADECs back. Our slides can be found here. With probably 75% of the conference attendees present, the talk was comparatively well attended, and I think it went well, too. We had a good and very unexpected discussion afterwards. That was very refreshing.

The crowd for our talk

The second day was filled with talks, too, although I didn’t find it as interesting as the first one, mainly because I couldn’t understand many talks. The language barrier was quite high for me, as my Chinese isn’t all too good 😉 While I do appreciate Free Software communities enabling everyone to have access to computing, e.g. by translating the software into every language in the universe, I do sometimes wonder whether we actually fragment ourselves and should rather concentrate on improving the actual code, especially since we are an international community holding international conferences. For isolated communities, it is crystal clear that translating everything into their languages is a major bonus. But since we eventually want to talk to each other and support each other, the translations are a bit of a hurdle to overcome. This point is very moot, though, because these people probably wouldn’t even know about Free Software, not to mention want to exchange thoughts, if the software wasn’t translated in the first place.

Allan Day talking about Every Detail Matters

There was actually one talk about Asian women’s participation in Free Software projects. But the talk disqualified itself quite early by bringing up the common biological argument that women’s brains are different and that they thus cannot code (sic!).

The *Women are too stupid to code* talk

I enjoyed the stay in Hong Kong so much that I decided to append two weeks of travelling through China. It was very hot and humid, and next time I’ll try to carry fewer things with me (although I do travel very lightly already).

Thanks a lot to the GNOME Foundation for making this possible for me. I also think that it helped to foster Free Software and GNOME in Hong Kong, China and Asia.

LinuxTag 2012

At this time of the year, there is a special thing happening in Berlin. It’s the annual LinuxTag, a mix of conference and expo. And again, we (GNOME) had a booth.

We shared the space with our friends from Qt and KDE, as we already did at the last FOSDEM, and we got along quite well. It’s good to see friends again and again.

The criticism from the last events, i.e. FOSDEM and LinuxTag, was taken into account. So I did get enough tape, glue, T-shirts and even a roll-up display *yay*. Thanks to the GNOME Foundation for providing resources.

However, compared to last year we had less material, because only one EventsBox was available, and we had less furniture for the booth, because LinuxTag lacks sponsors. So we had to deal with a non-ideal situation, but well, that’s how it always goes, no? And as we are engineers, we managed quite well, I’d say.

Unfortunately, we didn’t have any GNOME talk, so this is definitely something to improve for next year. You can already think about cool things to present in lovely Berlin. Interestingly enough, the computer we used to demo GNOME was very stable. Obviously, I wanted to show the freshest GNOME release, which was 3.4, but so far no distribution had a stable release which included the newest GNOME. So I used a Fedora 17 Beta, and well, some things crashed (reliably), but it was still very smooth. The webcam was the most annoying piece of hardware. But well, it was stolen quite early, so we didn’t have to bother too much about it 😉 So yeah, if you happen to have a spare webcam that works with a recent Linux and Cheese, we’d happily incorporate it into our EventsBox.

Generally though, people were interested in the newest developments and we had nice chats about the past and the future of GNOME. Unlike last year, we probably did not convince anybody to go to GUADEC (as it’s now in Spain and not in Berlin) 😉 We also couldn’t convince too many people to buy T-shirts. The dark green ones from the FOSDEM before last were quite popular, but as they are old, we only had four left to sell.

A big thank you to all the people helping out at the booth and of course to LinuxTag for providing us with the opportunity to present ourselves.

The talks I saw, which were not many, as I spent much time at the booth, were not really exciting. I saw Ulrich Drepper talking about Lock Free Data Structures on modern CPUs, which was, well, a bit slow for me. He seems to be very knowledgeable, but I think he presumed the audience not to be. Anyway, apparently modern Intel CPUs can do transactional memory, and you can already write code now that will use that feature in the future while staying compatible with today’s CPUs. You need a new enough toolchain though.

Some other guy talked about forking. I was curious, but he only delivered his story about forking Nagios. He didn’t mention any problematic aspects at all and was mainly concerned with establishing his own brand.

Christoph Wickert does the Beefy Miracle

I followed the “Distro Battle” for a short period of time. Basically, five contestants were to solve some problems a user could face with her distribution. So Mageia, Fedora, Debian, Kubuntu and OpenSuSE, each with their respective representative, were to solve problems like “install this printer” or “use this 3G USB dongle”. They had the chance to introduce themselves first. Mageia was running LXDE, Fedora had a GNOME 3.2, Debian a GNOME 2, and Kubuntu and OpenSuSE were running some recent KDE version. The Kubuntu representative introduced her distro by showing how easy it was to install all the non-free packages, stating that this would be the very first thing you’d want to do on a fresh install. Funnily enough, Kubuntu self-destructed with a reboot into memcheck: apparently she had aborted the install at a very unpleasant moment, when no kernel was ready, so the GRUB menu didn’t have any option other than memcheck. The non-GNOME desktops failed to get the 3G dongle to work, while NetworkManager sorted that out on the GNOME desktops. The printing failed completely in OpenSuSE because they used their YaST tool, and Debian had a minor issue with ZeroConf not working.

So it’s quite a funny concept, this “Distro Battle”, although nowadays the GNU/Linux base is rather streamlined, isn’t it? It doesn’t matter much which distro you use in order to get a printer or a 3G dongle running, unless you try to implement your own stuff.

19th DFN Workshop 2012

The 19th DFN Workshop happened again *yay* and I was lucky enough to be able to take part 🙂

After last year we all knew the venue and it’s great. The hotel is very professional and the receptions are very good. The conference room itself is very spacious and well equipped for having a couple of hundred people there.

So after a first caffeine infusion the conference started and the first speaker gave the keynote. Tom Vogt (from Calitarus GmbH) talked about Security and Usability and he made some interesting points. He doesn’t want more “Security Awareness” but more “User Awareness”. He claims that users are indeed aware of security issues but need to be properly communicated with. He gave Facebook as an example: if you log in wrongly a couple of times, Facebook will send you an email apologising for the trouble *you* had while logging in. As opposed to the “if the question is stupid, the helpdesk will set you on fire” attitude.

So instead of writing security policies with a lot of rules, he wants us to write policies that take the user’s view into account and make sense to the average user. He also brought up passwords and password policy: instead of requiring at least 8 characters (which will be read as “exactly 8 characters” by the user anyway), one should encourage a more sensible strategy, e.g. the XKCD one.
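
For illustration (my sketch, not from the talk), an XKCD-style passphrase is trivial to generate; this assumes a word list at /usr/share/dict/words, which many distributions ship:

import secrets

# assumption: a plain word list, one word per line
with open('/usr/share/dict/words') as f:
    words = [w.strip() for w in f if w.strip().isalpha()]

# four randomly chosen words: easy to remember, plenty of entropy
passphrase = ' '.join(secrets.choice(words) for _ in range(4))
print(passphrase)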

He also disliked the metaphors we’re using all the time, e.g. when we talk about documents or crypto keys. A document is something static that you hold in your hand; it can’t do any harm. But a Word “document” is something quite different, because there are macros and whatnot. And it’s not a big problem to temporarily give away physical keys, but in the crypto world it is. People, he claimed, make those associations when confronted with these terms. Unfortunately, he didn’t have a fix for those long-established metaphors, but he said extra caution needs to be applied when talking in these terms.

Dissonance was another big thing. He claimed that it’s problematic that starting a program and opening a file is the very same action in modern operating systems. If opening a document were triggered differently, the user could see whether the document they received was indeed a text file or some binary gibberish.

And well, as the talk was titled “Usability”, user interfaces were criticised, too. He mentioned that dialogues are very rude: they are the equivalent of physically holding someone until they answer a question. That trains users to avoid and escape dialogues as quickly as possible without even reading them, totally defeating their purpose. So we should only use them in a “life or death” situation, where it would be okay to physically hold someone. And well, “user errors are interface errors”.

My favourite usability bug is the whole Keysigning story. It’s broken from beginning to end. I think that if we come up with a nice and clean design of a procedure to sign each others keys, the Web of Trust model will be used more and more. Right now, it’s an utterly complex process involving different media and all that is doomed to be broken.

After that, a guy from the Leibniz-Rechenzentrum talked about internal perpetrators in university data centres. They basically introduced Login IDS, a tool to scrub your logs and make them more administration-friendly. He said that they didn’t watch their logs because it was way too much data. They had around 800 logins per day on their two SSH and two Citrix servers, and nobody really checked who was logging in when. To reduce the amount of log data, they check the SSHd log and fire different events, e.g. if someone logs in for the very first time, or if a user hasn’t logged in at that time of day or from that IP before. That, he claimed, reduced their log volume to 10% of the original. Unfortunately, the git repo shows a single big and scary Perl file with no license at all 😐
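
I have not studied Login IDS itself, but the idea of only reporting “unusual” logins is easy to sketch; here is a toy Python version (not their Perl code) that flags the first time a user/source combination shows up in an sshd log fed via stdin:

import re
import sys

seen = set()
# matches lines like: "Accepted publickey for alice from 192.0.2.1 port 22 ssh2"
pattern = re.compile(r'Accepted \S+ for (\S+) from (\S+)')

for line in sys.stdin:
    m = pattern.search(line)
    if not m:
        continue
    user, source = m.groups()
    if (user, source) not in seen:
        seen.add((user, source))
        print('first login of {} from {}'.format(user, source))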

Another somewhat technical talk followed, given by Michael Weiser. He talked about security requirements for modern high-performance computing environments, and I couldn’t really follow all the way through. But from what I understood, he wants to be able to execute big jobs and have all the necessary Kerberos or AFS tokens, because you don’t know how long you’ll have to wait until you can process your data. He showed some solutions (S4U2self) and proposed another one which I didn’t really understand. But apparently everything needs to be very complex, because you cannot get a ticket that’s valid long enough; instead you get a ticket-granting ticket which empowers you to get all the tickets you want for a basically unlimited amount of time…?

The break came up just at the right time, so the caffeine stock could be replenished. It did get used up quite quickly 😉

The first talk after the break introduced HoneypotMe, a technology that enables you to put honeypots on your production machines without risking having them compromised. It basically creates tunnels for the ports that are open on the honeypot but not on the production machine, so an attacker would not detect the honeypot that easily, although it’s kinda nonsensical for a Linux machine to have the MSSQL port open. Interesting technology, although I don’t quite understand why they put the honeypot behind the production machine (network topology wise), so that you have to modify the TCP stack on the production machine in order to relay connections to the actual honeypot. Instead, one could put the honeypot in front and relay connections to the production machine; that way, one would probably avoid plumbing the TCP layer on the machine that’s meant to serve production purposes.

Another really technical talk was given by a guy from the research centre in Jülich. It was so technical that I couldn’t follow, and Jesus Christ, were the slides packed. The topic was quite interesting though; it’s unfortunate that it was such an exhausting presentation. He tried to tell us how to manage IPv6, or well, how to better damn manage it, because otherwise you’ll have loads of trouble in your network. He referred a lot to the very interesting IPv6 toolkit by THC. He claimed that those attacks are not easy to defend against. But you don’t even need an attacker, he said: Windows would be enough to screw up your network, e.g. with Internet Connection Sharing somehow configured it would send weird Router Advertisements. But I might have gotten that wrong, because he was throwing lots of words and acronyms at us. NDPMON. RAPIXD. RAMOND. WTF. Fortunately, it was the last talk and we could head off to have some proper beer.

After way too little sleep and ridiculous amounts of very good food, the second day started off with a very great talk by a guy from RedTeam Pentesting. He did very interesting research involving URL shortening services and presented his results to us, some of which are quite scary. If you’re remotely interested in this topic, you should have a look at the paper once it is available. There is a slightly different version here.

So the basic problem was described as follows: A user wants to send a link to a friend but the URL is too long so that email clients break it (well, he didn’t mention which though) or Twitter would simply not accept it… We kinda have to assume that Twitter is a useful thing that people do actually use to transmit links. Anyway, to shorten links, people may use a service that translates the long URL into a short one. And now the problems start.

First of all, the obvious tracking issues arise: the service provider can see who clicks on which links and, even worse, set cookies so that users are identifiable even much later. Apparently, almost all of these services make use of tracking cookies which last for a couple of years. Interestingly, Google is reported not to make use of tracking technologies in their URL shortening service.

Secondly, you eventually leak a secret which is encoded in the URL you are shortening. And that’s apparently what people do: they use Google Docs or other sensitive webapps that encode important access tokens in the URL, which you then throw with both hands at the service provider. He claimed to have found many interesting documents, ranging from “obviously very private photos” through balance sheets of some company to a list of addresses of kindergarten kids. He got a good percentage of private documents, which was really interesting to see.

But it gets worse. He set up a brand new web server listening on a brand new domain (fd0.me) and created URLs which he then shortened using the services. On the page his webserver delivered was a password which no search engine knew back then. The question was: Do URL shortening services leak their data to search engines? Or worse: Do they scan the database for interesting looking URLs themselves? Turns out: Yes and yes. He found his password on search engines and curious administrators in his webserver log.

Other obvious problems include loss of the URL. Apparently people do use shortened URLs in long-lasting things like books. And well, URL shortening services are not necessarily known for being long-lived. Fun fact: his university used to have such a service, but they shut it down…

Another technical issue is speed. Because of the indirection, you have an overhead in time. Google are the winner here again. They serve the fastest.

So yeah that was a very interesting talk which clearly showed the practical risks of such services.

An electronic ID card was introduced in Germany rather recently, and the next speaker did some research (sponsored by the Ministry of the Interior) to explore the “eID Online Authentication Network Threat Model, Attacks and Implications”. Nobody in the audience actually used the eID, so he had to tell us what you are supposed to do with it. It is used to authenticate data like your name, address and birthday, or just the fact that you are of legal age. It’s heavily focussed on browser stuff, so the scenarios are a bank or a web shop. After the website requests eID functions, the browser speaks to the local eID daemon, which then wants to read your eID and communicates with the servers. It turns out that everything seems to be quite well designed, except, well, the browsers. He claims it is possible to man-in-the-middle a connection if one can make a browser terminate a successfully opened connection, i.e. after all the TLS handshakes have finished, one terminates the connection and intercepts it, and no further verification is done. A valid attack scenario, though not necessarily an easy position to be in.


There were tiny talks as well. My favourite was Martin John from SAP talking about cross-domain policies. Apparently, standards exist to “enhance” the same-origin policy and enable JavaScript in browsers to talk to different domains. He scanned the internet^tm and found 3% of domains to have wildcard policies; 50% of those hosted in some way sensitive webapps, e.g. with authentication. He closed by recommending the use of CORS for cross-domain needs.
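
The point of CORS is that the server names the origins it trusts instead of opening up with a wildcard. A minimal Python sketch of that idea (my example; example.org stands in for a hypothetical trusted origin) might be:

from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGINS = {'https://example.org'}  # hypothetical trusted origin

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        origin = self.headers.get('Origin', '')
        self.send_response(200)
        # echo back only explicitly trusted origins, never a '*' wildcard
        if origin in ALLOWED_ORIGINS:
            self.send_header('Access-Control-Allow-Origin', origin)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'hello')

if __name__ == '__main__':
    HTTPServer(('localhost', 8080), Handler).serve_forever()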

The last two talks were quite interesting. The first one was about XML Signature Wrapping, a technique that I hadn’t heard of before, mostly because I’m not into XML at all. But it seems that you can sign parts of an XML document and, well, because XML is utterly complex, libraries fail to handle that properly. There are several attacks, including simply reproducing the signed XML tree with different properties and hoping that the parser verifies the correct tree but works on the other one. Simple, huh? He claimed to have found CVE-2011-1411, a vulnerability in an interesting user of XML: SAML, an authentication protocol based on XML.

Afterwards, I was surprised to see an old tool I was playing with some time ago: Volatility. It gained better Linux support and the speaker showed off some features and explained how to make it support your Linux version. Quite interesting to see that people focus on bringing memory forensics to Linux.

So if you are more interested in the topics, feel free to browse or buy the book which includes all the papers.

This year’s DFN Workshop was much more interesting content-wise and I am glad that it managed to present interesting topics. Again, the setting and the catering were very nice and I hope to be able to attend many more DFN Workshops in the future 🙂

GNOME @ FOSDEM 2012

It’s that time of the year again: FOSDEM happened in Brussels, Belgium. Probably the biggest gathering of Free Software people was a lot colder than last year; everything was covered in snow, so badly that we had big trouble getting into Brussels. It took us almost twice as long as usual to arrive. The streets were packed with cars suffering from the severe conditions.

But all that didn’t stop us (GNOME, that is) from having a nice presence. If you know FOSDEM, you’d expect the booth to be in a packed and smelly area, because all the people try to move along the tiny hallway. But this year was different, because we got a spacious place in a new building. That was cool, because it gave us much more room to move than usual, but unfortunately it made the conference feel much more disconnected, as there was yet another building involved. I didn’t even try to visit each and everything.

Anyway, from what I’ve seen, we had quite a good stand. Our friends from OpenSuSE received the most attention though, and rightfully so: not only did they have almost free beer and nearly free other goodies, but they had nice hardware, nice demos and nice people presenting. Fortunately, we were located right next to our friends from KDE, which enabled us to chit-chat with well-known people and to plan conspiracies for upcoming conferences in 2013. So stay tuned for that.

With the help of local GNOME people, we had our EventsBox, which is well equipped. But well, since we had only one box, we weren’t as well equipped as last time at LinuxTag. We had loads of T-shirts from the Desktop Summit, though, which we tried to sell. I especially liked the name tags we had; somebody just got them printed, so we looked much more inviting, I guess. Also cool were the posters we were provided with, so we could pimp the glass wall behind us. It’s cool that we have people who provide such things just like that. We didn’t have the appropriate tools to handle the posters well: we used regular adhesive tape (which we ran out of in the middle of the day), which kinda destroys the posters. From our KDE friends we got some “blue-tac” or “patafix”, which was really, really great. Apparently it’s well known in the western end of Europe. I didn’t know it at all, but I now know that we definitely want to have this next time.

We also didn’t have blank sheets of paper to write stuff on, which was a bit annoying. But well, we didn’t have markers either, so we needed to get both first before being able to inform people about the t-shirt prices.

Needless to say that our demo machine got upgraded to the latest Fedora and that this broke at least the webcam. I mean, it was supported in the last Fedora version, so it’d be boring if it was supported now, too. But the Fedora people had a nice gimmick to give away: a cheat cube, which is just a cleverly cut piece of paper that you can fold up to build a cube. You’re supposed to put it on your desk and use it to retrieve information quickly. I was wondering whether we could make something like that for GNOME Shell. Oh, and while we’re at it: many folks had roll-up displays which look very nice. They are around 1m wide and 2m high and you can have your big design on them. It doesn’t cost all too much, but we’d need a proper motif first. So if you have any idea, feel free to discuss it in the wiki. Another thing that was annoying were our flyers: while it’s good that we had some, they were quite outdated. So we badly need some flyer material. Again, the wiki is the place to show up.

So a big big thanks to the folks that helped out at the booth to make it rock. I hope we can make it work next year again.

There was also, again, a massive keysigning going on, and I have to drop a quick rant about all that mess. caff on Fedora is kinda weird. It seems like the defaults in the man page don’t match the code, e.g. the keyserver defaults to a different server than the man page states. And, very annoyingly, it’s also different from the GPG settings! So while trying to use caff, it failed to download the keys. I guess the server just hit a timeout or blocked my request altogether because it’s so many keys (/.-)
After having that sorted out, gpg asked me to hit “y” all the time to confirm that I really wanted to sign the keys. Goddamnit. It’s about 100 keys and I sign with 4 private keys or so. Now I have to make 800 keypresses to get the fork()ing keys signed and mailed. Jesus Christ. It’s fricking 2012 and not 1972 anymore. I just want to conveniently sign the whole damn thing and not buy a new keyboard after each FOSDEM, not only because I have to type so many keys but also because I feel the urge to smash it into someone’s face. Not only did I need some shell-fu to get the keys imported, I also needed to fiddle with the fingerprints from the official key list, because caff wouldn’t accept them. The format, though, is the format gpg uses to display fingerprints… So I had to do something like

cat /tmp/ksp.txt | tr -d ' ' | tr '\n' ' '

to get the proper format… And yeah, I’ll patch everything.. tomorrow…

Although I haven’t seen much of Brussels this time, I liked it being covered in snow and ice. I hope to be able to get more out of Brussels next time, and especially to improve my French 😉 So yeah, I’m looking forward to next year.

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported.