Posts Tagged ‘conference’

Desktop Summit 2011 in Berlin

Wednesday, September 28th, 2011

This year's GUADEC^W Desktop Summit took place in Berlin. Of course I attended :-) Due to loads of stuff happening in the meantime, I didn't get around to actually writing about it. But I still want to mention a few things.

It was, as always, pretty nice to see all the faces again and catch up. The venue was almost excellent and provided good lecture halls and infrastructure, although the wireless was a bit flaky and spots to sit down and get together were sparse. Anyway, I have never seen so many actual users or wannabe users. Being in the heart of Germany's capital definitely helped to make ourselves visible. Funnily enough, I met some folks whom I had chatted with at LinuxTag while I was presenting GNOME. I had invited them to come to the Desktop Summit and so they did \o/

There were many talks and I didn't see most of them. In fact, I was volunteering and meeting people, so I couldn't attend many lectures. But there was nothing I regret not having seen. The ones I did see were interesting enough, but not groundbreaking. There's a good summary over here.

We tried to record the talks, but for technical reasons it didn't work out of the box: the network was too slow and no disks were available. We convinced the guy in charge to let us buy disks, which eventually got used, but I actually don't know whether the lectures will be released at all.

A nice surprise was Intel giving away ExoPCs. In return they required you to sit and listen to presentations about their app store thingy called "AppUp". Apparently it is a technology that tries to resemble OBS (distributing software via the web) and .deb files plus Synaptic (distributing software with a native GUI), with an additional payment layer in between. But it fails big time to do so. Not only can it not build binaries out of the sources that you give it, it also can't track dependencies. Welcome to 2011. Needless to say, it's heavily targeted at Windows. Double fail that you can't build Windows software for their store without having a Windows platform (and development tools) yourself.

The PC itself is neat. It's a full tablet with only one soft key. The MeeGo version that came with it was out of date and updating was a major pain in the afternoon. It involved getting a USB keyboard, because the on-screen keyboard would of course not show up when you opened a terminal. And you needed a keyboard because, with that MeeGo version, you could of course not update the software any other way: there is no software management application at all. And most certainly, Intel's new AppUp thing is not included in the latest and greatest release. In fact, it's not even easily installable, as it involves googling for the RPM file and installing it manually. I have since talked to Intel engineers and it seems that an actual vendor is supposed to integrate their version of the AppUp thing into the rest of the OS, so Intel doesn't see itself in the position to do this. Other weird glitches include the hardware: while the ExoPC has a rotation sensor, it would be way too boring if it worked; so it doesn't. And the hardware turns itself off after a while. Just like that. It feels like we have at least three years of engineering left before we can start dreaming of being able to ship a tablet platform that is ready for day-to-day use. I have to note that the content-centric UI approach is definitely very handy. Let's hope it improves by fixing all those tiny things around the actual UI.

The discussion about a joint conference was bubbling up again. Of course. The main argument against a joint conference seems to be that meeting together is considered to slow GNOME's development down, because we have to give time to the other people and cannot run our own programme in the meantime. There are probably many variations of that argument, including that we do not collaborate anyway, so let's rather not invest time in a joint conference.

While I do agree that the current form of the conference is not necessarily optimal, I don't think that we should stop meeting up together. At the end of the day, multiple (desktop) implementations or technologies are just pointless duplication. So let's rather try to unify and be a unity (haha, pun intended) instead of splitting up further. I am very well aware of the fact that it's technically unrealistic right now. But that's the future we should strive for, and we should work on making it possible, not work against it. We don't necessarily need a fully joint conference; after all, our technologies do differ quite substantially, depending on how you look at it. But let's give each other the opportunity to learn about the other technologies and speak to the key people. We might not fully make use of these opportunities yet, but if we design the conferences in a way that the camps have enough time to handle their internal issues and do a joint thing before or afterwards, then no harm is done if opportunities aren't taken.

Another hot topic was Copyright Assignments. There was a panel made up of interesting people, including Mark Shuttleworth. The discussion was alright, but way too short: they barely had 45 minutes, which made it less than 15 minutes each, barely enough to get a point across. And well, I didn't really understand the arguments *for* giving away any of your rights. It got really weird when Mark tried to make another point: that it would be generous for a contributor to donate code to a company, and that generosity would be a strong motivation for contributors. I haven't seen that point discussed anywhere yet, so let me start by saying that it is quite absurd. What could be more generous than giving your code to the public and ensuring that it stays freely available?! Just to be very clear: the GPL enables you to do exactly that: release code and ensure that it remains free.

I was delighted to see that the next GUADEC will take place in A Coruña. I'd rather have gone to Prague though. But maybe the Czech team can be motivated to apply again next time.

LinuxCon Japan 2011

Thursday, June 30th, 2011

Thanks to the Linux Foundation I was able to attend LinuxCon 2011 in Japan.

I used the opportunity to distribute GNOME 3 DVD images and leaflets during my talk about GNOME 3, which was well enough received, I'd say. While I have collected a lot of experience over the last few months approaching people and telling them about all the niceties that GNOME 3 offers, I really had too little time to tell all the brilliant things about our new GNOME. Anyway, it was nice to be on the very same schedule as the very important Linux people like Greg KH, Linus or Lenny.

The conference itself was hosted in a very spacious building: The Pacifico in Yokohama. One could see that impressive building from our hotel room. Just nice. The conference was well organised and the provided amenities such as food and drinks were good enough. I was particularly impressed by the simultaneous translations that were done by two elderly men.

The talks were generally interesting, probably because I hadn't been to a kernel-focused conference before and I found it interesting to get new input. My favourites were the Kernel Developer Panel, where one could pose questions to the kernel people face to face, and the talks about the social aspects of kernel development.

Despite all the trouble in Japan, we had a very good time and in fact there weren't many indicators of the earthquake or the nuclear catastrophe. The most annoying inconvenience probably was the turned-off elevators. Other than that, we didn't really see any disrupted services or chaos or problems at all. Travelling in Japan is a real pleasure as the train system is gorgeous and the cities are very well mapped: you encounter a city map on just about every other corner and it's very detailed and helpful. Japanese people are extraordinarily friendly and although there is a language barrier, they try to understand and help you. The downside is that Japan is quite expensive, especially the train system, but also lodging and food. However, the quality is very good, so it's probably worth the money.

I'm looking forward to attending the next LinuxCon, maybe even in Japan :-)

GNOME at LinuxTag 2011

Tuesday, May 17th, 2011

Last week, I had the pleasure of attending LinuxTag and managing the GNOME booth. All in all, the GNOME booth went quite well. We had loads of visitors wanting to see the new GNOME Shell and discuss its design. But it was such a busy time that I didn't even have the opportunity to leave the booth and look at all the other projects. It was, however, pretty nice. It took me a day to recover though: being at the booth for all four conference days in a row from (ideally) 09:00 until 18:00, always smiling and entertaining, was quite exhausting.

To help the GNOME presence, I printed flyers and posters all day before LinuxTag. It was a pain to do, because we are lacking good material. We do have some brochures to print out, but they are either outdated or of miserable quality. GNOME definitely needs some quality brochures. We have more posters and some of them are really nice, but I couldn't render some of them because of bugs somewhere in the stack. Anyway, I managed to print posters on A4 paper, which meant that they had to be glued together… To ease poster printing in the future, I uploaded the PDFs I generated to the wiki.

What worked well was our booth setup: we had posters, stickers, flyers and (thanks to openSuSE) GNOME 3 Live DVDs to give away. Our booth also looked nice with GNOME banners hanging from the walls, and the ordered furniture made it look inviting from the outside: a presenter desk, a long cupboard and a bar table together with bar chairs. However, we lacked a small table and some chairs to cater for the many friends that were in the booth rather than in front of it. Thanks to all the helping people; it was really awesome how quickly our booth looked nice.

Fortunately, there is room for improvement. It would have been nice if we had brought, e.g., T-shirts to sell or posters and flyers for GUADEC. But everything was still really okay. I hope we manage to do as well next year, too.

So thanks to Canonical for the EventsBox and to openSuSE for the DVDs! If you happen to be in need of some of the DVDs, give me a shout and we'll arrange the shipping.

RFID Workshop at CampusGruen’s Datenschutzkongress

Tuesday, April 5th, 2011

I was asked to give a workshop about RFID at the CampusGruen Datenschutzkongress in Hamburg. So I did :-)

I used the opportunity to introduce the audience to the basics of RFID, i.e. what technologies exist and what they are used for. I also took arguments from pro- and anti-RFID groups and put them up for discussion.

You can have a look at the slides, although I doubt that they make much sense without actually having heard what was said alongside them. We spent a good two hours talking and discussing over my twenty-something slides. Thanks again to the interested audience.

Afterwards, we had a small hacking session. I brought some RFID readers, tags, a passport, etc. and we used all that to play around. We also scanned some wallets to find out whether anybody had unwanted chips in their wallet.

GNOME @ FOSDEM 2011

Sunday, February 27th, 2011

I am very excited about having attended this year's FOSDEM. Unfortunately, times were a bit busy, so I am a bit late reporting about it, but I still want to state a couple of things.

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting (I wonder how that image will look in 2012 ;-) )

First of all, I am very happy that our GNOME booth went very well. Thanks to Frederic Peters and Frederic Crozat for manning the booth almost all the time. I tried to organise everything remotely and I'd say I partly succeeded. We got stickers, t-shirts and staff for the booth. We lacked presentation material and instructions for the booth though. But it still worked out quite well. Next time, I'd try to communicate more clearly who is doing what, to prevent duplicate work and to ensure that people know who is responsible for what.

Secondly, I'd like to thank Canonical for their generosity in sponsoring a GNOME Event Box. After the original one went missing, Canonical put stuff like a PC, a projector, a monitor and lots of other things together for us to be able to show off GNOME 3. The old box, however, turned out to be back again *yay*!

Sadly, we will not represent GNOME at the upcoming CeBIT. But we will at LinuxTag, at the latest.

Anyway, during FOSDEM we got a lot of questions about GNOME 3 and Ubuntu, i.e. whether it will be easily possible to run GNOME 3 on Ubuntu. I hope we can make a smooth transition from Unity to GNOME Shell possible. Interestingly enough, there isn't a gnome-shell package in the official natty repositories yet :(

It was especially nice to see and talk to old GNOME farts. And I enjoyed socialising with all the other GNOME and non-GNOME people as well. Sadly, I didn’t like the GNOME Beer Event very much because it was very hot in the bar so I left very quickly.

So FOSDEM was a success for GNOME I’d say. Let’s hope that future events will work at least as well and that we’ll have a strong GNOME representation even after the GNOME 3 release.

DFN Workshop 2011

Friday, February 18th, 2011

I had the opportunity to attend the 18th DFN Workshop (I wonder what that link will look like next year) and since it's a great event I don't want you to miss out. Hence I'll try to sum up the talks and the happenings.

It was the second year for the conference to take place in the Hotel Grand Elysee in Hamburg, Germany. I was unable to attend last year, so I didn't know the venue. But I am impressed: it is very spacious, friendly and well maintained. The technical equipment seems to be great and everything worked really well. I am not too sure whether this is the work of the hotel or of the Linux Magazin though.

After a welcome reception which provided a stock of caffeine that should last all day long, the first talk was given by Dirk Kollberg from Sophos. Actually his boss was supposed to give the talk but cancelled on short notice, so he had to jump in. He basically talked about scareware and how it has become a big business.

He claimed that it used to be cyber graffiti but has nowadays turned into cyber war, and that Stuxnet is a good indicator of that. The newest trend, he said, was that a binary would not only be compressed or encrypted by a packer, but that the packer itself would use special techniques like OpenGL functions. That is a problem for the simulators commonly used in antivirus products.

He investigated a big Ukrainian company (Innovative Marketing) that produced a lot of scareware and was in fact very well organised. Apparently not from a security point of view, though, because he claimed to have retrieved a lot of information via unauthenticated HTTP. And I mean a lot: from the company's employee address book, through ER diagrams of internal databases, to holiday pictures of the employees. Almost unbelievable. He also discovered a server that malware was distributed from and was able to retrieve its statistics page, which showed how much traffic the page made and which clients with which IPs were connecting. He claimed to have scraped the page periodically and compiled a map of IPs per country; the animation he showed covered about 90 scraped days. I was really wondering why he didn't contact the ISP to get that thing shut down. So I asked during Q&A, and he answered that it would have been of no benefit to Sophos, because then they wouldn't have been able to gain more insight. That is obviously very selfish: instead of doing good for the whole Internet community, they only care about themselves.

The presentation style was a bit weird indeed. He showed and commented on a pre-made video which took 30 minutes of his 50-minute presentation slot. I found that rather bold. What's next? A pre-spoken narration which he'll just play while standing on the stage? Really sad. But the worst part was when he showed private photos of the guy from that Ukrainian company, which he had found by accident. I also told him that I found it disgusting that he pilloried that guy in public and showed off his private life. The people in the audience applauded.

A coffee break made us calm down.

The second talk, about Smart Grids, was given by Klaus Mueller. Apparently Smart Grids are supposed to be the next big thing in urban power networks. A Smart Grid is supposed to be a power *and* communications network, and the household or every device in it would be able to communicate, i.e. to report or adapt its power consumption.

He depicted several attack scenarios and painted multiple catastrophic pictures, e.g. what happens if that Smart Grid system is remotely controllable (which it is by design) and also remotely exploitable, so that you could turn off the power supply for a home or a house?
The heart of the Smart Grid system seems to be the so-called Smart Meters, which would ultimately replace traditional, mechanical power consumption meters. These Smart Meters would of course be designed to be remotely controllable, because you will have an electric car which you only want to charge when power is at its cheapest price, i.e. at night. Hence, the power supplier would need to tell you when to turn on the car charger, the dishwasher or the washing machine.

Very scary, if you ask me. And even worse: apparently you can already get Smart Meters right now! For some weird reason, he didn't look into them. I would have thought that if he was interested in the topic, he would buy such a device and open it up. He didn't even have a good excuse, such as a lack of time or legal reasons. He gave a talk about attack scenarios on a system which is already partly deployed, but without actually having had a look at the rolled-out thing. That's funny…

The next guy talked about Smart Grids as well, but this time more from a privacy point of view, although I was not really convinced. He proposed a scheme to anonymously submit power consumption data. The problem is that the Smart Meter submits power consumption data *very* regularly, i.e. every 15 minutes, and that the power supplier must not know exactly how much power was consumed in each and every interval. I can follow and highly appreciate that: after all, you can tell exactly when somebody comes back home, turns the TV on, puts something in the fridge, makes food, turns the computer on and off and goes to bed. That kind of profile is dangerous, albeit very useful for the supplier. Anyway, he proposed submitting aggregated usage data to the supplier and came up with self-made protocols instead of looking into the huge body of cryptographic protocols which were designed for anonymous or pseudonymous data submission. During Q&A I told him that I had the impression the proposed protocols and the crypto had been designed on a Sunday evening in front of the telly, and asked whether he had actually had a look at any well-reviewed cryptographic protocols. He hadn't. Not at all. Instead he had pulled some ad-hoc protocols out of thin air which he thought were sufficient. But of course they were not, which became clear during the Q&A. How can you submit a talk about privacy and propose a protocol without actually looking at existing crypto protocols beforehand?! Weird dude.
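
Just to illustrate what "looking at existing approaches" could have meant (this is my own toy example, not the speaker's scheme): one textbook building block is pairwise additive blinding, where the meters jointly add random masks that cancel out in the sum, so the supplier only ever learns the aggregate of an interval, never the individual readings. A minimal, non-production Python sketch with made-up readings:

    import random

    PRIME = 2**61 - 1  # do the arithmetic modulo a large prime

    def blind_readings(readings):
        """Each pair of meters shares a random mask; one adds it, the other
        subtracts it, so all masks cancel in the sum of the blinded values."""
        n = len(readings)
        masks = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r = random.randrange(PRIME)   # toy only: not a secure RNG
                masks[i][j] = r
                masks[j][i] = -r
        return [(x + sum(masks[i])) % PRIME for i, x in enumerate(readings)]

    def aggregate(blinded):
        """The supplier sums the blinded values and obtains the real total."""
        return sum(blinded) % PRIME

    readings = [120, 70, 300, 50]             # one 15-minute interval, in Wh
    assert aggregate(blind_readings(readings)) == sum(readings) % PRIME

A real protocol would of course also need key agreement between the meters, authentication and robustness against meters dropping out, which is exactly the kind of prior work one should look at before rolling one's own.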

The second-to-last speaker was a bit off, too. He had interesting ideas though, and I think he was technically competent. But he first talked about home routers being vulnerable to getting hacked and becoming part of a botnet, then switched to the PCs behind the router becoming part of a botnet, and then talked about installing an IDS on every home router which not only tells the ISP about potential intrusions but is also controllable by the ISP, i.e. "you look like you're infected with a bot, let's throttle your bandwidth". I didn't really get the connection between those topics.

But both ideas are a bit weird anyway. Firstly, your ISP sees the exact traffic it routes to you in any case, hence there is no need to install an IDS on your home router; the ISP already has that information, and its IDS will be much more reliable than some crap IDS deployed on a crap Linux running on crappy hardware. Secondly, having an ISP which is able to control your home router to shape, shut down or otherwise influence your traffic is really off the wall. At least it is today. If he assumes the home router and the PCs behind it to be vulnerable, he can't trust the home router to deliver proper IDS results anyway. Why would we then want the ISP to act upon potentially malicious data coming from a potentially compromised home router? And well, at least in the paper he submitted he tried to do an authenticated boot (in userspace?!) so that no hacked firmware could be booted, but that would require the software in the firmware to be secure in the first place; otherwise the brilliantly booted device would be hacked at runtime, as per the first assumption.

But I was so confused by him talking about so many different things that the best question I could have asked would have been what he was actually talking about.

Finally, somebody with practical experience talked: Stefan Metzger presented how they do security monitoring at the Leibniz Rechenzentrum. He showed us their formal steps and how they were implemented. At the heart of their system was OSSIM, which aggregates several IDSs and provides a neat interface to search and filter. It wasn't all too interesting though, mainly because he talked very sleepily.

The day ended with a lot of food, beer and interesting conversations :-)

The next day started with Joerg Voelker talking about iPhone security. Being interested in mobile security myself, I really looked forward to that talk. However, I was really disappointed. He showed what more or less cool stuff he could do with his phone, i.e. setting an alarm or reading email… Since it was so cool, everybody had one. He also told us what important data lives on such a phone. After his motivation part, which lasted very long and showed many pictures of supposedly cool applications, he showed us which security features the iPhone allegedly has, i.e. code signing, hardware and file encryption, and a sandbox for processes. He read out the list without indicating any problems with those technologies, but eventually said that pretty much everything was broken: apparently you can jailbreak the thing to make it run unsigned binaries, get a dump of the disk with dd without having to provide the encryption key, and use other methods that render the protection mechanisms useless. But he suffered from a massive cognitive dissonance, because he kept praising the iPhone and how cool it was.
When he mentioned the sandbox, I got suspicious, because I had never heard of such a thing on the iPhone. So I asked him whether he could provide details, but he couldn't. It appears that it's a policy thing and that your application can very well read and write data outside the directory it is supposed to use; Apple just rejects applications when they see them accessing files they shouldn't.
I also asked him which of the protection mechanisms shipped by Apple on the iPhone actually work. He claimed that, with the exception of the file encryption, none of them do. I told him that the file encryption is proprietary code and that it appears to be a deliberate user-experience decision that the user does not need to provide a password for syncing files, hence a master key will decrypt files while syncing.

That leaves me with the impression of an enthusiastic Apple fanboy who needed to justify his iPhone usage (hey, it's cool) without actually having had a deeper look at how the stuff works.

A refreshing talk was given by Liebchen on physical security. He presented ways and methods to get into buildings using very simple tools. He is part of the RedTeam Pentesting team and apparently gets commissioned to break into buildings in order to get hold of machines, data or the network. He told funny stories about how they broke in. Their tools included a "Keilformgleiter" (a wedge-shaped door shim), "Tuerfallennadeln" (door-latch needles) and a "Tuerklinkenangel" (a door-handle fishing tool).
Once you're in, you might encounter glass offices, which have the advantage that, since passwords are commonly written on Post-its and stuck to the monitor, you can snoop the passwords by using a big lens!

Peter Sakal presented a so-called "Rapid In-Depth Security Framework" which he developed (or so). He gave an introduction to secure software development and what steps to take in order to have a reasonably secure product. But all of that was very high-level and not really useful in real life. I think his main point was that he had classified around 300 fuzzers, and if you needed one, you could call him and ask. I expected way more, because he teased us with a framework and introduced the whole fuzzing topic, but didn't actually deliver any framework. I really wonder how the term "framework" even made it into the title of his talk. Poor guy. He also put softscheck.com on every slide, which now makes a good entry in my AdBlock list…

Fortunately, Christoph Wegener was a good speaker. He talked about "Cloud Security 2.0" and started off with an introduction to cloud computing. He explained that several different types exist, e.g. "Infrastructure as a Service" (IaaS) such as EC2 or Dropbox, "Platform as a Service" (PaaS) such as AppEngine, or "Software as a Service" (SaaS) such as GMail or Twitter. He drew several attack scenarios and kept stressing that you need to trust the provider if you want to do serious stuff. The unspoken conclusion: you must not use cloud services.

Lastly, Sven Gabriel gave a presentation about grid security. Apparently, he supervises boatloads of nodes in a grid and showed how he and his team manage to do so. Since I don't operate 200k nodes myself, I didn't think it was all that relevant, although it was interesting.

To conclude the DFN Workshop: it's a nice conference with a lot of nice people, but it needs to improve content-wise.

FOSS.in last edition 2010

Saturday, January 15th, 2011

I had the pleasure to be invited to FOSS.in 2010. As I was there to represent parts of GNOME I feel obliged to report what actually happened.

The first day was really interesting. It was very nice to see so many people having a real interest in Free Software. It was mostly students that I talked to, and they said that Free Software was hardly a topic at colleges in India.

Many people queued up to register for the conference, which was very good to see. Apparently, around 500 people showed up to share the Free Software love. The usual delays in the conference setup were there, as expected ;-) So the opening ceremony started quite late and began, as usual, with the lighting of the lamp.

Danese from the Wikimedia Foundation started the conference with her keynote on the technical aspects of Wikipedia.

She showed that there is a lot of potential for Wikipedia in India, because so far there has been a technical language barrier in Wikipedia's software. Also, companies like Microsoft have spent loads of time and money on wiping out a free (software) culture, hence not many Indians got the idea of free software or free content and were simply not aware of the free availability of Wikipedia.

According to Danese, Wikipedia is a Top 5 website, right behind companies like Google or Facebook. And compared to the other top websites, the Wikimedia Foundation has by far the fewest employees: around 50, compared to the tens of thousands of employees that the other companies have. She also described the openness of Wikipedia in almost every aspect. Even the NOC is quite open to the outside world; you can supposedly see the network status. Also, documentation about all the internal processes is on the web, so you could learn a lot about the Foundation if you wanted to.

She presented several methods and technologies which help them scale the way Wikipedia does, as well as some very nerdy details like the Squid proxy setup or customisations they made to MySQL. They are also working on offline delivery methods, because many people in the world do not have continuous internet access, which makes browsing the web pretty hard.

After the lunch break, Balbir Singh told us about caching in virtualised environments. He introduced a range of problems that come with virtualisation, for example the lack of memory and the fact that the assumptions Linux makes about its caches are broken when virtualising.
Basically, the problem is that if a Linux guest runs on a Linux host, both of them will cache, say, the hard disk. This is of course unnecessary, and he proposed two strategies to mitigate the problem. One of them was to use a memory balloon driver and give the kernel a hint that pages allocated for caching should be dropped earlier.
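
As an aside, and purely to illustrate the double-caching idea (this is my own sketch, not what was presented): even from userspace you can tell the kernel that the page-cache copy of a file is not worth keeping, e.g. via posix_fadvise(); the balloon-driver hint works in a similar spirit, just at the hypervisor level. A small Python sketch, with a made-up disk image path:

    import os

    def read_once_without_lingering_cache(path, chunk_size=1 << 20):
        """Read a file sequentially, then hint the kernel that its
        page-cache pages can be evicted right away."""
        fd = os.open(path, os.O_RDONLY)
        try:
            offset = 0
            while True:
                chunk = os.pread(fd, chunk_size, offset)
                if not chunk:
                    break
                offset += len(chunk)
            # The guest caches this data anyway, so the host-side copy
            # in the page cache is redundant and can be dropped.
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        finally:
            os.close(fd)

    read_once_without_lingering_cache("/var/lib/libvirt/images/guest.img")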

Lenny then talked about systemd and claimed that it is socket-based activation that makes it so damn fast. It was inspired by Apple's launchd and performs quite well.
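
For anyone wondering what socket-based activation looks like from the service side, here is a minimal sketch of the documented handover (my example, nothing from the talk; the port number is arbitrary): systemd creates and listens on the socket itself and passes it to the service as file descriptor 3, so the service only needs to be started once the first connection comes in.

    import os
    import socket

    SD_LISTEN_FDS_START = 3  # first file descriptor handed over by systemd

    def socket_from_systemd():
        """Return the listening socket passed by systemd, or None."""
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return None
        if int(os.environ.get("LISTEN_FDS", "0")) < 1:
            return None
        return socket.socket(fileno=SD_LISTEN_FDS_START)

    sock = socket_from_systemd()
    if sock is None:
        # Started by hand: fall back to binding the port ourselves.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(("127.0.0.1", 8080))
        sock.listen(5)

    conn, _ = sock.accept()
    conn.sendall(b"hello from a (possibly) socket-activated service\n")
    conn.close()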

Afterwards, I went to the MeeGo room where they gave away t-shirts and Rubik's Cubes. I was shown a technique for solving the Rubik's Cube and tried it myself. I wasn't too successful, but it's still very interesting. I can't recite the methods and ways to solve the cube, but there are tutorials on the internet.

Rahul talked about failures he has seen in Fedora. He claimed that Fedora was the first project to adopt a six-month release cycle, and questioned whether six months is actually a good time frame. The governance modalities were questioned as well: the veto right in the Fedora Board was prone to misuse. Early websites were ugly and not very inviting; by now, the website is more appealing and should invite the audience to contribute. MoinMoin was accused of not being as good as MediaWiki, simply because Wikipedia uses MediaWiki. Not very good reasoning, in my opinion.

I was invited to do a talk about Security and Mobile Devices (again). I had a very interested audience which pulled off an interesting Q&A session, and people kept coming up with questions and ideas afterwards. I just love that. You can find the slides here.

Speaking of mobile security: I wrote a tiny program for my N900 to sidejack Twitter accounts. It's a bit like Firesheep, but does Twitter only (for now) and it actually posts a nice message. But I've also been pwned ;-)

But more on that in a separate post.


Unfortunately, the FOSS.in team announced that this will be the last FOSS.in they organise. That's very sad, because it was a lot of fun with a very interesting set of people. They claim that they are burnt out and that if one person is missing, nothing will work, because everyone knew exactly what role to take and what to do. I don't really like this reasoning, because it reveals that the bus factor is extremely low. This, however, should be one of the main concerns when doing community work. Hence, the team is to blame for not having taken care of increasing the bus factor and thus leading FOSS.in to a dead end. Very sad. But thanks anyway for the last FOSS.in. I am very proud of having attended it.

MeeGo Conference 2010 in Dublin

Sunday, November 21st, 2010

The MeeGo Conference 2010 took place from 2010-11-15 until 2010-11-17 and it was quite good. I don't think I have ever seen so much money being put into a conference. That's not to be read as a complaint though ;-)

The conference provided loads of things, e.g. lunch, which was apparently sponsored by Novell. It was very good: yummy lamb stew and cooked salmon with veg, finished off with loads of ice cream and coffee. Very delicious. Breakfast was provided by Codethink, as far as I can tell. The first reception in the evening was held by Collabora, with drinks and food provided. That was, again, very nice and a perfect opportunity to meet and chat with people. In fact, I met a lot of old folks that I hadn't seen for at least half a year. But with the KDE folks entering the scene I also met a few new interesting people.

The venue itself is very interesting and they definitely know how to accommodate conference attendees. It's a stadium and very spacious. There were an awful lot of stadium staff taking care of us. The rooms were well equipped, although I was badly missing power sockets.

The second evening was spent in the Guinness Warehouse, an interesting museum which tells you how the Guinness is made. They also have a bar upstairs and food, drinks and music was provided. I guess the Guinness couldn’t have been better :-)

The third evening was spent in the stadium itself, watching Ireland play Norway. Football, that is. There was a reception with drinks and food downstairs in the Presidents Suite. They even handed out their own scarves which read "MeeGo Conference". That was quite decadent. Anyway, I only saw the first half because I was at the bar for the second, enjoying Guinness and gin tonic ;-)

Having sorted out the amenities (described in more detail here), let's have a look at the talks that were given. I actually attended a few, although I would have loved to visit more.

Enterprise Desktop – Yan Li talked about his work on making MeeGo enterprise-ready, meaning support for VPNs, Exchange mail, large LDAP address books, etc. His motivation is to bring MeeGo to his company, Intel. It's not quite there yet, but apparently there is an Enterprise MeeGo which already has a lot of fixes; they were pushed upstream but are not packaged in MeeGo yet. His strategy to bring the devices to the people was not to try to replace people's old devices, but rather to give them an additional device to play with. Interesting approach, and I'd actually like to see the results in a year or so.

Compliance – There is a draft specification and the final one will be ready soon. If you want to be compliant, you have to ensure that you are using the MeeGo API (Qt, OpenGL ES, …) only. That will keep you compatible for the whole minor version series. There will also be profiles (think: Handset, Netbook) which will define additional APIs or the available screen estate. In return, you are allowed to use the MeeGo name and logo. Your man asked the audience to try the compliance tools, give feedback and review the handset profile draft.

Security – There will be MSSF, the Mobile Simplified Security Framework, in MeeGo 1.2. It's a MAC system which is supposed to go mainline. So yes, it is yet another security framework in Linux and I didn't really understand why it's necessary. There'll be a "Trusted Execution Environment" (TrEE) as well, which will mean that the device has to have a TPM with a hardwired key that you can neither see nor exchange. I don't necessarily like TPMs. Besides all that, "Simplified Mandatory Access Control" (SMACK) will be used. It is supposedly like SELinux, but doesn't suck as much. Everything (processes, network packets, files, I guess other IPC, …) will get labels, and policies will be simple, something like "Application 1 has a red label, and only if you have a red label, too, can you talk to Application 1". We'll see how that'll work. On top of all that, an integrity protection system, "IMA", will be used to load and execute signed binaries only.
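
To make the label idea a bit more concrete, here is a toy model of that "red label" rule in Python. The labels and rules are entirely made up and this is of course not how SMACK is implemented; it merely mimics the access-check logic described above:

    # Every subject and object carries a label; equal labels may access each
    # other, everything else needs an explicit rule, and the default is deny.
    RULES = {
        ("status-applet", "red"): {"r"},   # the applet may only read "red" objects
    }

    def allowed(subject_label, object_label, mode):
        if subject_label == object_label:
            return True
        return mode in RULES.get((subject_label, object_label), set())

    assert allowed("red", "red", "w")                  # red may talk to red
    assert allowed("status-applet", "red", "r")
    assert not allowed("status-applet", "red", "w")
    assert not allowed("blue", "red", "r")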

Given all that, I don't like the development in this direction. It clearly is not about the security of the person owning the device in question, but about protecting the content mafia. It's a clear step in the direction of Digital Restriction Management (DRM) under the label of protecting the user's data. I'm not saying that they are trying to hide it, but they are not calling it by its right name either.

A great surprise was to see Intel and Nokia handing out Lenovo IdeaPads to everybody. We were asked to put MeeGo on the machine, effectively removing the Windows installation. Three years ago, when I got my X61s, it was a piece of cake to return your Windows license. By now, things might have changed. We'll see. I'll scratch the license sticker off the laptop, write a letter to Lenovo and see what happens. Something like this (copied from here):

Lenovo Deutschland GmbH
Gropiusplatz 10
70563 Stuttgart

Return of a Windows license

Dear Sir or Madam,

I hereby return the Windows license that was purchased together with a Lenovo notebook, in accordance with the End User License Agreement (EULA) of Microsoft Windows.

The Windows EULA grants me the right to have the price of the Windows license refunded by the manufacturer of the product with which I purchased the license, provided that the bundled Windows license was not activated and registered at first start and the EULA was not accepted. I did not agree to the EULA, as it contains numerous points that are unacceptable to me, for example:

– Activating the software sends hardware information to Microsoft (section 2 of the EULA).
– "Internet-based services" such as the "Windows Update feature" can be blocked by Microsoft at any time (section 7 of the EULA). De facto, there is therefore no right to security updates.

Instead, I chose the competing product Ubuntu, as it offers better quality and a more consumer-friendly EULA.

In the past, you have refused other Lenovo customers the return of their Windows license, arguing that the Windows operating system purchased with the device is an "integral part" of the product and that the Windows license can only be returned together with the product as a whole.

This view is incorrect for the following reasons:
– Windows licenses are also sold individually; tying software to a specific hardware device (OEM contract) is not permissible under German law. [1]
– The notebook in question can also be used productively with other, separately available operating systems (among them Ubuntu). Your products in particular run (with very few exceptions) extremely well with Ubuntu.
– However, the notebook at hand cannot be purchased without a Windows license or without any operating system at all.

Furthermore, I am aware of several cases in which you have successfully refunded Windows licenses on the basis of the very form I am using.

I therefore ask you to refund me the cost of the Windows license and to take back the purchased Windows license on its own.

Alternatively, please let me know how I can return the device as a whole.

Kind regards

[1] Cf. the ruling of the German Federal Court of Justice (BGH), I ZR 244/97, of 6 July 2000
(http://tiny.cc/IZR24497 and http://www.jurpc.de/rechtspr/20000220.htm).

The performance of MeeGo on that device is actually extremely bad. WiFi is probably the only thing that works out of the box. The touchpad can't click, the screen doesn't rotate, the buttons on the screen don't do anything, locking the screen doesn't work either, there is no on-screen keyboard, multi-touch doesn't work with the screen, and the accelerometer doesn't work. It's almost embarrassing. But Chromium kinda works. Of course, it can't actually do all the fancy Gmail stuff like phone or video calls. The window management is a bit weird. If you open a browser it'll get maximised and you'll get a title bar for the window. And you can drag the title bar to unmaximise the window. But if you then open a new browser window, it'll be opened on a new "zone". Hence, it's quite pointless to have a movable browser window with a title bar. In fact, you can put multiple (arbitrary) windows into one zone if you manually drag and drop them from the "zones" tab, which is accessible via a Quake-style top panel. If you put multiple windows into one zone, the window manager doesn't tile them. By the way: if you're using the touchscreen only, you can't easily open this top panel, because you can't easily reach the *very* top of the screen. I hope that many people will have a look at these issues now and eventually fix them. Anyway, thanks Intel and Nokia :-)

mrmcd1001b Impressions

Wednesday, September 8th, 2010

I had the pleasure of being invited to the MetaRheinMain ChaosDays 1001b (mrmcd1001b) in Darmstadt. This year's motto was "Beyond Science Fiction" and ~250 people gathered to discuss "Society and Technology in 20th century fiction and 21st century reality".

The presented talks were mostly interesting, although I didn’t attend that many. I spent most of the time talking to people or giving (two) talks myself: Security in Mobile Devices and Virtualised USB Fuzzing.

The first one went as expected and I think the attendees enjoyed it very much. Again, talking about the technical details that a buffer overflow on x86 involves is not that much fun, but I think it went at least alright-ish. Slides can be found here.

The second talk was kind of a rehearsal for my final thesis presentation, so I took the chance to prepare myself for Dublin and present brand new stuff^tm. I started off by crashing a Linux PC with my N900 and then went on with the talk. It was a bit confusing, I guess. But in fairness: it was very late in every sense of the word ;-) I got positive feedback nonetheless, so it's better if you make up your own mind with the slides, although I don't think the slides alone are that interesting.

For some reason, people were interested in the commands that I used for the demo:

  1. Boot Ubuntu
     /opt/muelli/qemu/bin/qemu-system-x86_64 -enable-kvm -hda ubuntu.img -cdrom ~/ISOs/ubuntu-10.04.1-desktop-amd64.iso -monitor stdio -serial vc -m 1G -loadvm 1

  2. Setup Filter
     usb_filter_setup /tmp/filter
     export PYTHONPATH=~/hg/scapy-com/
     python recordingfilter.py /tmp/filter /tmp/phonet.dump

  3. Attach device
     info usbhost
     usb_add host:0421:01c8
     sudo chown muelli /dev/bus/usb/002/004

     usb_filter_remove
     usb_del 0.2

  4. Replay
     usb_add emul:full:/tmp/filter
     cat /tmp/filter.in &
     cat /tmp/phonet.dump.out > /tmp/filter.out

     usb_del 0.0
     kill %%

  5. Fuzz (didn't really work because of a Heisenbug)
     python emulator.py --relaxed /tmp/filter /tmp/phonet.dump.combined
     python fuzzingemulator.py /tmp/filter webcam.dump
     usb_del 0.0

  6. Fully Virtualise
     usb_add emul:full:/tmp/filter
     python usbmachine.py /tmp/filter.in /tmp/filter.out
     usb-devices

FOSS.in 2010 does take place \o/

Monday, August 30th, 2010

I am delighted to see that this year's FOSS.in will indeed take place. There were rumours about it not happening, but fortunately you will have the opportunity to have a great time from 2010-12-15 to 2010-12-17!

You might have realised already that this is only three days:

This year, the event is 3 days instead of the usual 5 days –  a 5 day event was simply too exhausting for everyone (participants and team). Also, we have moved the event into the middle of December, to give students of colleges that usually have their exams end-November or early-December a chance to attend. Our American friends will be happy to note that we have moved the event safely out of Thanksgiving range :)

As last year, I expect the conference to be great. I do hope that GNOME will be well represented, especially since GNOME 3 will be released and we have the potential to attract many new hackers. Also because, last time, the KDE folks were very well staffed and we were not.