Talking at Def.camp 2016 in Bucharest, Romania

Just at the beginning of this month I was invited to Bucharest, Romania, to give a talk on GNOME at this year’s def.camp. The conference seems to be an established event in the Romanian security community and was organised quite well. As I said in my talk, I was happy to be there to tell those people about Free Software. I saw many people running around with their proprietary systems. It seems that certain parts of the security community do not believe that the security of a system greatly increases when it’s based on Free Software. In fairness, the event seemed to be a bit on the suit-and-tie side, where Windows is probably much more common than one would like.

Andrei Avădănei opened the conference by saying how happy he was that, even at that unholy hour (09:00 in the morning…), he counted 1100 people from 30 countries, and he expected that number to grow over the following hours. It didn’t feel that big, but the three halls were quite large indeed. One of those halls was the “hacking village” in which participants could practise real life “problem solving skills”. The hacking village was more of an expo where vendors had their booths, but it also held some interesting security challenges. My favourite booth was the Virtual Reality demo. Someone brought an HTC VR system and people could play a simple game. I’ve tried an Oculus Rift before, in which I rode a roller coaster. With the HTC system, I also had some input devices which really enhanced the experience. Very immersive.

Anyway, Andrei mentioned how happy he was to have the biggest security event in Romania be very grassroots- and community-driven. Unfortunately, he then let a representative from Orange, the main sponsor, talk. Of course, you cannot run a big event like that without enough financial backing. But giving the main stage, the prime opening spot, to the main sponsor does not leave the impression that they are community driven… I expected the first talk after the opening to set the theme for the conference. In this case, it was a commercial. That doesn’t actually fit the conference too badly, because out of the 32 talks I counted 13 (or 40%) being delivered by sponsors. By sponsors I mean all companies listed on the homepage for their support. It may very well be that I am mistaking grassrooty supporters for commercial sponsors.

The Orange CTO mentioned that connectivity is the new electricity which shapes countries and communities. For a telco like them, ensuring connectivity means maintaining security, he said. The Internet of connected devices (IoT) is growing exponentially and so are the threats. Orange has to invest in order to maintain security for its clients. And they do, it seems. He showed a fancy looking “threat map” which showed attacks in real-time. Probably a Snort (or whatever IDS is currently en vogue) with a map showing arrows from Geo-IP locations pointing towards Romania.

Next up was Jason Street who talked about how he failed at doing his job. He was a blue team security guy, he said, and worked for a bank as information security officer. He was seen by the people as the bad guy making their lives dreadful. That was bad, he said, because he didn’t teach the people the values and usefulness of information security. Instead he taught them that they had better not get to meet him. The better approach, he said, is to try to be part of a solution instead of only looking for problems: empower the employees in what information security is doing or trying to do. It was a very entertaining presentation given by a very engaged speaker. I couldn’t get that much from the content, though.

Vlad from Orange talked about their challenges in providing an open, easy to use, and yet secure WiFi infrastructure. He went over the user expectations and the business requirements. Users expect to be able to just connect without much hassle. The business seems to want to identify the user and authorise usage. It was mainly on a high level, except for a few walkthroughs of the authentication protocols. He mentioned EAP-SIM and EAP-AKA as more seamless authentication methods compared to, say, a captive Web portal. I didn’t know that it’s possible to use the perfectly valid shared secret in your SIM for authentication. It makes perfect sense. Even more so for a telco such as Orange.

Mihai from Bitdefender talked about browser instrumentation for exploit analysis. That means, as I found out after the talk, harnessing the browser’s internals to analyse malicious payloads. He showed how your browser (well… Internet Explorer with Flash) is exploited nowadays. He ran a “Cerber” demo of exploiting Internet Explorer with some exploit kit. He showed Fiddler and Process Explorer, which displayed the HTTP traffic and the spawned processes. After visiting a simple Web page, the malicious payload was delivered, exploited IE, and finally crashed it. The traffic in Fiddler revealed that the malware was delivered via a crafted Flash program. He used a Flash decompiler to look at the files. But he didn’t really find the exploit itself, probably because of some obfuscation. So what is the actual exploit? In order to answer that properly, you need to inspect the memory during runtime, he said. That’s where browser instrumentation comes into play. I think he interposed several functions, such as document.write, eval, object parameters, Flash’s LoadBytes, etc. to analyse what goes in and out. All that information was then saved to disk in separate files, i.e. everything that went to document.write was written to c:\share\document.write, and everything that Flash’s LoadBytes took was written to c:\shared\loadbytes. He showed another demo with the Sundown exploit delivery framework which successfully exploited his browser. He then showed the filesystem containing the above mentioned information, which made it easier to spot the actual exploit and shellcode. To prevent such exploits, he recommended using Windows 10 and browsers other than Internet Explorer. Also, he recommended using AdBlock to stop “malvertising”. That is in line with what I recommended several moons ago when analysing embedded JavaScript snippets vulnerable to DOM-based XSS. The method is also very similar to what I used back in the day when hacking on Chromium and V8, so I found the presentation quite good. Except for the delivery :-/ He often looked at his slides with his back to the audience, and the audio wasn’t really good. I respect him for having shown multiple demos with virtual machine snapshots. I wouldn’t have done it, because demos usually fail! 😉
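
To make the interposition idea more concrete, here is a minimal sketch in Python rather than inside the browser: wrap an interesting function, dump everything it receives into a per-function log file, then call the original. The file names and the document_write stand-in are mine, not from the talk; the real hooks of course sat inside IE and the Flash runtime.

```python
# A hedged sketch of the interposition idea, not Bitdefender's tooling.
import functools
import pathlib

LOG_DIR = pathlib.Path("instrumentation-logs")
LOG_DIR.mkdir(exist_ok=True)

def interpose(func):
    """Wrap func so every call is appended to a file named after it."""
    logfile = LOG_DIR / func.__name__
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        with logfile.open("ab") as fh:           # one dump file per hooked function
            fh.write(repr((args, kwargs)).encode() + b"\n")
        return func(*args, **kwargs)
    return wrapper

@interpose
def document_write(markup):                      # hypothetical stand-in for document.write
    pass

document_write("<script>eval(payload)</script>") # the payload ends up in the log for later analysis
```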

Inbar Raz talked about Tinder bots. He said he was surprised to find so many “matches” while being in Sweden. He quickly noticed that he was being chatted up by bots, though, because he got sent the very same message from different profiles. These profiles also don’t necessarily make sense. For example, the name and the age shown on the Tinder profile did not match the linked Instagram or Facebook profiles. The messages he received quickly included a link to a dodgy Web site. When he asked whois about the ownership, he found many more shady domains being used for dragging people to porn sites. The technical details weren’t overly elaborate, but the talk was quite entertaining.

Raul Alvarez talked about reverse engineering polymorphic ransomware. I think he mentioned those Locky-type pieces of malware which lock your computer or files. Now you might want to know how that malware actually works. He mentioned OllyDbg, Immunity Debugger, and x64dbg as tools to use for reverse engineering your files. He said that malware typically includes an unpacker which you need to get past first before you’re able to see the actual malware. He mentioned on-demand polymorphic functions which are being called during the unpacking stage. I guess that the unpacker decrypts or uncompresses to different bytes every time it’s run. The randomness comes from the RDTSC call, he said. The way I understand that mechanism, the unpacker only modifies a few bytes at a time and potentially modifies irrelevant bytes. Imagine code that jumps over a few bytes. These bytes could be anything, because they are never used, let alone executed. But I’m not sure whether this is indeed the gist of what he described in a rather complicated fashion. His recommendation for dealing with metamorphic code is to catch it right when it has finished decrypting the payload. I think everybody wishes to be able to do that indeed… He presented a general method for getting rid of malware once it has hit you: start in safe mode and remove suspicious registry entries for the “run” key. That might not be interesting to Windows people, but now I, being very ignorant about Windows, have learned something 🙂
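
Here is a toy Python illustration (entirely my own, not from the talk) of the junk-bytes guess above: bytes sitting behind an unconditional jump are never executed, so an unpacker can randomise them on every run, changing the file’s hash without changing its behaviour.

```python
# Toy model of a polymorphic stub: a jmp skips over randomised dead bytes.
import hashlib
import random

def build_stub(payload: bytes) -> bytes:
    junk = bytes(random.randrange(256) for _ in range(8))  # never executed
    return b"\xeb\x08" + junk + payload                     # x86 "jmp +8" hops over the junk

payload = b"<decrypted malware body>"
a, b = build_stub(payload), build_stub(payload)
print(hashlib.sha256(a).hexdigest()[:16])
print(hashlib.sha256(b).hexdigest()[:16])   # different hash, identical behaviour
```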

Chris went on to talk about securing a mobile cryptocoin wallet. If you ask me, he really meant how to deal with the limitations of the platform of his choice, the iPhone. He said that sometimes it is very hard to navigate the solution space, because businesses are not necessarily compatible with blockchains. He explained some currencies like Bitcoin, Stellar, Ripple, Zcash, or Ethereum. The latter is much more flexible and can also encode contracts like “in the event of X transfer Y amount of money to account Z”. Financial institutions want to keep their ledgers private, but blockchains were designed to run in public, he said. In addition, trust between financial institutions is low. Bitcoin is hard to use, he said, because the cryptography itself is hard to understand and to use. A balance has to be struck between usability and security. Secrets, he said, need to be kept secret. I guess he means that nobody, not even the user, may access the secret an application needs. I fundamentally disagree. I agree that secrets need to be kept as securely as possible. But secrets must not be known by anyone else but the users who are supposed to benefit from them. If some other entity controls my secret, I am not really in control. Anyway, he looked at existing Bitcoin wallet applications: Bither and Breadwallet. He thinks that the state of the art can be improved if you are willing to break the existing protocol. More precisely, he wants to leverage the “security hardware” present in current mobile devices, like biometric sensors or “enclaves” in modern CPUs, to perform the operations based on a secret stored unextractably in hardware. With such an enclave, he wants to generate a key there and use it to sign data without the key ever leaving the enclave. You need to change the protocol, he said, because Apple’s enclave uses secp256r1, but Bitcoin uses secp256k1.
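
The curve mismatch is easy to see with the third-party ecdsa package (pip install ecdsa); this is my own sketch of the point, not code from the talk. The enclave only offers P-256 (secp256r1), while Bitcoin verifies secp256k1 signatures, so a key that never leaves the enclave cannot produce a standard Bitcoin signature without changing the protocol.

```python
# Sketch of the secp256r1 vs. secp256k1 mismatch using the "ecdsa" package.
from ecdsa import SigningKey, NIST256p, SECP256k1

msg = b"send 0.1 BTC to ..."

enclave_style_key = SigningKey.generate(curve=NIST256p)   # what Apple's enclave offers (P-256)
bitcoin_style_key = SigningKey.generate(curve=SECP256k1)  # what the Bitcoin protocol expects

sig_r1 = enclave_style_key.sign(msg)   # perfectly fine P-256 signature...
sig_k1 = bitcoin_style_key.sign(msg)   # ...but only this one verifies under Bitcoin's curve
print(len(sig_r1), len(sig_k1))
```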

My own talk went reasonably well, I think. I am not super happy, but happy enough. But I’ve realised a few times now that I left out things I wanted to mention or could have explained better what I wanted. Then again, being perfect would be boring, so better leave some room for improvement 😉 I talked about how I think GNOME is a good vendor of security software. Its focus on user experience is its big advantage. The system should make informed decisions as much as possible and try to leave the user out of them as much as possible. Security should be an inherent feature, not something that you need to actively care about. I expected a more extreme reaction from the security focused audience, but it seemed people mostly agreed. In my mind, “these security people” equate security with maximum control placed in users’ hands, which has to manifest itself in being able to control each and every aspect of a solution. That view is not compatible with trying to leave the user out of the security equation. It may be that I am doing “these security people” an injustice. Or that they have changed. Or simply that the audience was not composed of the people I thought they were. I was hoping for developers who create security software, and I mentioned that GNOME’s libraries would serve their tasks well. Let’s see whether anyone actually takes my word for it and complains to me 😉

Matt Suiche followed “the money of security companies, IPOs, and M&A”. In 2016, he said, the situation is not very different from the 90s: software still has bugs, bad configuration is still a problem, default passwords are still being used… The number of newly founded infosec companies reported by Crunchbase has risen a lot, he said. If you multiply those numbers by the dollars involved, you get about 40 billion USD raised since 1998. What’s different nowadays, according to him, is that people in infosec are more business oriented than technically oriented. We have more “cyber” now. He referred to buzzwords being spread. Also, we have bug bounty programmes luring people into reporting vulnerabilities. JP Morgan, for example, is spending half a billion USD on cyber security, he said. Interestingly, he showed that the number of vulnerabilities, i.e. RCE CVEs, has increased, but the number of actual exploitations within the first 30 days after a patch has decreased. He concluded that Microsoft got more efficient at mitigating vulnerabilities. I think you can also conclude other things, like that people care less about exploitation or that detection of exploitation has gotten worse. He said that the cost of an exploit has increased. It wasn’t long ago that you could cook up an exploit within two weeks. Now you need several people for three months at least. It was a well made talk, but a bit too fluffy for my taste.

Stefan and David from Kaspersky talked off-the-record (i.e. without recordings) about “real-world lessons about spies every security researcher should know”. They have been around the industry for more than a decade and have started to notice patterns, they said. Patterns of weird things that happen which might not be easy to explain at first. It all begins with realising that we live in a world, whether we want it or not, where we have a certain control over the success of espionage attacks. Right now people reverse engineer malware, which means that other people’s operations are being disrupted. In fact, he claimed that they reverse engineer and identify the world’s most advanced persistent threats like Duqu, Flame, Hellsing, or many others, and that their company is getting better and better at identifying other people’s operations. For the first time in history, he said, we as geeks have an influence on espionage. That makes some entities not very happy, and they let certain people visit you. These people come in various types. The profile of a typical government employee is that they are very open and blunt about their desires. Mostly, they employ patriotism to persuade you. Another type is the impersonator, they said. That actor is not perfectly honest with you. He gave an example of him meeting another person who identified with the very same name as him. It got strange, he said, when he met that person on a different continent a few months later and got offered a highly paid training, supposedly only to build up a relationship. These people have enough budget to get closer to you, they said. Another type of attacker is the “Banya Girl”. Geeks, they said, who have sat in front of the computer most of their lives are easily attracted by girls, who thus have an easier time getting into your room or brain. His example took place one year ago: he analysed satellite-exploiting malware later known as Turla when he met this super beautiful girl in the hotel who sat there every day when he went to the sauna. The day they released the results about Turla, they went for dinner together and she listened to a phone call he had with a reporter. The girl said something like “funny that you call it Turla. We call it Uroboros”. Then he got suspicious and asked her about who “they” are. She came up with stories he found weird and seemed to be convinced that she knows more than she was willing to reveal. In general, they said, asking for a selfie or a Facebook friend request can be an effective countermeasure against someone spying on you. You might very well ask what to do when you think you’re targeted. It’s probably best to do nothing, they said. It’s their game, you better not start playing it even if you wake up in the middle of it. You can try to take care of your OpSec to protect against certain data being collected or exfiltrated. After all, people are being killed based on metadata. But you should also try to not get yourself into trouble. Sex and money are probably the oldest weapons people employ against you. They also encouraged people to define trust and boundaries for existing and upcoming relationships. If you become too paranoid, you’ve already lost the battle, they said. Keep going to conferences, keep meeting people, and don’t close yourself down.

Those were two busy days in Bucharest. I’m happy to have gone and I hope I will have another chance to visit the lovely city 🙂 By that time the links in this post will probably be broken 😉 I recommended using the “archive” URLs, i.e. https://def.camp/archives/2016/, already now, but nobody is listening to me… I also cannot link to the individual talks, because the schedule page is relatively click-intensive, i.e. not deep-linkable 🙁 …

Talking at GNOME.Asia Summit 2016 in New Delhi, India

It’s spring time and that means it’s time for GNOME.Asia Summit! This year’s edition took place in New Delhi, India. This year marks five years since the initial GNOME 3.0 release. In fact, an important release-planning hackfest happened five years ago in India, so it is a somewhat remarkable date.

The conference felt a little smaller than the last edition, although I guess the college we were hosted at tried hard to bring their students to the talks. That was especially noticeable in the opening slot, where everybody who felt sufficiently important had something to say. The big auditorium was filled with students, but I doubt they were really interested or listening. The opening was a bit weird for my taste, anyway. I have seen many conference openings, I would say. But the guy from the college who opened GNOME.Asia 2016 seemed to be a little bit confused, I felt. He said that GNOME started in 2008 so that all the software you use can be had freely so that you can upgrade your devices, like GPS satnavs. The opening ceremony, and yes, it’s really more of a ceremony rather than a short “welcome, good that you’re here” talk, seems to be quite a formal thing in this college. Everybody on the stage receives a bouquet of flowers, and many people were greeted and saluted, which stretched everything to an enormous length and in turn made the schedule slip by two hours or so.

Cosimo keynoted the conference and presented his ideas for the future of the GNOME project. We’ve come a long way, he said, with GNOME 3, which was initially released five years ago. GNOME has aged well, he said. No wrinkles can be seen and GNOME is looking better than ever. He said that he likes to think of GNOME 2 as GNOME in the era of distributions, because you could plug together the modules that you liked. And everybody liked that. The pain point, he said, was that distributions chose which modules to plug together, which ultimately decided the user experience. Because module proliferation was felt to be impacting the project negatively, the new world of GNOME 3 was introduced. One of the most controversial but also most successful things GNOME 3 did, he said, was to put the responsibility of defining the user experience back in upstream’s hands by eliminating choices. While causing people to complain, it led to a less complicated test matrix which eventually made GNOME accessible to less technical people. GNOME 3, he said, is the era of Operating Systems, so there are no longer distributions packaging GNOME but rather Operating Systems built on GNOME, like Endless, Mint, or Solus. The big elephant in the room is the role of applications, he said. If cohesive Operating Systems are built upon GNOME, how can applications work with different operating systems? Currently, you cannot, he said, run elementary applications on GNOME and vice versa. xdg-app will hopefully address that, he said. It’s a big transition for the GNOME project, and that transition is even bigger than the one from GNOME 2 to GNOME 3, he said. Unfortunately, the audience seemed to be a little tired from the length of the opening session and it felt like they were demanding a break by starting to chat with their neighbours…

Pravin then continued to talk about the state of Indian languages in GNOME. He mentioned that some Indian languages are well supported while some others have no support at all. He also showed that with Fedora 24 you get a text prediction engine. So you can type Latin characters for the word you want to enter in a different script. The Q&A revealed that the list of suggested words is sorted by frequency. Apparently they did some analysis of usage of words. I wonder whether it’s also able to learn from the user’s behaviour.

The talk on privacy given by Ankit Prateek showed how your typical Internet and Web usage leaves traces and what mitigations you could employ. He mentioned specific Web attacks like super cookies or canvas fingerprinting. He recommended using NoScript, whose usefulness the audience immediately questioned. To my surprise, he didn’t mention one of my favourite plugins, Google Privacy, which is useful because Google remembers what search results you click.

I got to talk about five years of GNOME 3. I conveyed the story of how the 3.0 release happened and what was part of it. For example, we had so many release parties with swag being sent around the world! But I also showed a few things that have changed since the initial 3.0.

Another talk I had was about security. I explained why I see GNOME as being in the perfect position to design, develop, and deploy security systems for a wide range of users. First, I ranted about modal dialogues and prompts and why they are not a good choice for making a security decision. Then, I explained how we could possibly defend against malicious USB devices. I think it’s work we, as developers of a Free Software desktop, have to do in order to serve our users. Technically, it’s not very hard: e.g. you block new USB devices from being used while the screensaver is shown. We know how to do the blocking and unblocking of USB devices. More subtle issues involve the policies to apply and how to make the user aware of USB devices. Another pet peeve of mine is keysigning, so I also ranted about the state of the art and how we can and should improve things.
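
For the curious, a rough sketch of the blocking part, entirely my own and not GNOME code: the Linux kernel exposes an authorized_default attribute per USB host controller, and flipping it to 0 means newly plugged devices stay unusable until explicitly authorised. Wiring it to the screensaver, e.g. via org.gnome.ScreenSaver’s ActiveChanged D-Bus signal, would be the obvious next step.

```python
# Hypothetical sketch: block new USB devices while the screensaver is active.
# Needs root; the paths refer to the kernel's USB authorization interface in sysfs.
import glob

def allow_new_usb_devices(allow: bool) -> None:
    """Toggle authorized_default on every USB host controller."""
    for attr in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
        with open(attr, "w") as fh:
            fh.write("1" if allow else "0")   # 0 = new devices blocked until authorised

def on_screensaver_active_changed(active: bool) -> None:
    # Call this from a D-Bus signal handler for org.gnome.ScreenSaver's ActiveChanged.
    allow_new_usb_devices(not active)
```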

Thanks to the local organising people and the GNOME Foundation for flying me in and out.
Sponsored by GNOME!

Talking on Searchable Encryption at 32C3 in Hamburg, Germany

This year again, I attended the Chaos Communication Congress. It’s a fabulous event. It has become much more popular than a couple of years ago. In fact, it’s so popular that the tickets (probably ~12000, certainly over 9000) were sold out a week or so after sales opened. It’s gotten huge.

This year was different from the years before. Not only could you use your educational leave for visiting the CCCongress, but I was also giving a talk. Together with my colleague Christian Forler, we presented on Searchable Encryption. We had the first slot on the last day. I think that’s pretty much the worst slot in the schedule you could get 😉 Not only because people are in zombie mode, but also because you have received all those viruses and bacteria yourself. I was lucky enough, but my colleague was indeed sick on day 4. It’s a common thing to be sick after the CCCongress :-/ Anyway, we hopefully entertained the crowd with what I consider easy slides, compared to the usual™ crypto talk. We used a lot of imagery and tried to allude to funny stuff. I hope people enjoyed it. If you have seen it, don’t forget to leave feedback! It was hard to decide on the appropriate technical level for the almost 1800 people. The feedback we’ve received so far is mixed, so I guess we hit a good spot. The CCCongress was amazingly organised for speakers. They really did care for us and made sure everything was right. So everything was perfect except for pdfpc, which crashed whenever it was meant to display a certain slide… I used Evince then and it worked…

The days at the CCCongress were intense, as you might be able to tell from the Fahrplan. It generally started at about 12:00 and ended at about 01:00. And that’s only the talks. You can’t avoid bumping into VIPs (very interesting people) and thus spending time in the hallway. And then you have these amazing parties. This year, they had motor-homes and lasers in the dance hall (last year it was a water cannon…). Very crazy atmosphere. It’s highly recommended to spend a night there.

Anyway, day 1 started for me with the keynote by Fatuma Musa Afrah. The speaker stretched her time a little, I felt. At the beginning I couldn’t really grasp what her topic was or what she wanted to tell us. She repeatedly told us that we had to “kill the time together”, which killed my sympathy to some extent. The conference’s motto was Gated Communities. She encouraged us to find ways to open these gates by educating people and helping them. She said that we have to respect each other irrespective of skin colour or social status. It was only later that she revealed being a refugee who came to Germany, although she told us that it’s “newcomers”, not “refugees”. In fact, she grew up in Kenya where she herself was a newcomer; she had fled to Kenya, so she fled twice. She told us stories about her arriving and living in Germany. I presume she is breaking open the gates which separate the communities she’s living in, but that’s speculation. In a sense she was connecting her refugee community with our hacker community. So the keynote was interesting from that perspective.

Joanna Rutkowska then talked about trustworthy laptops. The basic idea is to have no state on the laptop itself, i.e. no place where malware could be injected. The state should instead be kept on a personal storage medium, like an SD card or a pen drive. She said that laptops are inherently not trustworthy. Trust, she said, can be broken down into Trusted, Secure, and Trustworthy. Secure means resistant to attacks. Trusted is something we, as a security community, do not want to have, like a trusted third party. Trustworthy, she said, is something different: the Intel Management Engine, for example, might be resistant to attacks, yet it is not acting in the interest of the user. Application level security is meaningless, she said, when we cannot trust the Operating System, because it is the trusted part; if it is compromised, then every other effort is useless. Her project, Qubes OS, attempts to reduce the Trusted Computing Base. What the operating system is to the application, the hardware is to the operating system. The hardware, she said, has been assumed to be trusted. A single malicious peripheral, like a malicious WiFi module, can compromise the whole personal computer, the whole digital life, she said. She was referring to problems with Intel x86 platforms. Present Intel processors integrate everything on the main chip. The motherboard has been made more or less only a holder for the CPU and the memory. The construction of those big chips is completely opaque. We have no control over what is inside. And we cannot look inside, even if we wanted to. The firmware is loaded during boot from a discrete element on the mainboard. We cannot, however, verify what firmware really is on the chip. One question is how to enforce read-only-ness of the system or how to upload your own firmware. For many years, she and others believed that TPM, TXT, or UEFI Secure Boot could solve that problem. But all of them have been shown to fail horribly, she said. Unfortunately, she didn’t mention how so. So as of today, there is no such thing as a secure boot. Inside the processor is a management engine which is special, because it is the perfect entry point for backdooring and zombification of personal computing. By zombification, she means that the involvement of the apps (vs. OS vs. hardware) is decreasing heavily and the hardware gets much more of a say. She said that Intel wants to make the hardware fully control your computing by putting much more logic into the management engine. The ME is, in a sense, a gated community, because you cannot inspect it, tinker with it, or otherwise get in touch with it at all. She said that the war is lost on x86, even if we didn’t have the management engine. Again, she didn’t say why. Her proposal is to move all those moving firmware parts out to trusted storage. It was an interesting perspective on what I think is a “simple” Free Software problem. Because we allow proprietary software, we now have the problem of not being able to see what is loaded into the hardware. With Free Software we’d still have backdoors in hardware, but assuming that most functionality is encoded in firmware, we could see and modify the existing firmware and build, run, and share our “better” firmware.

Ilja van Sprundel talked about Windows driver security, or rather their attack surface. I’m not necessarily interested in Windows per se, but getting some lower level knowledge sounded intriguing. He gave a more high level overview of what to do and what not to do when doing driver development for Windows. The details, he said, matter. For example, whether the IO Manager probes a buffer in *that* instance. The Windows kernel is made of several managers, he said. The Windows Driver Model (WDM) is the standard model for how drivers are written. The IO Manager proxies requests from userland to (WDM) drivers. It may or may not validate arguments. Another central piece in the architecture are IO Request Packets (IRPs). They are delivered from the IO Manager to the driver and contain all the necessary information for the operation in question. He went through the architecture really fast and it was hard for a kernel newbie like me to follow all the concepts he mentioned. Interestingly though, the IO Manager seems to also care about transferring the correct amount of memory from userspace to kernel space (e.g. it makes sure data does not overflow) if you want it to, using METHOD_BUFFERED. But, as he said, most of the drivers use METHOD_NEITHER, which does not check anything at all and is an endless source of driver bugs. It seems as if KMDF is an alternative framework which makes it harder to have bugs. However, you seem to need to understand the old framework in order to use the new one properly. He then went on to talk about the actual attack surface of drivers. The bugs are either privilege escalations, denial of service, or information leaks. He said that you could avoid the problem of integer overflows by using the intsafe library. But you have to use it properly! Most importantly, you need to check the return type and use the values that have actually been made safe. During creation of a device, a driver can call either IoCreateDeviceSecure with an SDDL string or use an INF file to ACL the device. That is, however, done either rarely or wrongly, he said. You need to think about who needs to have access to your device. When you work with the IO Manager, you need to check whether Irp->MdlAddress is NULL, which can happen, he said, if it’s a zero-sized buffer. Similarly, when using the safer METHOD_BUFFERED mentioned earlier, Irp->AssociatedIrp.SystemBuffer can also be NULL. So avoid having a false sense of security when using that safe API. Another area of bugs is the cancellation of IRPs. The userland can cancel requests, which apparently is not handled gracefully by drivers and leads to deadlocks, memory leaks, race conditions, double frees, and other classes of bugs. When dealing with data from userland, you are supposed to “probe” the memory, which is basically checking whether the pointers are valid and in the expected range. If you don’t do that, it’ll lead to you writing to arbitrary kernel memory. If you do validate the data from userspace, make sure you don’t fetch it again from user space assuming that it hasn’t changed. There might be a race between your check and your usage (TOCTOU). So better capture, validate, and use the data. The same applies when using MDLs. That, however, is more tricky, because you have a double mapping and you are using a kernel pointer. So it is very subtle. When you do memory allocation, you can either use ExAllocatePool or ExAllocatePoolWithQuota. The latter throws an exception instead of returning NULL.
Your exception or NULL pointer handling needs to be double-checked, he said. It was a very technical talk on Windows which was way out of my comfort zone. I only understood a tiny fraction of what he was presenting. But I liked it for the new insight on Windows drivers and the fact that the same old classes of bugs have not died yet.

High up on my list of awaited talks was the one on train systems by the SCADA Strangelove people. Railways, he said, are the biggest system built by mankind. Its main components are signals and switches. Old switches are operated manually by pure force. Modern switches are interlocked with signals such that the signals display forbidden entry when switches are set in certain positions. On tracks, he said, signals are transmitted over the actual track by supplying it with AC or DC. The locomotive picks up the signals and supplies various systems with them. The Eurostar, they said, has about seven security systems on board, among them an “RPS”, a Reactor Protection System, which alludes to nuclear trains… They said that lately the “Bahn Automatisierungssystem (SIBAS)” has been updated to use much more modern and less proprietary software and hardware such as VxWorks and x86 with ELF binaries as well as XML over HTTP or SS7. In the threat model they identified, they see several attack vectors. Among them is making someone plug a malicious USB device into controlling machines in some operations centre. He showed pictures from supposedly real operations centres. The physical security, he said, is terrible, with close to no access control. In one photograph, he showed a screenshot from a documentary aired on TV which showed credentials stuck to the screen… Even if the security is quite good, like good physical security and formally proven programs, it’s still humans who write the software, he said, so there will be bugs to be exploited. For example, he showed screenshots of what happened when he typed “railway” into Shodan, and the results included a good number of railway stations. Another vector is GSM-R. If you jam the train’s GSM-R connection, the train will simply stop. Also, you might be able to inject SIM toolkit malware. Via that vector, you might make the modem identify as, e.g., a keyboard and then penetrate further into the systems. Overall an entertaining talk, but the claims were a bit over the top. So no real train hacking just yet.
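
The Shodan part is reproducible with a few lines of Python using the official shodan package (pip install shodan); this is my own hedged sketch of the kind of query he demoed, with a placeholder API key.

```python
# Sketch of a Shodan query similar to the one shown in the talk.
import shodan

api = shodan.Shodan("YOUR_API_KEY")        # placeholder
results = api.search("railway")

print("Total results:", results["total"])
for match in results["matches"][:5]:
    # each match describes one exposed host that mentioned the keyword
    print(match["ip_str"], match.get("org"))
```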

The talk on memory corruption by Mathias Payer started off by saying that software is unsafe and insecure. Low level languages trade type safety and memory safety for performance. A large set of legacy applications, he said, are prone to memory vulnerabilities. There are, he continued, too many bugs to find and fix manually, so we need a runtime system to ensure security. An invalid dereference or an out-of-bounds pointer is the core of memory unsafety problems. But according to the C language, he claimed, it’s only a violation if the invalid pointer is read, written to, or freed. So during runtime, there are tons and tons of dangling pointers, which is perfectly fine. With such a vulnerability a control-flow attack can be executed. Several defenses exist: Data Execution Prevention prevents injected code from actually being executed. Address Space Layout Randomisation scrambles the memory locations of executable code, which makes it harder to successfully exploit a vulnerability. Stack canaries are special values which are supposed to detect overflowing writes. Safe exception handlers ensure that exception code paths follow predefined patterns. DEP can only work together with ASLR, he said: if you broke ASLR, you could re-use existing code, and as it turns out, people do break ASLR every now and then. Two new mechanisms are Stack Integrity and Control Flow Integrity. Stack Integrity enforces returning to the actual caller by keeping a shadow stack. He didn’t mention how that actually works, though. I suppose you keep the return addresses on a second, more secret stack and check before returning whether the return address is still correct. Control Flow Integrity builds a control flow graph during compilation, and for every control flow change it checks at run time whether the target address is allowed. Apparently, many CFI implementations exist (eleven were shown). He said they’ve measured those, and IFCC and Lockdown performed rather badly. To show how all of the protection mechanisms fail, he presented printf-oriented programming. He said that printf was Turing complete and presented a domain specific language. They have built a Brainfuck interpreter out of snprintf calls. Another rather technical talk by a good speaker. I remember that I was already impressed last year when he presented on these new defense mechanisms.

DJB and Tanja Lange started their “late night show” by bashing TLS as a “gigantic clusterfuck”. They were presenting on quantum computing and cryptography. They started by mentioning that the D-Wave quantum computer exists, but that it’s not useful, he said. It doesn’t do the basic things and can only do limited computations. It can especially not perform Shor’s algorithm. So there’s no “Shor monster coming”. They recommended the Timeline of Quantum Computing as a good reference for the massive research effort going into quantum computing. If there were a general quantum computer, pretty much every public key scheme deployed on the Internet today would be broken. But symmetric schemes are also under attack due to Grover’s algorithm, which speeds up brute force attacks significantly. The solution could be physical crypto, like using strong (physical) locks. But, he said, the assumptions of those systems are already broken. And while Quantum Key Distribution is secure under certain assumptions, those assumptions are off, he said. Secure schemes that survive the quantum era were the topic of their talk. The first workshop on that topic happened in 2006 and efforts are still being made, e.g. with EU projects on the topic. The time it takes for a crypto scheme to gain significant traction has been long, so far. They gave ECC as an example: it was introduced in the 1980s, but it’s only now that it’s taking over the deployed crypto on the Internet. So the time it takes is long. They gave recommendations on what to do to have connections that are secure “for at least the next hundred years”. These include at least 256 bit keys for symmetric encryption; McEliece with binary Goppa codes (n=6960, k=5413, t=119), with McBits being an efficient implementation of such a code-based scheme, she said; and hash-based signatures with, e.g., XMSS or SPHINCS-256. All you need for those is a proper hash function. The stuff they recommend for the next 100 years, like the McEliece system, are things from the distant past, she said. He said that Post Quantum Cryptography will be the standard in a couple of years from now, so he urged the cryptographers in the audience to “get used to this stuff”.
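
To illustrate the “all you need is a proper hash function” point, here is a minimal Lamport one-time signature in Python, the simplest ancestor of the hash-based schemes (XMSS, SPHINCS-256) they recommended. This is my own sketch for intuition, not anything shown in the talk, and a Lamport key must never be reused.

```python
# Minimal Lamport one-time signature: security rests only on the hash function.
import hashlib
import os

H = lambda data: hashlib.sha256(data).digest()

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]  # one secret pair per message bit
    pk = [(H(a), H(b)) for a, b in sk]                           # public key = hashes of the secrets
    return sk, pk

def bits_of(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    return [sk[i][bit] for i, bit in enumerate(bits_of(msg))]    # reveal one preimage per bit

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits_of(msg)))

sk, pk = keygen()
sig = sign(b"post-quantum greetings from 32C3", sk)
print(verify(b"post-quantum greetings from 32C3", sig, pk))      # True
```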

Next on my list was Markus’ talk on Landesverrat, which is the incident of Netzpolitik.org being investigated for revealing secret documents. He went over the history of the case and how it came about that they were suspected of revealing secret documents. He said that one of their beliefs is to publish their sources, even the secret ones. They want their work to be critically reviewed and they believe that is only possible if the readers can inspect the sources. The documents which led to the criminal investigation were about the finances of the introduction of the XKeyscore software. Then the president of the “state security” filed a case against them for revealing secret documents. They wanted to publish the investigation files, but they couldn’t see them, because those were considered to be more secret than the documents they had already published… From now on, he said, they are prepared for the police raiding their offices, which I suppose is good standard preparation. They were lucky, he said, because their case fell into the regular summer low of news, which let the case become quite popular in the media. A few weeks earlier or later and it would have been much less popular due to the refugees or Greece. During the press coverage, they had a second battleground when they threw out a Russian television team who entered their offices without having called or otherwise introduced themselves… For the future, he wants to see changes in what is considered to be a state secret. He doesn’t want the government to decide what such a secret is. He also wants much more protection for whistleblowers. Freedom of the press should also hold for people who do not blog as their “occupation”, but as hobbyists.

Vincent Haupert was then talking about app-based TAN online banking methods. It’s a classic two factor method: not only username and password, but also a TAN. These TAN methods have evolved in various ways over time. He went on to explain the general online banking process: you log in with your credentials, you create a new wire transfer and are then asked to provide a TAN. While chipTAN would solve many problems, he said, the banking industry seems to want their customers to be able to transfer money everywhere™. So you get to have two “apps” on your mobile computer: the banking app and a TAN app. However, malware in “official” app stores is a reality, he said. The Google Play Store cannot protect against malware, as a colleague of his demonstrated during his bachelor thesis. This could also be seen with the “Brain Test” app which roots your device and then loads malware. Anyway, they hijacked the connection from the banking app to modify the recipient of the issued wire transfer and the TAN being pushed to the device. They looked at the apps and found that they “protected” their app with “Promon Shield”, which seems to be a strong obfuscation framework. Their attack involved tricking the root and hook detection. For the root detection the app checks the file system for certain binaries. He could simply change the filenames and was good to go. For the hooks (Xposed) it was pretty much the same, with the exception of a few filenames which needed more work. With these modifications they could also “hack” the newer version 1.0.7. Essentially the biggest part of the problem is that the two factors are on one device; if the attacker hijacks that one device, both factors are compromised.

The talk by Christian Schaffner on Quantum Cryptography introduced the audience to quantum mechanics. He said that a qubit can be imagined as the direction of a polarised photon. If you make the direction of the photons either horizontal or vertical, you can imagine that as representing 0 or 1. He showed an actual physical experiment with a laser pointer and polarisation filters, demonstrating how the red dot of the laser pointer is either filtered or very visible. He also showed how actually measuring the polarisation changes the state of the photons! So yet another filter made the dot in the back brighter. That was a bit weird, but that’s quantum mechanics. He showed a quantum random number generator based on that technology. One important concept is the no-cloning theorem, which states that you cannot make a perfect copy of a quantum bit. He also compared current and “post quantum” crypto systems against efficient classical attackers and efficient quantum attackers. AES, SHA, and RSA (or discrete logs) will be broken by quantum attacks. Hash-based signatures, McEliece, and lattice-based cryptography he considered to be resistant against quantum attacks. He also mentioned that Quantum Key Distribution systems will be secure even against an exhaustive attacker who applies brute force. QKD is based on the no-cloning theorem, so an eavesdropper cannot see the same bits as the communicating parties do. Finally, he asked how you could prove that you have been at a certain location to avoid the pizza delivery problem (i.e. to be certain about the place of delivery).

Fefe was talking about privileges. He said that software will be vulnerable. Various techniques should be applied, such as simply fixing all the bugs (haha…) or making exploitation harder by applying ASLR or ROP protection. Another idea is to put the program in a straitjacket and withdraw privileges. That sounds a lot like containerisation. Firstly, you can drop your privileges from superuser down to the least privileges you need, then do privilege separation. Another technique is the admin confining the app in a jail instead of the app confining itself. Also, you can implement access control via a broker service by splitting up your process into, say, a left half which opens and reads files and a right half which processes the data. When doing privilege separation, the idea is to split up the process into several separately running programs. Jailing is like firewall rules for syscalls, which, he said, is impossible to get right for complex programs. He gave Firefox as an example of a program it is impossible to write a rule set for. The app confining itself is like a werewolf chaining itself to the wall before midnight, he said. You restrict yourself from opening files, creating sockets, or attaching yourself as a debugger to other processes. The broker service is probably like a reference monitor. He went on to show how old-school privilege dropping works. You could do it as easily as seteuid(getuid()), but that’s not enough, because there is the saved UID, so you need setresuid, and you must not forget to check the return code, because the call can fail if, for example, the target UID is already running too many processes for its quota. He said that you should fail the build if your target platform does not provide setresuid. However, dropping privileges is more than setting your UID. It’s also about freeing resources you don’t necessarily need. A common approach to jailing your process is to have a fake filesystem with only the necessary files, so your process cannot ever access anything that it shouldn’t. On Linux, that would probably be chroot. However, you can escape it using fchdir(). Also, if you mount /proc into the chroot, information about the host is exposed. So you need to do more work than just calling chroot. The BSDs, he said, have securelevel, which is a kernel mode that can only be increased and which withdraws certain privileges. They also have jails, which are a chroot on steroids, he said. They leak some information through the PIDs, though, he said.
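
As a sketch of the old-school privilege drop he walked through, here is roughly what it looks like from Python on Linux (my own illustration, not Fefe’s code); os.setresuid() raising OSError is the Python equivalent of “check the return code”, and a bare chroot() is not a jail on its own.

```python
# Sketch: drop root privileges for real (real, effective and saved IDs).
import os

def drop_privileges(target_uid: int, target_gid: int) -> None:
    os.setresgid(target_gid, target_gid, target_gid)   # drop the group first, while still root
    os.setresuid(target_uid, target_uid, target_uid)   # then real, effective *and* saved UID
    # On failure (e.g. the target UID exceeds its process quota) an OSError is
    # raised, so the program stops instead of carrying on with root privileges.

def enter_chroot(path: str) -> None:
    os.chroot(path)   # only meaningful if root is dropped afterwards; otherwise
    os.chdir("/")     # fchdir() tricks or a mounted /proc still expose the host
```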

The next talk was on Shellphish, an automatic exploitation framework. This is really fascinating stuff. It’s been used for various Capture the Flag contests, which are basically about hacking other teams’ software services. In fact, the presenters come from UCSB, which hosts the famous iCTF. They went from solving security challenges to developing a system which solves security challenges. From a vulnerable binary, they (automatically) develop an exploit and a patched binary which is immune to the exploit but preserves the functionality of the program. They automatically find vulnerabilities and patches, and they test both the exploits and the patches. For the automated vulnerability component, they presented Angr. It has a symbolic execution engine looking for memory accesses outside allocated regions and for unconstrained instruction pointers, i.e. a jump controlled by user input (JMP eax). They have written a paper for NDSS about “Augmenting Fuzzing Through Selective Symbolic Execution“. Angr is a Python library and they showed how to use it for identifying the overhyped Back to 28 vulnerability. Actually, there is too much state for a regular symbolic executor to find this problem. Angr does “veritesting“. He showed that his Angr script found the vulnerability after he had excluded, with a few lines of code, many execution paths that don’t really generate new state. He didn’t show, though, what those lines of code were or how he determined which states do not add any new information.
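
For a feel of what driving Angr looks like, here is a hedged sketch using the angr Python package; the binary name and the find address are placeholders of mine, not anything from the talk.

```python
# Sketch: let angr hunt for a state with a user-controlled instruction pointer.
import angr

proj = angr.Project("./vulnerable_service", auto_load_libs=False)  # placeholder binary
state = proj.factory.entry_state()

# save_unconstrained keeps states whose instruction pointer depends on user
# input (e.g. "jmp eax" with a symbolic eax) -- angr's exploitability candidates.
simgr = proj.factory.simulation_manager(state, save_unconstrained=True)
simgr.explore(find=0x400de0)   # placeholder address of the interesting code

if simgr.unconstrained:
    crashing = simgr.unconstrained[0]
    print(crashing.posix.dumps(0))   # stdin bytes that reach the controlled jump
```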

The next talk, given by the people behind Intelexit, was about convincing NSA agents to stop their work and serve democracy instead. They rented a van with big mottoes printed on it, like “Listen to your heart, not to private phone calls”. They also stuck the constitution onto the “constitution protection office”, which then got torn apart. Another action they did was to fly over the Dagger Complex and release flyers about leaving the secret services. They want to set up a foundation helping secret service agents to leave their jobs or to blow the whistle. They also want an anonymous call service where agents can phone in to talk about their jobs. I recommend browsing their photos.

Another artsy talk was on a cheap Facebook army. Actually it was on Instagram followers. The presenter is an artist himself and he’d buy Instagram followers for fellow artists “to make them all equal”. He dislikes the fact that society seems to measure the value or quality of art in followers or likes on social media.

Around the CCCongress were also other artsy installations, like one called “machine learning”.

It’s been a fabulous event. I really admire the people organising this event each and every year. Thank you so much and see you next year, hopefully.

Open Source Hong Kong 2015

Recently, I’ve been to Hong Kong for Open Source Hong Kong 2015, which is the heritage of the GNOME.Asia Summit 2012 we had in Hong Kong. The organisers apparently liked their experience of organising GNOME.Asia Summit in 2012 and continued to organise Free Software events. When I talked to the organisers, they said that more than 1000 people had registered for the gratis event. Not all of those 1000 were present; half of that number seems more realistic.

Olivier Klein from Amazon Web Services opened the conference with his keynote on Big Data and Open Source. He began with a quote from RMS about the “Free” in Free Software referring to freedom, not price. He followed with the question of how Big Data fits into the spirit of Free Software. He answered shortly afterwards by saying that technologies like Hadoop allow you to mess around with large data sets on commodity hardware rather than requiring you to build a heavy data center first. The talk then, although he said it would not, turned into a subtle sales pitch for AWS. So we learned about AWS’ Global Infrastructure, like how well located the AWS servers are, how the AWS architecture helps you to perform your tasks, how everything in AWS is an API, etc. I wasn’t all too impressed, but then he demoed how he uses various Amazon services to analyse Twitter for certain keywords. Of course, analysing Twitter is not that impressive, but being able to do it within a few seconds with relatively few lines of code impressed me. I was also impressed by his demoing skills. Of course, one part of his demo failed, but he reacted very professionally, e.g. he quickly opened a WiFi hotspot on his phone to use as an alternative uplink. Also, he quickly grasped what was going on on his remote Amazon machine by glancing over netstat and ps output.

The next talk I attended was on trans-compiling, given by Andi Li. He was talking about Haxe and how it compiles to various other languages. Think Clojure, Scala, and Groovy, which all compile to Java bytecode, but on steroids: Haxe compiles to source code in other languages. So Haxe is in a sense like Emscripten or Vala, but a much more generic source-to-source compiler. He talked about the advantages and disadvantages of Haxe, but he lost me when he said that more abstraction is better. The examples he gave were quite impressive. I still don’t think trans-compiling is particularly useful outside the realm of academic experiments, but I’m still intrigued by the fact that you can make use of Haxe’s own language features to conveniently write programs in languages that don’t provide those features. That seems to be the origin of the tool: Flash. So unless you have a proper language with a proper stdlib, you don’t need Haxe…

From the six parallel tracks, I chose to attend the one on BDD in Mediawiki by Baochuan Lu. He started out by providing the motivation for his work. He loves Free/Libre and Open Source software, because it provides a life-long learning environment as well as a very supportive community. He is also a teacher and makes his students contribute to Free Software projects in order to get real-life experience with software development. As a professor, he said, one of his fears when starting these projects was being considered the expert™ although he doesn’t know much about Free Software development. This fear, he said, is shared by many professors, which is why they would not consider entering the public realm of contributing to Free Software projects. But he reached out to the (Mediawiki) community and got amazing responses and an awful lot of help.
He continued by introducing Mediawiki, which, he said, is the platform that powers many Wikimedia Foundation projects such as Wikipedia, Wikibooks, Wikiversity, and others. One of the strategies for testing Mediawiki is to use Selenium and Cucumber for automated tests. He introduced the basic concepts of Behaviour Driven Development (BDD), such as being short and concise in your test cases or being iterative in the test design phase. Afterwards, he showed us what his tests look like and how they run.

The after-lunch talk titled Data Transformation in Camel Style was given by Red Hat’s Roger Hui and was concerned with Apache Camel, an “Enterprise Integration” framework. I had never heard of it and I am not much smarter now. From what I understood, Camel allows you to program message workflows. Depending on the content of a message, you can make it go certain ways, e.g. to a file or to an ActiveMQ queue. The second important part is data transformation. For example, if you want to change the data format from XML to JSON, you can use their tooling with a nice clicky-pointy GUI to drag your messages around and route them through various translators.

From the next talk by Thomas Kuiper I learned a lot about Gandi, the domain registrar. But they do much more than that. And you can do that with a command line interface! So they are very tech savvy and enjoy having such customers, too. They really seem to be a cool company with an appropriate attitude.

The next day began with Jon’s Kernel Report. If you’re reading LWN then you haven’t missed anything. He said that the kernel grows and grows. The upcoming 4.2 kernel, probably going to be released on August 23rd, might very well be the busiest we’ve seen, with the most changesets so far. The trend seems to be unstoppable. The length of the development cycle is getting shorter and shorter, currently being at around 63 days. The only thing that can delay a kernel release is Linus’ vacation… The rate of volunteer contribution has dropped from 20% as seen for 2.6.26 to about 12% in 3.10. That trend is also continuing. Another analysis he did was to look at the patches and their timezone. He found that a third of the code comes from the Americas, that Europe contributes another third, and so does Australasia. As for Linux itself, he explained new system calls and other features of the kernel that have been added over the last year. While many things go well and probably will continue to do so, he worries about the real time Linux project. Real time, he said, is the system reacting to an external event within a bounded time. No company is supporting the real time Linux project currently, he said. According to him, being a real time general purpose kernel makes Linux very attractive, and we should leverage that potential. Security is another area of concern. 2014 was the year of high profile security incidents, like various Bash and OpenSSL bugs. He expects that 2015 will be no less interesting. Also because the kernel carries lots of old and unmaintained code: three million lines of code haven’t been touched in at least ten years. Shellshock, he said, was in code more than 20 years old. Also, we have a long list of motivated attackers while not having enough people working on making the kernel more secure, although “our users are relying on us to keep them safe in a world full of threats”.

The next presentation, on .NET going open source, was given by a speaker from Microsoft. She presented the .NET stack which Microsoft open sourced at the end of last year, as well as Visual Studio. Their vision, she said, is that Visual Studio is a general-purpose IDE for every app and every developer. So they have good Python and Android support, she said. A “free cross platform code editor” named Visual Studio Code exists now, which is a bit more than an editor: it understands some languages and can help you while debugging. I tried to get more information on that patent grant, but she couldn’t help me much.

There was also a talk on Luwrain by Michael Pozhidaev, which is GPLv3 software for blind people. It is not a screen reader but more of a framework for writing software for blind people. They provide an API that guarantees that your program will be accessible without the application programmer needing to have knowledge of accessibility technology. They haven’t had a stable release just yet, but it is expected for the end of 2015. The demo showed a text-oriented desktop which reads out the text on the screen. Several applications already exist, including a file editor and a Twitter client. The user is able to scroll through the text by word or character, which reminded me of the ChorusText I’ve seen at GNOME.Asia Summit earlier this year.

I had the keynote slot, which allowed me to throw out my ideas for the future of the Free Software movement. I presented on GNOME and how I see that security and privacy can become a distinguishing feature of Free Software. We had an interesting discussion afterwards as to how to enable users to make security decisions without prompts. I concluded that people there do care about creating usable secure software, which I found very refreshing.

Both the conference and Hong Kong were great. The local team did their job pretty well and I am proud that the GNOME.Asia Summit in Hong Kong inspired them to continue doing Free Software events. I hope I can be back soon 🙂

GNOME.Asia Summit 2015 in Depok, Indonesia

I have just returned from the GNOME.Asia Summit 2015 in Depok, Indonesia.

The most interesting talk I saw, I think, was the one by Iwan S. Tahari, the manager of a local shoe producer which also sponsored GNOME shoes!

“Open Source Software in Shoes Industry” was the title, and he talked about how his company, FANS Shoes, est. 2001, uses “Open Source”. They are also a BlankOn Linux partner, which seems to be a rather big thing in Indonesia. In fact, the keynote presentation earlier was on that distribution and mentioned how they try to make it easier for people of their culture to contribute to Free Software.
Anyway, the speaker went on to claim that Indonesia has 82 million Internet users, 69 million of whom use Facebook. But few use “Open Source”, he asserted; the machines sold ship with either Windows or DOS, he said. He said that FANS preferred FOSS because it increased their productivity, not only because of viruses (he mentioned BRONTOK.A as a pretty annoying example), but also because of the re-installation time. Re-installing Windows takes about 90 minutes, he said, while the average time to install BlankOn (on an SSD) was 15 minutes. According to him, the install time is especially annoying for them because they don’t have IT people on staff. He liked BlankOn Linux because it comes with “all the apps” and there is not much to install afterwards. Another advantage he mentioned is the cost. He estimated the cost of their IT landscape going with Windows at 136.57 million Rupiah (about 12,000 USD). With BlankOn, it comes down to zero, he said. That money he can now spend on a van and a transport scooter instead. Another feature of his GNU/Linux based system, he said, was the ability to cut the power at will without stuff breaking; Indonesia, he said, is known for frequent power cuts. He explicitly mentioned printer support as a major pain point for them.

When they bootstrapped their Free Software usage, they first tried dual boot for their five employees. But it was not worth the effort, because everybody selected Windows on boot anyway. They then migrated the accounting manager to a GNU/Linux based operating system, and that laptop still runs the LinuxMint 13 they installed… He mentioned that you have to migrate top down, never from bottom to top, so senior management needs to go first. Later Q&A revealed that this is because of cultural issues: the leaders need to set an example, and the workers will not change unless their superiors do. Only their R&D department was hard to migrate, he said, because they need to be compatible with Corel Draw. With the help of an Indonesian Inkscape book, though, they managed to run Inkscape. The areas where they lack support are CAD (think AutoCAD), statistics (think SPSS), a Kanban information system (like iceScrum), and integration with “Computer Aided Machinery”. He also identified the lack of documentation as a problem, not only for them, but for the general uptake of Free Software in Indonesia. To improve the situation, they offer gifts to people writing documentation or books!

All in all, it was quite interesting to see an actual (non-computer) business running exclusively on Free Software. I had a chat with Iwan afterwards and maybe we can get GNOME shaped flip-flops in the future 🙂

The next talk was given by Ahmad Haris: GNOME on an Android TV Dongle. He brought GNOME to those 30 USD TV sticks that can turn your TV into a “smart” device. He showed various commands and parameters which enable you to run Linux on these devices. As for why to put GNOME on those devices, he said that it has a comparatively small memory footprint. I didn’t really understand the motivation, but I blame mostly myself, because I don’t even have a TV… Anyway, bringing GNOME to more platforms is good, of course, and I was happy to see that people are actively working on bringing GNOME to various hardware.

Similarly, Running GNOME on a Nexus 7 by Bin Li presented how he tried to make his Android tablet run GNOME. There is previous work done by VadimRutkovsky.

He gave instructions as to how to create a custom kernel for the Nexus 7 device. He also encountered some problems, such as compilation errors, and showed how he fixed them. After building the kernel, he installed Arch Linux with the help of some scripts. This, however, turned out not to be successful, so he couldn’t run his custom Arch Linux with GNOME.
He wanted to have a tool like “ubuntu-device-flash” so that hacking on this device would be much easier. Also, downloading and flashing a working image is too hard for casual hacking, he said.

A presentation I was not impressed by was “In-memory computing on GNU/Linux”. More and more companies, the speaker said, are using in-memory computing on a general-purpose operating system. Examples of products which use in-memory computing were GridGain, SAP HANA, IBM DB2, and Oracle 12c. These products, he said, allow you to make better and faster decisions and to avoid risks. He also pointed out that you won’t have hard drives breaking down and that energy consumption is lower. But while in-memory is blazingly fast, all your data is lost when you have a power failure. The users of big data, according to him, are businesses, academics, governments, or software developers. The last one surprised me, but he didn’t go into detail as to why it is useful for an ordinary developer. The benchmarks he showed were impressive: up to hundred-fold improvements for various tests were recorded in the in-memory setting compared to the traditional on-disk setting. The methodology wasn’t comprehensive, so I am not yet convinced that the convoluted charts show anything useful. But the speaker is an academic, so I guess he’s got at least some compelling arguments for his test setup. In order to build a Linux suitable for in-memory computation, they installed a regular GNU/Linux on a drive and modified the boot scripts such that the disk is copied into a tmpfs. I am wondering, though: wouldn’t it be enough to set up a very aggressive disk cache…?
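To illustrate what I mean by that (this is purely my own speculation, nothing the speaker showed): instead of copying everything into a tmpfs, one could ask the kernel to pre-load the hot files into the page cache, for example with posix_fadvise. A minimal sketch:

    /* cache-warm.c: pre-fault files into the page cache (hypothetical example).
     * POSIX_FADV_WILLNEED is only a hint to the kernel; it does not guarantee
     * that the data stays in RAM the way a tmpfs copy would. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++) {
            int fd = open(argv[i], O_RDONLY);
            if (fd < 0) {
                perror(argv[i]);
                continue;
            }
            /* offset 0, length 0 means "the whole file" */
            if (posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED) != 0)
                fprintf(stderr, "fadvise failed for %s\n", argv[i]);
            close(fd);
        }
        return 0;
    }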

I was impressed by David’s work on ChorusText. I couldn’t follow the talk, because my Indonesian wasn’t good enough. But I talked to him privately and he showed me his device which, as far as I understand, is an assistive screen reader. It has various sliders with tactile feedback to help you navigate through text with the screen reader. Apparently, he has low vision himself, so he is way better suited than me to tell whether this device is useful. For now, I think it’s great and I hope that it helps more people and that we can integrate it nicely into GNOME.

My own keynote went fairly well. I spent my time explaining what I think GNOME is, why it’s good, and what it should become in the future. If you know GNOME, me, and my interests, then it doesn’t come as a surprise that I talked about the history of GNOME, how it tries to bring Free computing to everyone, and how I think security and privacy are going to matter in the future. I tried to set the tone for the conference, hoping that discussions about GNOME’s future would spark in the coffee breaks. I had some people discussing with me afterwards, so I think it was successful enough.

When I went home, I saw that the Jakarta airport runs GNOME 3, but it probably hasn’t done so for long, because the airport’s UX is terrible. In fact, it is one of the worst I’ve seen so far. I arrived at the domestic terminal, but I didn’t know which one it was, i.e. its number. There were no signs or indications to tell you which terminal you are in, let alone where you need to go to catch your international flight. Their self-information computer system couldn’t deliver. The information desk was able to help, though. The transfer to the international terminal requires you to take a bus (fair enough), but whatever the drivers yell when they stop is not comprehensible. When you are lucky enough to get out at the right terminal, you need to have a printed version of your ticket. I think the last time I’ve seen that was about ten years ago in Mumbai. The airport itself is big and bulky with no clear indications as to where to go. Worst of all, it doesn’t have any air conditioning. I was not sure whether I had to pay the 150,000 Rupiah departure tax, but again, the guy at the information desk was able to help, although I was disappointed to learn that they won’t take a credit card, but cash only. So I drew the money out of the next ATM that wasn’t broken (I only needed three attempts). But it was good to find the non-broken ATM, because the shops wouldn’t take my credit card either, so I already knew where to get cash from. The WiFi’s performance matches the airport’s other infrastructure well: it’s quite dirty. Because it turned out that the information the guy gave me was wrong, I invested my spare couple of hundred thousand Rupiah in doughnuts to help me wait for my flight, which was delayed by 2.5 hours. But I couldn’t really enjoy the food, because the moment I sat on any bench, cockroaches began to invade the place. I think the airport hosts the dirtiest benches in all of Indonesia. The good thing is that they have toilets. With no drinkable water, but at least you can wash your hands. Fortunately, in the end my flight was only two hours late, so I could escape relatively quickly. I’m looking forward to going back, but maybe not via CGK 😉

All in all, many kudos to the organisers. I think this year’s edition was quite successful.

Sponsored by GNOME!

GNOME.Asia Summit 2014

I was fortunate to be able to attend this year’s GNOME Asia Summit in Beijing, China.

It was co-hosted with FUDCon, the Fedora Users and Developers Conference. We had many attendees and the venue provided good facilities to talk about Free Software and the Free Desktop.

Fudcon Beijing Logo

The venue was the Beihai University somewhat north of Beijing. As is typical for China, the building was massive in size, so we had loads of space, anyway 😉 The first day was reserved for trainings, and attendees could get their feet wet with things like developing a GNOME application. I took part myself and was happy to learn new GNOME APIs. I think the audience was interested and I hope we could inspire a few attendees to create their next application using GNOME technologies.

I was invited to keynote the conference. It was my first time doing such a thing, and I chose to give a talk that I would expect from a keynote, namely something that leads the conference and offers a vision and ideas to discuss during the conference. I talked about GNOME, GNOME 3, and GNOME 3.12, and tried to promote the ideas of GNOME and of Free Software. Unfortunately, I prepared for 60 minutes rather than 45, so I needed to cut off a good chunk of my talk :-/ Anyway, I am happy with how it went and especially happy with the fact that I wasn’t preaching to the choir only, as we had e.g. Fedora people in the audience, too.

We had RMS explaining Free Software to the audience and I think people enjoyed his talk. I certainly did, although I think it doesn’t address the problems we face nowadays. People have needs, as the discussion with the audience revealed. Apparently, people do want the functionality Facebook or Skype offers. I think that addressing these needs with the warning “you must not fall for the convenience trap” is too short-sighted. We, the Free Software community, need to find better answers.

The event was full of talks and workshops on a diverse range of topics, which is a good thing for this conference. Of course, co-hosting with FUDCon helped with that. The event is probably less technical than GUADEC and attendees can learn a lot from listening and talking to other people. I hope we can attract more Asian people to Free Software this way. I am not entirely sure we need to have the same setup as with GUADEC, though. With GUADEC, we change the country every year. But Asia is about ten times larger than Europe; in fact, China alone is larger than all of Europe. That makes it somewhat hard for me to justify the moving around. We do need more presence in Asia, so trying to cover as much ground as possible might be an approach to attract more people. But I think we should investigate other approaches, such as focussing on an annual event in one location to actually create a strong Free Software hub in Asia, before moving on. I wouldn’t know how to define “strong” right now, but then we have absolutely no measure of success anyway. That makes it a bit frustrating for me to pour money into Asia without actually seeing anything in return.

Anyway, Beijing is fun. We went to see the Great Wall and enjoyed the subway 😉

I would like to thank the organisers for having provided a great place for us, the Free Software community, to spread the word about the benefits of free computing. I would also like to thank the GNOME Foundation for enabling people like me to attend the event.

Sponsored by GNOME!

[Update: Here is the recording of the talk]

Stip-OUT report

For my recent year in Dublin I got a “STIP-OUT” scholarship from my local university and I was supposed to write a tiny report. So here it comes (in German though).

I won’t go into too much detail about the course I attended just yet, but in a nutshell: Dublin was a nice experience, personally and technically. Going to Dublin for a year abroad is a good option to just “get out”, because although it is different, it is still European enough. However, people and universities work differently. Everything is very open but also very stressful.

Practicum Status Update Week 10

  • As mentioned in the last report, I skipped one week in favour of the GUADEC.
  • I had a funny C problem. Consider the following two functions:
     
    static int
    safe_read (void *data, size_t length, FILE* file)
    {
    	int status = fread(data, length, 1, file);
     
    	if (status == -1) error_report("%s: read packet (%lu) data on stream %p "
    								   "failed (%d): %s",
    								   __FUNCTION__, length, file,
    								   status, strerror(errno));
    	status = fflush(file);
    	return status;
    }
     
    static int
    safe_write (void *data, size_t length, FILE* file)
    {
    	int status = fwrite(data, length, 1, file);
     
    	if (status == -1) error_report("%s: writing packet (%lu) data on stream %p "
    								   "failed (%d): %s",
    								   __FUNCTION__, length, file,
    								   status, strerror(errno));
    	status = fflush(file);
    	return status;
    }

    Now you might want to deduplicate the code and make it one big and two small functions:

     
    static int
    safe_operation (size_t (func) (void *, size_t, size_t, FILE*), void *data, size_t length, FILE* file)
    {
    	int status = func(data, length, 1, file);
    	const char *funcstr = "undeclared";
     
    //	switch (*func) {
    //		case fread:
    //			funcstr = "read";
    //			break;
    //		case fwrite:
    //			funcstr = "write";
    //			break;
    //		default:
    //			funcstr = "?";
    //			break;
    //	}
     
    	if (status == -1) error_report("%s: %s (%p) packet (%lu) data on stream %p "
    								   "failed (%d): %s",
    								   __FUNCTION__, funcstr, *func, length, file,
    								   status, strerror(errno));
    	status = fflush(file);
    	return status;
    }

    but it wouldn’t compile, because fread and fwrite have slightly different signatures: fread takes a plain void * as its first parameter, while fwrite takes a const void *. The solution is to declare a single function-pointer type (matching fwrite) and cast the other function to it. Strictly speaking, calling fread through a pointer of fwrite’s type is not sanctioned by the C standard, but since the two signatures differ only in const-ness it works fine in practice:

     
    /* error_report() is QEMU's logging helper (from its error reporting code);
       the standard headers below are what the snippet itself needs. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
     
    typedef size_t (*fwrite_fn)(const void * __restrict, size_t, size_t, FILE * __restrict);
     
    static int
    safe_operation (fwrite_fn func, void *data, size_t length, FILE* file)
    {
            int status = func(data, length, 1, file);
            const char *funcstr = "undeclared";
     
            if (status == -1) error_report("%s: %s (%p) packet (%lu) data on stream %p "
                                                                       "failed (%d): %s",
                                                                       __FUNCTION__, funcstr, *func, length, file,
                                                                       status, strerror(errno));
            status = fflush(file);
            return status;
    }
     
    int
    main(void)
    {
            int x;
     
            /* fread's first parameter is void *, not const void *, hence the cast;
               fwrite matches fwrite_fn exactly.  stderr is just a dummy stream here. */
            safe_operation((fwrite_fn)fread, &x, sizeof x, stderr);
            safe_operation(fwrite, &x, sizeof x, stderr);
            return 0;
    }

    Thanks to Roland for pointing that out.

  • On something unrelated: I fought with OpenSSL and its API and documentation. But more on that in a different post.
  • Fortunately, Gajim only crashed once. Well, Rhythmbox locks up, too, as it always does.
  • Annoyed by the fact that it takes ages to “make” a freshly made kernel!
    muelli@bigbox ~/git/linux-2.6 $ time make 
      CHK     include/linux/version.h
      CHK     include/generated/utsrelease.h
      CALL    scripts/checksyscalls.sh
      CHK     include/generated/compile.h
      CHK     include/linux/version.h
    make[2]: `scripts/unifdef' is up to date.
      TEST    posttest
    Succeed: decoded and checked 1382728 instructions
    Kernel: arch/x86/boot/bzImage is ready  (#14)
      Building modules, stage 2.
      MODPOST 2107 modules
    WARNING: modpost: Found 4 section mismatch(es).
    To see full details build your kernel with:
    'make CONFIG_DEBUG_SECTION_MISMATCH=y'
    
    real	14m7.842s
    user	1m33.747s
    sys	0m25.388s
    muelli@bigbox ~/git/linux-2.6 $ 
    
  • Trying to automatically create a FAT image and populate it with the built modules is more cumbersome than expected. guestmount is way too much overhead: it requires qemu and channels the data out over the network (sic!). I just want a FUSE implementation that is capable of writing a FAT image! There seems to be UMFUSE, but it’s packaged for Debian/Ubuntu and not for Fedora. Finding the sources is quite a challenge (they’re here: https://view-os.svn.sourceforge.net/svnroot/view-os/trunk/fuse-modules/fat) but I can’t build it, because they haven’t really prepared their code for anybody else to build it. After being harassed into generating the ./configure file (autoconf; aclocal; autoconf), it also wants shtool to be installed AND in a local directory (/.-). I gave up as it kept bugging me about a missing config.sub. But I still wanted to get that FUSE module, so I dug up my Ubuntu chroot, apt-get sourced the files, and ./configure && make && make install. Beautiful. Turns out that the official FUSE wiki lists two ways to mount a FAT fs: the one I’ve just described and a dead project (FatFuse).

    I then threw together this shellscript:

    #!/bin/bash
     
    MOD_DIR=/tmp/linux-modules/
    FAT_IMAGE=/tmp/modules.$$.fat
    FAT_MOUNT=/tmp/share/
    FAT_TARGET_IMAGE=/tmp/modules.fat
     
    make modules_install INSTALL_MOD_PATH="$MOD_DIR" &&
     
    # du -s reports 1 KiB blocks; add 20 MiB of slack (despite its name, the
    # value is used as a block count for dd below)
    bytes=$(( $(du -s $MOD_DIR | awk '{print $1}') + $(( 20 * 1024)) ))
    #
    # create FAT image
    dd if=/dev/zero of=$FAT_IMAGE bs=1024 count=$bytes &&
    mkfs.vfat $FAT_IMAGE &&
    fusefat -o nonempty -o rw+ $FAT_IMAGE $FAT_MOUNT &&
    cp -dRx $MOD_DIR/* $FAT_MOUNT
    fusermount -u $FAT_MOUNT &&
    echo $FAT_IMAGE &&
    ln -sf $FAT_IMAGE $FAT_TARGET_IMAGE

    and I start qemu like that:

    /opt/muelli/qemu/bin/qemu-system-x86_64 -drive file=/tmp/ubuntu-snapshots.qcow2,if=virtio -kernel ~/git/linux-2.6/arch/x86_64/boot/bzImage -append 'selinux=0 root=/dev/vda init=/sbin/init' -m 1G -drive file=/tmp/modules.fat,if=virtio,readonly=on -monitor stdio -loadvm 1

    which allows me to access the modules on the FAT drive at /dev/vdb. I can also snapshot the booted machine, which saves me an awful lot of time for booting. But it took me quite a while to make QEmu do that, because the snapshot parameter for the disk does not save snapshots! Also, the FAT drive has to be marked as read-only for QEmu to only save snapshots on the only remaining writable drive. But QEmu fails to make a drive read-only if the selected interface is IDE. Thus, you need virtio… Thanks to the helpful folks on IRC…

  • So yeah, all in all, I didn’t make any substantial progress :-/ I hope to finish a webcam in software soonish though.

Practicum Status Update Week 8

  • Over the weekend, I talked at the ChaosBBQ in Dortmund, Germany. A nice and small conferency gathering.
  • This week I had only a few bugs (that I reported):
  • The filter stuff is actually more complex than I initially thought: I do indeed have to handle asynchronous USB URBs, which makes the code look uglier and taste more like spaghetti. Anyway, the filter stuff should fully work now *yay*.

    I hope the video works. If it doesn’t, please download it first. So we can replace packets on the fly, in and out. And of course, you could design a more complex scenario: let the first read pass by unmodified, but after that modify the packets.

  • Read a bit in “Essential Linux Device Drivers”, which is an interesting book to read. I like the relaxed writing style. I haven’t gotten to the nitty-gritty USB details yet though.
  • Wireshark can sniff USB communication, too. And it can save it as a pcap file. And it dissects bits of the protocols; at least it shows the SCSI requests for a USB mass storage device. I have to test whether it knows, say, webcams. First investigations show that it only supports USB mass storage though. It would also be interesting whether it can sniff the communication I’m filtering with QEmu.
  • Apparently, the pcap link-layer type (the network field in the header below) for Linux USB captures is 0xdc:
    typedef struct pcap_hdr_s {
            guint32 magic_number;   /* magic number */
            guint16 version_major;  /* major version number */
            guint16 version_minor;  /* minor version number */
            gint32  thiszone;       /* GMT to local correction */
            guint32 sigfigs;        /* accuracy of timestamps */
            guint32 snaplen;        /* max length of captured packets, in octets */
            guint32 network;        /* data link type */
    } pcap_hdr_t;

    Wireshark dumps the following:

    0000000: d4 c3 b2 a1 02 00 04 00 00 00 00 00 00 00 00 00  ................
    0000010: ff ff 00 00 dc 00 00 00 43 8e 44 4c d7 96 05 00  ........C.DL....

    So I might be able to implement saving to pcap at some stage (a rough sketch of what that could look like follows after this list).

  • Next week will be GUADEC! *yay* But that will also mean that I can’t work that much.
  • So my emulation seems to work now, too. I now need to speak the right protocol. So if you know a good resource that describes how a, say, USB Webcam behaves on the bus, drop me a line. Or anything that can dissect USB packets would be fine.
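
As a follow-up to the pcap item above, here is a minimal sketch of what writing such a capture file could look like. This is my own guess based on the header layout and hex dump shown earlier, not code from Wireshark or QEmu: the global header mirrors that dump (magic 0xa1b2c3d4, version 2.4, snaplen 0xffff, link type 0xdc), and each packet is preceded by a small record header. For Wireshark to actually dissect the payload, each record would additionally have to carry the usbmon pseudo-header that link type 0xdc expects; the dummy packet below is just a placeholder.

    /* pcap-sketch.c: write a minimal pcap file with one dummy packet. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
     
    typedef struct pcap_hdr_s {
            uint32_t magic_number;   /* 0xa1b2c3d4 */
            uint16_t version_major;  /* 2 */
            uint16_t version_minor;  /* 4 */
            int32_t  thiszone;       /* GMT offset, usually 0 */
            uint32_t sigfigs;        /* usually 0 */
            uint32_t snaplen;        /* max captured length, e.g. 0xffff */
            uint32_t network;        /* link type; 0xdc as in the dump above */
    } pcap_hdr_t;
     
    typedef struct pcaprec_hdr_s {
            uint32_t ts_sec;         /* timestamp, seconds */
            uint32_t ts_usec;        /* timestamp, microseconds */
            uint32_t incl_len;       /* bytes actually saved */
            uint32_t orig_len;       /* bytes on the wire */
    } pcaprec_hdr_t;
     
    static int dump_packet(FILE *f, const void *data, uint32_t len)
    {
            pcaprec_hdr_t rec = { (uint32_t)time(NULL), 0, len, len };
            return fwrite(&rec, sizeof rec, 1, f) == 1 &&
                   fwrite(data, len, 1, f) == 1;
    }
     
    int main(void)
    {
            pcap_hdr_t hdr = { 0xa1b2c3d4, 2, 4, 0, 0, 0xffff, 0xdc };
            unsigned char dummy[8] = "URBDATA";   /* placeholder payload */
            FILE *f = fopen("usb-dump.pcap", "wb");
     
            if (!f)
                    return 1;
            fwrite(&hdr, sizeof hdr, 1, f);
            dump_packet(f, dummy, sizeof dummy);
            fclose(f);
            return 0;
    }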

Practicum Status Update Week 7

  • Read about Radare. Apparently, they have “USB support” but I could only see a USB communication sniffer. So Radare doesn’t dissect USB packets 🙁
  • Installed GDB from git, because the GDB in Fedora 13 crashes way too often. I didn’t file as many new bugs this week though 😉 I seem to have worked around all my crashers…
  • Fought a lot with git 🙁 It’s incredibly hostile. I tried to rebase stuff and it keeps bugging me with old commits still being visible although I’ve changed them 🙁 I probably haven’t understood what it does yet. Tried to fix as much as possible using git reflog. Of course, the man page references options (--verbose in my case) that are nonexistent. Brilliant. I don’t know why I actually expected git to help me.
    This is hilarious, too:

    muelli@bigbox ~/git/qemu $ git rebase setup_fds 
    First, rewinding head to replay your work on top of it...
    Applying: Temporary migration to usb_packet_filter_setup_fds
    Using index info to reconstruct a base tree...
    Falling back to patching base and 3-way merge...
    Auto-merging usb-linux.c
    CONFLICT (content): Merge conflict in usb-linux.c
    Failed to merge in the changes.
    Patch failed at 0001 Temporary migration to usb_packet_filter_setup_fds
    
    When you have resolved this problem run "git rebase --continue".
    If you would prefer to skip this patch, instead run "git rebase --skip".
    To restore the original branch and stop rebasing run "git rebase --abort".
    
    
    muelli@bigbox ~/git/qemu $ nano usb-linux.c # hack hack hack
    muelli@bigbox ~/git/qemu $ git add usb-linux.c
    muelli@bigbox ~/git/qemu $ git rebase --continue
    Applying: Temporary migration to usb_packet_filter_setup_fds
    No changes - did you forget to use 'git add'?
    
    When you have resolved this problem run "git rebase --continue".
    If you would prefer to skip this patch, instead run "git rebase --skip".
    To restore the original branch and stop rebasing run "git rebase --abort".
    
    muelli@bigbox ~/git/qemu $ 
    

    WTF?!

    That one is brilliant, too:

    muelli@bigbox ~/git/qemu $ git rebase -i setup_fds
    # Stupid me: I selected "f" for the very first entry in that edit window
    Cannot 'fixup' without a previous commit
    # Fair enough, let me restart then:
    muelli@bigbox ~/git/qemu $ git rebase setup_fds 
    Interactive rebase already started
    # O_o WTF? What else, besides aborting, could I possibly do anyway?!
    muelli@bigbox ~/git/qemu $ git rebase --abort
    muelli@bigbox ~/git/qemu $ git rebase setup_fds 
    # Now it works...
    
  • Reimplemented the host-side USB filters to obtain valid USB communication. I have various simple filters: PassThrough, Logging and Replacing. The first one does nothing but return the data without any modification. The second one writes the bytes it reads and writes to files. The third one replaces 512 “A”s with 512 “B”s (a generic sketch of the idea follows after this list). I still need to separate packets going from the device in question to the host from packets going from the host to the device, in order to obtain valid device behaviour without reading all of the documentation. That will give me a good starting point to actually do the fuzzing.

    That replace filter produced interesting results. I replaced every “A” transmitted by a “B”. On the host, I created a file with 4KB of “A”s on a mass storage device. When “cat”ting the file from the guest, I saw “A”s. But copying the file in the guest resulted in the new file having all “B”s. I expected the “cat” to show all “B”s, too. And as far as I can see, the “A”s are actually being replaced for the “cat” as well.

    Of course, Istanbul crashed while trying to make that screencast.
    Note that the filter code has actually changed by now, not only because I enhanced the protocol (in the version you’re seeing, only the USB payload is exchanged; in the new version, the PID, device address and device endpoint are filtered as well), but also because I refactored the communication bits into a USBPacket class.
    I forgot to show the pen drive from the host’s point of view after having copied the file in the guest, but the “bbbb” file is full of “B”s.

  • I’m on my way to emulating a USB device, i.e. making the guest think it has a USB device attached while the device is actually a program running on the guest. I basically copied the USB serial driver and the HID driver and modified them to get packets from a pipe and send them to a pipe. I had serious problems with QEmu: it didn’t register my new “device”. Now I call the right function to initialise the USB device and voilà, it attaches like it should.
    Now I need to obtain valid USB communication using the filter so that I can respond to incoming packets properly.
  • Dear lazyweb, I’m wondering whether I could make my OS load an application but then break on main() so that I can attach a debugger. I cannot run the application *with* GDB. Instead, I want to attach GDB after the program is fully loaded. Maybe LD_PRELOADing something that pauses before main() will work? A sketch of that idea is below.
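
One way that might work (untested against my setup, just a sketch of the LD_PRELOAD idea): build a small shared object whose constructor stops the process right before main() runs; then attach GDB, set a breakpoint on main and resume.

    /* pause_for_gdb.c -- hypothetical helper, not part of my QEmu work.
     * Build:  gcc -shared -fPIC -o pause_for_gdb.so pause_for_gdb.c
     * Run:    LD_PRELOAD=./pause_for_gdb.so ./the-program
     * Attach: gdb -p <pid>, then "break main" and continue
     *         (possibly with "signal SIGCONT"). */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>
     
    __attribute__((constructor))
    static void pause_for_gdb(void)
    {
            /* constructors of preloaded libraries run after the program is
               fully loaded but before main() */
            fprintf(stderr, "pid %d stopped, waiting for a debugger\n", (int)getpid());
            raise(SIGSTOP);
    }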
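
And, going back to the replace filter mentioned further up: stripped of all the QEmu plumbing (which I am not reproducing here), the core of such a filter is just a byte substitution over the USB payload. A generic illustration, not the actual filter code:

    #include <stddef.h>
    #include <stdint.h>
     
    /* Substitute one byte value for another in a payload buffer. */
    static void replace_filter(uint8_t *payload, size_t length,
                               uint8_t from, uint8_t to)
    {
            size_t i;
     
            for (i = 0; i < length; i++)
                    if (payload[i] == from)
                            payload[i] = to;
    }
     
    /* The real filter would call something like
     * replace_filter(buf, len, 'A', 'B') on every packet it sees. */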
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.