Your encrypted hard disk is not safe – cold boot attack

Thanks to Alex Graveley for linking to a very interesting new research result from Ed Felten and others, explaining that encryption keys can be easily retrieved from the memory of a running system by power-cycling it. Contrary to what most people think, it is possible to retrieve almost all data from a DRAM chip several seconds or minutes after a power cut. Many companies (including the one I work for) require hard disk encryption for all laptop computers in order to ensure that any sensitive information stored on the machine cannot be retrieved even if the machine is stolen.

However, the report published by the Princeton researchers shows that if the machine is running or is in suspended mode, then an attacker can steal it and get both the encrypted hard disk and the decryption key. This key must be stored in the RAM of the running system so that it can access the files on disk. The attack consists of briefly removing the power from the machine and rebooting it using a small program that saves the contents of the memory to some external storage. Once this is done, the hard disk encryption key can be retrieved from the saved data. Some machines have a mechanism that clears their memory after a reboot (this is often the case with ECC memory). But even in this case, it is still possible to retrieve the decryption key by cooling down the memory chips, removing them from the machine and inserting them into another machine that will extract the valuable information.

This is a serious problem for anybody who relies on hard disk encryption for protecting confidential data: an attacker who has physical access to the machine (even for just a brief moment) may be able to retrieve the decryption key and get full access to the contents of the disk. Leaving the machine unattended in suspended mode or with the screen locked may be the same as leaving it fully open.

There are not many ways to avoid this problem, besides preventing physical access to the machine or using some software or hardware self-destruction mechanisms in case the machine is tampered with. If the machine is suspended, the research paper (PDF) explains that it may be possible to clear or obscure the key before suspending the system so that it cannot be retrieved easily. The user would then have to re-enter the disk encryption key before resuming the system, or enter a password to decrypt that key. This is not trivial to implement because the system cannot read any information from the encrypted disk until the user has entered the right password, so all software needed for entering passwords and setting input and output devices to a known state must be available before the system is resumed.

It is not possible to implement the same protection when the screen is simply locked, because there will usually be some software that wants to access the hard disk while the screen is locked. The paper describes a way to make it slightly more difficult to retrieve the key from RAM: if the system does not need to access the disk for a while, it could scramble the key (in a reversible way) and spread it over a larger area in memory in such a way that a single bit error over the whole area would make the key unusable. As soon as the key is needed again, it is reassembled and used until it is not needed anymore. This can provide some limited protection because the cold boot attack does not always get a perfect copy of the RAM. But even with this additional level of protection, it looks like a locked screen is a very weak protection against data theft.
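
To make that last idea more concrete, here is a minimal sketch of how I understand the key-spreading countermeasure (my own illustration, not code from the paper; sha256() and get_random_bytes() are assumed to be provided by some crypto library, and a real implementation would also have to lock these buffers in RAM and wipe the recovered key after each use):

  /*
   * The disk key is never stored directly: only a large random area and
   * "key XOR hash(area)" are kept in memory, so a single bit error
   * anywhere in the area after a cold boot makes the hash, and therefore
   * the key, unrecoverable.
   */
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define SPREAD_BYTES (4u * 1024 * 1024)   /* area the key is spread over */
  #define KEY_BYTES    32

  /* Assumed to be provided elsewhere (any cryptographic hash will do). */
  extern void sha256(const void *data, size_t len, uint8_t digest[KEY_BYTES]);
  extern void get_random_bytes(void *buf, size_t len);

  static uint8_t spread_area[SPREAD_BYTES];
  static uint8_t masked_key[KEY_BYTES];

  /* Call this when the key will not be needed for a while. */
  void scramble_key(const uint8_t key[KEY_BYTES])
  {
      uint8_t digest[KEY_BYTES];
      size_t i;

      get_random_bytes(spread_area, SPREAD_BYTES);
      sha256(spread_area, SPREAD_BYTES, digest);
      for (i = 0; i < KEY_BYTES; i++)
          masked_key[i] = key[i] ^ digest[i];
      memset(digest, 0, sizeof(digest));
  }

  /* Call this when the key is needed again; wipe the result after use. */
  void recover_key(uint8_t key[KEY_BYTES])
  {
      uint8_t digest[KEY_BYTES];
      size_t i;

      sha256(spread_area, SPREAD_BYTES, digest);
      for (i = 0; i < KEY_BYTES; i++)
          key[i] = masked_key[i] ^ digest[i];
      memset(digest, 0, sizeof(digest));
  }

The point is simply that the key itself only exists in memory while it is actually being used: an attacker would need a bit-perfect copy of the whole spread area to recompute the hash and unmask it.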

Tivoization still possible with GPLv3 (draft 4)?

The latest draft of the GPLv3 contains many improvements over the previous ones. It also still contains several minor issues, some of which date back to the first draft. Among these, there is a paragraph that has remained unchanged since the first draft, despite several comments pointing out that it could provide a loophole:

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The problem is that “automatically” is not defined, and this could lead to abuses, including preventing users from running modified versions of GPL software on some devices (the Tivoization problem that GPLv3 tries to prevent). “Automatically” can cover current practices such as generating configure from configure.in using autoconf, generating parser.c from parser.y using bison, etc.

But “automatically” could also include some operations that are impractical in terms of time or special equipment required. A file that can be regenerated automatically but requires several hundred years of computation on a supercomputer will effectively prevent most people from compiling the software and installing it on their device (if that file is required during installation or during run time). The canonical example would be if the tool that regenerates the missing source file requires the factorization of the product of two very large prime numbers.
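
To illustrate what such an abuse could look like, here is a deliberately silly and purely hypothetical sketch: a “regeneration tool” shipped with the Corresponding Source that does its job completely automatically… as long as you first hand it the two prime factors of a huge published number. The modulus below is a toy value; a hostile vendor would pick one large enough that factoring it takes centuries.

  /* regen.c - hypothetical "automatic" regeneration tool (illustration only) */
  #include <stdio.h>
  #include <stdlib.h>

  /* Toy modulus (61 * 53); imagine a 2048-bit number here instead. */
  #define PUBLISHED_MODULUS 3233ULL

  int main(int argc, char **argv)
  {
      unsigned long long p, q;

      if (argc != 3) {
          fprintf(stderr, "usage: regen <p> <q>\n");
          return 1;
      }
      p = strtoull(argv[1], NULL, 10);
      q = strtoull(argv[2], NULL, 10);

      if (p * q != PUBLISHED_MODULUS) {
          fprintf(stderr, "wrong factors: cannot regenerate missing-file.c\n");
          return 1;
      }
      /* In the real abuse, p and q would act as the decryption key for the
       * missing file; here we just print a placeholder. */
      printf("/* regenerated missing-file.c */\n");
      return 0;
  }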

As long as the company selling the device provides the complete Corresponding Source (including tools necessary for regenerating the missing files) and Installation Information, then they would be compliant with the GPLv3. As long as the source code (with the missing file) is the “preferred form of the work for making modifications to it”, then they have followed the GPL to the letter… while still preventing users from running modified code on their devices.

Of course I reported this problem and I included links to the previous comments on the same issue. But it looks like this issue has been ignored so far, despite the fact that the comments on the first draft are more than a year old. 🙁

Long time no write…. My last diary entry was almost
one year ago!

Playing with LILO and Slashdot

This morning, I loaded the Slashdot home page
and… Oops! What’s there in the story at the top of the
page? Three links to my LILO pages. Ouch! This is going
to hurt… Welcome to the Slashdot effect! Quick look at
the logs of the web server: since this morning, the server
has already seen more than 20,000 visitors making more than
300,000 requests. And many people in the US are still in
bed at this time. All these downloads are going to suck a
significant amount of bandwidth…

Playing with LILO is fun. It is also
interesting because it encourages good programming
practices. Testing a modified boot screen requires a reboot
of the PC, and any fatal error in the program is likely to
prevent the computer from booting at all. So I take the
time to re-read my code before rebooting. This reminds me
of the good old days when I was programming in Z80 assembler
on my ZX Spectrum.

Playing with the Linux kernel

Yesterday, I had to run some tests at work with
a modified version of the Linux TCP stack. The goal was to
change the initial size of the congestion window and to run
some performance tests on a dedicated network (with high
bandwidth*delay product). Of course, there is no
/proc interface for changing that, because this
would violate the standards. So I decided to add my own. I
had never looked closely at the Linux kernel code before,
and I never touched the TCP stack.

It took me a while to find the file that I had to
change, but find, grep and emacs
are very useful tools. Once I found the file
(net/ipv4/tcp_input.c), it was really easy to
change the way the cwnd was initialized. Half an
hour later, I had created two new interfaces in
/proc/sys/net/ipv4 and everything was working. I
even added a new option in net/ipv4/Config.in to
make these features optional. By reading or writing to the
pseudo-files in /proc, I could dynamically alter
the behavior of the TCP stack and make it
standards-compliant or not.
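
For those who are curious, the general pattern looks
roughly like this (a sketch from memory in the style of the
2.2/2.4 kernels, not my actual patch; the tunable name and
the NET_IPV4_TCP_INITIAL_CWND value are invented for
illustration):

  #include <linux/sysctl.h>

  /* Invented ctl_name value; a real patch would add it to the
   * NET_IPV4_* enum in include/linux/sysctl.h. */
  #define NET_IPV4_TCP_INITIAL_CWND 101

  /* The tunable that appears as /proc/sys/net/ipv4/tcp_initial_cwnd. */
  int sysctl_tcp_initial_cwnd = 2;  /* initial congestion window, in segments */

  /* New entry, in the style of ipv4_table[] in net/ipv4/sysctl_net_ipv4.c. */
  static ctl_table tcp_cwnd_table[] = {
          {NET_IPV4_TCP_INITIAL_CWND, "tcp_initial_cwnd",
           &sysctl_tcp_initial_cwnd, sizeof(int), 0644, NULL,
           &proc_dointvec},
          {0}
  };

  /* ...and wherever net/ipv4/tcp_input.c hard-codes the initial window: */
  /*         tp->snd_cwnd = sysctl_tcp_initial_cwnd;                     */

Once such an entry is hooked into the ipv4 sysctl table,
the value can be changed at run time with something like
"echo 4 > /proc/sys/net/ipv4/tcp_initial_cwnd", which is
exactly what I needed for these tests.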

This was a very interesting experience for me,
because I have been working on free software for a long
time, but I still did not expect that it would be so easy to
add a new feature to something as complex as the TCP stack
of Linux. Of course, I only had to do a very small change
that was limited to a few files, but it was interesting for
me to see how easy it was to understand how the
/proc interfaces work and how the kernel
configuration works, considering that it was the very first
time that I looked at it. So I have to congratulate the
kernel hackers for all this nice work.

hadess:
There is a pointer to the improvements for TCP in the Ericsson
Eifel license
that I mentioned. The first paragraph
contains a reference to the Internet-Draft
that describes the Eifel algorithm. Mind you, this is a
draft and not yet an RFC.
In the References section of the draft, there is a link to
a paper that gives a bit more information about why the
Eifel
algorithm could be useful for TCP.
Oh, and by the way, I come from the French-speaking part of
Belgium, not from France. 😉

schoen:
I just saw your AskAdvogato message in which you
ask how to keep ants out without killing them. Although
killing them is usually the easiest solution (using boxes
with small ant-sized holes containing a poison that the ants
eat), the best way to keep them out is to make it hard for
them to get in. If it is not possible for you to seal all
openings in your house, you can try to smear grease in their
path, or to use chalk or talc powder around the openings
through which the ants enter your house. They hate these
things because they make it harder for them to walk, and they
give up after a while… or find another opening that you
had forgotten. Good luck!

More patents usable in free software…

Following the example set by Raph with his royalty-free
license for using his patents in free software (released
under the GPL), there is now a similar license granted by
Ericsson for some proposed improvements to the TCP protocol
(the Eifel algorithm). More power to free
software!

That license
allows GPLed software to include the proposed improvements
to the TCP stack, as well as any operating system that is
entirely Open Source. So this covers Linux, FreeBSD,
OpenBSD and NetBSD, among others.

(Disclaimer: I work for Ericsson and I contributed to the
wording of that license, but I am currently only speaking
for myself, not for my employer.)

David O’Toole writes:

[…] Looking at stuff like this
makes me get just a tiny bit upset about how badly
the linux world is dragging its political feet
with respect to improving the interface. I’m not talking
about making
all the OK buttons respond to the Enter key
(currently my biggest pet peeve about GNOME, and it’s slowly
being
fixed—recent GIMP etc.)

I’m talking about the imaging model. I don’t
want to criticize X unfairly. The X Window System was
brilliant for its
time and in its environment. But it simply does
not support what people want to do now well enough to
continue.
Fast vector imaging, transparency,
high-resolution monitors, antialiasing. Yes, you can
implement software on top
but there’s no standard and it’s slow.

The first defense I hear all the time is network
transparency. I respond: who cares.
[…]

Well… I, for one, care very much about the network
transparency of X. I am currently typing this from a
Solaris machine on which I have other windows displayed
remotely from a Linux machine and other Solaris machines.
Not only some XTerms and Emacs that could also work over
telnet/rsh/ssh, but also graphical applications like Purify,
Quantify, Netscape, XMMS and some other goodies. They are
all on the same LAN so speed is not really an issue.
Without X’s ability to display anything anywhere, writing
and debugging my programs would be much harder.

So maybe I am among the 1% of people who really use the
remote displays and would not be satisfied with text-based
remote logins. This does not mean that nothing should be
done for the other 99% who would like to get much better
performance from the applications that are running on the
local display.

I don’t think that it is necessary to throw X away and to
start again from scratch. The DGA extension (available on
OpenWindows and XFree86) proves that you can get decent
performance out of X, although this requires some specific
code that is rather ugly and not easy to write and
maintain. Most programmers do not want to write some
additional code for specific X extensions, and indeed they
should not be required to do so.

But it would be possible to get better performance
while keeping the X API. Imagine that someone modifies the
shared X library (libX11.so) so that if the client connects
to the local server, all X calls which are normally sent to
the X server over a socket would be translated into some
optimized drawing operations accessing the video buffer
directly. The shared X library would more or less contain
some bits of the server code (actually, a stub could dlopen
the correct code). If the X client connects to a remote
server, then the X function calls would fall back to the
standard X protocol. All clients that are dynamically
linked to that modified library would automatically benefit
from these improvements without requiring any changes to the
code. So it can be done without throwing away the benefits
of X.
Actually, I believe that some people are working on that at
the moment…
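
To be clear about what I mean, here is a very rough and
entirely hypothetical sketch. This is not real Xlib code:
is_local_display(), _XDrawLineWire() and the backend
structure are names I am making up; only XDrawLine() and
its arguments are the real Xlib entry point.

  #include <dlfcn.h>
  #include <X11/Xlib.h>

  /* Hypothetical fast path, loaded only when the display is local. */
  struct local_backend {
      int (*draw_line)(Display *, Drawable, GC, int, int, int, int);
  };

  static struct local_backend *backend;   /* NULL => use the wire protocol */

  /* Assumed helpers inside this hypothetical modified libX11. */
  extern int is_local_display(Display *dpy);
  extern int _XDrawLineWire(Display *, Drawable, GC, int, int, int, int);

  static void try_load_local_backend(Display *dpy)
  {
      void *handle;

      if (!is_local_display(dpy))        /* e.g. a unix socket to :0 */
          return;
      handle = dlopen("libX11-local.so", RTLD_NOW);
      if (handle)
          backend = dlsym(handle, "x11_local_backend");
  }

  int XDrawLine(Display *dpy, Drawable d, GC gc,
                int x1, int y1, int x2, int y2)
  {
      if (backend)   /* local display: draw into the video buffer directly */
          return backend->draw_line(dpy, d, gc, x1, y1, x2, y2);
      return _XDrawLineWire(dpy, d, gc, x1, y1, x2, y2);  /* normal X request */
  }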

Question: maximum information density in the print-scan process?

Does anybody know how much information can be stored and
reliably retrieved from a piece of paper, using a standard
printer (inkjet or laser, 300dpi) and a scanner (1200 dpi)?
Since a piece of paper can be affected by bit rot
(literally) and can be damaged in various ways, some
error correction (e.g. Reed-Solomon) and detection (e.g.
CRC) is necessary. Also, I do not want to rely on
high-quality paper so I have to accept some ink diffusion
and “background noise” introduced by defects in the
paper.

I found some references to 2D barcodes (such as
DataMatrix, PDF-417 and others) but these codes are
designed to be scanned
efficiently by relatively cheap and fast CCD scanners. I am
not worried about the scanning time (I am using a flatbed
scanner) or the processing time (I can accept some heavy
image processing). Also, I would like to encode raw bits
and pack as much information as possible on a sheet of
paper, regardless of its size. These 2D barcodes have a
fixed or maximum symbol size and it is necessary to use
several of them if I want to fill a sheet of paper, wasting
space in the duplicated calibration areas and guard
areas.

PDF-417 has a maximum density of 106 bytes per square
centimeter (686 bytes per square inch, for you retrogrades),
which is quite low. It is certainly possible to do better,
but I would like to know if there are any standards for
doing that. I am especially interested in methods that are
in the public domain, because most 2D barcodes are patented
(e.g. PDF-417 is covered by US patent 5,243,655
and DataMatrix is covered by 4,939,354,
5,053,609
and 5,124,536).
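
In the meantime, here is the kind of back-of-the-envelope
estimate I have been playing with. All the numbers are my
own assumptions, not measurements: a 300 dpi printer,
3x3-dot modules to survive ink diffusion, one bit per
module, and 25% of the raw bits given to Reed-Solomon
parity.

  #include <stdio.h>

  int main(void)
  {
      const double dpi = 300.0;            /* printer resolution */
      const double dots_per_module = 3.0;  /* one module = 3x3 printer dots */
      const double ecc_overhead = 0.25;    /* fraction of bits used for parity */
      const double page_area = 7.5 * 10.0; /* printable area in square inches */

      double modules_per_inch = dpi / dots_per_module;
      double raw_bits_per_in2 = modules_per_inch * modules_per_inch;
      double net_bytes_per_in2 = raw_bits_per_in2 * (1.0 - ecc_overhead) / 8.0;

      printf("raw bits per square inch : %.0f\n", raw_bits_per_in2);
      printf("net bytes per square inch: %.0f\n", net_bytes_per_in2);
      printf("net bytes per page       : %.0f\n", net_bytes_per_in2 * page_area);
      return 0;
  }

With these assumptions, this gives roughly 900 net bytes
per square inch and around 70 KB per page, but the result
is completely dominated by the module size: whether 3x3
dots is really enough to survive ink diffusion and scanner
misalignment is exactly the kind of thing I would like to
find real data about.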

If you know any good references, please post them in a
diary entry (I try to check the recent diaries once a day,
but I may miss some of them) or send them to me by e-mail:
quinet (at) gamers (dot) org. Thanks!

Hmmm… This is a bit long for a diary entry. But I
don’t think that such a question deserves an article on the
front page. If you think that I should have posted this
as an article, then send me an e-mail and I will re-post
this question and edit it out of my diary.

I posted my opinion on using GdkRgb in Ghostscript, in the
LinuxToday article about Raph’s open letter to the
Ghostscript community. IMHO, GdkRgb is the
best solution and those who see it as an attempt to force
them to use “Gnome stuff” on their desktop do not understand
the way Ghostscript works or what GdkRgb is.

This is not new, but it looks like anything that mentions
Gnome is flamed by KDE bigots, and vice-versa (yes, it does
happen both ways). The interesting thing here is that the
most vocal critics are not developers and/or show clearly
that they do not understand what they are talking about.
Sure, they want someone (who?) to fork Ghostscript,
presumably to create a highly productive KDE branch or
something like that. What a bright idea! Sure, they could
get rid of any Bonobo linking, but throwing GdkRgb away
would be stupid.

Sigh! Even if you are careful about what you communicate
(I think that Raph’s letter was nice and explained very
well that using GdkRgb would
have no influence on KDE), some morons will find a way to
interpret it in a different way.

I’m going to Bristol (UK) for the HUC2k symposium. I
suppose that the probability of meeting someone reading
Advogato at this conference is close to zero, but I will be
there anyway. And I will stay in the Posthouse hotel from
Sunday evening to Wednesday, so if you are reading this
(maybe), and you met me at GUADEC or something (unlikely)
and you will be in Bristol for the conference (extremely
unlikely), then feel free to come and say hello.

Ghostscript

It is nice to see that Ghostscript has a new
maintainer, in the person of raph. Congratulations and good
luck! Ghostscript is already very good, and adding better
antialiasing and other stuff from Libart will make
it even better.

Hmm… There seems to be an account for L. Peter Deutsch on Advogato. Not
very active, apparently…

Diaries, yet another meta-discussion…

At the end of a previous diary entry, raph mentioned that
the diary format is working, but is not ideal for
question-and-answer discussions. Well… Obviously the
diaries were not designed for that, but it is great to see
how they have evolved. There seems to be a need (among the
free software community) for this kind of discussions, which
are more public than direct e-mail, mailing lists or IRC,
but without being restricted to a particular topic like the
articles on the front page.

A first step would be to use automatic bi-directional
links whenever possible. Whenever someone posts a diary
entry containing a link to someone else’s diary, the filter
that parses the submission would at the same time add a
backwards link at the end of the other diary (e.g. “[1
comment by so-and-so]”). It would then be easier to check
if someone has replied to your diary entry.

But as the number of diaries grows, it becomes
increasingly difficult to keep up with the postings. It
will not take long before
the daily submissions cannot fit on the front page. Already
now, it is easy to miss some parts of a discussion if you go
away for a couple of days. And the only way to read the
missing parts is to look at the pages of all potential
participants and check their previous entries. This is not
very convenient, because you may forget some of them and you
may not know that a new guy has posted some interesting
comments. Of course, that could be solved by another hack
to Advogato: allow the “recentlog” to take a range of dates,
or at least a starting date. It would then display all
diaries that have been posted or modified during that time,
so that you could read last week’s diaries in chronological
order if you missed them. (Implementation note: Advogato
should store a chronological index of all diaries, otherwise
finding and sorting them would be inefficient.)
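
As a purely hypothetical illustration of that
implementation note (nothing here is actual Advogato code),
the index could be as simple as an append-only file with
one line per posting:

  #include <stdio.h>
  #include <string.h>

  #define INDEX_FILE "diary-index.txt"

  /* Called whenever a diary entry is posted or modified. */
  int index_add(const char *date, const char *user)   /* date as YYYYMMDD */
  {
      FILE *f = fopen(INDEX_FILE, "a");
      if (!f)
          return -1;
      fprintf(f, "%s %s\n", date, user);
      fclose(f);
      return 0;
  }

  /* List every entry posted between 'from' and 'to' (inclusive); dates
   * in YYYYMMDD form compare correctly with strcmp(). */
  void recentlog_range(const char *from, const char *to)
  {
      char date[16], user[64];
      FILE *f = fopen(INDEX_FILE, "r");

      if (!f)
          return;
      while (fscanf(f, "%15s %63s", date, user) == 2)
          if (strcmp(date, from) >= 0 && strcmp(date, to) <= 0)
              printf("%s: diary entry by %s\n", date, user);
      fclose(f);
  }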

But where does that lead? If it is easier to discuss
things in the diaries, that part of Advogato would become
similar to a web-based bulletin board or chat room. Or a
web-based version of USENET. The comparison with USENET and
other chat rooms is interesting: they allow threading (using
a “References” header in the newsgroups, or direct links in
the web fora) and they provide easy ways to separate the
unrelated topics (different subject lines, newsgroups or
chat rooms). The Advogato diaries put everything in one
large page and it is up to the readers to separate the
interesting things from the noise. But on the other hand,
this can be considered a feature that reinforces the
community, because all members get the opportunity to read
some articles that they might have skipped if the topics had
been clearly separated. Also, another feature of the
diaries is that they do not have subject lines: those who
want to add them can do it (using bold and/or indentation)
but nobody is forced to structure their diaries in any way.
It is difficult to please everybody…

So I don’t know what would be best for Advogato (anyway,
who am I to judge?) but I think that there are several
significant differences between the diaries and a
full-featured discussion forum, and these differences may be
good for Advogato. If nobody has enough spare time to add a
discussion forum alongside (and not as a replacement for) the
diaries, then I am happy with the current situation.
Hmm… Maybe it would be better with the addition of
bi-directional links…

Attribution-ShareAlike 3.0
This work is licensed under an Attribution-ShareAlike 3.0 license.