Building IoT projects with Ubuntu Core talk

Last week I gave a talk at the Perth Linux Users Group about building IoT projects using Ubuntu Core and Snapcraft. The video is now available online. Unfortunately there were some problems with the audio setup, leading to some background noise in the video, but it is still intelligible. The slides used in the talk can be found here.

The talk focused on how Ubuntu Core can help with the ongoing security and maintenance of IoT projects. While it might be easy to buy a Raspberry Pi and install Linux and your application on it, how do you make sure the device remains up to date with security updates? How do you push out updates to your application in a reliable fashion?

I outlined a way to deploy a project using Ubuntu Core, including:

- Packaging a simple web server app using the snapcraft tool (a stand-in sketch of such an app follows below).
- Configuring automatic builds from git, published to the edge channel on the Snap Store. This is also an easy way to get ARM builds for a package, rather than trying to deal with cross-compilation toolchains.
- Using the ubuntu-image command to create an Ubuntu Core image with the application preinstalled.

I gave a demo booting such an image in a virtual machine, which showed the application up and running and ready to use. I also demonstrated how promoting a build from the edge channel to stable on the store would make it available to the system-wide automatic update mechanism on the device.
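The actual demo application from the talk isn't reproduced here, but as a rough stand-in, the "simple web server app" could be something as small as the following Python sketch (the handler name and port are just placeholders for illustration):

    # Minimal stand-in for a simple web server app that could be packaged
    # as a snap; not the actual application used in the talk's demo.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"Hello from an Ubuntu Core device\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Listen on all interfaces so the page is reachable from other machines.
        HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()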

Performing mounts securely on user owned directories

While working on a feature for snapd, we had a need to perform a "secure bind mount". In this context, "secure" meant:

- The source and/or target of the mount is owned by a less privileged user.
- User processes will continue to run while we're performing the mount (so solutions that involve suspending all user processes are out).
- While we can't prevent the user from moving the mount point, they should not be able to trick us into mounting to locations they don't control (e.g. by replacing the path with a symbolic link).

The main problem is that the mount system call uses string path names to identify the mount source and target. While we can perform checks on the paths before mounting, we have no way to guarantee that the paths don't point to another location by the time we make the mount() system call: a classic time-of-check to time-of-use race condition.

One suggestion was to modify the kernel to add a MS_NOFOLLOW flag to prevent symbolic link attacks. This turns out to be harder than it would appear, since the kernel is documented as ignoring any flags other than MS_BIND and MS_REC when performing a bind mount. So even if a patched kernel also recognised MS_NOFOLLOW, there would be no way to distinguish its behaviour from that of an unpatched kernel. Fixing this properly would probably require a new system call, which is a rabbit hole I don't want to dive down.

So what can we do using the tools the kernel gives us? The common way to reuse a reference to a file between system calls is the file descriptor. We can securely open a file descriptor for a path using the following algorithm (sketched in Python at the end of this post):

1. Break the path into segments, and check that none are empty, ".", or "..".
2. Open the root directory with open("/", O_PATH|O_DIRECTORY).
3. Open the first segment with openat(root_fd, "segment", O_PATH|O_NOFOLLOW|O_DIRECTORY).
4. Repeat for each of the remaining segments, closing the parent descriptors as we go.

Now we just need to find a way to use these file descriptors with the mount system call. I came up with two strategies to achieve this.

Use the current working directory

The first idea I tried was to make use of the fact that the mount system call accepts relative paths. We can use the fchdir system call to change to a directory identified by a file descriptor, and then refer to it as ".". Putting those together, we can perform a secure bind mount as a multi-step process:

1. fchdir to the mount source directory.
2. Perform a bind mount from "." to a private stash directory.
3. fchdir to the mount target directory.
4. Perform a bind mount from the private stash directory to ".".
5. Unmount the private stash directory.

While this works, it has a few downsides. It requires a third intermediate location to stash the mount. It could interfere with anything else that relies on the working directory. It also only works for directory bind mounts,…
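To make the descriptor-walking algorithm above concrete, here is a minimal Python sketch of it. This is my own illustration rather than the actual snapd implementation; it assumes Linux, an absolute path, and relies on os.open()'s dir_fd parameter (which maps to openat()). The function name secure_open_dir is just a placeholder:

    import os

    def secure_open_dir(path):
        # Walk an absolute path one segment at a time, refusing to follow
        # symbolic links, and return an O_PATH descriptor for the final
        # directory. Illustrative only; error handling is minimal.
        if not path.startswith("/"):
            raise ValueError("path must be absolute")
        segments = [] if path == "/" else path.strip("/").split("/")
        if any(seg in ("", ".", "..") for seg in segments):
            raise ValueError("path contains empty, '.' or '..' segments")

        flags = os.O_PATH | os.O_NOFOLLOW | os.O_DIRECTORY
        fd = os.open("/", os.O_PATH | os.O_DIRECTORY)
        for seg in segments:
            try:
                # os.open() with dir_fd uses openat() under the hood, so the
                # lookup is relative to the descriptor we already hold.
                # O_NOFOLLOW stops the segment being followed if it is a
                # symlink, and O_DIRECTORY then rejects it because a symlink
                # is not a directory.
                next_fd = os.open(seg, flags, dir_fd=fd)
            finally:
                os.close(fd)
            fd = next_fd
        return fd

The descriptors returned by a walk like this are what the fchdir trick above then consumes, one for the mount source and one for the target.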

ThinkPad Infrared Camera

One of the options available when configuring my ThinkPad was an infrared camera. The main selling point was "Windows Hello" facial recognition based login. While I wasn't planning on keeping Windows on the system, I was curious to see what I could do with it under Linux. Hopefully this is of use to anyone else trying to get it to work.

The camera is manufactured by Chicony Electronics (probably a CKFGE03 or similar), and shows up as two USB devices:

    04f2:b5ce Integrated Camera
    04f2:b5cf Integrated IR Camera

Both devices are bound by the uvcvideo driver, showing up as separate video4linux devices. Interestingly, the IR camera seems to be assigned /dev/video0, so it generally gets picked by apps in preference to the colour camera. Unfortunately, the image it produces comes up garbled, so it wasn't going to be quite so easy to get things working.

Looking at the advertised capture modes, the camera supports Motion-JPEG and a YUYV raw mode. So I tried capturing a few JPEG frames with the following GStreamer pipeline:

    gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=10 ! image/jpeg ! multifilesink location="frame-%02d.jpg"

Unlike in raw mode, the red illumination LEDs started flashing when in JPEG mode, which resulted in frames having alternating exposures. Here's one of the better exposures. What is interesting is that the JPEG frames have a different aspect ratio to the raw version: a more normal 640x480 rather than 400x480.

So, to dig into the raw format, I captured a few raw frames:

    gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=10 ! "video/x-raw,format=(string)YUY2" ! multifilesink location="frame-%02d.raw"

The illumination LEDs stayed on constantly while recording in raw mode. The contents of the raw frames show something strange:

    00000000 11 48 30 c1 04 13 44 20 81 04 13 4c 20 41 04 13 |.H0...D ...L A..|
    00000010 40 10 41 04 11 40 10 81 04 11 44 00 81 04 12 40 |@.A..@....D....@|
    00000020 00 c1 04 11 50 10 81 04 12 4c 10 81 03 11 44 00 |....P....L....D.|
    00000030 41 04 10 48 30 01 04 11 40 10 01 04 11 40 10 81 |A..H0...@....@..|
    ...

The advertised YUYV format encodes two pixels in four bytes, so you would expect any repeating patterns to occur with a period of four bytes. But the data in these frames seems to repeat with a period of five bytes. Looking closer, it is actually repeating at a period of 10 bits, or four packed values for every five bytes. Furthermore, the 800-byte rows work out to 640 pixels when interpreted as packed 10-bit values (rather than the advertised 400 pixels), which matches the dimensions of the JPEG mode.

The following Python code can unpack the 10-bit pixel values:

    def unpack(data):
        result = []
        for i in range(0, len(data), 5):
            block = (data[i] | data[i+1] << 8 | data[i+2] << 16 | data[i+3] << 24 | data[i+4] << 32)
            result.append((block >> 0) & 0x3ff)
            result.append((block >> 10) & 0x3ff)
            result.append((block >> 20) & 0x3ff)
            result.append((block >> 30) & 0x3ff)
        return result

After adjusting the…
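For anyone wanting to look at the output, here is one possible way to use unpack() to turn a captured raw frame into a viewable greyscale image. The frame geometry (480 rows of 800 bytes, 640 pixels wide) follows from the analysis above, but the file names and the 8-bit scaling are assumptions made for illustration:

    # Hypothetical usage of unpack(): convert one captured raw frame into an
    # 8-bit greyscale PGM image that ordinary viewers can display.
    WIDTH, HEIGHT, ROW_BYTES = 640, 480, 800

    with open("frame-00.raw", "rb") as f:
        raw = f.read(ROW_BYTES * HEIGHT)

    pixels = unpack(raw)

    with open("frame-00.pgm", "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (WIDTH, HEIGHT))
        # Drop the two least significant bits to map 10-bit values onto 0-255.
        f.write(bytes(value >> 2 for value in pixels))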