David: taking a quick look at Google’s documentation, it sure looks like OpenID to me. The main items of note are:
- It documents the use of OpenID 2.0’s directed identity mode. Yes this is “a departure from the process outlined in OpenID 1.0”, but that could be considered true of all new features found in 2.0. Google certainly isn’t the first to implement this feature:
- Yahoo’s OpenID page recommends users enter “yahoo.com” in the identity box on web sites, which will initiate a directed identity authentication request.
- We’ve been using directed identity with Launchpad to implement single sign on for various Canonical/Ubuntu sites.
Given that Google account holders identify themselves by email address, users aren’t likely to know a URL to enter, so this kind of makes sense.
- The identity URLs returned by the OpenID provider do not directly reveal information about the user, containing a long random string to differentiate between users. If the relying party wants any user details, they must request them via the standard OpenID Attribute Exchange protocol.
- They are performing access control based on the OpenID realm of the relying party. I can understand doing this in the short term, as it gives them a way to handle a migration should they make an incompatible change during the beta. If they continue to restrict access after the beta, you might have a valid concern.
It looks like there would be no problem talking to their provider using existing off the shelf OpenID libraries (like the ones from JanRain).
If you have an existing site using OpenID for login, chances are that after registering the realm with Google you’d be able to log in by entering Google’s OP server URL. At that point, it’d be fairly trivial to add another button to the login page – sites seem pretty happy to plaster provider-specific radio buttons and entry boxes all over the page already …
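The realm-based access control mentioned above boils down to matching a relying party's return_to URL against a registered realm. A minimal sketch of those matching rules (simplified and with names of my own choosing; real implementations, such as the trust-root handling in the JanRain library, do considerably more validation):

```python
from urllib.parse import urlparse

def realm_matches(realm, return_to):
    """Simplified check that a return_to URL falls under an OpenID realm.

    Handles only scheme, host (including "*." wildcard realms) and path
    prefix; a real implementation also validates ports, rejects overly
    broad realms, and so on.
    """
    r = urlparse(realm)
    u = urlparse(return_to)
    if r.scheme != u.scheme:
        return False
    host = r.hostname or ""
    if host.startswith("*."):
        # Wildcard realm: match the domain itself or any subdomain.
        domain = host[2:]
        if u.hostname != domain and not (u.hostname or "").endswith("." + domain):
            return False
    elif u.hostname != host:
        return False
    # The return_to path must sit under the realm's path.
    return (u.path or "/").startswith(r.path or "/")
```
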
One of the nice features of the PlayStation 3 is the UPNP/DLNA media renderer. Unfortunately, the set of codecs is pretty limited, which is a problem since most of my music is encoded as Vorbis. MediaTomb was suggested to me as a server that could transcode the files to a format the PS3 could understand.
Unfortunately, I didn’t have much luck with the version included with Ubuntu 8.10 (Intrepid), and after a bit of investigation it seems that there isn’t a released version of MediaTomb that can send PCM audio to the PS3. So I put together a package of a subversion snapshot in my PPA which should work on Intrepid.
With the newer package, it was pretty easy to get things working:
- Install the mediatomb-daemon package
- Edit the /etc/mediatomb/config.xml file and make the following changes:
- Change the <protocolInfo/> line to set extend="yes".
- In the <extension-mimetype> section, uncomment the line to map “avi” to “video/divx”. This will get a lot of videos to play without problem.
- In the <mimetype-upnpclass> section, add a line to map “application/ogg” to “object.item.audioItem.musicTrack”. This is needed for the vorbis files to be recognised as music.
- In the <mimetype-contenttype> section add a line to map “audio/L16” to “pcm”.
- On the <transcoding> element, change the enabled attribute to “yes”.
- Add the settings from here to the <transcoding> section.
- Edit the /etc/default/mediatomb script and set INTERFACE to the network interface you want to advertise on.
- Restart the mediatomb daemon.
- Go to the web UI (try opening /var/lib/mediatomb/mediatomb.html in a web browser), and add the directories you want to export.
- Test things on the PS3.
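Collected together, the config.xml edits from the steps above look roughly like this. This is an illustrative excerpt only; element names are from the MediaTomb config format but may vary slightly between versions, so check them against the config file your package ships:

```xml
<!-- Excerpt of /etc/mediatomb/config.xml with the edits applied -->
<protocolInfo extend="yes"/>

<extension-mimetype>
  <map from="avi" to="video/divx"/>
</extension-mimetype>

<mimetype-upnpclass>
  <map from="application/ogg" to="object.item.audioItem.musicTrack"/>
</mimetype-upnpclass>

<mimetype-contenttype>
  <treat mimetype="audio/L16" as="pcm"/>
</mimetype-contenttype>

<transcoding enabled="yes">
  <!-- transcoding profiles go here -->
</transcoding>
```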
Things aren’t perfect though. As MediaTomb is simply piping the transcoded audio to the PS3, it doesn’t implement seeking on such files, and it seems that the PS3 won’t even let you pause a stream that doesn’t allow seeking. With a less generalised transcoding backend, though, supporting seeking in an uncompressed PCM stream should be straightforward, since byte offsets map directly to sample numbers.
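The byte-offset arithmetic is simple enough to sketch (the function and its defaults are my own, assuming a headerless stream of interleaved 16-bit PCM at CD rates):

```python
def pcm_byte_offset(seconds, sample_rate=44100, channels=2, bits_per_sample=16):
    """Map a seek position in seconds to a byte offset in a raw PCM stream.

    Assumes headerless interleaved PCM, as produced by a typical
    decode-to-PCM transcoding pipeline.
    """
    bytes_per_frame = channels * bits_per_sample // 8
    # Round down to a frame boundary so we never land mid-sample.
    frame = int(seconds * sample_rate)
    return frame * bytes_per_frame
```
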
The other problem I found was that none of the recent music I’d ripped showed up. It seems that they’d been ripped with the .oga file extension rather than .ogg. This change appears to have been made in bug 543306, but the reasoning seems suspect: the guidelines from Xiph indicate that the files generated by this encoding profile should continue to use the .ogg file extension.
I tried adding some extra mappings to the MediaTomb configuration file to recognise the files without success, but eventually decided to just rename them and fix the encoding profile locally.
A Perfect Media Server
While MediaTomb mostly works for me, it doesn’t do everything I’d like. A few of the things I’d like out of a media server include:
- No need to configure things via a web UI. In fact, I could do without a web UI altogether – something nicely integrated into the desktop would be nice.
- No need to set model specific settings in the configuration file. Ideally it would know how to talk to common players by default.
- Supports transcoding and seeking within transcoded files. Preferably knows what needs transcoding for common players.
- Picks up new files in real time. So something inotify-based rather than periodic reindexing.
- A virtual folder tree for music based on artist/album metadata. A plain folder tree for other media would be fine.
- Cached video thumbnails would be nice too. The build of MediaTomb in my PPA includes support for thumbnails (needs to be enabled in the config file), but they aren’t cached so are slow to appear.
Perhaps Zeeshan’s media server will be worth trying out at some point.
I’ve been playing with OAuth a bit lately. The OAuth specification fulfills a role that some people saw as a failing of OpenID: programmatic access to websites and authenticated web services. The expectation that OpenID would handle these cases seems a bit misguided since the two use cases are quite different:
- OpenID is designed on the principle of letting arbitrary OpenID providers talk to arbitrary relying parties and vice versa.
- OpenID is intentionally vague about how the provider authenticates the user. The only restriction is that the authentication must be able to fit into a web browsing session between the user and provider.
While these are quite useful features for a decentralised user authentication scheme, the requirements for web service authentication are quite different:
- There is a tighter coupling between the service provider and client. A client designed to talk to a photo sharing service won’t have much luck if you point it at a micro-blogging service.
- Involving a web browser session in the authentication process for individual web service requests is not a workable solution: the client might be designed to run offline, for instance.
While the idea of a universal web services client is not achievable, there are areas of commonality between the different services: gaining authorisation from the user and authenticating individual requests. This is the area that OAuth targets.
While OAuth and OpenID have different applications, it is possible to compare some of the choices made in the two protocols:
- The secrets for request and access tokens are sent to the client in the clear. So at a minimum, a service provider’s request token URL and access token URL should be served over SSL. OpenID nominally avoids this by using Diffie-Hellman key exchange to prevent eavesdropping, but ended up needing SSL anyway to prevent man-in-the-middle attacks. So sending the secrets in the clear is probably a more honest approach.
- Actual web service methods can be authenticated over plain HTTP in a fairly secure manner using the HMAC-SHA1 or RSA-SHA1 signature methods. That said, if you’re using SSL anyway, the PLAINTEXT signature method is probably no worse than HMAC-SHA1.
- The authentication protocol supports both web applications and desktop applications. Though any security gained through consumer secrets is invalidated for desktop applications, since anyone with a copy of the application will necessarily have access to the secrets. A few other points follow on from this:
- The RSA-SHA1 signature method is not appropriate for use by desktop applications. The signature is based only on information available in the web service request and the RSA key associated with the consumer, and the private key will need to be distributed as part of the application. So if an attacker discovers an access token (even without the access token secret), they can make authenticated requests.
- The other two authentication methods — HMAC-SHA1 and PLAINTEXT — depend on an access token secret. Along with the access token, this is essentially a proxy for the user name and password, so should be protected as such (e.g. via the GNOME keyring). It still sounds better than storing passwords directly, since the token won’t give access to unrelated sites the user happened to use the same password on, and can be revoked independently of changing the password.
- While the OpenID folks found a need for a formal extension mechanism for version 2.0 of that protocol, nothing like that seems to have been added to OAuth. There are now a number of proposed extensions for OAuth, so it probably would have been a good idea. Perhaps it isn’t as big a deal, due to the tighter coupling of service providers and consumers, but I could imagine it being useful as the two parties evolve over time.
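To make the HMAC-SHA1 signature method discussed above concrete, here is a minimal sketch using only the Python standard library. The function names are my own; a real client also has to generate nonces and timestamps, and handle duplicate parameter names:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_escape(value):
    # OAuth requires RFC 3986 percent-encoding; only unreserved
    # characters (letters, digits, "-", ".", "_", "~") stay unescaped.
    return quote(value, safe="-._~")

def hmac_sha1_signature(method, url, params, consumer_secret, token_secret=""):
    """Sign a request with OAuth's HMAC-SHA1 signature method.

    `params` holds the oauth_* and query parameters (as strings),
    excluding oauth_signature itself.
    """
    # Normalise the parameters: escape, sort, and join as key=value pairs.
    normalized = "&".join(
        "%s=%s" % (oauth_escape(k), oauth_escape(v))
        for k, v in sorted(params.items())
    )
    # The signature base string ties together method, URL and parameters.
    base_string = "&".join(
        [method.upper(), oauth_escape(url), oauth_escape(normalized)]
    )
    # The signing key concatenates the consumer and token secrets.
    key = "%s&%s" % (oauth_escape(consumer_secret), oauth_escape(token_secret))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()
```

Fed the worked example from the appendix of the OAuth Core specification, this reproduces the expected signature.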
So the standard seems decent enough, and better than trying to design such a system yourself. Like OpenID, it’ll probably take until the second release of the specification for some of the ambiguities to be resolved and for adoption to widen.
From the Python programmer point of view, things could be better. The library available from the OAuth site seems quite immature and lacks support for a few aspects of the protocol. It looks okay for simpler uses, but may be difficult to extend for use in more complicated projects.