D-Bus optimizations

For the last month and a half, as part of my work at Collabora, I have been working on optimizing D-Bus which, great piece of software though it is, has some performance problems that hold back its further adoption (especially on embedded devices).

Fortunately, we didn’t have to start from scratch, since this has been an ongoing project at Collabora, where previous research and upstream discussions had already taken place.

Based on this great work (by Alban Créquy and Ian Molton, BTW), we started by looking at possible solutions to the biggest problems: context switches, since all traffic on the bus goes through the D-Bus daemon, and the multiple copies each message goes through on its trip from one peer, via the kernel, to the daemon, and finally on to the peer the message is targeted at. The options were:

  • AF_DBUS work from Alban/Ian: while it improved the performance of the bus by a big margin, the solution wasn’t well received on the upstream kernel mailing list, as it involved putting lots of D-Bus-specific code (all the routing) in the kernel.
  • Shared memory: there is no proof-of-concept code to look at, but it could be a good idea, as peers on the bus would use shared memory segments to send messages to each other. However, it would mean rewriting most of the current D-Bus code, so it is maybe an option for the future, but not for the short term.
  • Using some sort of multicast IPC that would let peers on the bus send messages to each other without every message going through the daemon, which, as several performance tests have shown, is the biggest bottleneck in current D-Bus performance. We had a look at different options, one of them being AF_NETLINK, which provides most of what is needed, although it has some limitations, the biggest one being that it drops packets when the receiver queue is full, which is not acceptable for D-Bus.
    UDP/IP multicast also came up in some of the discussions, but it seems to carry too much overhead for the D-Bus use case, as we would have to use eth0 or similar, since multicast on the loopback device doesn’t exist (hence no D-Bus on computers without a network card). Losing packets is another caveat of this solution, as is the lack of a message ordering guarantee.

So, the solution we have come up with is to implement multicast on UNIX sockets, extend it with what D-Bus needs from it, and, of course, make use of it in the D-Bus implementation itself. Here’s what we have right now (please note that this is still a work in progress):

The way this works is easier to see in a diagram, so here it is. First, how the current D-Bus architecture works:

[diagram: current architecture; every message travels from the sending peer through the kernel to the D-Bus daemon, and then through the kernel again to the receiving peer]

and how this would change:

[diagram: proposed architecture; peers send directly to the multicast group on the UNIX socket, and socket filters in the kernel select which messages each peer receives]

That is, when a peer wants to join a bus, it would connect to the daemon (exactly as it does today) and authenticate, and once the daemon knows the peer is authenticated, it would join the accept()ed socket to the multicast group (this is important: we don’t want peers joining the multicast group by themselves, so it’s the daemon’s job to do that). Once the peer has joined the multicast group, it would use socket filters to determine what traffic it wants to receive, so that it only gets from the kernel the messages it is really interested in. The daemon would do the same, setting its filters so that it only gets traffic addressed to the bus itself (the org.freedesktop.DBus well-known name).

In this multicast setup we also have to prevent unauthorized eavesdropping, even though peers need to authenticate with the daemon to join the multicast group. For this, we have been thinking about using Linux Security Modules. It is still not 100% clear how this would be done, so more information on this soon.

The above-mentioned branches work right now but, as I said before, they are still a work in progress, so several things are needed before we can call this work finished. For now, we have succeeded in making the daemon receive no traffic at all apart from what it really needs, which is already a big win, as we avoid the expensive context switches; but the socket filters still need a lot of work, apart from other minor and not-so-minor things.

Right now, we are in the process of getting the kernel part accepted and of getting the D-Bus branch into an upstreamable form. Apart from that, we will provide patches for all the D-Bus bindings we know about (GLib, QtDBus, Python, etc.).

Comments/suggestions/ideas welcome.

10 thoughts on “D-Bus optimizations”

  1. To solve the eavesdropping problem, why not have the D-Bus daemon set up the BPF filter, and only allow the D-Bus clients to read through that filter?

  2. @Josh Triplett That is actually what we are doing. Rodrigo didn’t go into the nitty-gritty details yet since there will be a whole series of posts to come.

    Being able to set BPF filters from the daemon side is required for that and this patch has not been accepted upstream yet.

  3. This is really great stuff, Rodrigo :). In Tracker we started using FD passing between our daemons precisely to minimize the D-Bus overhead (Tracker is also quite intensive with it), but that only helps with not sending large chunks of data over the bus. It’ll be great to see how fast Tracker gets with this.

  4. Nils, we are looking at it, as it was suggested on the kernel mailing list, where we haven’t so far succeeded in getting the multicast-on-UNIX-sockets solution accepted.

    I didn’t know about it, so I’m having a look at it as we speak.

  5. How’s this security-wise? Are all D-Bus clients required to validate the data they get? Are they protected against DoS’ing or that sort of thing?

    I don’t know what the threat model is, but it seems to me that without a trusted middleman, you push some responsibility for sensible behaviour onto the clients. Not that that’s necessarily a problem if they are using a common infrastructure that can be updated in the same fashion as the daemon.

  6. Hi Folks.

    After reading the lengthy discussions on


    I came to think that the problem is that the two communication modes, client/server call and signal sender/receiver, might need to be handled differently in D-Bus:

    Client/server call: one client invokes a method on a D-Bus service. Only this client needs to see the answer. For this we need:
    -> One-to-one communication over a private channel, maybe without involving the D-Bus daemon, since only the communication setup between the two peers needs to be handled centrally and authenticated.

    Signal sender/receiver: one sends, many receive. Who can connect to the central system/session bus needs to be authenticated, but not who can receive which message. So I think for this there should be a “BUS”-like thing in the kernel that everyone who is authenticated can connect to and see all messages on.

    Adding a filter option for signal receivers within the kernel “BUS”, allowing further optimization (only receiving messages that have been subscribed to), could be a second step.
    But maybe one should not rely on signals being unreadable by everyone else on the bus.

    Maybe this separation can improve the architecture of D-Bus, since the IPC mechanism for each mode can then be chosen differently:

    -> Client/server: one-to-one. Reliable transport. The sender blocks if the receiver’s queue is full. No reordering allowed. Sensitive data allowed. Since communication is direct, there is no need to mediate it via the D-Bus daemon.

    -> Sender/receiver: one-to-many/all. Reliable transport. No blocking of the sender. Allow reordering? No sensitive data allowed in signals. If the kernel “BUS” service is used for this, there is no need to mediate communication via the dbus-daemon.

Leave a Reply

Your email address will not be published. Required fields are marked *