D-Bus optimizations II

As explained in my previous post, we are working on optimizing D-Bus for use on embedded systems (more precisely, for GENIVI).

We started a discussion on the linux-netdev mailing list about getting our patch for multicast on UNIX sockets accepted, but unfortunately the feedback hasn’t been very good. Since one of the premises from GENIVI is that all the work we do should be accepted upstream, we have spent the last few days thinking about what else we could use to fix the main issue we have found in D-Bus performance which, as stated in my previous post, is the number of context switches needed when all traffic on the bus goes through dbus-daemon. So, here’s a summary of the options we have been looking at:

  • Use TIPC, a socket family already available in modern kernels that is targeted at clustering environments, although it can also be used for communication inside a single node.
  • Use ZeroMQ, a library that, at first look, provides what we need for D-Bus, that is, multicast on local sockets (see the sketch after this list).
  • Provide multicast on UNIX sockets as a new socket family (AF_MCAST), although this wasn’t well received in the linux-netdev discussion either. This would be a trimmed-down version of AF_UNIX containing only what is needed for multicast.
  • Extend the AF_DBUS work from Alban to include what we have learnt from the multicast AF_UNIX architecture. This would mean having a kernel patch that, as with the AF_MCAST solution, would have to be maintained by distributors, since the linux-netdev people didn’t like this solution either.
  • Use Netlink, which has everything we need for D-Bus, that is, multicast and unicast, and is an established IPC mechanism in the kernel (between kernel space and user space) that is even used for other services similar to D-Bus. We would create a new Netlink subfamily for D-Bus that would contain the routing code, since Netlink, for security reasons, does not allow direct connections between user-space applications (see the sketch below).
  • Use KBUS, a lightweight messaging system provided as a kernel module.
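
To give an idea of what the ZeroMQ option would look like, here is a minimal sketch (not our prototype; the ipc:// path and the message are made up for illustration) of a PUB socket and a SUB socket on a local ipc:// endpoint. The publisher sends each message once and ZeroMQ fans it out to every connected subscriber, which is roughly the “multicast on local sockets” behaviour mentioned above:

    /* Minimal ZeroMQ PUB/SUB over a local ipc:// endpoint (libzmq >= 3.x).
     * The endpoint path and the message content are made up. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();

        /* Bus side: bind a PUB socket; each message sent here is
         * delivered by ZeroMQ to every connected subscriber. */
        void *pub = zmq_socket(ctx, ZMQ_PUB);
        zmq_bind(pub, "ipc:///tmp/dbus-mcast-test");

        /* Client side: a SUB socket on the same endpoint, subscribed
         * to everything (empty filter). */
        void *sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_connect(sub, "ipc:///tmp/dbus-mcast-test");
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0);

        sleep(1); /* crude wait for the connection (slow-joiner issue) */

        zmq_send(pub, "NameAcquired", strlen("NameAcquired"), 0);

        char buf[64] = { 0 };
        int n = zmq_recv(sub, buf, sizeof(buf) - 1, 0);
        printf("received %d bytes: %s\n", n, buf);

        zmq_close(sub);
        zmq_close(pub);
        zmq_ctx_destroy(ctx);
        return 0;
    }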

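For the Netlink option, the user-space side of such a subfamily could look roughly like the sketch below. NETLINK_DBUS and the multicast group value are placeholders made up for illustration, since that subfamily (and the kernel-side routing behind it) is exactly the code we would have to write:

    /* Sketch of a client joining a multicast group on a hypothetical
     * NETLINK_DBUS family.  Neither NETLINK_DBUS nor DBUS_SIGNAL_GROUP
     * exists today; they stand for the subfamily we would add. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    #define NETLINK_DBUS        31          /* hypothetical protocol number */
    #define DBUS_SIGNAL_GROUP   (1 << 0)    /* hypothetical multicast group */

    int main(void)
    {
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_DBUS);
        if (fd < 0) {
            perror("socket");   /* fails unless the kernel side exists */
            return 1;
        }

        /* Join the signal multicast group: every message the kernel-side
         * router sends to this group is delivered to all members, without
         * going through a user-space daemon. */
        struct sockaddr_nl addr;
        memset(&addr, 0, sizeof(addr));
        addr.nl_family = AF_NETLINK;
        addr.nl_pid    = 0;                 /* let the kernel assign a port id */
        addr.nl_groups = DBUS_SIGNAL_GROUP;

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        /* From here on, recvmsg() on fd would return netlink messages
         * (struct nlmsghdr + payload) routed to the group by the kernel. */
        printf("listening on the NETLINK_DBUS multicast group\n");
        return 0;
    }
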
Right now we have working code for AF_MCAST and are looking at Netlink, TIPC and KBUS, so we will be blogging more details on what we find out in our experiments (a first TIPC sketch is included below). Any feedback would be appreciated since, as stated before, we want all of this work to be accepted upstream. So, comments, suggestions?
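
As a small taster of the TIPC experiment, here is a sketch of node-local multicast (with a made-up name type, and assuming the tipc kernel module is loaded): one sendto() on a SOCK_RDM socket, addressed to a name sequence, reaches every socket bound to an overlapping range on the same node:

    /* Node-local TIPC multicast sketch; the name type is arbitrary. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <linux/tipc.h>

    #define DBUS_TIPC_TYPE 18888   /* arbitrary name type for the test */

    int main(void)
    {
        /* Receiver: bind a reliable-datagram socket to a name sequence,
         * with node-local scope only. */
        int rx = socket(AF_TIPC, SOCK_RDM, 0);
        struct sockaddr_tipc bind_addr;
        memset(&bind_addr, 0, sizeof(bind_addr));
        bind_addr.family = AF_TIPC;
        bind_addr.addrtype = TIPC_ADDR_NAMESEQ;
        bind_addr.scope = TIPC_NODE_SCOPE;
        bind_addr.addr.nameseq.type  = DBUS_TIPC_TYPE;
        bind_addr.addr.nameseq.lower = 0;
        bind_addr.addr.nameseq.upper = 99;
        if (bind(rx, (struct sockaddr *)&bind_addr, sizeof(bind_addr)) < 0) {
            perror("bind");
            return 1;
        }

        /* Sender: a multicast sendto() to the same name sequence reaches
         * every socket bound to an overlapping range. */
        int tx = socket(AF_TIPC, SOCK_RDM, 0);
        struct sockaddr_tipc mcast_addr;
        memset(&mcast_addr, 0, sizeof(mcast_addr));
        mcast_addr.family = AF_TIPC;
        mcast_addr.addrtype = TIPC_ADDR_MCAST;
        mcast_addr.addr.nameseq.type  = DBUS_TIPC_TYPE;
        mcast_addr.addr.nameseq.lower = 0;
        mcast_addr.addr.nameseq.upper = 99;

        const char msg[] = "hello";
        sendto(tx, msg, sizeof(msg), 0,
               (struct sockaddr *)&mcast_addr, sizeof(mcast_addr));

        char buf[64];
        int n = recv(rx, buf, sizeof(buf), 0);
        printf("received %d bytes\n", n);
        return 0;
    }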

6 thoughts on “D-Bus optimizations II”

  1. If your deadline will permit it, have you considered using binder? Binder is a lightweight IPC that was developed with embedded applications in mind and is widely used and well-tested. Binder’s intents mechanism and token-based access method are not that different from what D-Bus employs, as discussed at length in the famous exchange between Marcel Holtmann and Dianne Hackborn on LKML (which appears to be down at the moment) in June 2009. Binder is in linux-next (http://elinux.org/Android_Mainlining_Project) and will be merged into the mainline kernel in the next year or so. Since intents are essentially a pub-sub mechanism, the result is similar to multicast and might be useful for your specific purpose. Since I am working with GENIVI and am quite interested in IPC mechanisms, I’d love to learn more about what you’re doing.

  2. Is this really the problem with D-BUS performance?

    I thought (based on some dim recollections from years ago) it was the constant marshaling & de-marshaling (which makes the daemon CPU-bound instead of IO-bound[1]) and the latency of the context switching related to socket usage, which can be improved a bit by decreasing the kernel socket buffers.

    [1] If D-BUS isn’t IO-bound, changes on the kernel side aren’t going to help much, I think…

    What tools have you used to investigate the performance?
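
    For reference, shrinking the socket buffers mentioned above is just a couple of setsockopt() calls on the socket D-Bus uses; a sketch, assuming an already-created socket fd and an arbitrary 4 KiB size:

        /* Sketch: shrinking the kernel send/receive buffers on a socket,
         * as suggested above.  The 4 KiB value is only an example. */
        #include <stdio.h>
        #include <sys/socket.h>

        int shrink_socket_buffers(int fd)
        {
            int size = 4096;

            if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) < 0 ||
                setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof(size)) < 0) {
                perror("setsockopt");
                return -1;
            }
            /* Note: the kernel doubles the requested value for bookkeeping,
             * so getsockopt() will report roughly 2 * size afterwards. */
            return 0;
        }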

  3. I tend to agree with Eero . . . does D-Bus really have a performance problem or is the overall system architecture to blame? D-Bus is an *application* bus and nothing more. It’s not intended to be a high-performance bus and is ill-suited for high-frequency/high-bandwidth messaging. If you’re trying to use it for this purpose then I suspect you are using the wrong IPC mechanism. Your time (and GENIVI’s) would probably be better spent selecting a different IPC solution (be it multi-cast, zero-mq, or a proprietary one). I have a great deal of experience implementing and using D-Bus in an automotive solution . . . it works fine if you don’t abuse it. If you’re making more than 900 round-trips/second then your architecture is flawed. Don’t blame D-Bus . . . either re-architect your solution or pick an IPC which is more suitable for your intended use case. One IPC cannot be made to solve all problems equally well. That’s been my experience but your mileage may vary.
