Christian was looking at PGO and BOLT recently, so I figured I’d write down my notes from the discussions we had about how we’d go about making things faster on our stack, since I don’t have the time or resources to pursue those plans myself atm.
First off, let’s start with the basics. PGO (Profile-Guided Optimization) and BOLT (Binary Optimization and Layout Tool) work in similar ways: you capture one or more “profiles” of a workload that’s representative of a use case of your code, and then the tools do their magic to make the common hot paths more efficient/cache-friendly/etc. The main difference is that PGO feeds the profile back into the compiler for a recompile, while BOLT rewrites the already-linked binary. Either way you end up with a new binary that is hopefully faster than the old one and functionally identical, so you can just swap it in.
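To make that concrete, here is roughly what the two workflows look like with the LLVM tooling. This is just a sketch on a made-up `demo` binary and `--benchmark` workload flag; the flags shown are the commonly used ones, not necessarily what we’d end up with:

```sh
# PGO: build instrumented, run the representative workload, then rebuild
# with the merged profile fed back into the compiler.
clang -O2 -fprofile-generate=./profraw -o demo demo.c
./demo --benchmark                      # hypothetical workload flag
llvm-profdata merge -output=demo.profdata ./profraw/*.profraw
clang -O2 -fprofile-use=demo.profdata -o demo demo.c

# BOLT: operates on the already-linked binary instead. It needs relocations
# preserved at link time and a sampled profile (ideally with LBR).
clang -O2 -Wl,--emit-relocs -o demo demo.c
perf record -e cycles:u -j any,u -o perf.data -- ./demo --benchmark
perf2bolt -p perf.data -o demo.fdata ./demo
llvm-bolt demo -o demo.bolt -data=demo.fdata \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort
```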
Now, already two issues arise here:
First of all, we don’t really have any benchmarks in our stack, let alone ones that are well-rounded enough to account for the majority of use cases. Additionally, we need better instrumentation to capture stats like frames and frame-times, and to export them both for sysprof and so we can make the benchmark runners more useful.
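As a sketch of what I mean: sysprof-cli can already wrap a workload and record a capture, so a benchmark runner would do something along these lines and then pull the frame-time data out of the capture afterwards. The fishbowl demo here is just a stand-in for a real benchmark:

```sh
# Record a capture while driving a GTK workload; the .syscap file can be
# opened in Sysprof or post-processed by the benchmark runner.
sysprof-cli fishbowl.syscap -- gtk4-demo --run=fishbowl
```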
Once we have the benchmarks, we can use them to create the profiles for the optimizations and to verify that any changes have the desired effect. We will need multiple profiles covering all the different hardware/software configurations.
For example, for GTK we’d ideally want a matrix of profiles for the different render backends (NGL/Vulkan), along with the Mesa drivers they’d use on different hardware (AMD/Intel), and then also different architectures, so additional profiles for the Raspberry Pi 5 and Asahi stacks. We might also want to add a profile captured under qemu+virtio while we are at it.
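A minimal sketch of capturing the renderer axis of that matrix, assuming an instrumented GTK build and again using gtk4-demo as a stand-in workload (GSK_RENDERER and LLVM_PROFILE_FILE are the real environment variables; the file layout is made up):

```sh
# One instrumented run per render backend, each writing its own profile.
for renderer in ngl vulkan; do
    GSK_RENDERER=$renderer \
    LLVM_PROFILE_FILE="profiles/$renderer/gtk-%m.profraw" \
        gtk4-demo --run=fishbowl
done
```

The hardware, driver, and architecture axes can’t be faked with an environment variable; those profiles would come from running the same workload on the actual machines (or under qemu+virtio).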
Maintaining the benchmarks and profiles would be a lot of work, and it would be very tailored to each project, so they would all have to live in their respective upstream repositories.
On the other hand, the optimization itself has to be done during the tree/userland/OS composition, and we’d have to aggregate all the profiles from all the projects to apply them. This is easily done when you are in control of the whole deployment, as we can do for the GNOME Flatpak Runtime. It’s also easy to do if you are targeting an embedded deployment, where most of the time you have custom images you are in full control of and know exactly the workload you will be running.
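The aggregation step itself is mundane: at composition time it’s essentially merging every configuration’s profiles into one blob before the final optimized rebuild. Directory names here are illustrative:

```sh
# Merge the profiles collected on all the hardware/backend combinations.
llvm-profdata merge -output=gtk-all.profdata \
    profiles/amd-vulkan/*.profraw \
    profiles/intel-ngl/*.profraw \
    profiles/rpi5-ngl/*.profraw \
    profiles/asahi-vulkan/*.profraw
```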
If we want distros to also apply these optimizations, and for this to be done at scale, we’d have to make the whole process automatic and part of the usual compilation process, so there would be no room for error during integration. The downside of this would be that we’d have far fewer opportunities for aggregating different use cases/profiles, as projects would either have to own the optimization of the stack beneath them (e.g. GTK being the one relinking Pango) or only relink their own libraries.
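For what it’s worth, Meson already ships a built-in toggle for the two-phase PGO build, which is the shape of “automatic, no room for error” integration I have in mind; the missing piece is a standardized benchmark step in the middle (run-benchmarks.sh here is hypothetical):

```sh
meson setup build -Db_pgo=generate
ninja -C build
./run-benchmarks.sh build   # hypothetical: drive the representative workload
meson configure build -Db_pgo=use
ninja -C build              # rebuild with the profile applied; with clang an
                            # llvm-profdata merge step is also needed first
```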
To conclude, post-link-time optimization would be a great avenue to explore, as it seems to be one of the lower-hanging fruits when it comes to optimizing the whole stack. But it would also be quite the effort and require a decent amount of work committed to it. It would be worth it in the long run, though.