Startup time profiling of gnome-software

Following on from the heap profiling I did on gnome-software to try and speed it up for Endless, the next step was to try profiling the computation done when starting up gnome-software — which bits of code are taking time to run?

tl;dr: There is new tooling in sysprof and GLib from git which makes profiling the performance of high-level tasks simpler. Some fixes have landed in gnome-software as a result.

Approaches which don’t work

The two traditional tools for this – callgrind and print statements – aren’t entirely suitable for gnome-software.

I tried running valgrind --tool=callgrind gnome-software, and then viewing the results in KCachegrind, but it slowed gnome-software down so much that it was unusable, and the cycle of building and testing changes would have been soul-destroyingly slow.

callgrind works by simulating the CPU’s cache and looking at cache reads/writes/hits/misses, and then attributing costs for those back up the call graph. This makes it really good at looking at the cost of a certain function, or the total cost of all the calls to a utility function; but it’s not good at attributing the costs of higher-level dynamic tasks. gnome-software uses a lot of tasks like this (GsPluginJob), where the task to be executed is decided at runtime with some function arguments, rather than at compile time by the function name/call. For example “get all the software categories” or “look up and refine the details of these three GsApp instances”.

That said, it was possible to find and fix a couple of bits of low-hanging optimisation fruit using callgrind.

Print statements are the traditional approach to profiling higher-level dynamic tasks: print one line at the start of a high-level task with the task details and a timestamp, and print another line at the end with another timestamp. The problem comes from the fact that gnome-software runs so many high-level tasks (there are a lot of apps to query, categorise, and display, using tens of plugins) that reading the output is quite hard. And it’s even harder to compare the timings and output between two runs to see if a code change is effective.
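
For illustration, the approach boils down to something like this minimal GLib sketch (run_high_level_task() and the task name are placeholders, not gnome-software code):

    #include <glib.h>

    /* Placeholder for whatever high-level work is being timed. */
    static void
    run_high_level_task (void)
    {
      g_usleep (50 * 1000);
    }

    static void
    profile_task (const char *task_details)
    {
      gint64 begin_us = g_get_monotonic_time ();
      g_message ("start '%s' at %" G_GINT64_FORMAT "us", task_details, begin_us);

      run_high_level_task ();

      gint64 end_us = g_get_monotonic_time ();
      g_message ("end '%s' at %" G_GINT64_FORMAT "us (took %" G_GINT64_FORMAT "us)",
                 task_details, end_us, end_us - begin_us);
    }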

Enter sysprof

Having looked at sysprof briefly for the heap profiling work, and discounted it, I thought it might make sense to come back to it for this speed profiling work. Christian had mentioned at GUADEC in Thessaloniki that the design of sysprof means apps and libraries can send their own profiling events down a socket, and those events will end up in the sysprof capture.

It turns out that’s remarkably easy: link against libsysprof-capture-4.a and call sysprof_capture_writer_add_mark() every time a high-level task ends, passing the task duration and details to it. There’s even an example app in the sysprof repository.
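
For illustration, a minimal sketch of that instrumentation might look like the following. It assumes the libsysprof-capture-4 API as I understand it (sysprof_capture_writer_new_from_env() opens a writer on the file descriptor which sysprof passes in via the environment), and the group/name/message strings are invented placeholders rather than gnome-software’s real instrumentation:

    #define _GNU_SOURCE
    #include <glib.h>
    #include <sched.h>
    #include <unistd.h>
    #include <sysprof-capture.h>

    static SysprofCaptureWriter *writer = NULL;

    /* Call once at startup; writer stays NULL if we're not being profiled. */
    static void
    profiling_init (void)
    {
      writer = sysprof_capture_writer_new_from_env (0);
    }

    /* Call at the end of each high-level task. begin_time_nsec and
     * duration_nsec are monotonic nanoseconds, e.g. derived from
     * g_get_monotonic_time () * 1000. */
    static void
    record_task_mark (gint64      begin_time_nsec,
                      gint64      duration_nsec,
                      const char *task_details)
    {
      if (writer == NULL)
        return;

      sysprof_capture_writer_add_mark (writer,
                                       begin_time_nsec,
                                       sched_getcpu (),
                                       getpid (),
                                       (guint64) duration_nsec,
                                       "gnome-software",  /* group */
                                       "plugin-job",      /* mark name */
                                       task_details);     /* free-form message */
    }

Those marks are what later show up in the ‘Timings’ row of the sysprof UI.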

So I played around with this newly-instrumented version of gnome-software for a bit, but found that there were still empty regions in the profiling trace, where time passed and computation was happening, but nothing useful was logged in the sysprof capture. More instrumentation was needed.

sysprof + GLib

gnome-software does a lot of its computation in threads, bringing the results back into the main thread to be rendered in the UI using idle callbacks.

For example, the task to list the apps in a particular category in gnome-software will run in a thread, and then schedule an idle callback in the main thread with the list of apps. The idle callback will then iterate over those apps and add them to (for example) a GtkFlowBox to be displayed.
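
A rough sketch of that pattern (the helper names are invented; gnome-software’s real code goes through GTask and GsPluginJob rather than raw threads):

    #include <gtk/gtk.h>

    typedef struct {
      GtkFlowBox *flow_box;  /* owned by the UI; assumed to outlive the thread */
      GPtrArray *app_names;  /* (element-type utf8) (owned) */
    } IdleData;

    /* Runs in the main thread via an idle callback: this is the part which
     * blocks redraws until it returns. */
    static gboolean
    add_apps_idle_cb (gpointer user_data)
    {
      IdleData *data = user_data;

      for (guint i = 0; i < data->app_names->len; i++)
        {
          GtkWidget *tile = gtk_label_new (g_ptr_array_index (data->app_names, i));
          gtk_widget_show (tile);
          gtk_flow_box_insert (data->flow_box, tile, -1);
        }

      g_ptr_array_unref (data->app_names);
      g_free (data);

      return G_SOURCE_REMOVE;
    }

    /* Runs in a worker thread: do the slow query, then hand the results back
     * to the main thread. */
    static gpointer
    list_apps_thread (gpointer user_data)
    {
      IdleData *data = g_new0 (IdleData, 1);

      data->flow_box = user_data;
      data->app_names = g_ptr_array_new_with_free_func (g_free);
      for (guint i = 0; i < 200; i++)  /* stand-in for the real category query */
        g_ptr_array_add (data->app_names, g_strdup_printf ("App %u", i));

      g_idle_add (add_apps_idle_cb, data);

      return NULL;
    }

    static void
    list_apps_in_category (GtkFlowBox *flow_box)
    {
      g_thread_unref (g_thread_new ("list-apps", list_apps_thread, flow_box));
    }

Everything in add_apps_idle_cb() runs in the main thread, so its runtime is exactly what blocks redrawing.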

Adding items to a GtkFlowBox takes some time, and if there are a couple of hundred apps to be added in a single idle callback, that can take several hundred milliseconds: long enough to block redrawing of the main UI for the user to notice.

How do you find out which idle callback is taking too long? sysprof again! I added sysprof support to GLib so that GSource.dispatch events are logged (along with a few others), and now the long-running idle callbacks are displayed in the sysprof graphs. Thanks to Christian and Richard for their reviews and contributions to those changes.

This capture file was generated using sysprof-cli --gtk --use-trace-fd -- gnome-software, and the ‘gnome-software’ and ‘GLib’ lines in the ‘Timings’ row need to be made visible using the drop-down menu in that row.

It’s important to call g_task_set_source_tag() or g_task_set_name() on all the GTasks in your code, and to call g_source_set_name() on the GSources (like this), so that the marks in the capture file have helpful names.
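
For instance (my_object_load_async() and the other names here are placeholders, not real gnome-software code):

    #include <gio/gio.h>

    static void
    load_thread_cb (GTask        *task,
                    gpointer      source_object,
                    gpointer      task_data,
                    GCancellable *cancellable)
    {
      g_task_return_boolean (task, TRUE);
    }

    static void
    my_object_load_async (gpointer             source_object,
                          GCancellable        *cancellable,
                          GAsyncReadyCallback  callback,
                          gpointer             user_data)
    {
      g_autoptr(GTask) task = g_task_new (source_object, cancellable, callback, user_data);

      /* Either of these gives the task a human-readable name. */
      g_task_set_source_tag (task, my_object_load_async);
      g_task_set_name (task, "my-object-load");

      g_task_run_in_thread (task, load_thread_cb);
    }

    static gboolean
    update_ui_idle_cb (gpointer user_data)
    {
      return G_SOURCE_REMOVE;
    }

    static void
    schedule_ui_update (void)
    {
      GSource *source = g_idle_source_new ();

      g_source_set_callback (source, update_ui_idle_cb, NULL, NULL);
      g_source_set_name (source, "[my-app] update_ui_idle_cb");
      g_source_attach (source, NULL);
      g_source_unref (source);
    }

Without the names, the corresponding rows in the capture are much harder to attribute to your code.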

In it, you can see that the ‘get-updates’ plugin job on gnome-software’s flatpak plugin takes 1.5 seconds (in a thread), and then a further 175ms to process the results in the main thread.

The selected row above that shows that it takes 110ms to process the results from a call to gs_plugin_loader_job_get_categories_async() in the main thread.

What’s next?

With the right tooling in place, it should be easier for me and others to find and fix performance issues like these, in gnome-software and in other projects.

I’ve submitted a few fixes, but there are more to do, and I need to shift my focus onto other things for now.

Please try out the new sysprof features, and add libsysprof-capture-4.a support to your project (if it would help you debug high-level performance problems). Ask questions on Discourse (and @ me).

To try out the new features, you’ll need the latest versions of sysprof and GLib from git.
