At the Endless OS Foundation, we’ve recently been updating some of our internal policies. One of these is our equipment policy, covering things like what laptops and peripherals are provided to employees. While updating it, we took the opportunity to think about the environmental impact it would have, and how we could reduce that impact compared to standard or template equipment policies.
Why this matters
For many software organisations, the environmental impact of hardware purchasing for employees is probably at most the third-biggest contributor to the organisation’s overall impact, behind carbon emissions from energy usage (in building and providing software to a large number of users), and emissions from transport (both in sending employees to conferences, and in people’s daily commute to and from work). These both likely contribute tens of tonnes of emissions per year for a small/medium sized organisation (as a very rough approximation, since all organisations are different). The lifecycle emissions from a modern laptop are in the region of 300kgCO2e, and one target for per-person emissions is around 2.2tCO2e/year by 2030.
If changes to this policy reduce new equipment purchases by 20%, that’s roughly a 20kgCO2e/year reduction per employee (taking a laptop’s 300kgCO2e lifecycle emissions, amortised over a typical 3 year replacement cycle, as around 100kgCO2e/year).
So, while changes to your organisation’s equipment policy are not going to have a big impact, they will have some impact, and are easy and unilateral changes to make right now. 20kgCO2e is roughly the emissions from a 150km journey in a petrol car.
What did we put in the policy?
We started with a fairly generic policy. From that, we:
Removed time-based equipment replacement schedules (for example, replacing laptops every 3 years) and went with a more qualitative approach of replacing equipment when it’s no longer functional enough for someone to do their job properly on.
Provided recommended laptop models for different roles (currently several different versions of the Dell XPS 13), which we have checked conform to the rest of the policy and have an acceptable environmental impact — Dell are particularly good here because, unlike a lot of laptop manufacturers, they publish a lifecycle analysis for their laptops
But also gave people the option to justify a different laptop model, as long as it meets certain requirements:
All laptops must meet the following standards in order to have low lifetime impacts:
Most recent Energy Star rating (version 8.0, as of May 2021)
All other equipment must meet relevant environmental standards, which should be discussed on a case by case basis
If choosing a device not explicitly listed above, manufacturers who provide Environmental Product Declarations for their products should be preferred
These requirements aim to minimise the laptop’s carbon emissions during use (i.e. its power consumption), and increase the chance that it will be repairable or upgradeable when needed. In particular, having a replaceable battery is important, as the battery is the most likely part of the laptop to wear out.
The policy prioritises laptop upgrades and repairs over replacement: even when a laptop is coming up on the traditional 3-year replacement point, it steers people towards upgrading it (a new hard drive, additional memory, a new battery, etc.) rather than replacing it.
When a laptop is functional but no longer useful, the policy requires that it’s wiped, refurbished (if needed) and passed on to a local digital inclusion charity, school, club or similar.
If a laptop is broken beyond repair, the policy requires that it’s disposed of according to WEEE guidelines (which is the norm in Europe, but potentially not in other countries).
A lot of this just codifies what we were doing as an organisation already — but it’s good to have the policy match the practice.
Your turn
I’d be interested to know whether others have similar equipment policies, or have better or different ideas — or if you make changes to your equipment policy as a result of reading this.
There have been a few instances recently where people have pointed out that GNOME Software marks some apps as not free software when they are. This is a bug in the appdata files provided by those applications, which typically include something like
<project_license>GPL-3.0+ and CC-BY-SA-3.0</project_license>
This is generally an attempt to list the license of the code and of the documentation separately. However, in an SPDX expression, ‘and’ means that both licenses apply to the work as a whole, so the most restrictive combination of their terms is what counts. As a result, the expression as a whole is considered not free software (CC-BY-SA-3.0 is not a free software license as per the FSF or OSI lists).
Instead, those apps should probably just list the ‘main’ license for the project:
<project_license>GPL-3.0+</project_license>
and document the license for their documentation separately. As far as I know, the appdata format doesn’t currently have a way of listing the documentation license in a machine readable way.
If you maintain an app, or want to help out, please check the licensing is correctly listed in your app’s appdata.
(To avoid doubt, I think CC-BY-SA-3.0 is a fine license for documentation; it’s just problematic to include it in the ‘main’ appdata license statement for an app.)
This is a quick write-up of a feature I added last year to libsoup and sysprof which exposes basic information about HTTP/HTTPS requests to sysprof, so they can be visualised in GNOME Builder.
Prerequisites
libsoup compiled with sysprof support (-Dsysprof=enabled), which should be the default if libsysprof-capture is available on your system.
Your application (and ideally its dependencies) uses libsoup for its HTTP requests; this won’t work with other network libraries.
Instructions
Run your application in Builder under sysprof, or under sysprof on the CLI (sysprof-cli -- your-command), then open the resulting capture file in Builder.
Ensure the ‘Timings’ row is visible in the ‘Instruments’ list on the left, and that the libsoup row is enabled beneath that.
Results
You should then be able to see a line in the ‘libsoup’ row for each HTTP/HTTPS request your application made. The line indicates the start time and duration of the request (from when the first byte of the request was sent to when the last byte of the response was received).
The details of the event contain the URI which was requested, whether the transaction was HTTP or HTTPS, the number of bytes sent and received, the Last-Modified header, If-Modified-Since header, If-None-Match header and the ETag header (subject to a pending fix).
What’s that useful for?
Seeing what requests your application is making, across all its libraries and dependencies — often it’s more than you think, and some of them can easily be optimised out. A request which is not noticeable to you on a fast internet connection will be noticeable to someone on a slower connection with higher request latencies.
Quickly checking that all requests are HTTPS rather than HTTP.
Quickly checking that all requests from the application set appropriate caching headers (If-Modified-Since, If-None-Match) and that all responses from the server do too (Last-Modified, ETag) — if an HTTP request can result in a cache hit, that’s potentially a significant bandwidth saving for the client, and an even bigger one for the server (if it’s seeing the same request from multiple clients). There’s a sketch of such a conditional request after this list.
Seeing a high-level overview of what bandwidth your application is using, and which HTTP requests are contributing most to that.
Seeing how it all ties in with other resource usage in your application, courtesy of sysprof.
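As a rough illustration of the caching point above, here’s the kind of conditional request you’d hope to see in the capture. This is a sketch using libsoup 2.x’s synchronous API; the URI handling and ETag caching around it are made up, not taken from a real app:

#include <libsoup/soup.h>

/* Make a conditional GET: if the server still has content matching the
 * cached ETag, it can reply with 304 Not Modified and send no body. */
static void
fetch_with_cache_headers (SoupSession *session,
                          const char  *uri,
                          const char  *cached_etag)
{
  SoupMessage *msg = soup_message_new ("GET", uri);
  guint status;

  if (cached_etag != NULL)
    soup_message_headers_append (msg->request_headers,
                                 "If-None-Match", cached_etag);

  status = soup_session_send_message (session, msg);

  if (status == SOUP_STATUS_NOT_MODIFIED)
    g_message ("Cache hit; reuse the local copy");
  else
    g_message ("Status %u; new ETag: %s", status,
               soup_message_headers_get_one (msg->response_headers, "ETag"));

  g_object_unref (msg);
}

Both the request headers and the server’s response headers then show up in the event details described above.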
Yes that seems useful
There’s plenty of scope for building this out into a more fully-featured way of inspecting HTTP requests using sysprof. Because the data is captured from inside the process using sysprof, rather than from outside using Wireshark, it gives visibility into TLS-encrypted conversations.
tl;dr: Use callgrind to profile CPU-heavy workloads. In some cases, moving heap allocations to the stack helps a lot. GNOME Software startup time has decreased from 25 seconds to 12 seconds (-52%) over the GNOME 40 cycle.
To wrap up the sporadic blog series on the progress made with GNOME Software for GNOME 40, I’d like to look at some further startup time profiling which has happened this cycle.
This profiling has focused on libxmlb, which gnome-software uses extensively to query the appstream data which provides all the information about apps which it shows in the UI. The basic idea behind libxmlb is that it pre-compiles a ‘silo’ of information about an XML file, which is cached until the XML file next changes. The silo format encodes the tree structure of the XML, deduplicating strings and allowing fast traversal without string comparisons or parsing. It is memory mappable, so can be loaded quickly and shared (read-only) between multiple processes. It allows XPath queries to be run against the XML tree, and returns the results.
gnome-software executes a lot of XML queries on startup, as it loads all the information needed to show many apps to the user. It may be possible to eliminate some of these queries – and some earlier work did reduce the number by binding query parameters at runtime to pre-prepared queries – but it seems unlikely that we’ll be able to significantly reduce their number further, so it’s better to speed them up instead.
Profiling work which happens on a CPU
The work done in executing an XPath query in libxmlb is largely on the CPU — there isn’t much I/O to do as the compiled XML file is only around 7MB in size (see ~/.cache/gnome-software/appstream), so this time the most appropriate tool to profile it is callgrind. I previously ruled out using callgrind for profiling the overall startup time of gnome-software because it produces too much data, risks hiding the bigger picture of which parts of application startup were taking the most time, and doesn’t show time spent on I/O. However, when looking at a specific part of startup (XML queries) which is largely CPU-bound, callgrind is ideal.
It takes about 10 minutes for gnome-software to start up and finish loading the main window when running under callgrind, but eventually it’s shown, the process can be interrupted, and the callgrind log loaded in kcachegrind:
Here I’ve selected the main() function and the callee map for it, which shows a 2D map of all the functions called beneath main(), with the area of each function proportional to the cumulative time spent in that function.
The big yellow boxes are all memset(), which is being called on heap-allocated memory to set it to zero before use. That’s low-hanging fruit to optimise.
In particular, it turns out that the XbStack and XbOperands which libxmlb creates for evaluating each XPath query were being allocated on the heap. With a few changes, they can be allocated on the stack instead, and don’t need to be zero-filled when set up, which saves a lot of time — stack allocation is a simple increment of the stack pointer, whereas heap allocation can involve page mapping, locking, and updates to various metadata structures.
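The pattern, in general terms (these names are hypothetical, not libxmlb’s actual API), looks something like this:

#include <stdlib.h>

#define MAX_OPERANDS 32

typedef struct {
  int          operands[MAX_OPERANDS];
  unsigned int len;
} EvalStack;

/* Before: one heap allocation, zero-filled, per query evaluation. */
static EvalStack *
eval_stack_new_heap (void)
{
  return calloc (1, sizeof (EvalStack));
}

/* After: the caller reserves the space on its own stack, and initialisation
 * only has to reset the length rather than zero the whole struct. */
static void
eval_stack_init (EvalStack *stack)
{
  stack->len = 0;
}

static void
evaluate_query (void)
{
  EvalStack stack;  /* stack allocation: effectively a pointer bump */

  eval_stack_init (&stack);
  /* … push and pop operands while evaluating the XPath query … */
}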
The changes are here, and should benefit every user of libxmlb without further action needed on their part. With those changes in place, the callgrind callee map is a lot less dominated by one function:
There’s still plenty left to go at, though. Contributions are welcome, and we can help you through the process if you’re new to it.
What’s this mean for gnome-software in GNOME 40?
Overall, after all the performance work in the GNOME 40 cycle, startup time has decreased from 25 seconds to 12 seconds (-52%) when starting for the first time since the silo changed. This is the situation in which gnome-software normally starts, as it sits as a background process after that, and the silo is likely to change every day or two.
Aside from performance work, there are a number of other improvements to gnome-software in GNOME 40, including a new icon, some improvements to parts of the interface, and a lot of bug fixes. But perhaps they should be explored in a separate blog post.
Many thanks to my fellow gnome-software developers – Milan, Phaedrus and Richard – for their efforts this cycle, and my employer the Endless OS Foundation for prioritising working on this.
Thanks to Krzesimir Nowak, a 17-year-old feature request in GLib has been implemented: it’s now possible to define GError domains which have extended information attached to their GErrors.
You can now, for example, define a GError domain for text parser errors which includes context information about a parsing failure, such as the current line and character position. Or attach the filename of a file which was being read, to the GError informing of a read failure. Define an extended error domain using G_DEFINE_EXTENDED_ERROR(). The extended information is stored in a ‘private’ struct provided by you, similarly to how it’s implemented for GObjects with G_DEFINE_TYPE_WITH_PRIVATE().
There are code examples on how to use the new APIs in the GLib documentation, so I won’t reproduce them here.
An important limitation to note is that existing GError domains which have ever been part of a stable public API cannot be extended retroactively without breaking ABI. That’s because extending a GError domain increases the size of the allocated GError instances for that domain, and users of your API may have stack-allocated GErrors in their existing code.
Please don’t stack-allocate GErrors, as it makes future extensions of the API impossible, and doesn’t buy you notable extra performance, as GErrors should not be used on fast paths. By their very nature, they’re for failure reporting.
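To illustrate what I mean, here’s a quick sketch of the anti-pattern versus normal usage (FailureCallback and the callers are made up for the example):

#include <glib.h>

typedef void (*FailureCallback) (const GError *error, gpointer user_data);

/* Anti-pattern: building a GError on the stack. If the domain is ever
 * extended with G_DEFINE_EXTENDED_ERROR(), code handling this error may
 * expect private data beyond the struct, which doesn't exist here. */
static void
report_failure_badly (FailureCallback callback, gpointer user_data)
{
  GError error = { G_FILE_ERROR, G_FILE_ERROR_FAILED, (gchar *) "it broke" };
  callback (&error, user_data);
}

/* Fine: let GLib heap-allocate the error as normal. */
static void
report_failure_properly (FailureCallback callback, gpointer user_data)
{
  g_autoptr(GError) error = g_error_new_literal (G_FILE_ERROR,
                                                 G_FILE_ERROR_FAILED,
                                                 "it broke");
  callback (error, user_data);
}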
The new APIs are currently unstable, so please try them out and provide feedback now. They will be frozen with the release of GLib 2.67.3, scheduled for 11th February 2021.
GLib 2.65.1 has been released with a new g_file_set_contents_full() API which you should consider using instead of g_file_set_contents() for writing out a file. It’s a drop-in replacement which adds two arguments: one to control the trade-off between safety and performance when writing the file, and one to set the file’s mode (permissions).
What’s wrong with g_file_set_contents()?
g_file_set_contents() has worked fine for many years (and will continue to do so). However, it doesn’t provide much flexibility. When writing a file out on Linux there are various ways to do it, some slower but safer — and some faster, but less safe, in the sense that if your program or the system crashes part-way through writing the file, the file might be left in an indeterminate state. It might be garbled, missing, empty, or contain only the old contents.
g_file_set_contents() chose a fairly safe (but not the fastest) approach to writing out files: write the new contents to a temporary file, fsync() it, and then atomically rename() the temporary file over the top of the old file. This approach means that other processes only ever see the old file contents or the new file contents (but not the partially-written new file contents); and it means that if there’s a crash, either the old file will exist or the new file will exist. However, it doesn’t guarantee that the new file will be safely stored on disk by the time g_file_set_contents() returns. It also has fewer guarantees if the old file didn’t exist (i.e. if the file is being written out for the first time).
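In outline, the safe path does something like the following (heavily simplified; GLib’s real implementation also deals with permissions, symlinks, partial writes and proper error reporting):

#include <glib.h>
#include <glib/gstdio.h>
#include <string.h>
#include <unistd.h>

static gboolean
write_file_safely (const char *filename, const char *contents)
{
  g_autofree char *tmp_name = g_strdup_printf ("%s.XXXXXX", filename);
  int fd = g_mkstemp (tmp_name);
  gboolean ok;

  if (fd < 0)
    return FALSE;

  /* Write the new contents to a temporary file and make sure they've
   * actually reached the disk… */
  ok = (write (fd, contents, strlen (contents)) == (gssize) strlen (contents) &&
        fsync (fd) == 0);
  ok = (close (fd) == 0) && ok;

  /* …then atomically replace the old file with the new one. */
  if (ok)
    ok = (g_rename (tmp_name, filename) == 0);
  else
    g_unlink (tmp_name);

  return ok;
}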
In most situations, this is the right compromise. But not in all of them — so that’s why g_file_set_contents_full() now exists, to let the caller choose their own compromise.
Choose your own tradeoff
The level of safety/speed of g_file_set_contents_full() can be chosen using GFileSetContentsFlags.
Situations where your code might want a looser set of guarantees from the defaults might be when writing out cache files (where it typically doesn’t matter if they’re lost or corrupted), or when writing out large numbers of files where you’re going to call fsync() once after the whole lot (rather than once per file).
Conversely, your code might want a tighter set of guarantees when writing out files which are well-formed-but-incorrect when empty or filled with zeroes (as filling a file with zeroes is one of the failure modes of the existing g_file_set_contents() defaults, if the file is being created), or when writing valuable user data.
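For example (a sketch with made-up file paths), the two ends of the spectrum might look like this:

#include <glib.h>

static gboolean
save_state (const gchar *cache_json, const gchar *document_text, GError **error)
{
  /* A throwaway cache entry: skip the temporary-file/fsync() dance entirely,
   * accepting that a crash could corrupt it (it'll just be regenerated). */
  if (!g_file_set_contents_full ("/tmp/example-app/cache.json",
                                 cache_json, -1,
                                 G_FILE_SET_CONTENTS_NONE, 0600, error))
    return FALSE;

  /* Valuable user data: keep the atomic rename, and also fsync() so the new
   * contents are safely on disk before the call returns. */
  return g_file_set_contents_full ("/tmp/example-app/document.txt",
                                   document_text, -1,
                                   G_FILE_SET_CONTENTS_CONSISTENT |
                                   G_FILE_SET_CONTENTS_DURABLE,
                                   0644, error);
}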
So, maybe now is the time to quickly grep your code for g_file_set_contents() calls, and see whether the default tradeoff is the right one in all the places you call it?
Following on from the heap profiling I did on gnome-software to try and speed it up for Endless, the next step was to try profiling the computation done when starting up gnome-software — which bits of code are taking time to run?
tl;dr: There is new tooling in sysprof and GLib from git which makes profiling the performance of high-level tasks simpler. Some fixes have landed in gnome-software as a result.
Approaches which don’t work
The two traditional tools for this – callgrind, and print statements – aren’t entirely suitable for gnome-software.
I tried running valgrind --tool=callgrind gnome-software, and then viewing the results in KCachegrind, but it slowed gnome-software down so much that it was unusable, and the test/retry cycle of building and testing changes would have been soul-destroyingly slow.
callgrind works by simulating the CPU’s cache and looking at cache reads/writes/hits/misses, and then attributing costs for those back up the call graph. This makes it really good at looking at the cost of a certain function, or the total cost of all the calls to a utility function; but it’s not good at attributing the costs of higher-level dynamic tasks. gnome-software uses a lot of tasks like this (GsPluginJob), where the task to be executed is decided at runtime with some function arguments, rather than at compile time by the function name/call. For example “get all the software categories” or “look up and refine the details of these three GsApp instances”.
That said, it was possible to find and fix a couple of bits of low-hanging optimisation fruit using callgrind.
Print statements are the traditional approach to profiling higher-level dynamic tasks: print one line at the start of a high-level task with the task details and a timestamp, and print another line at the end with another timestamp. The problem comes from the fact that gnome-software runs so many high-level tasks (there are a lot of apps to query, categorise, and display, using tens of plugins) that reading the output is quite hard. And it’s even harder to compare the timings and output between two runs to see if a code change is effective.
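For reference, that kind of instrumentation looks something like this (do_get_categories() is a stand-in for a real GsPluginJob):

#include <glib.h>

static void
do_get_categories (void)
{
  /* …the actual high-level task… */
}

static void
run_task_with_prints (void)
{
  gint64 start = g_get_monotonic_time ();

  g_debug ("get-categories: start");
  do_get_categories ();
  g_debug ("get-categories: done in %" G_GINT64_FORMAT " µs",
           g_get_monotonic_time () - start);
}

Multiply that by tens of plugins and hundreds of apps, and the log quickly becomes unmanageable.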
Enter sysprof
Having looked at sysprof briefly for the heap profiling work, and discounted it, I thought it might make sense to come back to it for this speed profiling work. Christian had mentioned at GUADEC in Thessaloniki that the design of sysprof means apps and libraries can send their own profiling events down a socket, and those events will end up in the sysprof capture.
It turns out that’s remarkably easy: link against libsysprof-capture-4.a and call sysprof_capture_writer_add_mark() every time a high-level task ends, passing the task duration and details to it. There’s even an example app in the sysprof repository.
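In outline, the instrumentation looks something like this. It’s a simplified sketch rather than gnome-software’s actual code, and the writer-creation call and clock macro are quoted from memory, so check the sysprof-capture headers for the exact API:

#include <sysprof-capture.h>
#include <stdint.h>
#include <unistd.h>

/* Created once at startup, e.g. with sysprof_capture_writer_new_from_env(),
 * which picks up the FD that sysprof passes in via SYSPROF_TRACE_FD. */
static SysprofCaptureWriter *writer = NULL;

static void
run_high_level_task (void)
{
  /* …the actual task, e.g. a GsPluginJob… */
}

static void
run_and_record_task (void)
{
  int64_t begin = SYSPROF_CAPTURE_CURRENT_TIME;

  run_high_level_task ();

  if (writer != NULL)
    sysprof_capture_writer_add_mark (writer,
                                     begin,
                                     -1 /* CPU unknown */,
                                     getpid (),
                                     SYSPROF_CAPTURE_CURRENT_TIME - begin,
                                     "gnome-software",  /* group */
                                     "get-categories",  /* name */
                                     "details of the task");
}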
So I played around with this newly-instrumented version of gnome-software for a bit, but found that there were still empty regions in the profiling trace, where time passed and computation was happening, but nothing useful was logged in the sysprof capture. More instrumentation was needed.
sysprof + GLib
gnome-software does a lot of its computation in threads, bringing the results back into the main thread to be rendered in the UI using idle callbacks.
For example, the task to list the apps in a particular category in gnome-software will run in a thread, and then schedule an idle callback in the main thread with the list of apps. The idle callback will then iterate over those apps and add them to (for example) a GtkFlowBox to be displayed.
Adding items to a GtkFlowBox takes some time, and if there are a couple of hundred apps to be added in a single idle callback, that can take several hundred milliseconds — a long enough pause in redrawing the main UI for the user to notice.
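As an illustration of the pattern (not gnome-software’s actual code):

#include <gtk/gtk.h>

static GtkWidget *flow_box;  /* the category's GtkFlowBox, assumed to exist */

/* Runs in the main thread once a worker thread has produced its results.
 * With a couple of hundred tiles, this single dispatch can take hundreds of
 * milliseconds, and nothing gets redrawn until it returns. */
static gboolean
add_apps_idle_cb (gpointer user_data)
{
  GPtrArray *tiles = user_data;  /* app tiles built from the thread's results */

  for (guint i = 0; i < tiles->len; i++)
    gtk_flow_box_insert (GTK_FLOW_BOX (flow_box),
                         g_ptr_array_index (tiles, i),
                         -1 /* append */);

  return G_SOURCE_REMOVE;
}

static void
schedule_add_apps (GPtrArray *tiles)
{
  guint id = g_idle_add (add_apps_idle_cb, tiles);

  /* Naming the source makes it much easier to attribute in profiling output. */
  g_source_set_name_by_id (id, "[example] add-apps-to-category");
}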
How do you find out which idle callback is taking too long? sysprof again! I added sysprof support to GLib so that GSource.dispatch events are logged (along with a few others), and now the long-running idle callbacks are displayed in the sysprof graphs. Thanks to Christian and Richard for their reviews and contributions to those changes.
sysprof in GNOME Builder, showing profiling output from GLib and gnome-software
This capture file was generated using sysprof-cli --gtk --use-trace-fd -- gnome-software; the ‘gnome-software’ and ‘GLib’ lines then need to be made visible using the drop-down menu in the ‘Timings’ row.
In it, you can see the ‘get-updates’ plugin job on gnome-software’s flatpak plugin is taking 1.5 seconds (in a thread), and then 175ms to process the results in the main thread.
The selected row above that is showing it’s taking 110ms to process the results from a call to gs_plugin_loader_job_get_categories_async() in the main thread.
What’s next?
With the right tooling in place, it should be easier for me and others to find and fix performance issues like these, in gnome-software and in other projects.
I’ve submitted a few fixes, but there are more to do, and I need to shift my focus onto other things for now.
Please try out the new sysprof features, and add libsysprof-capture-4.a support to your project (if it would help you debug high-level performance problems). Ask questions on Discourse (and @ me).
To try out the new features, you’ll need the latest versions of sysprof and GLib from git.
Every time a CI pipeline runs on GitLab, it downloads the git repository for your project. Often, pipeline jobs are set up to make further downloads (of dependencies or subprojects), which also happen on every job. You can cut down the amount of data transferred on each run by:
Limiting the clone depth of your repository in the GitLab settings: go to Settings → CI/CD, and changing it to use a ‘git shallow clone’ of depth 1.
Adding --branch, --no-tags and --depth 1 arguments to every git clone call you make during a CI job. Here’s an example for GLib.
Adding depth=1 to your Meson .wrap files to achieve the same thing when (for example) meson subprojects download is called. See the same example merge request.
For GLib, the difference between git clone https://gitlab.gnome.org/GNOME/glib.git and git clone --depth 1 https://gitlab.gnome.org/GNOME/glib.git is 66MB (reducing from 74MB to 8MB), or a factor of 10. It won’t be as much for younger or smaller projects, but still worthwhile.
Marc-André Lureau has landed GUri support in GLib, and it’ll be available in GLib 2.65.1 (due out in the next few days).
GUri is a new API for parsing and building URIs, roughly equivalent to SoupURI already provided by libsoup — but since URIs are so pervasive, and used even if you’re not actually doing HTTP conversations, it makes sense to have a structured representation for them in GLib.
g_autoptr(GError) local_error = NULL;
const gchar *uri_str;
g_autoptr(GUri) uri = NULL;
g_autoptr(GHashTable) query_params = NULL;

uri_str = "https://discourse.gnome.org/search?q=search%20terms#ember372";

uri = g_uri_parse (uri_str,
                   G_URI_FLAGS_PARSE_STRICT |
                   G_URI_FLAGS_ENCODED_QUERY,
                   &local_error);
if (uri == NULL)
  {
    /* Handle the error */
    g_error ("Invalid URI: %s", uri_str);
    return;
  }

g_assert_cmpstr (g_uri_get_scheme (uri), ==, "https");
g_assert_cmpstr (g_uri_get_host (uri), ==, "discourse.gnome.org");
g_assert_cmpstr (g_uri_get_path (uri), ==, "/search");
g_assert_cmpstr (g_uri_get_query (uri), ==, "q=search%20terms");
g_assert_cmpstr (g_uri_get_fragment (uri), ==, "ember372");

/* Parse the params further. Using g_uri_parse_params() requires that we pass
 * G_URI_FLAGS_ENCODED_QUERY to g_uri_parse() above, otherwise the %-encoded
 * values could be decoded to create more separators */
query_params = g_uri_parse_params (g_uri_get_query (uri), -1,
                                   "&",
                                   G_URI_PARAMS_NONE,
                                   &local_error);
if (query_params == NULL)
  {
    /* Handle the error */
    g_error ("Invalid query: %s", g_uri_get_query (uri));
    return;
  }

g_assert_cmpstr (g_hash_table_lookup (query_params, "q"), ==, "search terms");
Building a URI is a matter of calling g_uri_build() or g_uri_join(), which should be self-explanatory.
Please try it out! The API is unstable until GLib makes its 2.66.0 stable release (freezing on 2020-08-08), so now is the time to comment on things which don’t make sense or are hard to use.
The last week has been a fun process of starting to profile gnome-software with the aim of lowering its resource consumption and improving its startup speed. gnome-software is an important part of the desktop, so having it work speedily, especially on resource constrained computers, is important. This work is important for Endless OS, and is happening upstream.
To start with, I’ve looked at gnome-software’s use of heap memory, particularly during startup. While allocating lots of memory on the heap isn’t always a bad thing (caches are a good example of heap allocations being used to speed up a program overall), it’s often a sign of unnecessary work being done. Large heap allocations do take a few tens of milliseconds to be mapped through the allocator too. To do this profiling, I’ve been using valgrind’s massif tool, and massif-visualizer to explore the heap allocations. I could also have used heaptrack, or gobject-list, but they’re tools to explore another time.
Profile your app
Before diving into the process of optimising, here’s a summary: this work dropped gnome-software’s pixbuf heap usage by 24MB, and its non-pixbuf heap usage after initialisation (i.e. at the point when the main window is visible and ready to use) by 12%, from 15.7MB to 13.7MB (measured with my set of flatpak repositories, on Fedora 32). I’ve been doing this work upstream, and it’ll trickle down to the downstream copy of gnome-software in Endless OS.
There is more low-hanging fruit to explore, and plenty of opportunities to dive in and trim more memory usage from gnome-software, or other apps. If you’re interested, please dive in! Get in touch if you have questions, or post them on GNOME’s Discourse instance and tag me. I’ll be happy to help!
How to profile heap usage
Profiling heap usage using massif is an iterative process: run your program under massif, do some actions in the program, quit, then open the resulting massif.out.pid file in massif-visualizer and see where allocations are coming from. Pick an allocation which looks large or unnecessary, find it in the code, optimise the code (if possible), and then repeat the process.
When running it, I wait for gnome-software to finish loading its main window, then I exit; so all this profiling work is for allocations during startup.
I run massif using a small wrapper script, kept in ~/.local/bin/massif, which invokes valgrind --tool=massif with a long list of --alloc-fn arguments. The --alloc-fn arguments hide internal GLib allocation functions so that the output is a little easier to interpret directly; there currently isn’t a way to store them in a config file or suppression file.
Some typical output from massif-visualizer before any code improvements:
massif-visualizer output from before code improvements to gnome-software, libxmlb or json-glib. The majority of the allocations are for pixbufs.
The window shows heap allocations against time in instructions executed. The breakdown of where each allocation came from is known in detail at key snapshots (which are expandable in the side pane), and the total heap usage is known in summary for the other snapshots, which allows the graph to be drawn. Allocations coming from different functions are coloured differently in the graph.
There are two sets of allocations to focus on: the red plateau between time 1e+10 and 1.6e+10, and the orange step from time 1e+10 onwards.
Selecting the red plateau shows the backtrace which led to its allocation in the side pane, and (despite some missing debug symbols, leading to the ‘???’ entries), it seems to have come from within libjpeg, as part of loading a JPEG pixbuf. gnome-software has various large JPEG images which are displayed in the featured app banners. It seems that libjpeg makes some big temporary allocations when loading a JPEG.
The orange step from 1e+10 onwards is another target. Looking at the backtraces, it seems it’s a series of similar allocations for pixbuf pixel storage for the featured app banners and for app icons. Some quick calculations show that each 1024×400 pixel banner will take around 6.5MB of memory to store its uncompressed pixels (at 16B per pixel).
From the graph and the backtraces, it seems that almost 100MB is used for pixbuf data for featured app banners. At 6.5MB per banner, that’s 15 banners, which seems reasonable. But actually gnome-software limits itself to 5 banners, so something’s amiss.
Style providers aren’t cheap
After adding some debug prints in GTK where it loads the pixbufs for CSS background properties, it became evident that the same few images were being loaded multiple times. CSS is used to style each featured app tile, including setting the background, since that allows a lot of artistic freedom quite easily. However, the CSS was being refreshed and set a few times for each tile, with a new GtkCssProvider each time. The old provider was staying in place, but with its properties overridden. This included the previously-loaded background image, which remained loaded but unused (essentially, leaked!). With that chased down, it was possible to fix the problem.
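In GTK 3 terms, the difference is roughly this (an illustration of the pattern, not the actual gnome-software change):

#include <gtk/gtk.h>

/* Leaky pattern: every refresh attaches *another* provider to the tile. The
 * old providers stay attached with their properties overridden, keeping
 * their already-loaded background images alive. */
static void
refresh_tile_css_leaky (GtkWidget *tile, const gchar *css)
{
  GtkCssProvider *provider = gtk_css_provider_new ();

  gtk_css_provider_load_from_data (provider, css, -1, NULL);
  gtk_style_context_add_provider (gtk_widget_get_style_context (tile),
                                  GTK_STYLE_PROVIDER (provider),
                                  GTK_STYLE_PROVIDER_PRIORITY_APPLICATION);
  g_object_unref (provider);
}

/* Better: attach one provider per tile at construction time, and just load
 * the new CSS into it on each refresh. */
static void
refresh_tile_css (GtkCssProvider *tile_provider, const gchar *css)
{
  gtk_css_provider_load_from_data (tile_provider, css, -1, NULL);
}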
Back to profiling
With one issue investigated and fixed, the next step is to do another profiling run, find another target for reducing heap allocations, and repeat.
While we might have fixed one pixbuf bug in gnome-software, it does still use a lot of memory for pixbufs, since it displays a lot of high-resolution app icons. Those pixbuf allocations occupy a lot of space in the massif-visualizer view, and take up a large percentage of the ‘threshold’ of heap allocations which massif includes in its traces.
massif provides the --ignore-fn argument to allow certain allocations to be ignored, so that you can more easily profile others. So I did further profiling runs with a series of --ignore-fn arguments to ignore pixbuf allocations.
massif-visualizer output from before code improvements to gnome-software, libxmlb or json-glib, with pixbuf allocations ignored.
With the --ignore-fn arguments, and increasing the ‘Stacked diagrams’ level in the toolbar to show more individual areas on the graph, it’s now possible to see more detail on the largest non-pixbuf allocations, and hence easier to choose where to focus next.
massif-visualizer output from before code improvements to gnome-software, libxmlb or json-glib, with pixbuf allocations ignored, and more stacked diagrams shown.
From this screenshot, perhaps the next place to focus on would be GHashTable creation and insertions, since that totals around 1MB of the heap usage (once pixbufs are ignored).
Summary
I have iterated through the gnome-software massif profiles a few times, and have submitted various other fixes to gnome-software and libxmlb which are in the process of being reviewed and merged, but I won’t walk through each of them. There are still improvements to be made in future: gnome-software is quite complex!
In total, the changes reduced gnome-software’s heap usage at startup by 26MB, though the actual numbers will vary on other systems depending on how often feature tiles get refreshed, and how many apps and repositories you have configured.
These changes have not made a significant improvement to the startup time of gnome-software, which is more significantly influenced by network activity and file parsing (and the subject of some future work).
Hopefully this post gives a workable introduction to how to use massif on your own software. Please speak up if you have any questions. If you do profiling work on your software, please blog about it — it would be interesting to see what improvements are possible.