Mini-GUADEC 2022 Berlin: retrospective

I’m really pleased with how the mini-GUADEC in Berlin turned out. We had a productive conference, with various collaborations speeding up progress on Files, display colour handling, Shell, adaptive apps, and a host of other things. We watched and gave talks, and that seemed to work well. The conference ran from 15:00 to 22:00 Berlin time, and breaks in the schedule matched when people got hungry in Berlin, so I’d say the timings worked nicely. It helped to be in a city where things are open late.

c-base provided us with a cool inside space, a nice outdoor seating area next to the river, reliable internet, quality A/V equipment and support for using it, and a big beamer for watching talks. They also had a bar open later in the day, and there were several food options nearby.

At least from our end, GUADEC 2022 managed to be an effective hybrid conference. Hopefully people in Guadalajara had a similarly good experience?

Tobias and I spent a productive half a day working through a load of UI papercuts in GNOME Software, closing a number of open issues, including some where we’d failed to make progress for months. The benefits of in-person discussion!

Sadly despite organising the mini-GUADEC, Sonny couldn’t join us due to catching COVID. So far it looks like others avoided getting ill.

Travel

Allan wrote up how he got to Berlin, for general reference and posterity, so I should do the same.

I took the train from north-west England to London one evening and stayed the night with friends in London. This would normally have worked fine, but that was the second-hottest day of the heatwave, and the UK’s rails aren’t designed for air temperatures above 30°C. So the train was 2.5 hours delayed. Thankfully I had time in the plan to accommodate this.

The following morning, I took the 09:01 Eurostar to Brussels, and then an ICE at 14:25 to Berlin (via Köln). This worked well — rails on the continent are designed for higher temperatures than those in the UK.

The journey was the same in reverse, leaving Berlin at 08:34 in time for an 18:52 Eurostar. It should have been possible to then get the last train from London to the north-west of England on the same day, but in the end I changed plans and visited friends near London for the weekend.

I took 2 litres of water with me each way, and grabbed some food beforehand and at Köln, rather than trying to get food on the train. This worked well.

Within Berlin, I used a single €9 Monatskarte (monthly ticket) for all travel. This is an amazing policy by the German government, and subjectively it seemed like it was being widely used. It would be interesting to see how it has affected car usage vs public transport usage over several months.

Climate

Overall, I estimate the return train trip to Berlin emitted 52kgCO2e, compared to 2610kgCO2e from flying Manchester to Guadalajara (via Houston). That’s an impact 50× lower. 52kgCO2e is about the same emissions as 2 weeks of a vegetarian diet; 2610kgCO2e is about the same as an entire year of eating a meat-heavy diet.

(Train emissions calculated one-way as 14.8kgCO2e to London, 4.3kgCO2e to Brussels, 6.5kgCO2e to Berlin.)
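Putting those figures together:

\[
2 \times (14.8 + 4.3 + 6.5)\,\mathrm{kgCO_2e} \approx 52\,\mathrm{kgCO_2e},
\qquad
\frac{2610\,\mathrm{kgCO_2e}}{52\,\mathrm{kgCO_2e}} \approx 50.
\]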

Tobias gave an impactful talk on climate action, and one of his key points was that significant change can now only happen as a result of government policy changes. Individual consumer choices can’t easily bring about the systemic change needed to prevent new oil and coal extraction, trigger modal shift in transport use, or rethink land allocation to provide sufficient food while allowing rewilding.

That’s very true. One of the exceptions, though, is flying: the individual choices made by the ~20 people at mini-GUADEC avoided the emission of up to 50 tonnes of CO2e from flights. That’s because each flight has a significant emissions cost, and flights are largely avoidable. (Doing emissions calculations for counterfactuals is a slippery business, but hopefully the 50 tonne figure is illustrative even if it can’t be precise.)

So it’s pretty excellent that the GNOME community supports satellite conferences, and I strongly hope this is something which we can continue to do for our big conferences in future.

Tourism

After the conference, I had a few days in Berlin. On the recommendation of Zeeshan, I spent a day in the Berlin technical museum, and another day exploring several of the palaces at Potsdam.

It’s easy to spend an entire day at the technical museum. One of the train sheds was closed while I was there, which is a shame, but at least that freed up a few hours to spend looking at the printing and jewellery-making exhibits.

One of the nice things about the technical museum is that their displays of old machinery are largely functional: they regularly run demonstrations of entire paper making processes or linotype printing using the original machinery. In most other technical museums I’ve been to, the functioning equipment is limited to a steam engine or two and everything else is a static display.

The palaces in Potsdam were impressive, and look like a maintenance nightmare. In particular, the Grotto Hall in the Neues Palais was one of the most fantastical rooms I’ve ever seen. It’s quite a ridiculous display of wealth from the 18th century. The whole of Sanssouci Park made another nice day out, though taking a picnic would have been a good idea.

Thanks!

Thanks again to everyone who organised GUADEC in Guadalajara, Sonny and Tobias for organising the mini-GUADEC, the people at c-base for hosting us and providing A/V support, and the GNOME Foundation for sponsoring several of us to go to mini-GUADEC.


Looking at project resource use and CI pipelines in GitLab

While at GUADEC I finished a small script which uses the GitLab API to estimate the resource use of a project on GitLab. It looks at the CI pipeline job durations and artifact storage for the project and its forks over a given period, and totals everything up.

You might want to run it on your project!

It gives output something like the following:

Between 2022-06-23 00:00:00+00:00 and 2022-07-23 00:00:00+00:00, GNOME/glib and its 20 forks used:

  • 4592 CI jobs, totalling 17125 minutes (duration minimum 0.0, median 2.3, maximum 65.0)
  • Total energy use: 32.54kWh
  • Total artifact storage: 4426 MB (minimum 0.0, median 0.2, maximum 20.9)

This gives a rough look at the CI resources used by a project, which could help with spotting low-hanging fruit for speeding things up or reducing resource waste.

What can I do with this information?

If total pipeline durations are long, either reduce the number of pipeline jobs or speed them up. Speeding them up almost always has no downsides. Reducing the number of jobs is a tradeoff between convenience of development and resource usage. Two ways to reduce the number of jobs: make some jobs manual-only, if they are very unlikely to find problems; or run them on a schedule rather than on every commit, if it’s acceptable for them to catch problems up to a week after they’re introduced.

If total artifact storage use is high, store fewer artifacts, or expire them after a week (or so). They are likely not so useful after that point anyway.

If artifacts are being used to cache build dependencies, then consider moving those dependencies into a pre-built container image instead. It may be cached better between CI runners.

This script is rubbish, how do I improve it?

Merge requests welcome on https://gitlab.gnome.org/pwithnall/gitlab-stats, or perhaps you’d like to integrate it into cauldron.io so that the data could be visualised over time? The same query code should work for all GitLab instances, not just GNOME’s.

How does it work?

It queries the GitLab API in a few ways, and then applies a very simple model to the results.

It can take a while to run when querying for large projects or for periods of over a couple of weeks, as it needs to make a REST request for each CI job individually.
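It’s not the original script, but a minimal sketch of the kind of queries involved might look like this, using the python-gitlab library. The per-runner power draw constant is purely illustrative (my assumption, not the script’s actual model):

#!/usr/bin/env python3
import gitlab

ASSUMED_RUNNER_POWER_W = 100.0  # hypothetical average power draw per CI job

gl = gitlab.Gitlab('https://gitlab.gnome.org')
project = gl.projects.get('GNOME/glib')

# Include forks, as their pipelines also consume shared runner resources.
projects = [project] + [gl.projects.get(f.id) for f in project.forks.list(all=True)]

total_duration_s = 0.0
total_artifacts_bytes = 0

for proj in projects:
    for pipeline in proj.pipelines.list(all=True, updated_after='2022-06-23T00:00:00Z'):
        # At least one further REST request per pipeline (the real script
        # queries each job individually), which is why it can be slow on
        # large projects or long periods.
        for job in pipeline.jobs.list(all=True):
            total_duration_s += job.duration or 0.0
            for artifact in job.attributes.get('artifacts') or []:
                total_artifacts_bytes += artifact.get('size') or 0

# A very simple energy model: job wall-clock time × assumed runner power.
energy_kwh = (total_duration_s / 3600.0) * ASSUMED_RUNNER_POWER_W / 1000.0
print(f'{total_duration_s / 60.0:.0f} CI minutes, '
      f'{energy_kwh:.2f} kWh (modelled), '
      f'{total_artifacts_bytes / 1e6:.0f} MB of artifacts')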

Mini-GUADEC 2022 in Berlin

GUADEC 2022 has been happening in person for the first time in two years, in Guadalajara. Twenty of us in Europe met up in Berlin for a mini-GUADEC, to attend the main conference remotely. There have been several talks given from here using the nice A/V setup in c-base, who are hosting us.

I gave my talk this afternoon, on the threading rework which is ongoing in gnome-software. The slides are here, the notes are here (source is here), and the recording should be available soon on the GUADEC YouTube channel.

As part of the question and answer session afterwards, it was suggested that it might be helpful to write a blog post about strategies for making async code in C more readable. I’ll try and write something about that soon.

Thanks to the GUADEC organising team for hosting the conference and integrating remote participation so well, to Sonny and Tobias for organising the Berlin mini-GUADEC, and to the c-base technical people for setting things up nicely here.

How your organisation’s travel policy can impact the environment

Following on from updating our equipment policy, we’ve recently also updated our travel policy at the Endless OS Foundation. A major part of this update was to introduce consideration of carbon emissions into the decision making for when and how to travel. I’d like to share what we came up with, as it should be broadly applicable to many other technology organisations, and I’m quite excited that people across the foundation worked to make these changes happen.

Why is this important?

For a technology company or organisation, travel is likely to be the first or second largest cause of emissions from the organisation. The obvious example in free software circles is the emissions from taking a flight to go to a conference, but actually in many cases the annual emissions from commuting to an office by car are comparable. Both can be reduced through an organisation’s policies.

In Endless’ case, the company is almost entirely remote and so commuting is not a significant cause of emissions. Pre-pandemic, air travel caused a bit under a third of the organisation’s emissions. So if there are things we can do to reduce our organisation’s air travel, that would make a significant difference to our overall emissions.

On an individual level, one return transatlantic flight (1.6tCO2e, i.e. 1.6 tonnes of carbon dioxide equivalent, the unit of global warming potential) is more than half of someone’s annual target footprint, which is 2.8tCO2e for 2030. So not taking a flight is one of the most impactful single actions you can take.

Similarly, commuting 10 miles a day by petrol car, for 227 working days per year, causes annual emissions of about 0.55tCO2e, which is also a significant proportion of a personal footprint when the aim is to limit global warming to 1.5°C. An organisation’s policies and incentives can impact people’s commuting decisions.
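For reference, that commuting figure works out as follows, assuming a fairly typical petrol-car emission factor of about 0.15kgCO2e/km (my assumption, not a measured value):

\[
10\,\mathrm{miles/day} \times 227\,\mathrm{days} \approx 3650\,\mathrm{km},
\qquad
3650\,\mathrm{km} \times 0.15\,\mathrm{kgCO_2e/km} \approx 0.55\,\mathrm{tCO_2e}.
\]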

Once the emissions from a journey have been made, they can’t be un-made anywhere near as easily or quickly. Reducing carbon emissions now is more impactful than reducing them later.

How did we change the policy?

Previously, Endless’ travel policy was almost entirely focused around minimising financial cost by only allowing employees to choose the cheapest option for a particular travel plan. It had detailed sections on how to minimise cost for flights and private car use, and didn’t really consider other modes of transport.

In the updated policy, financial cost is still a big consideration, but it’s balanced against environmental cost. I’ve included some excerpts from the policy at the bottom of this post, which could be used as the basis for updating your policy.

Due to COVID, not much travel has happened since putting the policy in place, so I can’t share any comparisons of cost and environmental impact before and after applying the policy. The intention is that reducing the number of journeys made will balance slightly increased costs for taking lower-carbon transport modes on other journeys.

The main changes we made to it are:

  • Organise the policy so that it’s written in decision making order: sections cover necessity of travel, then travel planning and approval, then accommodation, then expenses.
  • Critically, the first step in the decision making process is “do you need to travel and what are the alternatives?”. If it’s decided that travel is needed, the next step is to look at how that trip could be combined with other useful activities (meetings or holiday) to amortise the impact of the travel.
  • We give an explicit priority order of modes of travel to choose:
    1. Rail (most preferred)
    2. Shared ground transport (coach/bus, shared taxi)
    3. Private ground transport (taxi, car rental, use of own vehicle)
    4. Air (least preferable)
  • And, following that, a series of rules for how to choose the mode of transport, which gives some guidance about how to balance environmental and financial cost (and other factors):

You should explore travel options in that order, only moving to the next option if any of the following conditions are true:

  • No such option exists for the journey in question
    • e.g. there is no rail/ground link between London and San Francisco
  • This mode of travel, or the duration of time spent traveling via such means, is regarded as unsafe or excessively uncomfortable at that location
    • For example, buses/coaches are considered to be uncomfortable or unsafe in certain countries/regions.
  • The journey is over 6 hours, and the following option reduces the journey time by 2× (or more)
    • We have a duty to protect company time, so you may (e.g.) opt for flying in cases where the travel time is significantly reduced.
    • Even if there is the opportunity for significant time savings, you are encouraged to consider the possibility of working while on the train, even if it works out to be a longer journey.
  • The cost is considered unreasonably/unexpectedly high, but the following option brings expenses within usual norms
    • The regular pricing of the mode of transport can be considered against the distance traveled. If disproportionately high, move onto other options.

In summary, we prefer rail and ground transportation to favor low emissions, even if they are not the cheapest options. However, we also consider efficient use of company time, comfort, safety, and protecting ourselves from unreasonably high expenditure. You should explore all these options and considerations and discuss with your manager to make the final decision.

Your turn

I’d be interested to know whether others have similar travel policies, or have better or different ideas — or if you make changes to your travel policy as a result of reading this.

Policy excerpt

State persistence for apps and sessions: Endless Orange Week

The second part of my project in Endless Orange Week was to look at state persistence for apps and sessions. At its core, this means providing

  • some way for apps to save and restore their state (such as which part of the UI you’re looking at, what’s selected, and unsaved content);
  • code in the session manager to save and restore the state of all applications, and a list of which applications are running, when shutting down/restarting/logging out/logging in/starting up.

Those two bullet points hide a lot of complexity, and it’s not surprising that I didn’t get particularly far in this project! It requires coordinated changes in a lot of components: GLib, GTK, gnome-session and applications themselves.

A lot of these changes have been prototyped or worked on before, by various people, but nothing has yet come together. In fact, gnome-session used to support restoring apps to a certain degree: before it was ported away from XSMP, it could save the set of running apps when closing a session, and re-start those apps when the session was started again. It did not support restoring the state of each app, though, just the fact that it was running.

Breaking down the problem

Updating and building on the proposals and branches from others so far, we end up with:

  • Changes to GLib to add a ‘restart data’ property to GApplication, which allows the application to expose its current state to the session manager (to be saved), and which is initialised with the restored state from the session manager on startup. These build heavily on changes proposed by Bastien Nocera, but my tweaks to them are still pretty exploratory and not ready for review.
  • Code in GTK to support serialising the widget tree. This implements saving the state of the UI, and was prototyped by Matthias Clasen. My additions (also not yet ready for review) tie it in to the gnome-session API. Further work (but not very much of it) would have to be done to tie Matthias’ proposals into the final shape of the GLib API.
  • Preparatory cleanups of old code in gnome-session (this one is ready to review and hopefully merge!).
  • Work to re-support session restore in gnome-session. This is mostly ready, but needs tidying up and testing (which is likely to be a significant amount of work). It ties in with systemd transient scope work which Benjamin Berg and Iain Lane have been working on.

The final two pieces of the puzzle above took most of my week, and included a lot of time spent learning the architecture of gnome-session, and working out a bit of a protocol for apps to register for session restore and, in particular, for gnome-session to be able to re-launch apps the right way when restoring a session. That’s something which deserves a blog post of its own at some point in the future, once I’m sure I’ve got everything straight in my head.

In summary, there’s not been much progress in terms of creating merge requests. But the week was, I think, well spent: I’ve learned a lot about the shape of the problem, and feel like I have a better idea of how to split it up into chunks which it might be possible to land over time to get the feature working. Many thanks to Endless for giving me the opportunity to do so.

Certainly, I think this project is too big to easily do in a single GNOME release. There’s too much coordination required between different projects with different cadences and development resources.

The next step will be to land the preparatory gnome-session cleanups, and to discuss and land the GLib API so that people can start building with that.

Runtime control of debug output: Endless Orange Week

Recently at Endless we had a week of focused working on projects which are not our day-to-day work. It was called ‘Endless Orange Week’, and everyone was encouraged to explore a topic of their choosing.

I chose to look at two projects, both of which included a D-Bus/API component. My thinking was that review of the new interfaces on each project might take a while, so it would make sense to have two projects running in parallel so I could switch between them when blocked.

I’m going to blog about the two projects separately, to avoid one mega-long post.

The first project was to add a D-Bus debug interface for applications. This would allow debug output from an application to be turned on and off at runtime, rather than just being set with a command line argument or environment variable when the application is first started.

This would allow users and developers to get debug output from long-running applications without having to restart them, as quite often restarting a process will destroy the state you were hoping to debug.

What I came up with is GDebugController, which is awaiting review in GLib now. It’s an interface, currently implemented by GDebugControllerDBus. When instantiated, GDebugControllerDBus creates a new D-Bus object and interface for controlling the debug output from the application. It hooks into the standard g_debug() message functions, and can be hooked into a custom log writer function if your application uses one of those.

It essentially exists to expose one D-Bus property and allow that to be hooked in to your log writer. It has to be a bit more complex than that, though, as it needs to be able to handle authorisation: checking that the D-Bus peer who’s requesting to enable debug output on your application is actually allowed to do so. For services in particular, this is important, as allowing any peer to enable debug output could count as a privilege escalation. They might be able to slow your process down due to the volume of debug output it produces; fill the disk up; or look at the log output and see private information. GDebugControllerDBus has an authorize signal to support this, and it works quite similarly to the GDBusInterfaceSkeleton::g-authorize-method signal.

Using it in an application

Firstly, you need to wait for it to be reviewed and land in GLib. The API might change during review.

Once it’s landed, assuming nothing changes, you just need to create an instance of GDebugControllerDBus. It will create the D-Bus object and hook it all up. When peers change the debug level in your application, the default handler will call g_log_set_debug_enabled() which will change the behaviour of GLib’s default log writer function.

If you have a custom log writer function, you will need to change it to check g_log_get_debug_enabled() and output debug messages if it’s true.
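Putting those two pieces together, here’s a minimal sketch of what an application might do, assuming the API lands in its currently proposed shape (error handling is minimal, and for brevity this writer ignores G_MESSAGES_DEBUG and delegates everything else to the default writer):

#include <gio/gio.h>

/* Custom log writer which drops debug messages unless debug output has been
 * enabled at runtime. A real writer might also honour G_MESSAGES_DEBUG. */
static GLogWriterOutput
log_writer_cb (GLogLevelFlags   log_level,
               const GLogField *fields,
               gsize            n_fields,
               gpointer         user_data)
{
  if ((log_level & G_LOG_LEVEL_DEBUG) && !g_log_get_debug_enabled ())
    return G_LOG_WRITER_HANDLED;

  return g_log_writer_default (log_level, fields, n_fields, user_data);
}

int
main (int argc, char **argv)
{
  g_autoptr(GError) local_error = NULL;
  g_autoptr(GDBusConnection) connection = NULL;
  g_autoptr(GDebugControllerDBus) debug_controller = NULL;

  g_log_set_writer_func (log_writer_cb, NULL, NULL);

  connection = g_bus_get_sync (G_BUS_TYPE_SESSION, NULL, &local_error);
  if (connection == NULL)
    {
      g_printerr ("Error connecting to D-Bus: %s\n", local_error->message);
      return 1;
    }

  /* Exports the debug object on the bus; peers can now toggle debug output
   * at runtime. */
  debug_controller = g_debug_controller_dbus_new (connection, NULL, &local_error);
  if (debug_controller == NULL)
    {
      g_printerr ("Error creating debug controller: %s\n", local_error->message);
      return 1;
    }

  /* … create and run your application here … */

  return 0;
}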

Using it in a service

Using it in a service will typically involve hooking up authorisation. I’ve implemented support for it in libgsystemservice, so that it will be enabled for any user of libgsystemservice after version 0.2.0.

To use polkit for authorisation, set the GssService:debug-controller-action-id property to the ID of the polkit action you want to use for authorising enabling/disabling debug mode. libgsystemservice will handle the polkit checks using that. Here’s an example.

If that property is not set, a default policy will be used, where debug requests will be accepted unconditionally if your service is running on the session bus, and rejected unconditionally if it’s running on the system bus. The thinking is that there’s no security boundary on the session bus (all peers are equally trusted), whereas there are a lot of security boundaries on the system bus so libgsystemservice is best to fail closed and force you to write your own security policy.
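If you’re using GDebugControllerDBus directly rather than via libgsystemservice, a sketch of wiring up a similar policy through the authorize signal might look like the following. The hard-coded bus-type check mirrors the default policy described above; a real service would instead inspect the peer’s credentials or query polkit before allowing the request:

static gboolean
debug_controller_authorize_cb (GDebugControllerDBus  *debug_controller,
                               GDBusMethodInvocation *invocation,
                               gpointer               user_data)
{
  gboolean is_session_bus = GPOINTER_TO_INT (user_data);

  /* Accept all peers on the session bus; reject everyone on the system bus.
   * A real check might use g_dbus_method_invocation_get_sender() and ask
   * polkit whether that peer is authorised. */
  return is_session_bus;
}

/* … after creating the controller, as above … */
g_signal_connect (debug_controller, "authorize",
                  G_CALLBACK (debug_controller_authorize_cb),
                  GINT_TO_POINTER (is_session_bus));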

That’s it! Reviews and feedback welcome. Many thanks to Endless for running this week and actively encouraging everyone to make use of it.

Insulating a suspended timber floor

In a departure from my normal blogging, this post is going to be about how I’ve retrofitted insulation to some of the flooring in my house and improved its airtightness. This has resulted in a noticeable increase in room temperature during the cold months.

Setting the scene

The kitchen floor in my house is a suspended timber floor, built over a 0.9m tall sealed cavity (concrete skim floor, brick walls on four sides, air bricks). This design is due to the fact the kitchen is an extension to the original house, and it’s built on the down-slope of a hill.

The extension was built around 1984, shortly before the UK building regulations changed to (basically) require insulation. This meant that the floor was just some thin laminate flooring on a 5mm underlay sheet, 22mm of chipboard, and then a ventilated air cavity at outside temperature (which, in winter, is about 4°C).

In addition to that, there were 10mm gaps around the edge of the chipboard, connecting the outside air directly with the air in the kitchen. The kitchen is 3×5m, so its 16m perimeter gives an air gap of around 0.16m². That’s equivalent to leaving a window open all year round. The room had been this way for about 36 years! The UK needs a better solution for ongoing improvement and retrofit of buildings.

I established all this initial information fairly easily by taking the kickboards off the kitchen units and looking into the corners of the room; and by drilling a 10mm hole through the floor and threading a small camera (borescope) into the cavity beneath.

Making a plan

The fact that the cavity was 0.9m high and in good structural shape meant that adding insulation from beneath was a fairly straightforward choice. Another option (which would have been the only option if the cavity was shallower) would have been to remove the kitchen units, take up all the floorboards, and insulate from above. That would have been a lot more disruptive and labour intensive. Interestingly, the previous owners of the house had a whole new kitchen put in, and didn’t bother (or weren’t advised) to add insulation at the same time. A very wasted opportunity.

I cut an access hatch in one of the floorboards, spanning between two joists, and scuttled into the cavity to measure things more accurately and check the state of things.

Under-floor cavity before work began (but after a bit of cleaning)

The joists are 145×45mm, which gives an obvious 145mm depth of insulation which can be added. Is that enough? Time for some calculations.

I chose several potential insulation materials, then calculated the embodied carbon cost of insulating the floor with them, the embodied financial cost of them, and the net carbon and financial costs of heating the house with them in place (over 25 years). I made a number of assumptions, documented in the workings spreadsheet, largely due to the lack of EPDs for different components. Here are the results:

| Heating scenario | Insulation assembly | U-value of floor assembly (W/m²K) | Energy loss to floor (W) | Net cost over 25 years (£) | Net carbon cost over 25 years (kgCO2e) |
| --- | --- | --- | --- | --- | --- |
| Current gas tariff (3.68p/kWh, 0.22kgCO2e/kWh) | Current floor | 2.60 | 382 | 3080 | 17980 |
| | Thermojute 160mm | 0.22 | 32 | 730 | 1700 |
| | Thermoflex 160mm | 0.21 | 30 | 860 | 1450 |
| | Thermojute 300mm | 0.12 | 18 | 1020 | 1190 |
| | Thermoflex 240mm | 0.11 | 17 | 910 | 820 |
| | Mineral wool 160mm | 0.24 | 35 | 540 | 1680 |
| ASHP estimate (13.60p/kWh, 0.01kgCO2e/kWh) | Current floor | (as above) | (as above) | 11370 | 1140 |
| | Thermojute 160mm | | | 1420 | 290 |
| | Thermoflex 160mm | | | 1520 | 110 |
| | Thermojute 300mm | | | 1410 | 410 |
| | Thermoflex 240mm | | | 1280 | 80 |
| | Mineral wool 160mm | | | 1290 | 150 |
| Average future estimate, hydrogen grid (8.40p/kWh, 0.30kgCO2e/kWh) | Current floor | (as above) | (as above) | 7020 | 25090 |
| | Thermojute 160mm | | | 1060 | 2290 |
| | Thermoflex 160mm | | | 1170 | 2010 |
| | Thermojute 300mm | | | 1200 | 1520 |
| | Thermoflex 240mm | | | 1090 | 1140 |
| | Mineral wool 160mm | | | 890 | 2320 |

Costings for different floor assemblies; see the spreadsheet for full details

In retrospect, I should also have considered multi-layer insulation options, such as a 20mm layer of closed-cell foam beneath the chipboard, and a 140mm layer of vapour-open insulation below that. More on that below.

In the end, I went with 160mm of Thermojute, packed between the joists and held in place with a windproof membrane stapled to the underside of the joists. This has a theoretical U-value of 0.22W/m²K and hence an energy loss of 32W over the floor area. Over 25 years, with a new air source heat pump (which I don’t have, but it’s a likelihood soon), the net carbon cost of this floor (embodied carbon + heating loss through the floor) should be at most 290kgCO2e, of which around 190kgCO2e is the embodied cost of the insulation. Without changing the heating system it would be around 1700kgCO2e.
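As a sanity check, the energy loss figures follow from \( Q = U A \Delta T \), using the 15m² floor area of the 3×5m kitchen and a temperature difference of roughly 10K between the kitchen and the under-floor cavity (my assumption, consistent with the table above):

\[
Q \approx 0.22\,\mathrm{W/m^2K} \times 15\,\mathrm{m^2} \times 9.8\,\mathrm{K} \approx 32\,\mathrm{W},
\]

compared with \( 2.60 \times 15 \times 9.8 \approx 382\,\mathrm{W} \) for the uninsulated floor.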

The embodied cost of the insulation is an upper bound: I couldn’t find an embodied carbon cost for Thermojute, but its Natureplus certification puts an upper bound on what’s allowed. It’s likely that the actual embodied cost is lower, as the jute is recycled in a fairly simple process.

Three things swung the decision: the availability of Thermojute over Thermoflex, the joist loading limiting the depth of insulation I could install, and the ease of not having to support insulation installed below the depth of the joists.

This means that the theoretical performance of the floor is not up to Passivhaus standard (around 0.10–0.15W/m²K), although this is partially mitigated by the fact that the kitchen is not a core part of the house, and is separated from it by a cavity wall and some tight doors, which means it should not be a significant heat sink for the rest of the house when insulated. It’s also regularly heated by me cooking things.

Hopefully the attention to detail when installing the insulation, and the careful tracing of airtightness and windtightness barriers through the design should keep the practical performance of the floor high. The windtightness barrier is to prevent wind-washing of the insulation from below. The airtightness barrier is to prevent warm, moisture-laden air from the kitchen escaping into the insulation and building structure (particularly, joists), condensing there (as they’re now colder due to the increased insulation) and causing damp problems. An airtightness barrier also prevents convective cooling around the floor area, and reduces air movement which, even if warm, increases our perception of cooling.

I did not consider thermal bridging through the joists. Perhaps I should have done?

Insulation installation

Installation was done over a number of days and evenings, sped up by the fact the UK was in lockdown at the time and there was little else to do.

Cross sections of the insulation details

The first step in installation was to check the blockwork around each joist end and seal that off to reduce draughts from the wall cavity into the insulation. Thankfully, the blockwork was in good condition so no work was necessary.

The next step was to add an airtightness seal around all pipe penetrations through the chipboard, as the chipboard was to form the airtightness barrier for the kitchen. This was done with Extoseal Magov tape.

Sealing pipe penetrations through the chipboard floor using Extoseal Magov.

The next step in installation was to tape the windproof membrane to the underside edge of the chipboard, to separate the end of the insulation from the wall. This ended up being surprisingly quick once I’d made a cutting template.

The next step was to wedge the insulation batts in the gap between each pair of joists. This was done in several layers with offset overlaps. Each batt was slightly wider than the gap between joists, so could easily be held in place with friction. This arrangement shouldn’t be prone to gaps forming in the insulation as the joists expand and contract slightly over time.

One of the positives of using jute-based insulation is that it smells of coffee and sugar (which is what the bags which the jute fibres came from were originally used to transport). One of the downsides is that the batts need to be cut with a saw and the fibres get everywhere.

Some of the batts needed to be carefully packed around (insulated) pipework, and I needed to form a box section of windproof membrane around the house’s main drainage stack in one corner of the space, since it wasn’t possible to fit insulation or the membrane behind it. I later added closed-cell plastic bubblewrap insulation around the rest of the drainage stack to reduce the chance of it freezing in winter, since the under-floor cavity should now be significantly colder.

As more of the insulation was installed, I could start to staple the windproof membrane to the underside of the joists, and seal the insulation batts in place. The room needed three runs of membrane, with 100mm taped overlaps between them.

With the insulation and membrane in place and taped, the finishing touches in the under-floor cavity were to reinstall the pipework insulation and seal it to the windproof membrane to prevent any (really minor) wind washing of the insulation from draughts through the pipe holes; to label everything; insulate the drainage stack; re-clip the mains wiring; and tie the membrane into the access hatch.

Airtightness work in the kitchen

With the insulation layer complete under the chipboard floor, the next stage in the job was to ensure a continuous airtightness layer between the kitchen walls (which are plasterboard, and hence airtight apart from penetrations for sockets which I wasn’t worried about at the time) and the chipboard floor. Each floor board is itself airtight, but the joints between each of them and between them and the walls are not.

The solution to this was to add a lot of tape: cheaper paper-based Uni tape for joining the floor boards, and Contega Solido SL for joining the boards to the walls (Uni tape is not suitable as the walls are not smooth and flat, and there are some complex corners where the flexibility of a fabric tape is really useful).

Tediously, this involved removing all the skirting board and the radiator. Thankfully, though, none of the kitchen units needed to be moved, so this was actually a fairly quick job.

Finally, with some of the leftover insulation and windproof membrane, I built an insulation plug for the access hatch. This is attached to the underside of the hatch, and has a tight friction fit with the underfloor insulation, so should be windtight. The hatch itself is screwed closed onto a silicone bead, which should be airtight and replaceable if the hatch is ever opened.

The final step was to reinstall the kitchen floor, which was fairly straightforward as it’s interlocking laminate strips. And, importantly, to print out the plans, cross-sections, data sheets, a big warning about the floor being an air tightness barrier, and a sign to point towards the access hatch, and put them in a wallet under the kitchen units for someone to find in future.

Retrospective

This was a fun job to do, and has noticeably improved the comfort of my kitchen.

I can’t give figures for how much of an improvement it’s made, or whether its actual performance matches the U-value calculations I made in planning, as I don’t have reliable measured energy loss figures from the kitchen from before installing the insulation. Perhaps I’d try and measure things more in advance of a project like this next time, although that does require an extra level of planning and preparation which can be hard to achieve for a job done in my spare time.

I’m happy with the choice of materials and installation method. Everything was easy to work with and the job progressed without any unexpected problems.

If I were to do the planning again, I might put more thought into how to achieve a better U-value while being limited by the joist height. Extending the joists to accommodate more depth of insulation was something I explored in some detail, but it hit too many problems: the air bricks would need to be ducted (as otherwise they’d be covered up), the joist loading limits might be hit, and the method for extending the joists would have to be careful not to introduce thermal bridges. The whole assembly might have bridged the damp proof course in the walls.

It might, instead, have worked to consider a multi-layer insulation approach, where a thin layer of high performance insulation was used next to the chipboard, with the rest of the joist depth taken up with the Thermojute. I can’t easily change to that now, though, so any future improvements to this floor will either have to add insulation above the chipboard (and likely another airtightness layer above that), or extend below the joists and be careful about it.

Add metadata to your app to say what inputs and display sizes it supports

The appstream specification, used for appdata files for apps on Linux, supports specifying what input devices and display sizes an app requires or supports. GNOME Software 41 will hopefully be able to use that information to show whether an app supports your computer. Currently, though, almost no apps include this metadata in their appdata.xml file.

Please consider taking 5 minutes to add the information to the appdata.xml files you care about. Thanks!

If your app supports (and is tested with) touch devices, plus keyboard and mouse, add:

<recommends>
  <control>keyboard</control>
  <control>pointing</control>
  <control>touch</control>
</recommends>

If your app is only tested against keyboard and mouse, add:

<requires>
  <control>keyboard</control>
  <control>pointing</control>
</requires>

If it supports gamepads, add:

<recommends>
  <control>gamepad</control>
</recommends>

If your app is only tested on desktop screens (the majority of cases):

<requires>
  <display_length compare="ge">medium</display_length>
</requires>

If your app is adaptive and works on mobile device screens through to desktops, add:

<requires>
  <display_length compare="ge">small</display_length>
</requires>

Or, if you’ve developed your app to work at a specific size (mostly relevant for mobile devices), you can specify that explicitly:

<requires>
  <display_length compare="ge">360</display_length>
</requires>

Note that there may be updates to the definition of display_length in appstream in future for small display sizes (phones), so this might change slightly.

Another example is what I’ve added for Hitori, which supports touch and mouse input (but not keyboard input) and which works on small and large screens.
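Following the patterns above, that combination looks something like this (check Hitori’s actual appdata for the exact markup):

<recommends>
  <control>pointing</control>
  <control>touch</control>
</recommends>
<requires>
  <display_length compare="ge">small</display_length>
</requires>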

See the full specification for more unusual situations and additional examples.

How your organisation’s equipment policy can impact the environment

At the Endless OS Foundation, we’ve recently been updating some of our internal policies. One of these is our equipment policy, covering things like what laptops and peripherals are provided to employees. While updating it, we took the opportunity to think about the environmental impact it would have, and how we could reduce that impact compared to standard or template equipment policies.

How this matters

For many software organisations, the environmental impact of hardware purchasing for employees is probably at most the third-biggest contributor to the organisation’s overall impact, behind carbon emissions from energy usage (in building and providing software to a large number of users) and emissions from transport (both in sending employees to conferences, and in people’s daily commutes to and from work). Both of those likely contribute tens of tonnes of emissions per year for a small or medium-sized organisation (as a very rough approximation, since all organisations are different). The lifecycle emissions from a modern laptop are in the region of 300kgCO2e, and one target for per-person emissions is around 2.2tCO2e/year by 2030.

If changes to this policy reduce new equipment purchase by 20%, that’s a 20kgCO2e/year reduction per employee.
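That figure comes from amortising the laptop’s embodied emissions over the typical 3 year replacement cycle mentioned below:

\[
\frac{300\,\mathrm{kgCO_2e}}{3\,\mathrm{years}} = 100\,\mathrm{kgCO_2e/year},
\qquad
100\,\mathrm{kgCO_2e/year} \times 20\% = 20\,\mathrm{kgCO_2e/year}.
\]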

So, while changes to your organisation’s equipment policy are not going to have a big impact, they will have some impact, and are easy and unilateral changes to make right now. 20kgCO2e is roughly the emissions from a 150km journey in a petrol car.

What did we put in the policy?

We started with a fairly generic policy. From that, we:

  • Removed time-based equipment replacement schedules (for example, replacing laptops every 3 years) and went with a more qualitative approach: equipment is replaced when it no longer works well enough for someone to do their job properly.
  • Provided recommended laptop models for different roles (currently several different versions of the Dell XPS 13), which we have checked conform to the rest of the policy and have an acceptable environmental impact — Dell are particularly good here because, unlike a lot of laptop manufacturers, they publish a lifecycle analysis for their laptops.
  • But also gave people the option to justify a different laptop model, as long as it meets certain requirements:

All laptops must meet the following standards in order to have low lifetime impacts:

All other equipment must meet relevant environmental standards, which should be discussed on a case by case basis

If choosing a device not explicitly listed above, manufacturers who provide Environmental Product Declarations for their products should be preferred

  • These requirements aim to minimise the laptop’s carbon emissions during use (i.e. its power consumption), and increase the chance that it will be repairable or upgradeable when needed. In particular, having a replaceable battery is important, as the battery is the most likely part of the laptop to wear out.
  • The policy prioritises laptop upgrades and repairs over replacement: even when a laptop would typically be coming up for replacement after 3 years, it steers people towards upgrading it (a new hard drive, additional memory, new battery, etc.) rather than replacing it.
  • When a laptop is functional but no longer useful, the policy requires that it’s wiped, refurbished (if needed) and passed on to a local digital inclusion charity, school, club or similar.
  • If a laptop is broken beyond repair, the policy requires that it’s disposed of according to WEEE guidelines (which is the norm in Europe, but potentially not in other countries).

A lot of this just codifies what we were doing as an organisation already — but it’s good to have the policy match the practice.

Your turn

I’d be interested to know whether others have similar equipment policies, or have better or different ideas — or if you make changes to your equipment policy as a result of reading this.

Don’t (generally) put documentation license in appdata

There have been a few instances recently where people have pointed out that GNOME Software marks some apps as not free software when they are. This is a bug in the appdata files provided by those applications, which typically include something like

<project_license>GPL-3.0+ and CC-BY-SA-3.0</project_license>

This is generally an attempt to list the license of the code and of the documentation. However, the AND operator in the resulting SPDX expression means that both licenses apply to the project as a whole, so the expression is only as free as its most restrictive component. As a result, the expression as a whole is considered not free software (CC-BY-SA-3.0 is not a free software license as per the FSF or OSI lists).

Instead, those apps should probably just list the ‘main’ license for the project:

<project_license>GPL-3.0+</project_license>

and document the license for their documentation separately. As far as I know, the appdata format doesn’t currently have a way of listing the documentation license in a machine readable way.

If you maintain an app, or want to help out, please check the licensing is correctly listed in your app’s appdata.

There’s an issue open against the appdata spec for improving how licenses are documented in future — contributions also welcome there.

(To avoid doubt, I think CC-BY-SA-3.0 is a fine license for documentation; it’s just problematic to include it in the ‘main’ appdata license statement for an app.)