Iterating a GMainContext without using a GMainLoop

tl;dr: Use g_main_context_iteration() in a loop with a termination condition; when changing that condition, call g_main_context_wakeup().

GMainLoop is a bit of a pain to use if you want to run a main context with non-trivial termination conditions, since you need to put g_main_loop_quit() calls in various places, and the logic for terminating the loop becomes quite spread out.

Instead, it’s better to iterate the underlying GMainContext directly, like this:

while (async_result == NULL)
  g_main_context_iteration (context, TRUE);

where your termination condition is async_result != NULL. When changing the termination condition, call g_main_context_wakeup() to ensure the current iteration of the main context is unblocked in order to check the condition again. (This is technically only necessary if you’re changing the condition from another thread, in which case you also want to get/set the condition atomically; but it’s a good habit to get into anyway.)
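
For the threaded case, a minimal sketch (with hypothetical names) might look like the following: the worker thread sets the condition atomically and wakes the context up, and the waiting thread reads the condition atomically each time round the loop.

typedef struct
{
  GMainContext *context;  /* context iterated by the waiting thread */
  gpointer      result;   /* NULL until the operation completes; accessed atomically */
} SharedState;

/* Called on the worker thread when its work is done. */
static void
worker_finished (SharedState *state,
                 gpointer     result)
{
  g_atomic_pointer_set (&state->result, result);
  g_main_context_wakeup (state->context);
}

/* Meanwhile, on the waiting thread: */
while (g_atomic_pointer_get (&state.result) == NULL)
  g_main_context_iteration (state.context, TRUE);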

This allows an easy pattern for turning an asynchronous operation into a synchronous one, which can be quite useful in unit tests:

typedef struct
{
  GAsyncResult **result_out;
  GMainContext  *context;
} ContextData;

static void
async_result_cb (GObject      *obj,
                 GAsyncResult *result,
                 gpointer      user_data)
{
  ContextData *data = user_data;
  *data->result_out = g_object_ref (result);
  g_main_context_wakeup (data->context);
}

static void
test_function (void)
{
  …

  g_autoptr(GAsyncResult) result = NULL;
  g_autoptr(GMainContext) context = g_main_context_new ();
  ContextData data = { &result, context };
  /* Push @context as the thread-default, so a GIO-style async callback
   * is dispatched to it rather than to the global default context. */
  g_main_context_push_thread_default (context);

  some_operation_async (…, async_result_cb, &data);

  while (result == NULL)
    g_main_context_iteration (context, TRUE);

  g_main_context_pop_thread_default (context);

  g_autoptr(GError) error = NULL;
  gboolean retval = some_operation_finish (…, result, &error);

  …
}

End of year thoughts

Inspired by others, I thought doing a retrospective on 2017 would be an interesting thing to look back on in a year’s time and see what’s changed.

Work things

December 2017 marked a year of me working for Endless. It’s been twelve months of fixing small bugs, maintaining some OS components, poking my nose into lower parts of the OS than I’m used to, and taking on one or two big projects. I spent a significant amount of time on a project to add new distribution features to libostree and flatpak. That’s something which will hopefully be rolling out in early 2018. It was good to be able to get fairly deeply involved with a new component at a lower level in the stack. More of that in 2018!

I also spent some of my time in 2017 picking up a bit more of the GLib maintenance workload. I’m not sure how much of a difference it’s made to the bug backlog, but it’s kept me occupied anyway.

Hobby things

For most of my working life, I’ve had the luxury of being able to work on FOSS software (mostly in the GNOME ecosystem) as my day job, and as a result, quite a few of my hobby projects are actually maintained during the day. The ones which aren’t have suffered during 2017, because time and energy are limited. I’ve been thinking of ways to ensure that code gets maintained, but haven’t come up with any good solutions in 2017. That’s one to carry over into 2018.

Trips

2017 was a bit less of a plane-heavy year than 2016, but some trips still happened:

  • FOSDEM, catching up with old friends and colleagues, and where the current phase of GLib maintenance started.
  • A week of caving in South Wales, including a trip down the fantastic Dan-yr-Ogof cave (the short round), which included floating down an underground canal on an inflatable swimming pool ring.
  • A week of walking in the Glencoe area, where the weather was uncharacteristically cooperative, and the views were, predictably, pretty good.
  • A party in London to celebrate Endless’ 5th birthday. As always, it was good to spend quality time with my Endless colleagues in endless pubs.
  • Two weeks of caving in Austria, finding some new cave, and exploring further into existing cave. This is something I’m hoping to repeat in future.
  • GUADEC in Manchester, right on the back of the Austria trip (including some fun in posting a laptop to myself so I could have it at the conference). I gave a talk, which some people listened to. We also went on a walk in the Peak District, which was good fun (even if the weather was a bit grey).
  • Two weeks of long-distance trekking in the Svaneti region of Georgia. An excellent destination, with excellent cheese bread. We derived continual amusement from the guide’s dry humour, and the ‘helpful’ comments left by others on the trek information we were using. I did not get struck by lightning.
  • A long weekend in Stockholm to explore the city and catch up with friends. Stockholm has good running!

The outdoors

2017 has definitely been a year of taking advantage of living in the north of England.

  • Around 40 caving trips on weeknights and weekends, which have been interesting and (mostly) fun.
  • 12 fell races, a fun run along with a friend for part of their Bob Graham round, and my first ultra.
  • Running really took off for me: around 1300km run in total (and 57km of ascent), and about 150 hours of 2017 spent running.

Reading and listening

Gigs were a bit thin on the ground: despite there being plenty on in my local area, I always had something else to do. Even so:

  • Insomnium were good, though I had to leave before the end because of trains.
  • Breabach were very good, and a band I hadn’t heard before going to the gig. Now a favourite.
  • Kreator sounded uncannily like their last live album, but were otherwise enjoyable.
  • Opeth were pretty fantastic, playing a good variety of new and old stuff.

I managed to read only 13 books in 2017, though that number is largely padded out by some short stories I read just to reach my yearly target. That’s not quite fair, though; I read 3250 pages in total. Most recommendable: Where Late the Sweet Birds Sang; most disappointing: Hiroshima.

Debugging critical warnings from GLib code

tl;dr: G_DEBUG=fatal-warnings gdb ./my-program

If you have some code which uses GLib, and it emits a warning or critical message, for example if a g_return_if_fail() check fails (a critical) or a g_warning() message is emitted (a warning), how do you track it down and debug it?

Run your code under gdb with G_DEBUG=fatal-warnings (for g_return_if_fail() and g_warning()) or G_DEBUG=fatal-criticals (for g_return_if_fail()), and gdb will break execution when the failing precondition or warning is reached. If there are multiple warnings and you want to skip through to get to a particular one, just use the continue command in gdb until you reach the one you want.
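
For example, here is a small (hypothetical) program whose failing precondition check could be tracked down this way:

#include <glib.h>

/* Hypothetical example: calling this with a NULL argument emits a critical
 * warning from the failing g_return_if_fail() check. */
static void
set_greeting (const gchar *greeting)
{
  g_return_if_fail (greeting != NULL);

  g_print ("%s\n", greeting);
}

int
main (void)
{
  set_greeting (NULL);
  return 0;
}

Running it as G_DEBUG=fatal-criticals gdb ./example (and then run inside gdb) breaks execution at the failing check, and backtrace shows which caller passed the NULL.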

Cave exploration in Austria

For once, this is going to be a non-technical post. I hope to share some of what I’ve been up to in my summer holidays this year.

In late July, I spent two weeks on the Löser plateau in Austria, as part of a long-running caving expedition exploring the caves up there. The plateau is a huge expanse of limestone, opposite the Dachstein, and it contains hundreds of caves of varying sizes. The same expedition has been going there every summer for the last 40 years, slowly working its way across the plateau, trying to find big and deep caves. This was the first time I’d joined them.

Credit: Chris Densham

Some brief background: What is caving? It’s a sport where people descend caves, generally to the bottom (or as deep as they can get), to see and map what is there. It typically involves a lot of water (less of that in Austria than the UK) and mud, cold temperatures (definitely cold in Austria), and technical rope work to descend and ascend vertical shafts (‘pitches’). It combines the skills of climbing, scrambling and surveying; and often requires unshakeable enthusiasm for prolonged physical misery. It’s good fun.

Credit: Luke Stangroom

This year, we focused primarily on two existing (and large) caves: Tunnock’s, and Balcony. I spent a number of my days down Balcony, at around -300m (that’s 300m vertically below the entrance of the cave). We explored various new bits of passage, including a 100m×80m×80m chamber which, sadly, was a dead end; but good fun to get to and explore. Other trips included setting up the ropes (‘rigging’) in some bits of cave so that areas explored in previous years could be revisited; and re-surveying some other pieces of passage where the original surveys were incorrect.

Credit: Brendan Hall

Aside from trips down Balcony, we spent some time prospecting for new caves, finding a couple of promising new ones, and another which looked promising then turned into a dead end after 100m of depth. Since I left, another few cave entrances (some new, some rediscovered from 2012) have been found, and leads have been pushed even further in the existing caves.

Credit: Brendan Hall

What are conditions like in the caves? Unlike caves in the UK, most of the ones on the Löser plateau are dry apart from one or two sections. It’s only very recently that exploration has got down to a depth which routinely sees water. There is some mud, but not as much as in the UK. However, what there is is thicker and more pervasive. There’s generally more sand than one sees in caves in the UK, which does a good job of gritting up equipment and hands (think about what happens whenever you go to the beach). The caves are cold, but not ludicrously so — a few of them are cold enough to maintain large ice columns, but I was warm enough in my UK caving gear without extra thermals.

Credit: Brendan Hall

Why do people do this expedition caving? Many reasons, but most commonly, because it takes you to interesting new places, it’s a technical challenge, it’s a physical challenge, and the other people who do it are good fun to be around.

Credit: Luke Stangroom

When not caving, due to tiredness, laziness or weather, people spent their time in the valley, relaxing and drawing up surveys of the sections of caves they’d recently explored (‘nerding’). There are various bits of software for this, which take the legs of the dead-reckoned survey and tie them together, using error distribution through loop closures to increase accuracy. The results are pretty nifty, though it takes a while to get up to speed with the software and draw up your surveys efficiently.

Credit: http://expo.survex.com/1623/264/264.html

Caving’s a fun sport with opportunities to go places where literally no human has ever been before, if you take it far enough. It’s easy to get into, too. Read more updates from the expedition if you want.

GUADEC 2017: sun, rain, Coverity, walks

GUADEC 2017 has ended in Manchester. It’s been great; thanks to the organisers and sponsors for a fun conference (this year’s highlight: a preponderance of Tiki bars).

We’ve had sun and heat, and we’ve had rain and more rain. Often within the same hour. On the final day of the conference, a group of us went out to Edale to do some walks to see the Peak District, a national park area near Manchester. This is an area I’ve visited many times before, so it was fun to be able to show it to GNOME people.

This year I gave a talk about the Coverity scans I’ve been running on various GNOME and freedesktop modules for the last year. The slides are online and the video will be up with the rest of the GUADEC videos. If you have a security-critical (or other) module which you’d like to be included in the scan set, let me know. Coverity’s good at finding bugs in complex control flows, but you do need to put some time into triaging its reports. I’m happy to provide guidance about using it.

I spent a fair amount of time during the unconference days reviewing Simon McVittie’s D-Bus work to add support for app-containers into the D-Bus specification and dbus-daemon. This is the first part of an effort to improve support for exposing unconfined D-Bus services to confined app-containers safely and efficiently. The rest of my time was spent working on exciting support for updating flatpak over the LAN for Endless OS. I’ll blog about this more in future.

Thanks to the GUADEC team for organising a great conference, the conference sponsors, and to my employer, Endless, for sponsoring me to go.

Building a GNOME nightly app: Hitori

Following on from earlier efforts to make Hitori a flatpak app, it’s now available as a GNOME nightly app (click here to install), built from git master.

Thanks to hard work by Alex Larsson and others, this was ridiculously easy (see the wiki page on it):

  1. Write flatpak manifest with source and build instructions for Hitori (test locally with flatpak-builder)
  2. Add .app file pointing to it in gnome-apps-nightly
  3. Wait for build to complete
  4. Add .flatpakref file pointing to the build in gnome-apps-nightly

Speaking of Hitori, I don’t have much time to maintain it at the moment, and there are some interesting open feature requests. If anybody is looking for a fun little project to take on, I am happy to mentor work on them.

Running GitLab CI on autotools projects

Inspired by the talk at FOSDEM, I’ve just enabled GitLab’s continuous integration (CI) to run make distcheck for Walbottle, and it was delightfully easy. The results are on Walbottle’s GitLab page.

Steps

  1. Create a ci branch to contain the mess you’ll make while iterating over the correct compile steps.
  2. Create and push a .gitlab-ci.yml file containing build rules similar to the following:
    image: debian:unstable
    
    before_script:
      - apt update -qq
      - apt install -y -qq build-essential autoconf automake pkg-config libtool m4 autoconf-archive gtk-doc-tools libxml2-utils gobject-introspection libgirepository1.0-dev libglib2.0-dev libjson-glib-dev
    
    stages:
      - build
    
    # FIXME: Re-enable valgrind once running the tests under it doesn’t take forever (it causes timeouts).
    # Re-add valgrind to apt-install line above
    build-distcheck:
      stage: build
      script:
        - mkdir build
        - cd build
        - ../autogen.sh --disable-valgrind
        - make V=1 VERBOSE=1
        - DISTCHECK_CONFIGURE_FLAGS=--disable-valgrind make distcheck V=1 VERBOSE=1
    
      # The files which are to be made available in GitLab
      artifacts:
        paths:
          - build/*
  3. Iterate a few times until you get all the dependencies right.
  4. Fix any problems you find (because this might well find problems with your dependency declaration in configure.ac, or other distcheck problems in your project).
  5. Merge ci to master and profit from CI results on every branch and master commit.

Looking at the .gitlab-ci.yml file

For information on the overall layout of the YAML file, and the phases available, you’re best off looking at the comprehensive GitLab documentation. Here are some notes about the autotools-and-C–specific bits of it:

  • The image is a Docker image; I picked a Debian one from the Docker hub.
  • Package installation seems to need to be done in the before_script phase, or the packages can’t be found (which is presumably a protection against rogue build systems).
  • I chose to run distcheck in my build rule because that runs the build, runs the tests, and tries various srcdir ≠ builddir configurations. You can add other build rules to try other build setups.
  • Pass V=1 VERBOSE=1 to get verbose build and test log output in your CI build logs, otherwise you will struggle to work out what is causing any failures.
  • Note that configure flags passed to ./configure are not automatically passed in again when ./configure is run as part of distcheck — so use DISTCHECK_CONFIGURE_FLAGS for that. Ideally, your project will be less fragile than mine, and hence not need any of this.
  • Export the whole build directory as an artifact on success, so you can look at any of the build objects, or the generated tarball, or documentation. You could limit this (for example, to just the tarball) if you’re sure you’ll never need the rest of it.

Going to FOSDEM

I’m going to FOSDEM 2017!

I’ll have a spare, unopened, Nitrokey Pro with me to give to anyone who’s got a good plan for improving the user experience for them in GNOME. That might mean making the setup seamless; it might mean working on the rewrite of Seahorse; it might mean integrating them with LUKS; or something else. Contact me if you’re interested and have a plan.

Validating e-mail addresses

tl;dr: Most likely, you want to validate using the regular expression from the WhatWG (please think about the trade-off you want between practicality and precision); but if you read the caveats below and still want to validate to RFC 5322, then you want libemailvalidation.

Validating e-mail addresses is hard, and not something which you normally want to do in great detail: while it’s possible to spend a lot of time checking the syntax of an e-mail address, the real measure of whether it’s valid is whether the mail server on that domain accepts it. There is ultimately no way around checking that.

Given that a lot of mail providers implement their own restrictions on the local-part (the bit before the ‘@’) of an e-mail address, an address like !!@gmail.com (which is syntactically valid) probably won’t actually be accepted. So what’s the value in doing syntax checks on e-mail addresses? The value is in catching trivial user mistakes, like pasting the wrong data into an e-mail address field, or making a trivial typo in one.

So, for most use cases, there’s no need to bother with fancy validation: just check that the e-mail address matches the regular expression from the WhatWG. That should catch simple mistakes, accept all valid e-mail addresses, and reject some invalid addresses.
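
As a rough sketch of that check (using GLib’s GRegex; the function name is made up, and the pattern is a placeholder to be copied from the WhatWG HTML specification):

#include <glib.h>

/* Placeholder: substitute the e-mail regular expression from the WhatWG
 * HTML specification here. */
#define WHATWG_EMAIL_PATTERN "…"

static gboolean
address_looks_valid (const gchar *address)
{
  /* A match means the address is syntactically plausible; it says nothing
   * about whether the mail server for that domain will accept it. */
  return g_regex_match_simple (WHATWG_EMAIL_PATTERN, address, 0, 0);
}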

So why have I gone further than that? Walbottle needs it — I think where one RFC references another is one of the few times it’s necessary to fully implement e-mail validation. In this case, Walbottle needs to be able to validate e-mail addresses provided in JSON files, for its email defined format.

So, I’ve just finished writing a small copylib to validate e-mail addresses according to all the RFCs I could get my hands on; mostly RFC 5322, but there is a sprinkling of 5234, 5321, 3629 and 6532 in there too. It’s called libemailvalidation (because naming is hard; typing is easier). Since it’s only about 1000 lines of code, there seems to be little point in building a shared library for it and distributing that; so add it as a git submodule to your code, and use validate.c and validate.h directly. It provides a single function:

size_t error_position;

is_valid = emv_validate_email_address (address_to_check,
                                       length_of_address_to_check,
                                       EMV_VALIDATE_FLAGS_NONE,
                                       &error_position);

if (!is_valid)
  fprintf (stderr, "Invalid e-mail address; error at byte %zu\n",
           error_position);

I’ve had fun testing this lot using test cases generated from the ABNF rules taken directly from the RFCs, thanks to abnfgen. If you find any problems, please get in touch!

Fun fact for the day: due to the obs-qp rule, a valid e-mail address can contain a nul byte. So unless you ignore deprecated syntax for e-mail addresses (not an option for programs which need to be interoperable), e-mail addresses cannot be passed around as nul-terminated strings.

Where are messages on the terminal coming from?

From a discussion on #gtk+ this morning: if you’re using recent versions of GLib with structured logging support, and you want to work out which bit of your code is causing a certain message to be printed to the terminal, run your application in gdb and add a breakpoint on g_log_writer_standard_streams.

(This assumes you’re using the default log writer function; if not, you need to add a breakpoint on something in your writer function.)
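
For example, with a hypothetical custom writer installed via g_log_set_writer_func(), the breakpoint would go on the writer itself:

#include <glib.h>

/* Hypothetical custom structured-log writer: with this installed, break on
 * my_log_writer rather than g_log_writer_standard_streams. */
static GLogWriterOutput
my_log_writer (GLogLevelFlags   log_level,
               const GLogField *fields,
               gsize            n_fields,
               gpointer         user_data)
{
  /* `break my_log_writer` in gdb stops here for every structured message. */
  return g_log_writer_standard_streams (log_level, fields, n_fields, user_data);
}

int
main (void)
{
  g_log_set_writer_func (my_log_writer, NULL, NULL);
  g_message ("Where is this printed from?");
  return 0;
}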