Category Archives: GNOME

Posts about GNOME and programming for it.

A detailed look at GSource

Another post in this sporadic mini-series on GMainContext, this time looking at GSource and writing your own type of event source. This post is adapted from a page I wrote about it for developer.gnome.org, which includes fully-compilable and unit tested source code; future tweaks and clarifications can be found there. If you find a problem, please file a bug.

tl;dr: Write a custom GSource if you have a non-file-descriptor-based event source to integrate with a GMainContext. It’s a matter of writing a few virtual functions.

What is GSource?

A GSource is an expected event with an associated callback function which will be invoked when that event is received. An event could be a timeout or data being received on a socket, for example.

GLib contains various types of GSource, but also allows applications to define their own, allowing custom events to be integrated into the main loop.

The structure of a GSource and its virtual functions are documented in detail in the GLib API reference.

A message queue source

As a running example, a message queue source will be used which dispatches its callback whenever a message is enqueued to a queue internal to the source (potentially from another thread).

This type of source is useful for efficiently transferring large numbers of messages between main contexts. The alternative is transferring each message as a separate idle GSource using g_source_attach(). For large numbers of messages, this means a lot of allocations and frees of GSources.

Structure

Firstly, a structure for the source needs to be declared. This must contain a GSource as its parent, followed by the private fields for the source: the queue and a function to call to free each message once finished with.

typedef struct {
  GSource         parent;
  GAsyncQueue    *queue;  /* owned */
  GDestroyNotify  destroy_message;
} MessageQueueSource;

Prepare function

Next, the prepare function for the source must be defined. This determines whether the source is ready to be dispatched. As this source is using an in-memory queue, this can be determined by checking the queue’s length: if there are elements in the queue, the source can be dispatched to handle them.

return (g_async_queue_length (message_queue_source->queue) > 0);
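
In full, the prepare virtual function might look like this (a sketch following the GSourceFuncs prepare signature; setting the *timeout_ out-parameter to -1 means this source imposes no upper bound on how long the main loop may poll):

static gboolean
message_queue_source_prepare (GSource *source,
                              gint    *timeout_)
{
  MessageQueueSource *message_queue_source = (MessageQueueSource *) source;

  /* No timeout needed: the source is ready exactly when the queue is
   * non-empty. */
  *timeout_ = -1;

  return (g_async_queue_length (message_queue_source->queue) > 0);
}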

Check function

As this source has no file descriptors, the prepare and check functions essentially have the same job, so a check function is not needed. Setting the field to NULL in GSourceFuncs bypasses the check function for this source type.

Dispatch function

For this source, the dispatch function is where the complexity lies. It needs to dequeue a message from the queue, then pass that message to the GSource’s callback function. There may be no messages in the queue: even though the prepare function returned TRUE, another source wrapping the same queue may have been dispatched in the meantime and taken the final message. Further, if no callback has been set for the GSource (which is allowed), the message must be destroyed and silently dropped. The code below wraps this logic in the full dispatch virtual function, casting the callback to a message-specific type (MessageQueueSourceFunc, covered in the next section).

If both a message and callback are set, the callback can be invoked on the message and its return value propagated as the return value of the dispatch function. This is FALSE to destroy the GSource and TRUE to keep it alive, just as for GSourceFunc — these semantics are the same for all dispatch function implementations.

static gboolean
message_queue_source_dispatch (GSource     *source,
                               GSourceFunc  callback,
                               gpointer     user_data)
{
  MessageQueueSource *message_queue_source = (MessageQueueSource *) source;
  gpointer message;
  MessageQueueSourceFunc func;

  /* The callback is stored as a generic GSourceFunc; cast it to the
   * message-specific type this source documents (see below). */
  func = (MessageQueueSourceFunc) callback;

  /* Pop a message off the queue. */
  message = g_async_queue_try_pop (message_queue_source->queue);

  /* If there was no message, bail. */
  if (message == NULL)
    {
      /* Keep the source around to handle the next message. */
      return TRUE;
    }

  /* @func may be %NULL if no callback was specified.
   * If so, drop the message. */
  if (func == NULL)
    {
      if (message_queue_source->destroy_message != NULL)
        {
          message_queue_source->destroy_message (message);
        }

      /* Keep the source around to consume the next message. */
      return TRUE;
    }

  return func (message, user_data);
}

Callback functions

The callback from a GSource does not have to have type GSourceFunc. It can be whatever function type is called in the source’s dispatch function, as long as that type is sufficiently documented.
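
For the message queue source, for example, the callback type might be declared like this (a sketch; the name matches the cast used in the dispatch function above):

/* Invoked once per dequeued message; the callback takes ownership of
 * @message. Return TRUE to keep the source alive, FALSE to destroy it,
 * as for GSourceFunc. */
typedef gboolean (*MessageQueueSourceFunc) (gpointer message,
                                            gpointer user_data);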

Normally, g_source_set_callback() is used to set the callback function for a source instance. With its GDestroyNotify, a strong reference can be held to keep an object alive while the source is still alive:

g_source_set_callback (source, callback_func,
                       g_object_ref (object_to_strong_ref),
                       (GDestroyNotify) g_object_unref);

However, GSource has a layer of indirection for retrieving this callback, exposed as g_source_set_callback_indirect(). This allows GObject to set a GClosure as the callback for a source, so that the source is automatically destroyed when an associated object is finalized — a weak reference, in contrast to the strong reference above:

g_source_set_closure (source,
                      g_cclosure_new_object (callback_func,
                                             object_to_weak_ref));

It also allows for a generic, closure-based ‘dummy’ callback, which can be used when a source needs to exist but no action needs to be performed in its callback:

g_source_set_dummy_callback (source);

Constructor

Finally, the GSourceFuncs definition of the GSource can be written, alongside a construction function. It is typical practice to expose new source types simply as GSources, not as the subtype structure, so the constructor returns a GSource*.
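
A sketch of that GSourceFuncs definition, tying together the functions above (the finalize function releases the source’s reference on the queue; the unused members are left as NULL):

static void
message_queue_source_finalize (GSource *source)
{
  MessageQueueSource *message_queue_source = (MessageQueueSource *) source;

  g_async_queue_unref (message_queue_source->queue);
}

static GSourceFuncs message_queue_source_funcs =
  {
    message_queue_source_prepare,
    NULL,  /* check, unnecessary for this source (see above) */
    message_queue_source_dispatch,
    message_queue_source_finalize,
  };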

The example constructor here also demonstrates use of a child source to support cancellation conveniently. If the GCancellable is cancelled, the application’s callback will be dispatched and can check for cancellation. (The application code will need to make a pointer to the GCancellable available to its callback, as a field of the callback’s user data set in g_source_set_callback()).

GSource *
message_queue_source_new (GAsyncQueue    *queue,
                          GDestroyNotify  destroy_message,
                          GCancellable   *cancellable)
{
  GSource *source;  /* alias of @message_queue_source */
  MessageQueueSource *message_queue_source;  /* alias of @source */

  g_return_val_if_fail (queue != NULL, NULL);
  g_return_val_if_fail (cancellable == NULL ||
                        G_IS_CANCELLABLE (cancellable), NULL);

  source = g_source_new (&message_queue_source_funcs,
                         sizeof (MessageQueueSource));
  message_queue_source = (MessageQueueSource *) source;

  /* The caller can overwrite this name with something more useful later. */
  g_source_set_name (source, "MessageQueueSource");

  message_queue_source->queue = g_async_queue_ref (queue);
  message_queue_source->destroy_message = destroy_message;

  /* Add a cancellable source. */
  if (cancellable != NULL)
    {
      GSource *cancellable_source;

      cancellable_source = g_cancellable_source_new (cancellable);
      g_source_set_dummy_callback (cancellable_source);
      g_source_add_child_source (source, cancellable_source);
      g_source_unref (cancellable_source);
    }

  return source;
}
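
Putting it together, hypothetical usage looks something like this (message_cb and the surrounding setup are illustrative, not part of the API):

/* An illustrative callback matching the MessageQueueSourceFunc type;
 * it takes ownership of @message. */
static gboolean
message_cb (gpointer message,
            gpointer user_data)
{
  gchar *text = message;

  g_message ("Received: %s", text);
  g_free (text);

  /* Keep the source alive to receive further messages. */
  return TRUE;
}

static void
attach_message_queue_source (GMainContext *context)
{
  GAsyncQueue *queue = g_async_queue_new ();
  GSource *source = message_queue_source_new (queue, g_free, NULL);

  g_source_set_callback (source, (GSourceFunc) message_cb, NULL, NULL);
  g_source_attach (source, context);
  g_source_unref (source);

  /* From any thread: */
  g_async_queue_push (queue, g_strdup ("hello world"));

  /* The source holds its own reference on the queue. */
  g_async_queue_unref (queue);
}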

Complete example

The complete source code is available in gnome-devel-docs’ git repository, along with unit tests.

Further examples

Sources can be more complex than the example given above. In libnice, a custom GSource is needed to poll a set of sockets which changes dynamically. The implementation is given as ComponentSource in component.c and demonstrates a more complex use of the prepare function.

Another example is a custom source to interface GnuTLS with GLib in its GTlsConnection implementation. GTlsConnectionGnutlsSource synchronizes the main thread and a TLS worker thread which performs the blocking TLS operations.

D-Bus API design guidelines

D-Bus has just gained some upstream guidelines on how to write APIs which best use its features and follow its conventions. Hopefully they draw together all the best practices from the various articles which have been published about writing D-Bus APIs in the past. If anybody has suggestions, criticism or feedback about them, or ideas for extra guidelines to include, please get in touch!

GNOME programming guidelines: the rise of gnome-devel-docs


Before I start, an addendum to my last post about the DX hackfest: I wish to thank Codethink for sponsoring dinner one night of the event. I forgot to include that in my original post, sorry! Thanks again to all the companies who let employees attend: Endless, Codethink, Canonical and Red Hat.

tl;dr: Check out the new GNOME Programming Guidelines and file bugs in Bugzilla. Though it looks like a cronjob may be taking the docs offline occasionally. This will be fixed.

Now, to some of the results of the hackfest. In the last week or so, I’ve been working on expanding the GNOME programming guidelines, upstreaming various bits of documentation which Collabora have been writing for a customer who is using the GNOME stack in a large project. The guidelines were originally written in the early days of GNOME by Federico, Miguel and Morten; Federico updated them in 2013, and now they’ve been expanded again.

It looks like these guidelines can fill one of the gaps we currently have in documentation, where we need to recommend best practices and give tutorial-style examples, but gtk-doc–generated API manuals are not the right place. For example, the new guidelines include recommendations for making libraries parallel-installable (based on Havoc’s original article, with permission); or recommendations for choosing where to store data (in GSettings, a serialised GVariant store, or a full-on GOM/SQLite database?). The guidelines are intended to be useful to all developers, although they inherently need to target newer developers more, so they may simplify a few things.

I’ve still got some ideas for things to add. For example, I need to rework some of my blog posts about GMainContext into an article, since we should be documenting before blogging. Other ideas are very welcome, as are criticism, feedback and improvements: please file a bug against gnome-devel-docs. Thanks to the documentation team for their help and reviews!

DX hackfest 2015: day 5

Day 5, and the DX and docs hackfest in Collabora HQ, Cambridge has drawn to a close. It’s been great to have everyone here, and there have been a lot of in-depth discussions over the last few days about the details of app sandboxing, runtimes, Builder integration with various new services, the development of an IDE abstraction layer, approaches for making build systems accessible to Builder, lots of new things to statically analyse, and some fairly fundamental additions to GLib in the form of G_DECLARE_[FINAL|DERIVABLE]_TYPE and general-purpose reference counted memory areas. Whew! We even had a fleeting visit by Richard Hughes to discuss packaging issues for apps.

For all the details, see the blogs by Ryan, Cosimo, Ryan again, Alberto, Christian and Emmanuele.

I can’t do justice to the work of the docs team, who put in consistent, solid effort throughout the hackfest. See the blogs by Petr, Bastian, Kat and Jim for all the details. They even left me with a seemingly endless supply of Mallard balls to throw around the office!

Dave and I have spent a little while working on further deprecating gnome-common. More details to come once the migration guide is finished.

Thank you to Collabora for hosting, Codethink for sponsoring a dinner, and Endless, Codethink (again), Canonical and Red Hat for letting people attend; and thank you to the GNOME Foundation for sponsoring some of the attendees. It would not be a hackfest without the hackers!

Collabora logo · Codethink logo · Endless Mobile logo · Red Hat logo · Canonical logo

DX hackfest 2015: day 1

It’s a sunny Sunday here in Cambridge, UK, and GNOMErs have been arriving from far and wide for the first day of the 2015 developer experience hackfest. This is a week-long event, co-hosted with the winter docs hackfest (which Kat has promised to blog about!) in the Collabora offices.

Today was a bit of a slow start, since people were still arriving throughout the day. Regardless, there have been various discussions, with Ryan, Emmanuele and Christian discussing performance improvements in GLib, Christian and Allan plotting various different approaches to new UI in Builder, Cosimo and Carlos silently plugging away at GTK+, and Emmanuele muttering something about GProperty now and then.

Tomorrow, I hope we can flesh out some of these initial discussions a bit more and get some roadmapping down for GLib development for the next year, amongst other things. I am certain that Builder will feature heavily in discussions too, and apps and sandboxing, now that Alex has arrived.

I’ve spent a little time finishing off and releasing Walbottle, a small library and set of utilities I’ve been working on to implement JSON Schema, which is the equivalent of XML Schema or RELAX-NG, but for JSON files. It allows you to validate JSON instances against a schema, to validate schemas themselves and, unusually, to automatically generate parser unit tests from a schema. That way, you can automatically test json-glib–based JsonReader/JsonParser code, just by passing the JSON schema to Walbottle’s json-schema-generate utility.

It’s still a young project, but should be complete enough to be useful in testing JSON code. Please let me know of any bugs or missing features!

Tomorrow, I plan to dive back into static analysis of GObject code with Tartan.

Automatically Valgrinding code with AX_VALGRIND_CHECK

tl;dr: For `make check` with Valgrind support, add AX_VALGRIND_CHECK to configure.ac and @VALGRIND_CHECK_RULES@ to tests/Makefile.am. Use memcheck, helgrind, drd and sgcheck.

Once you’ve written a unit test suite with near-100% coverage, you can validate software with a lot more confidence than before, since any effort you put into validation is working across the whole code base.¹ It’s quite common to check for memory leaks with Valgrind’s memcheck tool — but Valgrind has several additional tools which are useful in production: helgrind, drd and sgcheck.² All of these can be run over unit tests.

How can you run a test suite under these tools? The normal method is some kind of hard-to-remember rune involving libtool, and either something about TESTS_ENVIRONMENT or some kind of LOG_COMPILER (but why would I want to compile my test logs?). For that reason, I’ve written an autoconf macro which abstracts it all a bit: AX_VALGRIND_CHECK. It adds a check-valgrind target to your makefile which handles running the `make check` tests under all four Valgrind tools of interest. Various ancillary variables (such as VALGRIND_SUPPRESSIONS_FILES or VALGRIND_FLAGS) allow the Valgrind options to be changed. `make check-valgrind` will return an error exit code if any problems are found, with the idea that the target can be used for automated testing and continuous integration.

To use it, either add a dependency on autoconf-archive to your project, or copy the ax_valgrind_check.m4 macro in-tree (and add it to EXTRA_DIST), then follow the instructions at the top of the file for adding it to configure.ac and tests/Makefile.am.
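
Concretely, the wiring amounts to a couple of lines in each file (a minimal sketch; my-project.supp is an illustrative suppressions file):

# configure.ac
AX_VALGRIND_CHECK

# tests/Makefile.am
@VALGRIND_CHECK_RULES@
VALGRIND_SUPPRESSIONS_FILES = my-project.supp
EXTRA_DIST = my-project.supp

After that, `make check-valgrind` runs the whole test suite under each Valgrind tool in turn.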

With this macro, I’ve tried to bring together existing makefile snippets which exist in various projects to produce one macro which can be reused everywhere. Of course, I’ve probably missed something – some feature or specific use case which someone has – so if anyone has any suggestions for improvement, please let me know or submit a patch to the autoconf-archive! For example, at the moment the macro requires automake’s parallel test harness, and does not support older versions of automake.

Thanks to my employer, Collabora, for enabling me to work on (hopefully useful) stuff like this!


  1. Normal caveats about the combinatorial complexity of dynamic testing apply. 

  2. Although currently the threading tools may not work with GLib threads, due to GLib’s use of futexes rather than pthreads.

Use g_set_object() to simplify (and safetyify) GObject property setters

tl;dr: Use g_set_object() to conveniently update owned object pointers in property setters (and elsewhere); see the bottom of the post for an example.

A little while ago, GLib gained a useful function called g_clear_object(), which clears an owned object pointer (a pointer which owns a reference to a GObject). GLib also just gained a function called g_set_object(), which works similarly but can either clear an object pointer or update it to point to a new object.
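
For example, g_clear_object() is handy in a dispose implementation (a sketch; MyObject and its priv->child field are hypothetical):

/* MyObject is a hypothetical GObject subclass, for illustration. */
static void
my_object_dispose (GObject *object)
{
  MyObject *self = (MyObject *) object;

  /* Unrefs priv->child if non-NULL, then sets the pointer to NULL. */
  g_clear_object (&self->priv->child);

  G_OBJECT_CLASS (my_object_parent_class)->dispose (object);
}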

Why is this valuable? It saves a few lines of code each time an object pointer is updated, which isn’t much in itself. However, one thing it gets right is the order of reference counting operations, which is a common mistake in property setters. Instead of:

/* This is bad code. */
if (object_ptr != NULL)
    g_object_unref (object_ptr);
if (new_object != NULL)
    g_object_ref (new_object);
object_ptr = new_object;

you should always do:

/* This is better code. */
if (new_object != NULL)
    g_object_ref (new_object);
if (object_ptr != NULL)
    g_object_unref (object_ptr);
object_ptr = new_object;

because otherwise, if (new_object == object_ptr) (or if the objects have some other ownership relationship) and the object only has one reference left, object_ptr will end up pointing to a finalised GObject (and g_object_ref() will be called on a finalised GObject too, which it really won’t like).

So how does g_set_object() help? We can now do:

g_set_object (&object_ptr, new_object);

which takes care of all the reference counting, and allows new_object to be NULL. &object_ptr must not be NULL. If you’re worried about performance, never fear. g_set_object() is a static inline function, so shouldn’t adversely affect your code size.
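
Conceptually, g_set_object() behaves like this sketch (the semantics, not GLib’s exact implementation):

static inline gboolean
set_object_sketch (GObject **object_ptr,
                   GObject  *new_object)
{
  if (*object_ptr == new_object)
    return FALSE;

  /* Ref the new object before unreffing the old one, as above. */
  if (new_object != NULL)
    g_object_ref (new_object);

  if (*object_ptr != NULL)
    g_object_unref (*object_ptr);

  *object_ptr = new_object;

  return TRUE;
}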

Even better, the return value of g_set_object() indicates whether the value changed, so we can conveniently base GObject::notify emissions off it:

/* This is how all GObject property setters should look in future. */
if (g_set_object (&priv->object_ptr, new_object))
    g_object_notify (self, "object-ptr");
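
Wrapped into a complete setter for a hypothetical "object-ptr" property, with the usual precondition checks, that looks like:

/* MyObject and its private struct are hypothetical, for illustration. */
static void
my_object_set_object_ptr (MyObject *self,
                          GObject  *new_object)
{
  g_return_if_fail (MY_IS_OBJECT (self));
  g_return_if_fail (new_object == NULL || G_IS_OBJECT (new_object));

  if (g_set_object (&self->priv->object_ptr, new_object))
    g_object_notify (G_OBJECT (self), "object-ptr");
}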

Hopefully this will make property setters (and other object updates) simpler in new code.

A checklist for writing pkg-config files

tl;dr: Use AX_PKG_CHECK_MODULES to split public/private dependencies; use AC_CONFIG_FILES to magically include the API version in the .pc file name.

A few tips for creating a pkg-config file which you will never need to think about maintaining again — because one of the most common problems with pkg-config files is that their dependency lists are years out of date compared to the dependencies checked for in configure.ac. See lower down for some example automake snippets.

  • Include the project’s major API version¹ in the pkg-config file name, e.g. libfoo-1.pc rather than libfoo.pc. This will allow parallel installation of two API-incompatible versions of the library if it becomes necessary in future.
  • Split private and public dependencies between Requires and Requires.private. This eliminates over-linking when dynamically linking against the project, since in that case the private dependencies are not needed. This is easily done using the AX_PKG_CHECK_MODULES macro (and perhaps using an upstream macro in future — see pkg-config bug #87154). A dependency is public when its symbols are exposed in public headers installed by your project; it is private otherwise.
  • Include useful ancillary variables, such as the paths to any utilities, directories or daemons which ship with the project. For example, glib-2.0.pc has variables giving the paths for its utilities: glib-genmarshal, gobject-query and glib-mkenums. libosinfo-1.0.pc has variables for its database directories. Ensure the variables use a variable form of ${prefix}, allowing it to be overridden when invoking pkg-config using pkg-config --define-variable=prefix=/some/other/prefix. This allows use of libraries installed in one (read only) prefix from binaries in another, while installing ancillary files (e.g. D-Bus service files) to the second prefix.
  • Substitute in the Name and Version using @PACKAGE_NAME@ and @PACKAGE_VERSION@ so they don’t fall out of sync.
  • Place the .pc.in template in the source code subdirectory for the library it’s for — so if your project produces multiple libraries (or might do in future), the .pc.in files don’t get mixed up at the top level.

Given all those suggestions, here’s a template libmy-project/my-project.pc.in file (updated to incorporate suggestions by Dan Nicholson):

prefix=@prefix@
exec_prefix=@exec_prefix@
libdir=@libdir@
includedir=@includedir@

my_project_utility=my-project-utility-binary-name
my_project_db_dir=@sysconfdir@/my-project/db

Name: @PACKAGE_NAME@
Description: Some brief but informative description
Version: @PACKAGE_VERSION@
Libs: -L${libdir} -lmy-project-@API_VERSION@
Cflags: -I${includedir}/my-project-@API_VERSION@
Requires: @AX_PACKAGE_REQUIRES@
Requires.private: @AX_PACKAGE_REQUIRES_PRIVATE@

And here are a few snippets from a template configure.ac:

# Release version
m4_define([package_version_major],[1])
m4_define([package_version_minor],[2])
m4_define([package_version_micro],[3])

# API version
m4_define([api_version],[1])

AC_INIT([my-project],
        [package_version_major.package_version_minor.package_version_micro],
        …)

# Dependencies
PKG_PROG_PKG_CONFIG
PKG_INSTALLDIR

glib_reqs=2.40
gio_reqs=2.42
gthread_reqs=2.40
nice_reqs=0.1.6

# The first list on each line is public; the second is private.
# The AX_PKG_CHECK_MODULES macro substitutes AX_PACKAGE_REQUIRES and
# AX_PACKAGE_REQUIRES_PRIVATE.
AX_PKG_CHECK_MODULES([GLIB],
                     [glib-2.0 >= $glib_reqs gio-2.0 >= $gio_reqs],
                     [gthread-2.0 >= $gthread_reqs])
AX_PKG_CHECK_MODULES([NICE],
                     [nice >= $nice_reqs],
                     [])

AC_SUBST([PACKAGE_VERSION_MAJOR],package_version_major)
AC_SUBST([PACKAGE_VERSION_MINOR],package_version_minor)
AC_SUBST([PACKAGE_VERSION_MICRO],package_version_micro)
AC_SUBST([API_VERSION],api_version)

# Output files
# Rename the template .pc file to include the API version on configure
AC_CONFIG_FILES([
libmy-project/my-project-$API_VERSION.pc:libmy-project/my-project.pc.in
…
],[],
[API_VERSION='$API_VERSION'])
AC_OUTPUT

And finally, the top-level Makefile.am:

# Install the pkg-config file; the directory is set using
# PKG_INSTALLDIR in configure.ac.
pkgconfig_DATA = libmy-project/my-project-$(API_VERSION).pc

Once that’s all built, you’ll end up with an installed my-project-1.pc file containing the following (assuming a prefix of /usr; note that by default autoconf substitutes in references to variables rather than the values themselves, so pkg-config can continue to be used with --define-variable to override the prefix):

prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

my_project_utility=my-project-utility-binary-name
my_project_db_dir=${prefix}/etc/my-project/db

Name: my-project
Description: Some brief but informative description
Version: 1.2.3
Libs: -L${libdir} -lmy-project-1
Cflags: -I${includedir}/my-project-1
Requires: glib-2.0 >= 2.40 gio-2.0 >= 2.42 nice >= 0.1.6
Requires.private: gthread-2.0 >= 2.40

All code samples in this post are released into the public domain.


  1. Assuming this is the number which will change if backwards-incompatible API/ABI changes are made. 

AM_INCLUDES doesn’t exist

tl;dr: Don’t use AM_INCLUDES in Makefiles. It does nothing. INCLUDES and AM_CPPFLAGS should be avoided in favour of my_program_name_CPPFLAGS.

Once upon a time, there was an automake variable called INCLUDES. It was used to pass flags to the preprocessor, such as -I flags to specify include paths. All was good.

Then the gentle leaders of automake decided that INCLUDES was not a very generic name, so they deprecated it in favour of the *_CPPFLAGS variables, like the target-specific my_program_name_CPPFLAGS. For those who wanted to specify preprocessor flags for all targets in their Makefile, the AM_CPPFLAGS variable was provided. Due to their kind and wise oversight, the transition between these variables (in the time of the coming of automake 1.12) was peaceful, and tranquillity continued to reign in the pastures of automakery.

And yet, some lands were troubled. Some of the citizens of automake had developed a confusion. They had taken the INCLUDES incantations of old, and mixed them with the AM_ prefix of new to create an AM_INCLUDES monster. ‘This is surely better’, they reasoned — ‘we have global preprocessor flags, but allow them to be overridden by the user due to our use of the AM_ prefix’. These citizens went ahead and used their AM_INCLUDES variable, but were confused when it failed to affect their automagic. They thought the runes were not in their favour, so copied its value into AM_CFLAGS as well, just in case.


These citizens are wrong. AM_INCLUDES does not exist. Do not use it. automake does not recognise it. You are wasting your time. It has no effect.

Use per-target CPPFLAGS (e.g. my_program_name_CPPFLAGS) to specify preprocessor flags. Use per-target CFLAGS to specify C-compiler-only flags. Do not use AM_CPPFLAGS unless you’re sure all flags in it are needed for all targets in the Makefile. Do not use INCLUDES unless you want a big deprecation warning from automake (“warning: ‘INCLUDES’ is the old name for ‘AM_CPPFLAGS’ (or ‘*_CPPFLAGS’)”); it is entirely equivalent to AM_CPPFLAGS. Do not use AM_INCLUDES at all.
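
For example, in a Makefile.am (a sketch with illustrative names; note that automake canonicalises the dashes in the target name to underscores in the variable names):

bin_PROGRAMS = my-program-name

my_program_name_SOURCES = main.c
# Preprocessor flags for this target only: include paths and defines.
my_program_name_CPPFLAGS = -I$(top_srcdir)/include -DENABLE_FOO
# C-compiler-only flags for this target.
my_program_name_CFLAGS = -fvisibility=hidden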

For the curious, passing -Wall to automake will enable a bunch of warnings which can improve your Makefiles. You can do that using the AUTOMAKE_OPTIONS variable in Makefile.am, or the AM_INIT_AUTOMAKE macro in configure.ac.


And the citizens of automake lived happily ever after.

Introduction to ICE and libnice

As part of the series of tea time talks we do within Collabora, I recently got to refresh my knowledge of STUN, TURN and ICE (the protocols for NAT traversal) and give an introductory talk on how they all fit together within the context of libnice.

Since the talk might be useful (and perhaps even interesting) to a wider audience, I’ve made it available: slides, handout and source (git). It’s under CC-BY-SA 4.0. Please leave comments if anything is unclear, incorrect, or could do with more in-depth coverage!