GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

February 08, 2017

QEMU and the qcow2 metadata checks

When choosing a disk image format for your virtual machine, one of the factors to take into consideration is its I/O performance. In this post I’ll talk a bit about the internals of qcow2 and about one of the aspects that can affect its performance under QEMU: its consistency checks.

As you probably know, qcow2 is QEMU’s native file format. The first thing that I’d like to highlight is that this format is perfectly fine in most cases and its I/O performance is comparable to that of a raw file. When it isn’t, chances are that this is due to an insufficiently large L2 cache. In one of my previous blog posts I wrote about the qcow2 L2 cache and how to tune it, so if your virtual disk is too slow, you should go there first.

I also recommend Max Reitz and Kevin Wolf’s qcow2: why (not)? talk from KVM Forum 2015, where they talk about a lot of internal details and show some performance tests.

qcow2 clusters: data and metadata

A qcow2 file is organized into units of constant size called clusters. The cluster size defaults to 64KB, but a different value can be set when creating a new image:

qemu-img create -f qcow2 -o cluster_size=128K hd.qcow2 4G
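
If you want to double-check the cluster size of an existing image, qemu-img info prints it (among other details) for qcow2 files:

qemu-img info hd.qcow2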

Clusters can contain either data or metadata. A qcow2 file grows dynamically and only allocates space when it is actually needed, so apart from the header there’s no fixed location for any of the data and metadata clusters: they can appear mixed anywhere in the file.

Here’s an example of what it looks like internally:

In this example we can see the most important types of clusters that a qcow2 file can have:

  • Header: this one contains basic information such as the virtual size of the image, the version number, and pointers to where the rest of the metadata is located, among other things.
  • Data clusters: the data that the virtual machine sees.
  • L1 and L2 tables: a two-level structure that maps the virtual disk that the guest can see to the actual location of the data clusters in the qcow2 file (see the command example after this list).
  • Refcount table and blocks: a two-level structure with a reference count for each data cluster. Internal snapshots use this: a cluster with a reference count >= 2 means that it’s used by other snapshots, and therefore any modifications require a copy-on-write operation.
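
To see this two-level mapping in action on a real image, qemu-img can report which guest ranges are allocated and where they are stored inside the file:

qemu-img map hd.qcow2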

Metadata overlap checks

In order to detect corruption when writing to qcow2 images, QEMU (since v1.7) performs several sanity checks. They verify that QEMU does not try to overwrite sections of the file that are already being used for metadata. If this happens, the image is marked as corrupted and further access is prevented.

Although in most cases these checks are innocuous, under certain scenarios they can have a negative impact on disk write performance. This depends a lot on the particular case, and I want to stress that in most scenarios it has no effect. When it does, the general rule is that you’re more likely to notice it if the storage backend is very fast or if the qcow2 image is very large.

In these cases, and if I/O performance is critical for you, you might want to consider tweaking the images a bit or disabling some of these checks, so let’s take a look at them. There are currently eight different checks. They’re named after the metadata sections that they check, and can be divided into the following categories:

  1. Checks that run in constant time. These are equally fast for all kinds of images and I don’t think they’re worth disabling.
    • main-header
    • active-l1
    • refcount-table
    • snapshot-table
  2. Checks that run in variable time but don’t need to read anything from disk.
    • refcount-block
    • active-l2
    • inactive-l1
  3. Checks that need to read data from disk. There is just one check here and it’s only needed if there are internal snapshots.
    • inactive-l2

By default all checks are enabled except for the last one (inactive-l2), because it needs to read data from disk.

Disabling the overlap checks

Checks can be enabled or disabled from the command line using the following syntax:

-drive file=hd.qcow2,overlap-check.inactive-l2=on
-drive file=hd.qcow2,overlap-check.snapshot-table=off

It’s also possible to select the group of checks that you want to enable using the following syntax:

-drive file=hd.qcow2,overlap-check.template=none
-drive file=hd.qcow2,overlap-check.template=constant
-drive file=hd.qcow2,overlap-check.template=cached
-drive file=hd.qcow2,overlap-check.template=all

Here, none means that no tests are enabled, constant enables all tests from group 1, cached enables all tests from groups 1 and 2, and all enables all of them.
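
If I read the QEMU options correctly, a template can also be combined with the individual switches, with the individual setting taking precedence; something like this should enable the cached group but turn one of its checks back off (please verify against your QEMU version):

-drive file=hd.qcow2,overlap-check.template=cached,overlap-check.refcount-block=off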

As I explained in the previous section, if you’re worried about I/O performance then the checks that are probably worth evaluating are refcount-block, active-l2 and inactive-l1. I’m not counting inactive-l2 because it’s off by default. Let’s look at the other three:

  • inactive-l1: This check runs in variable time because it depends on the number of internal snapshots in the qcow2 image. However, its performance impact is likely to be negligible in all cases, so I don’t think it’s worth bothering with.
  • active-l2: This check depends on the virtual size of the image, and on the percentage that has already been allocated. It might have some impact if the image is very large (several hundred GBs or more). In that case one way to deal with it is to create an image with a larger cluster size (see the example after this list). This also has the nice side effect of reducing the amount of memory needed for the L2 cache.
  • refcount-block: This check depends on the actual size of the qcow2 file and is independent of its virtual size. This check is relatively expensive even for small images, so if you notice performance problems chances are that they are due to this one. The good news is that we have been working on optimizing it, so if it’s slowing down your VMs the problem might go away completely in QEMU 2.9.
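
As an example of the cluster size tweak suggested for active-l2 above, this is the qemu-img invocation from earlier with the largest cluster size that qcow2 allows (the image name and virtual size are just placeholders):

qemu-img create -f qcow2 -o cluster_size=2M hd.qcow2 500G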

Conclusion

The qcow2 consistency checks are useful to detect data corruption, but they can affect write performance.

If you’re unsure and you want to check it quickly, open an image with overlap-check.template=none and see for yourself, but remember again that this will only affect write operations. To obtain more reliable results you should also open the image with cache=none in order to perform direct I/O and bypass the page cache. I’ve seen performance increases of 50% and more, but whether you’ll see them depends a lot on your setup. In many cases you won’t notice any difference.
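
For such a quick experiment, both settings can go on the same -drive line; this sketch reuses the hd.qcow2 image from the earlier examples:

-drive file=hd.qcow2,overlap-check.template=none,cache=none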

I hope this post was useful to learn a bit more about the qcow2 format. There are other things that can help QEMU perform better, and I’ll probably come back to them in future posts, so stay tuned!

Acknowledgments

My work in QEMU is sponsored by Outscale and has been made possible by Igalia and the help of the rest of the QEMU development team.

An Update on WebKit Security Updates

One year ago, I wrote a blog post about WebKit security updates that attracted a fair amount of attention at the time. For a full understanding of the situation, you really have to read the whole thing, but the most important point was that, while WebKitGTK+ — one of the two WebKit ports present in Linux distributions — was regularly releasing upstream security updates, most Linux distributions were ignoring the updates, leaving users vulnerable to various security bugs, mainly of the remote code execution variety. At the time of that blog post, only Arch Linux and Fedora were regularly releasing WebKitGTK+ updates, and Fedora had only very recently begun doing so comprehensively.

Progress report!

So how have things changed in the past year? The best way to see this is to look at the versions of WebKitGTK+ in currently-supported distributions. The latest version of WebKitGTK+ is 2.14.3, which fixes 13 known security issues present in 2.14.2. Do users of the most popular Linux operating systems have the fixes?

  • Fedora users are good. Both Fedora 24 and Fedora 25 have the latest version, 2.14.3.
  • If you use Arch, you know you always have the latest stuff.
  • Ubuntu users rejoice: 2.14.3 updates have been released to users of both Ubuntu 16.04 and 16.10. I’m very pleased that Ubuntu has decided to take my advice and make an exception to its usual stable release update policy to ensure its users have a secure version of WebKit. I can’t give Ubuntu an A grade here because the updates tend to lag behind upstream by several months, but slow updates are much better than no updates, so this is undoubtedly a huge improvement. (Anyway, it’s hardly a bad idea to be cautious when releasing a big update with high regression potential, as is unfortunately the case with even stable WebKit updates.) But if you use the still-supported Ubuntu 14.04 or 12.04, be aware that these versions of Ubuntu cannot ever update WebKit, as it would require a switch to WebKit2, a major API change.
  • Debian does not update WebKit as a matter of policy. The latest release, Debian 8.7, is still shipping WebKitGTK+ 2.6.2. I count 184 known vulnerabilities affecting it, though that’s an overcount as we did not exclude some Mac-specific security issues from the 2015 security advisories. (Shipping ancient WebKit is not just a security problem, but a user experience problem too. Actually attempting to browse the web with WebKitGTK+ 2.6.2 is quite painful due to bugs that were fixed years ago, so please don’t try to pretend it’s “stable.”) Note that a secure version of WebKitGTK+ is available for those in the know via the backports repository, but this does no good for users who trust Debian to provide them with security updates by default without requiring difficult configuration. Debian testing users also currently have the latest 2.14.3, but you will need to switch to Debian unstable to get security updates for the foreseeable future, as testing is about to freeze.
  • For openSUSE users, only Tumbleweed has the latest version of WebKit. The current stable release, Leap 42.2, ships with WebKitGTK+ 2.12.5, which is coincidentally affected by exactly 42 known vulnerabilities. (I swear I am not making this up.) The previous stable release, Leap 42.1, originally released with WebKitGTK+ 2.8.5 and later updated to 2.10.7, but never past that. It is affected by 65 known vulnerabilities. (Note: I have to disclose that I told openSUSE I’d try to help out with that update, but never actually did. Sorry!) openSUSE has it a bit harder than other distros because it has decided to use SUSE Linux Enterprise as the source for its GCC package, meaning it’s stuck on GCC 4.8 for the foreseeable future, while WebKit requires GCC 4.9. Still, this is only a build-time requirement; it’s not as if it would be impossible to build with Clang instead, or a custom version of GCC. I would expect WebKit updates to be provided to both currently-supported Leap releases.
  • Gentoo has the latest version of WebKitGTK+, but only in testing. The latest version marked stable is 2.12.5, so this is a serious problem if you’re following Gentoo’s stable channel.
  • Mageia has been updating WebKit and released a couple security advisories for Mageia 5, but it seems to be stuck on 2.12.4, which is disappointing, especially since 2.12.5 is a fairly small update. The problem here does not seem to be lack of upstream release monitoring, but rather lack of manpower to prepare the updates, which is a typical problem for small distros.
  • The enterprise distros from Red Hat, Oracle, and SUSE do not provide any WebKit security updates. They suffer from the same problem as Ubuntu’s old LTS releases: the WebKit2 API change makes updating impossible. See my previous blog post if you want to learn more about that. (SUSE actually does have WebKitGTK+ 2.12.5 as well, but… yeah, 42.)

So results are clearly mixed. Some distros are doing well, others are struggling, and Debian is Debian. Still, the situation on the whole seems to be much better than it was one year ago. Most importantly, Ubuntu’s decision to start updating WebKitGTK+ means the vast majority of Linux users are now receiving updates. Thanks Ubuntu!

To arrive at the above vulnerability totals, I just counted up the CVEs listed in WebKitGTK+ Security Advisories, so please do double-check my counting if you want. The upstream security advisories themselves are worth mentioning, as we have only been releasing these for two years now, and the first year was pretty rough when we lost our original security contact at Apple shortly after releasing the first advisory: you can see there were only two advisories in all of 2015, and the second one was huge as a result of that. But 2016 seems to have gone decently well. WebKitGTK+ has normally been releasing most security fixes even before Apple does, though the actual advisories and a few remaining fixes normally lag behind Apple by roughly a month or so. Big thanks to my colleagues at Igalia who handle this work.

Challenges ahead

There are still some pretty big problems remaining!

First of all, the distributions that still aren’t releasing regular WebKit updates should start doing so.

Next, we have to do something about QtWebKit, the other big WebKit port for Linux, which stopped receiving security updates in 2013 after the Qt developers decided to abandon the project. The good news is that Konstantin Tokarev has been working on a QtWebKit fork based on WebKitGTK+ 2.12, which is almost (but not quite yet) ready for use in distributions. I hope we are able to switch to use his project as the new upstream for QtWebKit in Fedora 26, and I’d encourage other distros to follow along. WebKitGTK+ 2.12 does still suffer from those 42 vulnerabilities, but this will be a big improvement nevertheless and an important stepping stone for a subsequent release based on the latest version of WebKitGTK+. (Yes, QtWebKit will be a downstream of WebKitGTK+. No, it will not use GTK+. It will work out fine!)

It’s also time to get rid of the old WebKitGTK+ 2.4 (“WebKit1”), which all distributions currently parallel-install alongside modern WebKitGTK+ (“WebKit2”). It’s very unfortunate that a large number of applications still depend on WebKitGTK+ 2.4 — I count 41 such packages in Fedora — but this old version of WebKit is affected by over 200 known vulnerabilities and really has to go sooner rather than later. We’ve agreed to remove WebKitGTK+ 2.4 and its dependencies from Fedora rawhide right after Fedora 26 is branched next month, so they will no longer be present in Fedora 27 (targeted for release in November). That’s bad for you if you use any of the affected applications, but fortunately most of the remaining unported applications are not very important or well-known; the most notable ones that are unlikely to be ported in time are GnuCash (which won’t make our deadline) and Empathy (which is ported in git master, but is not currently in a releasable state; help wanted!). I encourage other distributions to follow our lead here in setting a deadline for removal. The alternative is to leave WebKitGTK+ 2.4 around until no more applications are using it. Distros that opt for this approach should be prepared to be stuck with it for the next 10 years or so, as the remaining applications are realistically not likely to be ported so long as zombie WebKitGTK+ 2.4 remains available.

These are surmountable problems, but they require action by downstream distributions. No doubt some distributions will be more successful than others, but hopefully many distributions will be able to fix these problems in 2017. We shall see!

On Epiphany Security Updates and Stable Branches

One of the advantages of maintaining a web browser based on WebKit, like Epiphany, is that the vast majority of complexity is contained within WebKit. Epiphany itself doesn’t have any code for HTML parsing or rendering, multimedia playback, or JavaScript execution, or anything else that’s actually related to displaying web pages: all of the hard stuff is handled by WebKit. That means almost all of the security problems exist in WebKit’s code and not Epiphany’s code. While WebKit has been affected by over 200 CVEs in the past two years, and those issues do affect Epiphany, I believe nobody has reported a security issue in Epiphany’s code during that time. I’m sure a large part of that is simply because only the bad guys are looking, but the attack surface really is much, much smaller than that of WebKit. To my knowledge, the last time we fixed a security issue that affected a stable version of Epiphany was 2014.

Well, that streak has unfortunately ended; you need to make sure to update to Epiphany 3.22.6, 3.20.7, or 3.18.11 as soon as possible (or Epiphany 3.23.5 if you’re testing our unstable series). If your distribution is not already preparing an update, insist that it do so. I’m not planning to discuss the embarrassing issue itself here — you can check the bug report if you’re interested — but rather to explain why I made new releases on three different branches. That’s quite unlike how we handle WebKitGTK+ updates! Distributions must always update to the very latest version of WebKitGTK+, as it is not practical to backport dozens of WebKit security fixes to older versions of WebKit. This is rarely a problem, because WebKitGTK+ has a strict policy that dictates when it’s acceptable to require new versions of runtime dependencies, designed to ensure roughly three years of WebKit updates without the need to upgrade any of its dependencies. But new major versions of Epiphany are usually incompatible with older releases of system libraries like GTK+, so it’s not practical or expected for distributions to update to new major versions.

My current working policy is to support three stable branches at once: the latest stable release (currently Epiphany 3.22), the previous stable release (currently Epiphany 3.20), and an LTS branch defined by whatever’s currently in Ubuntu LTS and elementary OS (currently Epiphany 3.18). It was nice of elementary OS to make Epiphany its default web browser, and I would hardly want to make it difficult for its users to receive updates.

Three branches can be annoying at times, and it’s a lot more than is typical for a GNOME application, but a web browser is not a typical application. For better or for worse, the majority of our users are going to be stuck on Epiphany 3.18 for a long time, and it would be a shame to leave them completely without updates. That said, the 3.18 and 3.20 branches are very stable and only getting bugfixes and occasional releases for the most serious issues. In contrast, I try to backport all significant bugfixes to the 3.22 branch and do a new release every month or thereabouts.

So that’s why I just released another update for Epiphany 3.18, which was originally released in September 2015. Compare this to the long-term support policies of Chrome (which supports only the latest version of the browser, and only for six weeks) or Firefox (which provides nine months of support for an ESR release), and I think we compare quite favorably. (A stable WebKit series like 2.14 is only supported for six months, but that’s comparable to Firefox.) Not bad?

February 07, 2017

FOSDEM 2017 Day 3: Talks & Chats


Silent morning at the booths in building K (CC-BY-SA 3.0).

Today I got up early, going with Andreas to the venue and arriving at 8.30 AM. He was going there to open the Open Source Design room; I was going there to open the GNOME booth. After my shift I decided to wander around to collect stickers and speak to various projects at their booths.


Emiliano at the LibreOffice booth (CC-BY-SA 3.0).

LibreOffice had a stand right next to ours, so I decided to stop by. They had just released version 5.3, which among other new features includes a renewed user interface. LibreOffice is also making progress on integrating with GTK+3, although I unfortunately missed the talk they had about that the day before. In recent years a new flavor of LibreOffice has also arrived, namely LibreOffice Online. This project makes it possible to deploy your own collaborative document-editing infrastructure.


Team Coala at FOSDEM (CC-BY-SA 3.0).

At the Coala booth, I met Lasse, whom I also know via the GNOME community. Coala is a kind of meta code-analysis tool. Currently they are reworking its internals, ultimately aiming to simplify how the code analysis is performed.


Jobs corner located in Building H. (CC-BY-SA 3.0).

My experience from all three FOSDEM conferences I have attended is that they are a good place to network and meet new faces. One thing I don’t recall seeing at previous FOSDEMs was job postings. There was a very long wall and table dedicated to letting individuals and organizations advertise jobs, covering everything from part-time system administrators and DevOps to full-time software engineers and project managers. Practical!


Stickers! (CC-BY-SA 3.0).

…and that was the end of my sticker-collecting journey. Now I’ve got some, ready to be put on the dorm door at home. :-)

Talks

The rest of the day went to watching talks. In many places there were very long lines of people trying to get into the rooms.


People standing in line to the “Decentralized Internet” room (CC-BY-SA 3.0).

In the end I went to the open source design room, in which I stayed for the rest of the day. This being an open source conference, many of the talks at FOSDEM are focused on software engineering. The open source design room is the exception. It’s a small room, but there was good space available and I could sit down and do a little work in the meantime.


The Open Source Design room (CC-BY-SA 3.0).

What I really like about this room is that it is arranged by the open source design community. It feels very unifying that the room directly represents a community, not just a topic. Open source design has its own repository with assets, its own forum, etc., and it represents designers who do their work in many different open source projects. Many of the talks reflected on the methodology we use to do design in open source, and in particular on how we can approach user research to inform ourselves when designing. A speaker named Miroslav Mazel spoke about the challenges in conducting user research with local volunteers. One particular difficulty he explained is keeping up the volunteers’ interest in conducting it. Andreas was also there to speak about his experience conducting user interviews to inform his work on GNOME Maps. Including the user in the design process helped to recognize new use cases when designing transit routing in GNOME Maps.

Andreas answering questions during his speech “Interviews as user research” (CC-BY-SA 3.0).

Matthias Clasen and Emel spoke about the design of GNOME Recipes, a new application they are working on for GNOME’s upcoming 20th Anniversary. I think the application looks very promising and am definitely interested in submitting some more recipes!


Emel explaining the design of GNOME Recipes (CC-BY-SA 3.0).

Finally Jan, a designer on NextCloud, spoke about getting more designers involved in open source. IT is, after all, not only about software engineering; the technology has to be used by people. So design matters, and there are many projects in dire need of more designers. The open source design room concluded with project pitches: developers of various open source projects each had three minutes to advertise their project and make a call for design participation. I really liked this initiative! It’s hard to get started in many open source projects, especially if your role is not software engineering. I hope all the developers who stood up and advertised their projects succeeded in reaching out to interested designers. :-)

Home

Monday, we left Belgium. Although I left with an upset stomach and a cold, all in all I did have a really good time. Maybe we will meet again at Open Source Days 2017, foss-north 2017 or GUADEC 2017?

This week in GTK+ – 33

The past two weeks we’ve had DevConf and FOSDEM back to back, so the development slowed down a bit. Expect it to pick up again, now that we’re close to the GNOME 3.24 release.

In these last two weeks, the master branch of GTK+ has seen 34 commits, with 20973 lines added and 21593 lines removed.

Planning and status
Notable changes

On the master branch:

  • Timm Bäder removed gtk_widget_class_list_style_properties() in the continuing effort to deprecate the style properties inside GtkWidget and replace them with CSS properties
  • Timm also moved some of the state used only by GtkToggleButton subclasses into those types
  • William Hua improved the Mir GDK backend for proper positioning of menus
Bugs fixed
  • 777547 Notebook arrow icon wrong color after closing final tab
  • 773686 Software when launched shows in dash with wrong icon, name and menu
  • 775864 getting-started: typo tie->the
  • 778009 menu drawn on top of menubar in Fedora
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

Maps at FOSDEM

I went to FOSDEM again this year, my fourth year running. I go with a great group of friends and it is starting to become quite the tradition.

Maps meeting

FOSDEM lines up pretty well with the GNOME release cycle, in that after the conference we have about a month of time to get the last stuff in before the next 6-month development cycle comes to an end. With that in mind we had a quick and informal Maps meeting on what the immediate priorities were for the release and what we wanted to do after that.

Transit routing

We want to merge Marcus’ transit-routing branch this cycle. This will not add anything if there is no OpenTripPlanner server available. But our plan is to be able to add one to our service file, so this can be turned on if we get some sponsorship or in any other way manage to solve the infrastructure needs. This will also be a way of disabling the functionality if we lose infrastructure, as happened with our MapQuest tiles previously.

Geocoding / search as you type

We now have a Mapbox account. We could use the Mapbox geocoding API instead of Nominatim, which we currently use. And with that we could achieve search-as-you-type functionality. The timing is right for a switch like this, since Collabora recently landed a patch bomb on geocode-glib to make it handle custom backends through an interface. So we could write a Mapbox backend in Maps.

I did some prototyping with this during some FOSDEM talks and the (buggy) result can be seen in the video below.


One issue with using the Mapbox geocoding service, which I do not yet know if we can solve, is that there does not appear to be a link between the id you get for a resulting place and the OpenStreetMap id. This makes it really hard for us to support editing the nodes you find.

Tile styles

Also, since we have a Mapbox account it would be possible for us to make our own styles, for instance a high-contrast style, a custom print style or a general GNOME style. This is a daunting task, but if anyone feels up to it, please let us know.

Mapbox GL Native

Thiago Santos from Mapbox held a talk about Mapbox GL Native, which is a hardware-accelerated map rendering engine. It is written in C++14 and has recently been ported to Qt. Thiago talked about what is needed to port Mapbox GL Native to new platforms, and specifically called out GTK+ and GNOME Maps, saying that it should be possible to make Mapbox GL Native work with our infrastructure.

Mapbox has written a blog post outlining what needs to be true about a platform for Mapbox GL Native to be ported to it. Porting Mapbox GL Native to GLib land might be a nice GSoC or Outreachy project for GNOME/GTK+.


Stricter JSON parsing with Haskell and Aeson

I’ve been having fun recently, writing a RESTful service using Haskell and Servant. I did run into a problem that I couldn’t easily find a solution to on the magical bounty of knowledge that is the Internet, so I thought I’d share my findings and solution.

While writing this service (and practically any Haskell code), step 1 is of course defining our core types. Our REST endpoint is basically a CRUD app which exchanges these with the outside world as JSON objects. Doing this is delightfully simple:

{-# LANGUAGE DeriveGeneric #-}

import Data.Aeson
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON = genericParseJSON defaultOptions

That’s all it takes to get the basic type up with free serialization using Aeson and Haskell Generics. A few more lines hook up GET and POST handlers, we instantiate the server using warp, and we’re good to go. All standard stuff, right out of the Servant tutorial.
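
As a quick sanity check of the generic instances, here is roughly what decoding valid input looks like in GHCi (a sketch that assumes Job has only the two fields shown above):

ghci> :set -XOverloadedStrings
ghci> eitherDecode "{\"jobInputUrl\": \"http://example.com\", \"jobPriority\": 10}" :: Either String Job
Right (Job {jobInputUrl = "http://example.com", jobPriority = 10})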

The POST request accepts a new object in the form of a JSON object, which is then used to create the corresponding object on the server. Standard operating procedure again, as far as RESTful APIs go.

The nice part about doing it like this is that the input is automatically validated based on types. So input like:

{
  "jobInputUrl": 123, // should have been a string
  "jobPriority": 123
}

will result in:

Error in $: expected String, encountered Number

However, as this nice tour of how Aeson works demonstrates, if the input has keys that we don’t recognise, no error will be raised:

{
  "jobInputUrl": "http://arunraghavan.net",
  "jobPriority": 100,
  "junkField": "junkValue"
}

This behaviour is undesirable in use-cases such as mine — if the client is sending fields we don’t understand, I’d like for the server to signal an error so the underlying problem can be caught early.

As it turns out, making the JSON parsing stricter so that it catches extraneous fields is just a little more involved. I didn’t find this described in any single place on the Internet, so here’s the best I could do:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric      #-}

import Data.Aeson
import Data.Data
import Data.HashMap.Strict (keys)  -- keys of the parsed JSON object
import Data.List (sort)
import Data.Text (unpack)
import GHC.Generics

data Job = Job { jobInputUrl :: String
               , jobPriority :: Int
               , ...
               } deriving (Data, Eq, Generic, Show)

instance ToJSON Job where
  toJSON = genericToJSON defaultOptions

instance FromJSON Job where
  parseJSON json = do
    job <- genericParseJSON defaultOptions json
    if keysMatchRecords json job
    then
      return job
    else
      fail "extraneous keys in input"
    where
      -- Make sure the set of JSON object keys is exactly the same as the fields in our object
      keysMatchRecords (Object o) d =
        let
          objKeys   = sort . fmap unpack . keys
          recFields = sort . fmap (fieldLabelModifier defaultOptions) . constrFields . toConstr
        in
          objKeys o == recFields d
      keysMatchRecords _ _          = False

The idea is quite straightforward, and likely very easy to make generic. The Data.Data module lets us extract the constructor for the Job type, and the list of fields in that constructor. We just make sure that’s an exact match for the list of keys in the JSON object we parsed, and that’s it.
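
To see the stricter instance in action, here is roughly how the junk-field input from earlier gets rejected in GHCi (again a sketch; aeson prefixes parse failures with the JSON path, hence the "Error in $:" part):

ghci> eitherDecode "{\"jobInputUrl\": \"http://arunraghavan.net\", \"jobPriority\": 100, \"junkField\": \"junkValue\"}" :: Either String Job
Left "Error in $: extraneous keys in input"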

Of course, I’m quite new to the Haskell world so it’s likely there are better ways to do this. Feel free to drop a comment with suggestions! In the mean time, maybe this will be useful to others facing a similar problem.

Update: I’ve fixed parseJSON to properly use fieldLabelModifier from the default options, so that comparison actually works when you’re not using Aeson‘s default options. Thanks to /u/tathougies for catching that.

I’m also hoping to rewrite this in generic form using Generics, so watch this space for more updates.

February 06, 2017

Wilber week 2017: our report

ZeMarmot reached Barcelona airport!

Last week, the core GIMP team met for Wilber Week, a week-long meeting to work on the GIMP 2.10 release and discuss the future of GIMP. The meeting place was an art residency in the countryside, ~50km from Barcelona, Spain, with pretty much nothing but an internet connection and a fireplace for heating. Of course, both Aryeom and I were part of this hacking week. I personally think this has been a very exciting and productive time. Here is our personal report (it does not cover everyone’s results, only the parts we were involved in).

Software Hacking, by Jehan

GIMP on Flatpak

I’ve wanted to work on an official Flatpak build for at least 6 months and did some early tests back in September, but I could finally devote full time to it only this week. The build is feature-complete, or nearly so, since some features are still missing in Flatpak itself. This was not the case for the original nightly builds of GIMP, used as tests by Flatpak’s main developer back when the project was still called xdg-app; those incomplete builds also seem to have been unavailable for a few months now.

I’ll talk more on this later in a dedicated post, detailing what is there or not, and why, with feedback on the Flatpak project.
Bottom line: GIMP will have an official Flatpak, at least starting with GIMP 2.10!

“Heavy coding and arting going on at #WilberWeek” (photo by Mitch, GIMP maintainer)

Working on the help system, Windows build, and more…

I’ve also worked in parallel on some other topics. For instance I’ve made a new Windows build of GIMP to test a few bugs (with my cross-build tool, crossroad, which I hadn’t used for a few months!), fixed a few bugs here and there, and also spent a good amount of time working on improving language detection for the help system (in particular some broken cases when you don’t have exactly the same interface language as the help you downloaded, since we don’t have documentation for as many languages as we have GUI translations). This part is mostly not merged into our code yet because it is unfinished, but it should be soon.
All in all, that was 26 commits in GIMP (and 1 minor commit in babl) last week, and a lot more things started.

Art hacking, by Aryeom

Aryeom, ZeMarmot director, contributed a lot of smiles (as always), art and design. Since Mitch forgot our usual “Wilber Flag”, she quickly scribbled one on a big sheet of paper (see in video).

Aryeom drawing Wilber (photo by Schumaml)
Wilber Flag, by Aryeom

Apart from playing with Wilber stamps, created by Antenne Springborn, Aryeom also spent many hours discussing t-shirt and patch designs with Simon Budig. Here is one of her nice attempts for a very classy outlined-Wilber design:

Outlined-Wilber design by Aryeom

Funny story: she chose as a base a font called Montserrat, without realizing that the region we were in at the time was called Montserrat as well. Total coincidence!

She has also been working on some missing icons in GIMP, for instance the Import/Export preferences icon.

And with time permitting, she scribbled various drawings on paper, because digital painting doesn’t mean you should forget analog techniques, right?

Social hacking: interviews and merchandise

Developer interviews

I have been wanting to bring a little more life to our communication ever since we got a new website for GIMP. We already produce news more regularly; I wish we had even more. I also think we should extend to community news. So if you’ve got cool events around the world involving GIMP, do not hesitate to tell us about them. We may be able to turn them into gimp.org news items when time permits.

Something else I wanted was to show the people behind GIMP: developers and contributors, but also the artists, designers and other creators making use of GIMP as a tool in their daily creative process. I have talked about these interviews for a few months now, and Wilber Week was my first attempt to make them a reality. I interviewed Mitch, GIMP maintainer; Pippin, GEGL maintainer; Schumaml, GIMP administrator; Simon, a very early GIMP developer; and Rishi, GNOME Photos maintainer and GEGL contributor.
All these interviews will soon be featured on gimp.org!

And that’s only a start! I am planning on interviewing even more contributors (developers and non-developers) and also artists. 🙂

Merchandising

We regularly have requests about t-shirts or other merchandising featuring Wilber/GIMP. So we sat down and discussed what exactly GIMP’s official position on this topic should be. As you know, I, personally, am all for Libre Art, so this was my stance. And I am happy that we are currently willing to be quite liberal.

Yet we have a lot of values, and that was our main concern: how nice is your design? Is your merchandising using good material? Is it produced with ecologically-conscious techniques? Do you give back to the community?… So many questions, and this is why Simon Budig will work on a ruleset defining what GIMP merchandising we will “endorse”. Endorsement from the GIMP project will mean that we will feature your selling page link on gimp.org, and also that you will be allowed to feature some “endorsed by GIMP” text or logo on your own page. I’ve been quite inspired by the system which Nina Paley uses for the Sita Sings the Blues movie.

Well, that’s the current status, but don’t take it as an official position; wait for official news or a page on gimp.org (as a general rule, nothing I write is in any way an official GIMP statement unless confirmed on the main website by text validated by peers).

Release hacking!

The one you’ve all been waiting for, so I kept it for the end, or close: what about GIMP 2.10 release? We finally decided that it is time to get 2.10 going. We still have a few things that we absolutely need to fix before the release, but the main decision is that we should stop being blocked by unfinished cool features.

We have got many very awesome features which are “nearly there”, but mostly untouched for years. Usually it means that a feature globally works but is either extremely slow (like the Seamless Clone or n-point deformation tools) or much too unstable (sometimes to the point of crashing), often also with an unfinished GUI…
We will have to do a pass through our feature list and simply disable whatever is deemed non-releasable. The code will still be there for anyone to fix, but we just can’t release half-finished unstable features. Sorry.
The good news is that this suddenly divides our blocker list by 10 or so! And that should make GIMP 2.10 come along pretty soon.

But so what of all these cool features? Will we have to wait until GIMP 3 now? Not necessarily! We decided to relax the release rules, which come from a time when all free software released major versions with new features and minor versions with bug fixes only (some kind of semantic versioning applied to end-user software). So now, if any cool new feature comes along or if the currently deactivated features get finished, we are willing to make minor releases with them! Yes, you read that right. This makes things much more exciting for developers, since it means you won’t have to wait for years to see your changes in GIMP. It also makes our contribution process much more robust against the issue of unfinished patches being dropped. Of course the libgimp API (used by plugins) still stays stable. Change does not mean breaking stability!
This was also summed-up in an official gimp.org news recently.

I am so happy about this because I have been pushing for this change in our release process for years. Actually, the first time I proposed it was at Libre Graphics Meeting 2014 in Leipzig (as I explained in my report back then). I call it a rolling release, where we can release new stuff very regularly, even if just a little. This time though, the topic was brought up by Mitch himself.

People hacking

The conclusion of this week is that it was very nice. As Simon Budig put it in his interview: I mostly stay for the people. I think this is the same for us, and these kinds of social events are the proof of it. The GIMP project is — before all — made of people, and not just any people: nice people! Such an event is a good occasion to meet physically from time to time, and not just with pixels and bits exchanged through the internet.
We also spent a few hours visiting Barcelona, in particular the Sagrada Familia, and doing a few hikes in Montserrat.

Awesome panorama shot featuring several members of GIMP and GEGL (photo by Aryeom)

Financial hacking: ZeMarmot

As a conclusion, we remind you that ZeMarmot is the way for me to work full-time on GIMP software development! We could do nearly as much every week if our project had the funding that allowed us to sustain ourselves while hacking Free Software. So if you wish to see GIMP released faster and with many cool features, don’t hesitate to click our Patreon link (for USD funding) or the Tipeee one (EUR funding).

See you soon!

DevConf.cz 2017

Another edition of DevConf.cz took place last week. It was already the second edition I didn’t organize. This year, I was involved in the organization even less, just helping with the program and serving as a session chair for one day. So I could enjoy the conference more than ever before.

DevConf.cz is still growing. This year, we had over 1700 registrations and ~1600 people actually showed up. This time we also know for sure, because it was the first edition where we did registration and check-in. DevConf.cz is growing into a smaller FOSDEM with more focus on open source enterprise technologies, and I think it even covers this area better than FOSDEM. The number of talks and workshops was also a bit higher, I think (200-250).

The opening keynote was pretty interesting. Tim Burke, the VP of Red Hat, announced a focus on integration of different Red Hat products and this year he showed it had actual results. People could see a demo of using different Red Hat products from hardware provisioning to deploying an app to OpenShift from JBoss Developer Studio. I hope that we as the desktop team will be able to contribute to this. Fedora Workstation is a great OS for developers and it should be the best OS for developers that want to develop on Red Hat platforms. I’d love us to get to the point where starting to develop with Red Hat technologies is just a matter of a couple of clicks/commands.

Another highlight of the conference was Hans de Goede’s talk “Making Fedora run better on laptops”. Hans announced a new team which is part of the desktop team and which will work on better hardware support in Fedora and RHEL (with the focus on laptops). We will finally have laptop models which will be officially supported!

The desktop track took place on Sunday. I session-chaired it, so I was more or less obliged to watch it all 🙂 Matthias Clasen prepared a very good presentation on Flatpak. His talk, and in fact the whole track, was interrupted when the projector system broke down. Unfortunately it was a failure in one of the main hardware components which couldn’t be fixed immediately. Matthias had to carry on without the projector, and I must say that despite all the difficulties he did very well and there were a lot of questions. Meanwhile we managed to get a backup room where we moved the track once Matthias’ talk was over. Unfortunately the room was much smaller and a bit hidden, which might have had an impact on attendance. So not that many people had an opportunity to watch another interesting talk – “Fedora Workstation – removing obstacles to success” by Christian Schaller, who outlined some of our plans for the official Fedora desktop edition.

The weather during the conference was extraordinarily cold and a new term – devconflu – was invented. But I really enjoyed it, just had to give up FOSDEM for it. I was not up for another DevConf.cz+Red Hat internal meetings+FOSDEM this year.

BTW all the talk recordings are already online. Check out the DevConf.cz YouTube channel.


2017-02-06 Sunday.

  • Off to the Beta Co-working; had our own room this year - to avoid distracting others; good to have so many hackers in one place. Pursued by admin.

Introducing BuildStream

Greetings fellow Gnomies :)

At Codethink over the past few months we’ve been revisiting our approach to assembly of whole-stack systems, particularly for embedded Linux systems and custom GNOME based systems.

We’ve taken inspiration, lessons and use-cases from various projects including OBS, Reproducible Builds, Yocto, Baserock, buildroot, Aboriginal, GNOME Continuous, JHBuild, Flatpak Builder and Android repo.

The result of our latest work is a new project, BuildStream, which aims initially to satisfy clear requirements from GNOME and Baserock, and grow from there. BuildStream uses some key GNOME plumbing (OSTree, bubblewrap) combined with declarative build-recipe description to provide sandboxed, repeatable builds of GNOME projects, while maintaining the flexibility and speed required by GNOME developers.

But before talking about BuildStream, let’s go over what this can mean for GNOME in 2017.

Centralization of build metadata

Currently we build GNOME in various ways, including JHBuild XML, Flatpak JSON for the GNOME Runtime and SDK, and GNOME Continuous JSON for CI.

We hope to centralize all of this so that the GNOME release team need only maintain one single set of core module metadata in one repository in the same declarative YAML format.

To this end, we will soon be maintaining a side branch of the GNOME release modulesets so people can try this out early.

GNOME Developer Experience

JHBuild was certainly a huge improvement over the absolutely nothing that we had in place before it, but it is generally unreliable due to its reliance on host tooling and dependencies.

  • Newcomers can have a hard time getting off the ground and making sure they have satisfied the system dependencies.
  • Builds are not easily repeatable; you cannot easily build GNOME 3.6 today with a modern set of dependencies.
  • Not easy to test core GNOME components like gnome-session or the gnome-initial-setup tool.

BuildStream nips these problems in the bud with an entirely no-host-tooling policy; in fact, you can potentially build all of GNOME on your computer without ever installing gcc. Instead, GNOME will be built on top of a deterministic runtime environment which closely resembles the freedesktop-sdk-images Flatpak runtime but will also include the minimal requirements for booting the results in a VM.

Building in the Swarm

BuildStream supports artifact cache sharing so that authenticated users may upload successful build results to share with their peers. I doubt that we’ll want to share all artifacts between random users, but having GNOME Continuous upload to a common artifact cache will alleviate the pain of webkit rebuilds (unless you are hacking on webkit of course).

Flatpak / Flathub support

BuildStream will also be available as an alternative to flatpak-builder.

We will be providing an easy migration path and conversion script for Flatpak JSON which should be good enough for most if not all Flatpak app builds.

As the Flathub project develops, we will also work towards enabling submission of BuildStream metadata as an alternative to the Flatpak Builder JSON.

About BuildStream

Unlike many existing build systems, BuildStream treats building and distribution as separate problem spaces. Once you have built a stack in BuildStream, it should be trivial enough to deploy it as rpms, debian packages, a tarball/ostree SDK sysroot, as a flatpak, or as a bootable filesystem image which can be flashed to hardware or booted in a VM.

Our view is that build instructions as structured metadata used to describe modules and how they integrate together is a valuable body of work on its own. As such we should be able to apply that same body of work reliably to a variety of tasks – the BuildStream approach aims to prove this view while also offering a clean and simple user experience.

BuildStream is written in Python 3, has fairly good test coverage at this stage and is quite well documented.

BuildStream works well right now but still lacks some important features. Expect some churn over the following months before we reach a stable release and become a viable alternative for developing GNOME on your laptop/desktop.

Dependencies

Note that for the time being the OSTree requirement may be too recent for many users running currently stable distros (e.g. Debian Jessie). This is because we use the OSTree gobject introspection bindings, which require a version from August 2016. Given this hard requirement, it made little sense to include special-case support for older Python versions.

However, with that said, if this transitional period is too painful, we may decide to lower the Python requirement and just use the OSTree command line interface instead.

Build Pipelines

The BuildStream design in a nutshell is to have one abstract core, which provides the mechanics for sandboxing build environments (currently using bubblewrap as our default sandbox), interpreting the YAML data model and caching/sharing the build results in an artifact cache (implemented with OSTree), plus an ecosystem of “Element” plugins which process filesystem data as inputs and outputs.
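
To give a feel for the declarative data model, here is a hypothetical element in the YAML format described above; the exact keys (kind, depends, sources) are my approximation of the format and may differ from what current BuildStream expects:

# gedit.bst: a sketch of a BuildStream element (key names approximate)
kind: autotools
description: Build gedit from git
depends:
- gnome-sdk.bst
sources:
- kind: git
  url: https://git.gnome.org/browse/gedit
  track: master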

In a very abstract view, one can say that BuildStream is like GStreamer, except that its extensible set of element plugins operates on filesystem data instead of audio and video buffers.

This should allow for a virtually unlimited variety of pipelines, here are some sketches which attempt to illustrate the kinds of tasks we expect to accomplish using BuildStream.

Import a custom vendor tarball, build an updated graphics stack and BSP on top of that, and use a custom export element to deploy the build results as RPMS:

Import the base runtime ostree repository generated with Yocto, build the modules for the freedesktop-sdk-images repository on top of that runtime, and then deploy both Runtime and SDK from that base, while filtering out the irrelevant SDK specific bits from the Runtime deployment:

Import an arbitrary but deterministic SDK (not your host !) to bootstrap a compiler, C runtime and linux kernel, deploy a bootable filesystem image:

Build pipelines are modular and can be built recursively. So a separate project/pipeline can consume the same base system we just built and extend it with a graphics stack:

A working demo

What follows are some instructions to try out BuildStream in its early stages.

For this demo we chose to build a popular application (gedit) in the flatpak style; however, this does not yet include an ostree export or generation of the metadata files which flatpak requires. The built gedit result cannot be run with flatpak without those steps, but it can be run in a `build-stream shell` environment.

# Installing BuildStream

# Before installing BuildStream you will need to first install
# Python >= 3.5, bubblewrap and OSTree >= v2016.8 as stated above.

# Create some arbitrary directory, don't use ~/buildstream because
# that's currently used by buildstream unless you override the
# configuration.
mkdir ~/testing
cd ~/testing
git clone https://gitlab.com/tristanvb/buildstream

# There are a handful of ways to install a python setuptools
# package, we recommend for developer checkouts that you first
# install pip, and run the following command.
#
# This should install build-stream and its pythonic dependencies
# into your users local python environment without touching any
# system directories:
cd buildstream
pip install --user -e .

# Clone the demo project repository
cd ..
git clone https://gitlab.com/tristanvb/buildstream-tests
cd buildstream-tests

# Take a peek of the gedit.bst pipeline state (optional)
#
# This will tell us about all the dependencies in the pipeline,
# what their cache keys are and their local state (whether they
# are already cached or need to be built, or are waiting for a
# dependency to be built first).
build-stream show --deps all gedit.bst

# Now build gedit on top of a GNOME Platform & Sdk
build-stream build gedit.bst

#
# This will take some time and quite some disk space, building
# on SSD is highly recommended.
#
# Once the artifact cache sharing features are in place then this
# will take half the disk space it currently takes, in the majority
# of cases where BuildStream already has an artifact for the
# GNOME Platform and SDK bases.
#

# Ok, the build may have taken some time but I'm pretty sure it
# succeeded.
#
# Now we can launch a sandbox shell in an environment with the
# built gedit:
build-stream shell --scope run gedit.bst

# And launch gedit. Use the --standalone option to be sure we are
# running the gedit we just built, not a new window in the gedit
# installed on your host
gedit --standalone

Getting Involved

As you can see we’re currently hosted from my user account on GitLab, so our next step is to sort out proper hosting for the project, including a mailing list, bug tracking and a place to publish our documentation.

For right now, the best place to reach us and talk about BuildStream is in the #buildstream channel on GNOME IRC.

If you’d like to play around with the source, a quick read into the HACKING file will provide some orientation for getting started, coding conventions, building documentation and running tests.


With that, I hope you’ve all enjoyed FOSDEM and the beer that it entails :)

Open Desktop Review System : One Year Review

This weekend we had the 2,000th review submitted to the ODRS review system. Every month we’re getting an additional ~300 reviews and about 500,000 requests for reviews from the system. The reviews that have been contributed are in 94 languages, and from 1387 different users.

Most reviews have come from Fedora (which installs GNOME Software as part of the default workstation) but other distros like Debian and Arch are catching up all the time. I’d still welcome KDE software center clients like Discover and Apper using the ODRS although we do have quite a lot of KDE software reviews submitted using GNOME Software.

Out of ~2000 reviews just 23 have been marked as inappropriate, of which I agreed with 7 (inappropriate is supposed to mean swearing or abuse, not just being unhelpful), and those 7 were deleted. The mean time between an actually abusive review being posted and it being marked as such (or me noticing it in the admin panel) is just over 8 hours, which is certainly good enough. In the last few months 5523 people have clicked the “upvote” button on a review, and 1474 people clicked the “downvote” button on a review. Although that’s less voting than I hoped for, it’s certainly enough to give good quality sorting of reviews to end users in most locales. If you have a couple of hours on your hands, gnome-software --mode=moderate is a great way to upvote/downvote a lot of reviews in your locale.

So, onward to 3,000 reviews. Many thanks to those who submitted reviews already — you’re helping new users who don’t know what software they should install.

February 05, 2017

2017-02-05 Sunday.

  • Another great day of FOSDEM; stood around saying what-ho to all and sundry, good to catch up with so many.

FOSDEM 2017 Day 2: Showtime


In line at FOSDEM. (CC-BY-SA 3.0)

All of us woke up at around 8, aiming to get to FOSDEM at half past 9. Booths would be set up between 9 and 10, and the first set of talks would start around 10. Since I have stayed at this accommodation before, I had a pretty good feel for the route to FOSDEM, but we had barely gotten out of the door before I realized that I had forgotten all the merchandise I had brought with me.. :-)


Packing like a salesman.. (CC-BY-SA 3.0)

I had planned to spend most of today helping out at GNOME’s booth. We walked to the venue a little past 9, so I didn’t expect that many people to be around the FOSDEM venue yet. But when we arrived at 10, there were a ton of people already. I went straight to building K where the GNOME stand was located. Very crowded morning!


Building K, on the outside and inside. (CC-BY-SA 3.0)

FOSDEM happens at the Université libre de Bruxelles, a university campus area that consists of several buildings. The GNOME booth is located next to LibreOffice and KDE in building K. I always request two tables, but FOSDEM is growing each year with even more booths, so this time we had just a single table. When I came to the booth, Kat, David and others were already there, selling t-shirts. There were some problems printing the grey-on-grey GNOME shirts that I showed on my blog previously, so instead Kat has printed the motif in white on dark grey, blue and orange. With a dozen t-shirts and hoodies the single table was already quite stuffed. It’s fortunate that socks don’t take up that much space!


Socks on display at the GNOME booth. They might seem really big at first, but they shrink to half the size after a wash. (CC-BY-SA 3.0)

The socks turned out to sell well. I had brought 60-70 pairs and by the end of the first day all the socks were sold.
I also brought flyers, and Sebastian Wilmet complemented them with his technical flyers describing GNOME as a development platform. This was really useful, since the flyers I have are very high level, encouraging contribution to all kinds of teams in GNOME, whether engagement, documentation, translation, design etc. Sebastian’s flyers were aimed more towards the technical software students and developers, of which an event like FOSDEM attracts many.

While standing at the booth, I had the opportunity to speak with many people who came by. Some attendees came by to say thanks for the work and effort, and told their stories of how they ended up using GNOME. I also had positive comments on the release video! I’m really happy that there are people here who watch and look forward to these videos. It is a large effort, but it feels all worth it when you are getting positive reactions, in person. In previous years of FOSDEM I also remember we received a lot of criticism at the booth too. With GNOME 3, GNOME completely revamped its desktop and interaction style, and this sudden change probably caused a lot of stir. What I’m seeing is probably the result of our desktop interaction style becoming more stable and GNOME’s new vision becoming more clear.

In the cafeteria I met Matthias and Emel, who were preparing for a talk in the design room tomorrow about the new app GNOME Recipes. Will definitely attend that! We also discussed how the size of FOSDEM means that it’s hard to find people. Everyone is scattered across attending talks, standing at booths etc. Once I had split up with the rest of Open Source Aalborg, we were each in our own location. Even though this is the third time I am at FOSDEM, I keep getting impressed anew by the amount of traction and the number of people it holds. If you need a break, probably the best choice is to go outside the university campus.


Geoffrey working on the Android port of VLC. (CC-BY-SA 3.0)

Over at the cafeteria was a bunch of people wearing funny cone hats. It took me some time to realize that, of course, this was the VLC project sitting around at a table, hacking on the video player. I went over to them and spoke with a guy named Geoffrey. Geoffrey is from France and develops the port of VLC for Android. He was working on integrating VLC more with the Android platform. VLC 3.0 is also coming up soon, which means that developers are currently focusing on bug fixing and landing the last few features before the feature freeze. Some features they have in the works are, among others, support for 360-degree videos and support for playing videos in virtual reality style.


The GNOME Beer event at La Becasse, promoted at FOSDEM. (CC-BY-SA 3.0)

Saturday ended with GNOME’s annual beer night. This is a good opportunity to meet with fellow members of the GNOME community and GNOME users. I had a great time there! Thanks to the Collabora people for sponsoring some beers for everyone.


Beers, at GNOME beer night. (CC-BY-SA 3.0)

February 04, 2017

WilberWeek 2017

For the past three days, I have been in El Bruc, a little village on the side of Montserrat near Barcelona, for WilberWeek — the annual retreat for members of the GIMP and GEGL communities. We have rented out half of the Can Serrat art residency for 10 days of good food, idyllic surroundings, sedate discussions and a bit of moody hacking.


So far, I have spent my time eating paella; understanding the nuances of non-destructive image editing from Øyvind Kolås; walking in the countryside; and poring over Darktable and Shotwell to learn the workings of various “exposure and blacks” tools and get RAW decoding right. I have vague expectations that this will greatly improve the image editing experience in GNOME Photos.


I am grateful to the GIMP project for inviting me and sponsoring my stay, and especially to Jehan Pagès and Aryeom for coming all the way to Barcelona to pick me up.

Photographs featuring Wilber are from Michael Natterer’s Twitter feed.


February 03, 2017

Fri 2017/Feb/03

  • Algebraic data types in Rust, and basic parsing

    Some SVG objects have a preserveAspectRatio attribute, which they use to let you specify how to scale the object when it is inserted into another one. You know when you configure the desktop's wallpaper and you can set whether to Stretch or Fit the image? It's kind of the same thing here.

    Examples of preserveAspectRatio from the SVG spec

    The SVG spec specifies a simple syntax for the preserveAspectRatio attribute; a valid one looks like "[defer] <align> [meet | slice]". An optional defer string, an alignment specifier, and an optional string which can be meet or slice. The alignment specifier can be any one of these strings:

    none
    xMinYMin
    xMidYMin
    xMaxYMin
    xMinYMid
    xMidYMid
    xMaxYMid
    xMinYMax
    xMidYMax
    xMaxYMax

    (Boy oh boy, I just hate camelCase.)

    The C code in librsvg would parse the attribute and encode it as a bitfield inside an int:

    #define RSVG_ASPECT_RATIO_NONE (0)
    #define RSVG_ASPECT_RATIO_XMIN_YMIN (1 << 0)
    #define RSVG_ASPECT_RATIO_XMID_YMIN (1 << 1)
    #define RSVG_ASPECT_RATIO_XMAX_YMIN (1 << 2)
    #define RSVG_ASPECT_RATIO_XMIN_YMID (1 << 3)
    #define RSVG_ASPECT_RATIO_XMID_YMID (1 << 4)
    #define RSVG_ASPECT_RATIO_XMAX_YMID (1 << 5)
    #define RSVG_ASPECT_RATIO_XMIN_YMAX (1 << 6)
    #define RSVG_ASPECT_RATIO_XMID_YMAX (1 << 7)
    #define RSVG_ASPECT_RATIO_XMAX_YMAX (1 << 8)
    #define RSVG_ASPECT_RATIO_SLICE (1 << 30)
    #define RSVG_ASPECT_RATIO_DEFER (1 << 31)

    That's probably not the best way to do it, but it works.

    The SVG spec says that the meet and slice values (represented by the absence or presence of the RSVG_ASPECT_RATIO_SLICE bit, respectively) are only valid if the value of the align field is not none. The code has to be careful to ensure that condition. Those values specify whether the object should be scaled to fit inside the given area, or stretched so that the area slices the object.

    When translating this C code to Rust, I had two choices: keep the C-like encoding as a bitfield, while adding tests to ensure that indeed none excludes meet|slice; or take advantage of the rich type system to encode this condition in the types themselves.

    Algebraic data types

    If one were to not use a bitfield in C, we could represent a preserveAspectRatio value like this:

    typedef struct {
        gboolean defer;

        enum {
            None,
            XminYmin,
            XminYmid,
            XminYmax,
            XmidYmin,
            XmidYmid,
            XmidYmax,
            XmaxYmin,
            XmaxYmid,
            XmaxYmax
        } align;

        enum {
            Meet,
            Slice
        } meet_or_slice;
    } PreserveAspectRatio;

    One would still have to be careful that meet_or_slice is only taken into account if align != None.

    Rust has algebraic data types; in particular, enum variants or sum types.

    First we will use two normal enums; nothing special here:

    pub enum FitMode {
        Meet,
        Slice
    }
    
    pub enum AlignMode {
        XminYmin,
        XmidYmin,
        XmaxYmin,
        XminYmid,
        XmidYmid,
        XmaxYmid,
        XminYmax,
        XmidYmax,
        XmaxYmax
    }

    And the None value for AlignMode? We'll encode it like this in another type:

    pub enum Align {
        None,
        Aligned {
            align: AlignMode,
            fit: FitMode
        }
    }

    This means that a value of type Align has two variants: None, which has no extra parameters, and Aligned, which has two extra values align and fit. These two extra values are of the "simple enum" types we saw above.

    If you "let myval: Align", you can only access the align and fit subfields if myval is in the Aligned variant. The compiler won't let you access them if myval is None. Your code doesn't need to be "careful"; this is enforced by the compiler.

    With this in mind, the final type becomes this:

    pub struct AspectRatio {
        pub defer: bool,
        pub align: Align
    }

    That is, a struct with a boolean field for defer, and an Align variant type for align.

    Default values

    Rust does not let you have uninitialized variables or fields. For a compound type like our AspectRatio above, it would be nice to have a way to create a "default value" for it.

    In fact, the SVG spec says exactly what the default value should be if a preserveAspectRatio attribute is not specified for an SVG object; it's just "xMidYMid", which translates to an enum like this:

    let aspect = AspectRatio {
        defer: false,
        align: Align::Aligned {
            align: AlignMode::XmidYmid,
            fit: FitMode::Meet
        }
    };

    One nice thing about Rust is that it lets us define default values for our custom types. You implement the Default trait for your type, which has a single default() method, and make it return a value of your type initialized to whatever you want. Here is what librsvg uses for the AspectRatio type:

    impl Default for Align {
        fn default () -> Align {
            Align::Aligned {
                align: AlignMode::XmidYmid,
                fit: FitMode::Meet
            }
        }
    }
    
    impl Default for AspectRatio {
        fn default () -> AspectRatio {
            AspectRatio {
                defer: false,
                align: Default::default ()    // this uses the value from the trait implementation above!
            }
        }
    }

    Librsvg implements the Default trait for both the Align variant type and the AspectRatio struct, as it needs to generate default values for both types at different times. Within the implementation of Default for AspectRatio, we invoke the default value for the Align variant type in the align field.
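    With those implementations in place, getting a fully-initialized value is a one-liner. A tiny usage sketch:

    // Both forms end up invoking the Default implementations above.
    let aspect: AspectRatio = Default::default ();
    let aspect2 = AspectRatio::default ();

    assert_eq! (aspect.defer, false);
    assert_eq! (aspect2.defer, false);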

    Simple parsing, the Rust way

    Now we have to implement a parser for the preserveAspectRatio strings that come in an SVG file.

    The Result type

    Rust has a FromStr trait that lets you take in a string and return a Result. Now that we know about variant types, it will be easier to see what Result is about:

    #[must_use]
    enum Result<T, E> {
       Ok(T),
       Err(E),
    }

    This means the following. Result is an enum with two variants, Ok and Err. The first variant contains a value of whatever type you want to mean, "this is a valid parsed value". The second variant contains a value that means, "these are the details of an error that happened during parsing".

    Note the #[must_use] tag in Result's definition. This tells the Rust compiler that return values of this type must not be ignored: you can't ignore a Result returned from a function, as you would be able to do in C. And then, the fact that you must see if the value is an Ok(my_value) or an Err(my_error) means that the only way to ignore an error value is to actually write an empty stub to catch it... at which point you may as well write the error handler properly.
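    Here is a little sketch of what that feels like in practice (a hypothetical function, not librsvg code):

    fn might_fail () -> Result<i32, String> {
        Ok (42)
    }

    fn main () {
        // This compiles, but the compiler warns about the unused Result.
        might_fail ();

        // To get at the value, you must handle both variants.
        match might_fail () {
            Ok (v) => println! ("got {}", v),
            Err (e) => println! ("failed: {}", e),
        }
    }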

    The FromStr trait

    But we were talking about the FromStr trait as a way to parse strings into values! This is what it looks like for our AspectRatio:

    pub struct ParseAspectRatioError { ... };
    
    impl FromStr for AspectRatio {
        type Err = ParseAspectRatioError;
    
        fn from_str(s: &str) -> Result<AspectRatio, ParseAspectRatioError> {
            ... parse the string in s ...
    
            if parsing succeeded {
                return Ok (AspectRatio { ... fields set to the right values ... });
            } else {
                return Err (ParseAspectRatioError { ... fields set to error description ... });
            }
        }
    }

    To implement FromStr for a type, you implement a single from_str() method that returns a Result<MyType, MyErrorType>. If parsing is successful you return the Ok variant of Result with your parsed value as Ok's contents. If parsing fails, you return the Err variant with your error type.

    Once you have that implementation, you can simply call "let my_result = AspectRatio::from_str ("xMidYMid");" and piece apart the Result as with any other Rust code. The language provides facilities to chain successful results or errors so that you don't have nested if()s and such.
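    For instance, a quick sketch of a call site, assuming FromStr is in scope with "use std::str::FromStr;":

    match AspectRatio::from_str ("xMidYMid") {
        Ok (aspect) => println! ("defer: {}", aspect.defer),
        Err (_) => println! ("invalid preserveAspectRatio value"),
    }

    // Or chain on the successful result without nested if()s:
    let defer = AspectRatio::from_str ("xMidYMid")
        .map (|aspect| aspect.defer)
        .unwrap_or (false);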

    Testing the parser

    Rust makes it very easy to write tests. Here are some for our little parser above.

    #[test]
    fn parsing_invalid_strings_yields_error () {
        assert_eq! (AspectRatio::from_str (""), Err(ParseAspectRatioError));
    
        assert_eq! (AspectRatio::from_str ("defer foo"), Err(ParseAspectRatioError));
    }
    
    #[test]
    fn parses_valid_strings () {
        assert_eq! (AspectRatio::from_str ("defer none"),
                    Ok (AspectRatio { defer: true,
                                      align: Align::None }));
    
        assert_eq! (AspectRatio::from_str ("XmidYmid"),
                    Ok (AspectRatio { defer: false,
                                      align: Align::Aligned { align: AlignMode::XmidYmid,
                                                              fit: FitMode::Meet } }));
    }

    Using C-friendly wrappers for those fancy Rust enums and structs, the remaining C code in librsvg now parses and uses AspectRatio values that are fully implemented in Rust. As a side benefit, the parser doesn't use temporary allocations; the old C code built up a temporary list from split()ting the string. Rust's iterators and string slices essentially let you split() a string with no temporary values in the heap, which is pretty awesome.
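    To illustrate that last point, here is a tiny sketch of mine: each token below is a &str slice borrowing from the original string, so nothing gets copied to the heap and no temporary list is built.

    fn main () {
        let s = "defer xMidYMid meet";

        // split_whitespace () yields slices into s, one by one.
        for token in s.split_whitespace () {
            println! ("token: {}", token);
        }
    }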

Taking time out for FEDORA and GNOME

SETEIS was the first IT conference where I presented FEDORA and GNOME in 2017. The event was held at UNTELS, a university located south of Lima. The Systems Engineering School was celebrating one more anniversary of its existence.

A wide range of topics were presented during the symposium such as Software Engineering, Computational Security, Bioinformatics and Linux.


This time I gave a talk in the auditorium, where only a few students (I could say 5) were aware of what Linux is. I realized this because I usually ask questions while I am doing an introduction to Linux. I then pointed out the importance of Linux in IT and research.

After that, I presented the FEDORA and GNOME projects, how and why they were created, and the communities that currently support these projects. I gave out prizes of stickers, DVDs and balloons from the projects in order to get attention and build motivation for what I was saying. I also gave an overview of the GSoC that will start in April 2017.

Thanks so much to Toto Cabezas and the organizers from Villa El Salvador for letting us spread the Linux word through the FEDORA and GNOME projects in my local community.

🙂


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, FEDORA 25, GNOME, Julita Inca, Julita Inca Chiroque, linux, UNTELS

Not going to FOSDEM 2017

After 9 years of assiduous attendance, I am not going to FOSDEM this year.
Don’t worry, I still love Free Software and beer, but the reason for the absence is pretty understandable: new baby coming! So I will let you know soon how life with two kids is (if I have the energy for that 🙂 ).

Enjoy FOSDEM 2017 everyone!

February 02, 2017

Going to FOSDEM!

It’s been two years since the last time I went to FOSDEM, but it seems that this year I’m going to be there again and, after having traveled to Brussels a few times already by plane and train, this year I’m going by car: from Staines to the Euro tunnel and then all the way up to Brussels. Let’s see how it goes.

FOSDEM 2017

As for the conference, I don’t have any particular plan other than going to some keynotes and probably spending most of my time in the Distributions and the Desktops devrooms. Well, and of course joining other GNOME people at A La Bécasse, on Saturday night.

As you might expect, I will have my Endless laptop with me while in the conference, so feel free to come and say “hi” in case you’re curious or want to talk about that if you see me around.

At the moment, I’m mainly focused on developing and improving our flatpak story, how we deliver apps to our users via this wonderful piece of technology and how the overall user experience ends up being, so I’d be more than happy to chat/hack around this topic and/or about how we integrate flatpak in EndlessOS, the challenges we found, the solutions we implemented… and so forth.

That said, flatpak is one of my many development hats in Endless, so be sure I’m open to talk about many other things, including not work-related ones, of course.

Now, if you’ll excuse me, I have a bag to prepare, an English car to “adapt” for the journey ahead and, more importantly, quite some hours to sleep. Tomorrow will be a long day, but it will be worth it.

See you at FOSDEM!

February 01, 2017

Next stop: Brussels

On Friday I'm leaving for Brussels and FOSDEM!

Looking forward to being back again after missing out on last year’s incarnation and to meeting familiar faces in real life once again, listening to interesting talks, both those relevant to work-related activities and personal interests, as well as general techie stuff, as I usually tend to end up at some “unexpected” talk. Especially in the afternoon when fatigue sets in and I just stay around after the previous talk (which I had planned to attend) :-)

When it comes to Maps, I also have some good news. Just in time for FOSDEM I have managed to set up a temporary OpenTripPlanner server instance for the event so that people can test things out with the transit-routing branch.

The server has the base URL http://tricholoma dot update dot uu dot se:8080/otp

(replace dot with a . obviously, as I wanted to at least make it somewhat less likely for automated bots to generate extra load)

Beware that this server is not behind an HTTPS proxy (which should be the case for a real server, so that the user's activity isn't leaked to a potential third party).

As before, use the OTP_BASE_URL environment variable (or use a modified service file as described in an earlier post).
The server is currently loaded with transit data for the whole of Belgium.


A little screenshot showing a plausible trip from ULB to Grand Place on Saturday afternoon.

And last, but not least I would like to thank my employer PrimeKey Solutions AB for sponsoring my trip and http://www.update.uu.se for kindly letting me run the demo server on their hardware.

libinput and lid switch events

I merged a patchset from James Ye today to add support for switch events to libinput, specifically: lid switch events. This feature is scheduled for libinput 1.7.

First, what are switches and how are they different from keys? A key's state is transient with a neutral state of "key is up". The state itself is expected to change frequently. Switches don't always have a defined logical neutral state and the state changes only infrequently. This requires different handling in applications and thus libinput exposes a new interface (and capability) for switches.

The interface itself is trivial. A switch event has two properties, the switch type (e.g. "lid") and the switch state (on/off). See the libinput-debug-events source code for a simple example of printing the state and type.
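Here is a minimal sketch of what handling such an event can look like, using the switch API names as described here (the surrounding event loop and error handling are omitted):

#include <stdio.h>
#include <libinput.h>

static void handle_switch_event(struct libinput_event *ev)
{
    struct libinput_event_switch *sw;
    enum libinput_switch_state state;

    if (libinput_event_get_type(ev) != LIBINPUT_EVENT_SWITCH_TOGGLE)
        return;

    sw = libinput_event_get_switch_event(ev);

    /* Print the state of the lid switch, like libinput-debug-events does */
    if (libinput_event_switch_get_switch(sw) == LIBINPUT_SWITCH_LID) {
        state = libinput_event_switch_get_switch_state(sw);
        printf("lid switch is %s\n",
               state == LIBINPUT_SWITCH_STATE_ON ? "on" : "off");
    }
}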

In libinput, we generally try to restrict ourselves to the cases we know how to handle. So in the first iteration, we'll support a single switch event: the lid switch. This is the toggle that changes when you close the lid on your laptop.

But libinput uses this internally too: touchpads are disabled automatically whenever the lid is closed. Indeed, this functionality was the main motivation for this patchset. On a number of devices, we get ghost touches when the lid is closed. Even though the touchpad is unreachable by the user, interference with the screen still causes events, moving the pointer in unexpected ways and generally being a nuisance. Some trackpoints suffer from the same issue. But now that libinput knows about the lid switch it can transparently disable the touchpad whenever the lid is closed and thus discard the events.

Lid switches on some devices are unreliable. There are some devices where the lid is permanently closed and other devices where the lid can be closed, but we'll never see the open event. So we change behaviour based on a few factors. After all, no-one likes a dysfunctional touchpad because the lid switch is broken (if you do, seek help). For one, whenever we detect keyboard events while in logically closed state we'll assume that the lid is open after all and adjust state accordingly. Unless the lid switch is reliable, we don't sync the initial state. That's annoying for those who start libinput in closed mode, but it filters out all devices that set the lid switch to "on" and then never change again. On the Surface 3 devices we go even further: we know those devices needs a bit of hand-holding. So whenever we detect activity on the keyboard, we also write the EV_SW/SW_LID state to the device node, thus updating the kernel to be correct again (and thus help everyone else who may be listening).
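The kernel-state correction mentioned above boils down to injecting an EV_SW event into the evdev node. A rough sketch of the idea (my illustration, not libinput's actual code; fd is assumed to be the open device node):

#include <linux/input.h>
#include <string.h>
#include <unistd.h>

static void sync_lid_state(int fd, int lid_is_closed)
{
    struct input_event ev[2];

    memset(ev, 0, sizeof(ev));
    ev[0].type = EV_SW;               /* switch event */
    ev[0].code = SW_LID;              /* the lid switch */
    ev[0].value = lid_is_closed ? 1 : 0;
    ev[1].type = EV_SYN;              /* terminate the event sequence */
    ev[1].code = SYN_REPORT;

    write(fd, ev, sizeof(ev));        /* error handling omitted */
}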

The exact behaviours will likely change slightly over time as we have to deal with corner-cases one-by-one. But meanwhile, it's even easier for compositors to listen to switch events and users don't have to deal with ghost touches anymore. Many thanks to James Ye for implementing this.

January 31, 2017

SSSD: {DBus,Socket}-activated responders!

Since its 1.15.0 release, SSSD takes advantage of systemd machinery and introduces a new way to deal with the responders.

Previously, in order to have a responder initialized, the admin would have to add the specific responder to the "services" line in the sssd.conf file, which does make sense for the responders that are often used but not for those rarely used (such as the infopipe and PAC responders, for instance).

This old way is still preserved (at least for now) and this new release is fully backwards-compatible with the old config file.

For this new release, however, adding responders to the "services" line isn't needed anymore, as the admin can easily enable any of the responders' sockets and those will be {dbus,socket}-activated on demand and will stay up while they are still being used. In case the responder becomes idle, it will automatically shut itself down after a configurable amount of time.

The sockets we've created are: sssd-autofs.socket, sssd-nss.socket, sssd-pac.socket, sssd-pam.socket (and sssd-pam-priv.socket, but you don't have to worry about this one), sssd-ssh.socket and sssd-sudo.socket. As an example, considering the admins want to enable the sockets for both NSS and PAM responders, they should do: `systemctl enable sssd-pam.socket sssd-nss.socket` and voilà!

In some cases the admins may also want to set the "responder_idle_timeout" option added for each of the responders in order to tweak for how long the responder will keep running in case it becomes idle. Setting this option to 0 (zero) disables the responder_idle_timeout. For more details, please check the sssd.conf man page.
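For instance, a minimal sketch of what that could look like in sssd.conf (the 60 here is just an illustrative number of seconds, not a recommendation):

[nss]
responder_idle_timeout = 60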

For this release we've taken a more conservative path and are leaving it up to the admins to enable the services they want to have enabled in case they would like to try using {dbus,socket}-activated responders.

It's also important to note that until the SELinux policies are updated in your distro, you may need to have SELinux in permissive mode in order to test/use the {dbus,socket}-activated responders. A bug for this is already filed for Fedora and hopefully will be fixed before the new package is included in the distro.

And the changes in the code were (a high-level explanation) ...

Before this work, the monitor was the piece of code responsible for handling the responders listed in the services' line of the sssd.conf file. And by handling I mean:

  • Gets the list of services to be started (and, consequently, the total number of services);
  • For each service:
    • Gets the service configuration;
    • Starts the service;
    • Adds the service to the services' list;
    • Once the service is up, a dbus message is sent to the monitor, which ...
      • Sets up the sbus* connection to communicate with the service;
      • Marks the service as started;

Now, the monitor does (considering an empty services' line):

  • Once the service is up, a dbus message is sent to the monitor;
    • The number of services is increased;
    • Gets the service configuration;
    • Adds the service to the services' list
    • Sets up the sbus connection to communicate with the service;
    • Sets up a destructor to the sbus connection in order to properly shutdown the service when this connection is closed;
    • Marks the service as started;

By looking at those two different processes done by the monitor, some of you may have noticed an extra step needed when the service has been {dbus,socket}-activated that was not needed at all before. Yep, "Sets up a destructor to the sbus connection in order to properly shutdown the service when this connection is closed" is a completely new thing as, previously, the services were just shut down when SSSD was shut down, and now the services are shut down when they become idle.

So, what's basically done now is:
 - Once there's no communication to the service, its (sbus) connection with the monitor is closed;
 - Closing the (sbus) connection triggers the following actions:
    - The number of services is decreased;
    - The connection destructor is unset (otherwise it would be called again after the service has been freed);
    - The service is shut down.

*sbus: SSSD uses the dbus protocol over a private socket to handle its internal communication, so the services do not talk over the system bus.

And what do the unit files look like?

SSSD has 7 services: autofs, ifp, nss, pac, pam, ssh and sudo. Of those 7 services, 4 have pretty much these unit files:

AutoFS, PAC, SSH and Sudo unit files:


sssd-$responder.service:
[Unit]
Description=SSSD $(responder) Service responder
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service

[Install]
Also=sssd-$responder.socket

[Service]
ExecStartPre=-/bin/chown $sssd_user:$sssd_user /var/log/sssd/sssd_$responder.log
ExecStart=/usr/libexec/sssd/sssd_$responder --debug-to-files --socket-activated
Restart=on-failure
User=$sssd_user
Group=$sssd_user
PermissionsStartOnly=true

sssd-$responder.socket:
[Unit]
Description=SSSD $(responder) Service responder socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service

[Socket]
ListenStream=/var/lib/sss/pipes/$responder
SocketUser=$sssd_user
SocketGroup=$sssd_user

[Install]
WantedBy=sssd.service


And about the different ones? We will get there ... and also explain why they are different.

The infopipe (ifp) unit file:

As the infopipe won't be socket-activated, it doesn't have its respective .socket unit.
Also, unlike the other responders, the infopipe responder can nowadays only be run as root.
In the end, its .service unit looks like:

sssd-ifp.service:
[Unit]
Description=SSSD IFP Service responder
Documentation=man:sssd-ifp(5)
After=sssd.service
BindsTo=sssd.service

[Service]
Type=dbus
BusName=org.freedesktop.sssd.infopipe
ExecStart=/usr/libexec/sssd/sssd_ifp --uid 0 --gid 0 --debug-to-files --dbus-activated
Restart=on-failure

The PAM unit files:

The main difference between the PAM responder and the others is that PAM has two sockets that can end up socket-activating its service. Also, these sockets have special permissions.
In the end, its unit files look like:

sssd-pam.service:
[Unit]
Description=SSSD PAM Service responder
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service

[Install]
Also=sssd-pam.socket sssd-pam-priv.socket

[Service]
ExecStartPre=-/bin/chown $sssd_user:$sssd_user @logpath@/sssd_pam.log
ExecStart=@libexecdir@/sssd/sssd_pam --debug-to-files --socket-activated
Restart=on-failure
User=$sssd_user
Group=$sssd_user
PermissionsStartOnly=true

sssd-pam.socket:
[Unit]
Description=SSSD PAM Service responder socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service
BindsTo=sssd-pam-priv.socket

[Socket]
ListenStream=@pipepath@/pam
SocketUser=root
SocketGroup=root

[Install]
WantedBy=sssd.service

sssd-pam-priv.socket:
[Unit]
Description=SSSD PAM Service responder private socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service
BindsTo=sssd-pam.socket

[Socket]
Service=sssd-pam.service
ListenStream=@pipepath@/private/pam
SocketUser=root
SocketGroup=root
SocketMode=0600

[Install]
WantedBy=sssd.service

The NSS unit files:

The NSS responder was the trickiest one to have working properly, mainly because when socket-activated it has to run as root.
The reason behind this is that systemd calls getpwnam() and getgrnam() when "User="/"Group=" are set to something other than root. By doing this, libc ends up querying for $sssd_user, trying to talk to the NSS responder, which is not up yet, and then the clients would end up hanging for a few minutes (due to our default_client_timeout), which is something we really want to avoid.

In the end, its unit files look like:

sssd-nss.service:
[Unit]
Description=SSSD NSS Service responder
Documentation=man:sssd.conf(5)
After=sssd.service
BindsTo=sssd.service

[Install]
Also=sssd-nss.socket

[Service]
ExecStartPre=-/bin/chown root:root @logpath@/sssd_nss.log
ExecStart=@libexecdir@/sssd/sssd_nss --debug-to-files --socket-activated
Restart=on-failure

sssd-nss.socket:
[Unit]
Description=SSSD NSS Service responder socket
Documentation=man:sssd.conf(5)
BindsTo=sssd.service

[Socket]
ListenStream=@pipepath@/nss
SocketUser=$sssd_user
SocketGroup=$sssd_user

All the services' units have a "BindsTo=sssd.service" in order to ensure that the service will be stopped when sssd.service is stopped, so in case SSSD is shut down or restarted those actions will be propagated to the responders as well.

Similarly to "BindsTo=sssd.service", there's "WantedBy=sssd.service" in every socket unit, and it's there to ensure that, once the socket is enabled, it will be automatically started when SSSD is started.

And that's pretty much all changes that I've covered with this work.

I really have to say a big thank you to ...

  • Lukas Nykryn and Michal Sekletar, who patiently reviewed the unit files we're using and gave me a lot of good tips while doing this work;
  • Sumit Bose who helped me to find out the issue with the NSS responder when trying to run it as a non-privileged user;
  • Jakub Hrozek, Lukas Slebodnik and Pavel Brezina for reviewing and helping me to find bugs, crashes, regressions that fortunately were avoided.

And what's next?

There's already a patch that makes the {dbus,socket}-activated responders automatically enabled when SSSD starts. This changes our approach: instead of having to explicitly enable the sockets in order to take advantage of this work, admins will have to explicitly disable (actually, mask, e.g. `systemctl mask sssd-nss.socket`) the sockets of the responders that shouldn't be {dbus,socket}-activated.

Also, a bigger work for the future is to also have the providers being socket-activated, but this is material for a different blog post. ;-)

Nice, nice. But I'm having issues with what you've described!

In case it happens to you, please keep in mind that the preferred way to diagnose any issues would be:

  • Inspecting sssd.conf in order to check which are the explicitly activated responders in the services' line;
  • `systemctl status sssd.service`;
  • `systemctl status sssd-$responder.service` (for the {dbus,socket}-activated ones);
  • `journalctl -u sssd.service`;
  • `journalctl -u sssd-$responder.service` (for the {dbus,socket}-activated ones);
  • `journalctl -br`;
  • Checking the SSSD debug logs in order to see whether the SSSD sockets were communicated with.

Handling all those mail notifications from the bug tracker

…and following stuff that interests you in an issue tracking system.

I work as a bugmaster in a large project. That means I interact with many people on many topics and try to have a quite holistic view of what’s going on, which requires processing vast amounts of information. Apart from good memory and basic technical understanding (“Wait, this report reminds me of another one which might be related”) I need to follow up and keep track of stuff.

When talking to fellow Wikimedians a common question I receive is: “How do you cope with being subscribed to basically every task in our bug tracker (Wikimedia Phabricator) and receiving those notifications?”

Assuming there is often “How can I cope with all that mail I receive?” hidden between the lines, I’m going to cover that first:

Phabricator options which allow you to follow or collect a list of stuff which interests you:

For completeness: Currently Phabricator does not offer a difference between “I subscribed to this task or got subscribed as I watch the associated project” versus “I got explicitly mentioned in a comment in this task” which can sometimes be unfortunate.

My mail workflow

My workflow is email based. Currently I actually take a look at about 400 to 700 bugmail notifications on a normal workday (I receive way more). Those are the emails that end up as “unread” in a subfolder of my email application (GNOME Evolution).

I apply filters locally, based on projects and/or senders (e.g. bots) of notifications. Phabricator includes a custom “X-Phabricator-Projects” header in every mail notification. As far as I know, GMail does not support filtering on custom headers, and the last time I checked, GMail had a rather “creative” IMAP implementation which did not allow performing such filtering server-side.
In Bugzilla, managing filter rules was easier as a task has exactly one product and component; further associations are expressed via keywords or whiteboard entries for those who fancy additional UI elements. As Phabricator tasks can have between zero and unlimited projects associated, the order of my local filters (and their criteria) is important.
For some projects and some senders, these filters set the email to “read” status (as in “has been read”) so I might never see these emails.

View of my mail application

My default view is to display only unread messages, to order by message threads, and to display new messages on top.

When going through new (unread) messages I mark most messages as read (keyboard shortcuts are your friend). I keep some messages in “unread” message status whenever I plan to re-check or follow up at some point. I occasionally go through them and nag people.
For urgent tasks which need rechecking rather sooner than later, I mostly end up keeping open tabs in my web browser (if those tasks are not prioritized with the highest (“Unbreak Now!”) priority already anyway).

In addition, I also display some associated projects in the message list for faster orientation which software area the thread is about. I use labels for this, which is a gross hack. (I’d love to have an email application that allows displaying values of arbitrary header lines as a column in the list of emails.)

Obviously, such an email based workflow makes it hard to pass work to someone else for a limited time (the so-called “vacation” concept). Some mail services allow proxying though.

Expectations and service level

And the social part: As Wikimedia Phabricator is used by more and more teams (not only by engineering but also by chapters, to plan sessions at conferences, etc.), I clarified which service level to expect from me. So the page that describes my job says: As the bugwrangler cannot support every single project and task in Phabricator to the same extent, maintainers and teams are more than welcome to contact the bugwrangler to express support requests for managing their tasks.

Do you have better workflows or tips to share?

January 30, 2017

GNOME Keysign 0.8

I’ve just released GNOME Keysign 0.8. It’s an exciting step towards a more mature codebase with less cruft and pieces of code moved to places where they should be more discoverable. To get the app, we have a tarball as usual, or an experimental flatpak (see below). Also notice that the repository has changed. The new URL should be more discoverable and cause less confusion. I will take down the old URL soon. Also note that this release will not be compatible with older releases, so you cannot find older clients on the network.
One problem that existed was when you selected a key and then pushed the “back” button, the UI would stall for an unpleasantly long time. The actual problem is Python’s HTTPd implementation using select() with a relatively long interval instead of, say, doing things asynchronously. The interval is now shorter, which increases the number of times the polling loop is executed but should make the UI more responsive. I wonder whether it makes sense to investigate hooking up the GLib Mainloop with Python’s SocketServer…

Another fix went into the HTTP client side, which you could stall with a non-reacting keyserver, i.e. when the HTTP request was simply not answered. Because the download is not done asynchronously as it should be, the UI waits for the completion of the download. The current mitigation is to let the HTTP request time out.

A new thing is a popup when an uncaught exception happens. It’s copy and pasted from MyPaint and works by setting Python’s sys.excepthook.
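The mechanism itself is tiny. A rough sketch of the idea (the handler below is a placeholder, not the actual MyPaint/Keysign code):

import sys

def show_exception_dialog(exc_type, exc_value, exc_traceback):
    # A real handler would pop up a dialog with the traceback instead
    # of letting the exception die silently on stderr.
    print("Uncaught exception:", exc_value)

sys.excepthook = show_exception_dialog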

You can also now switch the screen on which the fullscreen barcode is being shown. Once you have selected a key, you get the barcode displayed. If you click it it will cover your whole screen. If you are hooked up to a projector you might want to make sure that the barcode is shown on the bigger screen. Now you can press the left or right key to “move” the barcode. I needed to work around a bug in GTK which seems to prevent gtk_window_fullscreen_on_monitor () from working.

Finally, a new GPG abstraction consolidates all the required functionality into one module rather than having it spread around various modules. I named it “gpgmh” for “gpg made hard”, which is a pun on “gpgme”, “gpg made easy”. The new module will also allow us to use the real™ gpg module instead of the gpg executable wrapper provided by monkeysign. We cannot, however, switch to the library just yet, because it needs gpgme 1.8 which is too recent for current distros (well, Debian and Ubuntu). So we have to wait until we can depend on it.

If you want to try the application, you can now get the Flatpak from here. It should be possible to install the app with a command like flatpak --user install --from http://muelli.cryptobitch.de/tmp/2017-01-29-keysign.flatpakref. You can also grab the bundle if you want. Please note that the flatpak is very experimental. It would be surprising if anything but showing the UI actually worked. There are several issues we still need to work out. One is to send an email from within the sandbox and the other is to re-use an existing gpg agent from the existing user session inside the sandbox. Gpg is behaving a bit weirdly there. Just having the agent’s socket available inside the sandbox does not seem to be enough to make it work. We need to investigate what’s going on there.

The future brings other exciting changes, too. We have a new UI in preparation which should be much more appealing. Here is what it will look like:

Presentation woes

My flatpak presentation at devconf.cz ran into some technical difficulties when the beamer system failed halfway through. I couldn’t show the second half of my slides, and had to improvise a bit. If you were in the room, I hope it wasn’t too incomprehensible.

You can see what you missed here:

And here is a quick recording of the demo I would have given at the end of my talk:

summing up 83

summing up is a recurring series on topics & insights that compose a large part of my thinking and work. drop your email in the box below to get it straight in your inbox or find previous editions here.

The Web Is a Customer Service Medium, by Paul Ford

The web seemed to fill all niches at once. It was surprisingly good at emulating a TV, a newspaper, a book, or a radio. Which meant that people expected it to answer the questions of each medium, and with the promise of advertising revenue as incentive, web developers set out to provide those answers. As a result, people in the newspaper industry saw the web as a newspaper. People in TV saw the web as TV, and people in book publishing saw it as a weird kind of potential book. But the web is not just some kind of magic all-absorbing meta-medium. It's its own thing. And like other media it has a question that it answers better than any other. That question is:

Why wasn't I consulted?

Humans have a fundamental need to be consulted, engaged, to exercise their knowledge (and thus power), and no other medium that came before has been able to tap into that as effectively.

every form of media has a question that it's fundamentally answering. that is something i've been alluding to a few episodes ago. you might think you already understand the web and what users want, but in fact the web is not a publishing medium nor a magic all-absorbing meta-medium. it's its own thing.

Superintelligence: The Idea That Eats Smart People, by Maciej Cegłowski

AI risk is string theory for computer programmers. It's fun to think about, interesting, and completely inaccessible to experiment given our current technology. You can build crystal palaces of thought, working from first principles, then climb up inside them and pull the ladder up behind you.

People who can reach preposterous conclusions from a long chain of abstract reasoning, and feel confident in their truth, are the wrong people to be running a culture.

The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.

there is this idea that with the nascent ai technology, computers are going to become superintelligent and subsequently end all life on earth - or variations of this theme. but the real threat here is a different one. these seductive, apocalyptic beliefs prevent people from really working to make a difference and lead them to ignore the harm that is caused by current machine learning algorithms.

Epistemic learned helplessness, by Scott Alexander

When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn't so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas.

I guess you could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments are just going to be a bad idea so I don't even try.

the smarter someone is, the easier it is for them to rationalize and convince you of ideas that sound true even when they're not. epistemic learned helplessness is one of those concepts that's so useful you'll wonder how you did without it.

GNOME Keysign 0.7

I keep forgetting to blog about the progress we’re making with GNOME Keysign. Since I last reported, several cool new developments have happened. This 0.7 release fixes a few bugs and should increase compatibility with recent gpg versions.

The most noticeable change is probably a message when you don’t have a private key. I tried to create something clickable so that the user would be presented with, say, Seahorse showing the relevant widgets that allow the user to quickly generate an OpenPGP key. But we currently don’t seem to be able to do that. It’s probably worth filing a bug against Seahorse.

You may also notice that the “Next” or “Back” button is now sensitive to the end of the notebook. That is a minor improvement in the UI.

In general, we should be more Python 3 compatible by removing python2-only code in various modules.

Another change is a hopefully more efficient bar code rendering. Instead of using mixed case characters, the newer version tries to use the alphanumeric mode which should use about 5.5 bits per character rather than 8. The barcode reading side should also save some CPU cycles by activating zbar’s cache.

As long as you fight back

Here’s the text of the letter that I sent to my representatives in the US Congress today. (I don’t live in the US, but I’m a citizen of it, and I vote.)

Dear {name}:

As I’m sure you’re aware, the President’s first destructive week in office has left many Americans fearful of whether the values of our country will continue to be carried out. You are part of the last line of defense.

As a US citizen who has lived abroad for over 20 years and been through the immigration systems of two countries, the President’s recent executive order on immigration has struck a particular chord with me. It is a cheap shot to fan the flames of xenophobia, and more refugees — not some abstract concept, but real people — will likely die because of it.

I urge you, as my representative, to do everything you can to obstruct and dismantle policies that fly in the face of decency, compassion, and what our country stands for. I am asking you to go beyond what a member of Congress usually does: these are unusual times and the current administration is not playing by the same rules that you and I are. I am asking you never to compromise and never to let up the pressure. If you want to practice bipartisanship, then reach out to those few Republicans who have not sold out. Freeze out the Republicans and Democrats who have.

This will not be an easy ride for you, but as long as you fight back, you can count on my vote.

If you are a US citizen and want to do something similar, here are some links to where you can find who represents you in the Senate and the House. (Note that to find your House representative, you need to enter your address or your extended 5+4 zip code, because of congressional district gerrymandering. Both of your state’s senators represent the whole state at large, so contact both of them.)

 


How libinput opens device nodes

In order to read events and modify devices, libinput needs a file descriptor to the /dev/input/event node. But those files are only accessible by the root user. If libinput were to open these directly, we would force any process that uses libinput to have sufficient privileges to open those files. But these days everyone tries to reduce a process's privileges wherever possible, so libinput simply delegates opening and closing the file descriptors to the caller.

The functions to create a libinput context take a parameter of type struct libinput_interface. This is a non-opaque struct with two function pointers: "open_restricted" and "close_restricted". Whenever libinput needs to open or close a file, it calls the respective function. For open_restricted() libinput expects the caller to return an fd with the given flags.

In the simplest case, a caller can merely call open() and close(). This is what the debugging tools do (and the test suite). But obviously this means you have to run those as root. The main wayland compositors (weston, mutter, kwin, ...) instead forward the request to systemd-logind. That then opens the event node and returns the fd which is passed to libinput. And voila, the compositors don't need to run as root, libinput doesn't have to know how the fd is opened and everybody wins. Plus, logind will mute the fd on VT-switch, so we can't leak keyboard events.
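For illustration, here is roughly what that simplest case looks like, along the lines of the example in the libinput documentation (a real caller should do better error handling):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <libinput.h>

static int open_restricted(const char *path, int flags, void *user_data)
{
    int fd = open(path, flags);

    /* libinput expects the fd, or a negative errno on failure */
    return fd < 0 ? -errno : fd;
}

static void close_restricted(int fd, void *user_data)
{
    close(fd);
}

static const struct libinput_interface interface = {
    .open_restricted = open_restricted,
    .close_restricted = close_restricted,
};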

In the X.org case it's a combination of the two. When the server runs with systemd-logind enabled, it will open the fd before the driver initialises the device. During the init stage, libinput asks the xf86-input-libinput driver to open the device node. The driver forwards the request to the server which simply returns the already-open fd. When the server runs without systemd-logind, the server opens the file normally with a standard open() call.

So in summary: you can easily run libinput without systemd-logind but you'll have to figure out how to get the required privileges to open device nodes. For anything more than a test or debugging program, I recommend using systemd-logind.

January 29, 2017

Re: Consider the maintainer

I’ve read this LWN article: Consider the maintainer. It was a great read, and I want to share my thoughts, from my experience on being a maintainer (or helping the maintenance) of several GNOME modules.

GNOME has a lot of existing code, but let’s face it, it also has a lot of bugs (just look at bugzilla, but the code also contains a lot of not-yet-reported bugs). For a piece of software to be successful, I’m convinced that it has to be stable, mostly bug-free. Stability is not the only property of successful software, but without it, it has way less chance of being successful in the long run (after the hype wave is gone and the harsh reality resurfaces).

There is a big difference between (a) writing a feature, but in reality it’s full of bugs, and (b) writing the same feature, but “mostly bug-free” (targeting bug-free code). It certainly takes double the time, probably more. The last 10% of perfection is the most difficult.

Paolo Borelli likes to explain that there are two kinds of developers: the maintainers and the developers who prefer to write crazy-new-experimental features (with a gray scale in-between). It is similar to the difference between useful tasks vs interesting tasks that I talked about in a previous blog post: some useful tasks like writing unit tests are not terribly interesting to do, but I think that in general a maintainer-kind-of-developer writes more tests. And Paolo said that I’m clearly on the maintainer side, caring a lot about code quality, stability, documentation, tests, bug triaging, etc.

Reducing complexity

The key, with a lot of existing but not perfect code, is to reduce complexity:

  • Improving the coding style for better readability;
  • Doing lots of small (or less-small) refactorings;
  • Writing utility classes;
  • Extracting from a big class a set of smaller classes so that the initial class delegates some of its work;
  • Writing re-usable code, by writing a library, and documenting the classes with GTK-Doc;
  • Etc.

Even for an application, it is useful to write most of the code as an internal library, documented with GTK-Doc. Browsing the classes in Devhelp is such a nice way to discover and understand the codebase for new contributors (even if the contributor has already a lot of experience with GLib/GTK+).

Another “maintainer task” that I’ve often done: when I start contributing to a certain class, I read the whole code of that class, trying to understand every line of code, doing lots of small refactorings along the way, simplifying the code, using new GLib or GTK+ APIs, etc. When doing this exercise, I have often discovered (and fixed) bugs that were not reported in the bug tracker. Then, to achieve what I wanted to do initially, with the much better knowledge of the code, I know how to do it properly, not with a quick hack doing the minimal amount of change, which I sometimes see passing. As a result the code has fewer bugs, there is less chance of introducing new bugs, and the code is easier to understand and thus more maintainable. There are no secrets: it takes more time to do that, but the result is better.

Some books that were very useful to me:

Of course I didn’t read all those books at once, practicing is also important. I nowadays read approximately one computing science book per year.

About new contributors and code reviews

When I started to contribute to GtkSourceView several years ago, I had already developed a complete LaTeX editor based on GtkSourceView (by myself), read several of the above books (most importantly Code Complete) and applied what I learned. I had already a lot of experience with GTK+. So starting to contribute to GtkSourceView was easy, my patches were accepted easily and I think it was not too much work for the reviewers. I then became a co-maintainer.

Contrast this with all those newbies wanting to contribute to GNOME for the first time, without any experience with GLib/GTK+. They don’t even know how to contribute or how to compile the code, and they probably don’t know the command line or git well, etc. So if a maintainer wants to help those newcomers, it takes a lot of time. I think this is partly a problem of documentation (that I’m trying to solve with this guide on GLib/GTK+). But even with good documentation, if the new contributor needs to learn GTK+ for the first time, it will require too much time from the maintainer. What I would suggest is for newcomers to start by writing a new application on their own; for that, a list of ideas of missing applications would be helpful.

This is maybe a little controversial, but the talk Consider the maintainer was also controversial, by suggesting for instance: “Maintainers should be able to say that a project is simply not accepting contributions, or to limit contributors to a small, known group of developers.”

When a company wants to hire a developer, they can choose the best candidate, or if no candidates fit they can also choose to keep the existing team as-is. In Free Software, anyone can send a patch; sometimes it takes a lot of time to explain everything and then after a short time the contributor never comes back. Remember also the well-known fact that adding people to a late project makes it later (usually, but there are exceptions).

Another interesting glimpse, from Hackers and Painters (Paul Graham):

I think this is the right model for collaboration in software too. Don’t push it too far. When a piece of code is being hacked by three or four different people, no one of whom really owns it, it will end up being like a common-room. It will tend to feel bleak and abandoned, and accumulate cruft. The right way to collaborate, I think, is to divide projects into sharply defined modules, each with a definite owner, and with interfaces between them that are as carefully designed and, if possible, as articulated as programming languages.

Other topics

I could talk about other topics, such as the lack of statistics (I don’t even know the number of people executing the code I write!) or trying to avoid sources of endless maintenance burden (example: GtkSourceView syntax highlighting definition files, the maintenance could clearly be better distributed, with one maintainer per *.lang file). But this blog post is already quite long, so I’ll not expand on those topics.

In short, there is clearly matter for thoughts and improvements in how we work, to get more things done.

Testing exception vs error code behaviour with real world code

Our previous looks at exception performance have used microbenchmarks and generated code (here and here). These are fun and informative tests but ultimately flawed. What actually matters is performance on real world code. The straightforward way of measuring this is to convert a program using error codes into one using exceptions. This is harder than it seems.

The problem is that most code using error codes is not exception safe, so this change cannot be done easily (I tried, would not recommend). Fortunately there is a simple solution: going the other way. A fully exception safe code base can be easily converted into one using GLib style error objects with the following simple steps:

  1. replace all instances of throw with a call to a function such as Error* create_error(const char *msg)
  2. replace all catch statements with equivalent error object handling code
  3. alter the function signature of all functions creating or using error objects from void func() to void func(Error **e)
  4. change every call site to pass an error object and check whether it is non-null after the call, returning immediately if so
  5. repeat steps 3 and 4 until the compiler errors go away
This implements the exception code flow with explicit hand-written code; a sketch follows.
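
To make the steps concrete, here is a minimal sketch of what they might produce. The create_error name comes from step 1; everything else here is illustrative, not the actual Parzip code:

#include <cstring>

struct Error {
    char msg[128];
};

/* Step 1: a throw site becomes a call to a helper like this. */
Error* create_error(const char *msg) {
    Error *e = new Error();
    std::strncpy(e->msg, msg, sizeof e->msg - 1);
    e->msg[sizeof e->msg - 1] = '\0';
    return e;
}

/* Step 3: the signature grows an Error** out-parameter. */
void parse_header(Error **e) {
    /* ... when something goes wrong: */
    *e = create_error("bad magic number");
}

/* Step 4: every call site checks the error slot and bails out,
   reproducing by hand what stack unwinding does automatically. */
void open_archive(Error **e) {
    parse_header(e);
    if (*e) {
        return;
    }
    /* happy path continues */
}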

For testing we used Parzip, a high performance PkZip implementation. For simplicity we removed all parallel code and also the parts that deal with Zip file creation. Thus we ended up with a simple single threaded Zip file unpacker. The full code is available in this repository.

Source code size

LOC is a poor metric in general but illuminating in this case, because it directly shows the difference between these two approaches. The measurement is done with wc, and if we ignore the license header on each file we find that the implementation using exceptions is 1651 lines while the one with error objects has 1971 lines. This means the error code version has roughly 19% more lines of source code than the equivalent code using exceptions.

Looking at the code it is easy to see where this difference comes from. As an extreme example there is this function that reads data from file with endianness swapping:

localheader read_local_entry(File &f) {
    localheader h;
    uint16_t fname_length, extra_length;
    h.needed_version = f.read16le();
    h.gp_bitflag = f.read16le();
    h.compression = f.read16le();
    h.last_mod_time = f.read16le();
    h.last_mod_date = f.read16le();
    h.crc32 = f.read32le();

    /* And so on for a while. */
    return h;
}

And this is how it looks when using error objects:

localheader read_local_entry(File &f, Error **e) {
    localheader h;
    uint16_t fname_length, extra_length;
    h.needed_version = f.read16le(e);
    if(*e) {
        return h;
    }
    h.gp_bitflag = f.read16le(e);
    if(*e) {
        return h;
    }
    h.compression = f.read16le(e);
    if(*e) {
        return h;
    }
    h.last_mod_time = f.read16le(e);
    if(*e) {
        return h;
    }
    h.last_mod_date = f.read16le(e);
    if(*e) {
        return h;
    }
    h.crc32 = f.read32le(e);
    if(*e) {
        return h;
    }

    /* And so on and so on. */
    return h;
}

This example nicely shows the way that exceptions can make code more readable. The former sample is simple, straightforward, linear and understandable code. The latter is not. It looks like idiomatic Go (their words, but also mine). Intermixing the happy path with error handling makes the code quite convoluted.
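
For what it’s worth, some of this boilerplate could be hidden behind a macro that mimics what the compiler emits for exceptions. This is a hypothetical sketch reusing the localheader/File/Error types from the samples above, not something the Parzip code base actually does:

/* Hypothetical helper: check the error slot after a call and
   return early on failure, like implicit stack unwinding. */
#define CHECK_ERROR(e, retval) do { if (*(e)) return (retval); } while (0)

localheader read_local_entry(File &f, Error **e) {
    localheader h;
    h.needed_version = f.read16le(e); CHECK_ERROR(e, h);
    h.gp_bitflag     = f.read16le(e); CHECK_ERROR(e, h);
    h.compression    = f.read16le(e); CHECK_ERROR(e, h);
    /* And so on. */
    return h;
}

It is shorter, but it hides control flow inside a macro, so it arguably trades one kind of unreadability for another.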

Binary size

We tested the size of the generated code with GCC 6.2.0, Clang 3.8.1 and Clang trunk. We built the code on 64-bit Ubuntu 16.10 using -g -O2, and the error code version was built both with and without -fno-exceptions. The results look like this.

The results are, well, weird. Building with -fno-exceptions reduces code size noticeably in every case, but the other parts are odd. When not using -fno-exceptions, the code that uses exceptions is within a few dozen bytes of the size of the code that uses error objects. GCC manages to make the error code version smaller even though it has the aforementioned 19% more lines of code. Some of this may be caused by the fact that -fno-exceptions links in less of the C++ standard library. This was not researched further, though.

Looking at the ELF tables we find that the extra size taken by exceptions goes in the text segment rather than data segment. One would have expected it to have gone to unwind tables in a readonly data segment instead of text (which houses runnable code).

Things get even weirder with noexcept specifications. We added those to all functions in the exception code base which do not take an error object pointer in the error code version (meaning they will never throw). We then measured the sizes again and found that the resulting binaries had exactly the same size. On all three compilers. One would have expected some change (such as smaller exception unwind tables or anything) but no.
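
For reference, the annotation rule was mechanical; here is a sketch of what it looked like (the rule is from the paragraph above, but the member names are illustrative, not the actual Parzip headers):

#include <cstdint>

/* Functions whose error-code counterparts take no Error** can never
   fail, so the exception version marks them noexcept. */
class File {
public:
    uint16_t read16le();               // can fail -> may throw
    uint32_t read32le();               // can fail -> may throw
    bool     eof() const noexcept;     // cannot fail -> annotated
    long     tell() const noexcept;    // cannot fail -> annotated
};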

What have we learned from all this?

Mainly that things behave in more complex ways than one would expect. The binary sizes in particular are unexpected, especially the way Clang produces the same binary size for both error objects and exceptions. Even with this contradictory information we can draw the following conclusions:
  • using exceptions can cause a noticeable reduction in lines of code compared to the equivalent functionality using error objects
  • compiling with -fno-exceptions can reduce binary size by 10-15%
Other than that, these measurements really raise more questions than they answer. Why does noexcept not change the output size? Why does Clang produce the same code size with exceptions and error objects? Are exception performance and code size fully optimized, or could they be made smaller (this area has probably not seen as much work because many compiler contributors do not use exceptions internally)?

A morning in San Francisco

This morning in San Francisco, I check out from the hotel and walk to Bodega, a place I discovered last time I was here. I walk past a Chinese man swinging his arms slowly and deliberately, celebrating a secret of health us Westerners will never know. It is Chinese New Year, and I pass bigger groups celebrating and children singing. My phone takes a picture of a forest of phones taking pictures.

I get to the corner hoping the place is still in business. The sign outside asks “Can it all be so simple?” The place is open, so at least for today, the answer is yes. I take a seat at the bar, and I’m flattered when the owner recognizes me even if it’s only my second time here. I ask her if her sister made it to New York to study – but no, she is trekking around Colombia after helping out at the bodega every weekend for the past few months. I get a coffee and a hibiscus mimosa as I ponder the menu.

The man next to me turns out to be her cousin, Amir. He took a plane to San Francisco from Iran yesterday after hearing an executive order might get signed banning people with visas from seven countries from entering the US. The order was signed two hours before his plane landed. He made it through immigration. The fact sheet arrived on the immigration officers’ desks right after he passed through, and the next man in his queue, coming from Turkey, did not make it through. Needles and eyes.

Now he is planning to get a job, and get a lawyer to find a way to bring over his wife and 4-year-old child, who are now officially banned from following him for 120 days or more. In Iran he does business strategy and teaches at university. It hits home really hard that we are not that different, he and I, and how undeservedly lucky I am that I won’t ever be faced with such a horrible choice to make. Paria, the owner, chimes in, saying that she’s an Iranian Muslim who came to the US 15 years ago with her family, and they all can’t believe what’s happening right now.

The church bell chimes a song over Washington Square Park and breaks the spell, telling me it’s eleven o’clock and time to get going to the airport.


January 28, 2017

Good usability but poor experience

Usability is about real people using the system to do real tasks in a reasonable amount of time. You can find variations of this definition by different researchers, including more strict definitions that include the five attributes of Usability:

1. Learnability
How easily you can figure out the interface on your own.
2. Efficiency
How quickly you can accomplish your tasks.
3. Memorability
Having used the software at least once, how easily can you recall how to use it the next time you use it?
4. Error rate
When using the software, how often you tend to make mistakes.
5. Satisfaction
Whether the software is pleasing to use.
However, that last item is treading on different territory: User eXperience (UX). And UX is different from usability.

If usability is about real people using the software to do real tasks in a reasonable amount of time, User eXperience is more about the user's emotional response when using the software. Or their emotional attachment to the software. UX is more closely aligned to the user's impression of the software beyond usability, beyond using the software to complete tasks.

Usually, usability and UX go together. A program with good usability tends to have positive UX. A program with poor usability tends to have negative UX. But it is possible for them to differ. You can have a program that's difficult to use, but the user is happy using it. And you can have a program that's very simple to use, but the user doesn't really like using it.

Let me give you an example: a simple music player. It's so simple that it doesn't have a menu. There's an "Add songs to playlist" button that seems obvious enough. The play and stop buttons are obvious (a button with the word "Play" to play music, and a button next to it labelled "Stop" to stop playing music). To change the volume, there's a simple slider labeled "Volume" that has "quiet" and "loud" on each end of the slider.

It's easy to use. The music player is obvious and well-labeled. You can imagine it scores well with Learnability, Efficiency, Memorability, and Error rate.

But it's bare. There's no decoration to it. The program uses a font where the letters are blocky, small-caps, and spaced very close together. It uses the same font in the music list.

And the colors. Everything is white-on-black. The "Play" button is a sort of sickly green, and the "Stop" button is a sort of reddish-brown. The "Add songs to playlist" button is a weird purple. The box that shows the music play list is an eerie green.
Girls Just Want To Have Fun; Cyndi Lauper
Beat It; Michael Jackson ♪ playing
When Doves Cry; Prince
Karma Chameleon; Culture Club
Love Is A Battlefield; Pat Benatar

Volume:
quiet————loud

The program works well, but you just don't like using it. Every time you use the music player, your stomach turns. The colors are depressing. As soon as you load your play list and start playing, you cover the window with another window so you don't have to look at the program. After you use it a few times, you switch to another program. Even if the other program isn't as easy to use, at least you'll like using the other music player.

So that's one example of a program that would have good usability but negative UX.

January 25, 2017

Going to FOSDEM

I’m going to FOSDEM 2017!

I’ll have a spare, unopened, Nitrokey Pro with me to give to anyone who’s got a good plan for improving the user experience for them in GNOME. That might mean making the setup seamless; it might mean working on the rewrite of Seahorse; it might mean integrating them with LUKS; or something else. Contact me if you’re interested and have a plan.

Artistic Constraints

I have moved most of the sharing with the world to the walled gardens of Facebook, Google+ and others because of their convenience, but for an old fart like me it’s way more appropriate to do it the old way. So the thing to share today is quite topical. Mark Ferrari (of LucasArts fame) shares his experience with 8-bit art and the creative constraint. The gold is not so much in what he says as in the art he shares, made over the years, which flourished within those constraints.

Mark is clearly a master of lighting, and none of this trickery would have any appeal if he weren’t so great at mixing the secondary lights, but check out these amazing color cycling demos.

Actual image I found explaining how I anti-aliased in GIMP. Circa 2002.

As far as I ever got with 8bit animation.

GXml 0.13.90 Released

There is still a lot of work to do on XSD, but I’m certainly happy to see the GXml.Gom* classes taking shape. Lots of bugs have been fixed since 0.13.2, and I’m starting to port some projects to this new version. I hope to release 0.14 soon, just after most translations are in place.

This new version will provide better-supported GObject wrapping of XML, using the DOM4 API and the beginnings of other technologies like XPath and XSD.

I hope someone takes some time to implement the recent W3C XPath specification and to complete the XSD support. Maybe these would be good projects for Google Summer of Code 2017.

In the process, I’ll try to implement the W3C SVG API using GXml.Gom* classes, to provide a fully supported XML/SVG API for GObject libraries and all the other languages supported by GObject Introspection.

Future GXml versions may include:

  • Clean up more interfaces/classes that are no longer useful, like the TNode-derived classes and other non-DOM4-compliant interfaces. This will reduce the number of classes to choose from; the recommended way forward will be the GNode- and GomNode-derived classes.
  • Improve XSD support
  • Explore how to manage large XML files
  • Improve XPath support
  • Explore XQuery

And maybe other ideas from GXml users.

January 24, 2017

The flatpak security model, part 3 – The long game

We saw previously that you can open up the Flatpak sandbox in order to run applications that are not built to be sandboxed. However, the long-term plan is to make it possible for applications to work sandboxed, and then slowly make this the default for most applications (while losing little or no functionality). This is a work in progress on many levels: we need work on Flatpak itself, but also on the Linux desktop platform and on the applications themselves.

So, how do we reach this long-term goal?

Some things were mentioned in earlier parts. For example, once we have a realistic shot at sandboxing we need to make the permissions more visible in the UI, and make it easier to override them. We also need to drop X11 and move to Wayland, as well as finish the work on PulseAudio permissions.

However, the really important part is being able to have dynamic, fine-grained permissions. This is achieved with something we call Portals.

A Portal is a service that runs on the host, yet is accessible from the sandbox. This is ok because the interface it exposes has been designed in order to be “safe”.

So, what makes a portal safe?

Let’s start with a very simple portal, the Network Monitor portal. This service returns the network connection status (online/offline) and signals when it changes. You can already get the basic status from the kernel even in the sandbox, but the portal can use NetworkManager to get additional information, like whether there is a captive portal active and whether the network is metered.

This portal looks at whether the calling app has network access, and if so allows it to read the current status, because this information could already be collected by the app manually (by replicating what NetworkManager does). The portal is merely a more efficient and easier way to do this.

The next example is the Open URI portal. The application sends a request with a URI that it wants to be shown. For instance you could use this for links the user clicked on in the application, but also to show your application documentation.

We don’t want the sandbox to be able to start apps with caller-controlled URIs in the background, because that would be an easy way to attack them. The way we make this safe is to make the operation interactive and cancellable. So, the portal shows a dialog, allowing the user to select the app to open the URI in, or (if the dialog was unexpected) to close the dialog. All this happens outside the sandbox, which means that the user is always in control of what gets launched and when.

A similar example is the FileChooser portal. The sandbox cannot see the users files, but it can request the user to pick a file. Once a file is chosen outside the sandbox, the application is granted access to it (and only it). In this case too it is the interactivity and cancellability that makes the operation safe.

Another form of portal is geolocation. This is safe because the portal can reliably identify the calling application, and it keeps a list of which applications are currently allowed access. If the application is not allowed it replies that it doesn’t know the location. Then a UI can be created in the desktop to allow the user to configure these permissions. For instance, there can be an application list in the control center, or there could be a notification that lets you grant access.

To sum up, portals are considered safe for one of these reasons:

  • The portal doesn’t expose any info you wouldn’t already know, or which is deemed unsafe
  • The operation is interactive and cancellable
  • Portals can reliably identify applications and apply dynamic rules

Theoretically, any kind of service designed in this way could be called a portal. For instance, one could call Wayland a UI portal. However, in practice portals are D-Bus services. In fact, by default Flatpak lets the sandbox talk to any service named org.freedesktop.portal.* on the session bus.
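
As a rough illustration of how ordinary this is at the D-Bus level, here is a minimal sketch (compiled as C++ against gio-2.0) of a sandboxed app querying the Network Monitor portal. The bus name and object path follow the org.freedesktop.portal.* convention described above; the GetAvailable method name is taken from the current portal interface description and may differ in other portal versions:

#include <gio/gio.h>
#include <cstdio>

int main() {
    GError *error = nullptr;
    GDBusConnection *bus = g_bus_get_sync(G_BUS_TYPE_SESSION, nullptr, &error);
    if (bus == nullptr) {
        std::fprintf(stderr, "No session bus: %s\n", error->message);
        g_error_free(error);
        return 1;
    }
    /* Flatpak allows this call because the name matches
       org.freedesktop.portal.*; the portal decides what is safe to answer. */
    GVariant *reply = g_dbus_connection_call_sync(
        bus,
        "org.freedesktop.portal.Desktop",
        "/org/freedesktop/portal/desktop",
        "org.freedesktop.portal.NetworkMonitor",
        "GetAvailable",
        nullptr,                  /* no arguments */
        G_VARIANT_TYPE("(b)"),    /* expect a single boolean */
        G_DBUS_CALL_FLAGS_NONE, -1, nullptr, &error);
    if (reply != nullptr) {
        gboolean available = FALSE;
        g_variant_get(reply, "(b)", &available);
        std::printf("network available: %s\n", available ? "yes" : "no");
        g_variant_unref(reply);
    } else {
        std::fprintf(stderr, "Portal call failed: %s\n", error->message);
        g_error_free(error);
    }
    g_object_unref(bus);
    return 0;
}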

The portals mentioned above are part of the desktop portal, xdg-desktop-portal. It is a UI-less, desktop-independent service, but for all user interaction and policy it defers to a desktop-specific backend. There are currently backends for GTK+ and (work in progress) KDE. For sandboxing to work, these need to be installed on the host system.

In addition to the previously listed portals, xdg-desktop-portal also contains:

  • Printing
  • User account information
  • Inhibiting suspend
  • Notifications
  • Proxy configuration
  • Screenshot request
  • Device access request

There is also a portal shipped with Flatpak itself, the Document portal. It is permission-based, and it is what the FileChooser portal uses to grant access to files dynamically, on a file-by-file basis.

We are also planning to add more portals as needed. For instance we’d like to add a Share portal that lets you easily share content with registered handlers (for instance posting text to a Twitter or Facebook app).

January 23, 2017

GInterface and GXml

I love GInterface definitions, especially in Vala, because they are a clean and easy way to describe an API. Interfaces are the way the W3C defines its specifications, like SVG and DOM4.

Vala interface definitions are really close to being a copy and paste of the W3C’s specification definitions. With some, well, a bit of work, you can transform them into usable GObject interface definitions.

GXml takes the DOM4 interfaces and implements them with a set of instantiable classes.

GXml provides a GObject-to-XML-and-back serialization framework, allowing you to define your own classes and how you want your data represented in XML.

In order to read your information back, GXml needs to create instances of your classes on the fly; this means they must be GObject classes, not GInterfaces.

Starting to implement XSD support in GXml, I created a set of interfaces to interpret the W3C specification. This is really helpful, but unusable when you need to instantiate an object that is declared as an interface. For example, if you have an interface A with a property of type B, but B is itself a GInterface, you can’t implement A and get an instantiable object for B: I mean, using g_object_new().

Because the GXml engine requires instantiable objects (to create new element nodes as they are found, using a GObject type to parse attributes into properties, for example), I ended up keeping a set of interfaces to help me design a clean API and make room for other implementation engines, while creating new classes that implement the XSD interfaces and have their own “mirror” properties.

That is, while GXml.XsdSchema has a GXml.XsdListSimpleTypes property to access all the simple type definitions, GXml.GomXsdSchema will have two properties with the same purpose: a GXml.GomXsdListSimpleTypes property AND a property of type GXml.XsdListSimpleTypes, to fully implement GXml.XsdSchema. The second one mirrors the first. This is more work to implement an interface, but it keeps your classes’ properties instantiable, and your users can choose to use just the GXml.XsdSchema interface API to access your class implementation, staying open to different implementations. A loose sketch of the idea follows.
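
Translated loosely into C++ terms (GXml itself is GObject/Vala, so the names and shapes here are illustrative only), the mirror-property pattern looks like this:

#include <cstddef>

/* The W3C-style interface: it cannot be instantiated directly. */
struct XsdListSimpleTypes {
    virtual ~XsdListSimpleTypes() = default;
    virtual std::size_t length() const = 0;
};

/* The concrete, instantiable implementation the parsing engine can create. */
struct GomXsdListSimpleTypes : XsdListSimpleTypes {
    std::size_t length() const override { return 0; /* ... */ }
};

class GomXsdSchema {
    GomXsdListSimpleTypes simple_types_;  // the concrete "mirror" property
public:
    /* Users who only care about the W3C-style API code against the
       interface; the accessor exposes the concrete member through it. */
    XsdListSimpleTypes &simple_types() { return simple_types_; }
    GomXsdListSimpleTypes &gom_simple_types() { return simple_types_; }
};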

Best of all, with GXml’s XML-to-GObject implementation, specifically the GXml.Gom* classes, you will have access to *all* the nodes, attributes and children found in an XML file, without losing them in the process of deserializing back to your class instance.

This week in GTK+ – 32

In this last week, the master branch of GTK+ has seen 106 commits, with 7340 lines added and 12138 lines removed.

Planning and status
  • Matthias Clasen released GTK+ 3.89.3
  • The GTK+ road map is available on the wiki.
Notable changes

On the master branch:

  • Benjamin Otte simplified the clipping shaders for the Vulkan renderers
  • Benjamin also removed the “assume numbers without dimensions are pixels” fallback code from the CSS parser
  • Daniel Boles landed various fixes to the GtkMenu, GtkComboBox and GtkScale widgets
  • Daniel also simplified the internals of GtkComboBox and moved most of its internal widgets to GtkBuilder UI files
  • Matthias Clasen removed command line argument handling from the GTK+ initialization functions; gtk_init() now takes no arguments. Additionally, gdk_init() has been removed, as GDK ceased to be a separate shared library. The recommended way to write GTK+ applications remains using GtkApplication, which handles library initialization and the main loop
  • Timm Bäder merged his branch that makes GtkWidget visible by default, except for the GtkWindow and GtkPopover classes; Timm also removed gtk_widget_show_all() from the API, as it’s not useful any more
  • Timm modified GtkShortcutsShortcut, GtkFileChooserButton, and GtkFontButton to inherit directly from GtkWidget, taking advantage of the new scene graph API inside the base GtkWidget class

On the gtk-3-22 stable branch:

  • Ruslan Izhbulatov fixed the Windows backend for GDK to ensure that it works with remote displays
Bugs fixed
  • 777527 GDK W32: Invisible drop-down menus in GTK apps when working via RDP
  • 770112 The documented <alt>left shortcut doesn’t work on Wayland
  • 776225 [wayland] dropdown placed somewhere in the screen
  • 777363 [PATCH] wayland: avoid an unnecessary g_list_length call
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

Release of the pilot episode of an old project: “Ouhlala”

2017 starts well for ZeMarmot, with many new contributors and joy of life!

We recently found a former project, lost on old hard drives and dating from either the end of 2014 or early 2015 (before ZeMarmot), when we were still searching for a fun project to keep us busy. As you know, we did not choose this project, called “Ouhlala“, which explains why this small 30-second episode was forgotten on a hard drive. A little sad; therefore we are releasing it now.
Obviously this series is on indefinite standby since we now focus on ZeMarmot, and this is the first time we have publicly released this episode (it was only shown once, during a very small talk two years ago)!

The early concept of the series was to illustrate various idioms from all over the world with short animations (not necessarily in an intellectual way, more with funny views). The pilot episode focused on the French idiom “Jamais 2 sans 3” (~ things always come in threes).

License of the movie: Creative Commons BY-SA 4.0 International
As usual, everything is drawn in GIMP and the sound is recorded or edited in Ardour, except for a few CC0 sounds found on the awesome freesound.org.

Have a fun viewing, all!


Reminder: if you like what we do, you can fund our current project, ZeMarmot, at Patreon (USD) or Tipeee (EUR).
