GNOME.ORG

24 hours a day, 7 days a week, 365 days per year...

December 19, 2016

xf86-input-synaptics is not a Synaptics, Inc. driver

This is a common source of confusion: the legacy X.Org driver for touchpads is called xf86-input-synaptics but it is not a driver written by Synaptics, Inc. (the company).

The repository goes back to 2002 and for the first couple of years Peter Osterlund was the sole contributor. Back then it was called "synaptics" and really was a "synaptics device" driver, i.e. it handled PS/2 protocol requests to initialise Synaptics, Inc. touchpads. Evdev support was added in 2003, punting the initialisation work to the kernel instead. This was the groundwork for a generic touchpad driver. In 2008 the driver was renamed to xf86-input-synaptics and relicensed from GPL to MIT to take it under the X.Org umbrella. I've been involved with it since 2008 and the official maintainer since 2011.

For many years now, the driver has been a generic touchpad driver that handles any device that the Linux kernel can handle. In fact, most bugs attributed to the synaptics driver not finding the touchpad are caused by the kernel not initialising the touchpad correctly. The synaptics driver reads the same evdev events that are also handled by libinput and the xf86-input-evdev driver; any differences in behaviour are driver-specific and not related to the hardware. The driver handles devices from Synaptics, Inc., ALPS, Elantech, Cypress, Apple and even some Wacom touch tablets. We don't care about what touchpad it is as long as the evdev events are sane.

Synaptics, Inc.'s developers are active in kernel development to help get new touchpads up and running. Once the kernel handles them, the xorg drivers and libinput will handle them too. I can't remember any significant contribution by Synaptics, Inc. to the X.org synaptics driver, so they are simply neither to credit nor to blame for the current state of the driver. The top 10 contributors since August 2008 when the first renamed version of xf86-input-synaptics was released are:


8 Simon Thum
10 Hans de Goede
10 Magnus Kessler
13 Alexandr Shadchin
15 Christoph Brill
18 Daniel Stone
18 Henrik Rydberg
39 Gaetan Nadon
50 Chase Douglas
396 Peter Hutterer

There's a long tail of other contributors but the top ten illustrate that it wasn't Synaptics, Inc. that wrote the driver. Any complaints about Synaptics, Inc. not maintaining/writing/fixing the driver are missing the point, because this driver was never a Synaptics, Inc. driver. That's not a criticism of Synaptics, Inc. btw, that's just how things are. We should have renamed the driver to just xf86-input-touchpad back in 2008 but that ship has sailed now. And synaptics is about to be superseded by libinput anyway, so it's simply not worth the effort now.

The other reason I included the commit count in the above: I'm also the main author of libinput. So "the synaptics developers" and "the libinput developers" are effectively the same person, i.e. me. Keep that in mind when you read random comments on the interwebs; it makes it easier to identify people just talking out of their behind.

This week in GTK+ – 29

In this last week, the master branch of GTK+ has seen 20 commits, with 883 lines added and 2740 lines removed.

Planning and status
  • Alex Larsson worked on a simplification of GdkWindow that removed native and foreign child windows; the long term plan is to have native windowing system surfaces only for top levels
  • Alex also sent a review on gtk-devel-list of Benjamin’s wip/otte/rendernode branch, with ideas on future work for the GSK rendering API
  • Chun-wei Fan updated the Windows backend to ensure that it continues to build and work on the master branch
  • Benjamin Otte implemented the snapshot() virtual function in more GtkWidget subclasses.
  • The GTK+ road map is available on the wiki.

Notable changes

On the master branch:

  • Olivier Fourdan updated the Wayland backend to ensure that empty input shapes are updated on sub-surfaces when needed; this allows other toolkits, like Clutter, to use the GDK sub-surface API
  • Alex Larsson removed gdk_window_reparent() from the GDK API, since it’s unused and allows for the goal of only having top level GDK windows
  • Benjamin Otte removed the ad hoc code from GdkCellView to modify its background, as the cell view can use CSS to achieve the same (or better) results
  • Benjamin also removed the border node from the GtkFrame CSS nodes, as it performed additional immediate mode clipping that complicates the rendering

On the gtk-3-22 stable branch:

  • Emmanuele pushed fixes for the GL rendering when using GtkGLArea with OpenGL ES 2.0 implementations that are missing the GL_EXT_framebuffer_blit extension

Bugs fixed
  • 776132 Mention the difference between gdk_window_create_similar_image_surface and cairo_surface_create_similar_image
  • 774534 [wayland] input shape and opaque region not applied without begin_paint()/end_paint()

Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

What is the global cost of Autoconf?

Waiting for autoconf to finish is annoying. But have you ever considered how much time and effort is wasted every day doing that? Neither have I, but let's estimate it anyway.

First we need an estimate of how many projects are using Autotools. 100 is too low and 10 000 is probably too high, so let's say 1000. We also need an assumption of how long an average autoconf run takes. Variability here is massive ranging from 30 seconds to (tens of) minutes on big projects on small hardware. Let's say an average of one minute as a round number.

Next we need to know how many times builds are run during a day. Some projects are very active with dozens of developers whereas most have only one person doing occasional development. Let's say each project has on average 2 developers and each one does an average of 2 builds per day.

This gives us an estimate of how much time is spent globally just waiting for autoconf to finish:

1000 proj * 2 dev/proj * 2 build/dev * 1 minute/build =
   4000 minutes = 66 hours

66 hours. Every day. If you are a business person, feel free to do the math on how much that costs.

But it gets worse. Many organisations and companies build a lot of projects each day. As an example there are hundreds of companies that have their own (embedded) Linux distribution that they do a daily build on. Linux distros do rebuilds constantly and so on. If we assume 10 000 organisations that do a daily build and we do a lowball estimate of 5 dependencies per project (many projects are not full distros, but instead build a custom runtime package or something similar), we get different numbers:

10 000 organisations * 1 build/organisation * 5 dependencies/build * 1 min/dependency = 50 000 minutes = 833 CPU hours

That is, every single day over 800 CPU hours are burned just to run autoconf. Estimating CPU consumption is difficult, but based on statistics and division we can estimate an average consumption of 20 Watts. This amounts to roughly 16 kWh every day or, assuming 300 working days a year, 5 MWh a year.
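If you want to play with the assumptions, the whole estimate fits in a few lines. A minimal sketch in C, using the guessed numbers from above (they are estimates from the text, not measurements):

#include <stdio.h>

int main(void)
{
    /* Guessed inputs: proj * dev/proj * build/dev * min/build */
    double dev_minutes = 1000 * 2 * 2 * 1;
    printf("developer waiting: %.0f minutes = %.1f hours per day\n",
           dev_minutes, dev_minutes / 60);

    /* Guessed inputs: orgs * build/org * deps/build * min/dep */
    double ci_minutes = 10000 * 1 * 5 * 1;
    double cpu_hours = ci_minutes / 60;
    printf("daily builds: %.0f minutes = %.0f CPU hours per day\n",
           ci_minutes, cpu_hours);

    /* Assumed average draw of 20 W and 300 working days per year. */
    double kwh_per_day = cpu_hours * 20 / 1000;
    printf("energy: %.1f kWh per day, %.1f MWh per year\n",
           kwh_per_day, kwh_per_day * 300 / 1000);
    return 0;
}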

That is a lot of energy to waste just to be absolutely sure that your C compiler ships with stdlib.h.

Another look at GNOME recipes

It has been a few weeks since I first talked about this new app that I’ve started to work on, GNOME recipes.

Since then, a few things have changed. We have a new details page, which makes better use of the available space with a 2 column layout.

Among the improved details here is a more elaborate ingredients list. Also new is the image viewer, which lets you cycle through the available photos for the recipe without getting in the way too much.

We also use a 2 column layout when editing a recipe now.

Most importantly, as you can see in these screenshots, we have received some contributed recipes. Thanks to everybody who has sent us one! If you haven’t yet, please do. You may win a prize, if we can work out the logistics :-)

If you want to give recipes a try, the sources are here: https://git.gnome.org/browse/recipes/ and here is a recent Flatpak.


libinput touchpad pointer acceleration analysis

A long-standing criticism of libinput is its touchpad acceleration code, oscillating somewhere between "terrible", "this is bad and you should feel bad" and "I can't complain because I keep missing the bloody send button". I finally found the time and some more laptops to sit down and figure out what's going on.

I recorded touch sequences of the following movements:

  • super-slow: a very slow movement as you would do when pixel-precision is required. I recorded this by effectively slowly rolling my finger. This is an unusual but sometimes required interaction.
  • slow: a slow movement as you would do when you need to hit a target several pixels across from a short distance away, e.g. the Firefox tab close button
  • medium: a medium-speed movement though probably closer to the slow side. This would be similar to the movement when you move 5cm across the screen.
  • medium-fast: a medium-to-fast speed movement. This would be similar to the movement when you move 5cm across the screen onto a large target, e.g. when moving between icons in the file manager.
  • fast: a fast movement. This would be similar to the movement when you move between windows some distance apart.
  • flick: a flick movement. This would be similar to the movement when you move to a corner of the screen.
Note that all these are by definition subjective and somewhat dependent on the hardware. Either way, I tried to get something of a reasonable subset.

Next, I ran this through a libinput 1.5.3 augmented with printfs in the pointer acceleration code and a script to post-process that output. Unfortunately, libinput's pointer acceleration internally uses units equivalent to a 1000dpi mouse and that's not something easy to understand. Either way, the numbers themselves don't matter too much for analysis right now and I've now switched everything to mm/s anyway.

A note ahead: the analysis relies on libinput recording an evemu replay. That relies on uinput and event timestamps are subject to a little bit of drift across recordings. Some differences in the before/after of the same recording can likely be blamed on that.

The graph I'll present for each recording is relatively simple: it shows the velocity and the matching factor. The x axis is simply the events in sequence, the y axes are the factor and the velocity (note: two different scales in one graph). It also colours in the bits that see some type of acceleration. Green means "maximum factor applied", yellow means "decelerated". The purple "adaptive" means per-velocity acceleration is applied. Anything that remains white is used as-is (aside from the constant deceleration). This isn't really different from the first graph, it just shows roughly the same data in different colours.

Interesting numbers for the factor are 0.4 and 0.8. We have a constant deceleration factor of 0.4 on touchpads, i.e. a factor of 0.4 means "don't apply acceleration"; 0.8 is the maximum factor. The maximum factor is twice as big as the normal factor, so the pointer moves twice as fast. Anything below 0.4 means we decelerate the pointer, i.e. the pointer moves slower than the finger.

The super-slow movement shows that the factor is, aside from the beginning, always below 0.4, i.e. the sequence sees deceleration applied. The takeaway here is that acceleration appears to be doing the right thing: slow motion is decelerated, and while there may or may not be some tweaking to do, there is no smoking gun.


Super slow motion is decelerated.

The slow movement shows that the factor is almost always 0.4, aside from a few extremely slow events. This indicates that for the slow speed, the pointer movement maps exactly to the finger movement save for our constant deceleration. As above, there is no indicator that we're doing something seriously wrong.


Slow motion is largely used as-is with a few decelerations.

The medium movement gets interesting. If we look at the factor applied, it changes wildly with the velocity across the whole range between 0.4 and the maximum 0.8. There is a short spike at the beginning where it maxes out but the rest is accelerated on-demand, i.e. different finger speeds will produce different acceleration. This shows the crux of what a lot of users have been complaining about: a fairly slow motion still results in an accelerated pointer. And because the acceleration changes with the speed, the pointer behaviour is unpredictable.


In medium-speed motion acceleration changes with the speed and even maxes out.

The medium-fast movement shows almost the whole movement maxing out on the maximum acceleration factor, i.e. the pointer moves at twice the speed of the finger. This is a problem because this is roughly the speed you'd use to hit a "mentally preselected" target, i.e. you know exactly where the pointer should end up and you're just intuitively moving it there. If the pointer moves twice as fast, you're going to overshoot, and indeed that's what I observed during the touchpad tap analysis user study.


Medium-fast motion easily maxes out on acceleration.

The fast movement shows basically the same thing, almost the whole sequence maxes out on the acceleration factor so the pointer will move twice as far as intuitively guessed.


Fast motion maxes out acceleration.

So does the flick movement, but in that case we want it to go as far as possible and note that the speeds between fast and flick are virtually identical here. I'm not sure if that's me just being equally fast or the touchpad not quite picking up on the short motion.


Flick motion also maxes out acceleration.

Either way, the takeaway is simple: we accelerate too soon, and the window of adaptive acceleration is so narrow that it's very easy to top out. The simplest fix to get most touchpad movements working well is to increase the current threshold at which acceleration applies. Beyond that it's a bit harder to quantify, but a good idea seems to be to stretch out the acceleration function so that the factor changes at a slower rate as the velocity increases, and to raise the acceleration factor so we don't top out and keep going as the finger goes faster. This would be the intuitive expectation since it resembles physics (more or less).
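To make that shape concrete, here is a minimal sketch of such a profile in C. The 0.4 baseline is the constant factor discussed above; the threshold, growth rate and cap are invented placeholder values, not libinput's actual constants:

/* Map finger velocity (mm/s) to a pointer acceleration factor.
 * Baseline 0.4 is from the analysis above; everything else is a
 * placeholder to illustrate the shape of the proposed function. */
static double accel_factor(double velocity)
{
    const double baseline = 0.4;     /* "motion used as-is" factor */
    const double threshold = 100.0;  /* accelerate later: nothing below this */
    const double rate = 0.004;       /* stretched out: factor grows slowly */
    const double max_factor = 1.0;   /* higher cap so fast motion keeps going */

    if (velocity <= threshold)
        return baseline;             /* (deceleration of very slow motion elided) */

    double factor = baseline + (velocity - threshold) * rate;
    return factor < max_factor ? factor : max_factor;
}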

There's a set of patches on the list now that does exactly that. So let's see what the result of this is. Note ahead: I also switched everything to mm/s, which causes some numbers to shift slightly.

The super-slow motion is largely unchanged though the velocity scale changes quite a bit. Part of that is that the new code has a different unit which, on my T440s, isn't exactly 1000dpi. So the numbers shift and the result of that is that deceleration applies a bit more often than before.


Super-slow motion largely remains the same.

The slow motions are largely unchanged but more deceleration is now applied. Tbh, I'm not sure if that's an artefact of the evemu replay, the new accel code or the result of the not-quite-1000dpi of my touchpad.


Slow motion largely remains the same.

The medium motion is the first interesting one because that's where we had the first observable issues. In the new code, the motion is almost entirely unaccelerated, i.e. the pointer will move as the finger does. Success!


Medium-speed motion now matches the finger speed.

The same is true of the medium-fast motion. In the recording the first few events were past the new thresholds so some acceleration is applied, the rest of the motion matches finger motion.


Medium-fast motion now matches the finger speed except at the beginning where some acceleration was applied.

The fast and flick motion are largely identical in having the acceleration factor applied to almost the whole motion but the big change is that the factor now goes up to 2.3 for the fast motion and 2.5 for the flick motion, i.e. both movements would go a lot faster than before. In the graphics below you still see the blue area marked as "previously max acceleration factor" though it does not actually max out in either recording now.


Fast motion increases acceleration as speed increases.

Flick motion increases acceleration as speed increases.

In summary, what this means is that the new code accelerates later but when it does accelerate, it goes faster. I tested this on a T440s, a T450p and an Asus VivoBook with an Elantech touchpad (which is almost unusable with current libinput). They don't quite feel the same yet and I'm not happy with the actual acceleration, but for 90% of 'normal' movements the touchpad now behaves very well. So at least we go from "this is terrible" to "this needs tweaking". I'll go check if there's any champagne left.

December 17, 2016

On reversing btrees in job interviews

Hiring new people is hard. No-one has managed to find a perfect way to separate poorly performing job applicants from the good ones. The method used in most big companies nowadays is the coding interview. In it the applicant is given some task, such as reversing a btree, and they are asked to write the code to do that on a whiteboard without a computer.

At regular intervals this brings up Internet snarkiness.



The points these comments make are basically sound, but they miss the bigger point, which is this:

When you are asked to reverse a btree on a whiteboard, you are not being evaluated based on whether you can reverse a btree on a whiteboard.

Before we look into what you are evaluated on, some full disclosure may be in order. I have interviewed for Google a few times but each time they have chosen not to hire me. I have not interviewed people with this method nor have I received any training in doing so. Thus I don't know what aspects of interview performance are actually being evaluated, and presumably different companies have different requirements. These are just estimates which may be completely wrong.

Things tested before writing code

Getting an onsite interview is a challenge in itself. You have to pass multiple phone screens and programming tests just to get there. This test methodology is not foolproof. Every now and then (rarely, but still) a person manages to slip through the tests even though they are unqualified for the job. Starting simple is an easy way to make sure that the applicant really is up to the challenge.

The second big test is one of attitude. While we all like to think that programming is about having fun solving hard problems with modern functional languages, creating world-altering projects with ponies, rainbows and kittens, reality is more mundane. Over 80% of all costs in a software project are incurred during maintenance. This is usually unglamorous and monotonous work. Even new projects have a ton of “grunt work” that just needs to be done. The attitude one displays when asked to do this sort of menial task can be very revealing. Some people get irritated because such a task is “beneath their intelligence”. These people are usually poor team players who you definitely do not want to hire.

The most important aspect of team work is communication. Yes, even more important than coding. Reversing a btree is a simple problem, but the real work is in explaining what is being done and why. It is weird that even though communication is so important, very little effort is spent on teaching it (in many schools it is not taught at all). Being able to explain how to implement this kind of simple task is important, because a large fraction of your work consists of describing your solutions and ideas to other people. It does not matter if you can come up with the greatest ideas in the world if you cannot make others “get” them.

Writing the code

Writing the code to reverse a btree should not take more than a few minutes. It is not hard; anyone who has done basic CS courses can get it done in a jiffy. But writing the code is an almost incidental part of the interview; in fact it is just a springboard to the actual problem.
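For reference, here is one way the basic answer might look, reading "btree" as a plain binary tree (as the interview meme usually does) and swapping children recursively; a minimal C sketch:

#include <stddef.h>

struct node {
    int value;
    struct node *left, *right;
};

/* Reverse ("invert") the tree: mirror it by swapping every node's children. */
void reverse_tree(struct node *n)
{
    if (n == NULL)
        return;
    struct node *tmp = n->left;
    n->left = n->right;
    n->right = tmp;
    reverse_tree(n->left);
    reverse_tree(n->right);
}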

Things tested after the code is written

Once the basic solution is done, you will get asked to expand it. For a btree, questions might include:

  • If you used stack based iteration, what if the btree is deeper than there is stack space?
  • What if you need to be able to abort the reversal and restore the original tree given an external signal during reversal?
  • What if the btree is so large it won't fit in RAM? Or on one machine?
  • What if you need to reverse the tree while other threads are reading it?
  • What if you need to reverse the tree while other threads are mutating it?
  • And so on.

As we can see, even the most seemingly simple problem can be made arbitrarily hard. The point of the original question is not to test whether you can reverse the tree, but to find out how far you can go from there and where the limit of what you know lies. If you come out of an interview where you were asked to reverse a tree and the only thing you did was reverse the tree, you have most likely failed.

When the edge of knowledge has been reached comes the next evaluation point: what do you do when you don't know what to do. This is important because a large fraction of programming is solving problems you have not solved before. It can also be very frustrating because it goes against the natural problem solving instinct most people have. Saying “honestly at this point I would look this up on Google” gives you a reply of “Suppose you do that and find nothing, what do you do?”.

Different people react in different ways in this situation. These include:

  • freezing up completely
  • whining
  • trying random incoherent things
  • trying focused changes
  • trying a solution that won't solve the problem fully but is better than the current solution (and explicitly stating this before starting work)

This test is meant to reveal how the person works under pressure. This is not a hard requirement for many jobs, but even then those who can keep a cool head under stressful circumstances have a distinct advantage over those who can't.

Points of criticism

There are really only two reliable ways of assessing a candidate. The first one is word of mouth from someone who has worked with the person before. The second one is taking the candidate in and making them work with the team for a while. Neither of these approaches is scalable.

The coding interview aims to be a simplification of this process, where the person is tested on things close to “real world” problems. But it is not the real world, only a model. And like all models, it is a simplification that leaves some things out and emphasizes other things more than they should.

The above-mentioned lack of Internet connectivity is one. It can be very frustrating to be denied the most obvious and simplest approach (which is usually the correct one) for seemingly no good reason. There actually is a reason: to test behavior when information is not available, but it still may seem arbitrary and cold, especially since looking up said information usually yields useful data even if the exact solution is not found.

A different problem has to do with thinking. The interviewers encourage the applicant to think out loud as much as possible. The point of this is to see the thought process, what approaches they are going through and what things they are discarding outright. The reasoning behind this is sound but on the other hand it actively makes thinking about the problem harder for some people.

Every time you start explaining your thought process you have to do a mental context switch, and then another one to go back to thinking. Brain research has shown us that different people think in very different ways. Talking while pondering a problem is natural for some people; it is very taxing for others. This gives a natural advantage to people who talk while thinking, but there are not, as far as I know, any research results showing that these sorts of people perform better at a given job.

Mental context switches are not the most problematic thing. For some people, explaining their own thought process can be impossible. I don't know how common this is (or if it's even a thing in general), but at least for me personally, I can't really explain how I solve problems. I think about them but I can't really conceptualize how the solution is reached. Once a solution presents itself, explaining it is simple, but all details about the path that led there remain in the dark.

People who think in this manner have a bit of an unfortunate handicap, because it is hard to tell the difference between a person thinking without speaking and a person who has just frozen with no idea what to do. Presumably interviewers are taught to notice this and help the interviewee but since all books and guidelines emphasize explaining your thought process, this might not always succeed perfectly.

But does it really matter?

That's the problem with human issues. Without massive research into the issue, we don't know.

Meson ♥ xmlrpc-c

xmlrpc-c is a lightweight RPC library based on XML and HTTP. It has been outdated in Fedora for quite some time (we currently have 1.32.5, which is older than stable/1.43.06 and even older than super-stable/1.39.11), so I started taking a look at why.

Enrico Scholz, the previous maintainer of xmlrpc-c in Fedora, ported it to CMake in 2007 and supported that port until 2013. He even tried to get it accepted upstream, but the upstream developer didn't even reply (probably there was some communication, but not in a public place). So the version of xmlrpc-c we have in Fedora uses this patch-set. Because he gave up on maintaining the CMake port and was no longer interested in maintaining this package, it's outdated and unmaintained.

Why not just drop this patch-set and use upstream's buildsys? “Current buildsystem is broken and nearly unfixable except for the upstream author,” says Enrico, and I tend to agree with him. It's one of the worst buildsystems I have ever seen (it's common among scientists).

I started rebasing the CMake port, but after 15 minutes I gave up trying to check what each macro does there. It's way better than the original one, but CMake doesn't really make your life easier, so I stopped wasting time and started my own Meson port. It took me quite a while to get a working Meson port; most of the time I spent trying to understand how this make-ish thing generates a particular library, or even which libraries it generates. So, here it is: https://github.com/ignatenkobrain/xmlrpc-c/.

What did I achieve?

  • Cross-compilation for free.
  • Support for all OSes out of the box (this is a bit of a lie since I tested only on Linux, but it's fairly simple to adapt for others). In the original build-sys, to build on Windows you have to manually change some configs, run some different scripts, etc.
  • Maintainable build-sys code (1k lines of human-readable code vs 5k lines of function-over-function-with-shell-and-make).
  • Faster compiling. I can't say that the original build-sys is slow (actually it's very fast since it doesn't use libtool and all the other automake stuff), but with Meson I got it ~2 times faster.

What I didn’t achieve (yet?)

  • Building tests, examples and tools (on my TODO list since it's required for my update in Fedora).
  • Building wininet-client and libwww-client (I've never heard of these before and I'm not sure if anyone uses them, so they're not on my TODO list at this moment).
  • Pushing changes upstream (once I finish the complete port, I will try to get it accepted upstream).

A funny (actually, not so funny) thing I found in lib/expat/xmltok/Makefile:

# I can't figure out what XML_BYTE_ORDER is, but it doesn't look like the
# code has ever defined it.  That means it's treated like 0 in #if.  Since
# we started using the Gcc -Wundef option, that generates a warning, so
# se set it explicitly to 0 here.

CFLAGS_LOCAL = -DXML_BYTE_ORDER=0

I hope that the upstream developer (Bryan Henderson) can make the title work the other way around too, so that not only does Meson ♥ xmlrpc-c, but xmlrpc-c also ♥ Meson ;). This would make people (package maintainers in the first place) happy.

A bit more technical information…

The *FLAGS which I use:

CFLAGS="-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic"
CXXFLAGS="-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic"
LDFLAGS="-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld"

Lines of Code:

[brain@x1carbon xmlrpc-c]$ find -name '*.mk' -o -name '*.mk.in' -o -name configure.in -o -name Makefile -o -name GNUMakefile | grep -vP "^\./(?:tools|examples|test)" | xargs wc -l
...
  4805 total
[brain@x1carbon xmlrpc-c]$ find -name meson.build -o -name meson_options.txt | xargs wc -l
...
  947 total

Configure time:

[brain@x1carbon xmlrpc-c]$ time ./configure 
...
real	0m5.838s
user	0m1.931s
sys	0m3.495s
[brain@x1carbon xmlrpc-c]$ time meson build --buildtype=plain
...
real	0m2.651s
user	0m1.246s
sys	0m1.935s

Compile time:

[brain@x1carbon xmlrpc-c]$ time make -j4
...
real	0m36.514s
user	1m29.565s
sys	0m24.865s
[brain@x1carbon xmlrpc-c]$ time ninja-build -j4 -C build
...
real	0m19.673s
user	0m45.926s
sys	0m13.389s

So, I'm looking for a job

I decided a while ago to avoid personal posts here with my loud ramblings, because they end up being pointless. I'll make an exception now and bring up a little life issue that is a little bit sudden.

I'm looking for a job, in snowy Montréal (I can do remote too).

If you are looking for an experienced developer to join your team, I can be that person. Even for a limited time, I'm ready to start as soon as you are.

My résumé

And may you have happy holidays.

December 15, 2016

Thu 2016/Dec/15

Igalia is hiring. We're currently interested in Multimedia and Chromium developers. Check the announcements for details on the positions and our company.

Making your own retro keyboard

We're about a week before Christmas, and I'm going to explain how I created a retro keyboard as a gift to my father, who introduced me to computers when he brought back a Thomson TO7 home, all the way back in 1985.

The original idea was to fit a smaller computer, such as a CHIP or Raspberry Pi, inside a Thomson computer, but the software update support would have been difficult, the use would have been limited to the builtin programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

How do keyboards work?

Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards present in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors builtin.
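To give a rough idea of what the matrix means in code, here is an illustrative C sketch, not the actual TMK sources; the I/O helpers are hypothetical stand-ins for AVR port register accesses:

#define NUM_ROWS 8
#define NUM_COLS 8

/* Hypothetical I/O helpers; on a real AVR these would poke port registers. */
extern void drive_row_low(int row);
extern void release_row(int row);
extern int  column_is_low(int col);    /* low = key pressed */

static unsigned char matrix[NUM_ROWS]; /* one bit per column */

/* Scan the whole matrix: drive one row at a time and read every column.
 * The builtin pull-ups mentioned above keep idle column lines high. */
void matrix_scan(void)
{
    for (int r = 0; r < NUM_ROWS; r++) {
        drive_row_low(r);
        unsigned char bits = 0;
        for (int c = 0; c < NUM_COLS; c++)
            if (column_is_low(c))
                bits |= (unsigned char)(1 << c);
        matrix[r] = bits;
        release_row(r);
    }
}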

The keyboard hardware

I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but that would have lost all the charm of it (it just looks like a PC keyboard); some other computers have much bigger form factors that include cartridge, cassette or floppy disk readers.

The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



Whoot! The keyboard matrix in detail, no need for us to discover it with a multimeter.

That needs a wash in soapy water

After opening up the computer, and eventually giving the internals, and the keyboard especially if it has mechanical keys, a good clean, we'll need to see how the keyboard is connected.

Finicky metal covered plastic

Those keyboards usually are membrane keyboards, with pressure pads, so we'll need to either find replacement connectors at our local electronics store, or desolder the ones on the motherboard. I chose the latter option.

Desoldered connectors

After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which connector on the matrix. We can start soldering.

The micro-controller

The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, so as to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board, to avoid interferences from components on the board, using a tile cutter. This is completely optional, and if you're only going to use firmware that you already know at least somewhat works, you can set a key combo to go into firmware upload mode in the firmware. We'll get back to that later.

As for connecting and soldering to the pins, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that if you deviate from the pinout used in your firmware, you'll need to make changes to it. We'll come back to that again in a minute.

The soldering

Colorful tinning

I wanted to keep the external ports full, so it didn't look like there were holes in the case, but there was enough headroom inside the case to fit the original board, the teensy and pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

As always, make sure early on that things will fit, especially the cables!

The unnecessary pollution

The firmware

Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's a simple:


sudo dnf install -y avr-libc avr-binutils avr-gcc

I also compiled and installed in my $PATH the teensy_cli firmware uploader, and fixed up the udev rules. And after a "make teensy" and a button press...

It worked first time! This is a good time to verify that all the keys work, and you don't see doubled-up letters because of short circuits in your setup. I had 2 wires touching, and one column that just didn't work.

I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

Some advice

This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret it...
  • Don't forget the size, length and non-flexibility of cables in your design
  • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
  • Use breadboard cables and pins to connect things, if you have the room
  • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
That last one explains the slightly funny cabling of my keyboard.

Finishing touches

All Sugru'ed up

To finish things off nicely, I used Sugru to stick the USB cable coming out of the machine in place. And as earlier, it avoids having an opening onto the internals.

There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up-to-par).

Prototype and final hardware

All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

(If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)

New GNOME API Key for Google Services

Tl;dr: update evolution-data-server to stop your client from misbehaving; update gnome-online-accounts to shield yourself from other buggy clients.

Recently, a few bugs in evolution-data-server were causing various GNOME components to hit Google’s daily limit for their CalDAV and Tasks APIs. At least evolution, gnome-calendar and gnome-todo were affected. The bugs have since been fixed, but until every single user out there installs the fix, everybody will be susceptible even if they have a fixed copy of evolution-data-server. This is because Google identifies the clients by the OAuth 2.0 API key used to access their services, and not the version of the code running on them.

(Screenshot: the evolution-data-server “quota exceeded” error message.)

We are therefore phasing out the old Google API key used by GNOME so that users updating their systems will have no connection to the one that was tainted by these bugs.

Fedora Users

If you are using Fedora 25, make sure you have evolution-data-server-3.22.3-1.fc25 and gnome-online-accounts-3.22.3-1.fc25. For Fedora 24, the versions are evolution-data-server-3.20.6-1.fc24 and gnome-online-accounts-3.20.5-1.fc24.

GNOME Distributors

If you are distributing GNOME, make sure that you are shipping evolution-data-server-3.22.3 and gnome-online-accounts-3.22.3, or evolution-data-server-3.20.6 and gnome-online-accounts-3.20.5.


2016 year in review, part 1

2016 has been a great year in open source software, and I wanted to take a moment to reflect on some of my favorite articles from this year. In no particular order:

1. Teaching usability of open source software
In Fall semester, I taught an online class on the usability of open source software (CSCI 4609 Processes, Programming, and Languages: Usability of Open Source Software). This was not the first time I helped others learn about usability testing in open source software, having mentored GNOME usability testing for both Outreachy and the Outreach Program for Women, but this was the first time I taught a for-credit class. In this article, I shared a reflection of my first semester teaching this class. I also received feedback from teaching the class, which I'll use to make the class even better next time! I teach again in Spring semester, starting in January. And I shared a related article in preparation for next semester, looking ahead to teaching the class again.

2. Cultural context in open source software
Have you ever worked on an open source software project—and out of nowhere, a flame war starts up on the mailing list? You review the emails, and think to yourself "Someone over-reacted here." The problem may not be an over-reaction per se, but rather a cultural conflict. I don't mean to reduce entire cultures to a simple scale, but understanding the general communication preferences of different cultures can help you in any open source project that involves worldwide contributors. A brief introduction.

3. First contribution to usability testing
In order to apply for an Outreachy internship, we ask that you make an initial contribution. For usability testing, I suggest a small usability test with a few testers, then write a summary about what you learned and what you would do better next time. This small test is a great introduction to usability testing. It exercises the skills you'll grow during the usability testing internship, and demonstrates your interest in the project. This is a guest post from Ciarrai, who applied for (and was accepted to) the usability testing internship this year. Ciarrai's summary was an excellent example of usability testing, and I posted it with their permission.

4. How to create a heat map
The traditional way to present usability test results is to share a summary of the test itself. What worked well? What were the challenges? This written summary works well, and it's important to report your findings accurately, but the summary requires a lot of reading on the part of anyone who reviews the results. And it can be difficult to spot problem areas. When presenting my usability test results, I still provide a summary of the findings. But I also include a "heat map." The heat map is a simple information design tool that presents a summary of the test results in a novel way.

5. Visual brand and user experience
How does the visual brand of a graphical desktop affect the user experience? Some desktop environments try to brand their desktop with visual elements including a distinctive wallpaper. But it's not just images that define a visual identity for a desktop environment. The shapes and arrangements used in the presentation also define a user interface's visual brand. In this way, the shapes influence our perception and create an association with a particular identity. We recognize a particular arrangement and connect that pattern with a brand.

I'll share the remainder of the list next week.

Typing emoji is weird

I was thrilled to read in a recent article on Fedora Magazine that Fedora 25 now includes a feature to type emoji. While I don't often use emoji, I do rely on them in certain situations, usually when texting with friends. It would be nice to use emoji in some of my desktop communications, but I was disappointed after reading the description of how to do it. A summary of the process, from the article:
Fedora 25 Workstation ships with a new feature that, when enabled, allows you to quickly search, select and input emoji using your keyboard.

The new emoji input method ships by default in Fedora 25 Workstation, but to use it you will need to enable it using the Region and Language settings dialog.

Next, enable an advanced input method (which is powered behind the scenes by iBus). The advanced input methods are identifiable in the list by the cogs icon on the right of the list. In this example below, we have added English—US (Typing Booster)

Now that the Emoji input method is enabled, search for emoji by pressing the keyboard shortcut Ctrl+Shift+e, which presents an @ symbol in the currently focused text input. Begin typing the name of the emoji you want to use, and a pop-over will appear allowing you to browse the matches. Finally, use the keyboard or mouse to make your selection, and the glyph will be placed as input.
If you didn't follow that, here's a recap: add a plug-in to activate emoji, then do this whenever you want to type an emoji:

  1. hit a control-key sequence
  2. look for the @ character to appear
  3. type some text that describes the emoji
  4. if the emoji you want pops up, click on it



Maybe that seems straightforward to you. To me, that seems like a very opaque way to insert an emoji. What emoji can I type? This method is probably okay if I know what emoji are there, such as "heart" or "tree," like in the example. But if you don't know the available emoji, you may have a hard time. Emojipedia says there are as many as 1,851 emoji characters supported on current platforms, up to and including Unicode 9.0. Good luck finding the one you want through the @ method. This doesn't appear to be good usability.

I don't know the technical details of what's happening behind the scenes, but I wonder why we couldn't provide an easier-to-use way to add emoji. Here's one example:

  1. right-click wherever you are typing text
  2. select "Insert emoji" from the pop-up menu
  3. get a dialog box with different emoji on it
  4. click on the emoji you want

I would organize the emoji pop-up menu by type. On my Android phone, I get something similar: all the "face" emoji on one page, all the "nature" emoji on another, etc.

Basically, I think it would be easier to add an item in a right-click menu to insert an emoji. I'm sure there are many challenges here, like maybe not all applications that let you type text support a right-click menu. More likely, it may be difficult or impossible to insert a new item in a program's right-click menu.

But my view is there should be an easier way to insert an emoji.
image: Fedora Magazine, used for illustration

December 14, 2016

Rust and Automake - Part 2

I already talked about Rust and Automake. This is a follow-up.

One of the caveat of the previous implementation is that it doesn't pass make distcheck as it doesn't work when srcdir != builddir.

The main problem is that we need to tell cargo to build in a different directory. Fortunately Cargo PR #1657 has taken care of this, so all we need is a carefully crafted CARGO_TARGET_DIR environment variable to set before calling cargo. You can substitute abs_top_builddir for the top build directory.

But let's show the patch. Nothing complicated.

December 13, 2016

Overview of the VP9 video codec

When I first looked into video codecs (back when VP8 was released), I imagined them being these insanely complex beasts that required multiple PhDs to understand. But as I quickly learned, video codecs are quite simple in concept. Now that VP9 is gaining support from major industry players (see supporting statements in December 2016 from Netflix and Viacom), I figured it’d be useful to explain how VP9 works.

Why we need video codecs

Think of that new television that you’ve been dreaming of buying, with that fancy marketing term, UHD (ultra high-definition). In numbers, this is 3840×2160 pixels at 60 fps. Let’s assume it’s HDR, so 10 bits/component, at YUV-4:2:0 chroma subsampling. Your total uncompressed data rate is:

3840 x 2160 x 60 x 10 x 1.5 = 7,464,960,000 bits/sec =~ 7.5 Gbps
(the factor of 1.5 accounts for the two chroma planes, each at a quarter of the luma resolution)

And since that would RIP your internet connection when watching your favourite internet stream, you need video compression.

Basic building blocks

A video stream consists of frames, and each frame consists of color planes. Most of us will be familiar with the RGB (red-green-blue) colorspace, but video typically uses the YUV (Y=luma/brightness, U/V=blue/red chroma difference) colorspace. What makes YUV attractive from a compression point-of-view is that most energy will be concentrated in the luma plane, which provides us with a focus point for our compression techniques. Also, since our eyes are less perceptive to color distortion than to brightness distortion, the chroma planes typically have a lower resolution. In YUV-4:2:0, the chroma planes have only half the width/height of the luma plane, so for 4K video (3840×2160), the chroma resolution is only 1920×1080 per plane:

VP9 also supports other chroma subsamplings, such as 4:2:2 and 4:4:4. Next, frames are sub-divided into blocks. For VP9, the base block size is 64×64 pixels (similar to HEVC). Older codecs (like VP8 or H.264) use 16×16 as the base block size, which is one of the reasons they perform less well for high-resolution content. These blocks are the basic unit in which video codecs operate:

Now that the fundamentals are in place, let’s look closer at these 64×64 blocks.

Block decomposition

Each 64×64 block goes through one or more rounds of decomposition, which is similar to quad-tree decomposition used in HEVC and H.264. However, unlike the two partitioning modes (none and split) in quad-tree, VP9 block decomposition has 4 partitioning modes: none, horizontal, vertical and split. None, horizontal and vertical are all terminal nodes, whereas split recurses down at the next block level (32×32), where each of the 4 sub-blocks goes through a subsequent round of the decomposition process. This process can continue up until the 8×8 block level, where all partitioning modes are terminal, which means 4×4 is the smallest possible block size.
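In rough C, the recursion can be sketched as follows; the reader and decoder functions are invented for illustration and are not libvpx's API:

enum partition { PART_NONE, PART_HORZ, PART_VERT, PART_SPLIT };

/* Hypothetical decoder hooks. */
extern enum partition read_partition(int x, int y, int size);
extern void decode_block(int x, int y, int w, int h);

/* Recursively decompose a square block; size starts at 64. */
void decompose(int x, int y, int size)
{
    enum partition p = read_partition(x, y, size);
    int half = size / 2;

    switch (p) {
    case PART_NONE:                 /* terminal: one size x size block */
        decode_block(x, y, size, size);
        break;
    case PART_HORZ:                 /* terminal: two size x half blocks */
        decode_block(x, y, size, half);
        decode_block(x, y + half, size, half);
        break;
    case PART_VERT:                 /* terminal: two half x size blocks */
        decode_block(x, y, half, size);
        decode_block(x + half, y, half, size);
        break;
    case PART_SPLIT:
        if (size == 8) {            /* at 8x8 split is terminal: four 4x4 blocks */
            decode_block(x, y, half, half);
            decode_block(x + half, y, half, half);
            decode_block(x, y + half, half, half);
            decode_block(x + half, y + half, half, half);
        } else {                    /* recurse into the four quadrants */
            decompose(x, y, half);
            decompose(x + half, y, half);
            decompose(x, y + half, half);
            decompose(x + half, y + half, half);
        }
        break;
    }
}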

If you do this for the blocks we highlighted earlier, the full block decomposition partitioning (red) looks like this:

 

Next, each terminal block goes through the main block decoding process. First, a set of block elements are coded:

  • the segment ID allows selecting a per-block quantizer and/or loop-filter strength level that is different from the frame-wide default, which allows for adaptive quantization/loop-filtering. The segment ID also allows encoding a fixed reference and/or marking a block as skipped, which is mainly useful for static content/background;
  • the skip flag indicates – if true – that a block has no residual coefficients;
  • the intra flag selects which prediction type is used: intra or inter;
  • lastly, the transform size defines the size of the residual transform through which residual data is coded. The transform size can be 4×4, 8×8, 16×16 or 32×32, and cannot be larger than the block size. This is identical to HEVC. H.264 only supports up to 8×8.

Depending on the value of the transform size, a block can contain multiple transform blocks (blue). If you overlay the transform blocks on top of the block decomposition from earlier, it looks like this:

Inter prediction

If the intra flag is false, each block will predict pixel values by copying pixels from 1 or 2 previously coded reference frames at specified pixel offsets, called motion vectors. If 2 reference frames are used, the prediction values from each reference frame at the specified motion vector pixel offset will be averaged to generate the final predictor. Motion vectors have up to 1/8th-pel resolution (i.e. a motion vector increment by 1 implies a 1/8th pixel offset step in the reference), and the motion compensation functions use 8-tap filters for sub-pixel interpolation. Notably, VP9 supports selectable motion filters, a feature that does not exist in HEVC/H.264. Chroma planes will use the same motion vector as the luma plane.
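The averaging of the two references is simple enough to sketch; the per-reference predictions would come out of the 8-tap sub-pixel interpolation described above, and the +1 rounding here is my assumption:

/* Combine two single-reference predictions into the compound predictor. */
void compound_average(const unsigned char *pred0, const unsigned char *pred1,
                      unsigned char *dst, int w, int h, int stride)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            dst[y * stride + x] = (unsigned char)
                ((pred0[y * stride + x] + pred1[y * stride + x] + 1) >> 1);
}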

In VP9, inter blocks code the following elements:

  • the compound flag indicates how many references will be used for prediction. If false, this block uses 1 reference, and if true, this block uses 2 references;
  • the reference selects which reference(s) is/are used from the internal list of 3 active references per frame;
  • the inter mode specifies how motion vectors are coded, and can have 4 values: nearestmv, nearmv, zeromv and newmv. Zeromv means no motion. In all other cases, the block will generate a list of reference motion vectors from nearby blocks and/or from this block in the previous frame. If inter mode is nearestmv or nearmv, this block will use the first or second motion vector from this list. If inter mode is newmv, this block will have a new motion vector;
  • the sub-pixel motion filter can have 3 values: regular, sharp or smooth. It defines which 8-tap filter coefficients will be used for sub-pixel interpolation from the reference frame, and primarily affects the appearance of edges between objects;
  • lastly, if the inter mode is newmv, the motion vector residual is added to the nearestmv value to generate a new motion vector.

If you overlay motion vectors (using cyan, magenta and orange for each of the 3 active references) on top of the transform/block decomposition image from earlier, it’s easy to notice that the motion vectors essentially describe the motion of objects between the current frame and the reference frame. Note how the magenta/cyan motion vectors often have opposite directions, because one reference is located (temporally) before this frame, whereas the other reference is located (temporally) after this frame.

Intra Prediction

In case no acceptable reference is available, or no motion vector for any available reference gives acceptable prediction results, a block can use intra prediction. For intra prediction, edge (top, left and top-left) pixels are used to predict the contents of the current block. The exact mechanism through which the edge pixels are filtered to generate the predictor is called the intra prediction mode. There are three types of intra predictors:

  • directional, with 8 different values, each indicating a different directional angle – see schematic;
  • TM (true-motion), where each predicted pixel(x,y) = top(x) + left(y) – topleft;
  • and DC (direct current), where each predicted pixel(x,y) = average(top(1..n) and left(1..n)).

This makes for a total of 10 different intra prediction modes. That is more than H.264, which has only 4 or 9 intra prediction modes depending on the block size and plane type (luma vs. chroma): DC, planar, horizontal and vertical, or DC plus 8 directional ones. It is less than HEVC, which has 35 (DC, planar and 33 directional angles).
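The TM and DC formulas from the list above translate almost directly into code. A minimal sketch for a square n×n block follows; the clamping and rounding details are assumptions for illustration:

static unsigned char clamp255(int v)
{
    return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* TM: predicted(x,y) = top[x] + left[y] - topleft, clamped to 8 bits. */
void tm_predict(unsigned char *dst, int stride, int n,
                const unsigned char *top, const unsigned char *left,
                unsigned char topleft)
{
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
            dst[y * stride + x] = clamp255(top[x] + left[y] - topleft);
}

/* DC: every pixel is the (rounded) average of the top and left edge pixels. */
void dc_predict(unsigned char *dst, int stride, int n,
                const unsigned char *top, const unsigned char *left)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += top[i] + left[i];
    unsigned char dc = (unsigned char)((sum + n) / (2 * n));
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
            dst[y * stride + x] = dc;
}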

Although a mode is shared at the block level, intra prediction happens at the transform block level. If one block contains multiple transform blocks (e.g. a block of 8×4 will contain two 4×4 transform blocks), both transform blocks re-use the same intra prediction mode, so it is only coded once.

Intra prediction blocks contain only two elements:

  • luma intra prediction mode;
  • chroma intra prediction mode.

So unlike inter blocks, where all planes use the same motion vector, intra prediction modes are not shared between luma and chroma planes. This image shows luma intra prediction modes (green) overlaid on top of the transform/block decompositions from earlier:

Residual coding

The last part in the block coding process is the residual data, assuming the skip flag was false. The residual data is the difference between the original pixels in a block and the predicted pixels in a block. These pixels are then transformed using a 2-dimensional transform, where each direction (horizontal and vertical) is either a 1-dimensional (1-D) discrete cosine transform (DCT) or an asymmetric discrete sine transform (ADST). The exact combination of DCT/ADST in each direction depends on the prediction mode:

  • intra prediction modes originating from the top edge (the vertical intra prediction modes) use ADST vertically and DCT horizontally;
  • modes originating from the left edge (the horizontal intra prediction modes) use ADST horizontally and DCT vertically;
  • modes originating from both edges (TM and the down/right diagonal directional intra prediction mode) use ADST in both directions;
  • all inter modes, DC and the down/left diagonal intra prediction mode use DCT in both directions.

By comparison, HEVC only supports ADST combined in both directions, and only for the 4×4 transform, where it’s used for all intra prediction modes. All inter modes and all other transform sizes use DCT in both directions. H.264 does not support ADST.
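That mapping condenses into a small lookup; a sketch with simplified mode names, which are stand-ins for illustration and not the real bitstream enums:

enum intra_mode { MODE_DC, MODE_V, MODE_H, MODE_TM, MODE_D45, MODE_D135 };
enum tx_type { DCT_DCT, ADST_DCT, DCT_ADST, ADST_ADST };  /* vertical_horizontal */

/* Pick the 2-D transform combination for a block, per the rules above. */
enum tx_type pick_tx_type(int is_inter, enum intra_mode mode)
{
    if (is_inter)
        return DCT_DCT;               /* all inter modes: DCT both ways */
    switch (mode) {
    case MODE_V:    return ADST_DCT;  /* from the top edge: ADST vertically */
    case MODE_H:    return DCT_ADST;  /* from the left edge: ADST horizontally */
    case MODE_TM:                     /* both edges: TM and down/right diagonal */
    case MODE_D135: return ADST_ADST;
    default:        return DCT_DCT;   /* DC, down/left diagonal (D45), ... */
    }
}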

(The original post illustrates the 1-D DCT and ADST basis functions with figures here.)

Lastly, the transformed coefficients are quantized to reduce the amount of data, which is what provides the lossy part of VP9 compression. As an example, the quantized, transformed residual of the image that we looked at earlier looks like this (contrast increased by ~10x):

Coefficient arrays are coded into the bitstream using scan tables. The purpose of a transform is to concentrate energy in fewer significant coefficients (with quantization reducing the non-significant ones to zero or near-zero). Following from that, the purpose of scan tables is to find a path through the 2-dimensional array of coefficients that is most likely to find all non-zero coefficients while encountering as few zero coefficients as possible. Classically, most video codecs (such as H.264) use scan tables derived from the zig-zag pattern. VP9, interestingly, uses a slightly different pattern, where scan tables mimic a quarter-circle connecting points ordered by distance to the top/left origin. For example, the 8×8 zig-zag (left) and VP9 (right) scan tables (showing ~20 coefficients) compare like this:

(Scan order figures from the original post: zig-zag on the left, VP9 on the right.)

In cases where the horizontal and vertical transform (DCT vs. ADST) are different, the scan table also has a directional bias. The bias stems from the fact that transforms combining ADST and DCT typically concentrate the energy in the ADST direction more than in the DCT direction, since the ADST is the spatial origin of the prediction data. These scan tables are referred to as row (used for ADST vertical + DCT horizontal 2-D transforms) and column (used for ADST horizontal + DCT vertical 2-D transforms) scan tables. For the 8×8 transform, these biased row (left) and column (right) scan tables (showing ~20 coefficients) look like this:

(Scan order figures from the original post: row scan on the left, column scan on the right.)

The advantage of the VP9 scan tables, especially at larger transform sizes (32×32 and 16×16) is that it leads to a better separation of zero- and non-zero-coefficients than classic zig-zag or related patterns used in other video codecs.
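A toy way to reproduce the flavour of these tables is to sort coefficient positions by their distance from the top/left origin. This is only an approximation for illustration; the real tables are hard-coded in the codec, and the row/column variants additionally weight one axis:

#include <stdlib.h>

#define N 8

struct pos { int row, col; };

/* Order positions by squared distance from the origin: a rough
 * approximation of VP9's quarter-circle scan, not the exact tables. */
static int cmp_dist(const void *a, const void *b)
{
    const struct pos *p = a, *q = b;
    return (p->row * p->row + p->col * p->col)
         - (q->row * q->row + q->col * q->col);
}

void build_scan(struct pos scan[N * N])
{
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            scan[r * N + c] = (struct pos){ r, c };
    qsort(scan, N * N, sizeof(scan[0]), cmp_dist);
}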

Loopfilter

The aforementioned processes allow you to compress individual blocks in VP9. However, they also lead to blocking artifacts at the edges between transform blocks and prediction blocks. To resolve that (i.e. smooth out unintended hard edges), VP9 applies a loopfilter after block decoding. There are 4 separate loopfilters that run at block edges, modifying either 16, 8, 4 or 2 pixels. Smaller transforms allow only the smaller filters, whereas for the largest transforms, all filters are available. Which one runs for each edge depends on filter strength and edge hardness.

On the earlier-used image, the loopfilter effect (contrast increased by ~100x) looks like this:

Arithmetic coding and adaptivity

So far, we’ve comprehensively discussed the algorithms used in VP9 block coding and image reconstruction. We have not yet discussed how symbols are aggregated into a serialized bitstream. For this, VP9 uses a binary arithmetic range coder. Each symbol (e.g. an intra mode choice) has a probability table associated with it. These probabilities can either be global defaults, or they can be explicitly updated in the header of each frame (forward updates). Based on the entropy of the decoded data of previous frames, the probabilities are updated before the coding of the next frame starts (backward updates). This means that probabilities effectively adapt to the entropy of the data – but without explicit signaling. This is very different from how H.264/HEVC use CABAC: CABAC uses per-symbol adaptivity (i.e. after each bit, the associated probability is updated – which is useful, especially in intra/keyframe coding) but resets state between frames, which means it can’t take advantage of statistical redundancy between frames. VP9 keeps probabilities constant during the coding of each frame, but maintains state (and adapts probabilities) between frames, which provides compression benefits during long stretches of inter frames.
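
To illustrate the flavor of a backward update, here is a conceptual sketch only; the real VP9 rule uses specific saturating integer merges and per-context update factors, and the function below is my own invention:

#include <cstdint>

// Blend an old symbol probability towards the frequency observed in the
// frame that was just decoded. Purely illustrative, not VP9's exact math.
uint8_t adapt_prob(uint8_t old_prob, unsigned count_zero, unsigned count_one) {
    unsigned total = count_zero + count_one;
    if (total == 0)
        return old_prob;                              // symbol never coded: keep as-is
    unsigned observed = (255 * count_zero) / total;   // empirical P(bit == 0)
    unsigned updated = (3 * old_prob + observed) / 4; // move 1/4 towards observed
    return (uint8_t) (updated == 0 ? 1 : updated);    // probabilities stay in [1, 255]
}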

Summary

The above should give a pretty comprehensive overview of algorithms and designs of the VP9 video codec. I hope it helps in understanding why VP9 performs better than older video codecs, such as H.264. Are you interested in video codecs and would you like to write amazing video encoders? Two Orioles, based in New York, is hiring video engineers!

Improving notifications in GNOME

Like a lot of people, I was really happy with the big notification redesign that landed in 3.16. However, I’ve been itching to return to notifications for a little while, in order to improve what we have there. Prompted by some conversations we had during the Core Apps Hackfest last month, I finally got around to spending a bit of time on this.

One of the main goals for this is to focus the UI – to improve the signal to noise ratio, by refining and focusing the interface. This can be seen in the new mockups that have been posted for the shell’s calendar drop down – the number of extraneous visual elements is dramatically reduced, making it much easier to pick out interesting information.

The other aspect of my recent work on notifications is to improve the quality of the notifications themselves. Notifications for events that aren’t of interest, or which stick around for longer than necessary, can undermine the entire notifications experience.

This is why, aside from updating the shell’s UI for presenting notifications, the other aspect of this effort is to ensure that applications are using notifications effectively. This includes making sure that applications:

  • Only show notifications that are actually interesting to the user.
  • Revoke notifications once they have outlived their usefulness.
  • Give every notification a useful title and body text.

I’ve recently reviewed a bunch of GNOME applications to ensure that they are using notifications correctly, and have created a wiki page to track the issues that I found.

If you spot any GNOME applications not using notifications effectively, it would be great if you could help by filing bugs and adding them to the wiki page. Likewise, if you work on an application that uses notifications, make sure to check that you’re following the guidelines. It would be really great if GNOME could achieve across the board improvements in this area for the next release!

Python workshop on FEDORA and GNOME at UNMSM

Today, I have led a workshop that lasted 2 hours at UNMSM (Universidad Mayor de San Marcos).  Students of the Math School organized the workshop and these were the ads:

I presented FEDORA and GNOME; some students said that they had used Ubuntu and Unity before, but never FEDORA nor GNOME.  I started with the history of GNU/Linux, then the Red Hat history, to land into the FEDORA overview, how I got involved, and ways of contributing.

Students who answered questions correctly won a prize, such as DVDs and stickers from the FEDORA and GNOME events I did. Some of them were happy to get balloons of the projects 🙂

A group photograph was taken before installing FEDORA 25 on their machines:

We installed 64-bit FEDORA 25 on VMware, and all of them succeeded!

Finally, I presented GNOME to the audience, and the Outreachy and GSoC programs:

Thanks so much to the organizers, CIS (Computational Intelligence Society) of the UNMSM (Universidad Nacional Mayor de San Marcos), and for the support of the enthusiastic guys!



GomDocument: Providing best of two worlds

In the upcoming GXml 0.14, there will be a new DOM4 implementation called GomDocument; along with GomElement and GomObject, it provides support for serializing and deserializing GObject objects to and from XML files.

I’ve made initial measurements of GomDocument’s performance against the other implementations in GXml: GDocument and TDocument. The former is a libxml2 implementation with bindings to GObject and DOM4, while the latter is a pure GObject implementation with no DOM4 support (yet?).

In the performance test, we use a 3.5 MB file with a lot of nodes, read it, create an internal in-memory XML tree, and then fill out the properties of your GObject class; we measure the time required to deserialize, and then the time to write it back to an XML file.

We can see GDocument taking a lot of time to deserialize, because it uses libxml2 to create a tree and GXml.SerializableObjectModel to fill out your GObject class. Serialization is very competitive, because all implementations use almost the same engine: direct access to libxml2 through xmlWriter.

GomDocument is very competitive compared to TDocument, and performs much better than GDocument.

Please note that GomDocument’s time to write to disk is not available separately, because serialization and deserialization are done in one go; maybe we can reduce this time (reading should cost the same as the others, because the file is loaded into memory before being read), and then this would make GomDocument perform even better!

On memory usage, TDocument requires much more memory, and GDocument is the one to beat. GomDocument now requires almost the same memory as GDocument.

Conclusion

GomDocument is the way to go as the serialization framework for GXml. These results will help LibreSCL provide a very competitive serialization/deserialization framework, and make it worth considering a web-based application using GObject Introspection and Python to access very large files, without requiring lots of memory and maybe with a good response time on reading.

I’m starting to stabilize GXml for the 0.14 release, as soon as I can find the remaining bugs in XML file parsing. GXml’s GomDocument has a set of error checks, based on DOM4, that are not present in the other GXml implementations, making it more sensitive to some files that do not affect (or are not detected by) libxml2, for example.

December 12, 2016

Quiztime II

So, following up from the last quiztime, which was about the importance of explicit linking, here is another case from the wonderful world of shared libraries.

This time we study the implications of dlopen, its parameters and C++. Consider the program and module below. If you run that, it will crash somewhat obscurely in libstdc++. Why?

#0  0x00007f687b2eb126 in std::ostream::sentry::sentry(std::ostream&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007f687b2eb889 in std::basic_ostream<char, std::char_traits<char> >& std::__ostream_insert<char, std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*, long) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2  0x00007f687b2ebd57 in std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x00007f687a6f01a0 in Impl::print (this=0x558a9bc1f2e0, str="Hello") at module.cc:24
#4  0x0000558a99de3d1b in main (argc=1, argv=0x7ffca28cf538) at main.cc:31
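
// interface.h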
#ifndef INTERFACE_H
#define INTERFACE_H
 
#include <string>
 
class IInterface {
    public:
        virtual ~IInterface() {};
        virtual void print(const std::string& str) = 0;
};
 
#endif // INTERFACE_H
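
// module.cc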
#include "interface.h"
 
#include <iostream>
 
class Impl : public IInterface {
    public:
        Impl();
        virtual ~Impl();
        void print(const std::string& str);
};
 
extern "C" {
void *entry_point(void)
{
    return new Impl;
}
};
 
Impl::Impl() {};
Impl::~Impl() {};
 
void Impl::print(const std::string& str)
{
    std::cerr << "Some text to print\n";
    std::cerr << "Got passed this: " << str << "\n";
    std::cerr << "=====\n";
}
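
// main.cc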
#include <dlfcn.h>
#include <iostream>
 
#include "interface.h"
 
extern "C" {
typedef void *(*EntryFunction)(void);
};
 
int main(int argc, char *argv[])
{
    IInterface *iface;
    EntryFunction f;
 
    void *lib = dlopen("./module.so", RTLD_NOW | RTLD_DEEPBIND);
    if (lib == nullptr) {
        std::cerr << dlerror () << "\n";
        return 1;
    }
 
    f = (EntryFunction) dlsym (lib, "entry_point");
 
    if (f == nullptr) {
        std::cerr << dlerror () << "\n";
        return 1;
    }
 
    iface = reinterpret_cast<IInterface *>(f());
 
    while (true) {
        iface->print ("Hello");
    }
}
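
# Makefile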
.PHONY: all clean
all: main module.so
 
clean:
	rm -f main
	rm -f module.so
 
main: main.cc
	g++ -g -o $@ $< -ldl
 
module.so: module.cc
	g++ -g -o $@ -shared $< -fPIC

Logitech Unifying Hardware Required

Does anyone have a spare Logitech Unifying dongle I can borrow? I specifically need the newer Texas Instruments version, rather than the older Nordic version.

You can tell if it’s the version I need by looking at the etching on the metal USB plug, if it says U0008 above the CE marking then it’s the one I’m looking for. I’m based in London, UK if that matters. Thanks!

This week in GTK+ – 28

In this last week, the master branch of GTK+ has seen 103 commits, with 5986 lines added and 1896 lines removed.

Planning and status
  • Benjamin Otte is working on a refactoring of the GSK render node API
  • Emmanuele Bassi worked on a 3.x-specific branch that allows GDK to use EGL instead of GLX on X11 platforms
  • The GTK+ road map is available on the wiki.
Notable changes

On the master branch:

  • Benjamin merged the Vulkan renderer for GSK, as a comparison point for the GL and Cairo renderers. The Vulkan renderer adds a new optional dependency on a working Vulkan implementation, as well as glslc
  • The Vulkan renderer also resulted in a large refactoring of the GL drawing code inside GDK, though this should not have caused any user-visible changes in the API
  • Benjamin also implemented support for the CSS border-spacing property following the CSS 2.1 specification

On the gtk-3-22 stable branch:

  • Matthias released GTK+ 3.22.5.
Bugs fixed
  • 775651 GdkX11-4.0.gir, GdkWin32-4.0.gir, and Gsk-4.0.gir are generated before Gdk-4.0.gir is ready
Getting involved

Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.

December 11, 2016

November Bug Squash Redux

Last month GNOME had a Bug Squash Month. Thanks to everyone who participated!

For this initiative I had prototyped a way to gamify things up a bit. I created a high score table and badges for the local open source groups joining in on the bug squashing. I had my own group Open Source Aalborg participating on November the 30th and we had lots of fun!


Open Source Aalborg in the process of bug squashing (CC-BY-SA 4.0)

What made bug squashing a particularly good activity for my local group was that it sets no requirement to do actual coding. I did a small presentation about bug squashing, and afterwards everyone could participate regardless of background. Our award system with the badges even meant everyone could individually set their own goal and decide how much effort to put into it. Want something easy? Look out for obviously obsolete bugs. Want something harder? Try to code up a patch.


Installing Fedora to build and run the latest GNOME Apps and reproduce bugs (CC-BY-SA 4.0).

As participants worked through the bug reports, I kept the high score table up to date. A projector would then show off the high score table so everyone would know whenever we achieved a new badge.


Small physical rewards in the form of candy to everyone participating (CC-BY-SA 4.0)

As another experiment, I had made actual physical rewards for the participants, consisting of candy and biscuits. When participants managed to squash their first bug they would get something, and when our group managed to get 5 bugs squashed I would give out snacks to everyone. A fun small addition, and I particularly liked the idea of rewards for everybody when we as a group met a goal. It reinforces the team spirit!

Some thoughts about what worked and what didn’t work:

  • Badges which the event manager can hand out live at the event also work the best.
  • It varies a lot how much each local group achieves in quantity. I think making badges based on other measurements works best (e.g. the Spooky Skeleton Award was fun!).
  • Our social media awards were awesome!
  • It’s good to share a step-by-step guide after the presentation so people can get started quickly.

I’m definitely up for getting another high score table up and running for another bug squash sometime. There has been talk in #engagement about making a web app a bit like the Capture the Flag websites for these kinds of things. Would definitely be cool!

And thanks a lot Alexandre for taking the initiative on this!

The future of GNOME Music

GNOME Music is an app that I particularly fell in love with. Not because it worked perfectly (it didn’t, at all), nor because it’s fast to the last bit (it isn’t, at all), nor because it’s written in a programming language that I love (i.e. C. And no, it isn’t).

So why the hell did I invest my free time, energy and resources in it? If we look at how it is right now, it makes no sense. So for my dog’s sake, why?

Simply because I want to push GNOME forward.

I saw a very rough, embryonic prototype of what could be a good app. I saw bottlenecks all around, things to fix, tasks to complete. I saw pretty good mockups available. All the ingredients were there, we just needed a little bit of love. Working on challenging apps almost always ends up pushing the entire platform forward.

To add value to this discussion, and for you guys to understand the future of GNOME Music, we must see the history of this app first.

Long ago…

There was an app called GNOME Music. It was written as a quick prototype in JavaScript. Then, another version of the same app was written in Python, and this snaky version of the app was more complete than the other.

The Python version was made the official one.

And yet, the pythonic Music was also a quick prototype.

I’m not particularly against quick prototypes. It’s just that the methods to maintain prototypes are radically different from the methods to maintain apps in the long run. Prototypes are meant to be rapidly developed and showcase a certain functionality, right? GNOME Music, however, is part of the extended GNOME platform – this implies maintaining it for thousands of users over many years.

Features were being added to Music in such a way that the code was getting messier and messier. That sucks.

Back to the Future

All this trip to the past has serious implications for how GNOME Music’s codebase looks now. At the GNOME Core Apps Hackfest, we had the chance to sit down, analyse the code and sketch out what we need to do.

Sad news: most of this work consists of fixing and reorganizing what we have now, so in the future we can still read Music’s code and work on it (instead of sitting down and crying every time we have to deal with it, which is basically what used to happen a few months ago).

If I’m to draw how Music’s codebase is organized right now, it’d look like this:

Music codebase as it looks right now

Basically, everything is spread all around the entire codebase. Based on that, here is our current, actionable list of tasks on how to fix that (the names are not final; a rough sketch of the result follows the list):

  • All the data will be encapsulated by a new backend class
  • A new concept of “data sources” will be introduced
    • Data source is an interface for code that provides data
    • Data sources provide:
      • Albums
      • Artists
      • Playlists
      • Search results
    • The current sources will implement this interface
      • Grilo will be a data source
      • Tracker will be another data source
  • Only one singleton class will exist, the Backend class
    • The Backend manages:
      • The Playlists object
      • Data sources
      • The Player
      • Album art cache
      • MPRIS
  • The UI only communicates with the backend
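
To make the plan above a bit more tangible, here is a rough sketch of the intended shape. It is written in C++ purely for illustration (Music itself is Python), and none of these names are final:

#include <memory>
#include <string>
#include <vector>

struct Album { std::string title; /* ... */ };

// Interface for code that provides data (Grilo, Tracker, ...).
class DataSource {
public:
    virtual ~DataSource() = default;
    virtual std::vector<Album> albums() = 0;
    // artists(), playlists(), search() would follow the same pattern.
};

// The only singleton: owns the data sources, the player, the playlists,
// the album art cache and MPRIS. The UI talks exclusively to this class.
class Backend {
public:
    static Backend& instance() { static Backend b; return b; }
    void add_source(std::unique_ptr<DataSource> source) {
        sources_.push_back(std::move(source));
    }
private:
    Backend() = default;
    std::vector<std::unique_ptr<DataSource>> sources_;
};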

tl;dr: we’re splitting the UI from the backend work and adding a layer to abstract data sources. This is how it should look after the plan is implemented:

Next architecture of GNOME Music


We may well be overloading the Backend class (suggestions?), but this new architecture will give us some more flexibility to implement new data providers, sync online accounts data, and whatever else we need.

I already started working on this move with the Playlists code. You can check it out here. It needs review and, of course, testing, but the prototype is there to show how we can move on.

And the Sun starts to shine again on Music land.

December 10, 2016

Smooth transition to new major versions of a set of libraries

With GTK+ 4 in development, it is a good time to reflect about some best-practices to handle API breaks in a library, and providing a smooth transition for the developers who will want to port their code.

But this is not just about one library breaking its API. It’s about a set of related libraries all breaking their API at the same time. Like what will happen in the near future with (at least a subset of) the GNOME libraries in addition to GTK+.

Smooth transition, you say?

What am I implying by “smooth transition”, exactly? If you know the principles behind code refactoring, the goal should be obvious: doing small changes in the code, one step at a time, and – more importantly – being able to compile and test the code after each step. Not in one huge commit or a branch with a lot of un-testable commits.

So, how to achieve that?

Reducing API breaks to the minimum

When developing a non-trivial feature in a library, designing a good API is a hard problem. So often, once an API is released and marked as stable, we see some possible improvements several years later. So what is usually done is to add a new API (e.g. a new function), and deprecating an old one. For a new major version of the library, all the deprecated APIs are removed, to simplify the code. So far so good.

Note that a deprecated API still needs to work as advertised. In a lot of cases, we can just leave the code as-is. But in some other cases, the deprecated API needs to be re-implemented in terms of the new API, usually for a stateful API where the state is now stored only in terms of the new API.

And this is one case where library developers may be tempted to introduce the new API only in a new major version of the library, removing at the same time the old API to avoid the need to adapt the old API implementation. But please, if possible, don’t do that! Because an application would be forced to migrate to the new API at the same time as dealing with other API breaks, which we want to avoid.

So, ideally, a new major version of a library should only remove the deprecated API, not doing other API breaks. Or, at least, reducing to the minimum the list of the other, “real” API breaks.

Let’s look at another example: what if you want to change the signature of a function? For example adding or removing a parameter. This is an API break, right? So you might be tempted to defer that API break for the next major version. But there is another solution! Just add a new function, with a different name, and deprecate the first one. Coming up with a good name for the new function can be hard, but it should just be seen as the function “version 2”. So why not just add a “2” at the end of the function name? Like some Linux system calls: umount() -> umount2() or renameat() -> renameat2(), etc. I admit such names are a little ugly, but a developer can port a piece of code to the new function with one (or several) small, testable commit(s). The new major version of the library can rename the v2 function to the original name, since the function with the original name was deprecated and thus removed. It’s a small API break, but trivial to handle, it’s just renaming a function (a git grep or the compiler is your friend).
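
As a hypothetical illustration of this pattern (libfoo and foo_frobnicate() are made-up names; a C library would typically use its own deprecation macros, such as GLib's G_DEPRECATED_FOR):

// libfoo 3.x: the old entry point stays, deprecated, and is re-implemented
// in terms of its replacement.
[[deprecated("use foo_frobnicate2()")]]
void foo_frobnicate(int x);

// libfoo 3.x: the "version 2" API with the extra parameter.
void foo_frobnicate2(int x, int flags);

void foo_frobnicate(int x) {
    foo_frobnicate2(x, /* flags = */ 0);
}

// In libfoo 4.0, foo_frobnicate() is removed and foo_frobnicate2() can be
// renamed back to foo_frobnicate(): a small, grep-friendly API break.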

GTK+ timing and relation to other GNOME libraries

GTK+ 3.22 as the latest GTK+ 3 version came up a little as a surprise. It was announced quite late during the GTK+/GNOME 3.20 -> 3.22 development cycle. I don’t criticize the GTK+ project for doing that, the maintainers have good reasons behind that decision (experimenting with GSK, among other things). But – if we don’t pay attention – this could have a subtle negative fallout on higher-level GNOME libraries.

Those higher-level libraries will need to be ported to GTK+ 4, which will require a fair amount of code changes, and might force to break in turn their API. So what will happen is that a new major version will also be released for those libraries, removing their own share of deprecated API, and doing other API breaks. Nothing abnormal so far.

If you are a maintainer of one of those higher-level libraries, you might have a list of things you want to improve in the API, some corners that you find a little ugly but you never took the time to add a better API. So you think, “now is a good time” since you’ll release a new major version. This is where it can become problematic.

Let’s say you released libfoo 3.22 in September. If you follow the new GTK+ numbering scheme, you’ll release libfoo 3.90 in March (if everything goes well). But remember, porting an application to libfoo 3.90/4.0 should be as smooth as possible. So instead of introducing the new API directly in libfoo 3.90 (and removing the old, ugly API at the same time), you should release one more version based on GTK+ 3: libfoo 3.24. To reduce the API delta between libfoo-3 and libfoo-4.

So the unusual thing about this development cycle is that, for some libraries, there will be two new versions in March (excluding the micro/patch versions). Or, alternatively, one new version released in the middle of the development cycle. That’s what will be done for GtkSourceView, at least (the first option), and I encourage other library developers to do the same if they are in the same situation (wanting to get rid of APIs which were not yet marked as deprecated in GNOME 3.22).

Porting, one library at a time

If each library maintainer has reduced to the minimum the real API breaks, this eases greatly the work to port an application (or higher-level library).

But consider the case where (1) multiple libraries all break their API at the same time, and (2) they are all based on the same main library (in our case GTK+), and (3) the new major versions of those other libraries all depend on the new major version of the main library (in our case, libfoo 3.90/4.0 can be used only with GTK+ 3.90/4.0, not with GTK+ 3.22). Then… porting an application is again a mess – except with the good practice that I will now describe!

The procedure is easy, but the steps must be done in a well-defined order. So imagine that libfoo 3.24 is ready to be released (you can either release it directly, or create a branch and wait until March to do the release, to follow the GNOME release schedule). What are the next steps?

  • Do not port libfoo to GTK+ 3.89/3.90 directly, stay at GTK+ 3.22.
  • Bump the major version of libfoo, making it parallel-installable with previous major versions.
  • Remove the deprecated API and then release libfoo 3.89.1 (development version). With a git tag and a tarball.
  • Do the (hopefully few) other API breaks and then release libfoo 3.89.2. If there are many API breaks, more than one release can be done for this step.
  • Port to GTK+ 3.89/3.90 for the subsequent releases (which may force other API breaks in libfoo).

The same for libbar.

Then, to port an application:

  • Make sure that the application doesn’t use any deprecated API (look at compilation warnings).
  • Test against libfoo 3.89.1.
  • Port to libfoo 3.89.2.
  • Test against libbar 3.89.1.
  • Port to libbar 3.89.2.
  • […]
  • Port to GTK+ 3.89/3.90/…/4.0.

This results in smaller and testable commits. You can compile the code, run the unit tests, run other small interactive/GUI tests, and run the final executable. All of that, in finer-grained steps. It is not hard to do, provided that each library maintainer has followed the above steps in the right order, with the git tags and tarballs, so that application developers can compile the intermediate versions. Alongside a comprehensive (and comprehensible) porting guide, of course.

For a practical example, see how it is done in GtkSourceView: Transition to GtkSourceView 4. (It is slightly more complicated, because we will change the namespace of the code from GtkSource to Gsv, to stop stomping on the Gtk namespace).

And you, what is your list of library development best-practices when it comes to API breaks?

PS: This blog post doesn’t really touch on the subject of how to design a good API in the first place, to avoid the need to break it. That will maybe be the subject of a future blog post. In the meantime, this LWN article (Designing better kernel ABIs) is interesting. But for a user-space library, there is more freedom: making a new major version parallel-installable (even every six months, if needed, like it is done in the Gtef library that can serve as an incubator for GtkSourceView), or writing small copylibs/git submodules before integrating the feature into a shared library, among a few other ways. There is also a list of book references that help with designing object-oriented code and APIs.

FEDORA and GNOME at the “1er Encuentro de Tecnología e innovación-Macro Región Lima 2016” Conference

Thanks to UNTELS (Universidad Nacional Tecnológica de Lima Sur) for organizing the conference “1er Encuentro de Tecnología e innovación-Macro Región Lima 2016”. This was the advertisement they had prepared for the occasion:

I had traveled almost 37 km from Lima to reach UNTELS, so it was difficult to be there on time. The organizers kindly shifted the schedule while participants were registered; then I had lunch with other speakers and organizers.

A brief history of GNU/Linux was also included in my talk, to draw FEDORA and GNOME into the story.  The talk lasted more than one hour in total, and this time I really sweated in the GNOME and FEDORA t-shirt. It might be because of the summer weather in Perú 🙂

The work of the UNTELS students, newcomers to organizing events, plus the support and experience of APEISC, was a great combination for a successful result.

The picture of the participants with balloons is also a great memory that I can share 🙂

Hearing questions and suggestions from people from different universities pushes me to make a greater effort for Linux, not only in its history and administration. Troubleshooting, integration with different technologies, and awareness of the future of Linux are among their concerns.




December 09, 2016

QMI firmware update with libqmi


One of the key reasons to keep using the out-of-tree Sierra GobiNet drivers and GobiAPI was that upgrading firmware in the WWAN modules was supported out of the box, while we didn’t have any way to do so with qmi_wwan in the upstream kernel and libqmi.

I’m glad to say that this is no longer the case; as we already have a new working solution in the aleksander/qmi-firmware-update branch in the upstream libqmi git repository, which will be released in the next major libqmi release. Check it out!

The new tool is named, no surprise, qmi-firmware-update; it allows upgrading firmware for Qualcomm-based Sierra Wireless devices (e.g. MC7700, MC7710, MC7304, MC7354, MC7330, MC7444…). I’ve personally not tested any other device or manufacturer yet, so I won’t say we support others just yet.

This work wouldn’t have been possible without Bjørn Mork‘s swi-update program, which already contained most of the bits and pieces for the QDL download session management, we all owe him quite some virtual beers. And thanks also to Zodiac Inflight Innovations for sponsoring this work!

Sierra Wireless SWI9200 series (e.g. MC7700, MC7710…)

The upgrade process for Sierra Wireless SWI9200 devices (already flagged as EOL, but still used in thousands of places) is very straightforward:

  • Device is rebooted in boot & hold mode (what we call QDL download mode) by running AT!BOOTHOLD in the primary AT port.
  • A QDL download session is run to upload the firmware image, which is usually just one single file which contains the base system plus the carrier-specific settings.
  • Once the QDL download session is finished, the device is rebooted in normal mode.

The new qmi-firmware-update tool supports all these steps just by running one single command as follows:

$ sudo qmi-firmware-update \
     --update \
     -d 1199:68a2 \
     9999999_9999999_9200_03.05.14.00_00_generic_000.000_001_SPKG_MC.cwe

Sierra Wireless SWI9x15 series (e.g. MC7304, MC7354…)

The upgrade process for Sierra Wireless SWI9x15 devices is a bit more complex, as these devices support and require the QMI DMS Set/Get Firmware Preference commands to initiate the download. The steps would be:

  • Decide which firmware version, config version and carrier strings to use. The firmware version is the version of the system itself, the config version is the version of the carrier-specific image, and the carrier string is the identifier of the network operator.
  • Using QMI DMS Set Firmware Preference, the new desired firmware version, config version and carrier are specified. When the firmware and config version don’t match the ones currently running in the device, it will reboot itself in boot & hold mode and wait for the new downloads.
  • A QDL download session is run to upload each firmware image available. For these kinds of devices, two options are given to the user: a pair of .cwe and .nvu images containing the system and carrier images separately, or a consolidated .spk image with both. It’s advised to always use the consolidated .spk image to avoid mismatches between system and config.
  • Once the QDL download sessions are finished, the device is rebooted in normal mode.

Again, the new qmi-firmware-update tool supports all these steps just by running one single command as follows:

$ sudo qmi-firmware-update \
     --update \
     -d 1199:68c0 \
 9999999_9902574_SWI9X15C_05.05.66.00_00_GENNA-UMTS_005.028_000-field.spk

The previous command will analyze each firmware image provided and will extract the firmware version, config version and carrier, so that the user doesn’t need to explicitly provide them (although there are also options to do that if needed).

Sierra Wireless SWI9x30 series (e.g. MC7455, MC7430…)

The upgrade process for Sierra Wireless SWI9x30 devices is equivalent to the one used for SWI9x15. One of the main differences, though, is that SWI9x15 devices seem to only allow one pair of modem+pri images (system+config) installed in the system, while the SWI9x30 allows the user to download multiple images and then select them using the QMI DMS List/Select/Delete Stored Image commands.

The SWI9x30 modules may also run in MBIM mode instead of QMI. In this case, the firmware upgrade process is exactly the same as with the SWI9x15 series, but using QMI over MBIM. The qmi-firmware-update program supports this operation with the --device-open-mbim command line argument:

$ sudo qmi-firmware-update \
    --update \
    -d 1199:9071 \
    --device-open-mbim \
    SWI9X30C_02.20.03.00.cwe \
    SWI9X30C_02.20.03.00_Generic_002.017_000.nvu

Notes on device selection

There are multiple ways to select which device is going to be upgraded:

  • vid:pid: If there is a single device to upgrade in the system, it usually is easiest to use the -d option to select it by vid:pid (or even just by vid). This is the way used by default in all previous examples, and really the easiest one if you just have one modem available.
  • bus:dev: If there are multiple devices to upgrade in the same system, a more restrictive device selection can be achieved with the -s option specifying the USB device bus number plus device number, which is unique per physical device.
  • /dev/cdc-wdm: A firmware upgrade operation may also be started by using the --cdc-wdm option (shorter, -w) and specifying a /dev/cdc-wdm device exposed by the module.
  • /dev/ttyUSB: If the device is already in boot & hold mode, a QDL download session may be performed directly on the tty specified by the --qdl-serial (shorter, -q) option.

Notes on firmware images

Sierra Wireless provides firmware images for all their SWI9200, SWI9x15 and SWI9x30 modules on their website. Sometimes they do specify “for Linux” (and you would get .cwe, .nvu or .spk images) and sometimes they just provide .exe Windows OS binaries. For the latter, you can just decompress the .exe file, e.g. using 7-Zip, and get the firmware images that you would use with qmi-firmware-update, e.g.:

 $ 7z x SWI9200M_3.5-Release13-SWI9200X_03.05.29.03.exe
 $ ls *.{cwe,nvu,spk} 2>/dev/null
 9999999_9999999_9200_03.05.29.03_00_generic_000.000_001_SPKG_MC.cwe

[TL;DR?]

qmi-firmware-update now allows upgrading firmware in Sierra Wireless modems using qmi_wwan and libqmi.



December 08, 2016

Oh, the security!


Under public domain

There’s lately been lots of fuss around Tracker as a security risk; as the de-facto maintainer of Tracker, I feel obliged to comment. I’ll comment purely on the Tracker bits; I will not comment on other topics that OTOH were not as debated but are similarly affected, like thumbnailing, previewing, autodownloading, or the state of maintenance of gstreamer-plugins-bad.

First of all, I’m glad to tell that Tracker now sandboxes its extractors, so its only point of exposure to exploits is now much more constrained, leaving very little room for malicious code to do anything harmful. This fix has been backported to 1.10 and 1.8, and new tarballs rolled, everyone rejoice.

Now, the original post raising the dust storm certainly achieved its dramatic effect. Despite Tracker not doing anything insecure besides calling a closed, well-known set of 3rd party libraries (which, after all, are most often installed from the same trusted sources that Tracker comes from), it’s been in the “security” spotlight across several bugs/MLs/sites with different levels of accuracy. I’ll publicly comment on some of the assertions I’ve seen in the last days.

This is a design flaw in Tracker!

Tracker has always performed metadata extraction in a separate process for stability reasons, which means we already count on this process possibly crashing and burning away.

Tracker was indeed optimistic about the possible reasons why that might happen, but precisely thanks to Tracker’s design it’s been a breeze to isolate the involved parts. A ~200-line change hardly counts as a redesign.
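
For the curious, this kind of isolation is typically built on seccomp syscall filtering; here is a minimal sketch of the idea (my own illustration using libseccomp, in no way Tracker's actual filter or whitelist):

#include <seccomp.h>
#include <cstdlib>

static void enter_extractor_sandbox() {
    // Default action: kill the process on any syscall not whitelisted below.
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (!ctx)
        abort();

    // A real filter needs more than this; the set here is purely illustrative.
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(mmap), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) != 0)
        abort();
    seccomp_release(ctx);
}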

All of tracker daemons are inherently insecure!, or its funnier cousin Tracker leaks all collected information to the outside world!

This security concern has only been raised because of the use of 3rd party parsers (well, in the case of the GStreamer vulnerability in question, decoders; why a parsing facility like GstDiscoverer triggers decoding is another question worth asking), and this parsing of content happens in exactly one place in your common setup: tracker-extract.

Let’s dissect Tracker daemons’ functionality a bit:

  • tracker-store: It is the manager of your user Tracker database; it connects to the session bus and gets read/write access to a database in ~/.cache. It also does notification of changes in the database through the user bus.
  • tracker-miner-fs: It’s the process watching for changes in the filesystem, and filling in the basic information that can be extracted from shared-mime-info sniffing (which usually involves matching some bytes inside the file, few conditionals involved), struct dirent and struct stat.
  • tracker-extract: Guilty as charged! It receives the notification of changes, and is basically a loop that picks the next unprocessed file, runs it through 3rd party parsers, sends a series of insert clauses over dbus, and picks the next file. Wash, rinse, repeat.
  • tracker-miner-applications: A very simplified version of tracker-miner-fs that just parses the keyfiles in various .desktop file locations.
  • tracker-miner-rss: This might be another potential candidate, as it parses “arbitrary” content through libgrss. However, it must be configured by the user; it otherwise has no RSS feeds to read from. I’ll take the possibility of hijacking famous blogs and news sites to hack through tracker-miner-rss as remote enough to fix it after a breather.

So, setting aside per-parser specifics, Tracker consists of one database stored under 0600 permissions, with information added to it through requests on the dbus session, and read by apps from a readonly handle created by libtracker-sparql; the read and write channels can be independently isolated.

If you are really terrified by your user information being stored inside your homedir, or can’t sleep thinking of your session bus as a dark alley, you certainly want to run all your applications in a sandbox, they won’t be able to poke on org.freedesktop.Tracker1.Store or sniff on ~/.cache that way.

But again, there is nothing that makes Tracker as a whole inherently insecure, at least not more than the average session bus service, or the average application storing data in your homedir. Everything that could be distrusted is down to specific parsers, and that is anything but inherent in Tracker.

Tracker-extract runs plugins and is thus unsafe!

No, tracker-extract has a modular design, but is not extensible itself. It reads a closed set of modules implemented by Tracker from a folder that should be in /usr/lib/tracker-1.0 if your setup is right. The API of these modules is private and subject to change. If anything manages to add or modify modules there, you’ve got way worse concerns.

Now, one of these extractor modules uses GStreamer, which to my knowledge is still the go-to library if you want anything multimedia on linux, and it happens to open an arbitrary list of plugins itself, that is beyond Tracker control or extent.

It should be written in rust!

What do we gain from that? As said, tracker-extract is in essence a very simple loop; all the scary stuff is handled by external libraries that will still be implemented in “unsafe languages”. Rust is just as useful as gift paper to wrap this.

Extraction should be further isolated into another process!

There are good reasons not to do that. Having two separate processes running completely interlocked tasks (one process can’t do anything until the other is finished) is pretty much a worst case for scheduling, context switching, performance and battery life at once.

Furthermore, such a tertiary service would need exactly the same whitelisted syscalls and exactly the same number of ways out of the process. So I think I won’t attract the “Tracker is heavy/slow” zealots this time… There is a throwaway process, and it is tracker-extract.

The silver linings

Tracker is already more secure; now let’s silence the remaining noise. Quite certainly one area of improvement is Flatpak integration, so sandboxed applications can launch isolated Tracker instances that run in the same sandboxed environment, with extracted data only visible within the sandbox.

This is achievable with the current Tracker design; however, the “Tracker as a service” approach sounds excessive with this status quo. Tracker needs to adapt to being usable as a local store, and it needs to become more of a generic SPARQL endpoint first.

But this is just adapting to the new times, Flatpak is relatively young and Tracker is slow moving, so they haven’t met yet. But there is a pretty clear roadmap, and we’ll get there.

Trying googletest

Recently I’ve played a bit with googletest in a couple of small projects: in murrayc-tuple-utils (and some later commits) and in prefixsuffix. It’s pretty straightforward and improved my test code and test result output (test-suite.log with autotools). Here are some notes.

Code Changes

I usually just use the standard autotools and CMake test features, sprinkling assert()s and “return EXIT_FAILURE;” in my C++ test code. However, googletest lets me replace a simple assert():

assert(foo.get_thing() == "hello");

with an EXPECT_EQ() call that will give me a clue when the test fails:

EXPECT_EQ("hello", foo.get_thing());

There are some other simple assertions.

googletest also wants me to declare test functions differently. Previously, they were simply declared like so:

void test_thing_something() {

now like so:

TEST("TestThing", "Something") {

which is all a bit macrotastic, unfortunately, but it works.

I chose to use the standard main() implementation, so I just removed my main() function that called these tests and compiled gtest_main.cc into the gtest library, as suggested in the googletest documentation.
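
Putting those pieces together, a minimal test file ends up looking something like this (Foo and get_thing() are stand-ins for the real class under test):

#include <gtest/gtest.h>

#include <string>

// Stand-in for the real class under test.
class Foo {
public:
    std::string get_thing() const { return "hello"; }
};

TEST(TestFoo, GetThing) {
    Foo foo;
    EXPECT_EQ("hello", foo.get_thing());
}

// No main() needed: gtest_main provides one that runs all registered tests.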

Build Changes

googletest is meant to be built in your project, not built separately and just linked to as a library. For performance testing, I can see the benefit of building the test framework with exactly the same environment and options as the code being tested, but this does seem rather awkward. I guess it’s just what Google do in their monolithic builds so there’s nobody making an effort to support any other build system.

Anyway, it’s fairly easy to add googletest as a git submodule and build the library in your autotools or CMake project. For autotools, I did this with libtool in murrayc-tuple-utils and without libtool in prefixsuffix.

Unfortunately, I had to list the individual googletest source files in EXTRA_DIST to make this work with autotools’ “make distcheck”.

This is easier with CMake, because googletest has a CMakeLists.txt file, so you can use “add_subdirectory (googletest)”. Of course, CMake doesn’t have an equivalent for “make distcheck”.

Also unfortunately, I had to stop using the wonderful -Wsuggest-override warning with warnings as errors, because googletest doesn’t use the override keyword. I think it hasn’t really caught up with C++11 yet, which seems odd, as I guess Google is using at least C++11 in all its code.

Conclusion

The end result is not overwhelming so far, and it’s arguable if it’s worth having to deal with the git submodule awkwardness. But in theory, when the tests fail, I now won’t have to add so many printfs to find out what happened. This makes it a bit more like using JUnit with Java projects.

Also in theory, the test output would integrate nicely with a proper continuous integration system, such as Jenkins, or whatever Google use internally. But I’m not using any of those on these projects. Travis-CI is free with github projects, but it seems to be all about builds, with no support for test results.

2016-12-08 Thursday.

  • Mail chew; really encouraged to see Kolab's lovely integration with Collabora Online announced and available for purchase. Wonderful to have them getting involved with LibreOffice, doing testing, filing and triaging bugs up-stream and so on, not to mention the polished marketing.

December 07, 2016

websites are processes

so yesterday i had a coffee with a friend. they run a small consulting shop and he proudly showed me their new website. they were working hard on it for months, and to be honest, it was looking good. it was responsive, had a great design and good copy.

however...

they were still failing to attract new clients and leads over their website. he asked me if there was anything glaringly wrong with their website. one thing jumped out: there wasn't a call to action. prospects would have to scroll down the whole page, find the contact form, fill out 10+ fields and hope somebody would get back to them.

but this was just a symptom, and not the main problem. the main problem was this: they failed to understand that websites are processes.

all websites "sell" something, whether you are selling an actual product, a service, you want to do outreach, get more attention or just simply share your ideas. but it takes time to get visitors interested and willing to buy your service or product. and this is a process. take any sales professional and you'll find an efficient, tested process they stick to. and we can do the same with our websites, fully automated and scalable.

let's have a look at the process in detail. it starts with strangers coming to your website. maybe they saw you at an event, got your business card, found you on google or some article you wrote a year ago. they have a look at your website and in case they are interested, they want to learn more.

next you start talking specifically about what you do and how you can help your clients. as you convince your audience that your product or service is the right solution for their problem you move them to the next stage via a call to action where they become leads. this can be done with a free email course, a cheat sheet or any other value exchange.

at the next stage, your audience is convinced that you're able to help them solving their problem, but trust and connection is still missing. as people only stay a short time (if you get more than 20 seconds you're really lucky) on your websites, this step is needed in order to extend the conversation in which you can nurture the connection and exchange value. drip campaigns or well crafted newsletters (no, not these awful spammy things, but something like a personal email) are good tools for this step.

lastly, by giving away valuable content and helpful tips you establish trust and set yourself up as an expert in your domain. only now you have all the ingredients to make a successful sale.

it is important to note that the only goal of each step in this process is to help your visitors get to the next step. this is not about you, it is about how you can help your clients.

so think about your website for a moment. do you have a process similar to this in place? if not, you're losing money.

2016-12-07 Wednesday.

  • Prodded at mail, built ESC stats. Thrilled to have Collabora Online 2.0 released in conjunction with our partners. You can buy a LibreOffice based Collabora Online with collaborative editing and long term support now. We'll of course be releasing incremental improvements, updates and fixes approximately each month as we move forward. Thanks to all in the company & community - whose hard work & dedication made this possible.

Avoiding CVE-2016-8655 with systemd

Just a quick note: on recent versions of systemd it is relatively easy to block the vulnerability described in CVE-2016-8655 for individual services.

Since systemd release v211 there's an option RestrictAddressFamilies= for service unit files which takes away the right to create sockets of specific address families for processes of the service. In your unit file, add RestrictAddressFamilies=~AF_PACKET to the [Service] section to make AF_PACKET unavailable to it (i.e. a blacklist), which is sufficient to close the attack path. Safer, of course, is a whitelist of address families, which you can define by dropping the ~ character from the assignment. Here's a trivial example:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
…

This restricts access to socket families, so that the service may access only AF_INET, AF_INET6 or AF_UNIX sockets, which is usually the right, minimal set for most system daemons. (AF_INET is the low-level name for the IPv4 address family, AF_INET6 for the IPv6 address family, and AF_UNIX for local UNIX socket IPC).

Starting with systemd v232 we added RestrictAddressFamilies= to all of systemd's own unit files, always with the minimal set of socket address families appropriate.

With the upcoming v233 release we'll provide a second method for blocking this vulnerability. Using RestrictNamespaces= it is possible to limit which types of Linux namespaces a service may get access to. Use RestrictNamespaces=yes to prohibit access to any kind of namespace, or set RestrictNamespaces=net ipc (or similar) to restrict access to a specific set (in this case: network and IPC namespaces). Given that user namespaces have been a major source of security vulnerabilities in the past months it's probably a good idea to block namespaces on all services which don't need them (which is probably most of them).
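
For example, assuming a new enough systemd (v233 or later for RestrictNamespaces=), a daemon that needs only IP and UNIX sockets and no namespaces at all could combine both options:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=yes
…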

Of course, ideally, distributions such as Fedora, as well as upstream developers would turn on the various sandboxing settings systemd provides like these ones by default, since they know best which kind of address families or namespaces a specific daemon needs.

Validating e-mail addresses

tl;dr: Most likely, you want to validate using the regular expression ^[^@]+@[^@]+$ (please think about the trade-off you want between practicality and precision); but if you read the caveats below and still want to validate to RFC 5322, then you want libemailvalidation.

Validating e-mail addresses is hard, and not something which you normally want to do in great detail: while it’s possible to spend a lot of time checking the syntax of an e-mail address, the real measure of whether it’s valid is whether the mail server on that domain accepts it. There is ultimately no way around checking that.

Given that a lot of mail providers implement their own restrictions on the local-part (the bit before the ‘@’) of an e-mail address, an address like [email protected] (which is syntactically valid) probably won’t actually be accepted. So what’s the value in doing syntax checks on e-mail addresses? The value is in catching trivial user mistakes, like pasting the wrong data into an e-mail address field, or making a trivial typo in one.

So, for most use cases, there’s no need to bother with fancy validation: just check that the e-mail address matches the regular expression ^[^@]+@[^@]+$. That should catch simple mistakes, accept all valid e-mail addresses, and reject some invalid addresses.
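
As a quick sanity check of that pattern, here is a tiny example (shown with C++'s std::regex purely for illustration; the library discussed below is C, and note that std::regex_match already anchors to the whole string):

#include <iostream>
#include <regex>
#include <string>

// regex_match() must match the entire string, so no ^...$ anchors are needed.
static bool looks_like_email(const std::string& address) {
    static const std::regex pattern("[^@]+@[^@]+");
    return std::regex_match(address, pattern);
}

int main() {
    std::cout << looks_like_email("user@example.com") << "\n";  // prints 1
    std::cout << looks_like_email("no-at-sign-here") << "\n";   // prints 0
}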

So why have I gone further? Walbottle needs it — I think where one RFC references another is one of the few times it’s necessary to fully implement e-mail validation. In this case, Walbottle needs to be able to validate e-mail addresses provided in JSON files, for its email defined format.

So, I’ve just finished writing a small copylib to validate e-mail addresses according to all the RFCs I could get my hands on; mostly RFC 5322, but there is a sprinkling of 5234, 5321, 3629 and 6532 in there too. It’s called libemailvalidation (because naming is hard; typing is easier). Since it’s only about 1000 lines of code, there seems to be little point in building a shared library for it and distributing that; so add it as a git submodule to your code, and use validate.c and validate.h directly. It provides a single function:

size_t error_position;

is_valid = emv_validate_email_address (address_to_check,
                                       length_of_address_to_check,
                                       EMV_VALIDATE_FLAGS_NONE,
                                       &error_position);

if (!is_valid)
  fprintf (stderr, "Invalid e-mail address; error at byte %zu\n",
           error_position);

I’ve had fun testing this lot using test cases generated from the ABNF rules taken directly from the RFCs, thanks to abnfgen. If you find any problems, please get in touch!

Fun fact for the day: due to the obs-qp rule, a valid e-mail address can contain a nul byte. So unless you ignore deprecated syntax for e-mail addresses (not an option for programs which need to be interoperable), e-mail addresses cannot be passed around as nul-terminated strings.

December 05, 2016

Playing with .NET (dotnet) and IronFunctions

Again, if you missed it: IronFunctions is an open-source, Lambda-compatible, on-premise, language-agnostic, serverless compute service.

While AWS Lambda only supports Java, Python and Node.js, IronFunctions allows you to use any language you desire by running your code in containers.

With Microsoft being one of the biggest players in open source and .NET going cross-platform, it was only right to add support for it in IronFunctions's fn tool.

TL;DR:

The following demos a .NET function that takes in a URL for an image and generates an MD5 checksum hash for it:

Using dotnet with functions

Make sure you have downloaded and installed dotnet. Now create an empty dotnet project in the directory of your function:

dotnet new  

By default dotnet creates a Program.cs file with a main method. To make it work with IronFunctions's fn tool, please rename it to func.cs.

mv Program.cs func.cs  

Now change the code as you desire to do whatever magic you need it to do. In our case the code takes in a URL for an image and generates an MD5 checksum hash for it. The code is the following:

using System;  
using System.Text;  
using System.Security.Cryptography;  
using System.IO;

namespace ConsoleApplication  
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // if nothing is being piped in, then exit
            if (!IsPipedInput())
                return;

            var input = Console.In.ReadToEnd();
            var stream = DownloadRemoteImageFile(input);
            var hash = CreateChecksum(stream);
            Console.WriteLine(hash);
        }

        private static bool IsPipedInput()
        {
            try
            {
                bool isKey = Console.KeyAvailable;
                return false;
            }
            catch
            {
                return true;
            }
        }
        private static byte[] DownloadRemoteImageFile(string uri)
        {

            var request = System.Net.WebRequest.CreateHttp(uri);
            var response = request.GetResponseAsync().Result;
            var stream = response.GetResponseStream();
            using (MemoryStream ms = new MemoryStream())
            {
                stream.CopyTo(ms);
                return ms.ToArray();
            }
        }
        private static string CreateChecksum(byte[] stream)
        {
            using (var md5 = MD5.Create())
            {
                var hash = md5.ComputeHash(stream);
                var sBuilder = new StringBuilder();

                // Loop through each byte of the hashed data
                // and format each one as a hexadecimal string.
                for (int i = 0; i < hash.Length; i++)
                {
                    sBuilder.Append(hash[i].ToString("x2"));
                }

                // Return the hexadecimal string.
                return sBuilder.ToString();
            }
        }
    }
}

Note: IO with an IronFunctions function is done via stdin and stdout. This code therefore reads the image URL from stdin and writes the resulting hash to stdout.

Using with IronFunctions

Let's first init our code to make it deployable with IronFunctions:

fn init <username>/<funcname>  

Since IronFunctions relies on Docker to work (we will add rkt support soon), the <username> is required to publish to Docker Hub. The <funcname> is the identifier of the function.

In our case we will use dotnethash as the <funcname>, so the command will look like:

fn init seiflotfy/dotnethash  

Running this command creates the func.yaml file required by IronFunctions. The function can then be built and pushed by running:

Push to Docker

fn push  

This will build a Docker image and push it to Docker Hub.

Publishing to IronFunctions

To publish to IronFunctions, run:

fn routes create <app_name>  

where <app_name> is (no surprise here) the name of the app, which can encompass many functions.

This creates a full path in the form of http://<host>:<port>/r/<app_name>/<function>

In my case, I will call the app myapp:

fn routes create myapp  

Calling

Now you can call the function with:

fn call <app_name> <funcname>  

or

curl http://<host>:<port>/r/<app_name>/<function>  

So in my case:

echo http://lorempixel.com/1920/1920/ | fn call myapp /dotnethash  

or

curl -X POST -d 'http://lorempixel.com/1920/1920/'  http://localhost:8080/r/myapp/dotnethash  

What now?

You can find the whole code in the examples on GitHub. Feel free to join the Iron.io team on Slack, and to write your own examples in any of your favourite programming languages, such as Lua or Elixir, and create a PR :)

GNOME loves to cook

GNOME needs a recipe app, since we all love to cook. This is not a new idea: looking all the way back to 2007, the idea of a GNOME cook book already existed. For one reason or another, we never quite got there. But the idea has stuck around.


With the upcoming 20th birthday of GNOME next year, some of us thought that we should make another attempt at this application, maybe as a birthday gift to all of GNOME.

Shortly after GUADEC, I got my hands on some existing designs and started to toy around with implementing them over a few weekends and evenings. The screenshots in this post show how far I got since then.


Why did I start to write this app from scratch (instead of, e.g., trying to give a face-lift to the venerable Gourmet)?

Beyond the obvious reason that I love to code as much as I love to cook, I wanted to give GNOME Builder a more serious test by starting an application from scratch. And I find it very useful to take a look at GTK+ from the application developer's side every now and then. In both of these regards, the endeavor was already successful and has yielded improvements to both GNOME Builder and GTK+. For the cooking part, you can judge that for yourself.


The main reason for writing this post is that we are at a point now where we need contributions to make progress. The idea is to ship the application with a decent set of recipes from GNOME contributors all over the world.

Therefore, we need your recipes, ideally with good photos.

All the photos and text you see in the screenshots here are just test data that I’ve used during development, and need to be replaced with actual content.

So, how can you contribute your favorite recipes?

Add your recipe to the app, and when you are satisfied with how it looks, use the Export button on the details page to create an archive with the recipe and related information, such as images. The archive also includes information about the author of the recipe (i.e. yourself), so make sure to provide some information for that in the Preferences dialog.

We just created a Bugzilla product for recipes, so you can simply attach the archive to a bug:

https://bugzilla.gnome.org/page.cgi?id=browse.html&product=recipes

Please make it clear in the bug that all the included images are your own and that we are allowed to ship them with the app.


Beyond recipes, there are plenty of other ways to contribute, of course. While I’ve tried hard to get many of the features in the initial design implemented, there is a lot more that can be done: for example, unit conversion, or a way to easily share recipes, or to print a shopping list.


Where do you get it ?

The project started out on GitHub, but it is also available on git.gnome.org now. The design materials are collected on the GNOME wiki.

If you just want to try it out without building it yourself, you can use Flatpak:

flatpak install --from https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

Of course, I did not get this far on my own. Thanks are due to several people: first and foremost, Emel Elvin Yıldız, for the designs and feedback on the implementation; Jakub Steiner for the icon and visuals; and Christian Hergert for keeping this idea alive and for making GNOME Builder work great.

Suffix Tree: Suffix Array and LCP Array, C++

I played a bit more with my C++ Suffix Tree experiment, gradually exploring how Suffix Trees are equivalent to a Suffix Array plus its associated LCP (Longest Common Prefix) Array. Together they let us do anything that we can do with Suffix Trees, but with less memory (though only by a constant factor) and, I think, more cache-friendly use of memory. This code is still far from suitable for any real use.

Converting between Suffix Trees and Suffix Arrays

I first moved my previous code for Ukkonen’s algorithm from a branch into master, so we can now construct a SuffixTree from a string, in O(n) linear time. Subsequent insert()s still use the naive O(n^2) code. Traditionally, you’d concatenate all your strings with unique separator characters, but I’m trying to avoid that. And, really, I’m just trying to explore the use of the relevant algorithms – it’s a bonus if I can wrap it up into something useful.

I then added a get_suffix_array_and_lcp_array() method to get the Suffix Array and associated LCP Array, in O(n) linear time. This works by doing a lexicographically-ordered depth-first search.
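To make the two arrays concrete, here is a naive, self-contained C++ illustration (my own sketch, not the library's O(n) code) that builds both directly from a string by sorting its suffixes:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

int main() {
    const std::string s = "banana";
    const std::size_t n = s.size();

    // Suffix Array: the start positions of all suffixes, in lexicographic
    // order of the suffixes. (Naive O(n^2 log n) construction.)
    std::vector<std::size_t> sa(n);
    std::iota(sa.begin(), sa.end(), 0);
    std::sort(sa.begin(), sa.end(), [&](std::size_t a, std::size_t b) {
        return s.compare(a, n - a, s, b, n - b) < 0;
    });

    // LCP Array: lcp[i] is the length of the longest common prefix of the
    // suffixes starting at sa[i - 1] and sa[i].
    std::vector<std::size_t> lcp(n, 0);
    for (std::size_t i = 1; i < n; ++i) {
        std::size_t a = sa[i - 1], b = sa[i], k = 0;
        while (a + k < n && b + k < n && s[a + k] == s[b + k])
            ++k;
        lcp[i] = k;
    }

    for (std::size_t i = 0; i < n; ++i)
        std::cout << sa[i] << '\t' << lcp[i] << '\t' << s.substr(sa[i]) << '\n';
}

For "banana" this prints the suffixes a, ana, anana, banana, na, nana with LCP values 0, 1, 3, 0, 0, 2.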

Finally, I added a SuffixTree constructor that takes such a Suffix Array and LCP Array and builds the suffix tree, again in O(n) linear time. This iterates over the suffixes in the Suffix Array, adding a node for each and walking back up the tree to split at the point specified by the LCP Array; I kept the path in a stack for that walk up. This constructor let me test that I could use a Suffix Array (and LCP Array) obtained from the SuffixTree to recreate the SuffixTree correctly.
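In outline, that walk looks something like the following sketch (with a hypothetical minimal node type, not the real SuffixTree internals):

#include <cstddef>
#include <memory>
#include <stack>
#include <string>
#include <vector>

// Hypothetical minimal node type; `depth` is the node's string depth
// (the length of the substring spelled out from the root to the node).
struct Node {
    std::size_t depth = 0;
    std::vector<std::unique_ptr<Node>> children;
};

std::unique_ptr<Node> tree_from_sa_and_lcp(const std::string& s,
                                           const std::vector<std::size_t>& sa,
                                           const std::vector<std::size_t>& lcp) {
    auto root = std::make_unique<Node>();
    std::stack<Node*> path;  // the right-most path of the tree, root at the bottom
    path.push(root.get());

    for (std::size_t i = 0; i < sa.size(); ++i) {
        const std::size_t leaf_depth = s.size() - sa[i];
        const std::size_t split = (i == 0) ? 0 : lcp[i];

        // Walk back up until the node on top of the stack is no deeper
        // than the common prefix shared with the previous suffix.
        while (path.top()->depth > split)
            path.pop();

        // If no node sits at exactly that depth, split the edge we just
        // walked out of by inserting an internal node there.
        if (path.top()->depth < split) {
            Node* parent = path.top();
            auto subtree = std::move(parent->children.back());
            parent->children.pop_back();
            auto mid = std::make_unique<Node>();
            mid->depth = split;
            mid->children.push_back(std::move(subtree));
            parent->children.push_back(std::move(mid));
            path.push(parent->children.back().get());
        }

        // Add the leaf for this suffix; it becomes part of the right-most path.
        auto leaf = std::make_unique<Node>();
        leaf->depth = leaf_depth;
        path.top()->children.push_back(std::move(leaf));
        path.push(path.top()->children.back().get());
    }
    return root;
}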

This was based on the ideas I learned from Dan Gusfield’s Fundamentals of Stringology lectures, specifically part 3. Incidentally, his later discussion of the Burrows Wheeler Transform led me to Colt McAnlis’ wonderful Compressor Head series, which far more people should watch.

Towards a usable SuffixArray API

I’ve already created a SuffixArray class to do much of what the SuffixTree class can do, such as finding matches for a substring. It currently does a binary search over the suffix array: O(log(n)) comparisons, each of which can take up to O(m) time, where m is the length of the substring. But this should be possible in O(m) time, hopefully with far fewer string comparisons, just as it is with a Suffix Tree. This can apparently be done with the help of the LCP Array. Hopefully I’ll get to that.
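For reference, the binary-search variant is short enough to sketch here (the names are illustrative, not the actual SuffixArray API): each probe compares at most m characters of a suffix against the substring.

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Return the start positions of all suffixes of `text` that begin with
// `pattern`, assuming `sa` is the suffix array of `text`. Matching suffixes
// form a contiguous run in the suffix array, found with two binary searches.
std::vector<std::size_t>
find_matches(const std::string& text, const std::vector<std::size_t>& sa,
             const std::string& pattern) {
    const std::size_t m = pattern.size();

    auto lo = std::lower_bound(sa.begin(), sa.end(), pattern,
        [&](std::size_t suffix, const std::string& p) {
            // true if the suffix's first m characters sort before the pattern
            return text.compare(suffix, m, p) < 0;
        });
    auto hi = std::upper_bound(lo, sa.end(), pattern,
        [&](const std::string& p, std::size_t suffix) {
            // true if the pattern sorts before the suffix's first m characters
            return text.compare(suffix, m, p) > 0;
        });

    return std::vector<std::size_t>(lo, hi);
}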

I also want to implement Kärkkäinen & Sanders' DC3 algorithm to construct the Suffix Array directly from the string in O(n) linear time. And, if I understand Dan Gusfield’s lecture properly, we can use the Burrows-Wheeler Transform to construct the associated LCP Array, also in linear time. Hopefully I’ll get to that too.

December 03, 2016

Core Apps Hackfest 2016: report

I spent last weekend at the Core Apps Hackfest in Berlin. The agenda was to work on GNOME’s core applications (Documents, Files, Music, Photos, Videos, Usage, etc.), to raise their overall standard and to push them beyond the limits of the framework. There were 19 of us, and among us we covered a wide range of modules and areas of expertise.

I spent most of my time on the plumbing necessary for Documents and Photos to use GtkFlowBox and GtkListBox. The innards of Photos had already been overhauled to reduce their dependency on GtkTreeModel. Going into the hackfest, we were sorely lacking a widget that had all the bells and whistles we need: the idiomatic GNOME 3 selection mode, and seamless switching between a list and a grid view. So this is where I decided to focus my energy. As a result, we now have a work-in-progress GdMainBox widget in libgd to replace the old GtkIconView/GtkTreeView-based GdMainView.


In fact, GtkListBox and GtkFlowBox were a recurring theme at the hackfest. Carlos Soriano and Georges were working on using them in Files, and whenever anybody uses them in a non-trivial manner there is the inevitable discussion about performance. Good thing that Benjamin was around: he spent the better part of a tram ride and more than an hour at the whiteboard, sketching out a strategy to make GtkListBox more efficient than it is today.

Like last year, Øyvind joined us. We talked about GEGL, and I finally saw the new GIMP in action. I rarely use GIMP, and I am not sure I have ever built it from source, but I have been reading its sources on a semi-regular basis for almost a year now. It was good to finally address this aberration. Øyvind had with him a cheap hand-held DLNA media renderer that was getting stuck when trying to render more than one image with dleyna-render and Photos. Zeeshan helped me poke at it, but unfortunately we didn’t get anywhere.

Other than that, Petr Stetka impressed everyone with his progress on the new Usage application. Georges refreshed his patches to implement the new Online Accounts Settings panel, Carlos Garnacho helped me with GtkGesture, and I reviewed various patches and branches that had been on my list for a while.

Many thanks to Red Hat for sponsoring me; to Carlos Soriano and Joaquim for organizing the event; to Kinvolk for being such gracious hosts; and to Collabora for the nice dinner.


December 02, 2016

GNOME Core Apps Hackfest 2016

The GNOME Core Apps Hackfest was held in Berlin this November, from Friday the 25th to Sunday the 27th.

My focus during this hackfest was to start implementing a widget for the series view of the Videos application, following a mockup by Allan Day.

To make this more interesting, I implemented this view using Emeus, Emmanuele Bassi's new, in-development, constraint-based layout system for GTK+. You can find the (clearly unfinished) result here: https://github.com/Kekun/totem-series. I will keep working on it with Victor Toso, who did an initial prototype last year.

Working at the hackfest was a great experience: interacting with the other contributors face to face helps a lot in strengthening GNOME as a community. 😃

Thanks a lot to Kinvolk and Collabora for helping to make this hackfest a great event!
