January 14, 2017
January 13, 2017
not sure how to tell you this, but just because your website is well-designed doesn’t mean that it’s effective.
and there’s one simple reason for this: most people fail to understand that websites are processes.
i've been talking about this a lot over the last year, at conferences like sfscon 2016 in italy and 12min.me in munich. many people asked me about the slides and further information, so i gladly published an extended version of my slides along with speaker notes. a video recording is available here.
the gist of my talk is the following:
- websites are processes and start way before people come to your website and end with clients sitting in your meeting room or buying your product
- it's no longer about optimizing your websites for seo and hoping for the best. it's about optimizing your presence across the web. and in the real world as well
- take time to carefully craft your value proposition. otherwise people don't get what you do, how you can help them and you'll lose them immediately
- make sure that your landing page works. a value proposition, a deep dive into your client's big, expensive problem and a call to action are essential
- if you do have an email list, don't send these spammy newsletters. personalize. give value. a lot
January 12, 2017
It’s 2014, and like previous years:
This time I won’t give any talk; I’ll just relax, enjoy talks from others, and enjoy Strasbourg.
And, what is more important, meet those hackers I interact with frequently, and maybe share some beers.
So...
- Mail chew; customer call. Amused to see a paper on The Appendix suggesting it is not a vestigial organ; less amusing that my mother had her spleen removed some years back as another useless / vestigial organ, before that too was found to be rather useful.
- TDF Mac Mini arrived, and I started to set it up to build LibreOffice, hopefully will have some spare time to fix a Mac issue or two.
Reproducible font rendering for librsvg's tests
The official test suite for SVG 1.1 consists of a bunch of SVG test files that use many of the features in the SVG specification. The test suite comes with reference PNGs: your SVG renderer is supposed to produce images that look like those PNGs.
I've been adding test files from that test suite to librsvg as I convert things to Rust, and also when I refactor code that touches code for a particular kind of SVG element or filter.
The SVG test suite is not a drop-in solution, however. The spec does not specify pixel-exact rendering. It doesn't mandate any specific kind of font rendering, either. The test suite is for eyeballing that tests render correctly, and each test has instructions on what to look for; it is not meant for automatic testing.
The test files include text elements, and the font for those texts is specified in an interesting way. SVG supports referencing "SVG fonts": your image_with_text_in_it.svg can specify that it will reference my_svg_font.svg, and that file will have individual glyphs defined as normal SVG objects. "You draw an a with this path definition", etc.
Librsvg doesn't support SVG fonts yet. (Patches appreciated!) As a provision for renderers which don't support SVG fonts, the test suite specifies fallbacks with well-known names like "sans-serif" and such.
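For illustration, a test file's text element typically lists the SVG font first and the well-known fallback after it, along these lines (the attribute values here are an approximation, not copied from the test suite):

```
<text font-family="'SVGFreeSans', sans-serif" font-size="20" x="10" y="40">
  Sample text
</text>
```

A renderer that doesn't understand SVG fonts skips the first family and falls through to whatever "sans-serif" resolves to.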
In the GNOME world, "sans-serif" resolves to whatever Fontconfig decides. Various things contribute to the way fonts are resolved:
- The fonts that are installed on a particular machine.
- The Fontconfig configuration that is on a particular machine: each distro may decide to resolve fonts in slightly different ways.
- The user's personal ~/.fonts, and whether they are running gnome-settings-daemon and whether it monitors that directory for Fontconfig's perusal.
- Phase of the moon, checksum of the clouds, polarity of the yak fields, etc.
For silly reasons, librsvg's "make distcheck" doesn't work when run as a user; I need to run it as root. And as root, my personal ~/.fonts doesn't get picked up and also my particular font rendering configuration is different from the system's default (why? I have no idea — maybe I selected specific hinting/antialiasing at some point?).
It has taken a few tries to get reproducible font rendering for librsvg's tests. Without reproducible rendering, the images that get rendered from the test suite may not match the reference images, depending on the font renderer's configuration and the available fonts.
Currently librsvg does a few things to get reproducible font rendering for the test suite:
- We use a specific cairo_font_options_t on our PangoContext. These options specify what antialiasing, hinting, and hint metrics to use, so that the environment's or user's configuration does not affect rendering.
- We create a specific FcConfig and a PangoFontMap for testing, with a single font file that we ship. This will cause any font description, no matter if it is "sans-serif" or whatever, to resolve to that single font file. Special thanks to Christian Hergert for providing the relevant code from GNOME Builder.
- We ship a font file as mentioned above, and just use it for the test suite.
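The points above can be sketched roughly in C using the public Fontconfig/Pango/cairo APIs. This is only an illustrative sketch, not librsvg's actual test code; the font path is a made-up example:

```c
#include <fontconfig/fontconfig.h>
#include <pango/pangocairo.h>
#include <pango/pangofc-fontmap.h>

/* Sketch: build a PangoContext whose font resolution and rendering
 * options are fully under our control, so tests render the same
 * everywhere.  "tests/resources/TestFont.ttf" is a hypothetical path. */
static PangoContext *
create_test_context (void)
{
    /* A private FcConfig that knows about exactly one font file, so any
     * font description ("sans-serif" or whatever) resolves to it. */
    FcConfig *config = FcConfigCreate ();
    FcConfigAppFontAddFile (config,
        (const FcChar8 *) "tests/resources/TestFont.ttf");

    PangoFontMap *font_map =
        pango_cairo_font_map_new_for_font_type (CAIRO_FONT_TYPE_FT);
    pango_fc_font_map_set_config (PANGO_FC_FONT_MAP (font_map), config);

    PangoContext *context = pango_font_map_create_context (font_map);

    /* Fixed cairo font options, so the user's or system's hinting and
     * antialiasing configuration does not leak into the output. */
    cairo_font_options_t *options = cairo_font_options_create ();
    cairo_font_options_set_antialias (options, CAIRO_ANTIALIAS_GRAY);
    cairo_font_options_set_hint_style (options, CAIRO_HINT_STYLE_FULL);
    cairo_font_options_set_hint_metrics (options, CAIRO_HINT_METRICS_ON);
    pango_cairo_context_set_font_options (context, options);
    cairo_font_options_destroy (options);

    return context;
}
```

Any PangoLayout created from such a context will then resolve every font description to the single shipped font, with fixed rendering options.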
This seems to work fine. I can run "make check" both as my regular user, with my private ~/.fonts stash, and as root, with the system's configuration, and the test suite passes. The rendered SVGs match the reference PNGs that get shipped with librsvg — reproducible font rendering, at least on my machine. I'd love to know if this works on other people's boxes as well.
January 11, 2017
- Mail chew, contract work, encouraging partner call. Took H. out to try to draw the moon at night (Astronomy GCSE) - immediate cloud cover: hmm.
Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which allows to quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:
H:|-[icon(==256)]-[name_label]-|
H:[surname_label]-|
H:[email_label]-|
H:|-[button(<=icon)]
V:|-[icon(==256)]
V:|-[name_label]-[surname_label]-[email_label]-|
V:[button]-|
and obtain a layout like this one:
Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, now Emeus supports extensions to the VFL, namely:
- arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
- explicit attribute references, like [button1(button1.height / 2)]
This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.
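For example (a made-up description, not taken from the Emeus docs), the attribute references make it possible to pin an element's aspect ratio directly in VFL:

```
H:|-[video(video.height * 2)]-|
V:|-[video]-|
```

Here the hypothetical video widget is always twice as wide as it is tall, something that previously required setting the constraint up in code.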
Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:
I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:
At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.
As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as is the API reference, and I’m happy to review pull requests and feature requests.
January 10, 2017
If your build depends on a non-exact dependency version (like “somelibrary >= 3.1”), and the exact version gets recomputed every time you run the build, your project is broken.
- You can no longer build old versions and get the same results.
- Want to cut a bugfixes-only release from an old branch? Sorry.
- Want to use git bisect? Nope.
- You can’t rely on your code working because it will change by itself. Maybe it worked today, but that doesn’t mean it will work tomorrow. Maybe it worked in continuous integration, but that doesn’t mean it will work when deployed.
- Wondering whether any dependency versions changed and when? No way to figure it out.
Package management and build tools should get this right by default. It is a real problem; I’ve seen it bite projects I’m working on countless times.
(I know that some package managers get it right, and good for them! But many don’t. Not naming names here because it’s beside the point.)
What’s the solution? I’d argue that it’s been well-known for a while. Persist the output of the dependency resolution process and keep it in version control.
- Start with the “logical” description of the dependencies as hand-specified by the developers (leaf nodes only, with version ranges or minimum versions).
- Have a manual update command to run the dependency resolution algorithm, arriving at an exhaustive list of all packages (ideally: identified by content hash and including results for all possible platforms). Write this to a file with deterministic sort order, and encourage keeping this file in git. This is sometimes called a “lock file.”
- Both CI and production deployment should use the lock file to download and install an exact set of packages, ideally bit-for-bit content-hash-verified.
- When you want to update dependencies, run the update command manually and submit a pull request with the new lock file, so CI can check that the update is safe. There will be a commit in git history showing exactly what was upgraded and when.
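To make the workflow concrete, here is a small, purely illustrative Python sketch of the "persist the resolution output, sorted and content-hashed" idea (every name in it is made up; real package managers obviously do much more):

```python
import hashlib

def write_lock_file(resolved):
    """Persist a resolver's output deterministically.

    resolved: dict mapping package name -> (exact_version, package_bytes).
    The resolver itself is out of scope here; we only show the lock step.
    """
    lines = []
    for name in sorted(resolved):  # deterministic order, diffs cleanly in git
        version, content = resolved[name]
        digest = hashlib.sha256(content).hexdigest()
        lines.append(f"{name}=={version} sha256:{digest}")
    return "\n".join(lines) + "\n"

def verify(lock_line, content):
    """CI / deployment side: check a downloaded package against the lock file."""
    expected = lock_line.rsplit("sha256:", 1)[1]
    return hashlib.sha256(content).hexdigest() == expected

resolved = {
    "somelibrary": ("3.4.1", b"somelibrary-3.4.1-bytes"),
    "transitive-dep": ("1.0.3", b"transitive-dep-1.0.3-bytes"),
}
print(write_lock_file(resolved), end="")
```

The hand-edited file keeps only the loose ranges ("somelibrary >= 3.1"); the generated lock file pins exact versions plus hashes, and is the only thing CI and production ever install from.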
Bonus: downloading a bunch of fixed package versions can be extremely efficient; there’s no need to download package A in order to find its transitive dependencies and decide package B is needed, instead you can have a list of exact URLs and download them all in parallel.
You may say this is obvious, but several major ecosystems do not do this by default, so I’m not convinced it’s obvious.
Reproducible builds are (very) useful, and when package managers can’t snapshot the output of dependency resolution, they break reproducible builds in a way that matters quite a bit in practice.
(Note: of course this post is about the kind of package manager or build tool that manages packages for a single build, not the kind that installs packages globally for an OS.)
I really like the polished look of GNOME and its default theme, Adwaita, but there is one thing that has been bugging me for some time. By default, server-side window decorations are light, and if an app has a dark UI and uses server-side window decorations, you get a dark window with a light title bar. It doesn’t look very nice, and when you maximize the window it gets even worse, because you get a nice black-and-white hamburger (black top bar, light title bar, and dark window content).
There are quite a few apps suffering from this: Atom, Firefox Developer Edition, Blender,…
But Mutter actually allows clients to set a theme for their window decorations even though they’re rendered on the server side. They just need to set an X window property: _GTK_THEME_VARIANT=dark.
And I think the difference speaks for itself:


You can test it by executing: xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT dark
and clicking the window where it should apply.
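For an application that wants to do this for its own window, the property can be set with plain Xlib; here is a minimal sketch (error handling omitted, and note that GTK+ applications that request the dark theme variant already get this without any extra code):

```c
#include <X11/Xlib.h>
#include <string.h>

/* Sketch: ask Mutter for dark server-side decorations by setting
 * the _GTK_THEME_VARIANT property on our toplevel window. */
static void
request_dark_decorations (Display *dpy, Window win)
{
    Atom variant = XInternAtom (dpy, "_GTK_THEME_VARIANT", False);
    Atom utf8    = XInternAtom (dpy, "UTF8_STRING", False);

    XChangeProperty (dpy, win, variant, utf8,
                     8 /* format: 8-bit elements, as in xprop -f ... 8u */,
                     PropModeReplace,
                     (const unsigned char *) "dark", strlen ("dark"));
}
```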
Are you a user of one of the apps that would benefit from it? Or even a contributor? Try to convince the project to implement this tiny change. If you’re a distro maintainer of such an app, you may consider applying a small patch.
Recently I got myself a GPD Win. To put it simply, it's a PC in a Nintendo 3DS XL form factor, with a keyboard and a game controller. It comes with Windows 10, and many not-too-demanding games work perfectly on it: it's perfect for running indie games from Steam and for retro console emulation.
But who wants to simply play video games? Let's make it fun: let's put a penguin in it! On this GNOME wiki page I'll report all my findings on Linux support for this machine, focusing mainly on openSUSE for the moment. Wouldn't it be awesome to have a fully working and easily installable GNOME desktop running Games and Steam on this machine? 😃
Following up on my previous post, where I detailed the work I’ve been doing mostly on Purism’s website, today’s post will cover some video work. Near the beginning of October, I received a Librem 15 v2 unit for testing and reviewing purposes. I have been using it as my main laptop since then, as I don’t believe in reviewing something without using it daily for at least a couple of weeks. And so on nights and week-ends, I wrote down testing results, rough impressions and recommendations, then wrote a detailed plan and script to make the first in-depth video review of this laptop. Here’s the result—not your typical 2-minute superficial tour:
With this review, I wanted to:
- Satisfy my own curiosity and then share the key findings; one of the things that annoyed me some months ago is that I couldn’t find any good “up close” review video to answer my own technical questions, and I thought “Surely I’m not the only one! Certainly a bunch of other people would like to see what the beast feels like in practice.”
- Make an audio+video production I would be proud of, artistically speaking. I’m rather meticulous in my craft, as I like creating quality work made to last (similarly, I have recently finished a particular decorative painting after months of obsession… I’ll let you know about that in some other blog post ;)
- Put my production equipment to good use; I had recently purchased a lot of equipment for my studio and outdoors shooting—it was just begging to be used! Some details on that further down in this post.
- Provide a ton of industrial design feedback to the Purism team for future models, based on my experience owning and using almost every laptop type out there. And so I did. Pages and pages of it, way more than can fit in a video:
Pictured: my review notes

A fine line
For the video, I spent a fair amount of time writing and revising my video’s plan and narration (half a dozen times at least), until I came to a satisfactory “final” version that I was able to record the narration for (using the exquisite ancient techniques of voice acting).
The tricky part is being simultaneously concise, exhaustive, fair, and entertaining. I wanted to be as balanced as possible and to cover the most crucial topics.
- At the end of the day, there are some simple fundamentals of what makes a good or bad computer. Checking my 14 pages of review notes, I knew I was being extremely demanding, and that some of those expectations of mine came down to personal preference, things that most people don’t even care about, or common issues that are not even addressed by most “big brand” OEMs’ products… so I balanced my criticism with a dose of realism, making sure to focus on what would matter to people.
- I also chose topics that would have a longer “shelf life”, considering how much work it takes to produce a high-quality video. For example, even while preparing the review over the course of 2-3 months, some aspects (such as the touchpad drivers) changed/improved and made me revise my opinion. The touchpad behaved better in Fedora 25 than in Fedora 24… until a kernel update broke it (Ugh. At that point I decided to version-lock my kernel package in Fedora).
- I was conservative in my estimates, even if that makes the device look less shiny than it is. For example, while I said “5-6 hours” of battery life in the review video, in practice I realized that I can get 10-12 hours of battery life with my personal usage pattern (writing text in Gedit, no web browser open, 50% brightness, etc.) and a few simple tweaks in PowerTop.
The final version of my script, as I recorded it, was 59 minutes long. No joke. And that was after I had decided to ignore some topics (e.g. the whole part about the preloaded operating system; that part alone would be long enough to be a standalone review).
I spent some days processing that 59-minute recording to remove any sound impurities or mistakes, and sped up the tempo, bringing the duration down to 37 and then 31 minutes. Still, that was too long, so while I was doing the final video edit, I tightened everything further (removing as many speech gaps as possible) and cut out a few more topics at the last minute. The final rendered result is a video that is 21 minutes long. Much more reasonable, considering it’s an “in depth” review.
Real audio, lighting, and optics
My general work ethic is: when you do something, do it properly or don’t do it at all.
For this project I used studio lighting, tripods and stands, a dolly, a DSLR with two lenses, a smaller camera for some last minute shots, a high-end PCM sound recorder, a phantom-powered shotgun microphone, a recording booth, monitoring headphones, sandbags, etc.
To complement the narration and cover the length of the review, I chose seven different songs (out of many dozens) based on genre, mood and tempo. I sometimes had to cut/mix songs to avoid the annoying parts. The result, I hope, is a video that remains pleasant and entertaining to watch throughout, while also having a certain emotional or “material” quality to it. For example, the song I used for the thermal design portion makes me feel like I’m touching heatpipes and watching energy flow. Maybe that’s just me though—perhaps if there was some lounge music I would feel like a sofa ;)
Fun!
Much like when I made a video for a local symphonic orchestra, I have enjoyed the making of this review video immensely. I’m glad I got an interesting device to review (not a whole chicken in a can) with the freedom to make the “video I would have wanted to see”. On that note, if anyone has a fighter jet they’d want to see reviewed, let me know (I’m pretty good at dodging missiles ;)
January 09, 2017
In this last week, the master branch of GTK+ has seen 81 commits, with 12205 lines added and 12625 lines removed.
Planning and status
- Welcome back to This Week in GTK+ after the end of the year break
- The GTK+ road map is available on the wiki.
Notable changes
On the master branch:
- Timm Bäder merged his work on moving the scene graph of widgets directly into the GtkWidget class; this allows widgets to have internal children without necessarily subclassing GtkContainer
- Timm also worked on porting widgets currently using the internal CSS gadget API to be composite widgets, like GtkSwitch
- Benjamin Otte and Georges Basile Stavracas Neto have been working on making the Vulkan GSK renderer work on Wayland
- Benjamin also worked on improving the efficiency of the Vulkan renderer
- William Hua worked on improving the Mir backend of GDK with regards to clipboard support
On the gtk-3-22 stable branch:
- Matthias Clasen released GTK+ 3.22.6
Bugs fixed
- 776627 – Correct PostScript capitalization
- 776868 – Improve the documentation of GtkEntry:attributes
- 776560 – icon-browser: window opens at very narrow size, only showing 1 column of icons
- 775732 – mir: clipboard support missing
- 776736 – build: Fix vulkan detection
- 776807 – GtkInspector doesn’t show up when Gtk is initialized through option group
Getting involved
Interested in working on GTK+? Look at the list of bugs for newcomers and join the IRC channel #gtk+ on irc.gnome.org.
January 08, 2017
January 07, 2017
Slashdot was the first to write about FreeDOS 1.2, but we also saw coverage from Engadget Germany, LWN, Heise Online, PC Forum Hungary, FOSS Bytes, ZDNet Germany, PC Welt, Tom's Hardware, and Open Source Feed. And that's just a sample of the news! There were articles from the US, Germany, Japan, Hungary, Ukraine, Italy, and others.
In reading the articles people had written about FreeDOS 1.2, I realized something that was both cool and insightful: most tech news sites re-used material from our press kit.
You see, in the weeks leading up to FreeDOS 1.2, I assembled additional information and resources about the FreeDOS 1.2 release, including a bunch of screenshots and other images of FreeDOS in action. In an article posted to our website, I highlighted the press kit, and added "If you are writing an article about FreeDOS, feel free to use this information to help you." And they did!
We track a complete timeline of interesting events on our FreeDOS History page, including links to articles. Comparing the press coverage from FreeDOS 1.0, FreeDOS 1.1 and FreeDOS 1.2, we definitely saw the most articles about FreeDOS 1.2. And unlike previous releases where only a few tech news websites wrote articles about FreeDOS and other news outlets mostly referenced the first few sites, the coverage of FreeDOS 1.2 was mostly original articles. Only a small handful were references to news items from other news sites.
I put that down to the press kit. With the press kit, journalists were able to quickly pull interesting information and quotes about FreeDOS, and find images they could use in their articles. For a busy journalist who doesn't have much time to write about a free DOS implementation in 2016, our press kit made it easy to create something fresh. And news sites love to write their own stories rather than link to other news sites. That means more eyeballs for them.
Here are a few lessons I learned from creating our press kit:
What is your project about? What does it do? How is it useful? Who uses it? What are the new features in this release? These are the basic questions any journalist will want to answer in their article, if they choose to write about you. In the FreeDOS press kit, I also included a history about FreeDOS, discussing how we got started in 1994 and some highlights from our timeline.
In writing about your project, pretend you are writing an email to someone you know. Or if you prefer, write like you are posting something to a personal blog. Keep it informal. Avoid jargon. If your language is too stuffy or too technical, journalists will have a hard time quoting from you. In writing the FreeDOS press kit, I started by listing a few common questions that people usually ask me about FreeDOS, then I just responded to them like I was answering an email. My answers were often long, but the paragraphs were short, so they were easier to skim.
Whether your program runs from the command line or in a graphical environment, screenshots are key. And tech news sites like to use images; they are a cheap way to draw attention. So take lots of screenshots and include them in your press kit. Show all the major features through these screenshots. But be wary of background images and other branding that might distract from your screenshots. In particular, if the screenshot will show your desktop, set your wallpaper to the default for your operating system, or use a solid color in the range medium- to light-blue. For the FreeDOS press kit, I took a ton of screenshots of every step in the install process. I also grabbed screenshots of FreeDOS at the command line, running utilities and tools, and playing some of the games we installed.
You may find your press kit will become quite long. That's okay, as long as this doesn't make it difficult for someone to figure out what's there. Put the important stuff first. Use a table of contents, if you have a lot of information to share. Use headings and sections to break things up. If a journalist can't find the information they need to write an article about your project, they may skip it and write about something else. I organized our press kit like a simple website. An index page provided some basic information, with a list of links to other material contained in the press kit. I arranged our screenshots in separate "pages." And every page of screenshots started with a brief context, then listed the screenshots without much fanfare. But every screenshot included a description of what you were seeing. For example, I had over forty screenshots from installing FreeDOS, and I wrote a one-sentence description for each.
No matter how much work you put into it, no one will want to use your press kit if it is riddled with spelling errors and poor grammar. Consider writing your press kit material in a word processor and running a spell check against it. Read your text aloud and see if it makes sense to you. When you're done, try to look at your press kit from the perspective of someone who hasn't used your project before. Can they easily understand what it's about? To help you in this step, ask a friend to review the material for you.
Don't assume that tech news sites will seek you out. You need to reach out to them to let them know you have a new release coming up. Create your press kit well in advance, and about a week or two before your release, individually email every journalist or tech news website that might be interested in you. Most news sites have a "Contact us" link or list of editor "beats" where you can direct yourself to the writer or editor most likely to write about your topic. Craft a short email that lets them know who you are, what project you're from, when the next release will happen, and what new features it will include. Give them a link to the press kit directly in your email. But make the press kit easy to see in the email. Use the full URL to the press kit, and make it clickable. Also link to the press kit from your website, so anyone else who visits your project can quickly find the information they need to write an article.
TL;DR
Motivation
I was always curious about how I spend time using my computer: what applications I use, how much time I spend in a particular app, etc. I know there is plenty of activity-tracking software on the market; however, none of it met my requirements, so a couple of months ago I started the workertracker project [2], which does the job. I'll blog about the application in the future, since it's not ready yet (there are a few pre-releases, so feel free to test it). This post, however, is about a quite important feature of the app: accessing the current URL of the browser (in google-chrome, for now).
chromietabs library
Since I couldn't find any good solution on the internet, I've decided to implement a tiny library that will provide information about active tab in Google Chrome and Chromium web browsers.
The interface of the chromietabs library is very simple and consists of a few layers; depending on how detailed the information you need is, you use a different class. The example below demonstrates how you can access the URL of the current tab in Google Chrome:
#include <iostream> // plus the chromietabs headers

ChromieTabs::SessionAnalyzer analyzer{
    ChromieTabs::SessionReader("Current Session")};
auto window_id = analyzer.get_current_window_id();
auto active_tab_id = analyzer.get_current_tab_id(window_id);
std::cout << analyzer.get_current_url(active_tab_id) << std::endl;

A full example can be found in the git repository [1].
You can also use the documentation [3].
Please note that the current release (0.1) is a pre-release, and the API might change a bit.
How does it work?
I noticed that when I kill (not close) google-chrome, it's able to restore my tabs after the crash. That means it has to constantly update some file, saving information about the tabs. I was right: there is a binary file in your profile directory, Current Session, that stores that information.
Unfortunately, Current Session is a binary file, so I had to go through the chromium source code to figure out the file format.
Feedback
Feedback is always appreciated! Feel free to comment, report issues [4], or create pull requests [5].
Links
[1] https://github.com/loganek/chromietabs
[2] https://github.com/loganek/workertracker
[3] https://loganek.github.io/chromietabs/master//index.html
[4] https://github.com/loganek/chromietabs/issues
[5] https://github.com/loganek/chromietabs/pulls

Lately I’ve been working on integrating ModemManager into OpenWRT, in order to provide a unique and consolidated way to configure and manage mobile broadband modems (2G, 3G, 4G, Iridium…), all working with netifd.
OpenWRT already has some support for a lot of the devices that ModemManager is able to manage (e.g. through the uqmi, umbim or wwan packages), but unlike the current solutions, ModemManager doesn’t require protocol-specific configurations or setups for the different devices; i.e. the configuration for a modem running in MBIM mode may be the same one as the configuration for a modem requiring AT commands and a PPP session.
Currently the OpenWRT package prepared is based on ModemManager git master, and therefore it supports: QMI modems (including the new MC74XX series which are raw-ip only and don’t support DMS UIM operations), MBIM modems, devices requiring QMI over MBIM operations (e.g. FCC auth), and of course generic AT+PPP based modems, Cinterion, Huawei (both AT+PPP and AT+NDISDUP), Icera, Haier, Linktop, Longcheer, Ericsson MBM, Motorola, Nokia, Novatel, Option (AT+PPP and HSO), Pantech, Samsung, Sierra Wireless (AT+PPP and DirectIP), Simtech, Telit, u-blox, Wavecom, ZTE… and even Iridium and Thuraya satellite modems. All with the same configuration.
Along with ModemManager itself, the OpenWRT feed also contains libqmi and libmbim, which provide the qmicli, mbimcli, and soon the qmi-firmware-update utilities. Note that you can also use these command line tools, even if ModemManager is running, via the qmi-proxy and mbim-proxy setups (i.e. just adding -p to the qmicli or mbimcli commands).
This is not the first time I’ve tried to do this; but this time I believe it is a much more complete setup and likely ready for others to play with it. You can jump to the modemmanager-openwrt bitbucket repository and follow the instructions to include it in your OpenWRT builds:
https://bitbucket.org/aleksander0m/modemmanager-openwrt
The following sections try to get into a bit more detail of which were the changes required to make all this work.
And of course, thanks to VeloCloud for sponsoring the development of the latest ModemManager features that made this integration possible 
udev vs hotplug
One of the latest biggest features merged in ModemManager was the possibility to run without udev support; i.e. without automatically monitoring the device addition and removals happening in the system.
Instead of using udev, the mmcli command line tool ended up with a new --report-kernel-event option that can be used to report device additions and removals manually, e.g.:
$ mmcli --report-kernel-event="action=add,subsystem=tty,name=ttyUSB0"
$ mmcli --report-kernel-event="action=add,subsystem=net,name=wwan0"
This new way of notifying device events made it very easy to integrate the automatic device discovery supported in ModemManager directly via tty and net hotplug scripts (see mm_report_event()).
With the integration in the hotplug scripts, ModemManager will automatically detect and probe the different ports exposed by the broadband modem devices.
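A stripped-down sketch of what such a hotplug script might look like (the real mm_report_event() helper in the package is more careful; the variable names follow OpenWRT's hotplug environment):

```shell
#!/bin/sh
# /etc/hotplug.d/tty/25-modemmanager-tty (illustrative sketch)
# OpenWRT hotplug exports ACTION and DEVICENAME for each event.
case "$ACTION" in
    add|remove)
        mmcli --report-kernel-event="action=${ACTION},subsystem=tty,name=${DEVICENAME}"
        ;;
esac
```

An equivalent script under /etc/hotplug.d/net/ would report the net-subsystem events.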
udev rules
ModemManager relies on udev rules for different things:
- Blacklisting devices: E.g. we don’t want ModemManager to claim and probe the TTYs exposed by Arduinos or braille displays. The package includes a USB vid:pid based blacklist of devices that expose TTY ports and are not modems to be managed by ModemManager.
- Blacklisting ports: There are cases where we don’t want the automatic logic selection to grab and use some specific modem ports, so the package also provides a much shorter list of ports blacklisted from actual modem devices. E.g. the QMI implementation in some ZTE devices is so poor that we decided to completely skip it and fallback to AT+PPP.
- Greylisting USB serial adapters: The TTY ports exposed by USB serial adapters aren’t probed automatically, as we don’t know what’s connected on the serial side. If we want to use a serial modem, though, the mmcli --scan-modems operation may be executed, which will include the probing of these greylisted devices.
- Specifying port type hints: Some devices expose multiple AT ports, but with different purposes. E.g. a modem may expose a port for AT control and another port for the actual PPP session, and choosing the wrong one will not work. ModemManager includes a list of port type hints so that the automatic selection of which port is for what purpose is done transparently.
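As an illustration of the first category, a blacklist entry is just a udev-style rule tagging a device for ModemManager to skip; something like this (an approximation in the style of the shipped rules, using the Arduino USB vendor ID as the example):

```
# Ignore all TTYs exposed by Arduino boards (USB vid 2341):
ATTRS{idVendor}=="2341", ENV{ID_MM_DEVICE_IGNORE}="1"
```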
As we’re not using udev when running in OpenWRT, ModemManager includes now a custom generic udev rules parser that uses sysfs properties to process and apply the rules.
procd based startup
The ModemManager daemon is set up to be started and controlled via procd. The init script controlling the startup will also take care of scheduling the re-play of the hotplug events that had earlier triggered --report-kernel-event actions (they’re cached in /tmp), e.g. to cope with events coming before the daemon started, or to handle daemon restarts gracefully.
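The init script might be sketched roughly as follows (a hypothetical sketch: the daemon path and START priority are assumptions, and the cached-event re-play logic is omitted; this is not the packaged script):

```shell
#!/bin/sh /etc/rc.common
# Hypothetical sketch of a procd init script for ModemManager; the
# packaged script additionally re-plays the kernel events cached in /tmp.
USE_PROCD=1
START=70

start_service() {
    procd_open_instance
    # Daemon path is an assumption for this sketch
    procd_set_param command /usr/sbin/ModemManager
    # Restart automatically if the daemon exits
    procd_set_param respawn
    procd_close_instance
}
```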
DBus
Well, no, I didn’t port ModemManager to use ubus.
If you want to run ModemManager under OpenWRT you’ll also need to have the DBus daemon running.
netifd protocol handler
When using ModemManager, the user shouldn’t need to know the peculiarities of the modem being used: all modems and protocols (QMI, MBIM, Generic AT, vendor-specific AT…) are all managed via the same single DBus interfaces. All the modem control commands are internal to ModemManager, and the only additional considerations needed are related to how to setup the network interface once the modem is connected, e.g.:
- PPP: some modems require a PPP session over a serial port.
- Static: some modems require static IP configuration on a network interface.
- DHCP: some modems require dynamic IP configuration on a network interface.
The OpenWRT package for ModemManager includes a custom protocol handler that enables the modemmanager protocol to be used when configuring network interfaces. This new protocol handler takes care of configuring and bringing up the interfaces as required when the modem gets into connected state.
Example configuration
The following snippet shows an example interface configuration to set in /etc/config/network.
config interface 'broadband'
    option device '/sys/devices/platform/soc/20980000.usb/usb1/1-1/1-1.2/1-1.2.1'
    option proto 'modemmanager'
    option apn 'ac.vodafone.es'
    option username 'vodafone'
    option password 'vodafone'
    option pincode '7423'
    option lowpower '1'
The settings currently supported are the following ones:
- device: The full sysfs path of the broadband modem device needs to be configured. Relying on the interface names exposed by the kernel is never a good idea, as these may change e.g. across reboots or when more than one modem device is available in the system.
- proto: As said earlier, the new modemmanager protocol needs to be configured.
- apn: If the connection requires an APN, the APN to use.
- username: If the access point requires authentication, the username to use.
- password: If the access point requires authentication, the password to use.
- pincode: If the SIM card requires a PIN, the code to use to unlock it.
- lowpower: If enabled, this setting will request the modem to go into low-power state (i.e. IMSI detach and RF off) when the interface is disconnected.
As you can see, the configuration can be used for any kind of modem device, regardless of which control protocol it uses, which interfaces are exposed, or how the connection is established. The settings are currently IPv4 only, but adding IPv6 support shouldn’t be a big issue; patches welcome!
SMS, USSD, GPS…
The main purpose of using a mobile broadband modem is of course the connectivity itself, but it also may provide many more features. ModemManager provides specific interfaces and mmcli actions for the secondary features which are also available in the OpenWRT integration, including:
- SMS messaging (both 3GPP and 3GPP2).
- Location information (3GPP LAC/CID, CDMA Base station, GPS…).
- Time information (as reported by the operator).
- 3GPP USSD operations (e.g. to query prepaid balance to the operator).
- Extended signal quality information (RSSI, Ec/Io, LTE RSRQ and RSRP…).
- OMA device management operations (e.g. to activate CDMA devices).
- Voice call control.
It’s worth noting that not all these features are available for all modem types (e.g. SMS messaging is available for most devices, but OMA DM is only supported in QMI based modems).
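As an illustration, these secondary features are driven through mmcli actions along these lines (modem index 0 and SMS index 0 are assumptions for the example, and exact option names can vary between ModemManager versions):

```shell
# Create an SMS on modem 0, then send it (indices are examples):
$ mmcli -m 0 --messaging-create-sms="text='hello',number='+1234567890'"
$ mmcli -s 0 --send

# Query network-reported time and 3GPP location:
$ mmcli -m 0 --time
$ mmcli -m 0 --location-get-3gpp

# Initiate a USSD request, e.g. to query prepaid balance:
$ mmcli -m 0 --3gpp-ussd-initiate="*111#"
```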
TL;DR?
You can now have your 2G/3G/4G mobile broadband modems managed with ModemManager and netifd in your OpenWRT based system.
Filed under: Development, FreeDesktop Planet, GNOME Planet, Planets Tagged: libmbim, libqmi, ModemManager, openwrt
January 06, 2017
I have started teaching Linux classes at university this 2017! The course is a review of the GNU/Linux story, followed by the installation and the use of commands to manage the terminal. At the end of the course, some services are also set up to prepare students for the IT infrastructure world.
To start this new adventure, I have recommended that they post some experiences from class:
- Jose Huaman work: http://iepvjhuaman.wixsite.com/fedora
- Luis Zacarias: https://sites.google.com/site/luiszacariasquispeojeda1997/instalacion-de-fedora
- Victor Mendoza: https://stefano2017.wordpress.com/blog/
- Victoria Meza: https://victoria837.wordpress.com/blog/
- Staymon Loza: https://staymonloza.wordpress.com/2017/01/06/pasos-de-instalacion-de-fedora-en-virtual-box/
- Bressner Revollar: https://sites.google.com/view/instalaciondefedora25
- Sandra Fiorella: https://fiosandrablog.wordpress.com/
- Daniel Valderrama: https://danielvalderramablog.wordpress.com/
- Vito Lezama: https://vitolezamafblog.wordpress.com/2017/01/05/first-blog-post/
- Fonsy Mayuri post: https://fonsymgblog.wordpress.com/2017/01/05/instalacion-de-fedora-25-sobre-virtualbox/
- Zarela Vicente: https://zarelavicenteblog.wordpress.com/
- Josué Ñauri: josuenaure.wordpress.com
- Juan Francisco Ezquerre: https://juanezquerra.wordpress.com/
- Andres Saenz post: https://andrewsaenzsite.wordpress.com/2017/01/05/desarrollo-en-linux/
- Jhan Guerra: https://jhanblogcarlos.wordpress.com/2017/01/05/primera-entrada-del-blog/
- Chirthian Luque: https://christianluqueq.wordpress.com/
- Edilto Aguilar: https://ediltousil.wordpress.com/blog
Here are some pictures of this new group of 20 at the USIL lab:
Thanks USIL (Universidad San Ignacio de Loyola) ❤
Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: fedora, FEDORA 25, GNOME, Julita Inca, Julita Inca Chiroque, lab Linux, Lima Peru, linux, Perú, Universidad San Ignacion de Loyola, USIL
January 05, 2017
It has been a while since I’ve written about flow-based programming — but now that I’m putting most of my time into Flowhub, things are moving really quickly.
One example is the new component API in NoFlo that has been emerging over the last year or so.
Most of the work described here was done by Vladimir Sibirov from The Grid team.
Introducing the Process API
NoFlo programs consist of graphs where different nodes are connected together. These nodes can themselves be graphs, or they can be components written in JavaScript.
A NoFlo component is simply a JavaScript module that provides a certain interface that allows NoFlo to run it. In the early days there was little convention on how to write components, but over time some conventions emerged, and with them helpers to build well-behaved components more easily.
Now with the upcoming NoFlo 0.8 release we’ve taken the best ideas from those helpers and rolled them back into the noflo.Component base class.
So, what does a component written using the Process API look like?
// Load the NoFlo interface
var noflo = require('noflo');
// Also load any other dependencies you have
var fs = require('fs');
// Implement the getComponent function that NoFlo's component loader
// uses to instantiate components to the program
exports.getComponent = function () {
// Start by instantiating a component
var c = new noflo.Component();
// Provide some metadata, including icon for visual editors
c.description = 'Reads a file from the filesystem';
c.icon = 'file';
// Declare the ports you want your component to have, including
// their data types
c.inPorts.add('in', {
datatype: 'string'
});
c.outPorts.add('out', {
datatype: 'string'
});
c.outPorts.add('error', {
datatype: 'object'
});
// Implement the processing function that gets called when the
// inport buffers have packets available
c.process(function (input, output) {
// Precondition: check that the "in" port has a data packet.
// Not necessary for single-inport components but added here
// for the sake of demonstration
if (!input.hasData('in')) {
return;
}
// Since the preconditions matched, we can read from the inport
// buffer and start processing
var filePath = input.getData('in');
fs.readFile(filePath, 'utf-8', function (err, contents) {
// In case of errors we can just pass the error to the "error"
// outport
if (err) {
output.done(err);
return;
}
// Send the file contents to the "out" port
output.send({
out: contents
});
// Tell NoFlo we've finished processing
output.done();
});
});
// Finally return the component to the loader
return c;
}
Most of this is still the same component API we’ve had for quite a while: instantiation, component metadata, port declarations. What is new is the process function and that is what we’ll focus on.
When is process called?
NoFlo components call their processing function whenever they’ve received packets to any of their regular inports.
In general, any new information packet received by the component causes the process function to trigger. However, there are some exceptions:
- Non-triggering ports don’t cause the function to be called
- Ports that have been set to forward brackets don’t cause the function to be called on bracket IPs, only on data
Handling preconditions
When the processing function is called, the first job is to determine if the component has received enough data to act. These “firing rules” can be used for checking things like:
- When having multiple inports, do all of them contain data packets?
- If multiple input packets are to be processed together, are all of them available?
- If receiving a stream of packets, is the complete stream available?
- Any input synchronization needs in general
The NoFlo component input handler provides methods for checking the contents of the input buffer. Each of these returns a boolean indicating whether the conditions are matched:
- input.has('portname'): whether an input buffer contains packets of any type
- input.hasData('portname'): whether an input buffer contains data packets
- input.hasStream('portname'): whether an input buffer contains at least one complete stream of packets
For convenience, has and hasData can be used to check multiple ports at the same time. For example:
// Fail precondition check unless both inports have a data packet
if (!input.hasData('in1', 'in2')) return;
For more complex checking it is also possible to pass a validation function to the has method. This function will get called for each information packet in the port(s) buffer:
// We want to process only when color is green
var validator = function (packet) {
if (packet.data.color === 'green') {
return true;
}
return false;
}
// Run all packets in in1 and in2 through the validator to
// check that our firing conditions are met
if (!input.has('in1', 'in2', validator)) return;
The firing rules should be checked at the beginning of the processing function, before we start actually reading packets from the buffer. At that stage you can simply finish the run with a return.
Processing packets
Once your preconditions have been met, it is time to read packets from the buffers and start doing work with them.
For reading packets there are equivalent get functions to the has functions used above:
- input.get('portname'): read the first packet from the port’s buffer
- input.getData('portname'): read the first data packet, discarding preceding bracket IPs if any
- input.getStream('portname'): read a whole stream of packets from the port’s buffer
For get and getStream you receive whole IP objects. For convenience, getData returns just the data payload of the data packet.
When you have read the packets you want to work with, the next step is to do whatever your component is supposed to do. Do some simple data processing, call some remote API function, or whatever. NoFlo doesn’t really care whether this is done synchronously or asynchronously.
Note: once you read packets from an inport, the component activates. After this it is necessary to finish the process by calling output.done() when you’re done.
Sending packets
While the component is active, it can send packets to any number of outports using the output.send method. This method accepts a map of port names and information packets.
output.send({
out1: new noflo.IP('data', "some data"),
out2: new noflo.IP('data', [1, 2, 3])
});
For data packets you can also just send the data as-is, and NoFlo will wrap it to an information packet.
Once you’ve finished processing, simply call output.done() to deactivate the component. There is also a convenience method that is a combination of send and done. This is useful for simple components:
c.process(function (input, output) {
var data = input.getData('in');
// We just add one to the number we received and send it out
output.sendDone({
out: data + 1
});
});
In normal situations the packets are transmitted immediately. However, when working on individual packets that are part of a stream, NoFlo components keep an output buffer to ensure that packets from the stream are transmitted in the original order.
Component lifecycle
In addition to making input processing easier, the other big aspect of the Process API is to help formalize NoFlo’s component and program lifecycle.
[Diagram: NoFlo program lifecycle]
The component lifecycle is quite similar to the program lifecycle shown above. There are three states:
- Initialized: the component has been instantiated in a NoFlo graph
- Activated: the component has read some data from inport buffers and is processing it
- Deactivated: all processing has finished
Once all components in a NoFlo network have deactivated, the whole program is finished.
Components are only allowed to do work and send packets when they’re activated. They shouldn’t do any work before receiving input packets, and should not send anything after deactivating.
Generator components
Regular NoFlo components only send data associated with input packets they’ve received. One exception is generators, a class of components that can send packets whenever something happens.
Some examples of generators include:
- Network servers that listen to requests
- Components that wait for user input like mouse clicks or text entry
- Timer loops
The same rules of “only send when activated” apply also to generators. However, they can utilize the processing context to self-activate as needed:
exports.getComponent = function () {
var c = new noflo.Component();
c.inPorts.add('start', { datatype: 'bang' });
c.inPorts.add('stop', { datatype: 'bang' });
c.outPorts.add('out', { datatype: 'bang' });
// Generators generally want to send data immediately and
// not buffer
c.autoOrdering = false;
// Helper function for clearing a running timer loop
var cleanup = function () {
// Clear the timer
clearInterval(c.timer.interval);
// Then deactivate the long-running context
c.timer.deactivate();
c.timer = null;
}
// Receive the context together with input and output
c.process(function (input, output, context) {
if (input.hasData('start')) {
// We've received a packet to the "start" port
// Stop the previous interval and deactivate it, if any
if (c.timer) {
cleanup();
}
// Activate the context by reading the packet
input.getData('start');
// Set the activated context to component so it can
// be deactivated from the outside
c.timer = context;
// Start generating packets
c.timer.interval = setInterval(function () {
// Send a packet
output.send({
out: true
});
}, 100);
// Since we keep the generator running we don't
// call done here
}
if (input.hasData('stop')) {
// We've received a packet to the "stop" port
input.getData('stop');
if (!c.timer) {
// No timers running, we can just finish here
output.done();
return;
}
// Stop the interval and deactivate
cleanup();
// Also call done for this one
output.done();
}
});
// We also may need to clear the timer at network shutdown
c.shutdown = function () {
if (c.timer) {
// Stop the interval and deactivate
cleanup();
}
c.emit('end');
c.started = false;
}
}
Time to prepare
NoFlo 0.7 included a preview version of the Process API. However, last week during the 33C3 conference we finished some tricky bits related to process lifecycle and automatic bracket forwarding that make it more useful for real-life NoFlo applications.
These improvements will land in NoFlo 0.8, due out soon.
So, if you’re maintaining a NoFlo application, now is a good time to give the git version a spin and look at porting your components to the new API. Make sure to report any issues you encounter!
We’re currently migrating all the hundred-plus NoFlo open source modules to the latest build and testing process so that they can be easily updated to the new APIs when they land.
Our team maintains Firefox RPMs for Fedora and RHEL and a lot of people have been asking us to provide Firefox for Flatpak as well. I’m finally happy to announce Firefox Developer Edition for Flatpak.

We started with the Developer Edition because that’s something that is not easily available to Fedora users. Providing the standard Firefox wouldn’t bring a lot of benefit right now because it’s available very quickly after upstream releases via Fedora repositories. In the future, we’d like to add releases of the standard Firefox (nightly, stable, perhaps ESR).
Firefox DE for Flatpak is built on our internal build cluster and hosted on mojefedora.cz (mojefedora == myfedora in Czech) on OpenShift. It’s an unofficial build for testing purposes, not provided by Mozilla. We’d like to work with Mozilla, so that it can eventually be adopted by the Mozilla project and you can get Firefox flatpaks directly from the source.
Right now, Firefox DE is not sandboxed; it has full access to the user’s home. In the near future, we’d like to start a devel branch in the flatpak repository where we will ship a sandboxed Firefox and experiment with how well Firefox can handle sandboxing and what needs to be done to assure the expected user experience. A web browser is definitely the #1 candidate among desktop applications for sandboxing. If you’re interested in sandboxing Firefox on Linux via Flatpak, contact us (you’ll find Jan’s email on the website with installation instructions).

Firefox Developer Edition for Flatpak running on Fedora
We’ve tested the FDE flatpak on Fedora 25, openSUSE Tumbleweed, and Ubuntu 16.10. You need flatpak 0.6.13 or newer for the installation commands to work: the repo itself should work with older versions as well, but there was a change in command syntax, and the commands we use don’t work in releases older than 0.6.13. Fedora 25 has the newest release (0.8.0), openSUSE Tumbleweed has a new enough release (0.6.14); just for Ubuntu you’ll need to install the newest flatpak from a PPA.

Firefox Developer Edition for Flatpak running on Ubuntu
GNOME Software in Fedora 25 also supports adding repos via .flatpakrepo files and installing apps via .flatpakref files, but it’s not reliable enough yet, so we only recommend you use the command line instructions. It’s just two commands (you only need the latter one on Fedora 25 with the newest flatpak).
There are also a couple of problems we haven’t quite figured out yet. In openSUSE and Ubuntu, the desktop file database is not refreshed after the installation, so the launcher doesn’t appear right away; you need to log out and log in to refresh it and make the launcher appear. In openSUSE Tumbleweed in KDE Plasma in a VM, I couldn’t start the app, getting “no protocol specified, Error: cannot open display: :99.0”. We’re looking forward to hearing how it works on other distributions.
Although the repo is for testing purposes, we’re committed to updating it regularly until we announce otherwise on the website with the installation instructions. So you don’t have to worry that you’ll end up with a scratch build that will never get updated.
Finally, I’d like to thank Vadim Rutkovsky, who made the initial proof-of-concept Firefox build for Flatpak we built upon, and Jan Hořák, who did most of the work on the current build and repo setup.
January 04, 2017
For every request, IronFunctions would spin up a new container to handle the job, which, depending on the container and task, could add a couple hundred milliseconds of overhead.
So why not reuse the containers if possible? Well, that is exactly what Hot Functions do.
Hot Functions improve IronFunctions throughput by 8x (depending on duration of task).
Hot Functions reside in long-lived containers dedicated to one type of task; incoming workloads are fed into the container’s standard input, and results are read from its standard output. In addition, permanent network connections are reused.
Here is what a hot function looks like. Currently, IronFunctions implements an HTTP-like protocol to operate hot containers, but instead of communicating through a TCP/IP port, it uses standard input/output.
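A minimal shell sketch can illustrate the idea (an illustration only: a real hot container receives full HTTP-formatted requests on stdin, while this sketch reduces each request to one line of input):

```shell
# hot_fn: sketch of a long-lived "hot" worker. It stays resident,
# reads one simplified request per line from stdin, and writes an
# HTTP-style response per request to stdout. Because the loop keeps
# running, there is no per-request container start-up cost.
hot_fn() {
  while read -r body; do
    payload="Hello ${body}"
    printf 'HTTP/1.1 200 OK\r\nContent-Length: %s\r\n\r\n%s\n' "${#payload}" "$payload"
  done
}

# Example: printf 'World\n' | hot_fn
```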
So to test this baby we deployed it on 1 GB Digital Ocean instances (which is not much), and used Honeycomb to track and plot the performance.
Simple function printing "Hello World" called for 10s (MAX CONCURRENCY = 1).
Hot Functions have 162x higher throughput.
Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 1).
Hot Functions have 1.39x higher throughput.
By combining Hot Functions with concurrency we saw even better results:
Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 7)
Hot Functions have 7.84x higher throughput.
So there you have it, pure awesomeness by the Iron.io team in the making.
Also a big thank you to the good people from Honeycomb for their awesome product that allowed us to benchmark and plot (all the screenshots in this article are from Honeycomb). It’s a great and fast new tool for debugging complex systems by combining the speed and simplicity of time series metrics with the raw accuracy and context of log aggregators.
Since it supports answering arbitrary, ad-hoc questions about those systems in real time, it was an awesome, flexible, powerful way for us to test IronFunctions!
January 03, 2017
This post describes the synclient tool, part of the xf86-input-synaptics package. It does not describe the various options; that’s what the synclient(1) and synaptics(4) man pages are for. This post describes what synclient is, where it came from, and how it works on a high level. Think of it as an anti-bus-factor post.
Maintenance status
The most important thing first: synclient is part of the synaptics X.Org driver, which is in maintenance mode and superseded by libinput and the xf86-input-libinput driver. In general, you should not be using synaptics anymore anyway; switch to libinput instead (and report bugs where the behaviour is not correct). It is unlikely that significant additional features will be added to synclient or synaptics, and bugfixes are rare too.
The interface
synclient's interface is extremely simple: it's a list of key/value pairs that would all be set at the same time. For example, the following command sets two options, TapButton1 and TapButton2:
synclient TapButton1=1 TapButton2=2
The commandline interface is effectively a mapping of the various xorg.conf options. As said above, look at the synaptics(4) man page for details on each option.
The -l switch lists the current values in one big list:
$ synclient -l
Parameter settings:
LeftEdge = 1310
RightEdge = 4826
TopEdge = 2220
BottomEdge = 4636
FingerLow = 25
FingerHigh = 30
MaxTapTime = 180
...
History
A decade ago, the X server had no capabilities to change driver settings at runtime. Changing a device’s configuration required rewriting an xorg.conf file and restarting the server. To avoid this, the synaptics X.Org touchpad driver exposed a shared memory (SHM) segment. Anyone with knowledge of the memory layout (an internal struct) and permission to write to that segment could change driver options at runtime. This is how synclient came to be: it was the tool that knew that memory layout. A synclient command would thus set the correct bits in the SHM segment and the driver would use the newly updated options. For obvious reasons, synclient and synaptics had to be the same version to work.
8 or so years ago, the X server got support for input device properties, a generic key/value store attached to each input device. The keys are the properties, identified by an “Atom”. The values are driver-specific. All drivers make use of this now; changing a driver setting at runtime amounts to changing a property that the driver knows of.
synclient was converted to use properties instead of the SHM segment, and eventually the SHM support was removed from both synclient and the driver itself. The backend to synclient is thus identical to the one used by the xinput tool or by tools for other drivers (e.g. the xsetwacom tool). synclient’s killer feature used to be that it was the only tool that knew how to configure the driver; these days it’s merely a commandline-argument-to-property mapping tool. xinput, GNOME, KDE, they all do the same thing in the backend.
How synclient works
The driver has properties of a specific name, format and value range. For example, the "Synaptics Tap Action" property contains 7 8-bit values, each representing a button mapping for a specific tap action. If you change the fifth value of that property, you change the button mapping for a single-finger tap. Another property "Synaptics Off" is a single 8-bit value with an allowed range of 0, 1 or 2. The properties are described in the synaptics(4) man page. There is no functional difference between this synclient command:
synclient TouchpadOff=1
and this xinput command:
xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Off" 1
Both set the same property with the same calls. synclient uses XI 1.x’s XChangeDeviceProperty() and xinput uses XI 2.x’s XIChangeProperty() if available, but that doesn’t really matter. They both fetch the property, overwrite the respective value and send it back to the server.
Pitfalls and quirks
synclient is a simple tool. If multiple touchpads are present, it will simply pick the first one. This is a common issue for users with an i2c touchpad, and it will be even more common once the RMI4/SMBus support is in a released kernel. In both cases, the kernel creates the i2c/SMBus device and an additional PS/2 touchpad device that never sends events. So if synclient picks that device, all the settings are changed on a device that doesn’t actually send events. This depends on the order the devices were added to the X server and can vary between reboots. You can work around it by disabling or ignoring the PS/2 device.
synclient is a one-shot tool; it does not monitor devices. If a device is added at runtime, the user must re-run the command to change its settings. If a device is disabled and re-enabled (VT switch, suspend/resume, ...), the user must run synclient again. This is a major reason we recommend against using synclient: the desktop environment should take care of this. synclient will also conflict with the desktop environment in that it isn’t aware when something else changes things. If synclient runs before the DE’s init scripts (e.g. through xinitrc), its settings may be overwritten by the DE. If it runs later, it overwrites the DE’s settings.
synclient exclusively supports synaptics driver properties. It cannot change any other driver's properties and it cannot change the properties created by the X server on each device. That's another reason we recommend against it, because you have to mix multiple tools to configure all devices instead of using e.g. the xinput tool for all property changes. Or, as above, letting the desktop environment take care of it.
The interface of synclient is IMO not significantly more obvious than setting the input properties directly. One has to look up what TapButton1 does anyway, so looking up how to set the property with the more generic xinput is the same amount of effort. A wrong value won't give the user anything more useful than the equivalent of a "this didn't work".
TL;DR
If you're TL;DR'ing an article labelled "the definitive guide to" you're kinda missing the point...
January 02, 2017
summing up is my recurring series on topics & insights that compose a large part of my thinking and work. please find previous editions here or subscribe below to get them straight in your inbox.
When We Invented the Personal Computer, by Steve Jobs
A few years ago I read a study – I believe it was in Scientific American – about the efficiency of locomotion for various species on the earth. The study determined which species was the most efficient, in terms of getting from point A to point B with the least amount of energy exerted. The condor won. Man made a rather unimpressive showing about 1/3 of the way down the list.
But someone there had the insight to test man riding a bicycle. Man was twice as efficient as the condor! This illustrated man's ability as a tool maker. When man created the bicycle, he created a tool that amplified an inherent ability. That's why I like to compare the personal computer to the bicycle. The personal computer is a 21st century bicycle if you will, because it's a tool that can amplify a certain part of our inherent intelligence.
i just love steve jobs’ idea of comparing computers to a bicycle for the mind. so much actually, that i used it in my talk the lost medium last year. we humans are tool builders and we can fundamentally amplify our human capabilities with tools. tools that take us far beyond our inherent abilities. nevertheless we're only at the early stages of this tool. we've already seen the enormous changes around us, but i think that will be nothing to what's coming in the next hundred years.
Teaching Children Thinking, by Seymour Papert
The phrase “technology and education” usually means inventing new gadgets to teach the same old stuff in a thinly disguised version of the same old way. Moreover, if the gadgets are computers, the same old teaching becomes incredibly more expensive and biased towards its dullest parts, namely the kind of rote learning in which measurable results can be obtained by treating the children like pigeons in a Skinner box.
there is this notion that our problems are easily being solved with more technology. doing that we're throwing technology against a wall to see what sticks rather than asking what the technology could offer and who that could help. papert is talking about education, and even if that is a vital part of our society, his thinking applies to so much more.
The Computer for the 21st Century, by Mark Weiser
The idea of integrating computers seamlessly into the world at large runs counter to a number of present-day trends. "Ubiquitous computing" in this context does not just mean computers that can be carried to the beach, jungle or airport. Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box. By analogy to writing, carrying a super-laptop is like owning just one very important book. Customizing this book, even writing millions of other books, does not begin to capture the real power of literacy.
Furthermore, although ubiquitous computers may employ sound and video in addition to text and graphics, that does not make them "multimedia computers." Today's multimedia machine makes the computer screen into a demanding focus of attention rather than allowing it to fade into the background.
computers should fit the human environment, instead of forcing humans to enter theirs. especially mobile computing is a major paradigm shift, but right now we're becoming slaves of our own devices. weiser puts out some very interesting ideas on how computers could integrate in our environment and enhance our abilities there.
I said that I would post regular updates on what is happening in GTK+ 4 land. This was a while ago, so an update is overdue.
So, what’s new?
Cleanup
Deprecation cleanup has continued, and is mostly done at this point. We have the beginning of a porting guide that mentions some of the required changes for early adopters who want to stick their toes into the GTK+ 4 waters. Sadly, I haven’t gotten the GTK+ 4 docs up on the website yet, so no link…
Among the things that have been dropped as part of our ongoing cleanup is the pixel cache, which should no longer be needed. This is nice, since the pixel cache was causing problems, in particular in connection with transparency and component alpha (in font rendering).
Not really a cleanup, but we also got rid of the split into multiple shared objects (libgtk, libgdk, libgsk). Now, we just install a single libgtk, which also provides the gdk and gsk APIs. This has some small performance benefits, but mainly, it makes it easier for us to have private APIs that cross the gtk/gdk boundary.
Widget APIs
Some of the core APIs that are important when you are creating your own widgets have been changed around a bit:
- The five different virtual functions used for size requisition have been replaced by a single new vfunc, measure(). This uses the same approach we already use for gadgets, where it has worked well.
- The draw() virtual function that lets widgets render themselves onto a cairo surface has been replaced by the new snapshot() vfunc, which lets widgets create render nodes. This is essentially the change from direct to indirect rendering. Most widgets and gadgets have been ported over to this new way of doing things.
These changes are only important to you if you create your own widgets.
Window APIs
GdkWindow has gained a few new constructors to replace the old libX11-style gdk_window_new. Their names should indicate what they are good for:
- gdk_window_new_toplevel
- gdk_window_new_popup
- gdk_window_new_temp
- gdk_window_new_child
- gdk_window_new_input
- gdk_wayland_window_new_subsurface
- gdk_x11_window_foreign_new_for_display
The last two are worth mentioning as examples where we move backend-specific functionality to backend APIs.
In the medium term, we are moving towards a world with only toplevel windows. As a first step towards this, we no longer support native child windows, and gdk_window_reparent() is gone. This allowed us to considerably simplify the GdkWindow code.
Renderers
When we initially merged GSK, it had a GL renderer and a software fallback (using cairo). Since then, Benjamin has created a Vulkan renderer. The renderer can be selected using the GSK_RENDERER environment variable.
So, for example, this is how to run gtk4-demo with the cairo renderer and the X11 backend:
GSK_RENDERER=cairo GDK_BACKEND=x11 gtk4-demo
After the GSK merge, we struggled a bit to come up with a working approach to converting all our widget and CSS rendering to render nodes. With the introduction of the snapshot() vfunc, we’ve been able to make progress on this front. As part of this effort, Benjamin changed the GSK API around a bit. There are now a bunch of special-purpose render node subclasses that let us effectively translate the CSS rendering, e.g.
- gsk_linear_gradient_node_new
- gsk_texture_node_new
- gsk_color_node_new
- gsk_border_node_new
- gsk_transform_node_new
…and so on. More node types will be created as we discover the need for them.
New fun
As an example of new functionality that would be very hard to support adequately in GTK+ 3, Benjamin recently added gsk_color_matrix_node_new and used it to implement the CSS filter spec, which is good for a few screenshots:
Since this is all done on the GPU (unless you are using the software renderer), applying one of these filters does not affect performance much, as can be seen in this screencast of the fishbox demo:
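To make the color-matrix idea concrete: a node like this multiplies every pixel's RGBA vector by a 4×4 matrix (GSK's node additionally takes an offset vector). A minimal Python sketch of the arithmetic, not GSK API; the helper name is mine, and the grayscale matrix uses the standard Rec. 709 luminance weights as in the CSS grayscale() filter:

```python
def apply_color_matrix(matrix, pixel):
    """Multiply an RGBA pixel (4-vector) by a 4x4 color matrix."""
    return tuple(
        sum(matrix[row][col] * pixel[col] for col in range(4))
        for row in range(4)
    )

# A full-grayscale matrix: each color channel becomes the same
# weighted sum of R, G and B; alpha is passed through unchanged.
GRAYSCALE = [
    [0.2126, 0.7152, 0.0722, 0.0],
    [0.2126, 0.7152, 0.0722, 0.0],
    [0.2126, 0.7152, 0.0722, 0.0],
    [0.0,    0.0,    0.0,    1.0],
]

red = (1.0, 0.0, 0.0, 1.0)
print(apply_color_matrix(GRAYSCALE, red))  # all color channels collapse to 0.2126
```

Doing this per pixel on the CPU is exactly what you want to avoid, which is why having the GPU apply the matrix in the render node makes these filters essentially free.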
Expect to see more uses of these new capabilities in GTK+ as things progress. Fun times ahead!
January 01, 2017
That annual list of awkward incomplete pop music preferences: Stuff I listened to a lot in the last 12 months (and which did not necessarily get released in 2016).
- Supersilent left me without words.
- In Electronica, Digitalism were beautifully vibrant and energetic (I cannot say if it was a DJ set or a concert – somewhere in between). Also enjoyed numerous challenging Neoru DJ sets (one flyer really called that “post-genre”), Poliça, Zola Jesus, and Bulp.
- In the guitar section, I was blessed to see Ignite again after all those years (still the band I’ve seen the most) and (classic) Motorpsycho.
- In Hiphop, it was a pleasure to see and talk to Little Simz and Angel Haze (see last year). Die Antwoord were entertaining as expected.
- In Electronica, Simple Forms by The Naked and Famous is probably my favorite. School Of Seven Bells had a post-mortem release, and (thanks to Silvana Imam) I got aware of Beatrice Eli.
- In Hiphop and those things: Tommy Genesis (who I unfortunately missed live), Mala Rodríguez, Elliphant, Jamila Woods.
Thanks to my peeps. You know who you are. :)
As Berlin’s fireworks roar outside (for hours now!), I want to write my typical end of year post with some thoughts of 2016 and what’s coming.
We are leaving behind a year that many (most?) of us will not miss. I hope 2017 will be a better one, but the global events that took place throughout 2016 do not make me very confident about that. With unusual political events affecting the lives of millions of people, an apparent acceptance of indecency, bigotry, and hate, together with the ongoing humanitarian crisis and senseless violence, this is surely not the world I had pictured for my children.
Still, I want to be positive and hope that by seeing what’s happening in some places, people can make good choices in 2017 (I am talking in broad terms but we got some important elections coming soon in Europe).
On a more positive note (personally, 2016 was actually a very good year), this past year I also started working for Endless and moved to Berlin, Germany. I love my job and I have been working very hard to do my share of Endless’ mission. The current direction of the world only validates our mission more, and motivates me to work harder. It’s difficult, however, to make time for everything, and again my pet projects took the hit, so no big updates on that front this year.
The other big news is that Helena and I will be parents again soon! The joy of raising a child is something so special that it is hard for me to put into words so I can just say that we are of course extremely happy and curious (and scared too) about how life will be with two kids.
Happy 2017 everyone!
December 30, 2016
Hello again, and I hope you’re having a pleasant end of the year (if you are, maybe don’t check the news until next year).
I’d written about synchronised playback with GStreamer a little while ago, and work on that has been continuing apace. Since I last wrote about it, a bunch of work has gone in:
- Landed support for sending a playlist to clients (instead of a single URI)
- Added the ability to start/stop playback
- The API has been cleaned up considerably to allow us to consider including this upstream
- The control protocol implementation was made an interface, so you don’t have to use the built-in TCP server (different use-cases might want different transports)
- Made a bunch of robustness fixes and documentation improvements
- Introduced API for clients to send the server information about themselves
- Also added API for the server to send video transformations for specific clients to apply before rendering
While the other bits are exciting in their own right, in this post I’m going to talk about the last two items.
Video walls
For those of you who aren’t familiar with the term, a video wall is just an array of displays stacked to make a larger display. These are often used in public installations.
One way to set up a video wall is to have each display connected to a small computer (such as the Raspberry Pi), and have them play a part of the entire video, cropped and scaled for the display that is connected. This might look something like:
A 4×4 video wall

The tricky part, of course, is synchronisation — which is where gst-sync-server comes in. Since we’re able to play a given stream in sync across devices on a network, the only missing piece was the ability to distribute a set of per-client transformations so that clients could apply those, and that is now done.
In order to keep things clean from an API perspective, I took the following approach:
- Clients now have the ability to send a client ID and a configuration (which is just a dictionary) when they first connect to the server
- The server API emits a signal with the client ID and configuration, which allows you to know when a client connects, what kind of display it’s running, and where it is positioned
- The server now has additional fields to send a map of client ID to a set of video transformations
This allows us to do fancy things like having each client manage its own information with the server dynamically adapting the set of transformations based on what is connected. Of course, the simpler case of having a static configuration on the server also works.
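The per-client transformation for a wall like the one above is, at its core, just a crop rectangle. A sketch of the arithmetic for an even grid, ignoring bezel compensation — the function and the config layout are illustrative, not gst-sync-server API:

```python
def tile_crop(video_w, video_h, cols, rows, col, row):
    """Crop rectangle (x, y, w, h) for the tile at (col, row) of a cols x rows wall."""
    tile_w = video_w // cols
    tile_h = video_h // rows
    return (col * tile_w, row * tile_h, tile_w, tile_h)

# A static server-side map of client ID to transformation, in the spirit of
# the configuration described above (names are hypothetical).
config = {
    "client-%d-%d" % (col, row): {"crop": tile_crop(1920, 800, 4, 4, col, row)}
    for row in range(4)
    for col in range(4)
}
print(config["client-0-0"]["crop"])  # (0, 0, 480, 200)
```

Bezel compensation would extend this by treating the bezels as extra (invisible) pixels between tiles and cropping accordingly.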
Demo
Since seeing is believing, here’s a demo of the synchronised playback in action:
The setup is my laptop, which has an Intel GPU, and my desktop, which has an NVidia GPU. These are connected to two monitors (thanks go out to my good friends from Uncommon for lending me their thin-bezelled displays).
The video resolution is 1920×800, and I’ve adjusted the crop parameters to account for the bezels, so the video actually does look continuous. I’ve uploaded the text configuration if you’re curious about what that looks like.
As I mention in the video, the synchronisation is not as tight as I would like it to be. This is most likely because of the differing device configurations. I’ve been working with Nicolas to try to address this shortcoming by using some timing extensions that the Wayland protocol allows for. More news on this as it breaks.
More generally, I’ve done some work to quantify the degree of sync, but I’m going to leave that for another day.
p.s. the reason I used kmssink in the demo was that it was the quickest way I know of to get a full-screen video going — I’m happy to hear about alternatives, though
Future work
Make it real
My demo was implemented quite quickly by allowing the example server code to load and serve up a static configuration. What I would like is to have a proper working application that people can easily package and deploy on the kinds of embedded systems used in real video walls. If you’re interested in taking this up, I’d be happy to help out. Bonus points if we can dynamically calculate transformations based on client configuration (position, display size, bezel size, etc.)
Hardware acceleration
One thing that’s bothering me is that the video transformations are applied in software using GStreamer elements. This works fine(ish) for the hardware I’m developing on, but in real life, we would want to use OpenGL(ES) transformations, or platform specific elements to have hardware-accelerated transformations. My initial thoughts are for this to be either API on playbin or a GstBin that takes a set of transformations as parameters and internally sets up the best method to do this based on whatever sink is available downstream (some sinks provide cropping and other transformations).
Why not audio?
I’ve only written about video transformations here, but we can do the same with audio transformations too. For example, multi-room audio systems allow you to configure the locations of wireless speakers — so you can set which one’s on the left, and which on the right — and the speaker will automatically play the appropriate channel. Implementing this should be quite easy with the infrastructure that’s currently in place.
Merry Happy *.*
I hope you enjoyed reading that — I’ve had great responses from a lot of people about how they might be able to use this work. If there’s something you’d like to see, leave a comment or file an issue.
Happy end of the year, and all the best for 2017!
December 29, 2016
I’ll say it: it’s been rough since the election. Like so many other people, I was thrown into a state of reflection about my country, the world and my role in it. I’ve struggled with understanding how I can live in a world where it seems facts don’t matter. It’s been reassuring to see so many of my friends, family and colleagues (many of them lawyers!) become invigorated to work in the public good. This has all left me with some real self-reflection. I’ve been passionate about software freedom for a long time, and while I think it has really baffled many of my loved ones, I’ve been advocating for the public good in that context somewhat doggedly. But is this issue worth so much of my time? Is it the most impactful way I can spend my time?
I think I was on some level anticipating something like this. I started down this road in my OSCON EU keynote entitled “Is Software Freedom A Social Justice Issue,” in which I talked about software freedom ideology and its place relative to social justice issues.
This time, like when I was doing the soul searching that led to the OSCON EU talk, I kept coming back to thinking about my heart and the proprietary software I rely on for my life. But what’s so powerful about it is that my heart is truly a metaphor for all of the software we rely on. The pulse of our society is intertwined with our software, and much of it is opaque to scrutiny and wholly under the control of single companies. We do not have ultimate control of the software that we need the most.
After all of this deep reflection, the values and the mission of software freedom have never seemed more important. Specifically, there are a few core pieces of Conservancy’s mission and activities that I think are particularly relevant in this era of Trump.
Defending the integrity of our core infrastructure
One of the things I’ve focused on in my advocacy generally is how vulnerable our core infrastructure is. This is where we need software freedom the most. We need to make sure that we are doing our best to balance corporate and public interests, and we need to be able to fix problems when they arise unexpectedly in our key systems. If we’ve learned anything from Volkswagen last year, it’s that companies may knowingly do the wrong thing, covering it up while promoting a corporate culture that makes it extremely unlikely that employees will come forward. We need to have confidence in our software, be able to audit it, and be able to repair it when we detect vulnerabilities or unwanted functionality like surveillance.
Software freedom, and copyleft in particular, helps us keep the balance. Conservancy is dedicated to promoting software freedom, defending our licenses and supporting many of our member projects that are essential pieces of our infrastructure.
Transparency
It may feel like we’ve entered into a world where facts don’t matter, but we at Conservancy disagree. Conservancy is committed to transparency, both in the development of software that can be trusted, and in our own operations. We’re committed to helping others understand complex topics that other people gloss over, as well as shedding light on our own financial situation and activities. (This end-of-year giving season, I recommend you carefully read the Form 990s of all of the organizations you consider donating to, including ours – check out how much money the top people make and think about what the organizations accomplish with the resources they have available to them.)
Diversity
While hate and exclusion are on the rise, it’s more important than ever to make sure that our own communities do the right thing. I’m proud to have Conservancy host and to also personally help run Outreachy, making sure that many of the groups that are now feeling so marginalized have opportunities to succeed. Additionally, software freedom democratizes access to technology, which can (in time) empower disenfranchised communities and close digital divides.
Because together, we get it
Perhaps most importantly, everyone is vulnerable to unethical software, but most people don’t understand it at all. You need a certain level of expertise just to understand what software freedom is, let alone why it’s so important. There are many things we can and should work on, but if we don’t keep our focus on software freedom, the long-term consequences will be dire. Software freedom is a long-term cause. We must work towards sound infrastructure and look after the ethical underpinnings of the technology we rely on, because if we don’t, who will?
We can’t just be reactive. We have to build the better world.
Please join me in doubling your efforts to promote software freedom. If you can, help Conservancy continue its important mission and become a Supporter now.
My background with Linux started with Unix systems. I was a Unix systems administrator for several years before I introduced Linux at work. When I managed systems for a living, I learned to do a lot of cool things in shell scripts. And these days, sometimes I like to craft something new in a Bash script. Enjoy!
Reading RSS with Bash
A summary of an article I wrote for Linux Journal. The idea originated with an update to the FreeDOS website. Like many other project websites, we fetch our news from a separate news system, then parse it to display on the front page. Today, we do that every time someone loads the page. Effective, but inefficient. As I update the FreeDOS website, I wanted to automate the news feed to generate a static news "include" file, so I decided to do it in Bash.

Web comics using a Bash script
Another article I wrote for Linux Journal. I follow several web comics. Every morning, I used to open my browser and check out each comic's web site to read that day's comic. That method worked well when I read only a few web comics, but it became a pain to stay current when I followed more than about ten comics. I figured there had to be a better way, a simpler way for me to read all of my web comics at once. So I wrote a Bash script that automatically collects my web comics and puts them on a single page on a personal web server in my home. Now, I just open my web browser to my private web server, and read all my comics at once.

Solitaire in a Bash script
I wanted to write my own version of Klondike Solitaire as a Bash script. Sure, I could grab another shell script implementation of Solitaire called Shellitaire, but I liked the challenge of writing my own. And I did. Or rather, I mostly did. I have run out of free time to work on it. So I'm sharing it here in case others want to build on it. I have implemented most of the game, except for the card selection.

March Madness in a Bash script
I don't really follow basketball, but I like to engage with others at my office who do. But I just don't know enough about the teams to make an informed decision on my own March Madness bracket. So a few years ago, I found another way: I wrote a little Bash script to do it for me. I wrote a similar version of the article for Linux Journal, and later compared the results. However, I have since discovered a major flaw in this Bash script which I've now fixed. Look for that article coming soon in Linux Journal.
Commercial open-source software is usually based around some kind of asymmetry: the owner possesses something that you as a user do not, allowing them to make money off of it.
This asymmetry can take on a number of forms. One popular option is to have dual licensing: the product is open-source (usually GPL), but if you want to deviate from that, there’s the option to buy a commercial license. These projects are recognizable by the fact that they generally require you to sign a Contributor License Agreement (CLA) in which you transfer all your rights to the code over to the project owners. A very bad deal for you as a contributor (you work but get nothing in return) so I recommend against participating in those projects. But that’s a subject for a different day.
Another option for creating asymmetry is open core: make a limited version open-source and sell a full-featured version, typically named “the enterprise version”. Where you draw the line between the two versions determines how useful the project is in its open-source form versus how much potential there is to sell it. Most of the time this tends towards a completely useless open-source version, but there are exceptions (e.g. Gitlab).
These models are so prevalent that I was pleasantly surprised to see how Sentry does things: with as little asymmetry as possible. The entire product is open-source and under a very liberal license. The hosted version (the SaaS product that they sell) is claimed to run on exactly the same source code. The value created, and what you’ll want to pay for, lies in the belief that a) you don’t want to spend time running it yourself and b) they’ll do a better job at it than you would.
This model certainly won’t work in all contexts and it probably won’t lead to a billion dollar exit, but that doesn’t always have to be the goal.
So kudos to Sentry, they’re certainly trying to make money in the nicest way possible, without giving contributors and hobbyists a bad deal. I hope they do well.
More info on their open-source model can be read on their blog: Building an Open Source Service.
I've meant to do this release for quite a while now and last week I finally had some time to package everything and update the dependencies. scikit-survival contains the majority of code I developed during my Ph.D.
About Survival Analysis
Survival analysis – also referred to as reliability analysis in engineering – is a type of problem in statistics where the objective is to establish a connection between a set of measurements (often called features or covariates) and the time to an event. The name survival analysis originates from clinical research: in many clinical studies, one is interested in predicting the time to death, i.e., survival. Broadly speaking, survival analysis is a type of regression problem (one wants to predict a continuous value), but with a twist. Consider a clinical study which investigates coronary heart disease and has been carried out over a 1 year period, as in the figure below.

Patient A was lost to follow-up after three months with no recorded cardiovascular event, patient B experienced an event four and a half months after enrollment, patient D withdrew from the study two months after enrollment, and patient E did not experience any event before the study ended. Consequently, the exact time of a cardiovascular event could only be recorded for patients B and C; their records are uncensored. For the remaining patients it is unknown whether they did or did not experience an event after termination of the study. The only valid information that is available for patients A, D, and E is that they were event-free up to their last follow-up. Therefore, their records are censored.
Formally, each patient record consists of a set of covariates $x \in \mathbb{R}^d$, and the time $t > 0$ when an event occurred or the time $c > 0$ of censoring. Since censoring and experiencing an event are mutually exclusive, it is common to define an event indicator $\delta \in \{0, 1\}$ and the observable survival time $y > 0$. The observable time $y$ of a right censored sample is defined as
\[ y = \min(t, c) =
\begin{cases}
t & \text{if } \delta = 1 , \\
c & \text{if } \delta = 0 ,
\end{cases}
\]
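In code, the observable data can be derived from the (hypothetical) true event and censoring times exactly as the definition says; a toy NumPy sketch (scikit-survival's actual input conventions may differ):

```python
import numpy as np

# Hypothetical event times t and censoring times c for three patients.
t = np.array([5.0, 2.0, 9.0])   # time the event occurred
c = np.array([6.0, 1.0, 9.5])   # time of censoring

# delta = 1 if the event was observed before censoring, 0 otherwise.
delta = t <= c
# The observable survival time is whichever came first: y = min(t, c).
y = np.minimum(t, c)

print(delta)  # [ True False  True]
print(y)      # [5. 1. 9.]
```

The second patient is right censored: we only know they were event-free up to time 1.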
What is scikit-survival?
Recently, many methods from machine learning have been adapted to this kind of problem: random forests, gradient boosting, and support vector machines, many of which are only available for R, but not Python. Some of the traditional models are part of lifelines or statsmodels, but none of those libraries plays nice with scikit-learn, which is the quasi-standard machine learning framework for Python.
This is exactly where scikit-survival comes in. Models implemented in scikit-survival follow the scikit-learn interfaces. Thus, it is possible to use PCA from scikit-learn for dimensionality reduction and feed the low-dimensional representation to a survival model from scikit-survival, or cross-validate a survival model using the classes from scikit-learn. You can see an example of the latter in this notebook.
Download and Install
The source code is available at GitHub and can be installed via Anaconda (currently only for Linux) or pip.
conda install -c sebp scikit-survival
pip install scikit-survival
The API documentation is available here and scikit-survival ships with a couple of sample datasets from the medical domain to get you started.
Hi everyone!
How are your last days of 2016 going so far? It’s been a strange year, hasn’t it? Well, let’s not digress, and focus on ZeMarmot, shall we? First, be aware that our dear director, Aryeom Han, is getting a lot better. She was also really happy to get a few “get well” messages and says thanks. Her hand still aches sometimes, in particular during straining or long activities, but on the whole, she says she can draw fine now.
A reminder about the project
I will discuss below what was done in the last months, but first — because it is customary to do so at the end of the year — let me remind you that ZeMarmot is a project relying on funding from willing individuals and companies, with 2 sides: art and software.
I am a GIMP developer, the second biggest contributor in terms of number of commits over the last 4 years, and I also develop a plugin for digital 2D animation with GIMP, which Aryeom is using on ZeMarmot. I want to get my plugin to a releasable state by GIMP 2.10.
Aryeom is using the software to fully animate, draw and paint a movie, based on an original story I wrote a few years ago, about a marmot who travels the world for reasons you will learn when the film is released.
Oh, and the movie will be Creative Commons BY-SA, of course!
Up to now, our initial crowdfunding (~14,000 €) has allowed us to pay several months of salary to Aryeom. I have chosen not to earn anything for the time being (not because I don’t like being paid, but because we cannot afford it with the current funding). Some of the money remains, but it is kept to pay the musicians.
Now we mostly rely on monthly crowdfunding through the Patreon (USD) and Tipeee (EUR) platforms. But all combined, that’s about 180 € a month, which amounts to barely more than a day of salary (and with non-wage labour costs, not all of it goes to Aryeom). One day per month to make a movie — that’s far from enough, right?
My dream? I wish we could some day consider ourselves a real studio, with many paid artists producing cool Libre Art movies that go to the cinema (yes, in my crazy dream, Creative Commons BY-SA films are on the big screen!), and developers paid to improve Free Software so that our media-making ecosystem gets even better, for everybody to use!
But right now, that’s no more than an experiment mostly done voluntarily.
Do you like my dream? Do you want to help us make it real? You can, by helping the project financially! Whether it’s a symbolic coin or a bigger donation, every push actually helps us make things happen!

Click here to fund ZeMarmot in USD on Patreon »
Click here to fund ZeMarmot in EUR on Tipeee »
Not sure yet? Feel free to read more below and to pitch in at any time later on!
Note that not only the money but also the number of supporters is of great help, since it shows support to bigger funders; and for us, that’s good for morale too! A good monthly crowdfunding can also help us find producers without having to abandon any of the social and idealistic aspects of the project (we have already been contacted by a production company that was interested in the film after the crowdfunding, but we refuse to compromise too much on our ideals).
The animation
We illustrated Aryeom’s work with 2 videos presenting extracts of her work in progress. In this first video, she shows different steps in animating a few cuts of the main character:
In this second video, we examine some cuts of another character, the Golden Eagle, main predator of the marmot:
There is a lot that can be said about these few minutes showing the work of an animator. Many pages of books on the art of animating life could be filled with such examples! We will probably detail these steps in longer blog posts, but I will explain the basics here.
Animating = giving life
Aryeom says it in the first video, and you can see it in several examples in both videos. When your character moves from A to B, you are not just “moving” it. You have to give the impression that the character is acting on its own, that it is alive, inhabited — in other words: animated.
It is no surprise that one of the most famous books on animation is called “The Illusion of Life” (by Frank Thomas and Ollie Johnston), which is also Aryeom’s bedside book. Going this way has a lot of ramifications for the animator’s job.
Believable, not realistic
Before we continue, I have to make sure I am understood. Even though realistic animation is also a thing (Disney comes to mind), making a good animation is not necessarily about making it “realistic”, but rather about making it “believable”.
It is very common to exaggerate some movements for various reasons (often because it is funnier, but also because exaggeration may sometimes look even more believable than the realistic version!), or the opposite (bypassing anatomically correct movements). There are no bad reasons, only choices to achieve what you want.
Now that this is clear, let’s continue.
You can’t just “move an arm”
The classical example given to beginners is often: “lift your right arm up”. That’s it? Did you only move your arm while the rest of your body stayed unchanged? Of course not. To stay in balance, your body shifted to the left as a counterweight; the right shoulder lifted whereas the left shoulder lowered; and so on.
A lot of things change in your body with this simple action. Even your feet and legs may move to compensate for the shift of the center of gravity. As a consequence, you don’t “move your arm”, you “move your whole body” (into a configuration where your arm is up).
This is one of the first reasons why, to move a single part of a body, you cannot reuse previous drawings and change just that part. No, you will properly redraw the whole body, because if you are to fake life, you may as well do it well.
Note: when you say “animation” to computer people, their brains usually immediately wire to “interpolation”, the mathematics of computing (among other things) intermediate positions. Because of what I said above, this mathematical technique is in reality barely used in traditional (even digital) animation. It is used a lot more in vector and 3D animation, but even in those fields its role should definitely be minimized compared to the animator’s work. In vector/3D, I would say that interpolation merely replaced the inbetweener role (a kind of assistant who draws the non-keyframe images) from the traditional animation world.
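For reference, the interpolation computer people have in mind is simple math: given two keyframe poses, an inbetween at parameter u blends them coordinate by coordinate. A toy sketch with my own helper names, including an optional ease-in/ease-out curve, since purely linear speed is rarely what an animator wants:

```python
def lerp(a, b, u):
    """Linear interpolation between scalars a and b at parameter u in [0, 1]."""
    return a + (b - a) * u

def smoothstep(u):
    """Ease-in/ease-out curve: accelerates away from 0, decelerates into 1."""
    return u * u * (3.0 - 2.0 * u)

def inbetween(pose_a, pose_b, u, ease=False):
    """Blend two keyframe poses (lists of coordinates) at parameter u."""
    if ease:
        u = smoothstep(u)
    return [lerp(p, q, u) for p, q in zip(pose_a, pose_b)]

print(inbetween([0.0, 0.0], [10.0, 4.0], 0.5))  # [5.0, 2.0]
print(smoothstep(0.25))                         # 0.15625, slower near the ends
```

This is all the machine contributes; deciding where the keyframes go, and how motion should really accelerate and settle, remains the animator's work.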
Timing, silence and acceleration
You often hear it from actors, poets, writers, singers — anyone who gives some kind of life: silence is as important as noise to their art. Well, I would also add acceleration and its symmetric counterpart, deceleration.
You can see this well in the first example of video 1 (at 0’41). Aryeom was unhappy with her running marmot, whose speed was nearly linear. Marmot arrived too fast at the flower — he slowed down, but barely. In her final version, Marmot arrives much faster, with a much more visible slowdown, making the movement more “believable” (back to basics!).
The eagle flight in video 2 (at 1’09) is another good example of difficult timing, as Aryeom went through 2 stages before finding the right movements. With the wrong timing, her flying eagle feels heavy, as if it has difficulty lifting itself into the air (what she called her “sick” eagle in the video); then she got the opposite, an eagle she felt was more sparrow-like, too light and easily lifted. She was quite happy with the last version (obtained after 8 attempts), though, and in particular with the very last bit of the cut, when the eagle switches into glider mode. Can you spot it? This is the kind of difference that lasts just a few hundredths of a second, barely noticed, yet on which an animator can spend a significant amount of time.
Living still images (aka “line boil”)
A common and interesting effect you find in a lot of animation is the shaking still image. You can see it in the second video (at 0’33), in the first cut showing the proud eagle still on his mountain. Sometimes you want to show a non-moving situation, but just holding a single still image feels too weird, because in real life there is no perfect stillness. Even if you make every effort to stay still for a few seconds, you will move imperceptibly, right? So how do you reproduce this attempt to stay perfectly still while it remains impossible? Commonly, animators will redraw the same image several times, because just as you cannot stay still, you cannot draw two perfectly identical images either (though you can get very close by trying hard), and then loop these drawings.
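The trick described above amounts to cycling through a handful of hand-redrawn copies of the same still. A tiny sketch (the file names are hypothetical):

```python
from itertools import cycle, islice

# Three hand-redrawn versions of the "same" still image.
boil_frames = ["eagle_still_a.png", "eagle_still_b.png", "eagle_still_c.png"]

def line_boil(frames, n_frames):
    """Loop the redrawn stills to fake living stillness for n_frames frames."""
    return list(islice(cycle(frames), n_frames))
```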
You usually don’t do this for everything. Elements of the background, typically, are much more easily accepted as still. But it is common for a living character, or sometimes to make the main elements you want to highlight stand out from the background.
Avoiding cycles
Now, loops are very common in animation. But the higher the quality you aim for, the fewer loops you use. Just as stillness does not exist in life, you never repeat exactly the same movement twice. So even though loops seem to be the first thing many animation teachers cover (the famous “walk cycles”), you don’t actually use them in your most beautiful animations. When your main character walks, you will likely re-animate every step.
Of course, it is up to you to decide where to stop. Maybe for that flock of birds far away in the background, just looping (and even copy-pasting the birds to multiply them!) may be enough. This is all a matter of taste, time, and the money you are ready to spend on animator time, obviously.
Camera work
This part has not really started yet, even though it has already been planned (since the storyboard step). But since Aryeom has started on it (first video at 1’06), let me give some more information.
Panning and tilting
In 2D animation, where the movement is by essence two-dimensional as well, these refer respectively to a horizontal and a vertical camera movement. Why do I need to say “in 2D animation”? Because in traditional cinema these would rather correspond to a tracking shot done on rails, whereas panning and tilting refer to angle movements of a static camera. Different definitions for different contexts. Note that even though 3D animation could use either convention, it mostly kept the animation vocabulary.
This gives you a good hint at how characters and backgrounds are managed separately. If you have a character walking, you will usually create a single image of the background, much bigger than the screen size, and your camera will move over it along with the character layers. With fully digital animation, this usually means working on image files much larger than the expected display size; in traditional, physically drawn animation, it means using very large sheets of paper (often even sticking sheets together). As an example, at a Ghibli exhibition, the background for a flying cut of “Kiki’s Delivery Service” was on display, and it took up a full wall of a very large room.
Animation is a lot of drawing
I will conclude the section on animation by saying: that’s a bloody lot of drawing!
As you can see, Aryeom spends so much time redrawing the same cuts to get the perfect movement that she sometimes goes a little crazy and thinks she is just drawing the wrong animal. The story about the pigeon is a true story, and I am the one who told her to add it to the video because it was so funny. One day, she came to me to show me a cut she had been working on for days, and asked: “doesn’t it look like a pigeon?”
Had I not stopped her, she was ready to start over.
This is an art where you draw again even to show stillness, and where you forbid yourself from using too many shortcuts like loops. So what do you expect: you probably have to be a little crazy from the start, no? 
There are actually several “schools”, and some of them go for simplicity, shortcuts and reuse. Japan is well known for Studio Ghibli, which goes the hard way as we do, but this is quite a contradiction within the country’s industry. The whole rest of Japan’s animation industry is based on animating as little as possible. Haven’t they proved so many times that it is possible to show a single still image for 30 seconds, add sounds and voices, and call it animation?
Sometimes it is just a choice or a focus. Some animated films focus on design rather than believable movement, or on the scenario rather than wonderful images. For instance, I don’t think you can say that The Simpsons has wonderful graphic appeal or realistic animation (they even make regular meta-jokes inside episodes about the quality of their animation!), but it has the most fantastic scripts, and that is what makes its success.
So in the end, there is no right choice. Everyone should just go whichever way they wish for a given project.
And this is the way we are going for ZeMarmot!
Music
Just a very short note on music. We have started working with the musicians, remotely and at a physical meeting on December 1st. We have a few extracts of “first ideas”, but they wouldn’t do justice to the quality of the work.
I think this will have to wait for much later.
Software
I went on so long about animation that I hope I have not lost half the readers already! If you are still reading, here is what I worked on these last months.
GIMP
I am trying to do my share on GIMP: to improve it globally, to speed up the release of 2.10, and because I love GIMP. I count 259 commits authored in 2016 (60 in the last 3 months), plus 48 where I was committer only (i.e. not the author, but the main reviewer of a patch which I pushed into our codebase). I commented on 352 bug reports in 2016, making it a habit to review patches when possible.
I have a lot of projects for GIMP. Some of the grander ones include a plugin management system (to install, uninstall and update plugins easily from within GIMP, with a backend side for plugin developers to propose extensions), but also a lot of ideas about the evolution of the GUI (these should be discussed topic by topic in later blog posts).
Also, I have started to experiment with Flatpak so that we can provide an official GNU/Linux build of GIMP. For years, our official stance has been to provide a Windows installer, an OSX package, and for GNU/Linux… yeah, grab the source and compile, or use the outdated version from your package manager! I think this situation can be considerably improved with Flatpak and the similar technologies born in recent years.
Animation in GIMP
As explained already, I took the path of writing it as a plugin rather than a core feature. GIMP is only missing a single feature which would make a plugin nearly as powerful: bi-directional notification (currently plugins don’t get notified when pixels are updated, layers are renamed, moved or deleted, images closed…). That’s actually something I’d like to work on (I already have a stash somewhere with WIP code for this).
The animation plugin currently has 2 views:
Storyboard view
GIMP’s animation plug-in: storyboard view
This view corresponds to the very basic animation logic of 1 layer = 1 frame, which is very common among people making animated GIFs (or MNG/WebP now), except with a nice UI to set each frame’s duration (instead of tagging the layer names, a very nasty user experience and a hidden feature found only on some forums or old tutorials), do basic compositing, and even add comments on vignettes if need be. All this with a nice real-time preview!
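Under the hood, this 1 layer = 1 frame logic with per-frame durations boils down to a playback schedule. A plain-Python sketch (not the plug-in’s actual API; the layer names and durations are invented):

```python
# Each layer becomes one frame; the UI attaches a duration in milliseconds
# (replacing the old hack of tagging durations in the layer names).
storyboard = [
    ("sketch-01", 200),
    ("sketch-02", 100),
    ("sketch-03", 300),
]

def schedule(frames):
    """Compute each frame's start time and the total clip length in ms."""
    starts, clock = [], 0
    for name, duration_ms in frames:
        starts.append((name, clock))
        clock += duration_ms
    return starts, clock
```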
Cel-Animation view
GIMP’s Animation plug-in: cel-animation view
This is the more powerful view, where you compose a frame from several images, often at least a background and a character. In the above example, the cut is made from 3 elements composed together: the background, the eagle and the marmot.
You may be more familiar with the “timeline” style of view, which is basically the same thing except that frames are displayed as horizontal tracks. I tried this too, but quickly shifted to this much more traditional view from the animation world, usually called an X-sheet (eXposure sheet). I found it much more practical: it allows easier commenting and scrolling, and is especially better organized. There is a lot you don’t see in this screenshot, but this view really targets a professional, organized workflow. In particular, with layers properly named, you can create animation loops and line tests of dozens of images, with various timings, in a few clicks.
I am also working on keyframing for effects (using animated GEGL operations) and camera movements.
Well, a lot is done there, but there is definitely a lot more I am planning to do, which takes time. I will post more detailed blog posts and push the code to a branch very soon (probably before Libre Graphics Meeting this year).
That’s all, folks!
And so that’s it for this end-of-year report from the ZeMarmot team! I hope you appreciate the project. If so, and if you can spare a dime (or haven’t done so yet), remember that the project accepts any amount through the links given above. Some people give just 1 Euro, others 15 Euros per month. In the end, you are all giving life to ZeMarmot!
Thanks and have a great year 2017!
As some people might have noticed, Fedora has had some issues with spam on our Wiki and Trac instances.
This spam attack is a targeted one: the attackers had to create a lot of new users, and they worked not only with the Fedora Account System (FAS) but also with our Contributor Agreement signing process. Every time I edited one small thing to stop them, they’d adapt their tooling to work around it.
We had been doubting our captcha for a while, but changing the captcha system entirely was quite hard, since the obvious candidates would not suit us: either they are not open source or they don’t offer an audio version of the CAPTCHA.
The issue
I had written lots of scripts to automatically detect and delete the spam (which I’ll open source soon), hoping that this would make them stop. They still didn’t stop (if anything, they only increased their rate), so I looked into why they were able to create hundreds of accounts per day.
When looking through the logs, I saw that they were creating the accounts in bursts.
So they would create lots of accounts within a few minutes, then wait a while, before again creating lots of accounts within a few minutes.
With this information I dove into our captcha system, and I discovered that it makes the client submit both the captcha value that the user entered and an encrypted version of the correct captcha.
The server then decrypts the correct captcha and checks whether it has the same plaintext value as the one the user just entered, and also whether the captcha hasn’t expired yet, which happens 5 minutes after generation.
At this point I started thinking that they probably just stored one encrypted captcha together with its correct answer and kept submitting the pair, which would always match.
This also explains the bursts: after 5 minutes the captcha would expire and they would need to solve a new one, after which they could again create lots of accounts in a burst.
After adding some logging code, I discovered that this is indeed what they had been doing.
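In short, because the server trusted the client-supplied ciphertext, one solved captcha could be replayed for its whole 5-minute lifetime. A simplified sketch of the flawed check (not the actual TGCaptcha2 code; all names and values are made up):

```python
def fake_encrypt(plaintext, issued_at):
    # Stand-in for the real symmetric encryption; just pairs the values.
    return (plaintext, issued_at)

def fake_decrypt(token):
    return token

def check_captcha(user_answer, token, now, max_age=300):
    """The flawed validation: the 'correct answer' token comes from the client."""
    plaintext, issued_at = fake_decrypt(token)
    return user_answer == plaintext and now - issued_at <= max_age

# The spammer solves one captcha once...
token = fake_encrypt("x7k2", issued_at=0)
# ...then replays the same (answer, token) pair until it expires,
# which matches the observed 5-minute bursts of account creation.
replays = [check_captcha("x7k2", token, now=t) for t in (10, 150, 290)]
```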
Since we now have Distributed Weakness Filing and I am a DWF Number Authority, I decided to issue the very first DWF from my block for this issue: DWF-2016-89000.
What I ended up doing to fix this was adding a nonce system to the captcha library, to make sure each captcha is only used once.
This also required patches to FAS, to make sure the captcha is stored in the database so that it works in multi-server setups, but those are now out: TGCaptcha2 0.3.0 and FAS2 0.11.0 have come to be!
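The nonce fix can be sketched like this (again a hypothetical illustration, not the actual TGCaptcha2/FAS code): record each captcha’s nonce server-side and reject any token that has been seen before.

```python
used_nonces = set()  # with the FAS patches this state lives in the database,
                     # so it is shared across a multi-server setup

def check_captcha_once(user_answer, plaintext, nonce, now, issued_at, max_age=300):
    """Validate the answer and age, and burn the nonce: one use only."""
    if nonce in used_nonces:
        return False  # replayed captcha: reject
    if now - issued_at > max_age:
        return False  # expired
    if user_answer != plaintext:
        return False  # wrong answer
    used_nonces.add(nonce)
    return True

# The first submission succeeds; replaying the very same captcha fails.
first = check_captcha_once("x7k2", "x7k2", nonce="n1", now=10, issued_at=0)
replay = check_captcha_once("x7k2", "x7k2", nonce="n1", now=20, issued_at=0)
```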
Conclusion
At this moment the issue is fixed, and I’m hoping once again that this will stop the spammers. If not, we’ll continue the arms race, and I’ll get to look into things like machine learning...
December 28, 2016
For the early adopters of the original ColorHug I’ve been offering a service where I send all the newer parts out to people so they can retrofit their device to the latest design. This included an updated LiveCD, the large elasticated velcro strap and the custom-cut foam pad that replaced the old foam feet. In the last two years I’ve sent out over 300 free upgrades, but this has slowed to a dribble recently, as later ColorHug1s and all ColorHug2s had all the improvements and extra bits included by default. I’m going to stop this offer soon, as I need to make things simpler so I can introduce a new thing (+? :) next year. If you do still need a HugStrap and gasket, please fill in the form before the 4th of January. Thanks, and Merry Christmas to all.
One of the criticisms of Wayland is the lack of a window manager concept, that is, a way to get a different window manager behaviour/experience without needing to write a whole compositor as well. On LWN, daniels confirmed that this will become easier with time thanks to libweston.
Quoting daniels (first line/paragraph he’s quoting me):
> Not exactly sure what this allows, but I assume that most of the compositor logic is in this libweston, thereby reducing the complexity creating a different Wayland compositor.
Correct. The idea is to let people write window managers and desktop environments without having to worry about the details of DRM/KMS/GBM, EGL, dmabuf, the Wayland protocol itself, and whatever other plumbing. It’s not there yet, but hopefully in the next year or so it’ll become a really solid viable alternative.
December 27, 2016

The last update has been a while, so with the new year around the corner and sitting in c-base @ 33c3, I’ll do my best to sum up what’s been going on in Rapicorn and Beast development since the last releases.
Both projects now make use of extended instruction sets (SIMD) that have been present in CPUs for the last 8-10 years, such as MMX, SSE, SSE2, SSE3 and CMPXCHG16B. Both also now support easy test builds in Docker images, which makes automated testing on different Linux distributions from travis-ci much simpler and more reproducible. Along the way, both finally got fixed up to fully support clang++ builds, although clang++ still throws a number of warnings. This means we can now use clang++-based development and debugging tools! A lot of old code that had become obsolete or had always remained experimental could be removed (and still is being removed).
Beast got support for using multiple CPU cores in its synthesis engine, we are currently testing performance improvements and stability of this addition. Rapicorn gained some extra logic to allow main loop integration with a GMainContext, which allows Beast to execute a Gtk+ and a Rapicorn event loop in the same thread.
Rapicorn widgets now always store coordinates relative to their parents, and always buffer drawings in per-widget surfaces. This allowed major optimizations to the size negotiation process so renegotiations can now operate much more fine grained. The widget states also got an overhaul and XML nodes now use declare=”…” attributes when new widgets are composed. Due to some rendering changes, librsvg modifications could be obsoleted, so librapicorn now links against a preinstalled librsvg. RadioButton, ToggleButton, SelectableItem and new painter widgets got added, as well as a few convenience properties.
After setting up an experimental Rapicorn build with Meson, we got some new ideas to speed up and improve the autotools-based builds. That is, I managed to do a full Rapicorn build with Meson and compare it to autotools + GNU Make. It turns out Meson had two significant speed advantages:
- Meson builds files from multiple directories in parallel;
- Meson configuration happens a lot faster than what the autoconf scripts do.
Meson also has/had a lot of quirks (examples: #785, #786, #753) and wasn’t really easier to use than our GNU Make setup, at least for me, given that I know GNU Make very well. The number one advantage of Meson was overcome by migrating Rapicorn to a non-recursive Makefile (I find dependencies can still be expressed much better in Make than in Meson), since parallel GNU Make can be just as fast as Ninja for small to medium sized projects.
The number two issue is harder to beat, though. Looking at our configure.ac file, there were a lot of shell and compiler invocations I could remove simply by taking the same shortcuts that Meson does, e.g. detecting clang or gcc and then deriving a batch of compiler flags instead of testing compiler support for each flag individually. Executing ./configure takes ca. 3 seconds now, which isn’t too bad for infrequent invocations. The real culprit is autoreconf, though, which takes over 12 seconds to regenerate everything after a configure.ac or related change (briefly looking into it, it seems aclocal takes longer than autoconf, automake, autoheader and libtoolize together).
PS: I’m attending 33C3 in Hamburg atm, so drop me a line (email or twitter) if you’re around and like to chat over coffee.
December 26, 2016
I’m still not quite done with this project. And since it is vacation time, I had some time to spend on it, leading to a release with some improvements that I’d like to present briefly.

One thing I noticed missing right away when I started to transcribe one of my mother’s recipes was a segmented ingredients list. What I mean by that is the typical cake recipe that says “For the dough…”, “For the frosting…”
So I had to add support for this before I could continue with the recipe. The result looks like this:
Another weak point that became apparent was editing the ingredients on the edit page. Initially, the ingredients list was just a plain text field. The previous release changed this to a list view, but the editing support consisted just of a popover with plain entries to add a new row.
This turned out to be hard to get right, and I had to go back to the designers (thanks, Jakub and Elvin) to get some ideas. I am reasonably happy with the end result. The popover now provides suggestions for both ingredients and units, while still allowing you to enter free-form text. And the same popover is now also available to edit existing ingredients:
Just in time for the Christmas release, I was reminded that we have a nice and simple solution for spell-checking in GTK+ applications now, with Sébastian Wilmet’s gspell library. So I quickly added spell-checking to the text fields in Recipes:
Lastly, not really a new feature or due to my efforts, but Recipes looks really good in a dark theme as well.
Looking back at the goals that are listed on the design page for this application, we are almost there:
- Find delicious food recipes to cook from all over the world
- Assist people with dietary restrictions
- Allow defining ingredient constraints
- Print recipes so I can pin them on my fridge
- Share recipes with my friends using e-mail
The one thing that is not covered yet is sharing recipes by email. For that, we need work on the Flatpak side, to create a sharing portal that lets applications send email.
And for the first goal we really need your support – if you have been thinking about writing up one of your favorite recipes, the holiday season is the perfect opportunity to cook it again, take some pictures of the result and contribute your recipe!
December 25, 2016
There's a surgery that can be done to (try to) fix the root cause of the issue, but the recovery process seems way less appealing than treating it a few times per year when needed.
Apart from this issue, I started feeling some pain in my lower back that, after a few days, went all the way down to my knee and limited my movements for a few weeks. Finding the root cause of this issue was not so easy, but thanks to Pavel Grunt's mother I ended up visiting a neurologist who found it: a slipped disc!
Again, there's a surgery that can be done and, again, I decided to try to work around it.
Time to change?
Both the project and the people there are amazing, but I wasn't feeling as excited as I knew I could feel. So, heartbroken and quite uncertain about the future, I decided to leave the project and search for something else, a completely new thing to occupy my mind for 8+ hours a day.
What I can tell, for sure, is that I feel more and more like I've chosen wisely.
Adaptation period
It seems quite obvious but, well, it isn't!
After a lot of talks with my manager and with some people from the office who went through the same shock ... I decided to talk to my colleagues. Since that talk, the situation has improved quite a lot. And, for sure, what I felt during those first weeks is just water under the bridge nowadays.
An important point to bring up here is the reason behind this whole situation. People (usually) don't speak their local language with the intention of keeping you from understanding the context or anything similar (although it is really hard to avoid that impression). Of course, it may happen, but in most cases it happens simply because they don't realize the impact it may have on their colleagues. If it happens to you at some point in your life, just let them know about the situation (politely, of course) and I guess the outcome will be beneficial for both sides.
After the storm comes the calm
Now, after a few months working on SSSD (and together with FreeIPA), I finally feel comfortable enough to say that my decision to join this team was a really good one.
During this whole transition period, some meetings happened in the Brno office to find out what could be done to improve the well-being of foreigners here. Some points were raised, and some changes can already be seen, which is quite good. Those changes have been pushed (mainly) by Jana Chvalkovska and her team. So, sincerely, I'm proud to see things moving in a good direction.
Also, as I mentioned before in the "Pain! A lot of pain!" section, a common cause of both problems I had was my being overweight. Well, I've been working on this as well. So far I've been able to lose 45+ kg, and there are 30 kg more to go over the coming year. Let's see whether I'll be able to reach my goal ...
Am I happy in the end?
Yes. Happier than in the beginning of the year, for sure. Personal life has been going well. Work has been going well.
So, I guess there's nothing really serious that I could complain about ...
Any expectation for the next year?
As I've learned, at some point in my life, that the most effective way to avoid frustration is to lower expectations, let's say that getting through 2017 with less pain would be good enough. :-)
And if things keep going as they are right now there's a big chance it can easily happen.