June 16, 2016

John Goerzen

Mud, Airplanes, Arduino, and Fun

The last few weeks have been pretty hectic in their way, but I’ve also had the chance to take some time off work to spend with family, which has been nice.

Memorial Day: breakfast and mud

For Memorial Day, I decided it would be nice to have a cookout for breakfast rather than for dinner. So we all went out to the fire ring. Jacob and Oliver helped gather kindling for the fire, while Laura chopped up some vegetables. Once we got a good fire going, I cooked some scrambled eggs in a cast iron skillet, mixed with meat and veggies. Mmm, that was tasty.

Then we all just lingered outside. Jacob and Oliver enjoyed playing with the cats, and the swingset, and then…. water. They put the hose over the slide and made a “water slide” (more mud slide maybe).

IMG_7688

Then we got out the water balloon fillers they had gotten recently, and they loved filling up water balloons. All in all, we all just enjoyed the outdoors for hours.

MVI_7738

Flying to Petit Jean, Arkansas

Somehow, neither Laura nor I have ever really been to Arkansas. We figured it was about time. I had heard wonderful things about Petit Jean State Park from other pilots: it’s rather unique in that it has a small airport right in the park, a feature left over from when Winthrop Rockefeller owned much of the mountain.

And what a beautiful place it was! Dense forests with wonderful hiking trails, dotted with small streams, bubbling springs, and waterfalls all over; a nice lake, and a beautiful lodge to boot. Here was our view down into the valley at breakfast in the lodge one morning:

IMG_7475

And here’s a view of one of the trails:

IMG_7576

The sunset views were pretty nice, too:

IMG_7610

And finally, the plane we flew out in, parked all by itself on the ramp:

IMG_20160522_171823

It was truly a relaxing, peaceful, re-invigorating place.

Flying to Atchison

Last weekend, Laura and I decided to fly to Atchison, KS. Atchison is one of the oldest cities in Kansas, and has quite a bit of history to show off. It was fun landing at the Amelia Earhart Memorial Airport in a little Cessna, and then going to three museums and finding lunch too.

Of course, there is the Amelia Earhart Birthplace Museum, which is a beautifully-maintained old house along the banks of the Missouri River.

IMG_20160611_134313

I was amused to find this hanging in the county historical society museum:

IMG_20160611_153826

One fascinating find is a Regina Music Box, popular in the late 1800s and early 1900s. It operates on the same principles as the cylindrical music boxes you may have seen. But I am particularly impressed with the effort that would have gone into developing these discs in the pre-computer era, since of course the holes at the outer edge of the disc move faster than the inner ones. It would certainly take a lot of careful calculation to produce one of these. I found this one in the Cray House Museum:

VID_20160611_151504

An Arduino Project with Jacob

One day, Jacob and I got going with an Arduino project. He wanted flashing blue lights for his “police station”, so we disassembled our previous Arduino project, put a few things on the breadboard, I wrote some code, and there we go. Then he noticed an LCD in my Arduino kit. I hadn’t ever gotten around to using it yet, and of course he wanted it immediately. So I looked up how to connect it, found an API reference, and dusted off my C skills (that was fun!) to program a scrolling message on it. Here is Jacob showing it off:

VID_20160614_074802.mp4

16 June, 2016 04:00AM by John Goerzen

June 15, 2016

Reproducible builds folks

Reproducible builds: week 59 in Stretch cycle

What happened in the Reproducible Builds effort between June 5th and June 11th 2016:

Media coverage

Ed Maste gave a talk at BSDCan 2016 on reproducible builds (slides, video).

GSoC and Outreachy updates

Weekly reports by our participants:

  • Scarlett Clark worked on making some packages reproducible, focusing on KDE backend and utility programs.
  • Ceridwen published an initial design for the interface for reprotest, including a discussion on different types of build variations and the difficulties of specifying certain types of variations.
  • Valerie Young improved documentation for building our tests website, began migrating Debian-specific pages into a new namespace, and planned future work around its navigation.

Documentation update

  • Ximin Luo proposed a modification to our SOURCE_DATE_EPOCH spec explaining FORCE_SOURCE_DATE.

    Some upstream build tools (e.g. TeX, see below) have expressed a desire to control which cases of embedded timestamps should obey SOURCE_DATE_EPOCH. They were not convinced by our arguments on why this is a bad idea, so we agreed on an environment variable FORCE_SOURCE_DATE for them to implement their desired behaviour - named generically, so that at least we can set it centrally. For more details, see the text just linked. However, we strongly urge most build tools not to use this, and instead obey SOURCE_DATE_EPOCH unconditionally in all cases.
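
    As a point of reference, the idiom the specification recommends for honouring SOURCE_DATE_EPOCH unconditionally in a shell-based build looks roughly like this (a sketch only; the date format is just an example):

        DATE_FMT="+%Y-%m-%d"
        SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date +%s)}"
        BUILD_DATE=$(date -u -d "@$SOURCE_DATE_EPOCH" "$DATE_FMT" 2>/dev/null ||
                     date -u -r "$SOURCE_DATE_EPOCH" "$DATE_FMT" 2>/dev/null ||
                     date -u "$DATE_FMT")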

Toolchain fixes

  • TeX Live 2016 released with SOURCE_DATE_EPOCH support for all engines except LuaTeX and original TeX.
  • Continued discussion (alternative archive) with TeX upstream, about SOURCE_DATE_EPOCH corner cases, eventually resulting in the FORCE_SOURCE_DATE proposal from above.
  • gcc-5/5.4.0-4 by Matthias Klose now avoids storing -fdebug-prefix-map in DW_AT_producer, thanks to original patch by Daniel Kahn Gillmor.
  • sphinx/1.4.3-1 by Dmitry Shachnev now drops Debian-specific patches relating to SOURCE_DATE_EPOCH applied upstream, original patch by Alexis Bienvenüe.
  • asciidoctor/1.5.4-2 by Cédric Boutillier now supports SOURCE_DATE_EPOCH, thanks to original patch by Alexis Bienvenüe.
  • dh-python/1.5.4-2 by Piotr Ożarowski now behaves better in some cases, thanks to original patch by Chris Lamb.

Packages fixed

The following 16 packages have become reproducible due to changes in their build-dependencies: apertium-dan-nor apertium-swe-nor asterisk-prompt-fr-armelle blktrace canl-c code-saturne coinor-symphony dsc-statistics frobby libphp-jpgraph paje.app proxycheck pybit spip tircd xbs

The following 5 packages are new in Debian and appear to be reproducible so far: golang-github-bowery-prompt golang-github-pkg-errors golang-gopkg-dancannon-gorethink.v2 libtask-kensho-perl sspace

The following packages had older versions which were reproducible, and their latest versions are now reproducible again after being fixed:

The following packages have become reproducible after being fixed:

Some uploads have fixed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

  • #806331 against xz-utils by Ximin Luo: make the selected POSIX shell stable across build environments
  • #806494 against gnupg by intrigeri: Make man pages not embed a build-time dependent timestamp
  • #806945 against bash by Reiner Herrmann and Ximin Luo: Use the system man2html, and set PGRP_PIPE unconditionally.
  • #825857 against python-setuptools by Anton Gladky: sort libs in native_libs.txt
  • #826408 against brainparty by Reiner Herrmann: Sort object files for deterministic linking order
  • #826416 against blockout2 by Reiner Herrmann: Sort the list of source files
  • #826418 against xgalaga++ by Reiner Herrmann: Sort source files to get a deterministic linking order
  • #826423 against kraptor by Reiner Herrmann: Sort source files for deterministic linking order
  • #826431 against traceroute by Reiner Herrmann: Sort lists of libraries/source/object files
  • #826544 against doc-debian by intrigeri: make the created files stable regardless of the locale
  • #826676 against python-openstackclient by Chris Lamb: make the build reproducible
  • #826677 against cadencii by Chris Lamb: make the build reproducible
  • #826760 against dctrl-tools by Reiner Herrmann: Sort object files for deterministic linking order
  • #826951 against slicot by Alexis Bienvenüe: please make the build reproducible (fileordering)
  • #826982 against hoichess by Reiner Herrmann: Sort object files for deterministic linking order

Package reviews

68 reviews have been added, 19 have been updated and 28 have been removed this week. New and updated issues:

26 FTBFS bugs have been reported by Chris Lamb, 1 by Santiago Vila and 1 by Sascha Steinbiss.

diffoscope development

  • Mattia Rizzolo uploaded diffoscope/54 to jessie-backports.

strip-nondeterminism development

  • Mattia uploaded strip-nondeterminism/0.018-1 to jessie-backports, to support a debhelper backport.
  • Andrew Ayer uploaded strip-nondeterminism/0.018-2 fixing #826700, a packaging improvement for Multi-Arch to ease cross-build situations.
  • 2 days later Andrew released strip-nondeterminism/0.019; now strip-nondeterminism is able to:
    • recursively normalize JAR files embedded within JAR files (#823917)
    • clamp the timestamp, the same way tar >=1.28-2.2 can (for now available only for gzip archives)

disorderfs development

  • Andrew Ayer released disorderfs/0.4.3, fixing an issue with umask handling (#826891)

tests.reproducible-builds.org

  • Valerie Young moved the Debian-specific pages into the /debian/ namespace, with redirects for the previous URLs.
  • Holger Levsen improved the reliability of build jobs: the availability of both build nodes (for a given build) is now being tested when a build job is started, to better cope when one of the 25 build nodes goes down for some reason.
  • Ximin Luo improved the index of identified issues to include the total popcon scores of each issue, which is now also used for sorting that page.

Misc.

Steven Chamberlain submitted a patch to FreeBSD's makefs to allow reproducible builds of the kfreebsd installer.

Ed Maste committed a patch to FreeBSD's binutils to enable deterministic archives by default in GNU ar.

Helmut Grohne experimented with cross+native reproductions of dash with some success, using rebootstrap.

This week's edition was written by Ximin Luo, Chris Lamb, Holger Levsen, Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC.

15 June, 2016 11:27PM

Enrico Zini

Verifying gpg keys

Suppose you have a gpg keyid like 9F6C6333 that corresponds to both key 1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333 and 88BB08F633073D7129383EE71EA37A0C9F6C6333, and you don't know which of the two to use.

You go to http://pgp.cs.uu.nl/ and find out that the site uses short key IDs, so the two keys are indistinguishable.

Building on Clint's hopenpgp-tools, I made a script that screenscrapes http://pgp.cs.uu.nl/ for trust paths, downloads all the potentially connecting keys in a temporary keyring, and runs hkt findpaths on it:

$ ./verify-trust-paths 1793D6AB75663E6BF104953A634F4BD1E7AD5568 1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333
hkt (hopenpgp-tools) 0.18
Copyright (C) 2012-2016  Clint Adams
hkt comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions.
(4,[1,4,3,6])

(1,1793D6AB75663E6BF104953A634F4BD1E7AD5568)
(3,F8921D3A7404C86E11352215C7197699B29B232A)
(4,C331BA3F75FB723B5873785B06EAA066E397832F)
(6,1AE0322EB8F74717BDEABF1D44BB1BA79F6C6333)

$ ./verify-trust-paths 1793D6AB75663E6BF104953A634F4BD1E7AD5568 88BB08F633073D7129383EE71EA37A0C9F6C6333
hkt (hopenpgp-tools) 0.18
Copyright (C) 2012-2016  Clint Adams
hkt comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions.
(0,[])

This is a start: it could look in the local keyring for all ultimately trusted key fingerprints and use those as starting points. It could just take a short keyid as an argument and automatically check all matching fingerprints.
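
For reference, both starting points can be derived from a local GnuPG setup with standard commands; a rough sketch, reusing the short keyid from the example above:

$ gpg --export-ownertrust | grep ':6:' | cut -d: -f1
$ gpg --with-colons --fingerprint 9F6C6333 | awk -F: '/^fpr/ {print $10}'

The first command lists the fingerprints marked with ultimate ownertrust; the second lists all fingerprints matching a short keyid.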

I'm currently quite busy with https://nm.debian.org and at the moment verify-trust-paths scratches enough of my itch that I can move on with my other things.

Please send patches, or take it over: I'd like to see this grow.

15 June, 2016 07:47PM

hackergotchi for Steve Kemp

Steve Kemp

So I should document the purple server a little more

I should probably document the purple server I hacked together in Perl and mentioned in my last post. In short it allows you to centralise notifications. Send "alerts" to it, and when they are triggered they will be routed from that central location. There is only a primitive notifier included, which sends data to the console, but there are sample stubs for sending by email/pushover, and escalation.

In brief you create alerts by sending a JSON object via HTTP-POST. These objects contain a bunch of fields, but the two most important are:

  • id
    • A human-name for the alert. e.g. "disk-space", "heartbeat", or "unread-mail".
  • raise
    • When to raise the alert. e.g. "now", "+5m", "1466006086".

When an update is received any existing alert has its values updated, which makes heartbeat alerts trivial. Send a message with:

{ "id": "heartbeat", "raise": "+5m", .. }

The existing alert will be updated each time such a new event is submitted, which means that the time at which that alert will raise will be pushed back by five minutes. If you send this every 60 seconds then you'll get informed of an outage five minutes after your server explodes (because the "+5m" will have been turned into an absolute time, and that time will eventually lie in the past - triggering a notification).
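
As an illustration, such a heartbeat could be re-armed from cron with a plain HTTP POST. This is a sketch only: the post doesn't give the host, port or path, so those are assumptions here:

    $ curl -s -X POST -H "Content-Type: application/json" \
          -d '{"id": "heartbeat", "raise": "+5m"}' \
          http://alerts.example.com:8080/events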

Alerts are keyed on the source IP which sent the submission and the id field, meaning you can send the same update from multiple hosts without causing any problems.

Notifications can be viewed in a reasonably pretty Web UI, so you can clear raised alerts, see the pending ones, and suppress further notifications on something that has been raised. (By default notifications are issued every sixty seconds, until the alert is cleared. There is support for only raising an alert once, which is useful for notification services such as pushover that repeat messages themselves.)

Anyway this is a fun project, which is a significantly simplified and less scalable version of a project which is open-sourced already and used at Bytemark.

15 June, 2016 04:15PM

Andrew Shadura

Migrate to systemd without a reboot

Yesterday I was fixing an issue with one of the servers behind kallithea-scm.org: the hook intended to propagate pushes from Our Own Kallithea to Bitbucket stopped working. Until yesterday, that server was using Debian’s flavour of System V init and djb’s dæmontools to keep things running. To make the hook asynchronous, I wrote a service to be managed by dæmontools, so that concurrency issues would be solved by it. However, I didn’t implement any timeouts, so when last week wget froze while pulling Weblate’s hook, there was nothing to interrupt it, and the hook stopped working since dæmontools thought it was already running and wouldn’t re-trigger it. Killing wget helped, but I decided I needed to do something to prevent the situation from happening in the future.

I’ve been using systemd at work for the last year, so I am now confident I’m happier with systemd than with dæmontools, and I decided to switch the server to systemd. Not surprisingly, I prepared unit files in about 5 minutes without having to look into the manuals again, while with dæmontools I had to check things every time I needed to change something. The tricky thing was the switch itself. It is a virtual server, presumably running in Xen, and I don’t have access to the console, so if I break something, I need to summon Bradley Kuhn or someone from Conservancy, who’s kindly donated the server to the project. In any case, I decided to attempt the upgrade without a reboot, so that I would have more options to roll back my changes in case things went wrong.

After studying the manpages of both systemd’s init and sysvinit’s init, I realised I could install systemd as /sbin/init and ask the already running System V init to re-exec. However, systemd’s init can’t talk to System V init, so before installing systemd I made a backup of it. It’s also important to stop all running services (except probably ssh) to make sure systemd doesn’t start second instances of each. And then: /tmp/init u — and we’re running systemd! A couple of additional checks, and it’s safe to reboot.
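
Condensed into commands, the sequence looks roughly like the sketch below. The package names and the service being stopped are illustrative, and as noted above you really want an out-of-band way back in before trying this:

    # cp /sbin/init /tmp/init               # keep the sysvinit binary around to act as telinit
    # apt-get install systemd systemd-sysv  # makes /sbin/init point at systemd
    # /etc/init.d/some-service stop         # stop running services, but keep sshd
    # /tmp/init u                           # ask the running sysvinit (PID 1) to re-exec /sbin/init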

Only when I had done all that did I realise that, if systemd didn’t work, I’d probably not be able to undo my changes if my connection got interrupted. So, even though it worked in the end, it’s probably not a good idea to perform such manipulations when you don’t have an alternative way to connect to the server :)

15 June, 2016 11:51AM

Enrico Zini

On discomfort and new groups

I recently wrote:

When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop.

Last night I asked a group of friends what they do if they start feeling uncomfortable when they are alone in a group of people they don't know. Here are the replies:

  • Wait outside the group until I figure the group out.
  • Find someone to talk for a while until you get comfortable.
  • If a person is making things uncomfortable for you, let them know, and leave if nobody cares.
  • Sit there in silence.
  • Work around unwelcome people by bearing them for a bit while trying to integrate with others.
  • Some people are easy to bribe into friendship, just bring cake.
  • While you don't know what is going on, you try to replicate what others are doing.
  • Spend time trying to get a feeling of what are the consequences of taking actions.
  • Purposefully disagree with people in a new environment to figure out if having a different opinion is accepted.
  • Once I was new and I was asked to be the person that invites everyone for lunch, that forced me to talk to everyone, and integrate.
  • When you are the first one to point something out, you'll probably soon find out you're not alone.
  • The reaction on the first time something is exposed, influences how often similar cases will be reported.

I think a lot of these points are good food for thought about barriers to entry, and about safety nets that a group has or might want to have.

15 June, 2016 08:14AM

Russ Allbery

Review: Matter

Review: Matter, by Iain M. Banks

Publisher: Orbit
Copyright: February 2008
ISBN: 0-316-00536-3
Format: Hardcover
Pages: 593

Sursamen is an Arithmetic, Mottled, Disputed, Multiply Inhabited, Multi-million Year Safe, and Godded Shellworld. It's a constructed world with multiple inhabitable levels, each lit by thermonuclear "suns" on tracks, each level supported above the last by giant pillars. Before the recorded history of the current Involved Species, a culture called the Veil created the shellworlds with still-obscure technology for some unknown purpose, and then disappeared. Now, they're inhabited by various transplants and watched over by a hierarchy of mentor and client species. In the case of Sursamen, both the Aultridia and the Oct claim jurisdiction (hence "Disputed"), and are forced into an uneasy truce by the Nariscene, a more sophisticated species that oversees them both.

On Sursamen, on level eight to be precise, are the Sarl, a culture with an early industrial level of technology in the middle of a war of conquest to unite their level (and, they hope, the next level down). Their mentors are the Oct, who claim descent from the mysterious Veil. The Deldeyn, the next level down, are mentored by the Aultridia, a species that evolved from a parasite on Xinthian Tensile Aranothaurs. Since a Xinthian, treated by the Sarl as a god, lives in the heart of Sursamen (hence "Godded"), tensions between the Sarl and the Aultridians run understandably high.

The ruler of the Sarl had three sons and a daughter. The oldest was killed by the people he is conquering as Matter starts. The middle son is a womanizer and a fop who, as the book opens, watches a betrayal that he's entirely unprepared to deal with. The youngest is a thoughtful, bookish youth pressed into a position that he also is not well-prepared for.

His daughter left the Sarl, and Sursamen itself, fifteen years previously. Now, she's a Special Circumstances agent for the Culture.

Matter is the eighth Culture novel, although (like most of the series) there's little need to read the books in any particular order. The introduction to the Culture here is a bit scanty, so you'll have more background and understanding if you've read the previous novels, but it doesn't matter a great deal for the story.

Sharp differences in technology levels have turned up in previous Culture novels (although the most notable example is a minor spoiler), but this is the first Culture novel I recall where those technological differences were given a structure. Usually, Culture novels have Special Circumstances meddling in, from their perspective, "inferior" cultures. But Sursamen is not in Culture space or directly the Culture's business. The Involved Species that governs Sursamen space is the Morthenveld: an aquatic species roughly on a technology level with the Culture themselves. The Nariscene are their client species; the Oct and Aultridia are, in turn, client species (well, mostly) of the Nariscene, while meddling with the Sarl and Deldeyn.

That part of this book reminded me of Brin's Uplift universe. Banks's Involved Species aren't the obnoxious tyrants of Brin's universe, and mentoring doesn't involve the slavery of the Uplift universe. But some of the politics are a bit similar. And, as with Uplift, all the characters are aware, at least vaguely, of the larger shape of galactic politics. Even the Sarl, who themselves have no more than early industrial technology. When Ferbin flees the betrayal to try to get help, he ascends out of the shellworld to try to get assistance from an Involved species, or perhaps his sister (which turns out to be the same thing). Banks spends some time here, mostly through Ferbin and his servant (who is one of the better characters in this book), trying to imagine what it would be like to live in a society that just invented railroads while being aware of interstellar powers that can do practically anything.

The plot, like the world on which it's set, proceeds on multiple levels. There is court intrigue within the Sarl, war on their level and the level below, and Ferbin's search for support and then justice. But the Sarl live in an artifact with some very mysterious places, including the best set piece in the book: an enormous waterfall that's gradually uncovering a lost city on the level below the Sarl, and an archaeological dig that proceeds under the Deldeyn and Sarl alike. Djan Seriy decides to return home when she learns of events in Sarl, originally for reasons of family loyalty and obligation, but she's a bit more in touch with the broader affairs of the galaxy, including the fact that the Oct are acting very strangely. There's something much greater at stake on Sursamen than tedious infighting between non-Involved cultures.

As always with Banks, the set pieces and world building are amazing, the scenery is jaw-dropping, and I have some trouble warming to the characters. Dramatic flights across tower-studded landscapes seeking access to forbidden world-spanning towers largely, but not entirely, make up for not caring about most of the characters for most of the book. This did change, though: although I never particularly warmed to Ferbin, I started to like his younger brother, and I really liked his sister and his servant by the end of the book.

Unfortunately, the end of Matter is, if not awful, at least exceedingly abrupt. As is typical of Banks, we get a lot of sense of wonder but not much actual explanation, and the denouement is essentially nonexistent. (There is a coy epilogue hiding after the appendices, but it mostly annoyed me and provides only material for extrapolation about the characters.) Another SF author would have written a book about the Xinthian, the Veil, the purpose of the shellworlds, and the deep history of the galaxy. I should have known going in that Banks isn't that sort of SF author, but it was still frustrating.

Still, Banks is an excellent writer and this is a meaty, complex, enjoyable story with some amazing moments of wonder and awe. If you like Culture novels in general, you will like this. If you like set-piece-heavy SF on a grand scale, such as that of Alastair Reynolds or Kim Stanley Robinson, you will probably like this. Recommended.

Rating: 8 out of 10

15 June, 2016 03:37AM

June 14, 2016

hackergotchi for Joey Hess

Joey Hess

second system

Five years ago I built this, and it's worked well, but is old and falling down now.

mark I outdoor shower

The replacement is more minimalist and like any second system tries to improve on the design of the first. No wood to rot away, fully adjustable height. It's basically a shower swing, suspended from a tree branch.

mark II outdoor shower

Probably will turn out to have its own new problems, as second systems do.

14 June, 2016 09:15PM

hackergotchi for Simon McVittie

Simon McVittie

GTK versioning and distributions

Allison Lortie has provoked a lot of comment with her blog post on a new proposal for how GTK is versioned. Here's some more context from the discussion at the GTK hackfest that prompted that proposal: there's actually quite a close analogy in how new Debian versions are developed.

The problem we're trying to address here is the two sides of a trade-off:

  • Without new development, a library (or an OS) can't go anywhere new
  • New development sometimes breaks existing applications

Historically, GTK has aimed to keep compatible within a major version, where major versions are rather far apart (GTK 1 in 1998, GTK 2 in 2002, GTK 3 in 2011, GTK 4 somewhere in the future). Meanwhile, fixing bugs, improving performance and introducing new features sometimes results in major changes behind the scenes. In an ideal world, these behind-the-scenes changes would never break applications; however, the world isn't ideal. (The Debian analogy here is that as much as we aspire to having the upgrade from one stable release to the next not break anything at all, I don't think we've ever achieved that in practice - we still ask users to read the release notes, even though ideally that wouldn't be necessary.)

In particular, the perceived cost of doing a proper ABI break (a fully parallel-installable GTK 4) means there's a strong temptation to make changes that don't actually remove or change C symbols, but are clearly an ABI break, in the sense that an application that previously worked and was considered correct no longer works. A prominent recent example is the theming changes in GTK 3.20: the ABI in terms of functions available didn't change, but what happens when you call those functions changed in an incompatible way. This makes GTK hard to rely on for applications outside the GNOME release cycle, which is a problem that needs to be fixed (without stopping development from continuing).

The goal of the plan we discussed today is to decouple the latest branch of development, which moves fast and sometimes breaks API, from the API-stable branches, which only get bug fixes. This model should look quite familiar to Debian contributors, because it's a lot like the way we release Debian and Ubuntu.

In Debian, at any given time we have a development branch (testing/unstable) - currently "stretch", the future Debian 9. We also have some stable branches, of which the most recent are Debian 8 "jessie" and Debian 7 "wheezy". Different users of Debian have different trade-offs that lead them to choose one or the other of these. Users who value stability and want to avoid unexpected changes, even at a cost in terms of features and fixes for non-critical bugs, choose to use a stable release, preferably the most recent; they only need to change what they run on top of Debian for OS API changes (for instance webapps, local scripts, or the way they interact with the GUI) approximately every 2 years, or perhaps less often than that with the Debian-LTS project supporting non-current stable releases. Meanwhile, users who value the latest versions and are willing to work with a "moving target" as a result choose to use testing/unstable.

The GTK analogy here is really quite close. In the new versioning model, library users who value stability over new things would prefer to use a stable-branch, ideally the latest; library users who want the latest features, the latest bug-fixes and the latest new bugs would use the branch that's the current focus of development. In practice we expect that the latter would be mostly GNOME projects. There's been some discussion at the hackfest about how often we'd have a new stable-branch: the fastest rate that's been considered is a stable-branch every 2 years, similar to Ubuntu LTS and Debian, but there's no consensus yet on whether they will be that frequent in practice.

How many stable versions of GTK would end up shipped in Debian depends on how rapidly projects move from "old-stable" to "new-stable" upstream, how much those projects' Debian maintainers are willing to patch them to move between branches, and how many versions the release team will tolerate. Once we reach a steady state, I'd hope that we might have 1 or 2 stable-branched versions active at a time, packaged as separate parallel-installable source packages (a lot like how we handle Qt). GTK 2 might well stay around as an additional active version just from historical inertia. The stable versions are intended to be fully parallel-installable, just like the situation with GTK 1.2, GTK 2 and GTK 3 or with the major versions of Qt.

For the "current development" version, I'd anticipate that we'd probably only ship one source package, and do ABI transitions for one version active at a time, a lot like how we deal with libgnome-desktop and the evolution-data-server family of libraries. Those versions would have parallel-installable runtime libraries but non-parallel-installable development files, again similar to libgnome-desktop.

At the risk of stretching the Debian/Ubuntu analogy too far, the intermediate "current development" GTK releases that would accompany a GNOME release are like Ubuntu's non-LTS suites: they're more up to date than the fully stable releases (Ubuntu LTS, which has a release schedule similar to Debian stable), but less stable and not supported for as long.

Hopefully this plan can meet both of its goals: minimize breakage for applications, while not holding back the development of new APIs.

14 June, 2016 01:56AM

June 13, 2016

Reproducible builds folks

First alpha release of reprotest

Author: ceridwen

The first, very-alpha release of reprotest is now out at PyPi. It should hit Debian experimental later this week. While it only builds on an existing system (as I'm still working on support for virtualization), it can now check its own reproducibility, which it does in its own tests, both using setuptools and debuild. Unfortunately, setuptools seems to generate file-order-dependent binaries, meaning python setup.py bdist creates unreproducible binaries. With debuild, reprotest probably would be reproducible with the modified packages from the Reproducible Builds project, though I haven't tested that yet. It tests 'captures_environment', 'fileordering' (renamed from 'filesystem'), 'home', 'kernel', 'locales', 'path', 'time', 'timezone', and 'umask'. The other variations require superuser privileges and modifications that would be unsafe to make to a running system, so they will only be enabled in the containers.
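
To illustrate the kind of check reprotest automates, here is a rough manual equivalent: build the same tree twice with a couple of environment variations and compare the results with diffoscope. The package name and file paths below are made up for the example:

    $ python setup.py bdist && cp dist/example-1.0.linux-x86_64.tar.gz /tmp/first.tar.gz
    $ TZ=Asia/Tokyo LC_ALL=fr_FR.UTF-8 python setup.py bdist
    $ diffoscope /tmp/first.tar.gz dist/example-1.0.linux-x86_64.tar.gz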

The next major part of the project is integrating autopkgtest's container management system into reprotest. For the curious, autopkgtest is composed of a main program, adt-run, which then calls other command-line programs, adt-virt-chroot, adt-virt-lxd, adt-virt-schroot, adt-virt-null, and adt-virt-qemu, that communicate with the containers. (The autopkgtest maintainer has since renamed the programs, but the underlying structure remains the same.) I think this is a bit of an odd design, but it works well for my purposes since the container programs already have existing CLIs that reprotest can use.

13 June, 2016 10:15PM

Satyam Zode

GSoC 2016 Week 2 and 3: Reproducible Builds in Debian

This is a report on my activities with Debian Reproducible Builds over the previous weeks.

Over the last 10 days, I built various Debian packages on my own using prebuilder, to experience the reproducibility issues first-hand. I am thankful to deki and Lunar for suggesting that task. Based on this experience, I managed to find more use cases for the --hide=profiles specification.

I also examined the differences of various unreproducible Debian packages on http://tests.reproducible-builds.org. There are many packages available for examination.

In brief I did following tasks:

  • I worked on the --hide=profiles specification. Mostly, I tried to find use cases.
  • I made changes to https://wiki.debian.org/ReproducibleBuilds/HideProfilesSpecification and added detailed information to each use case.
  • I read the documentation on the argcomplete Python module and got some hands-on experience with it. The purpose of this was to add an argument completion feature to Diffoscope; pabs had filed a bug report for this (#826711). I am implementing this feature, discussing issues with pabs, and also studying diffs side-by-side to generate more use cases. Thanks to pabs for the guidance and support :)
  • I went through different pieces of software to see how they ignore such content.

The upcoming week will be an important as well as fun week, because I will be implementing the use cases. Right now, I am looking at different software that ignores such content and taking notes, so that it helps me while implementing the solutions for the use cases. I am also looking forward to feedback from the community on the use cases and CLI interfaces. Have a great week :)

13 June, 2016 05:22PM

Olivier Grégoire

Third week at GSoC

I began this week by finishing my Qt tutorial. With that new knowledge, I was able to implement my method launchSmartInfo(int) in LRC.

After that, I needed to implement the GNOME client too. Having followed the Qt tutorial, I thought I could just learn GTK+ by reading the code. In the end, I just lost a lot of time doing that and didn't learn much, so I eventually turned to a proper GTK+ tutorial.

I began trying to show a transparent window with some text in front of the call view. To do that, I want to use the Clutter library.

------------

Conclusion: as you can see, I lost a lot of time on this GUI learning. Now that it's done, I can move forward! :)

13 June, 2016 04:57PM

Kevin Avignon

GSOC 2016 : The end

Hi guys. Because of health problems, I won’t be able to meet the expectations for the midterm evaluation coming next week, and I will have to step down from the program. It pains me to do so, since the project was taking me out of my comfort zone and forcing me to adapt to a …

13 June, 2016 04:48PM by KevinAvignon

Scarlett Clark

Debian: KDE: Reproducible Builds week 3, Randa Platforms Equals Busy times!

Debian:

I am a smidgen late on this post due to travel, sorry!

choqok:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825322
For this I was able to come up with a patch for kconfig_compiler to encode generated files to utf-8.
Review request is here:
https://git.reviewboard.kde.org/r/128102/

This has been approved and I will be pushing it as soon as I patch the qt5 frameworks version.

kxmlgui:

WIP. This has been a steep learning curve: according to the notes it was supposed to be an easy case of an embedded kernel version, but that was not the case! After grueling hours of trying to sort out randomness in the debug output, I finally narrowed it down to cases where QStringLiteral was used with non-letter characters, e.g. (" <"). These were causing debug symbols to be generated with ( lambda() ), which caused unreproducible symbol/debug files. Using QString::fromUtf8 instead seems to fix this, so it is now a case of fixing all of these occurrences in the code. I am working on a mega patch for upstream and it should be ready early in the week.

This last week I spent a large portion making my way through a mega patch for kxmlgui, when it was suggested to me to write a small Qt app to test QStringLiteral in isolation, and sure enough two builds were byte-for-byte identical. So this means that QStringLiteral may not be the issue at all. With some more assistance I am going to expand my test app with several QStringLiterals of varying lengths; we suspect it is a padding issue, which complicates things.

KDE:
On the KDE front, I have arrived safe and sound in Randa and, aside from some major jetlag and reproducible builds, I have been quite busy with the KDE CI. I am reworking my DSL to use friendly YAML files to generate jobs for all platforms (Linux, Android, OSX, Windows, snappy, flatpak), and it can easily be extended later.
Major workpoints so far for Randa:

  • I have delegated the windows backend to Hannah
  • Andreas has provided a docker build for Android, and upon initial testing it will work great.
  • I have recruited several nice folks to assist me with my snappy efforts.

TODO:

  • Add all the nodes to sandbox
  • Finish yaml CI files
  • OSX re-setup with new macmini

Have a great day.

13 June, 2016 04:41PM by Scarlett Clark

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, May 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 166 work hours have been dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 20h.
  • Ben Hutchings did 10 hours (out of 15 hours allocated, keeping 5 extra hours for June).
  • Brian May did 15 hours.
  • Chris Lamb did 18 hours.
  • Guido Günther did 17.25 hours (out of 8 hours allocated + 9.25 remaining hours).
  • Markus Koschany did 30 hours (out of 31 hours allocated, thus keeping one extra hour for June).
  • Santiago Ruano Rincón did 20 hours (out of 20h allocated + 8 remaining, thus keeping 8 extra hours for June).
  • 8 hours that were initially allocated to Scott Kitterman have been put back into the June pool after he resigned.
  • Thorsten Alteholz did 31 hours.

Evolution of the situation

The number of sponsored hours stayed the same over May but will likely increase a little bit the next month as we have two new Bronze sponsors being processed.

The security tracker currently lists 36 packages with a known CVE and the dla-needed.txt file lists 36 packages awaiting an update.

Despite the higher than usual number of work hours dispatched in May, we still have more open CVEs than we used to have at the end of the squeeze LTS period. So more support is always needed…

Thanks to our sponsors

New sponsors are in bold.


13 June, 2016 02:15PM by Raphaël Hertzog

Keerthana Krishnan

Installing reSIProcate with apt-get source package

I had earlier tried to install telepathy according to the instructions I found here, but that gave me an unexplainable error. So after a few unsuccessful attempts, I decided to install it from the apt-get source.

First, install telepathy-qt. This is the part that gave me all the errors and this is where I had made changes from the source I had been using.

  1. Check and make sure you have the proper dependencies installed for the package.
    $ sudo apt-get build-dep telepathy-qt
  2. If you are installing the package in the home folder, you can skip this step, but usually it’s better to have a dedicated directory. If you do have one, cd into that folder:
    $ mkdir ~/telepathy-qt-stuff
    $ cd ~/telepathy-qt-stuff
  3. Next, get telepathy-qt from the source :
    $ apt-get source telepathy-qt
    $ apt-get source -b telepathy-qt
  4. Check and make sure that a set of libtelepathy-qt* and telepathy-qt* .deb packages has been produced:
    $ ls *.deb
  5. Next, you have to install a few more packages:
    $ dpkg -i libtelepathy-qt4-2_0.9.6.1-?_amd64.deb libtelepathy-qt4-dev_0.9.6.1-?_amd64.deb libtelepathy-qt4-farstream2_0.9.6.1-?_amd64.deb

    Obviously, we have to replace the ‘?’ with the version number of the .deb package.
  6. After that, verify that you have the packages necessary to install reSIProcate:
    $ dpkg -l | grep telepathy-qt
    This should return something like :
    ii libtelepathy-qt4-2:amd64 0.9.6.1-2 amd64 Telepathy framework – Qt 4 library
    ii libtelepathy-qt4-dev 0.9.6.1-2 amd64 Qt 4 Telepathy library (headers and static library)
    ii libtelepathy-qt4-farstream2:amd64 0.9.6.1-2 amd64 Telepathy/Farsight integration – Qt 4 library

The next part is to install and configure reSIProcate.

  1. Include the proper backport line in the /etc/apt/sources.list file. Be sure to run sudo apt-get update after any changes to the source file
  2. Clone the code from the git repo :
    $ git clone https://github.com/resiprocate/resiprocate
    $ cd resiprocate
  3. Check the build dependencies and install what’s required:
    $ apt-get install libpq-dev dh-autoreconf
    $ apt-get build-dep resiprocate
    $ apt-get install -t jessie-backports libradcli-dev
  4. Build the packages:
    $ ./build/debian.sh
    $ sudo make
  5. Finally, ensure all your packages are built right by running  sudo make check

And then you’re done! 😀

13 June, 2016 10:05AM by keerthana

Mark Brown

We show up

It’s really common for pitches to management within companies about Linux kernel upstreaming to focus on the cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that this is a big part of how other upstream developers become aware of what sorts of hardware and use cases there are out there.

From this point of view it’s often the things that are most difficult to get upstream that are the most valuable to talk to upstream about, but of course it’s not quite that simple, as a track record of engagement on the simpler drivers, and the knowledge and relationships that are built up in that process, make having discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it’s not just that: the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that the better Linux is likely to meet their needs.

13 June, 2016 09:50AM by broonie

Keerthana Krishnan

A beginner’s guide to Debian Source Packages

Source packages are installed and handled very differently from the traditional binary packages that are handled by the sudo apt-get install command. Source packages provide the source for a piece of software and are used to build the binary packages. By downloading them to your system, you can study the source code or fix errors in it.

To access these, you have to add deb-src lines to /etc/apt/sources.list. If you had followed my earlier article on setting up Jessie, this means you have to remove the ‘#’ in front of the deb-src lines, as shown in the example below.
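
A typical pair of entries in /etc/apt/sources.list looks like this (the mirror URL is only an example):

     deb http://httpredir.debian.org/debian jessie main
     deb-src http://httpredir.debian.org/debian jessie main

Remember to run sudo apt-get update after editing the file.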

First, find the required source package online in the debian packages list here. Be sure to select the ‘Source Package Names’ option while searching. When you find a package it will have a list of binary packages that are built from this source package.

Make sure you have all the dependencies to build the package by running:

     $ apt-get build-dep packagename

Then, download the source package, using the following command:

     $ apt-get source packagename

This will download three files: a .orig.tar.gz, a .dsc and a .diff.gz. In the case of packages made specifically for Debian, the last of these is not downloaded and the first usually won’t have “orig” in the name.

The .dsc file is used by dpkg-source for unpacking the source package into the directory packagename-version. Within each downloaded source package there is a debian/ directory that contains the files needed for creating the .deb package.

To auto-build the package when it’s been downloaded, just add -b to the command line, like this:

     $ apt-get -b source packagename

If you decide not to create the .deb at the time of the download, you can create it later by running:

     $ dpkg-buildpackage -rfakeroot -uc -b

from within the directory that was created for the package after downloading. To install the package built by the commands above one must use the package manager directly, like this:

     # dpkg -i file.deb

There’s a difference between apt-get‘s source method and its other methods. The source method can be used by normal users, without needing special root powers. The files are downloaded to the directory from which the apt-get source package command was called.

Tip : If you are a developer testing software, you may need to run the make command after this. Be sure to run make check as well later to make sure there are no errors and everything is alright.

Most of the commands I’ve used here came from the APT How To documentation, which is marked as “obsolete” for some reason but turned out to be pretty useful. Hope it works for you too! 🙂

13 June, 2016 05:52AM by keerthana

Simon Désaulniers

Week 2 - Report

I’ve been reworking the code for the queries I introduced in the first week.

What’s done

  • I have worked on value pagination and optimization of announce operations;
  • Fixed bugs like #72, #73;
  • I’ve split the Query into Select and Where structures. This change was explained here.

What’s still work in progress

  • Value pagination;
  • Optimizing announce operations;

13 June, 2016 04:22AM

June 12, 2016

Iustin Pop

Elsa Bike Trophy 2016—my first bike race!

Elsa Bike Trophy 2016—my first bike race!

So today, after two months of intermittent training using Zwift and some actual outside rides, I did my first bike race. Not of 2016, not of 2000+, but like ever.

Which is strange, as I learned biking very young, and I did like to bike. But as it turned out, even though I didn't like running as a child, I did participate in a number of running events over the years, but no biking ones.

The event

Elsa Bike Trophy is a mountain bike event—cross-country, not downhill or anything crazy; it takes place in Estavayer-le-Lac, and has two courses - one 60Km with 1'791m altitude gain, and a smaller one of 30Km with 845m altitude gain. I went, of course, for the latter. 845m is more than I ever did in a single ride, so it was good enough for a first try. The web page says that this smaller course “… est nerveux, technique et ne laisse que peu de répit” (“…is punchy, technical and allows little respite”). I chose to think that's a bit of an exaggeration, and that it would be relatively easy (as I'm not too skilled technically).

The atmosphere there was like for the running races, with the exception of bike stuff being sold, and people on very noisy rollers. I'm glad for my trainer which sounds many decibels quieter…

The long race started at 12:00, and the shorter one at 12:20. While waiting for the start I had two concerns in mind: whether I would be able to do the whole course (endurance), and whether it would be too cold (the weather kept moving towards rain). I also had a small concern about the state of the course, as the weather had not been very nice recently, but only a small one.

And then, after one hour plus of waiting, go!

Racing, with a bit of "swimming"

At first things went as expected. Starting on paved roads, moving towards the small town exit, a couple of 14% climbs, then more flat roads, then a nice and hard 18% short climb (I'll never again complain about < 10%!), then… entering the woods. It became quickly apparent that the ground in the forest was in a much worse state than I had feared. Much worse as in a few orders of magnitude.

Within about 5 minutes of entering the tree cover, my reasonably clean, reasonably light bike became a muddy, heavy monster. And the pace that until then went quite OK became walking pace, as the first rider who didn't manage to keep going up (because his wheel turned out of the track) blocked the one behind him, who had to stop, and so on, until we were one line (or two, depending on how wide the trail was) of riders walking their bikes up. While on dry ground walking your bike up is no problem, and hiking through mud with good hiking shoes is also no problem, walking up with biking shoes is a pain. Your foot slides and you waste half of your energy "swimming" in the mud.

Once the climb is over, you get on the bike, and of course the pedals and cleats are full of heavy mud, so it takes a while until you can actually clip in. Here the trail version of SPD was really useful, as I could pedal reasonably well without being clipped in; I just had to be careful not to push too hard.

Then maybe you exit the trail and get on paved road, but the wheels are so full of mud that you are still very slow (and accelerate very slowly), until they shed enough of the mud to become somewhat more "normal".

After a bit of this "up through mud, flat and shedding mud", I came upon the first real downhill section. I would have been somewhat confident in dry ground, but I got scared and got off my bike. Better safe than sorry was the thing for now.

And after this it was a repetition of the above: climbs, sometimes (rarely) on the bike, most times pushing the bike, fast flat sections through muddy terrain where any mistake in controlling the bike can send the front wheel flying due to the mud being highly viscous, slow flat sections through very liquid mud where it definitely felt like swimming, and the occasional dry section.

My biggest fear, uphill/endurance, was unfounded. The most gains I made were on the dry uphills, where I had enough stamina to overtake. On flat ground I mostly kept order (i.e. neither being overtaken nor overtaking), but on downhill sections, I lost lots of time, and was overtaken a lot. Still, it was a good run.

And then, after about 20 kilometres out of the 30, I got tired enough of getting off and back on the bike, and also mentally tired and not careful enough, that I stopped getting off the bike on downhills. And the feeling was awesome! It was actually much, much easier to flow through the mud and rocks and roots on a downhill, even when it was difficult (for me), like 40cm drops (estimated), than doing it on foot, where you slide without control and the bike can come crashing down on you. It was a liberating feeling, like finally having overcome the mud. I was so glad to have done a one-day training course with Swiss Alpine Adventure, as it really helped. Thanks Dave!

Of course, people were still overtaking me, but I also overtook some people (who were on foot; he he, I wasn't the only one it seems). And since it was easier, I had some more energy, so I was able to push a bit harder on the flats and dry uphill sections.

And then the remaining distance started shrinking, and the last downhill was over; I entered the small town through familiar roads, a passer-by cried "one kilometre left", I pushed hard (I mean, as hard as I could after all the effort), and I reached the finish.

Oh, and my other concern, the rain? Yes it did rain somewhat, and I was glad for it (I keep overheating); there was a single moment I felt cold, when exiting a nice cosy forest into a field where the wind was very strong—headwind, of course.

Lessons learned

I did learn a lot in this first event.

  • indoor training sessions only help with endurance (but they do good on this); they don't help with technique, and most importantly, they don't teach how to handle the bike in inclement weather; biking to work on paved road also doesn't help.
  • nevertheless, indoor training does help with endurance ☺
  • mud guards…; before the race, I thought they'd help; during the race, I cursed at the extra weight and their seeming uselessness; after the race, after I saw how other people looked, I realised that they had indeed helped a lot—I was only dirty on my legs, mostly below the knee, but not on my upper body. Unsure whether I will use them again.
  • a drop seat is not needed if your seat is set in-between, but it sure would have been easier with one
  • installing your GPS on your handle-bar with elastic bands in a section of non-constant diameter is a very bad idea, as it lives in an unstable equilibrium: any move towards the thinner section makes the mount very loose, and you have to lose time fixing it.

Results

So, how did I do after all? As soon as I reached the finish and recovered my items, among which the phone, I checked the datasport page: I was rank 59/68 in my category. Damn, I hoped (and thought) I would do better. Similar % in the overall ranking for this distance.

That aside, it was mighty fun. So much fun I'd do it again tomorrow! I forgot the awesome atmosphere of such events, even in the back of the rankings.

And then, after I drove home and opened the datasport page on my workstation, I got very confused: the overall number of participants was different. And then I realised: not everybody had finished the race when I first checked (d'oh)! Final ranking: 59 out of 84 in my category, and 247/364 in the overall 30km rankings. That makes it 70% and 67% respectively, which matches somewhat with my usual running results a few years back (but a bit worse). It is in any case better than what I originally thought, yay!

Also, Strava activity for some more statistics (note that my Garmin says it was not 800+ meters of altitude…):

I'd embed a nice Veloviewer 3D map, but I can't seem to get the embed option, hmm…

TODO: train more endurance, train more technique, train in more various conditions!

12 June, 2016 11:09PM

hackergotchi for Sune Vuorela

Sune Vuorela

Randa day 0

Sitting on Lake Zurich and reflecting over things was a great way to get started. http://manifesta.org/2015/11/pavillon-of-reflections-for-zurich-in-2016/

After spending a bit of time in a train, I climbed part of a mountain together with Adriaan – up to the snow where I could throw a snowball at him. We also designed a couple of new frameworks on our climbing trip. Maybe they will be presented later.

12 June, 2016 10:16PM by Sune Vuorela

hackergotchi for Mario Lang

Mario Lang

A Raspberry Pi Zero in a Handy Tech Active Star 40 Braille Display

TL;DR: I put a $5 Raspberry Pi Zero, a Bluetooth USB dongle, and the required adapter cable into my new Handy Tech Active Star 40 braille display. An internal USB port provides the power. This has transformed my braille display into an ARM-based, monitorless, Linux laptop that has a keyboard and a braille display. It can be charged/powered via USB so it can also be run from a power bank or a solar charger, thus potentially being able to run for days, rather than just hours, without needing a standard wall-jack.

[picture: a Raspberry Pi Zero embedded within an Active Star 40] [picture: a braille display with a keyboard on top and a Raspberry Pi Zero inside]

Some Background on Braille Display Form Factors

Braille displays come in various sizes. There are models tailored for desktop use (with 60 cells or more), models tailored for portable use with a laptop (usually with 40 cells), and, nowadays, there are even models tailored for on-the-go use with a smartphone or similar (with something like 14 or 18 cells).

Back in the old days, braille displays were rather massive. A 40-cell braille display was typically about the size of a 13" laptop. In modern times, manufacturers have managed to reduce the size of the internals such that a 40-cell display can be placed in front of a laptop or keyboard instead of placing the laptop on top of the braille display.

While this is a nice achievement, I personally haven't found it to be very convenient because you now have to place two physically separate devices on your lap. It's OK if you have a real desk, but, at least in my opinion, if you try to use your laptop as its name suggests, it's actually inconvenient to use a small form factor, 40-cell display.

For this reason, I've been waiting for a long-promised new model in the Handy Tech Star series. In 2002, they released the Handy Tech Braille Star 40, which is a 40-cell braille display with enough space to put a laptop directly on top of it. To accommodate larger laptop models, they even built in a little platform at the back that can be pulled out to effectively enlarge the top surface. Handy Tech has now released a new model, the Active Star 40, that has essentially the same layout but modernized internals.

[picture: a plain Active Star 40]

You can still pull out the little platform to increase the space that can be used to put something on top.

[picture: an Active Star 40 with extended platform and a laptop on top]

But, most conveniently, they've designed in an empty compartment, roughly the size of a modern smartphone, beneath the platform. The original idea was to actually put a smartphone inside, but this has turned out (at least to me) to not be very feasible. Fortunately, they thought about the need for electricity and added a Micro USB cable terminating within the newly created, empty compartment.

My first idea was to put a conventional Raspberry Pi inside. When I received the braille display, however, we immediately noticed that a standard-sized rpi is roughly 3mm too high to fit into the empty compartment.

Fortunately, though, a co-worker noticed that the Raspberry Pi Zero was available for order. The Raspberry Pi Zero is a lot thinner, and fits perfectly inside (actually, I think there's enough space for two, or even three, of them). So we ordered one, along with some accessories like a 64GB SDHC card, a Bluetooth dongle, and a Micro USB adapter cable. The hardware arrived a few days later, and was immediately bootstrapped with the assistance of very helpful friends. It works like a charm!

Technical Details

The backside of the Handy Tech Active Star 40 features two USB host ports that can be used to connect devices such as a keyboard. A small form-factor, USB keyboard with a magnetic clip-on is included. When a USB keyboard is connected, and when the display is used via Bluetooth, the braille display firmware additionally offers the Bluetooth HID profile, and key press/release events received via the USB port are passed through to it.

I use the Bluetooth dongle for all my communication needs. Most importantly, BRLTTY is used as a console screen reader. It talks to the braille display via Bluetooth (more precisely, via an RFCOMM channel).

The keyboard connects through to Linux via the Bluetooth HID profile.

Now, all that is left is network connectivity. To keep the energy consumption as low as possible, I decided to go for Bluetooth PAN. It appears that the tethering mode of my mobile phone works (albeit with a quirk), so I can actually access the internet as long as I have cell phone reception. Additionally, I configured a Bluetooth PAN access point on my desktop machines at home and at work, so I can easily (and somewhat more reliably) get IP connectivity for the rpi when I'm near one of these machines. I plan to configure a classic Raspberry Pi as a mobile Bluetooth access point. It would essentially function as a Bluetooth to ethernet adapter, and should allow me to have network connectivity in places where I don't want to use my phone.

BlueZ 5 and PAN

It was a bit challenging to figure out how to actually configure Bluetooth PAN with BlueZ 5. I found the bt-pan python script (see below) to be the only way so far to configure PAN without a GUI.

It handles both ends of a PAN network, configuring a server and a client. In client mode, once instructed (via D-Bus) to connect and a connection to a server has been established, BlueZ creates a new network device - bnep0. Typically, DHCP is used to assign IP addresses for these interfaces. In server mode, BlueZ needs to know the name of a bridge device to which it can add a slave device for each incoming client connection. Configuring an address for the bridge device, as well as running a DHCP server + IP masquerading on the bridge, is usually all you need to do.

A Bluetooth PAN Access Point with Systemd

I'm using systemd-networkd to configure the bridge device.

/etc/systemd/network/pan.netdev:

[NetDev]
Name=pan
Kind=bridge
ForwardDelaySec=0

/etc/systemd/network/pan.network:

[Match]
Name=pan

[Network]
Address=0.0.0.0/24
DHCPServer=yes
IPMasquerade=yes

Now, BlueZ needs to be told to configure a NAP profile. To my surprise, there seems to be no way to do this with stock BlueZ 5.36 utilities. Please correct me if I'm wrong.

Luckily, I found a very nice blog post, as well as an accompanying Python script, that performs the required D-Bus calls.

For convenience, I use a Systemd service to invoke the script and to ensure that its dependencies are met.

/etc/systemd/system/pan.service:

[Unit]
Description=Bluetooth Personal Area Network
After=bluetooth.service systemd-networkd.service
Requires=systemd-networkd.service
PartOf=bluetooth.service

[Service]
Type=notify
ExecStart=/usr/local/sbin/pan

[Install]
WantedBy=bluetooth.target

/usr/local/sbin/pan:

#!/bin/sh
# Ugly hack to work around #787480
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

exec /usr/local/sbin/bt-pan --systemd --debug server pan

This last file wouldn't be necessary if IPMasquerade= were supported in Debian right now (see #787480).

After the obligatory systemctl daemon-reload and systemctl restart systemd-networkd, you can start your Bluetooth Personal Area Network with systemctl start pan.
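
To have the access point come up at boot and to check that everything is in place, something along these lines should work (a quick sanity check of my own, not from the original setup):

systemctl enable pan                 # pan.service is WantedBy=bluetooth.target
ip addr show dev pan                 # the bridge created by systemd-networkd
bridge link show                     # connected clients appear as bnep* slaves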

Bluetooth PAN Client with Systemd

Configuring the client is also quite easy to do with Systemd.

/etc/systemd/network/pan-client.network:

[Match]
Name=bnep*

[Network]
DHCP=yes

/etc/systemd/system/[email protected]:

[Unit]
Description=Bluetooth Personal Area Network client

[Service]
Type=notify
ExecStart=/usr/local/sbin/bt-pan --debug --systemd client %I --wait

Now, after the usual configuration reloading, you should be able to connect to a specific Bluetooth access point with:

systemctl start pan@00:11:22:33:44:55
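
Once the connection is up, BlueZ creates the bnep0 device and systemd-networkd matches it via pan-client.network; a quick way to confirm that it got its DHCP lease (my own check, not part of the original instructions):

ip addr show dev bnep0
networkctl status bnep0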

Pairing via the Command Line

Of course, the server and client-side service configuration require a pre-existing pairing between the server and each of its clients.

On the server, start bluetoothctl and issue the following commands:

power on
agent on
default-agent
scan on
scan off
pair XX:XX:XX:XX:XX:XX
trust XX:XX:XX:XX:XX:XX

Once you've set scan mode to on, wait a few seconds until you see the device you're looking for scroll by. Note its device address, and use it for the pair and (optional) trust commands.

On the client, the sequence is essentially the same except that you don't need to issue the trust command. The server needs to trust a client in order to accept NAP profile connections from it without waiting for manual confirmation by the user.

I'm actually not sure if this is the optimal sequence of commands. It might be enough to just pair the client with the server and issue the trust command on the server, but I haven't tried this yet.

Enabling Use of the Bluetooth HID Profile

Essentially the same as above also needs to be done in order to use the Bluetooth HID profile of the Active Star 40 on Linux. However, instead of agent on, you need to issue the command agent KeyboardOnly. This explicitly tells bluetoothctl that you're specifically looking for a HID profile.
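
Put together, a sketch of the pairing session for the HID profile might look like this (following the sequence above; the address is a placeholder, and whether an explicit connect is needed afterwards may depend on the setup):

power on
agent KeyboardOnly
default-agent
scan on
scan off
pair XX:XX:XX:XX:XX:XX
connect XX:XX:XX:XX:XX:XX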

Configuring Bluetooth via the Command Line Feels Vague

While I'm very happy that I actually managed to set all of this up, I must admit that the command-line interface to BlueZ feels a bit incomplete and confusing. I initially thought that agents were only for PIN code entry. Now that I've discovered that "agent KeyboardOnly" is used to enable the HID profile, I'm not sure anymore. I'm surprised that I needed to grab a script from a random git repository in order to be able to set up PAN. I remember, with earlier versions of BlueZ, that there was a tool called pand that you could use to do all of this from the command-line. I don't seem to see anything like that for BlueZ 5 anymore. Maybe I'm missing something obvious?

Performance

The data rate is roughly 120kB/s, which I consider acceptable for such a low power solution. The 1GHz ARM CPU actually feels sufficiently fast for a console/text-mode person like me. I'll rarely be using much more than ssh and emacs on it anyway.
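
For reference, that figure is easy to reproduce; assuming iperf3 is installed on both ends (my choice of tool, not the author's), a rough measurement of the PAN link looks like this:

# on the machine acting as Bluetooth access point
iperf3 -s
# on the rpi, pointing at the access point's address on the PAN bridge (placeholder)
iperf3 -c 192.168.x.y -t 30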

Console fonts and screen dimensions

The default dimensions of the framebuffer on the Raspberry Pi Zero are unexpectedly strange. fbset reports that the screen dimension is 656x416 pixels (with, of course, no monitor connected). With a typical console font of 8x16 pixels, I got 82 columns and 26 lines.

With a 40-cell braille display, the 82 columns are very inconvenient. Additionally, as a braille user, I would like to be able to view Unicode braille characters in addition to the normal charset on the console. Fortunately, Linux supports 512 glyphs, while most console fonts only provide 256. console-setup can load and combine two 256-glyph fonts at once. So I added the following to /etc/default/console-setup to make the text console a lot more friendly to braille users:

SCREEN_WIDTH=80
SCREEN_HEIGHT=25
FONT="Lat15-Terminus16.psf.gz brl-16x8.psf"

Note

You need console-braille installed for brl-16x8.psf to be available.
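
After installing that package and editing /etc/default/console-setup as shown, the new font combination can be applied straight away (a small convenience of my own, not from the original post):

apt-get install console-braille
setupcon        # re-reads /etc/default/console-setup and loads the combined font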

Further Projects

There's a 3.5mm audio jack inside the braille display as well. Unfortunately, there are no converters from Mini-HDMI to 3.5mm audio that I know of. It would be very nice to be able to use the sound card that is already built into the Raspberry Pi Zero, but, unfortunately, this doesn't seem possible at the moment. Alternatively, I'm looking at using a Micro USB OTG hub and an additional USB audio adapter to get sound from the Raspberry Pi Zero to the braille display's speakers. Unfortunately, the two USB audio adapters I've tried so far have run hot for some unknown reason. So I have to find some other chipset to see if the problem goes away.

A little nuisance, currently, is that you need to manually power off the Raspberry, wait a few seconds, and then power down the braille display. Turning the braille display off cuts power delivery via the internal USB port. If this is accidentally done too soon then the Raspberry Pi Zero is shut down ungracefully (which is probably not the best way to do it). We're looking into connecting a small, buffering battery to the GPIO pins of the rpi, and into notifying the rpi when external power has dropped. A graceful, software-initiated shutdown can then be performed. You can think of it as being like a mini UPS for Micro USB.

The image

If you are a happy owner of a Handy Tech Active Star 40 and would like to do something similar, I am happy to share my current (Raspbian Stretch based) image. In fact, if there is enough interest by other blind users, we might even consider putting a kit together that makes it as easy as possible for you to get started. Let me know if this could be of interest to you.

Thanks

Thanks to Dave Mielke for reviewing the text of this posting.

Thanks to Simon Kainz for making the photos for this article.

And I owe a big thank you to my co-workers at Graz University of Technology who have helped me a lot to bootstrap really quickly into the rpi world.

P.S.

My first tweet about this topic was just five days ago, and apart from the soundcard not working yet, I feel like the project is already almost complete! By the way, I am editing the final version of this blog posting from my newly created monitorless ARM-based Linux laptop via an ssh connection to my home machine.

12 June, 2016 09:20AM by Mario Lang

June 11, 2016

Francois Marier

Cleaning up obsolete config files on Debian and Ubuntu

As part of regular operating system hygiene, I run a cron job which updates package metadata and looks for obsolete packages and configuration files.

While there is already some easily available information on how to purge unneeded or obsolete packages and how to clean up config files properly in maintainer scripts, the guidance on how to delete obsolete config files is not easy to find and somewhat incomplete.

These are the obsolete conffiles I started with:

$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/tunables/ntpd 5519e4c01535818cb26f2ef9e527f191 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete
 /etc/apparmor.d/usr.sbin.ntpd a00aa055d1a5feff414bacc89b8c9f6e obsolete
 /etc/bash_completion.d/initramfs-tools 7eeb7184772f3658e7cf446945c096b1 obsolete
 /etc/bash_completion.d/insserv 32975fe14795d6fce1408d5fd22747fd obsolete
 /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf 8df3896101328880517f530c11fff877 obsolete
 /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf d81013f5bfeece9858706aed938e16bb obsolete
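
Since each of these files then needs its owning package looked up, the two commands can be combined into one pipeline (a convenience sketch, not part of the original routine):

$ dpkg-query -W -f='${Conffiles}\n' | awk '/ obsolete$/ {print $1}' | xargs -r dpkg -S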

To get rid of the /etc/bash_completion.d/ files, I first determined what packages they were registered to:

$ dpkg -S /etc/bash_completion.d/initramfs-tools
initramfs-tools: /etc/bash_completion.d/initramfs-tools
$ dpkg -S /etc/bash_completion.d/insserv
initramfs-tools: /etc/bash_completion.d/insserv

and then followed Paul Wise's instructions:

$ rm /etc/bash_completion.d/initramfs-tools /etc/bash_completion.d/insserv
$ apt install --reinstall initramfs-tools insserv

For some reason that didn't work for the /etc/dbus-1/system.d/ files and I had to purge and reinstall the relevant package:

$ dpkg -S /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.NewPrinterNotification.conf
$ dpkg -S /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf
system-config-printer-common: /etc/dbus-1/system.d/com.redhat.PrinterDriversInstaller.conf

$ apt purge system-config-printer-common
$ apt install system-config-printer

The files in /etc/apparmor.d/ were even more complicated to deal with because purging the packages that they come from didn't help:

$ dpkg -S /etc/apparmor.d/abstractions/evince
evince: /etc/apparmor.d/abstractions/evince
$ apt purge evince
$ dpkg-query -W -f='${Conffiles}\n' | grep 'obsolete$'
 /etc/apparmor.d/abstractions/evince ae2a1e8cf5a7577239e89435a6ceb469 obsolete
 /etc/apparmor.d/usr.bin.evince 08a12a7e468e1a70a86555e0070a7167 obsolete

I was however able to get rid of them by also purging the apparmor profile packages that are installed on my machine:

$ apt purge apparmor-profiles apparmor-profiles-extra evince ntp
$ apt install apparmor-profiles apparmor-profiles-extra evince ntp

I'm not sure why I had to do this, but I suspect that these files used to be shipped by one of the apparmor packages, then eventually migrated to the evince and ntp packages directly, and dpkg got confused.

If you're in a similar circumstance, you may want to search for the file you're trying to get rid of on Google, and then you might end up on http://apt-browse.org/ which could lead you to the old package that used to own this file.

11 June, 2016 09:40PM

Simon Désaulniers

Week 1 - Report

I have been working on writing serializable structure for remote filtering of values on the distributed hash table OpenDHT. This structure is called Query.

What’s done

The implementation of the base design, along with other changes, has been made. You can see the evolution on the matter here:

The changes allow creating a Query with an SQL-ish statement like the following:

Query q("SELECT * WHERE id=5");

You can then use this query like so

get(hash, getcb, donecb, filter, "SELECT * WHERE id=5");

I verified the working state of the code with the dhtnode. I have also done some tests using our python benchmark scripts.

What’s next

  • Value pagination;
  • Optimization of put operations by querying for value ids before the put, hence avoiding potentially useless traffic.

Thoughts

The Query is the key part for optimizing my initial work on data persistence on the DHT. It will enhance the DHT in more than one aspect. I have to point out that it would not have been possible to do this before the major refactoring we introduced in 0.6.0.

11 June, 2016 05:06PM

Shirish Agarwal

The road to debconf 2016, tourism and arts.

A longish blog post, please bear with me; a second part of the blog post will be published a few days from now. My fixed visa finally arrived, yeah! But this story doesn't start here, it starts about a year back. While I have been contributing to Debian in my free time over the years, and sometimes […]

11 June, 2016 06:55AM by shirishag75

Hideki Yamane

Which compression do Debian packages use?

gzip: 4576
bzip2: 54
xz: 46250
none: 9

90% of packages use xz. Packages still using bzip2 should migrate to xz.
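
Presumably the numbers come from inspecting the data member of each .deb; a rough way to reproduce such a count against a local package pool would look like this (the path is an assumption):

find /srv/mirror/pool -name '*.deb' | while read -r deb; do
    ar t "$deb" | grep '^data\.tar'
done | sort | uniq -c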

11 June, 2016 03:50AM by Hideki Yamane ([email protected])

Paul Tagliamonte

It's all relative

As nearly anyone who's worked with me will attest to, I've long since touted nedbat's talk Pragmatic Unicode, or, How do I stop the pain? as one of the most foundational talks, and required watching for all programmers.

The reason is that nedbat hits on something bigger - something more fundamental than how to handle Unicode -- it's how to handle data which is relative.

For those who want the TL;DR, the argument is as follows:

Facts of Life:

  1. Computers work with Bytes. Bytes go in, Bytes go out.
  2. The world needs more than 256 symbols.
  3. You need both Bytes and Unicode
  4. You cannot infer the encoding of bytes.
  5. Declared encodings can be Wrong

Now, to fix it, the following protips:

  1. Unicode sandwich
  2. Know what you have
  3. TEST

Relative Data

I've started to think more about why we do the things we do when we write code, and one thing that continues to be a source of morbid schadenfreude is watching code break by failing to handle Unicode right. It's hard! However, watching what breaks lets you gain a bit of insight into how the author thinks, and what assumptions they make.

When you send someone Unicode, there are a lot of assumptions that have to be made. Your computer has to trust what you (yes, you!) entered into your web browser, your web browser has to pass that on over the network (most of the time without encoding information), to a server which reads that bytestream, and makes a wild guess at what it should be. That server might save it to a database, and interpolate it into an HTML template in a different encoding (producing what's called Mojibake), resulting in a bad time for everyone involved.

Everything's awful, and the fact our computers can continue to display text to us is a goddamn miracle. Never forget that.

When it comes down to it, when I see a byte sitting on a page, I don't know (and can't know!) if it's Windows-1252, UTF-8, Latin-1, or EBCDIC. What's a poem to me is terminal garbage to you.

Over the years, hacks have evolved. We have magic numbers, and plain ole' hacks to just guess based on the content. Of course, like all good computer programs, this has led to its fair share of hilarious bugs, and there's nothing stopping files from (validly!) being multiple things at the same time.

Like many things, it's all in the eye of the beholder.

Timezones

Just like Unicode, this is a word that can put your friendly neighborhood programmer into a series of profanity laden tirades. Go find one in the wild, and ask them about what they think about timezone handling bugs they've seen. I'll wait. Go ahead.

Rants are funny things. They're fun to watch. Hilarious to give. Sometimes just getting it all out can help. They can tell you a lot about the true nature of problems.

It's funny to consider the isomorphic nature of Unicode rants and Timezone rants.

I don't think this is an accident.

U̶n̶i̶c̶o̶d̶e̶ timezone Sandwich

Ned's Unicode Sandwich applies -- as early as we can, at the lowest level we can (reading from the database, filesystem, wherever!), all datetimes must be timezone qualified with their correct timezone. Always. If you mean UTC, say it's in UTC.

Treat any unqualified datetimes as "bytes". They're not to be trusted. Never, never, never trust 'em. Don't process any datetimes until you're sure they're in the right timezone.

This lets the delicious inside of your datetime sandwich handle timezones with grace, and finally, as late as you can, turn it back into bytes (if at all!). Treat locations as tzdb entries, and qualify datetime objects into their absolute timezone (EST, EDT, PST, PDT)

It's not until you want to show the datetime to the user again should you consider how to re-encode your datetime to bytes. You should think about what flavor of bytes, what encoding -- what timezone -- should I be encoding into?
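
As a concrete sketch of that sandwich (in Python, using pytz; the column format and the assumption that it is stored in UTC are mine, not the author's):

from datetime import datetime
import pytz

def read_created_at(raw):
    # edge of the sandwich: qualify the "bytes" immediately;
    # by policy, we know this column is stored in UTC
    naive = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
    return pytz.utc.localize(naive)

def show_to_user(when, tzname):
    # the other edge: only here do we re-encode for display
    return when.astimezone(pytz.timezone(tzname)).strftime("%Y-%m-%d %H:%M %Z")

created = read_created_at("2016-06-11 03:45:00")
print(show_to_user(created, "America/New_York"))   # -> 2016-06-10 23:45 EDT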

TEST

Just like Unicode, testing that your code works with datetimes is important. Every time I think about how to go about doing this, I think about that one time that mjg59 couldn't book a flight starting Tuesday from AKL, landing in HNL on Monday night, because United couldn't book the last leg to SFO. Do you ever assume dates only go forward as time goes on? Remember timezones.

Construct test data, make sure someone in New Zealand's +13:45 can correctly talk with their friends in Baker Island's -12:00, and that the events sort right.

Just because it's noon on New Year's Eve in England doesn't mean it's not 1 AM the next year in New Zealand. Places a few miles apart may switch to Daylight Saving Time on different days. Indian Standard Time is not even aligned on the hour to GMT (+05:30)!

Test early, and test often. Memorize a few timezones, and challenge your assumptions when writing code that has to do with time. Don't use wall clocks to mean monotonic time. Remember there's a whole world out there, and we only deal with part of it.

It's also worth remembering, as Andrew Pendleton pointed out to me, that it's possible that a datetime isn't even unique for a place, since you can never know if 2016-11-06 01:00:00 in America/New_York (in the tzdb) is the first one, or second one. Storing EST or EDT along with your datetime may help, though!
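
That 2016-11-06 01:00 case is easy to demonstrate (again with pytz, as a sketch of my own): the same wall-clock time maps to two different instants, and you have to say which one you mean.

from datetime import datetime
import pytz

new_york = pytz.timezone("America/New_York")
wall = datetime(2016, 11, 6, 1, 0, 0)

first = new_york.localize(wall, is_dst=True)    # 01:00 EDT (UTC-04:00)
second = new_york.localize(wall, is_dst=False)  # 01:00 EST (UTC-05:00), one hour later
print(first.isoformat())    # 2016-11-06T01:00:00-04:00
print(second.isoformat())   # 2016-11-06T01:00:00-05:00

# with is_dst=None, pytz refuses to guess and raises AmbiguousTimeError instead
# new_york.localize(wall, is_dst=None)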

Pitfalls

Improper handling of timezones can lead to some interesting things, and failing to be explicit (or at least, very rigid) in what you expect will lead to an unholy class of bugs we've all come to hate. At best, you have confused users doing math, at worst, someone misses a critical event, or our security code fails.

I recently found what I regard to be a pretty bad bug in apt (which David has prepared a fix for and is pending upload, yay! Thank you!), which boiled down to documentation and code expecting datetimes in a timezone, but accepting any timezone, and silently treating it as UTC.

The solution is to hard-fail, which is an interesting choice to me (as a vocal fan of timezone aware code), but at the least it won't fail by misunderstanding what the server is trying to communicate, and I do understand and empathize with the situation the apt maintainers are in.

Final Thoughts

Overall, my main point is that although most modern developers know how to deal with Unicode pain, I think there is a more general lesson to learn -- namely, you should always know what data you have, and always remember what it is. Understand assumptions as early as you can, and always store them with the data.

11 June, 2016 03:45AM by Paul Tagliamonte

June 10, 2016

Valerie Young

Week 1 on Reproducible Builds

In this post I review what I've done in the last 6 days of Outreachy-funded reproducible builds work, outline what I plan to do in the next two weeks, and speculate on long-term goals. For those of you involved in the Debian reproducible builds project, please provide feedback about future plans and work!

Week One review

One week of Outreachy completed! What have I done?

  • Reproduced the reproducible builds tests website locally
  • Added information to the INSTALL file about reproducing the tests website (viewable here)
  • Checked in changes that broke nearly every link to the tests website
  • Fixed most of the broken links by adding redirects. (Please let me know if you find any!)

The change that broke everything was the addition of a directory: tests.reproducible-builds.org/debian

The directory was added to contain all Debian-specific pages, in line with the other projects' reproducible builds status pages: Arch Linux, Fedora, coreboot, etc. Previously, all Debian pages were simply served directly out of the DocumentRoot. To fix all the broken things, I'm pretty sure I had to find and inspect every file pointer in the entire tests website, adding /debian or changing global variables as needed. Sometimes tedious, but chasing down bugs and complaints was mostly fun 🙂

I also learned (everything I now know) about Apache websites, redirects, the website/navigation/directory structure of tests.reproducible-builds.org, and the roles of many of the reproducible scripts in jenkins.debian.net/bin.

Week Two plan

What will or should I do next?

In the short term, over the next two weeks, I hope to make useful improvements to the tests website and backend while continuing to get up to speed (as well as learn Python).

  • Improve navigation on tests.reproducible-builds.org/debian
    • fix highlighting in the nav bar on package pages
    • address navigation bar re-organizing requests like this one
    • add documentation/hovertext for links
  • Create a front page for the test reproducible builds site (update: probably won't do this yet; low priority, there are already "too many front pages" for reproducible builds)
  • Convert bin/reproducible_html_pkg_sets.sh to python

Have other thoughts about minor improvements to tests.reproducible-builds.org? Please let me know! The above list is not internally prioritized, feel free to ask for things to be bubbled up.

Longer-term goals

My long term summer goal is to make the Debian test code more easily extensible to show the reproducible results from other projects. This will lower the barrier for new projects to keep track of the reproducibility of their code, for great good.

This starts with the reproducible.db database, which presently only tracks reproducible testing results for the Debian project. The reproducible builds project's needs have outgrown the original SQLite database, so the redesign includes a migration to PostgreSQL. Goals of the redesign include ease of querying/comparing packages across distributions, as well as generalization to include results from projects other than Debian. I'll start on this work in two weeks, when I get to DebCamp! 🙂

Redesigning the database will also lead to updating the Python scripts which use that data to produce the Debian tests website. Other project scripts (like Fedora, RedHat and Coreboot) can then be updated to track results in the database as well, instead of simply producing their own test websites directly.

update: as an intermediate step — before redesigning the reproducible.db database to handle multiple projects — h01ger recommended I help the FreeBSD project record tests to a FreeBSD-specific database.

10 June, 2016 11:32PM by spectranaut

Guido Günther

Debian Fun in May 2016

Debian LTS

May marked the thirteenth month I contributed to Debian LTS under the Freexian umbrella. I spent the 17.25 hours working on these LTS things:

  • Fixed CVE-2014-7210 in pdns resulting in DLA-492-1
  • Fixed the build failure of Icedove on armhf resulting in DLA 472-2
  • Forward ported our nss, nspr enhancements to the current versions in testing to continue the discussion on the same nss and nspr versions in all suites, including some ABI compliance research (thanks abi-compliance-tester!), resulting in 824872.
  • Backported Icedove 45 and Enigmail to wheezy to check if we can continue to support it - we can, with minor tweaks. Upload will happen in June.
  • While at it, added some autopkgtests for Icedove 45, resulting in 809723 (already applied).
  • Released DLA-498-1 for ruby-active-model-3.2 to address CVE-2016-0753.
  • Reviewed the Updates of ruby-active-record-3.2 for CVE-2015-7577 and eglibc.

Other Debian stuff

  • Uploaded libvirt 1.3.4 to sid, 1.3.5~rc1 to experimental
  • Uploaded libosinfo 0.3.0 to sid
  • Uploaded git-buildpackage 0.7.4 to sid including experimental multiple tarball support for gbp buildpackage

10 June, 2016 05:38PM

Ben Hutchings

Debian LTS work, May 2016

I was assigned another 15 hours of work by Freexian's Debian LTS initiative, but only worked a total of 10 hours. I intend to make up for this in June.

I began preparing the next stable update for Linux 3.2 on kernel.org, but haven't yet sent it out for review. I rebased the wheezy-security branch onto Linux 3.2.80, and added fixes for one more security issue and one data corruption issue affecting aufs.

I started a week in the front desk, triaging new issues for wheezy.

10 June, 2016 11:02AM

June 09, 2016

Martin Pitt

autopkgtest 4.0: Simplified CLI, deprecating “adt”

Historically, the “adt-run” command line has allowed multiple tests; as a consequence, arguments like --binary or --override-control were position dependent, which confused users a lot (#795274, #785068, #795274, LP #1453509). On the other hand I don’t know anyone or any CI system which actually makes use of the “multiple tests on a single command line” feature.

The command line also was a bit confusing in other ways, like the explicit --built-tree vs. --unbuilt-tree and the magic / vs. // suffixes, or option vs. positional arguments to specify tests.

The other long-standing confusion is the pervasive “adt” acronym, which is still from the very early times when “autopkgtest” was called “autodebtest” (this was changed one month after autodebtest’s inception, in 2006!).

Thus in some recent night/weekend hack sessions I’ve worked on a new command line interface and consistent naming. This is now available in autopkgtest 4.0 in Debian unstable and Ubuntu Yakkety. You can download and use the deb package on Debian jessie and Ubuntu ≥ 14.04 LTS as well. (I will provide official backports after the first bug fix release after this got some field testing.)

New “autopkgtest” command

The adt-run program is now superseded by autopkgtest:

  • It accepts exactly one tested source package, and gives a proper error if none or more than one (often unintended) is given. Binaries to be tested, --override-control, etc. can now be specified in any order, making the arguments position independent. So you can now do things like:
    autopkgtest *.dsc *.deb [...]

    Before, *.deb only applied to the following test.

  • The explicit --source, --click-source etc. options are gone, the type of tested source/binary packages, including built vs. unbuilt tree, is detected automatically. Tests are now only specified with positional arguments, without the need (or possibility) to explicitly specify their type. The one exception is --installed-click com.example.myapp as possible names are the same as for apt source package names.
    # Old:
    adt-run --unbuilt-tree pkgs/foo-2 [...]
    # or equivalently:
    adt-run pkgs/foo-2// [...]
    
    # New:
    autopkgtest pkgs/foo-2
    # Old:
    adt-run --git-source http://example.com/foo.git [...]
    # New:
    autopkgtest http://example.com/foo.git [...]
    
  • The virtualization server is now separated with a double instead of a triple dash, as the former is standard Unix syntax.
  • It defaults to the current directory if that is a Debian source package. This makes the command line particularly simple for the common case of wanting to run tests in the package you are just changing:
    autopkgtest -- schroot sid

    Assuming the current directory is an unbuilt Debian package, this will build the package, and run the tests in ./debian/tests against the built binaries.

  • The virtualization server must be specified with its “short” name only, e. g. “ssh” instead of “adt-virt-ssh”. They also don’t get installed into $PATH any more, as it’s hardly useful to call them directly.

README.running-tests got updated to the new CLI, as usual you can also read the HTML online.

The old adt-run CLI is still available with unchanged behaviour, so it is safe to upgrade existing CI systems to that version.

Image build tools

All adt-build* tools got renamed to autopkgtest-build*, and got changed to build images prefixed with “autopkgtest” instead of “adt”. For example, adt-build-lxc ubuntu xenial now produces an autopkgtest-xenial container instead of adt-xenial.

In order to not break existing CI systems, the new autopkgtest package contains symlinks to the old adt-build* commands, and when being called through them, also produce images with the old “adt-” prefix.

Environment variables in tests

Finally there is a set of environment variables that are exported by autopkgtest for using in tests and image customization tools, which now got renamed from ADT_* to AUTOPKGTEST_*:

  • AUTOPKGTEST_APT_PROXY
  • AUTOPKGTEST_ARTIFACTS
  • AUTOPKGTEST_AUTOPILOT_MODULE
  • AUTOPKGTEST_NORMAL_USER
  • AUTOPKGTEST_REBOOT_MARK
  • AUTOPKGTEST_TMP

As these are being used in existing tests and tools, autopkgtest also exports/checks those under their old ADT_* name. So tests can be converted gradually over time (this might take several years).
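
For tests that need to keep working with both old and new autopkgtest versions during that transition, a tiny fallback in the test script is enough (a sketch of my own; the fixture name is made up):

#!/bin/sh
# prefer the new variable, fall back to the old ADT_* name during the transition
tmpdir="${AUTOPKGTEST_TMP:-$ADT_TMP}"
cp test-fixture.dat "$tmpdir/"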

Feedback

As usual, if you find a bug or have a suggestion how to improve the CLI, please file a bug in Debian or in Launchpad. The new CLI is recent enough that we still have some liberty to change it.

Happy testing!

09 June, 2016 08:24PM by pitti

Patrick Schoenfeld

Ansible: Indenting in Templates

When using ansible to configure systems and services, templates can reach a significant complexity.  Proper indenting can help to improve the readability of the templates, which is very important for further maintenance.

Unfortunately the default settings for the jinja2 template engine in ansible only enable trim_blocks, while a combination with lstrip_blocks would be better. But here comes the good news:

It’s possible to enable that setting on a per-template base. The secret is to add a special comment to the very first line of a template:

#jinja2: lstrip_blocks: True

This setting does the following: if enabled, leading spaces and tabs "are stripped from the start of a line to a block".

So a resulting template could look like this:

global
{% for setting in global_settings %}
    {% if setting ... %}
    option {{ setting }}
    {% endif %}
{% endfor %}

Unfortunately (or fortunately, if you want to see it this way 😉) this does not strip leading spaces and tabs where the indentation is followed by pure text, e.g. the whitespace in line 4 is preserved. So as a matter of fact, if you care about the indentation in the resulting target file, you need to indent those lines according to the indentation wanted in the target file instead, like it is done in the example.

In less simple cases, with more deep nesting, this may seem odd, but hey: it’s the best compromise between a good, readable template and a consistently indented output file.

09 June, 2016 09:09AM by Patrick Schönfeld

NOKUBI Takatsugu

Recurrent Convolutional Neural Networks for Text Classification

I made a simple implementation of text classification with Recurrent CNN.

https://github.com/knok/rcnn-text-classification

It uses chainer, a Deep Learning framework.

Recurrent convolutional neural networks for text classification
Siwei Lai, Liheng Xu, Kang Liu, Jun Zhao, Chinese Academy of Sciences, China
AAAI. 2015.

09 June, 2016 07:38AM by knok

June 08, 2016

Daniel Pocock

Working to pass GSoC

GSoC students have officially been coding since 23 May (about 2.5 weeks) and are almost half-way to the mid-summer evaluation (20 - 27 June). Students who haven't completed some meaningful work before that deadline don't receive payment and in such a large program, there is no possibility to give students extensions or let them try and catch up later.

Every project and every student are different, some are still getting to know their environment while others have already done enough to pass the mid-summer evaluation.

I'd like to share a few tips to help students ensure they don't inadvertently fail the mid-summer evaluation

Kill electronic distractions

As a developer of real-time communications projects, many people will find it ironic or hypocritical that this is at the top of my list.

Switch off the mobile phone or put it in silent mode so it doesn't even vibrate. Research has suggested that physically turning it off and putting it out of sight has significant benefits. Disabling the voicemail service can be an effective way of making sure no time is lost listening to a bunch of messages later. Some people may grumble at first but if they respect you, they'll get into the habit of emailing you and waiting for you to respond when you are not working.

Get out a piece of paper and make a list of all the desktop notifications on your computer, whether they are from incoming emails, social media, automatic updates, security alerts or whatever else. Then figure out how to disable them all one-by-one.

Use email to schedule fixed times for meetings with mentors. Some teams/projects also have fixed daily or weekly times for IRC chat. For a development project like GSoC, it is not necessary or productive to be constantly on call for 3 straight months.

Commit every day

Habits are a powerful thing. Successful students have a habit of making at least one commit every day. The "C" in GSoC is for Code and commits are a good way to prove that coding is taking place.

GSoC is not a job, it is like a freelance project. There is no safety-net for students who get sick or have an accident and mentors are not bosses, each student is expected to be their own boss. Although Google has started recommending students work full time, 40 hours per week, it is unlikely any mentors have any way to validate these hours. Mentors can look for a commit log, however, and simply won't be able to pass a student if there isn't code.

There may be one day per week where a student writes a blog or investigates a particularly difficult bug and puts a detailed report in the bug tracker but by the time we reach the second or third week of GSoC, most students are making at least one commit in 3 days out of every 5.

Consider working away from home/family/friends

Can you work without anybody interrupting you for at least five or six hours every day?

Do you feel pressure to help with housework, cooking, siblings or other relatives? Even if there is no pressure to do these things, do you find yourself wandering away from the computer to deal with them anyway?

Do family, friends or housemates engage in social activities, games or other things in close proximity to where you work?

All these things can make a difference between passing and failing.

Maybe these things were tolerable during high school or university. GSoC, however, is a stepping stone into professional life and that means making a conscious decision to shut those things out and focus. Some students have the ability to manage these distractions well, but it is not for everybody. Think about how leading sports stars or musicians find a time and space to be "in the zone" when training or rehearsing, this is where great developers need to be too.

Some students find the right space in a public library or campus computer lab. Some students have been working in hacker spaces or at empty desks in local IT companies. These environments can also provide great networking opportunities.

Managing another summer job concurrently with GSoC

It is no secret that some GSoC students have another job as well. Sometimes the mentor is aware of it, sometimes it has not been disclosed.

The fact is, some students have passed GSoC while doing a summer job or internship concurrently but some have also failed badly in both GSoC and their summer job. Choosing one or the other is the best way to succeed, get the best results and maximize the quality of learning and community interaction. For students in this situation, now it is not too late to make the decision to withdraw from GSoC or the other job.

If doing a summer job concurrently with GSoC is unavoidable, the chance of success can be greatly increased by doing the GSoC work in the mornings, before starting the other job. Some students have found that they actually finish more quickly and produce better work when GSoC is constrained to a period of 4 or 5 hours each morning and their other job is only in the afternoon. On the other hand, if a student doesn't have the motivation or energy to get up and work on GSoC before the other job then this is a strong sign that it is better to withdraw from GSoC now.

08 June, 2016 05:11PM by Daniel.Pocock

Gunnar Wolf

University degrees and sysadmin skills

I'll tune in to the post-based conversation being held on Planet Debian: Russell Coker wonders about what's needed to get university graduates with enough skills for a sysadmin job, to which Lucas Nussbaum responds with his viewpoints. They present a very contrasting view of what's needed for students — and for a good reason, I'd say: Lucas is an academician; I don't know for sure about Russell, but he seems to be a down-to-earth, dirty-handed, proficient sysadmin working in the field. They are both in contact with newcomers to their fields, and will notice different shortcomings.

I tend to side with Lucas' view. That does not come as a surprise, as I've been working for over 15 years in an university, and in the last few years I started walking from a mostly-operative sysadmin in an academic setting towards becoming an academician that spends most of his time sysadmining. Subtle but important distinction.

I teach at the BSc level at UNAM, and am a Masters student at IPN (respectively, Mexico's largest and second-largest universities). And yes, the lack of sysadmin abilities in both is surprising. But so is the lack of a good understanding of programming. And I'm sure that, were I to dig into several different fields, I'd feel the same: student formation is very basic in each of those fields.

But I see that as natural. Of course, if I were to judge people as geneticists as they graduate from Biology, or were I to judge them as topologists as they graduate from Mathematics, or any other discipline in which I'm not an expert, I'd surely not know where to start — Given I have about 20 years of professional life on my shoulders, I'm quite skewed as to what is basic for a computing professional. And of course, there are severe holes in my formation, in areas I never used. I know next to nothing of electronics, my mathematical basis is quite flaky, and I'm a poor excuse when talking about artificial intelligence.

Where am I going with this? An university degree (BSc in English, would amount to "licenciatura" in Spanish) is not for specialization. It is to have a sufficiently broad panorama of the field, and all of the needed tools to start digging deeper and specializing — either by yourself, working on a given field and learning its details as you go, or going through a postgraduate program (Specialization, Masters, Doctorate).

Even most of my colleagues at the Masters in Engineering in Security and Information Technology lack a good formation in fields I consider essential. However, what does information security mean? Many among them are working on the legal implications of several laws that touch our field. Many others are working on authenticity issues in images, audio and other such media. Many others are trying to come up with mathematical ways to cheapen the enormous burden of crypto operations (say, "shaving" CPU cycles off a very large exponentiation). Others are designing autonomous learning mechanisms to characterize malware. Were I as a computing professional to start talking about their research, I'd surely reveal I know nothing about it and get laughed at. That's because I haven't specialized in those fields.

University education should give a broad universal basis to enter a professional field. It should not focus on teaching tools or specific procedures (although some should surely be presented as examples or case studies). Although I'd surely be happy if my university's graduates were to know everything about administering a Debian system, that would be wrong for a university to aim at; I'd criticize it the same way I currently criticize programs that mix together university formation and industry certification as if they were related.

08 June, 2016 05:03PM by gwolf

Reproducible builds folks

Reproducible builds: week 58 in Stretch cycle

What happened in the Reproducible Builds effort between May 29th and June 4th 2016:

Media coverage

Ed Maste will present Reproducible Builds in FreeBSD at BDSCan 2016 in Ottawa, Canada on June 11th.

GSoC and Outreachy updates

Toolchain fixes

  • Paul Gevers uploaded fpc/3.0.0+dfsg-5 with a new helper script fp-fix-timestamps, which helps with reproducibility issues of PPU files in freepascal packages.
  • Sascha Steinbiss uploaded a patched version of epydoc to our experimental repository to test a patch for the use_epydoc issue.

Other upstream fixes

Packages fixed

The following 53 packages have become reproducible due to changes in their build-dependencies: angband blktrace code-saturne coinor-symphony device-tree-compiler mpich rtslib ruby-bcrypt ruby-bson-ext ruby-byebug ruby-cairo ruby-charlock-holmes ruby-curb ruby-dataobjects-sqlite3 ruby-escape-utils ruby-ferret ruby-ffi ruby-fusefs ruby-github-markdown ruby-god ruby-gsl ruby-hdfeos5 ruby-hiredis ruby-hitimes ruby-hpricot ruby-kgio ruby-lapack ruby-ldap ruby-libvirt ruby-libxml ruby-msgpack ruby-ncurses ruby-nfc ruby-nio4r ruby-nokogiri ruby-odbc ruby-oj ruby-ox ruby-raindrops ruby-rdiscount ruby-redcarpet ruby-redcloth ruby-rinku ruby-rjb ruby-rmagick ruby-rugged ruby-sdl ruby-serialport ruby-sqlite3 ruby-unicode ruby-yajl ruby-zoom thin

The following packages have become reproducible after being fixed:

Some uploads have addressed some reproducibility issues, but not all of them:

Uploads with an unknown result because they fail to build:

  • h2database/1.4.192-1 by Emmanuel Bourg, which forces a specific locale to generate documentation.

Patches submitted that have not made their way to the archive yet:

  • #825764 against docbook-ebnf by Chris Lamb: sort list of globbed files.
  • #825857 against python-setuptools by Anton Gladky: sort list of files in native_libs.txt.
  • #825968 against epydoc by Sascha Steinbiss: traverse lists in sorted order.
  • #826051 against dh-lua by Reiner Herrmann: sort list of Lua versions embedded into control file.
  • #826093 against osc by Alexis Bienvenüe: use SOURCE_DATE_EPOCH for manpage date.
  • #826158 against texinfo by Alexis Bienvenüe: use SOURCE_DATE_EPOCH for dates in makeinfo output.
  • #826162 against slime by Alexis Bienvenüe: sort list of contributors locale-independently.
  • #826209 against fastqtl by Chris Lamb: normalize permissions and order in tarball.
  • #826309 against gnupg2 by intrigeri: don't embed hostname and timestamp into gpgv.exe.

Package reviews

45 reviews have been added, 25 have been updated and 25 have been removed in this week.

12 FTBFS bugs have been reported by Chris Lamb and Niko Tyni.

diffoscope development

  • diffoscope 53 was released by Mattia Rizzolo, with:
    • various improvements on temporary file handling;
    • fix a crash when comparing directories with broken symlinks (#818856);
    • great improvement on the deb(5) support (#818414), by Reiner Herrmann;
    • add FreeBSD packages in --list-tools, by Ed Maste.
  • diffoscope 54 was released shortly after to address a regression involving --list-tools, where a syntax error prevented proper listing of all tools.

strip-nondeterminism development

Mattia uploaded strip-nondeterminism 0.018-1 which improved support for *.epub files.

tests.reproducible-builds.org

Misc.

Last week we also learned about progress of reproducible builds in FreeBSD. Ed Maste announced a change to record the build timestamp during ports building, which is required for later reproduction.

This week's edition was written by Reiner Herrman, Holger Levsen and Chris Lamb and reviewed by a bunch of Reproducible builds folks on IRC.

08 June, 2016 02:08PM

Jonathan Dowland

Some tools for working with Docker images

For developing complex, real-world Docker images, there are a number of tools that can make life easier.

The first thing to realise is that the Dockerfile format is severely limited. At work, we eventually outgrew it, and it has been replaced with a structured YAML document that is processed into a Dockerfile by a tool called dogen. There are several advantages to this, but I'll point out two: firstly, having data about the image available in a structured format makes automatically deriving technical documentation very easy. Secondly, some of the quirks of Dockerfiles, such as the ADD command respecting the environment's umask, are worked around in the dogen tool.

We have a large suite of integration tests that we run against images to make sure that we haven't introduced regressions during their development. The core of this is the Container Testing Framework, which makes use of the Behave system.

Each command that is run in a Dockerfile generates a new docker image layer. In practice, this can mean a real-world image has a great number of layers underneath it. Docker-dot-com have resisted introducing layer squashing into their tools, but with both hard limits for layers in some of the storage backends, and performance issues for most of the rest, this is a very real issue. Marek Goldmann wrote a squashing tool that we use to control the number of intermediate layers that are introduced by our images.

Finally, even with tools like dogen and ctf, we would like to be able to have more sophisticated tools than shell scripts for configuring images, both at image build time and container run time. We want to do this without introducing extra dependencies inside the images which will not otherwise be used for their operation.

Ansible could be a solution for this, but there are practical issues with relying on it for runtime configuration in our situation. For that reason David Becvarik is designing and implementing Container Configuration Tool, or cct, a framework for performing configuration of containers written in Python.

08 June, 2016 12:45PM

Tanguy Ortolo

Process command line arguments in shell

When writing a wrapper script, one often has to process the command line arguments to transform them according to his needs, to change some arguments, to remove or insert some, or perhaps to reorder them.

Naive approach

The naive approach to do that is¹:

# Process arguments, building a new argument list
new_args=""
for arg in "$@"
do
    case "$arg"
    in
        --foobar)
            # Convert --foobar to the new syntax --foo=bar
            new_args="$new_args --foo=bar"
        ;;
        *)
            # Take other options as they are
            new_args="$new_args $arg"
        ;;
    esac
done

# Call the actual program
exec program $new_args

This naive approach is simple, but fragile, as it will break on arguments that contain a space. For instance, calling wrapper --foobar "some file" (where some file is a single argument) will result in the call program --foo=bar some file (where some and file are two distinct arguments).

Correct approach

To handle spaces in arguments, we need either:

  • to quote them in the new argument list, but that requires escaping possible quotes they contain, which would be error-prone, and implies using external programs such as sed;
  • to use an actual list or array, which is a feature of advanced shells such as Bash or Zsh, not standard shell…

… except standard shell does support arrays, or rather, it does support one specific array: the positional parameter list "$@"². This leads to one solution to process arguments in a reliable way, which consists in rebuilding the positional parameter list with the built-in command set --:

# Process arguments, building a new argument list in "$@"
# "$@" will need to be cleared, not right now but on first iteration only
first_iter=1
for arg in "$@"
do
    if [ "$first_iter" -eq 1 ]
    then
        # Clear the argument list
        set --
        first_iter=0
    fi
    case "$arg"
    in
        --foobar) set -- "$@" --foo=bar ;;
        *) set -- "$@" "$arg" ;;
    esac
done

# Call the actual program
exec program "$@"

Notes

  1. If you prefer, for arg in "$@" can be simplified to just for arg.
  2. As a reminder, and contrary to what it looks like, quoted "$@" does not expand to a single field, but to one field per positional parameter.

08 June, 2016 11:29AM by Tanguy

Lucas Nussbaum

Re: Sysadmin Skills and University Degrees

Russell Coker wrote about Sysadmin Skills and University Degrees. I couldn't agree more that a major deficiency in Computer Science degrees is the lack of sysadmin training. It seems like most sysadmins learned most of what they know from experience. It's very hard to recruit young engineers (freshly out of university) for sysadmin jobs, and the job interviews are often a bit depressing. Sysadmin jobs are also not very popular with this audience, probably because university curriculums fail to emphasize what's exciting about those jobs.

However, I think I disagree rather deeply with Russell’s detailed analysis.

First, Version Control. Well, I think that it's pretty well covered in university curriculums nowadays. From my point of view, teaching CS at Université de Lorraine (France), mostly in Licence Professionnelle Administration de Systèmes, Réseaux et Applications à base de Logiciels Libres (warning: french), a BSc degree focusing on Linux systems administration, it's not unusual to see student projects with a mandatory use of Git. And it doesn't seem to be a major problem for students (which always surprises me). However, I wouldn't rate Version Control as the most important thing that is required for a sysadmin. Similarly, Dependencies and Backups are things that should be covered, but probably not as first class citizens.

I think that there are several pillars in the typical sysadmin knowledge.

First and foremost, sysadmins need a good understanding of the inner workings of an operating system. I sometimes feel that many Operating Systems Design courses are a bit too much focused on the "Design" side of things. Yes, it's useful to understand the low-level mechanisms, and be able to (mentally) recreate an OS from scratch. But it's also interesting to know how real systems are actually built, and what trade-offs are involved. I very much enjoyed reading Brendan Gregg's Systems Performance: Enterprise and the Cloud because each chapter starts with a great overview of how things are in the real world, with a very good level of detail. Also, addressing OS design from the point of view of performance could be a way to turn those courses into something more attractive for students: many people like to measure, benchmark, optimize things, and it's quite easy to demonstrate how different designs, or different configurations, make a big difference in terms of performance in the context of OS design. It's possible to be a sysadmin and ignore, say, the existence of the VFS, but there's a large class of problems that you will never be able to solve. It can be a good trade-off for a curriculum (e.g. at the BSc level) to decide to ignore most of the low-level stuff, but it's important to be aware of it.

Students also need to learn how to design a proper infrastructure (that meets requirements in terms of scalability, availability, security, and maybe elasticity). Yes, backups are important. But monitoring is, too. As well as high availability. In order to scale, it’s important to be able to automatize stuff. Russell writes that Sysadmins need some programming skills, but that’s mostly scripting and basic debugging. Well, when you design an infrastructure, or when you use configuration management tools such as Puppet, in some sense, you are programming, and in terms of needs to abstract things, it’s actually similar to doing object-oriented programming, with similar choices (should I use that off-the-shelf puppet module, or re-develop my own? How should everything fit together?). Also, when debugging, it’s often useful to be able to dig into code, understand what the developer was trying to do, and if the expected behavior actually matches what you are seeing. It often results in spending a lot of time to create a one-line fix, and it requires very advanced programming skills. Again, it’s possible to be a sysadmin with only limited software development knowledge, but there’s a large class of things that you are unlikely to address properly.

I think that what makes sysadmin jobs both very interesting and very challenging is that they require a very wide range of knowledge. There are often opportunities to learn about new stuff (much more than in software development jobs). Of course, the difficult question is where to draw the line. What is the sysadmin knowledge that every CS graduate should have, even in curriculums not targeting sysadmin jobs? What is the sysadmin knowledge for a sysadmin BSc degree? for a sysadmin MSc degree?

08 June, 2016 08:04AM by lucas

Russell Coker

Sysadmin Skills and University Degrees

I think that a major deficiency in Computer Science degrees is the lack of sysadmin training.

Version Control

The first thing that needs to be added is the basics of version control. CVS (which is now regarded as obsolete) was initially released when I was in the first year of university. But SCCS and RCS had been in use for some time. I think that the people who designed my course were remiss in not adding any mention of version control (not even strategies for saving old versions of your work); one could say that they taught us about version control by letting us accidentally delete our assignments. :-#

If a course is aimed at just teaching programmers (as most CS degrees are) then version control for group assignments should be a standard part of the course. Having some marks allocated for the quality of comments in the commit log would also be good.

A modern CS degree should cover distributed version control, which in practice means covering Git, as it’s the most popular distributed version control system nowadays.

For people who want to work as sysadmins (as opposed to developers who run their own PCs) a course should have an optional subject for version control of an entire system. That includes tools like etckeeper for version control of system configuration and tools like Puppet for automated configuration and system maintenance.
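
To make this concrete, here is a minimal sketch of what etckeeper looks like in practice (the commit message is just an example); once initialised, /etc is tracked in git and changes can be committed and reviewed with ordinary git commands:

apt-get install etckeeper
etckeeper init
etckeeper commit "baseline before configuration changes"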

Dependencies

It’s quite reasonable for a CS degree to give students simplified problems to solve so they can concentrate on one task. But in the real world the problems are more complex. One of the more difficult parts of managing real systems is dependencies. You have issues with header files and the like at compile time, and with library versions at deployment. Often you need a program to run on systems with different versions of the OS, which means making it compile on both and dealing with differences in behaviour.

There are lots of hacky things that people do to deal with dependencies in systems. People link compiled programs statically, install custom versions of interpreters in user home directories or /usr/local for daemons, and do many other things. These things can have bad consequences including data loss, system downtime, and security problems. It’s not always wrong to do such things, but it’s something that should only be done with knowledge of the potential consequences and a plan for mitigating them. A CS degree should teach the potential advantages and disadvantages of these options to allow graduates to make informed decisions.
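
To make the static-linking trade-off concrete, here is a small sketch (the program name and path are made up):

# list the shared libraries a dynamically linked binary depends on
ldd /usr/local/bin/myprog

# build the same program statically linked instead; the binary becomes
# self-contained, but no longer picks up shared library security updates
gcc -static -o myprog myprog.c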

Backups

I’ve met many people who call themselves computer professionals and think that backups aren’t needed. I’ve seen production systems that were designed in a way that backups were impossible. The lack of backups is a serious problem for the entire industry.

Some lectures about backups could be part of a version control subject in a general CS degree. For a degree that majors in Sysadmin at least one subject about backups is appropriate.

For any backup (even backing up your home PC) you should have offsite backups to deal with fire damage, multiple backups of different ages (especially important now that encryption malware is a serious threat), and a plan for how fast you can restore things.
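
As a minimal sketch, assuming an offsite host reachable over ssh (the hostname and paths are made up), each run keeps a separate dated copy so backups of several ages are retained:

# copy /home to a new dated directory, hard-linking unchanged files against yesterday's copy
rsync -a --link-dest=../home-$(date -d yesterday +%F) \
    /home/ backup@offsite.example.org:backups/home-$(date +%F)/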

The most common use of backups is to deal with the case of deleting the wrong file. Unfortunately this case seems to be the most rarely mentioned.

Another common situation that should be covered is a configuration error that results in a system that won’t boot correctly. It’s a very common problem and one that can be solved quickly if you are prepared but which can take a long time if you aren’t.

For a Sysadmin course it is important to cover backups of systems in remote datacenters.

Hardware

A good CS degree should cover the process of selecting suitable hardware. Programmers often get to advise on the hardware used to run their code, especially at smaller companies. Reliability features such as RAID, ECC RAM, and clustering should be covered.

Planning for upgrades is a very important part of this which is usually not taught. Not only do you need to plan for an upgrade without much downtime or cost, but you also need to plan for what upgrades are possible. Will your system require, next year, hardware that is more powerful than anything you will be able to buy then? If so, you need to plan for a cluster now.

For a Sysadmin course some training about selecting cloud providers and remote datacenter hosting should be provided. There are many complex issues that determine whether it’s most appropriate to use a cloud service, hosted virtual machines, hosted physical servers managed by the ISP, hosted physical servers purchased by the client, or on-site servers. Often a large system will involve 2 or more of those options, even some small companies use 3 or more of those options to try and provide the performance and reliability they need at a price they can afford.

We Need Sysadmin Degrees

Covering the basic coding skills takes a lot of time. I don’t think we can reasonably expect a CS degree to cover all that and also give good coverage to sysadmin work. While some basic sysadmin skills are needed by every programmer I think we need to have separate majors for people who want a career in system administration.

Sysadmins need some programming skills, but that’s mostly scripting and basic debugging. Someone whose main job is as a sysadmin can probably expect to never make any significant change to a program that’s more than 10,000 lines long. A large amount of the programming in a CS degree could be replaced by “file a bug report” for a sysadmin degree.

This doesn’t mean that sysadmins shouldn’t be doing software development or that they aren’t good at it. One noteworthy fact is that it appears that the most common job among developers of the Debian distribution of Linux is System Administration. Developing an OS involves some of the most intensive and demanding programming. But I think that more than a few people who do such work would have skipped a couple of programming subjects in favour of sysadmin subjects if they were given a choice.

Suggestions

Did I miss anything? What other sysadmin skills should be taught in a CS degree?

Do any universities teach these things now? If so, please name them in the comments; it helps people find universities that teach what they want to learn, and helps them in their careers.

08 June, 2016 06:10AM by etbe

Francois Marier

Simple remote mail queue monitoring

In order to monitor some of the machines I maintain, I rely on a simple email setup using logcheck. Unfortunately that system completely breaks down if mail delivery stops.

This is the simple setup I've come up with to ensure that mail doesn't pile up on the remote machine.

Server setup

The first thing I did on the server-side is to follow Sean Whitton's advice and configure postfix so that it keeps undelivered emails for 10 days (instead of 5 days, the default):

postconf -e maximal_queue_lifetime=10d

Then I created a new user:

adduser mailq-check

with a password straight out of pwgen -s 32.

I gave ssh permission to that user:

adduser mailq-check sshuser

and then authorized my new ssh key (see next section):

sudo -u mailq-check -i
mkdir ~/.ssh/
cat - > ~/.ssh/authorized_keys
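
One optional hardening step, not part of the original setup but worth considering: the authorized_keys entry can be prefixed with standard OpenSSH options so this key can only ever run mailq (the key material and comment below are placeholders):

command="/usr/bin/mailq",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... egilsstadir-mailq-check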

Laptop setup

On my laptop, the machine from where I monitor the server's mail queue, I first created a new password-less ssh key:

ssh-keygen -t ed25519 -f .ssh/egilsstadir-mailq-check
cat ~/.ssh/egilsstadir-mailq-check.pub

which I then installed on the server.

Then I added this cronjob in /etc/cron.d/egilsstadir-mailq-check:

0 2 * * * francois /usr/bin/ssh -i /home/francois/.ssh/egilsstadir-mailq-check mailq-check@egilsstadir mailq | grep -v "Mail queue is empty"

and that's it. I get a (locally delivered) email whenever the mail queue on the server is non-empty.

There is a race condition built into this setup, since it's possible that the server will want to send an email at 2am. However, all that does is trigger a spurious warning email, so it's a pretty small price to pay for a dirt-simple setup that's unlikely to break.

08 June, 2016 05:30AM

June 07, 2016

Enrico Zini

You'll thank me later

I agree with this post by Matthew Garrett.

I am quite convinced that most of the communities that I have known are vulnerable to people who are good manipulators of people.

Also, in my experience, manipulation by negating, pushing, or reframing the boundaries of people tends not to be recognised as manipulation, let alone abusive behaviour.

It's not about physically forcing people to do things that they don't want to do. It's about pushing people, again and again, wearing them out, making them feel like, despite their actual needs and wants, saying "yes" to you is the only viable way out.

It can happen for sex, and it can happen for getting a patch merged. It can happen out of habit. It can happen for pretty much anything.

Consent culture was not part of my education, and it was something I've had to discover for myself. I assume that to be a common experience, and that pushing against boundaries does happen, even without malicious intentions, on a regular basis.

However, it is not ok.

Take insisting. It is not the same as persisting. Persisting is what I do when I advocate for change. Persisting is what I do when the first version of my code segfaults. Insisting is what I do when a person says "no" to me and I don't want to accept it.

Is it ok to insist that a friend, whom you think is sick, goes and gets help?

Is it ok to insist that a friend, whom you think is sexually repressed, pushes through their boundaries to explore their sexuality with you?

In both cases, one may say, or think, trust me, you'll thank me afterwards. In both cases, what if afterwards I have nothing to thank you for?

I see a common pattern in you'll thank me afterwards situations. It can be in good faith, it can be creepy, it can be abusive, and most of the time, what it is, is dangerously unclear to most of the people involved.

I think that in a community like Debian, at the level of personal interaction, Insisting is not ok.

I think that in a community like Debian, at the level of personal interaction, "You'll thank me afterwards" is not ok.

When I say it's not ok I mean that it should not happen. If it happens, people must be free to say "stop". If it doesn't stop, people must expect to be able to easily find support, understanding, and help to make it stop.

Just like when people upload untested packages.

Pushing against personal boundaries of people is not ok, and pushing against personal boundaries does happen. When you get involved in a new community, such as Debian, find out early where, if that happens, you can find support, understanding, and help to make it stop.

If you cannot find any, or if the only thing you can find is people who say "it never happens here", consider whether you really want to be in that community.

07 June, 2016 10:43AM

Matthew Garrett

Be wary of heroes

Inspiring change is difficult. Fighting the status quo typically means being able to communicate so effectively that powerful opponents can't win merely by outspending you. People need to read your work or hear you speak and leave with enough conviction that they in turn can convince others. You need charisma. You need to be smart. And you need to be able to tailor your message depending on the audience, even down to telling an individual exactly what they need to hear to take your side. Not many people have all these qualities, but those who do are powerful and you want them on your side.

But the skills that allow you to convince people that they shouldn't listen to a politician's arguments are the same skills that allow you to convince people that they shouldn't listen to someone you abused. The ability that allows you to argue that someone should change their mind about whether a given behaviour is of social benefit is the same ability that allows you to argue that someone should change their mind about whether they should sleep with you. The visibility that gives you the power to force people to take you seriously is the same visibility that makes people afraid to publicly criticise you.

We need these people, but we also need to be aware that their talents can be used to hurt as well as to help. We need to hold them to higher standards of scrutiny. We need to listen to stories about their behaviour, even if we don't want to believe them. And when there are reasons to believe those stories, we need to act on them. That means people need to feel safe in coming forward with their experiences, which means that nobody should have the power to damage them in reprisal. If you're not careful, allowing charismatic individuals to become the public face of your organisation gives them that power.

There's no reason to believe that someone is bad merely because they're charismatic, but this kind of role allows a charismatic abuser both a great deal of cover and a great deal of opportunity. Sometimes people are just too good to be true. Pretending otherwise doesn't benefit anybody but the abusers.

07 June, 2016 03:33AM

June 06, 2016

Carl Chenet

My Free Activities in May 2016

Follow me also on Diaspora* or Twitter

Trying to catch up with my blog posts about My Free Activities. This blog post will tell you about my free activities from January to May 2016.

1. Personal projects

2. Journal du hacker

That’s all folks! See you next month!


06 June, 2016 10:00PM by Carl Chenet

C.J. Adams-Collier

Some work on a VyOS image with Let’s Encrypt certs

I put some packages together this weekend. It’s been a while since I’ve debuilt anything officially.

The plan is to build a binding to the libgnutls.so.30 API. The certtool CSR (REQ) generation interface does not allow me to create a CRL with “not critical” attributes set on purposes. Maybe if I do it a bit closer to the metal it will be easier…

06 June, 2016 07:51PM by C.J. Adams-Collier

Olivier Grégoire

Community bonding + 2 weeks at GSoC!

Welcome to my first and second report!
The community bonding period went really well; I talked with a lot of people from all around the world. All of these projects are just awesome and I am happy to be part of this.
For this first week, I put together a list of all the information I need to pull into my client (this list may still change a little bit):
-Call ID
-Resolution of the camera of each person connected to the call
-Percentage of lost frames
-Bandwidth (upload + download)
-Names of the codecs used in the conversation
-Time to contact the other person
-The security level
-Resources used by Ring (RAM + CPU)

I am trying to figure out how the Ring project works:
-Understand the external exchanges, by using Wireshark to capture some important packets.
-Understand the internal exchanges between the daemon and the clients on D-Bus, by using Bustle.

I created my program architecture on the daemon and D-Bus side. [1] You can call the method launchSmartInfo(int x) over D-Bus (using D-Feet, for example). That will emit the SmartInfo signal every x ms. This signal can only push an int for the moment, but it will eventually push all the information we want to the clients.

I am currently working on the Ring GNU/Linux client, so I am learning how Qt and GTK+ work.

[1] https://github.com/Gasuleg/Smartlnfo-Ring (I will stop updating this repository because I will push my code to the Gerrit draft of Savoir-faire Linux. It's easier for my team to comment on that platform, and it's free software.)

06 June, 2016 04:57PM

Reproducible builds folks

Reprotest has a preliminary CLI and configuration file handling

Author: ceridwen

This is the first draft of reprotest's interface, and I welcome comments on how to improve it. At the moment, reprotest's CLI takes two mandatory arguments, the build command to run and the build artifact file to test after running the build. If the build command or build artifact have spaces, they have to be passed as strings, e.g. "debuild -b -uc -us". For optional arguments, it has --variations, which accepts a list of possible build variations to test, one or more of 'captures_environment', 'domain_host', 'filesystem', 'home', 'kernel', 'locales', 'path', 'shell', 'time', 'timezone', 'umask', and 'user_group' (see variations for more information); --dont_vary, which makes reprotest not test any variations in the given list (the default is to run all variations); --source_root, which accepts a directory to run the build command in and defaults to the current working directory; and --verbose, which will eventually enable more detailed logging. To get help for the CLI, run reprotest -h or reprotest --help.
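
For illustration only, an invocation matching the draft interface described above (reusing the build command and artifact from the sample configuration shown further down) might look roughly like this:

reprotest "setup.py sdist" dist/reprotest-0.1.tar.gz --dont_vary locales

# or, with build_command and artifact set in the config file:
reprotest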

The config file has one section, basics, and the same options as the CLI, except there's no dont_vary option, and there are build_command and artifact options. If build_command and/or artifact are set in the config file, reprotest can be run without passing those as command-line arguments. Command-line arguments always override config file options. Reprotest currently searches the working directory for the config file, but it will also eventually search the user's home directory. A sample config file is below.

[basics]
build_command = setup.py sdist
artifact = dist/reprotest-0.1.tar.gz
source_root = reprotest/
variations =
  captures_environment
  domain_host
  filesystem
  home
  host
  kernel
  locales
  path
  shell
  time
  timezone
  umask
  user_group

At the moment, the only build variations that reprotest actually tests are the environment variable variations: captures_environment, home, locales, and timezone. Over the next week, I plan to add the rest of the basic variations and accompanying tests. I also need to write tests for the CLI and the configuration file. After that, I intend to work on getting (s)chroot communication working, which will involve integrating autopkgtest code.

Some of the variations require specific other packages to be installed: for instance, the locales variation currently requires the fr_CH.UTF-8 locale. Locales are a particular problem because I don't know of a way in Debian to specify that a given locale must be installed. For other packages, it's unclear to me whether I should specify them as depends or recommends: they aren't dependencies in a strict sense, but marking them as dependencies will make it easier to install a fully-functional reprotest. When reprotest runs with variations enabled that it can't test because it doesn't have the correct packages installed, I intend to have it print a warning but continue to run.

tests.reproducible-builds.org also has different settings, such as different locales, for different architectures. I'm not clear on why this is. I'd prefer to avoid having to generate a giant list of variations based on architecture, but if necessary, I can do that. The prebuilder script contains variations specific to Linux, to Debian, and to pbuilder/cowbuilder. I'm not including Debian-specific variations until I get much more of the basic functionality implemented, and I'm not sure I'm going to include pbuilder-specific variations ever, because it's probably better for extensibility to other OSes, e.g. BSD, to add support for plugins or more complicated configurations.

I implemented the variations by creating a function for each variation. Each function takes as input two build commands, two source trees, and two sets of environment variables and returns the same. At the moment, I'm using dictionaries for the environment variables, mutating them in-place and passing the references forward. I'm probably going to replace those at some point with an immutable mapping. While at the moment, reprotest only builds on the existing system, when I start extending it to other build environments, this will require double-dispatch, because the code that needs to be executed will depend on both the variation to be tested and the environment being built on. At the moment, I'm probably going to implement this with a dictionary with tuple keys of (build_environment, variation) or nested dictionaries. If it's necessary for code to depend on OS or architecture, too, this could end up becoming a triple or quadruple dispatch.

06 June, 2016 03:08PM

John Goerzen

How git-annex replaces Dropbox + encfs with untrusted providers

git-annex has been around for a long time, but I just recently stumbled across some of the work Joey has been doing on it. This post isn’t about its traditional roots in git or all the features it has for partial copies of large data sets, but rather about its live syncing capabilities, like Dropbox’s. It takes a bit to wrap your head around, because git-annex is just a little different from everything else. It’s sort of like a different-colored smell.

The git-annex wiki has a lot of great information — both low-level reference and a high-level 10-minute screencast showing how easy it is to set up. I found I had to sort of piece together the architecture between those levels, so I’m writing this all down hoping it will benefit others that are curious.

If you just want to use it, you don’t need to know all this. But I like to understand how my tools work.

Overview

git-annex lets you set up a live syncing solution that requires no central provider at all, or can be used with a completely untrusted central provider. Depending on your usage pattern, this central provider could require only a few MBs of space even for repositories containing gigabytes or terabytes of data that is kept in sync.

Let’s take a look at the high-level architecture of the tool. Then I’ll illustrate how it works with some scenarios.

Three Layers

Fundamentally, git-annex takes layers that are all combined in Dropbox and separates them out. There is the storage layer, which stores the literal data bytes that you are interested in; git-annex indexes the data in storage by a hash. There is metadata, which covers things like the filename-to-hash mapping and revision history. And then there is an optional layer: live signaling, which is used to drive the real-time syncing.

git-annex has several modes of operation, and the one that enables live syncing is called the git-annex assistant. It runs as a daemon, and is available for Linux/POSIX platforms, Windows, Mac, and Android. I’ll be covering it here.

The storage layer

The storage layer is simply blobs of data. These blobs are indexed by a hash, and can optionally be encrypted at rest on remote backends. git-annex has a large number of storage backends; some examples include rsync, a remote machine with git-annex and ssh installed, WebDAV, S3, Amazon Glacier, a removable USB drive, etc. There’s a huge list.

One of the git-annex features is that each client knows the state of each storage repository, as well as the capability set of each storage repository. So let’s say you have a workstation at home and a laptop you take with you to work or the coffee shop. You’d like changes on one to be instantly recognized on another. With something like Dropbox or OwnCloud, every file in the set you want synchronized has to reside on a server in the cloud. With git-annex, it can be configured such that the server in the cloud only contains a copy of a file until every client has synced it up, at which point it gets removed. Think about it – that is often what you want anyhow, so why maintain an unnecessary copy after it’s synced everywhere? (This behavior is, of course, configurable.) git-annex can also avoid storing in the cloud entirely if the machines are able to reach each other directly at least some of the time.

The metadata layer

Metadata about your files includes a mapping from the file names to the storage location (based on hashes), change history, and information about the status of each machine that participates in the syncing. On your clients, git-annex stores this using git. This detail is very useful to some, and irrelevant to others.

Some of the git-annex storage backends can support only storage (S3, for instance). Some can support both storage and metadata (rsync, ssh, local drives, etc.) You can even configure a backend to support only metadata (more on why that may be useful in a bit). When you are working with a git-backed repository for git-annex, it can hold data, metadata, or both.

So, to have a working sync system, you must have a way to transport both the data and the metadata. The transport for the metadata is generally rsync or git, but it can also be XMPP, in which case Git changesets are basically wrapped up in XMPP presence messages. Joey says, however, that there are some known issues with XMPP servers sometimes dropping or reordering XMPP messages, so he doesn’t encourage that method currently.

The live signaling layer

So once you have your data and metadata, you can already do syncs via git annex sync --contents. But the real killer feature here will be automatic detection of changes, both locally and on the remote. To do that, you need some way of live signaling. git-annex supports two methods.

The first requires ssh access to a remote machine where git-annex is installed. In this mode of operation, when the git-annex assistant fires up, it opens up a persistent ssh connection to the remote and runs the git-annex-shell over there, which notifies it of changes to the git metadata repository. When a change is detected, a sync is initiated. This is considered ideal.

A substitute can be XMPP, and git-annex actually converts git commits into a form that can be sent over XMPP. As I mentioned above, there are some known reliability issues with this and it is not the recommended option.

Encryption

When it comes to encryption, you generally are concerned about all three layers. In an ideal scenario, the encryption and decryption happens entirely on the client side, so no service provider ever has any details about your data.

The live signaling layer is encrypted pretty trivially; the ssh sessions are, of course, encrypted and TLS support in XMPP is pervasive these days. However, this is not end-to-end encryption; those messages are decrypted by the service provider, so a service provider could theoretically spy on metadata, which may include change times and filenames, though not the contents of files themselves.

The data layer also can be encrypted very trivially. In the case of the “dumb” backends like S3, git-annex can use symmetric encryption or a gpg keypair and all that ever shows up on the server are arbitrarily-named buckets.
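
As a hedged sketch of what that can look like (the remote name and bucket are made up; encryption=shared asks git-annex to symmetrically encrypt content before upload, and AWS credentials are read from the environment):

git annex initremote cloud type=S3 encryption=shared bucket=my-annex-bucket
git annex copy --to cloud somefile.bin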

You can also use a gcrypt-based git repository. This can cover both data and metadata — and, if the target also has git-annex installed, the live signalling layer. Using a gcrypt-based git repository for the metadata and live signalling is the only way to accomplish live syncing with 100% client-side encryption.

All of these methods are implemented in terms of gpg, and can support symmetric or public-key encryption.

It should be noted here that the current release versions of git-annex need a one-character patch in order to fix live syncing with a remote using gcrypt. For those of you running jessie, I recommend the version in jessie-backports, which is presently 5.20151208. For your convenience, I have compiled an amd64 binary that can drop in over /usr/bin/git-annex if you have this version. You can download it and a gpg signature for it. Note that you only need this binary on the clients; the server can use the version from jessie-backports without issue.

Putting the pieces together: some scenarios

Now that I’ve explained the layers, let’s look at how they fit together.

Scenario 1: Central server

In this scenario, you might have a workstation and a laptop that sync up with each other by way of a central server that also has a full copy of the data. This is the scenario that most closely resembles Dropbox, box, or OwnCloud.

Here you would basically follow the steps in the git-annex assistant screencast: install git-annex on a server somewhere, and point your clients to it. If you want full end-to-end encryption, I would recommend letting git-annex generate a gpg keypair for you, which you would then need to copy to both your laptop and workstation (but not the server).

Every change you make locally will be synced to the server, and then from the server to your other PC. All three systems would be configured in the “client” transfer group.

Scenario 1a: Central server without a full copy of the data

In this scenario, everything is configured the same except the central server is configured with the “transfer” transfer group. This means that the actual data synced to it is deleted after it has been propagated to all clients. Since git-annex can verify which repository has received a copy of which data, it can easily enough delete the actual file content from the central server after it has been copied to all the clients. Many people use something like Dropbox or OwnCloud as a multi-PC syncing solution anyhow, so once the files have been synced everywhere, it makes sense to remove them from the central server.
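
The assistant sets this up through its web interface, but the equivalent can also be expressed on the command line; a rough sketch, run inside the central server's repository:

git annex group here transfer
git annex wanted here standard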

This is often a good fit for people. There are some obvious downsides that are sometimes relevant. For instance, to add a third sync client, it must be able to do its initial copy from one of the existing clients. Or, if you intend to access the data from a device such as a cell phone, where you don’t want it to have a copy of all the data all the time, you won’t have as convenient a way to download your data.

Scenario 1b: Split data/metadata central servers

Imagine that you have a shell or rsync account on some remote system where you can run git-annex, but don’t have much storage space. Maybe you have a cheap VPS or shell account somewhere, but it’s just not big enough to hold your data.

The answer to this would be to use this shell or rsync account for the metadata, but put the data elsewhere. You could, for instance, store the data in Amazon S3 or Amazon Glacier. These backends aren’t capable of storing the git-annex metadata, so all you need is a shell or rsync account somewhere to sync up the metadata. (Or, as below, you might even combine a fully distributed approach with this.) Then you can have your encrypted data pushed up to S3 or some such service, which presumably will grow to whatever size you need.

Scenario 2: Fully distributed

Like git itself, git-annex does not actually need a central server at all. If your different clients can reach each other directly at least some of the time, that is good enough. Of course, a given client will not be able to do fully automatic live sync unless it can reach at least one other client, so changes may not propagate as quickly.

You can simply set this up by making ssh connections available between your clients. git-annex assistant can automatically generate appropriate ~/.ssh/authorized_keys entries for you.

Scenario 2a: Fully distributed with multiple disconnected branches

You can even have a graph of connections available. For instance, you might have a couple machines at home and a couple machines at work with no ability to have a direct connection between them (due to, say, firewalls). The two machines at home could sync with each other in real-time, as could the two machines at work. git-annex also supports things like USB drives as a transport mechanism, so you could throw a USB drive in your pocket each morning, pop it into one client at work, and poof – both clients there are synced up. Repeat when you get home in the evening, and you’re synced there too. The USB drive’s repository can, of course, be of the “transfer” type so data is automatically deleted from it once it’s been synced everywhere.

Scenario 3: Hybrid

git-annex can support LAN sync even if you have a central server. If your laptop, say, travels around but is sometimes on the same LAN as your PC, git-annex can easily sync directly between the two when they are reachable, saving a round-trip to the server. You can assign a cost to each remote, and git-annex will always try to sync first to the lowest-cost path that is available.
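
Costs are plain git configuration; a quick sketch (the remote names are made up; lower cost is tried first):

git config remote.laptop-lan.annex-cost 100
git config remote.central-server.annex-cost 200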

Drawbacks of git-annex

There are some scenarios where git-annex with the assistant won’t be as useful as one of the more traditional instant-sync systems.

The first and most obvious one is if you want to access the files without the git-annex client. For instance, many of the other tools let you generate a URL that you can email to people, and then they can download files without any special client software. This is not directly possible with git-annex. You could, of course, make something like a public_html directory be managed with git-annex, but it wouldn’t provide things like obfuscated URLs, password-protected sharing, time-limited sharing, etc. that you get with other systems. While you can share your repositories with others that have git-annex, you can’t share individual subdirectories; for a given repository, it is all or nothing.

The Android client for git-annex is a pretty interesting thing: it is mostly a small POSIX environment, providing a terminal, git, gpg, and the same web interface that you get on a standalone machine. This means that the git-annex Android client is fully functional compared to a desktop one. It also has a quick setup process for syncing off your photos/videos. On the other hand, the integration with the Android ecosystem is poor compared to most other tools.

Other git-annex features

git-annex has a lot to offer besides the git-annex assistant. Besides the things I’ve already mentioned, any given git-annex repository — including your client repository — can have a partial copy of the full content. Say, for instance, that you set up a git-annex repository for your music collection, which is quite large. You want some music on your netbook, but don’t have room for it all. You can tell git-annex to get or drop files from the netbook’s repository without deleting them remotely. git-annex has quite a few ways to automate and configure this, including making sure that at least a certain number of copies of a file exist in your git-annex ecosystem.
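
For example, a netbook's repository could be managed roughly like this (the paths are made up):

# require at least two copies of each file somewhere in the git-annex ecosystem
git annex numcopies 2

# fetch the content you want locally, and drop what you don't need;
# drop refuses to remove content unless enough copies exist elsewhere
git annex get Music/playlists/
git annex drop Music/archive/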

Conclusion

I initially started looking at git-annex due to the security issues with encfs, and the difficulty with setting up ecryptfs in this way. (I had been layering encfs atop OwnCloud). git-annex certainly ticks the box for me security-wise, and obviously anything encrypted with encfs wasn’t going to be shared with others anyhow. I’ll be using git-annex more in the future, I’m sure.

06 June, 2016 02:38PM by John Goerzen

Petter Reinholdtsen

The new "best" multimedia player in Debian?

When I set out a few weeks ago to figure out which multimedia player in Debian claimed to support the most file formats / MIME types, I was a bit surprised by how much the sets of MIME types the various players claim to support varied. The range was from 55 to 130 MIME types. I suspect most media formats are supported by all players, but this is not really reflected in the MimeTypes values in their desktop files. There are probably also some bogus MIME types listed, but it is hard to identify which ones those are.

Anyway, in the meantime I got in touch with upstream for some of the players, suggesting to add more MIME types to their desktop files, and decided to spend some time myself improving the situation for my favorite media player, VLC. The fixes for VLC entered Debian unstable yesterday. The complete list of MIME types can be seen on the Multimedia player MIME type support status Debian wiki page.

The new "best" multimedia player in Debian? It is VLC, followed by totem, parole, kplayer, gnome-mpv, mpv, smplayer, mplayer-gui and kmplayer. I am sure some of the other players' desktop files support several of the formats currently listed as working only with vlc, totem and parole.

A sad observation is that only 14 MIME types are listed as supported by all the tested multimedia players in Debian in their desktop files: audio/mpeg, audio/vnd.rn-realaudio, audio/x-mpegurl, audio/x-ms-wma, audio/x-scpls, audio/x-wav, video/mp4, video/mpeg, video/quicktime, video/vnd.rn-realvideo, video/x-matroska, video/x-ms-asf, video/x-ms-wmv and video/x-msvideo. Personally I find it sad that video/ogg and video/webm are not supported by all the media players in Debian. As far as I can tell, all of them can handle both formats.
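
One quick way to reproduce this kind of survey is to grep the installed desktop files, for example:

grep -l video/ogg /usr/share/applications/*.desktop
grep -l video/webm /usr/share/applications/*.desktop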

06 June, 2016 10:50AM

Alessio Treglia

Why can children use their imagination better than we do?

 

Children can use their imagination better than we do because they are (still) in immediate contact with the Whole and they represent the most pristine prototype of the human being. From birth and for the first years of life, the child is the mirror of our species, carrying in himself the primary elements and the roots of evolution, without conditioning or interference.

Then, when education begins, especially school, his imagination is restrained and limited; everything is done to concentrate his interests only on what is ‘real’ and to make him leave the world of fantasy behind. In the first drawing exercises to which children are subjected at school, their imagination, and the way they perceive some elements of nature, are discarded; the drawing that best fits a photographic vision of reality is rewarded, inhibiting their imaginative potential from the very beginning, in favour of a more reassuring homologation…

<Read More…[by Fabio Marzocca]>

06 June, 2016 09:19AM by Fabio Marzocca

Norbert Preining

TeX Live 2016 released

After long months of testing and waiting for the DVD production, we have released TeX Live 2016 today!

texlive2016

Detailed changes can be found here, the most important ones are:

  • LuaTeX is updated to 0.95 with a sweeping change of primitives. Most documents and classes need to be adapted!
  • Metafont got lua-hooks, mflua and mfluajit
  • SOURCE_DATE_EPOCH support for all engines except LuaTeX and original TeX
  • pdfTeX, XeTeX: some new primitives
  • new programs: gregorio, upmendex
  • tlmgr: system level configuration support
  • installer, tlmgr: cryptographic verification (if gpg is available)

CTAN mirrors are working on getting the latest releases, but in a day or two all the servers should be updated.

Thanks to all the developers, package writers, documentation writers, testers, and especially Karl Berry for his perfect organization.

Now get the champagne and write some nice documents!

06 June, 2016 04:57AM by Norbert Preining

Scarlett Clark

Debian: Reproducible builds Week 2

This was a holiday week for me, so my time was rather crammed into a short space.
I still managed to make progress though.

kapptemplate:
https://bugs.kde.org/show_bug.cgi?id=363448
I was given the green light to push this upstream. So this will eventually trickle down and everyone will benefit, not just Debian.
I will leave the bug open though, as I expect it will need to be applied to all the releases.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825122

choqok:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=825322
For this I was able to come up with a patch for kconfig_compiler to encode generated files to utf-8.
Review request is here:
https://git.reviewboard.kde.org/r/128102/

kxmlgui:
WIP: this has been a steep learning curve. According to the notes it was supposed to be an easy case of an embedded kernel version, but that was not the case! After grueling hours of trying to sort out randomness in the debug output, I finally narrowed it down to cases where QStringLiteral was used with non-letter characters, e.g. (” <“). These were causing debug symbols to be generated with ( lambda() ), which made the symbol/debug files unreproducible. Switching these to QString::fromUtf8 seems to fix it, so it is now a case of fixing all of the occurrences in the code. I am working on a mega patch for upstream and it should be ready early in the week.

Other items were minor and not notable enough to blog about 🙂 Have a great day.

06 June, 2016 01:43AM by Scarlett Clark

June 05, 2016

Ingo Juergensmann

Request for Adoption: Buildd.Net project

I've been running Buildd.Net for quite a long time. Buildd.Net is a project that focusses on the autobuilders, not the packages. It started back then when the m68k port had a small website running on kullervo, a m68k buildd. Kullervo was too loaded to deal with the increased load of that website, so together with Stephen Marenka we moved the page from kullervo to my server under the domain m68k.bluespice.org. Over time I got many requests if that page could do the same for other archs as well, so I started to hack the code to be able to deal with different archs: Buildd.Net was born.

Since then many years passed by and Buildd.Net evolved into a rather complex project, being capable to deal with different archs and different releases, such as unstable, backports, non-free, etc. Sadly the wanna-build output changed over the years as well, but I didn't have the time anymore to keep up with the changes.

Buildd.Net is based on: 

  • some Bash scripts
  • some Python scripts
  • a PostgreSQL database
  • gnuplot for some graphs
  • some small Perl scripts
  • ... and maybe more...

While I was more deeply involved with the m68k autobuilders and others, I found Buildd.Net quite informative, as I could get a quick overview of how all of the buildds were performing. Based on the PostgreSQL database we could easily spot if a package was stuck on one of the buildds without directly watching the buildd logs.

Storing the information from the buildds about the built packages in a SQL database can give you some benefit. Originally my plan was to use that kind of information for a better autobuilder setup. In the past it happened that large packages were built by buildds with, let's say, 64 MB of RAM and smaller packages were built on the buildds with 128 MB of RAM. Eventually this led to failed builds or excessive build times. Or m68k buildds like Apple Centris boxes or so suffered from slow disk I/O, while some Amiga buildds had reasonable disk speeds (consider 160 kB/s vs. 2 MB/s). 

As you can see there is/was a lot of room for optimization of how packages can be distributed between buildds. This could have been done by analyzing the statistics and some scripting, but was never implemented because of missing skills and time on my side.

The lack of time to keep up with the changes of the official wanna-build output (like new package states) is the main reason why I want to give Buildd.Net into good hands. If you are interested in this project, please contact me! I still believe that Buildd.Net can be beneficial to the Debian project. :-)

05 June, 2016 04:54PM by ij

Iustin Pop

Short trip to Opio en Provence

I had a short work-related trip this week to Opio en Provence. It was not a working trip, but rather a team event, which means almost a vacation!

Getting there and back

I dislike taking the plane for very short flights (and Zürich–Nice is indeed only around one hour), as that means you spend three times as long as the flight itself getting to the airport, waiting at the airport, waiting to take off, waiting to get off the plane, and then going from the airport to the actual destination. So I took the opportunity to drive there, since I've never driven that way, and on the map the route seemed reasonably interesting. Not that it's a shorter trip by any measure, but it seemed more interesting.

Leaving Zürich I went over the San Bernardino pass, as I had never done that before. On the north side, the pass is actually much less suited to traffic than the Gotthard pass (also on its north side), as you basically climb around 300m in a very short distance, with very sharp hairpins. There was still snow on the top, and the small lake had lots of slush/ice floating on it. As for the south side, it looked much more driveable, but I'm not sure, as I made the mistake of re-joining the highway: instead of driving at a reasonable pace on the empty pass road, I spent half an hour in a slow-moving line. Lesson learned…

Entering Italy was the usual Como-Milan route, but as opposed to my other trips, this time it was around Milan on the west (A50) and then south on the A7 until it meets the A26 and then down to the coast. From here, along the E80 (Italian A10, French A8) until somewhere near Nice, and then exiting the highway system to get on the small local roads towards Opio.

What I read in advance on the internet was that the coastal highway is very nice and has better views of the sea than the actual seaside drive (which goes through towns and is much slower). I should know better than to trust the internet ☺, and I should read maps instead, which would have shown me that the Alps reach down to the sea in this region, so… The road was OK, but it definitely didn't feel like a highway: the maximum allowed speed was usually either 90km/h or 110km/h, and half the time you're in a short tunnel, so it's sun, tunnel/dark, sun, dark, and your eyes get quite tired from this continuous switching. The few glimpses of the sea were nice, but the road required enough concentration (both due to traffic and the number of curves) that one couldn't look left or right.

So that was a semi-failure; I expected a nice drive, but instead it was a challenging drive ☺ If I had even more time to spend, going back via the Rhone valley (Grenoble, Geneva, Zürich) would have been a more interesting alternative.

France

Going to France always feels strange to me. I learned (some) French way before German, so the French language feels much more familiar, even without ever actually having used it on a day-to-day basis; going to France feels like getting back to somewhere I never lived. It is somewhat similar with Italian, due to the closeness between Romanian and Italian, but not the same feeling, as I didn't actually hear or learn Italian in childhood.

So I go to France, and I start partially understanding what I hear, and I can somewhat talk and communicate. Very weird, while I still struggle with German in my daily life in Zürich. For example, I would hesitate before asking for directions in German, but not in French, unrelated to my actual understanding of either language. The brain is funny…

The hotel

We stayed at Club Med Opio-en-Provence, which was interesting. Much bigger than I thought from quick looks on the internet (this internet seems quite unreliable), but also better than I expected from a family-oriented, all-inclusive hotel.

The biggest problem was the food - French Pâtisserie is one of my weaknesses, and I failed to resist. I mean, it was much better than I expected, and I indulged a bit too much. I'll have to pay that back on the bike or running :-P

The other interesting part of the hotel was the wide range of activities. Again, this being a family hotel, I thought the organised activities would be pretty mild; but at least for our group, they weren't. The mountain bike ride included an easy singletrack section, but while easy it was singletrack and rocky, so complete beginners might have had a small surprise. Overall it was about 50 minutes, 13.5km, with 230m altitude gain, which again might be unusual for sedentary people. During the ride I probably burned off one of the desserts I ate later that day ;-) The "hike" they organised for another sub-group was also interesting, involving going through old tunnels and something with broken water pipes that caused people to either get their feet wet or monkey-spider along the walls. Fun!

After the bike ride, on the same afternoon, while walking around the hotel, we found the Ecole de Trapèze volant open, which looked way too exciting not to try. I tried and failed to do things right, but nevertheless it was excellent and unexpected fun. I'll have to do that again some day when I'm more fit!

Plus, the hotel itself had a very nice location and olive garden, so short runs in the morning were very pleasant. Only one cookie each, though…

Back home

… and then it was over; short, but quite good. The Provence area is nice, and I'd like to be back again someday, for a proper vacation—longer and more relaxed. And do the trapèze thing again, properly this time.

05 June, 2016 04:25PM

Simon McVittie

Flatpak in Debian

Quite a lot has happened in xdg-app since last time I blogged about it. Most noticeably, it isn't called xdg-app any more, having been renamed to Flatpak. It is now available in Debian experimental under that name, and the xdg-app package that was briefly there has been removed. I'm currently in the process of updating Flatpak to the latest version 0.6.4.

The privileged part has also spun off into a separate project, Bubblewrap, which recently had its first release (0.1.0). This is intended as a common component with which unprivileged users can start a container in a way that won't let them escalate privileges, like a more flexible version of linux-user-chroot.

Bubblewrap has also been made available in Debian, maintained by Laszlo Boszormenyi (also maintainer of linux-user-chroot). Yesterday I sent a patch to update Laszlo's packaging for 0.1.0. I'm hoping to become a co-maintainer to upload that myself, since I suspect Flatpak and Bubblewrap might need to track each other quite closely. For the moment, Flatpak still uses its own internal copy of Bubblewrap, but I consider that to be a bug and I'd like to be able to fix it soon.

At some point I also want to experiment with using Bubblewrap to sandbox some of the game engines that are packaged in Debian: networked games are a large attack surface, and typically consist of the sort of speed-optimized C or C++ code that is an ideal home for security vulnerabilities. I've already made some progress on jailing game engines with AppArmor, but making sensitive files completely invisible to the game engine seems even better than preventing them from being opened.

Next weekend I'm going to be heading to Toronto for the GTK Hackfest, primarily to talk to GNOME and Flatpak developers about their plans for sandboxing, portals and Flatpak. Hopefully we can make some good progress there: the more I know about the state of software security, the less happy I am with random applications all being equally privileged. Newer display technologies like Wayland and Mir represent an opportunity to plug one of the largest holes in typical application containerization, which is a major step in bringing sandboxes like Flatpak and Snap from proof-of-concept to a practical improvement in security.

Other next steps for Flatpak in Debian:

  • To get into the next stable release (Debian 9), Flatpak needs to move from experimental into unstable and testing. I've taken the first step towards that by uploading libgsystem to unstable. Before Flatpak can follow, OSTree also needs to move.
  • Now that it's in Debian, please report bugs in the usual Debian way or send patches to fix bugs: Flatpak, OSTree, libgsystem.
  • In particular, there are some OSTree bugs tagged help. I'd appreciate contributions to the OSTree packaging from people who are interested in using it to deploy dpkg-based operating systems - I'm primarily looking at it from the Flatpak perspective, so the boot/OS side of it isn't so well tested. Red Hat have rpm-ostree, and I believe Endless do something analogous to build OS images with dpkg, but I haven't had a chance to look into that in detail yet.
  • Co-maintainers for Flatpak, OSTree, libgsystem would also be very welcome.

05 June, 2016 11:24AM

Petter Reinholdtsen

A program should be able to open its own files on Linux

Many years ago, when koffice was fresh and with few users, I decided to test its presentation tool when making the slides for a talk I was giving for NUUG on Japhar, a free Java virtual machine. I wrote the first draft of the slides, saved the result and went to bed the day before I would give the talk. The next day I took a plane to the location where the meeting should take place, and on the plane I started up koffice again to polish the talk a bit, only to discover that kpresenter refused to load its own data file. I cursed a bit and started making the slides again from memory, to have something to present when I arrived. I tested that the saved files could be loaded, and the day seemed to be rescued. I continued to polish the slides until I suddenly discovered that the saved file could no longer be loaded into kpresenter. In the end I had to rewrite the slides three times, condensing the content until the talk became shorter and shorter. After the talk I was able to pinpoint the problem – kpresenter wrote inline images in a way itself could not understand. Eventually that bug was fixed and kpresenter ended up being a great program to make slides. The point I'm trying to make is that we expect a program to be able to load its own data files, and it is embarrassing to its developers if it can't.

Did you ever experience a program failing to load its own data files from the desktop file browser? It is not an uncommon problem. A while back I discovered that the screencast recorder gtk-recordmydesktop would save an Ogg Theora video file that the KDE file browser would refuse to open. No video player claimed to understand such a file. I tracked the cause down to file --mime-type returning the application/ogg MIME type, which no video player I had installed listed as a MIME type it understood. I asked for file to change its behaviour and use the MIME type video/ogg instead. I also asked several video players to add video/ogg to their desktop files, to give the file browser an idea what to do with Ogg Theora files. After a while, the desktop file browsers in Debian started to handle the output from gtk-recordmydesktop properly.

But history repeats itself. A few days ago I tested the music system Rosegarden again, and I discovered that the KDE and Xfce file browsers did not know what to do with the Rosegarden project files (*.rg). I've reported the Rosegarden problem to the BTS, and a fix is committed to git and will be included in the next upload. To increase the chance of me remembering how to fix the problem next time some program fails to load its files from the file browser, here are some notes on how to fix it.

The file browsers in Debian generally operate on MIME types. There are two sources for the MIME type of a given file: the output from file --mime-type mentioned above, and the content of the shared MIME type registry (under /usr/share/mime/). The file's MIME type is mapped to programs supporting that MIME type, and this information is collected from the desktop files available in /usr/share/applications/. If there is one desktop file claiming support for the MIME type of the file, it is activated when asking to open a given file. If there are more, one can normally select which one to use by right-clicking on the file and selecting the wanted one using 'Open with' or similar. In general this works well. But it depends on each program picking a good MIME type (preferably a MIME type registered with IANA), on file and/or the shared MIME registry recognizing the file, and on the desktop file listing the MIME type in its list of supported MIME types.

The /usr/share/mime/packages/rosegarden.xml entry for the Shared MIME database looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="audio/x-rosegarden">
    <sub-class-of type="application/x-gzip"/>
    <comment>Rosegarden project file</comment>
    <glob pattern="*.rg"/>
  </mime-type>
</mime-info>

This states that audio/x-rosegarden is a kind of application/x-gzip (it is a gzipped XML file). Note that it is much better to use an official MIME type registered with IANA than to make up one's own unofficial ones like the x-rosegarden type used by rosegarden.

The desktop file of the rosegarden program failed to list audio/x-rosegarden in its list of supported MIME types, causing the file browsers to have no idea what to do with *.rg files:

% grep Mime /usr/share/applications/rosegarden.desktop
MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;
X-KDE-NativeMimeType=audio/x-rosegarden-composition
%

The fix was to add "audio/x-rosegarden;" at the end of the MimeType= line.
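
With the fix in place, the same grep should show the new type at the end of the MimeType= line:

% grep Mime /usr/share/applications/rosegarden.desktop
MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;audio/x-rosegarden;
X-KDE-NativeMimeType=audio/x-rosegarden-composition
%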

If you run into a file which fails to open in the correct program when selected from the file browser, please check the output from file --mime-type for the file, ensure the file extension and MIME type are registered somewhere under /usr/share/mime/, and check that some desktop file under /usr/share/applications/ claims support for this MIME type. If not, please report a bug to have it fixed. :)
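
In short, the checks boil down to a handful of greps (the file name, glob pattern and MIME type below are just placeholders):

% file --mime-type somefile.foo
% grep -r 'pattern="\*.foo"' /usr/share/mime/packages/
% grep -l 'application/x-something' /usr/share/applications/*.desktop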

05 June, 2016 06:30AM

Jamie McClelland

Signal and Mobile XMPP Update

First, many thanks to Planet Debian readers for your thoughtful and constructive feedback to my Signal and Mobile Instant Messaging blogs. I learned a lot.

Particularly useful was the comment directing me to Daniel Gultsch's The State of Mobile in 2016 post.

I had previously listed the outstanding technical challenges as:

  • Implement end-to-end encryption
  • Receive messages the moment they are sent without draining the battery

I am now fairly convinced that both problems are well-solved on Android via the Conversations app and a well-tuned XMPP server (I had no idea how easy it was to install your own Prosody modules -- the client state indicator module is only 22 lines of Lua code!)

I think the current technical challenges could be better summarized as: adding iOS (iPhone) support. Both end-to-end encryption and receiving messages consistently seem to be hurdles. However, it seems that Chris Ballinger and the Chat Secure team are well on their way toward solving the push issue, while facing funder skittishness on the encryption front. Nonetheless, both seem to be progressing.

With the obvious technical hurdles in progress, we have the luxury of talking about the less obvious ones - particularly the ones requiring trade-offs.

In particular: Signal replaces your SMS client. It looks and feels like an SMS client, automatically sends unencrypted messages to everyone in your address book who is not on Signal, and sends encrypted messages to those who are.

The significance of this feature is hard to overstate. It differentiates tools built by and for technically minded people from those designed for a mass audience.

When I convince people to use Conversations, in contrast, I have to teach them to:

  • Create an entirely new address book by entering XMPP addresses for your friends, addresses you don't already have
  • Use a new and different app for sending encrypted messages

For most people who don't (yet) have their friends' XMPP addresses, or who don't have any friends who use XMPP, it means that they will install it, send me a few messages and then never use it again.

The price Signal pays for this convenience is steep: Signal seems to synchronize your entire address book to their servers so they can keep a map of cell phone numbers to Signal users. It's not only creepy (I get a text message every time someone in my address book joins Signal), it also flies in the face of expectations for a privacy-minded application.

How could we take advantage of this feature, without the privacy problems?

What if...

  • Our app could send both XMPP messages and SMS messages
  • Every time you added a new XMPP contact, it added the contact to your address book with a new XMPP field
  • Any time you send a message to a contact with an XMPP field filled in, it would send via XMPP, and otherwise it would send a normal SMS message (a rough sketch of this routing rule follows below)
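
A rough sketch of that routing rule, as hypothetical Python pseudo-API (none of these helpers exist in any real library; they only illustrate the decision):

# Hypothetical sketch: prefer XMPP when the contact's address-book entry
# has an XMPP field, otherwise fall back to a normal SMS.

def send_via_xmpp(jid, text):
    print("would send via XMPP to", jid)    # stand-in for a real XMPP client

def send_via_sms(number, text):
    print("would send via SMS to", number)  # stand-in for the SMS stack

def send_message(contact, text):
    if contact.get("xmpp"):                 # the proposed new XMPP field
        send_via_xmpp(contact["xmpp"], text)
    else:
        send_via_sms(contact["phone"], text)

send_message({"phone": "+15551234", "xmpp": "friend@example.org"}, "hi")
send_message({"phone": "+15555678"}, "hi")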

The main downside (which Signal faces as well) is that you have to contend with the complexities of sending SMS messages on top of the work needed to write a well-functioning XMPP client. As I mentioned in my Signal blog, there is no shortage of MMS bugs against Signal. Nobody wants that headache.

Additionally, we would still lose one Signal feature: with Signal, when a user joins, everyone automatically sends them encrypted messages. With this proposed app, each user would have to manually add the XMPP address and would have no way of knowing when one of their friends gets an XMPP address.

Any other ideas?

05 June, 2016 03:03AM

June 04, 2016

Jaminy Prabaharan

Weekly Report for GSoC16-Community bonding period

April 23rd to May 23rd

This was the period for introducing ourselves to the Debian community. I have updated my Debian wiki page to tell the community more about myself:

https://wiki.debian.org/SummerOfCode2016/StudentApplications/Jaminy

There was a WebRTC session of the MiniDebConf through Jitsi on 30th April to learn more about the Debian resources.

During this period I have updated my PC to the latest Debian version, Jessie, and got used to the new platform. I have also learnt some basic theory for my project, such as VoIP and IMAP. I was assigned by my mentor Daniel Pocock to work on Telepathy reSIProcate.

  • Steps to install reSIProcate

System used

  • Debian GNU/Linux 8.3 (jessie)
  • Ubuntu 14.04.4 LTS (trusty)

Telepathy-Qt

First you have to configure the telepathy-qt library properly to be able to install reSIProcate. It’s important to note that you shouldn’t install telepathy-qt from apt-get, because that way it won’t have the telepathy-qt4-service shared library.

$ mkdir ~/telepathy-qt-stuff
$ cd ~/telepathy-qt-stuff
$ git clone https://github.com/dpocock/telepathy-qt-debian
$ cd telepathy-qt-debian
$ git checkout jessie-build-all-shared
$ cd ..

Then you should download the tarball http://http.debian.net/debian/pool/main/t/telepathy-qt/telepathy-qt_0.9.6.1.orig.tar.gz into the telepathy-qt-stuff folder and continue:

$ tar xzf telepathy-qt_0.9.6.1.orig.tar.gz
$ cd telepathy-qt_0.9.6.1
$ [ -d debian ] && echo "warning: debian/ already exists!"
$ cp -r ../telepathy-qt-debian/debian .
$ dpkg-buildpackage -rfakeroot -i.* -j13 -us -uc
$ cd ..
$ ls *.deb

Now you should see a list of libtelepathy-qt* and telepathy-qt* .deb packages. You just have to install a few of them:

$ dpkg -i libtelepathy-qt4-2_0.9.6.1-2_amd64.deb libtelepathy-qt4-dev_0.9.6.1-2_amd64.deb libtelepathy-qt4-farstream2_0.9.6.1-2_amd64.deb

After that you have the necessary packages to install reSIProcate.

$ dpkg -l | grep telepathy-qt

It should return something like this:

ii  libtelepathy-qt4-2:amd64           0.9.6.1-2  amd64  Telepathy framework – Qt 4 library
ii  libtelepathy-qt4-dev               0.9.6.1-2  amd64  Qt 4 Telepathy library (headers and static library)
ii  libtelepathy-qt4-farstream2:amd64  0.9.6.1-2  amd64  Telepathy/Farsight integration – Qt 4 library

reSIProcate

After installing telepathy-qt properly you would be able to configure reSIProcate.

Make sure you have added backports to your /etc/apt/sources.list file.
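
For jessie that typically means a line like the following (the mirror is only an example):

deb http://http.debian.net/debian jessie-backports main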

$ git clone https://github.com/resiprocate/resiprocate
$ cd resiprocate
$ apt-get install libpq-dev dh-autoreconf
$ apt-get build-dep resiprocate
$ apt-get install -t jessie-backports libradcli-dev
$ ./build/debian.sh
$ make

And then you are done!


04 June, 2016 08:20AM by jaminycom

June 03, 2016

Eduard Sanou

No more unencrypted emails to gpg contacts

I have been using mutt for about half a year already and I’m very happy with it. The previous email client I used was Thunderbird (with the Enigmail extension to handle GPG). There were two main reasons that made me switch.

The first one was that I often would like to check my email while I’m offline, and it seems that Thunderbird is not very good at this. Sometimes not all my email would have been downloaded (just the headers), and I also found it frustrating that after marking more than 50 emails as read while offline, they would be marked as unread again once I went back online. With mutt I’m using mbsync (which apparently is faster than offlineimap) to sync my email to a local folder with a cron job. I couldn’t be happier.
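
The cron job can be as simple as the following (the interval is arbitrary; mbsync -a syncs all configured channels):

*/10 * * * * mbsync -a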

The other issue was that I like having many filters, and it was tedious to customize filters in Thunderbird: there’s no way to copy a filter and modify it, and there’s a limit on the combinations of ANDs and ORs for fields. I’m using procmail now, which allows me to save the filter configuration in plain text and define patterns with more flexibility.
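
For illustration, a procmail recipe is just a few lines of plain text; this hypothetical rule files a mailing list into its own maildir (the list and folder names are made up):

:0:
* ^List-Id:.*debian-user\.lists\.debian\.org
lists/debian-user/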

The setup for mutt took several weeks, but I never felt that I couldn’t accomplish what I wanted (unlike in Thunderbird). I’m using mutt with several python and bash scripts that I wrote.

But the reason for this post is an issue that I believe happens in every email client (or should I say MUA, to be more precise). I’ve seen it happen to people using both Thunderbird and mutt, and I bet it has happened in other cases: unintentionally sending an unencrypted email to someone whose GPG key you have. I’ve seen this happen in email replies with several participants: after a few encrypted messages are exchanged, someone replies in the clear, quoting all the previous messages. I tried to avoid this by configuring mutt to encrypt and sign by default, forcing me to manually switch to sign-only before sending every email that I can’t send encrypted (I’d like to send all my emails encrypted, but not everybody has a GPG key :( ).
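
In mutt that kind of default can be set with the crypt_autosign and crypt_autoencrypt options, roughly like this in the muttrc:

set crypt_autosign = yes
set crypt_autoencrypt = yes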

So what happened? I got so used to sending many unencrypted emails that I would press “P S” (PGP setting, Sign only) before sending emails as a reflex. And I sent an unencrypted email to a friend whose GPG key I have :(

So I thought: it’s a very rare case to want to send an unencrypted email to someone whose GPG key you have. I think extensions like Enigmail should give you a warning when this happens, to alert you about it. In my case, I solved it with a python script that inspects the email and, if it’s unencrypted and the recipient(s) are in your GPG keyring, warns you about it and returns an error. The script stores a temporary file with the Message-ID so that if you run it again with the same email it will properly send the email without returning an error.

Now, I only needed to configure mutt to use this script as the sendmail command:

per_account:set sendmail  = "$HOME/bin/check-mail-gpg /usr/bin/msmtp -a $my_email"

And here goes the python script check-mail-gpg:

#! /usr/bin/python3

import os
import sys
import subprocess
import email.parser
from email.header import decode_header
from email.utils import parseaddr

STATUS_FILE = '/tmp/check-mail-gpg.tmp'

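# Decode a possibly RFC 2047 encoded header into a plain string.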
def dec(header):
    head = decode_header(header)
    if len(head) == 1 and head[0][1] == None:
        return head[0][0]
    else:
        return ''.join([h.decode(enc) if enc else h.decode('ascii') \
                for (h,enc) in head])

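# Pipe the raw message to the real sendmail command given on our command line.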
def send_mail(mail):
    print('Calling external email client to send the email...')
    #return -1 # testing mode
    p = subprocess.Popen(sys.argv[1:], stdin=subprocess.PIPE)
    p.stdin.write(mail.encode('utf-8'))
    p.stdin.close()
    return p.wait()

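# Read the message from stdin; if it is unencrypted and any recipient has a
# key in the GPG keyring, refuse to send it (until re-run with the same Message-ID).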
def main():
    mail = sys.stdin.read()
    heads = email.parser.Parser().parsestr(mail, headersonly=True)
    content = heads['Content-Type'].split(';')[0].strip()
    print('Content is:', content)

    if content == 'multipart/encrypted':
        print('Ok: encrypted mail, we can return now...')
        sys.exit(send_mail(mail))

    addrs = [parseaddr(addr) for addr in heads['To'].split(',')]
    print('Found emails:', addrs)

    gpg_cnt = 0
    for name, addr in addrs:
        print('Looking for', addr, 'in the keyring...')
        res = subprocess.call(['gpg', '--list-keys', addr],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if res == 0:
            gpg_cnt += 1

    if gpg_cnt == 0:
        print('Ok: no email found in the gpg keyring, we can return now...')
        sys.exit(send_mail(mail))

    if not os.path.exists(STATUS_FILE):
        open(STATUS_FILE, 'w').close()

    msg_id = heads['Message-ID']
    msg_id_prev = ''
    with open(STATUS_FILE, 'r') as f:
        msg_id_prev = f.read()

    if msg_id.strip() == msg_id_prev.strip():
        sys.exit(send_mail(mail))
    else:
        with open(STATUS_FILE, 'w') as f:
            f.write(msg_id)
        print('Alert: trying to send an unencrypted email to', addrs,
                ', for which some gpg keys were found in the keyring!')
        print('Try again if you are sure to send this message unencrypted.')
        sys.exit(1)

if __name__ == '__main__':
    main()

Update

I’ve been told about the option crypt_opportunistic_encrypt in mutt, which provides a feature very similar to what I was looking for. This option will automatically enable encryption when the recipient has a GPG key in your keyring.

From mutt’s man page:

3.41. crypt_opportunistic_encrypt

Type: boolean Default: no

Setting this variable will cause Mutt to automatically enable and disable encryption, based on whether all message recipient keys can be located by mutt.

When this option is enabled, mutt will determine the encryption setting each time the TO, CC, and BCC lists are edited. If $edit_headers is set, mutt will also do so each time the message is edited.

While this is set, encryption settings can’t be manually changed. The pgp or smime menus provide an option to disable the option for a particular message.

If $crypt_autoencrypt or $crypt_replyencrypt enable encryption for a message, this option will be disabled for the message. It can be manually re-enabled in the pgp or smime menus. (Crypto only)
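
Enabling it is then a single line in the muttrc:

set crypt_opportunistic_encrypt = yes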

03 June, 2016 10:13PM