October 18, 2016

Enrico Zini

debtags and aptitude forget-new

I like to regularly go through the new packages section in aptitude to see what interesting new packages entered testing, but recently that joyful moment got less joyful for me because of a barrage of obscurely named packages.

I have just realised that aptitude forget-new supports search patterns, and that brought back the joy.

I put this in a script that I run before looking for new packages in aptitude:

aptitude forget-new '?tag(field::biology)
                   | ?tag(devel::lang:ruby)
                   | ?tag(devel::lang:perl)
                   | ?tag(role::shared-lib)
                   | ?tag(suite::openstack)
                   | ?tag(implemented-in::php)
                   | ~n^node-'

The actual content of the search pattern is purely a matter of taste.

I'm happy to see debtags become quite useful here, keeping my own user experience manageable as the size of Debian keeps growing.

18 October, 2016 08:25AM

MJ Ray

Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

18 October, 2016 04:28AM by mjr

Dirk Eddelbuettel

gettz 0.0.2

Release 0.0.2 of gettz is now on CRAN.

gettz provides a possible fallback for situations in which Sys.timezone() fails to determine the system timezone. That can happen when, e.g., the file /etc/localtime is somehow not a link to the corresponding zoneinfo data file under, say, /usr/share/zoneinfo.
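
A quick way to check whether your system is in that state is to look at where /etc/localtime actually points (a small shell sketch; the paths are the usual Debian locations and may differ elsewhere):

readlink -f /etc/localtime    # a healthy setup resolves into /usr/share/zoneinfo
cat /etc/timezone             # the configured timezone name, if that file exists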

Windows is now no longer excluded, though it doesn't do anything useful yet. The main use of the package is still for Linux.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 October, 2016 02:16AM

October 17, 2016

Russ Allbery

pgpcontrol 2.5

pgpcontrol is the collection of the original signing and verification scripts that David Lawrence wrote (in Perl) for verifying Usenet control messages. I took over maintenance of it, along with a few other things, but haven't really done much with it. It would benefit a lot from an overhaul of both the documentation and the code, and from being turned into a more normal Perl module with supporting scripts.

This release is none of those things. It's just pure housekeeping, picking up changes made by other people (mostly Julien ÉLIE) to the copies of the scripts in INN and making a few minor URL tweaks. But I figured I may as well, rather than distribute old versions of the scripts.

You can tell how little I've done with this stuff by noting that they don't even have a distribution page on my web site. The canonical distribution site is ftp.isc.org, although I'm not sure if that site will pick up the new release. (This relies on a chain of rsync commands that have been moved multiple times since the last time I pushed the release button, and I suspect that has broken.) I'll ping someone about possibly fixing that; in the meantime, you can find the files on archives.eyrie.org.

17 October, 2016 11:36PM

Arturo Borrero González

nftables in Debian Stretch

The next Debian stable release is codenamed Stretch, which I would expect to be released in less than a year.

The Netfilter Project has been developing nftables for years now, and the status of the framework has been improved to a good point: it’s ready for wide usage and adoption, even in high-demand production environments.

The last released version of nft was 0.6, and the Debian package was updated just a day after Netfilter announced it.

With the 0.6 version the software framework reached a good state of maturity, and I myself encourage users to migrate from iptables to nftables.

In case you don’t know about nftables yet, here is a quick reference:

  • it’s the tool/framework meant to replace iptables (also ip6tables, arptables and ebtables)
  • it integrates advanced structures which allow to arrange your ruleset for optimal performance
  • all the system is more configurable than in iptables
  • the syntax is much better than in iptables
  • several actions in a single rule
  • simplified IPv4/IPv6 dual stack
  • less kernel updates required
  • great support for incremental, dynamic and atomic ruleset updates
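
To illustrate a couple of those points, a single rule in the inet family matches both IPv4 and IPv6 and can combine a set of ports, a counter and a verdict in one go (an illustrative command; the table and chain names assume the workstation ruleset shown later in this post):

root@debian:~# nft add rule inet filter input tcp dport '{ 22, 80, 443 }' counter accept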

To run nftables in Debian Stretch you need several components:

  1. nft: the command line interface
  2. libnftnl: the nftables-netlink library
  3. Linux kernel: at least 4.7 is recommended

A simple aptitude run will get your system ready to go, out of the box, with nftables:

root@debian:~# aptitude install nftables

Once installed, you can start using the nft command:

root@debian:~# nft list ruleset

A good starting point is to copy a simple workstation firewall configuration:

root@debian:~# cp /usr/share/doc/nftables/examples/syntax/workstation /etc/nftables.conf

And load it:

root@debian:~# nft -f /etc/nftables.conf

Your nftables ruleset is now firewalling your network:

root@debian:~# nft list ruleset
table inet filter {
        chain input {
                type filter hook input priority 0;
                iif lo accept
                ct state established,related accept
                ip6 nexthdr icmpv6 icmpv6 type { nd-neighbor-solicit,  nd-router-advert, nd-neighbor-advert } accept
                counter drop
        }
}

Several examples can be found at /usr/share/doc/nftables/examples/.

A simple systemd service that loads your ruleset at boot time is included; it is disabled by default.
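
To load the ruleset on every boot, enable that service (assuming the unit name shipped by the Debian package, nftables.service):

root@debian:~# systemctl enable nftables.service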

If you are running Debian Jessie and want to give it a try, you can use nftables from jessie-backports.

If you want to migrate your ruleset from iptables to nftables, good news: there are tools in place to help with the task of translating from iptables to nftables, but that doesn't belong in this post :-)

The nano editor includes nft syntax highlighting. What are you waiting for to use nftables?

17 October, 2016 01:30PM

Thomas Lange

FAI 5.2 is going to the cloud

The newest version of FAI, the Fully Automatic Installation tool set, now supports creating disk images for virtual machines or for your cloud environment.

The new command fai-diskimage uses the normal FAI process for building disk images of different formats. An image with a small set of packages can be created in less than 50 seconds, a Debian XFCE desktop in nearly two minutes, and a complete Ubuntu 16.04 desktop image in four minutes.

New FAI installation images for CD and USB stick are also available.

Update: Add link to announcement

17 October, 2016 11:51AM

Jaldhar Vyas

Something Else Will Be Posted Soon Also.

Yikes, today was Sharad Purnima, which means there are about two weeks to go before Diwali and I haven't written anything here all year.

OK new challenge: write 7 substantive blog posts before Diwali. Can I manage to do it? Let's see...

17 October, 2016 06:07AM

Russell Coker

Improving Memory

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory and I think it’s good to teach kids a variety of things many of which won’t be needed when they are younger as you never know which kids will need various skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful, and it seems to me that the way the navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google Maps tends to give the more confusing routes (i.e. routes varying by the day and routes which take all shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2-digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13, which is a significant disadvantage.
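
(That kind of mental multiplication is just decomposition: 47 × 13 is 47 × 10 + 47 × 3, which is 470 + 141 = 611; nothing beyond the tables up to 10 is needed.)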

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is ATM PIN numbers. The Wikipedia page about PIN numbers states that 4-12 digits can be used for PINs [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60’s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?

17 October, 2016 04:20AM by etbe

October 16, 2016

Thomas Goirand

Released OpenStack Newton, Moving OpenStack packages to upstream Gerrit CI/CD

OpenStack Newton is released, and uploaded to Sid

OpenStack Newton was released on Thursday the 6th of October. I was able to upload nearly all of it before the week-end, though there were still a few hiccups, as I forgot to upload python-fixtures 3.0.0 to unstable, and only realized it thanks to some bug reports. As this is a build time dependency, it didn’t disrupt Sid users too much, but 38 packages wouldn’t build without it. Thanks to Santiago Vila for pointing at the issue here.

As of writing, a lot of the Newton packages haven’t migrated to Testing yet. The migration has been happening in a very messy way. I’d love to improve this process, but I’m not sure how, short of filing RC bugs against 250 packages (which would be painful to do) so they would migrate at once. Suggestions welcome.

Bye bye Jenkins

For a few years, I was using Jenkins, together with a post-receive hook, to build Debian Stable backports of OpenStack packages. Then, nearly a year and a half ago, we started the project to build the packages within the OpenStack infrastructure and to use CI/CD the way OpenStack upstream does. This is now done, and Jenkins is gone, as of OpenStack Newton.

Current status

As of August, almost all of the packages' Git repositories had been uploaded to OpenStack Gerrit, and the builds now happen in the OpenStack infrastructure. We’ve been able to build all of the OpenStack Newton Debian packages using this system. The resulting non-official jessie backports repository has also been validated using Tempest.

Goodies from Gerrit and upstream CI/CD

It is very nice to have it built this way, so we will be able to maintain a full CI/CD in upstream infrastructure using Newton for the life of Stretch, which means we will have the tools to test security patches virtually forever. Another thing is that now, anyone can propose packaging patches without the need for an Alioth account, by sending a patch for review through Gerrit. It is our hope that this will increase the likelihood of external contributions, for example from 3rd-party plugin vendors (networking driver vendors, for instance) or from upstream contributors themselves. They are already used to Gerrit, and they all expected the packaging to work this way. They are all very much welcome.

The upstream infra: nodepool, zuul and friends

The OpenStack infrastructure has already been described on planet.debian.org by Ian Wienand, so I won’t describe it again; he did a better job than I ever would.

How it works

All source packages are stored in Gerrit with the “deb-” prefix. This is in order to avoid conflicts with upstream code, and to easily locate packaging repositories. For example, you’ll find the Nova packaging under https://git.openstack.org/cgit/openstack/deb-nova. Two Debian repositories are stored in the infrastructure AFS (Andrew File System, which means a copy of each repository exists on every cloud where we have compute resources): one for the actual deb-* builds, under “jessie-newton”, and one for the automatic backports, maintained in the deb-auto-backports Gerrit repository.

We’re using a “git tag” based workflow. Every Gerrit repository contains all of the upstream branches, plus a “debian/newton” branch, which contains the same content as a tag of upstream, plus the debian folder. The orig tarball is generated using “git archive”, then used by sbuild to produce binaries. To package a new upstream release, one simply needs to “git merge -X theirs FOO” (where FOO is the tag you want to merge), then edit debian/changelog so that the Debian package version matches the tag, then do “git commit -a --amend”, and simply “git review”. At this point, the OpenStack CI will build the package. If it builds correctly, then a core reviewer can approve the “merge commit”, the patch is merged, then the package is built and the binary package published on the OpenStack Debian package repository.
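
Sketched as shell commands, that per-release flow looks roughly like this (an illustration only: deb-nova and the 14.0.0 tag are placeholder examples, and dch is merely one convenient way to edit debian/changelog):

# inside a clone of a deb-* packaging repository (deb-nova is used as the example)
git checkout debian/newton
git merge -X theirs 14.0.0                # merge the upstream release tag
dch -v 14.0.0-1 "New upstream release"    # make the Debian package version match the tag
git commit -a --amend                     # fold the changelog edit into the merge commit
git review                                # send to Gerrit; the OpenStack CI builds the package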

Maintaining backports automatically

The automatic backports are maintained through a Gerrit repository called “deb-auto-backports”, containing a “packages-list” file that simply lists the source packages we need to backport. On each new CR (change request) in Gerrit, thanks to some madison-lite and dpkg --compare-versions magic, the packages-list is used to compare what’s in the Debian archive and what we have in the jessie-newton-backports repository. If the version is lower in our repository, or if the package doesn’t exist, then a build is triggered. There is the possibility to backport from any Debian release (using the -d flag in the “packages-list” file), and we can even use jessie-backports to just rebuild the package. I also had to write a hack to just download from jessie-backports without rebuilding, because rebuilding the webkit2gtk package (needed by sphinx) was taking too many resources (though we’ll try to never use it, and rebuild packages when possible).
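
The heart of that comparison can be sketched in a few lines of shell (a simplified illustration, not the actual job; it ignores the -d flag and all error handling):

# $pkg comes from packages-list; $archive_ver was looked up in the Debian archive,
# $repo_ver is what the jessie-newton-backports repository currently carries (may be empty)
if [ -z "$repo_ver" ] || dpkg --compare-versions "$repo_ver" lt "$archive_ver"; then
    echo "triggering a backport build of $pkg $archive_ver"
fi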

The nice thing with this system, is that we don’t need to care much about maintaining packages up-to-date: the script does that for us.

Upstream Debian repositories are NOT for production

The produced package repositories are there because we have interconnected build dependencies, needed to run unit tests at build time. That is the only reason why such Debian repositories exist. They are not for production use. If you wish to deploy OpenStack, we very much recommend using packages from distributions (like Debian or Ubuntu). Indeed, the infrastructure Debian repositories are updated multiple times daily. As a result, it is very likely that you will experience failures to download (hash or file size mismatches and such). Also, the functional tests aren’t yet wired into the CI/CD in the OpenStack infra, and therefore we cannot yet guarantee that the packages are usable.

Improving the build infrastructure

There’s a bunch of things which we could do to improve the build process. Let me give a list of things we want to do.

  • Get sbuild pre-set-up in the Jessie VM images, so we can save 3 minutes per build. This means writing a diskimage-builder element for sbuild.
  • Have the infrastructure use a state-of-the-art Debian ftpsync mirror, instead of the current reprepro mirroring which produces an unsigned repository that we can’t use for sbuild-createchroot. This will improve things a lot, as currently there are lots of build failures because of httpredir.debian.org mirror inconsistencies (and these are very frustrating losses of time).
  • For each packaging change, there are 3 builds: the check job, the gate job, and the POST job. This is a loss of time and resources, as we only need to build a package once. It will hopefully be possible to fix this when the OpenStack infra team deploys Zuul 3.

Generalizing to Debian

During Debconf 16, I had very interesting talks with the DSA (Debian System Administrator) about deploying such a CI/CD for the whole of the Debian archive, interfacing Gerrit with something like dgit and a build CI. I was told that I should provide a proof of concept first, which I very much agreed with. Such a PoC is there now, within OpenStack infra. I very much welcome any Debian contributor to try it, through a packaging patch. If you wish to do so, you should read how to contribute to OpenStack here: https://wiki.openstack.org/wiki/How_To_Contribute#If_you.27re_a_developer and then simply send your patch with “git review”.

This system, however, currently only fits the “git tag” based packaging workflow. We’d have to do a little bit more work to make it possible to use pristine-tar (basically, allowing pushes to the upstream and pristine-tar branches without any CI job connected to the push).

Dear DSA team, as we now have a nice PoC that is working well, on which the OpenStack PKG team is maintaining 100s of packages, shall we try to generalize and provide such infrastructure for every packaging team and DD?

16 October, 2016 09:28PM by Goirand Thomas

Dirk Eddelbuettel

Rcpp now used by 800 CRAN packages

800 Rcpp packages

A moment ago, Rcpp hit another milestone: 800 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time.

The easiest way to compute this is to use the reverse_dependencies_with_maintainers() function from a helper scripts file on CRAN. This still gets one or two false positives from packages declaring a dependency but not actually containing C++ code, and the like. There is also a helper function revdep() in the devtools package, but it includes Suggests:, which does not firmly imply usage and hence inflates the count. I have always opted for a tighter count with corrections.

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June of last year (when I only tweeted about it), 500 packages less than a year ago in late October, 600 packages this March and 700 packages this July. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014 where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of last year, seven percent just before Christmas and eight percent this summer.

800 user packages is a staggeringly large and humbling number. This puts more than some responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

At the rate we are going, the big 1000 may be hit before we all meet again for useR! 2017.

And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 October, 2016 07:42PM

Steinar H. Gunderson

backup.sh opensourced

It's been said that backup is a bit like flossing; everybody knows you should do it, but nobody does it.

If you want to start flossing, an immediate question is what kind of dental floss to get—and conversely, for backup, which backup software do you want to rely on? I had some criteria:

  • Automated full-system backup, not just user files.
  • Self-controlled, not cloud (the cloud economics don't really make sense for 10 TB+ of backup storage, especially when you factor in restore cost).
  • Does not require one file on the backup server for each file on the backed-up server (makes for infinitely long fscks, greatly increased risk of file system corruption, frequently gives performance problems on the backup host, and makes inter-file compression impossible).
  • Not written in Python (makes for glacial speeds).
  • Pull backups, not push (so a backed-up server cannot delete its own backups in event of a break-in).
  • Does not require any special preparation or lots of installation on each server.
  • Ideally, restore using UNIX standard tools only.

I looked at basically everything that existed in Debian and then some, and all of them failed. But Samfundet had its own script that's basically just a simple wrapper around tar and ssh, which has worked for 15+ years without a hitch (including several restores), so why not use it?

All the authors agreed to a GPLv2+ licensing, so now it's time for backup.sh to meet the world. It does about the simplest thing you can imagine: ssh to the server and use GNU tar to tar down every filesystem that has the “dump” bit set in fstab. Every 30 days, it does a full backup; otherwise, it does an incremental backup using GNU tar's incremental mode (which makes sure you will also get information about file deletes). It doesn't do inter-file diffs (so if you have huge files that change only a little bit every day, you'll get blowup), and you can't do single-file restores without basically scanning through all the files; tar isn't random-access. So it doesn't do much fancy, but it works, and it sends you a nice little email every day so you can know your backup went well. (There's also a less frequently used mode where the backed-up server encrypts the backup using GnuPG, so you don't even need to trust the backup server.) It really takes fifteen minutes to set up, so now there's no excuse. :-)
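
The core idea can be sketched in a couple of shell lines (a simplified, hypothetical illustration of the approach rather than the actual backup.sh; the host and path names are made up):

# run on the backup server: pull an incremental backup of / from host "web1" over ssh
DEST=/backup/web1/root-$(date +%F).tar.gz
ssh root@web1 "tar --one-file-system --listed-incremental=/var/lib/backup/root.snar -czf - /" > "$DEST"
# removing the remote .snar state file makes the next run a full (level 0) backup again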

Oh, and the only good dental floss is this one. :-)

16 October, 2016 01:43PM

Rémi Vanicat

Trying to install Debian on G752VM-GC006T

I'm trying to install Debian GNU/linux on my new ASUS G752VM-GC006T

So what I've discovered:

  • It's F2 to get into the BIOS, and in the last BIOS section you can directly boot from any device.
  • It boots from the netinst DVD.
  • The netinst can't see the SSD disk.
  • The trackpad doesn't work.
  • After a successful install, booting into the fresh install failed. I had to use the recovery tools to install the non-free nvidia package to get Debian to boot successfully.
  • I mostly use sid on my computers (mostly to test problems, and report them). It was a bad idea: Debian stopped finding its own disk. Adding pci=nomsi to the kernel options fixed this.

So I have a working Linux. My problems are:

  • I still can't see the SSD disk from Linux.
  • I cannot easily dual-boot:
    • Linux can't see the SSD where Windows is,
    • the Windows boot loader doesn't want to start Debian, just because it doesn't want to,
    • at least the BIOS can boot both of them, but there is no "pretty" menu.
  • The trackpad is not working.
  • 0.5 TB feels small today...

And the question is: where to report those bugs.

First edit: rEFInd seems to find Windows and Debian, thanks to blackcat77.

16 October, 2016 12:13PM

Mirco Bauer

Debian 8 on Dell XPS 15

It was time for a new work laptop so I got a Dell XPS 15 9950. I wasn't planning to write a blog post about how to install Debian 8 "Jessie" on the laptop, but since it wasn't just install-and-use, I will share what is needed to get the wifi and graphics card to work.

So first download the DVD-1 AMD64 image of Debian 8 from your favorite download mirror. The closest one for me is the Hong Kong mirror. You do not need to download the other DVDs, just the first one is sufficient. The netinstaller and CD images will not provide a good experience since they need a working network/internet connection. With the DVD image you can do a full default desktop install and most things will just work out-of-the-box.

Now you can do a regular install, no special procedure or anything will be needed. Depending on your desktop selection it will boot right into lovely GNOME3.

You will quickly notice that the wifi is not working out-of-the-box though. It is a Qualcomm Atheros QCA6174, and the Linux kernel version 3.16 shipped with Debian 8 does not support that wifi card. This card needs the ath10k_pci kernel module, which is included in a newer Linux kernel package from the Debian backports archive. If you don't have the Dell docking station, as I don't, then there is no wired ethernet that you can use for getting a temporary Internet connection. So use a different computer with Internet access to download the following packages from the Debian backports archive manually and put them on a USB disk.

After that, connect the USB disk to the new Dell laptop and mount the disk using the GNOME3 file browser (nautilus). It will mount the USB disk to /media/$your_username/$volume_name. Become root using sudo or su. Then install all the downloaded packages from the USB disk like this:

cd /media/$your_username/$volume_name
dpkg -i linux-base_*.deb
dpkg -i linux-image-4.7.0-0.bpo.1-amd64_*.deb   # backports kernel with the ath10k_pci module
dpkg -i firmware-atheros_*.deb                  # firmware for the QCA6174 wifi card
dpkg -i firmware-misc-nonfree_*.deb             # assorted non-free firmware
dpkg -i xserver-xorg-video-intel_*.deb          # newer Intel X.org driver for the graphics card

That's it. If dpkg finished without error messages then you can reboot, and your wifi and graphics card should just work! After the reboot you can verify the wifi card is recognized by running "/sbin/iwconfig" and seeing if wlan0 shows up.

Have fun with your Dell XPS and Debian!

PS: if this does not work for you, leave a comment or write to meebey at meebey . net

16 October, 2016 03:46AM

Dirk Eddelbuettel

tint 0.0.3: Tint Is Not Tufte

The tint package on CRAN, whose name stands for Tint Is Not Tufte, offers a fresh take on the excellent Tufte style for html and pdf presentations.

It marks a milestone for me: I finally have a repository with more "stars" than commits. Gotta keep an eye on the big prize...

Kidding aside, and as a little teaser, here is what the pdf variant looks like:

This release corrects one minor misfeature in the pdf variant. It also adds some spit and polish throughout, including a new NEWS.Rd file. We quote from it the entries for the current as well as previous releases:

Changes in tint version 0.0.3 (2016-10-15)

  • Correct pdf mode to not use italics in table of contents (#9 fixing #8); also added color support for links etc

  • Added (basic) Travis CI support (#10)

Changes in tint version 0.0.2 (2016-10-06)

  • In html mode, correct use of italics and bold

  • Html function renamed to tintHtml (PRs #6 and #7)

  • Added pdf mode with new function tintPdf(); added appropriate resources (PR #5)

  • Updated resource files

Changes in tint version 0.0.1 (2016-09-24)

  • Initial (non-CRAN) release to ghrr drat

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the tint page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 October, 2016 01:17AM

October 15, 2016

Thorsten Alteholz

DOPOM: libmatthew-java – Unix socket API and bindings for Java

While looking at the “action needed” paragraph of one of my packages, I saw that a dependency was orphaned and needed a new maintainer. So I decided to restart DOPOM (Debian Orphaned Package Of the Month), which I started in 2012 with ent as the first package.

This month I adopted libmatthew-java. Sure it was not a big deal as the QA-team already did a good job and kept the package in shape. But now there is one burden lifted from their shoulders.

According to the Work-Needing and Prospective Packages page, 956 packages are orphaned at the moment. If every Debian contributor grabbed one of them, we could unwind the QA-team (no, just kidding). So, similar to NEW, which was down to 0 this year, can we get rid of the WNPP list as well? At least for a short time?

15 October, 2016 09:01PM by alteholz

Daniel Silverstone

Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO] and similarly any auto_group_BAR with [group exact BAR] (see the sketch after this list).
  3. You can now upgrade Gitano safely.
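
Step 2 is mechanical enough that it can be scripted; a hedged shell sketch (the file glob and the allowed characters in user/group names are assumptions, so review the result by hand):

# rewrite auto_user_*/auto_group_* references in place across the ruleset files
sed -i -E \
    -e 's/auto_user_([A-Za-z0-9_-]+)/[user exact \1]/g' \
    -e 's/auto_group_([A-Za-z0-9_-]+)/[group exact \1]/g' \
    rules/*.lace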

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/${user}/). When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the what how with match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, the kind of match can be prefixed with a ! to invert it, and for natural-looking rules not is an alias for !is.

This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

15 October, 2016 03:11AM by Daniel Silverstone

Dirk Eddelbuettel

anytime 0.0.3: Extension and fixes

anytime arrived on CRAN with releases 0.0.1 and 0.0.2 about a month ago. anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects.

Release 0.0.3 brings a bugfix for Windows (where, for dates before the epoch of 1970-01-01, accessing the tm_isdst field for daylight savings would crash the session) and a small (unexported) extension to test format strings. This last feature plays well with the ability to add format strings which we added in 0.0.2.

The NEWS file summarises the release:

Changes in anytime version 0.0.3 (2016-10-13)

  • Added (non-exported) helper function testFormat()

  • Do not access tm_isdst on Windows for dates before the epoch (pull request #13 fixing issue #12); added test as well

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 October, 2016 12:37AM

October 14, 2016

Michal Čihař

New free software projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. I'm quite slow in processing the hosting requests, but when I do it, I process them in a batch and add several projects at once.

This time, the newly hosted projects include:

14 October, 2016 04:00PM

Mike Gabriel

[Arctica Project] Release of nx-libs (version 3.5.99.2)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, the maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Thursday, Oct 13th, version 3.5.99.2 of nx-libs has been released [1].

This release brings a major backport of libNX_X11 to the status of libX11 1.3.4 (as provided by X.org). On top of that, all CVE fixes provided for libX11 by the Debian X11 Strike Force and the Debian LTS team got cherry-picked to libNX_X11, too. This big chunk of work has been performed by Ulrich Sibiller and there is more to come. We currently have a pull request pending review that backports more commits from libX11 (bumping the status of libNX_X11 to the state of libX11 1.6.4, which is the current HEAD on the X.org Git site).

Another big clean-up performed by Ulrich is the split-up of the XKB code which used to be symlinked between libNX_X11 and nx-X11/programs/Xserver. This brings in some code duplication but allows maintaining the nxagent Xserver code and the libNX_X11 code separately.

In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging, see the diff [2] on the ChangeLog file for details.

So for this release, a very special and massive thanks goes to Ulrich Sibiller!!! Well done!!!

Change Log

A list of recent changes (since 3.5.99.1) can be obtained from here.

Known Issues

This version of nx-libs is known to segfault when LDFLAGS / CFLAGS have the -pie / -fPIE hardening flags set. This issue is currently under investigation.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

Ubuntu developers, please note: we have added nightly builds for Ubuntu latest to our build server. This has been Ubuntu 16.10 so far, but we will soon drop 16.10 support in nightly builds and add 17.04 support.

References

14 October, 2016 03:47PM by sunweaver

Antoine Beaupré

Managing good bug reports

Bug reporting is an art form that is too often neglected in software projects. Bug reports allow contributors to participate without deep technical knowledge and at the same time provide a crucial space for developers to be made aware of issues with their software that they could not have foreseen or found themselves, for lack of resources, variety or imagination.

Prior art

Unfortunately, there are rarely good guidelines for submitting bug reports. Historically, people have pointed towards How to report bugs effectively or How to ask questions the smart way. While those guides can be useful for motivated people and may seem attractive references for project managers, they suffer from serious issues:

  • they are written by technical people, for non-technical people
  • as a result, they have a deeply condescending attitude such as calling people "stupid" or various animal names like "mongoose"
  • they are also very technical themselves: one starts with a copyright notice and a changelog, the other uses magic words like "Core dumps" and $Id$
  • they are too long: sgtatham's is about 3600 words long, esr's is even longer at about 11800 words. those texts will take about 20 to 60 minutes to read by an average reader, according to research

Individual projects have their own guides as well. Linux has the REPORTING_BUGS file, which is a much shorter 1200 words and can be read in under 5 minutes, provided that you can understand the topic at hand. Interestingly, that guide refers to both esr's and sgtatham's guidelines, which means, in the degenerate case where the user hasn't had the "privilege" of reading esr's prose already, they will have an extra hour and a half of reading to do to have honestly followed the guidelines before reporting the bug.

I often find good documentation in the Tails project. Their bug reporting guidelines are easily accessible and quick to read, although they still might be too technical. It could be argued that you need to get technical at some point to get that information out, of course.

In the Monkeysign project, I have started a bug reporting guide that doesn't yet address all those issues. I am considering writing a new guide, but I figured I would look at other people's work and get feedback before writing my own standard.

What's the point?

Why have those documents been written? Are people really expected to read them before seeking help? It seems to me unlikely that someone would:

  1. be motivated enough to do something about a broken part of their computer
  2. figure out they can do something about it
  3. read a fifteen-thousand-word novel about how to report a bug...
  4. just to finally write a 20-line bug report that has no warranty of support attached to it

And if I were a paying customer, I wouldn't want to be forced to waste my time reading that prose either: it's your job to help me fix your broken things, not the reverse. As someone doing consulting these days, I totally understand: it's not you, the user, it's us, the developers, that have a problem. We have been socialized through computers, and it makes us weird and obtuse, but that's no excuse, and we need to clean up our act.

Furthermore, it's surprising how often we get (and make!) bug reports that can be difficult to use. The Monkeysign project is very "technical" and I have expected that the bug reports I would get would be well written, with ways to reproduce and so on, but it happened that I received bug reports that were all over the place, didn't have any ways of reproducing or were simply incomplete. Those three bug reports were filed by people that I know to be very technically capable: one is a fellow Debian developer, the second had filed a good bug report 5 days before, and the third one is a contributor that sent good patches before.

In all three cases, they knew what they were doing. Those three people probably read the guidelines mentioned in the past. They may have even read the Monkeysign bug reporting guidelines as well. I can only explain those bug reports by the lack of time: people thought the issue was obvious, that it would get fixed rapidly because, obviously, something is broken.

We need a better way.

The takeaway

What are those guides trying to tell us?

  1. ask questions in the right place
  2. search for similar questions and issues before reporting the bug
  3. try to make the developers reproduce the issues
  4. failing that, try to describe the issue as well as you can
  5. write clearly, be specific and verbose yet concise

There are obviously contradictions in there, like sgtatham telling us to be verbose and esr telling us to, basically, not be verbose. There is definitely a tension in there, and there are many, many more details about how great bug reports can be if done properly.

I tend towards the side of terseness in our descriptions: people that will know how to be concise will be, people that don't will most likely not learn by reading a 12 000 words novel that, in itself, didn't manage to be parsimonious.

But I am willing to allow for verbosity in bug reports: I prefer too many details instead of missing a key bit of information.

Issue trackers

Step 1 is our job: we should send people in the right place, and give them the right tools. Monkeysign used to manage bugs with bugs-everywhere and this turned out to be a terrible idea: you had to understand git and bugs-everywhere to file any bug reports. As a result, there were exactly zero bug reports filed by non-developers during the whole time BE was used, although some bugs were filed in the Debian Bugtracker.

So have a good bug tracker. A mailing list or email address is not a good bug tracker: you lose track of old issues, and it's hard for newcomers to search the archives. It does have the advantage of having a unified interface for the support forum and bug tracking, however.

Redmine, Gitlab, Github and others are all decent-enough bug trackers. The key point is that the issue tracker should be publicly available, and users should be able to register easily to file new issues. You should also be able to mass-edit tickets and users should be able to discover the tracker's features easily. I am sorry to say that the Debian BTS somewhat falls short on those two features.

Step 2 is a shared responsibility: there should be an easy way to search for issues, and we should help the user looking for similar issues. Stackexchange sites do an excellent job at this, by automatically searching for similar questions while you write your question, suggesting similar ones in an attempt to weed out duplicates. Duplicates still happen, but they can then clearly be marked and linked with a distinct mechanism. Most bug trackers do not offer such high level functionality, but should, so I feel the fault lies more on "our" end than at the user's end.

Reproducing the environment

Step 3 and 4 are more or less the user's responsibility. We can detail in our documentation how to clearly share the environment where we reproduced the bug, for example, but in the end, the user decides if they want to share that information or not.

In Monkeysign, I have finally implemented joeyh's suggestion of shipping the test suite with the program. I can now tell people to run the test suite in their environment to see if this is a regression that is specific to their environment - so a known bug, in a way - or a novel bug for which I can look at writing a new unit test. I also include way more information about the environment in the --version output, an idea I brought forward in the borg project to ease debugging. That way, people can just send the output of monkeysign --test and monkeysign --version, and I have a very good overview of what is happening on their end. Of course, Monkeysign also supports the usual --verbose and --debug flag that users should enable when reproducing issues.
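
For someone reporting a Monkeysign issue, collecting that information boils down to two commands (a small illustration using only the flags mentioned above; the output file name is just a suggestion):

monkeysign --version  > monkeysign-report.txt 2>&1   # environment and version details
monkeysign --test    >> monkeysign-report.txt 2>&1   # test suite results, to attach to the report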

Another idea is to report bugs directly from the application. We have all seen Firefox or other software have automatic bug reporting tools, but somehow those seem unsatisfactory for a user: we have no feedback of where the report goes, if it's followed up on. It is useful for larger project to get statistical data, but not so useful for users in the short term.

Monkeysign tries to handle exceptions in the code in a graceful way, but could do better. We use a small library to handle exceptions, but that library has since then been improved to directly file bugs against the Github project. This assumes the user is logged into Github, but it is nice to pre-populate bug reports with the relevant information up front.

Issue templates

In the meantime, to make sure people provide enough information, I have now moved a lot of the bug reporting guidelines to a separate issue template. That issue template is available through the issue creation form now, although it is not enabled by default, a weird limitation of Gitlab. Issue templates are available in Gitlab and Github.

Issue templates somewhat force users in a straight jacket: there is already something to structure their bug report. Those could be distinct form elements that had to be filled in, but I like the flexibility of the template, and the possibility for users to just escape the formalism and just plead for help in their own way.

Issue guidelines

In the end, I opted for a short few paragraphs in the style of the Tails documentation, including a reference to sgtatham, as an optional future reference:

  • Before you report a new bug, review the existing issues in the online issue tracker and the Debian BTS for Monkeysign to make sure the bug has not already been reported elsewhere.

  • The first aim of a bug report is to tell the developers exactly how to reproduce the failure, so try to reproduce the issue yourself and describe how you did that.

  • If that is not possible, try to describe what went wrong in detail. Write down the error messages, especially if they have numbers.

  • Take the necessary time to write clearly and precisely. Say what you mean, and make sure it cannot be misinterpreted.

  • Include the output of monkeysign --test, monkeysign --version and monkeysign --debug in your bug reports. See the issue template for more details about what to include in bug reports.

If you wish to read more about issues regarding communication in bug reports, you can read How to Report Bugs Effectively, which takes around 20 to 30 minutes.

Unfortunately, short of rewriting sgtatham's guide, I do not feel there is much more we can do as a general guide. I find esr's guide to be too verbose and commanding, so sgtatham it will be for now.

The prose and literacy

In the end, there is a fundamental issue with reporting bugs: it assumes our users are literate and capable of writing amazing prose that we will enjoy reading as the last J.K. Rowling novel (if you're into that kind of thing). It's just an unreasonable expectation: some of your users don't even speak the same language as you, let alone read or write it. This makes for challenging collaboration, to say the least. This is where automated reporting makes sense: it doesn't require user's intervention, and the communication is mediated by machines without human intervention and their pesky culture.

But we should, as maintainers, "be liberal in what we accept and conservative in what we send". Be tolerant, and help your users in fixing their issues. It's what you are there for, after all.

And in the end, we all fail the same way. In an attempt to improve the situation on bug reporting guides, I seem to have myself written a 2000-word short story that will have taken up a hopefully pleasant 10 minutes of your time at minimum. Hopefully I will have succeeded at being clear, specific, verbose and concise all at once, and I look forward to your feedback on how to improve our bug reporting culture.

14 October, 2016 03:11PM

Jonathan Dowland

Hi-Fi Furniture

sadly obsolete

For the last four years or so, I've had my Hi-Fi and the vast majority of my vinyl collection stored in a self-contained, mildly-customized Ikea unit. Since moving house this has been in my dining room—which we have always referred to as the "play room", since we have a second dining room in which we actually dine.

The intention for the play room was for it to be the room within which all our future children would have their toys kept, in an attempt to keep the living room from being overrun with plastic. The time has thus come for my Hi-Fi to come out of there, so we've moved it to our living room. Unfortunately, there's not enough room in the living room for the Ikea unit: I need something narrower for the space available.

via IkeaHackers.net

In the spirit of my original hack, I started looking at what others might have achieved with Ikea components. There are some great examples of open-style units built out of the (extremely cheap) Lack coffee tables, such as this ikeahackers article, but I'd prefer something more enclosed. One problem I've had with the Expedit unit was my cat trying to scratch the records. I ended up putting framed records at the front to cover the spines of the records within. If I were keeping the unit, I'd look at fitting hinges (another ikeahackers article).

Aside from hacked Ikea stuff, there are a few companies offering traditional enclosed Hi-Fi cabinets. I'm going to struggle to fit both the equipment and a subset of records into these, so I might have to look at storing them separately. In some ways that makes life easier: the records could go into a 1x4 Ikea KALLAX unit, leaving the amp and deck to home somewhere. Perhaps I could look at a bigger unit for under the TV.

My parents have a nice Hi-Fi unit that pretends to be a chest of drawers. I'm fairly sure my Dad custom-built it, as it has a hinged top to provide access to the turntable and I haven't seen anything like that on the market.

That brings me onto thinking about other AV things I'd like to achieve in the living room. I've always been interested in exploring surround sound, but my initial attempt in my prior flat did not go well, either because the room was not terribly suited acoustically, or because the Pioneer unit I bought was rubbish, or both. It seems that there aren't really AV receivers which are designed to satisfy people wanting to use them in both a Hi-Fi and a home cinema setting. I could stick to stereo and run the TV into my existing (or a new) amplifier, subject to some logistics around wiring. A previous house owner ran some phono cables under the hard-wood flooring from the TV alcove to the opposite side of the fire place, which might give me some options.

There's also the world of wireless audio, Sonos etcetera. Realistically the majority of my music is digital nowadays, and it would be good to be able to listen to it conveniently in the house. I've heard good reports on the entry level Sonos stuff, but they seem to be Mono, and even the more high-end ones with lots of drivers have very small separation. I did buy a Chromecast Audio on a whim recently, but I haven't looked at it much yet: perhaps that's part of the solution.

So, lots of stuff in the melting pot to figure out here!

14 October, 2016 02:23PM

Daniel Silverstone

Gitano - Approaching Release - Changes

Continuing on from the previous article, here is a (probably incomplete) list of the critical changes to Gitano which have been, or will be, worked on during the run toward a 1.0 release. Each of these will have a blog posting to discuss what the changes mean for current and future users. Sometimes I'll aggregate postings, sometimes I won't.

The following are some highlights from the past little while of development which has been undertaken by Richard and myself. Each item is, I feel, important enough to warrant commentary, even for those who already use Gitano.

  • Lace now supports a sub-define syntax: [foo bar] which makes for simpler rulesets.
  • Gitano no longer creates auto_user_XXX and auto_group_XXX Lace predicates
  • Gitano no longer supports "basic" simple matches of the form user foo but instead requires a match kind such as group prefix bar-.
  • Gitano is gaining i18n/l10n support, though it will not be complete for version 1.0 the basics will be in place.
  • Gitano is gaining a much larger integration test suite using yarn.
  • Deprecated commands have now been removed from Gitano. (e.g. no more set-owner)
  • Gitano has gained PGP/GPG signature verification for commits and tags.

Any number of smaller things have been done which fall below some arbitrary barrier for telling you about. If you're aware of any of them and feel they are worthwhile telling the world about, then please prod me and I'll add an article to the series.

Finally it's worth noting that the effort to get all this into Debian Stretch proceeds apace. Of the eight packages needed, at the time of posting: one was already in and has been updated (luxio), three have been accepted into Debian already (supple, clod, lua-scrypt), two are in NEW (gall and lace), and that leaves the newest library (tongue) and then Gitano itself still to go. The Debian FTP team have been awesome in helping me with all this, so thanks go to them.

14 October, 2016 01:30PM by Daniel Silverstone

hackergotchi for Michal Čihař

Michal Čihař

motranslator 2.0

Yesterday, motranslator 2.0 was released. As the version change suggests, there are some important changes under the hood.

Full list of changes:

  • Consistently use camelCase in API
  • No longer relies on eval()
  • Depends on symfony/expression-language for calculations

As you can see, the SimpleMath library announced yesterday is not used in the end and I've moved to an existing library. Somehow I misunderstood the library description and thought that it works like PHP, which would be a problem for us (or would bring the need to add parentheses around the ternary operator as we did with eval()). But this is not the case and the ternary operator behaves sanely in ExpressionLanguage, so we're good to use it.

Anyway, if you were using MoTranslator, it might be a good idea to upgrade and check whether the API changes affect you.

Filed under: Debian English phpMyAdmin | 0 comments

14 October, 2016 04:00AM

October 13, 2016

hackergotchi for Wouter Verhelst

Wouter Verhelst

Webserver certificate authentication with intermediate CAs

Authenticating HTTPS clients using TLS certificates has long been very easy to do. The hardest part, in fact, is to create a PKI infrastructure robust enough so that compromised (or otherwise no longer desirable) certificates cannot be used anymore. While setting that up can be fairly involved, it's all pretty standard practice these days.

But authentication using a private CA is laughably easy. With apache, you just do something like:

SSLCACertificateFile /path/to/ca-certificate.pem
SSLVerifyClient require

...and you're done. With the above two lines, apache will send a CertificateRequest message to the client, prompting the client to search its local certificate store for certificates signed by the CA(s) in the ca-certificate.pem file, and using the certificate thus found to authenticate against the server.

This works perfectly fine for setting up authentication for a handful of users. It will even work fine for a few thousands of users. But what if you're trying to set up website certificate authentication for millions of users? Well, in that case, storing everything in a single CA just doesn't work, and you'll need intermediate CAs to make things not fall flat on their face.

Unfortunately, the standard does not state what should happen when a client certificate is signed by an intermediate certificate, and the distinguished name(s) in the CertificateRequest message are those of the certificate(s) at the top of the chain rather than the intermediates. Previously, browsers would not just send out the client certificate, but also send along the certificate that it knew to have signed that client certificate, and so on until it found a self-signed certificate. With that, the server would see a chain of certificates that it could verify against the root certificates in its local trust store, and certificate verification would succeed.

It would appear that browsers are currently changing this behaviour, however. With the switch to BoringSSL, Google Chrome on the GNU/Linux platform stopped sending the signing certificates, and instead only sends the leaf certificate that it wants to use for certificate authentication. In the bug report, Ryan Sleevi explains that while the change was premature, the long term plans are for this to be the behaviour not just on GNU/Linux, but on Windows and macOS too. Meanwhile, the issue has been resolved for Chrome 54 (due to be released), but there's no saying for how long. As if that was not enough, the new version of Safari as shipped with macOS 10.12 has also stopped sending intermediate certificates, and expects the web server to be happy with just receiving the leaf certificates.

So, I guess it's safe to say that when you want to authenticate a client in a multi-level hierarchical CA environment, you cannot just hand the webserver a list of root certificates and expect things to work anymore; you'll have to hand it the intermediate certificates as well. To do so, you need to modify the SSLCACertificateFile parameter in the apache configuration so that it not only contains the root certificate, but all the intermediate certificates as well. If your list of intermediate certificates is rather long, it may improve performance to use SSLCACertificatePath and the c_rehash tool to create a hashed directory with certificates rather than a PEM file, but in essence, that should work.
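
For instance, preparing such a hashed directory is just a matter of collecting the certificates and running c_rehash over them; a rough sketch, with paths chosen purely as an example:

mkdir -p /etc/apache2/ssl/client-ca
cp root-ca.pem intermediate-*.pem /etc/apache2/ssl/client-ca/
c_rehash /etc/apache2/ssl/client-ca

and then pointing SSLCACertificatePath /etc/apache2/ssl/client-ca at that directory instead of using SSLCACertificateFile.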

While this works, the problem with doing so is that now the DNs of all the intermediate certificates are sent along with the CertificateRequest message to the client, which (again, if your list of intermediate certificates is rather long) may result in performance issues. The fix for that is fairly easy: add a line like

SSLCADNRequestFile /path/to/root-certificates.pem

where the file root-certificates.pem contains the root certificates only. This file will tell the webserver which certificates should be announced in the CertificateRequest message, but the SSLCACertificateFile (or SSLCACertificatePath) configuration item will still be used for actual verification of the certificates in question. Note though that the root certificates apparently also need to be available in the SSLCACertificatePath or SSLCACertificateFile configuration; if they are not, then authentication seems to fail, although I haven't yet found out why.

I've set up a test page for all of those "millions" of certificates, and it seems to work for me. If you're trying to use one of those millions of certificates against your own webserver, or have a similar situation with a different set of certificates, you might want to make the above changes, too.

13 October, 2016 12:35PM

hackergotchi for Michal Čihař

Michal Čihař

Announcing SimpleMath

For quite some time we've been relying on the eval() function in phpMyAdmin in two places. One of them is the gettext library, where we have to evaluate plural forms, and the second is the MySQL configuration advisor, which makes its suggestions based on a text file (the original idea was to make this file shared with other tools, but it never really worked out).

Using eval() in PHP is something that is better to avoid, but we were using it on data we ship, so it was considered safe. On the other hand, there are hosting providers which disable eval() altogether (as many exploits use this function), so it's better to avoid it. I've been looking for options for replacing eval() in motranslator (the library we use for handling Gettext MO files) for quite some time, but never found a library which would support all the operators needed in Gettext plural formulas.

Yesterday I finally came to the conclusion that writing our own library to do this is the best approach. This way it can be extended in future to work with the Advisor as well. Also we can make it pretty lightweight without additional dependencies (which was a problem in some existing libraries I've found).

To make the story short, this is how SimpleMath was born. As of now, it has grown to version 0.2 (you can use Packagist to install it). For now it's really simple and it can probably be confused by various strange inputs, but it seems to work pretty well for our case. Currently supported features:

  • Supports basic arithmetic operations +, -, *, /, %
  • Supports parenthesis
  • Supports right associative ternary operator
  • Supports comparison operators ==, !=, >, <, >=, <=
  • Supports basic logical operations &&, ||
  • Supports variables (either PHP style $a or simple n)

Maybe it will be usable for somebody else as well, but even if not, it's the way for us to get rid of using eval() in our codebase.

Update

It seems that the Symfony ExpressionLanguage component does pretty much the same, but is more flexible and faster, so SimpleMath will probably be dead soon and we will switch to using the Symfony component.

Filed under: Debian English phpMyAdmin | 4 comments

13 October, 2016 04:00AM

hackergotchi for Norbert Preining

Norbert Preining

John Oliver and news in Japan

Yesterday evening I was enjoying several features by John Oliver, mostly about the upcoming election in the US (Scandals), but also one of the best features I have heard from him on Guantánamo. It sadly reminded me of the completely different landscape in Japan.
news-around-the-world

Not only since the unprecedented warning to close down "biased" broadcasters by the Ministry of Internal Affairs and Communication, but ever since Abe started building up his more and more totalitarian control over the country, the freedom of the press has been on shaky ground.

Even worse, newspapers and TV outlets restrict themselves to "safe" topics, which means: stupid talk shows, food, and above all praise of Japan and how good, how lovely, how great it is (Make Japan great again!). All this despite the fact that there would be a lot to grumble about: the cover-up of the truth around Fukushima, mountains of scandals around Olympia 2020, police brutality in Okinawa; the list is long.

Even thinking about having something remotely similar to John Oliver on TV in Japan is as unthinkable as Trump donating all his money to a charity for immigrants. Sure enough, John Oliver is one great example out of tons of rubbish in the US, but even this one example is missing in Japan.

What remains are Japanese media stations that crawl into the *** of the government, what a sad state.

(Photo credit partially due to Über Arschkriecher)

13 October, 2016 01:03AM by Norbert Preining

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Seinfeld streak at GitHub: Round Three

Two years ago in this post I referenced the Seinfeld Streak used in an even earlier post about regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld's secret to productivity: Just keep at it. Don't break the streak.

and showed this first chart of GitHub streaking

github activity october 2013 to october 2014

Last year a follow-up appeared in this post:

github activity october 2014 to october 2015

And as it is that time again, here is this year's version:

github activity october 2015 to october 2016

Special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 October, 2016 12:04AM

October 12, 2016

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Using SonarQube 5.4, Maven 3.3.9, Jenkins 2.19.1 on systems with both Java 1.7 and 1.8

Hello folks! My team spent hours and hours beating our head against a Sonar deployment problem on Ubuntu Trusty (14.04 LTS). I thought I might share our findings so that you won’t have to!

As you probably know, Trusty only makes Java Development Kit 1.7 available on the stock installation. The current stable version of Java is 1.8. The way we install this is to use the OpenJDK PPA, generously uploaded by our dear friend Matthias Klose.

To make things even more exciting, a modern Maven is not available on this platform. And so we use the stock Maven 3.3.9 tarball distribution. This tarball distribution does not integrate well with Debian, and so, when we tell the system using sudo update-java-alternatives -s /usr/lib/jvm/java-1.8.0-openjdk-amd64 that we wish to use Java 1.8 as our default system JDK, it does not get the message.
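
Outside of Jenkins, the workaround is simply to export JAVA_HOME yourself before invoking the tarball Maven; a quick sanity check from a shell might look like this (the Maven install path here is only an example):

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
/opt/apache-maven-3.3.9/bin/mvn -version   # should now report the 1.8 JDK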

The only way to reliably let Maven know which java you wish to use is to set the JAVA_HOME environment variable. In order to do this within the Jenkins environment, one must select the JDK one wishes to use:

openjdk8-ubuntu1404-as-jdk

To make things worse, this option is not, as one might expect, available for editing in a stock Jenkins 2.x installation. In Jenkins 1.x, one would be able to specify which java one wished to use just by specifying “openjdk8” in a field. With Jenkins 2.x, the field does not exist unless a configuration option in an unrelated form is set.

So! One should first select Manage Jenkins -> Global Tool Configuration:

Jenkins2-Global_Tool_Configuration

Once this form is open, look for the “JDK installations…” button:

Jenkins2-JDK_installations

Click it very thoroughly just once.

You’ll be presented with a form into which you may enter details about the various JDKs your build executors may have access to. You’ll refer to them in your job configuration by the value of their “Name” field, and when executing the build, Jenkins will set JAVA_HOME to the value of the (you guessed it) JAVA_HOME field:

Jenkins2-JDK_installations-expanded

Once these entries are made, they can be selected in two places.

1) on the ZMQ Event Publisher:

jdk-select-project

2) in the post-build actions under SonarQube analysis with Maven (advanced)

postbuild-sonarqube-select-jdk

And that’s how it’s done!

Here’s the details from my colleague, Thanh:

https://lists.fd.io/pipermail/honeycomb-dev/2016-October/000387.html

12 October, 2016 06:01PM by C.J. Adams-Collier

Craig Small

axdigi resurrected

Seems funny to talk about 20 year old code that was a stop-gap measure to provide a bridging function the kernel had not (as yet) got, but here it is, my old bridge code.

When I first started getting involved in Free Software, I was also involved with hamradio. In 1994 I released my first Free Software, or Open Source, program, called axdigi. This program allowed you to "digipeat". This was effectively source-route bridging across hamradio packet networks. The code I used for this was originally network sniffer code to debug my PackeTwin kernel driver, but I got frustrated at there being no digipeating function within Linux, so I wrote axdigi, which is about 200 lines.

The funny thing is, back then I thought it would be a temporary solution until digipeating got put into the kernel, which it temporarily did, and then it got removed.

Recently some people asked me about axdigi and whether there is an "official" place where the code lives. The answer is that the last axdigi release was 0.02, written in July 1995. It seems strange to resurrect 20-year-old code, but it is still in use, though it does show its age. I've done some quick work on getting rid of the compiler warnings but there is more to do.

So now axdigi has a nice shiny new home on GitHub, at https://github.com/csmall/axdigi

12 October, 2016 10:50AM by Craig

October 11, 2016

Stig Sandbeck Mathisen

DevOps toys, looking at new and old tools

From last month’s toybox of distractions, I’ve spent time with GitLab CI, Ansible, Prometheus and OpenShift.

GitLab CI is a lot like Travis CI, and a little less like Jenkins. When a commit is pushed to the repository in GitLab, and the branch contains a .gitlab-ci.yml file, a GitLab CI runner will check out the repository, and follow the instructions in that file. Useful for configuration syntax checks, unit tests, and puppet environment deployments.
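
A minimal .gitlab-ci.yml for such a repository might look like the sketch below; the job name and commands are illustrative rather than copied from my actual setup:

check:
  script:
    - puppet parser validate manifests/site.pp
    - puppet-lint manifests/

The point is simply that any branch carrying this file gets checked by a runner on every push.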

I’ve mostly used Ansible for orchestration, performing tasks across a number of nodes. I’ve used it much configuration management for a bit, and from what I see, it can do that rather well, too. I’ve used Puppet in production for a bit (I committed revision 1 in the old puppet configuration management repository at work in 2007-07-04). A new perspective on configuration management is good.

Prometheus is a master-node stats gatherer and presenter. It does a single HTTP GET to fetch all metrics in a single request. I’ve used Munin for a long, long time, and while the plugin ecosystem is far larger for Munin, the Prometheus master scales much better (millions of metrics per minute on a modern machine). I use Grafana to present graphs from Prometheus and logs from Elasticsearch in the same dashboard. Prometheus can collect data from a munin node, using a munin node exporter.
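
Scraping such an exporter is just another job in prometheus.yml; a sketch (the hostname and port are invented for the example):

scrape_configs:
  - job_name: 'munin'
    static_configs:
      - targets: ['munin-host.example.org:9100']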

Last week I got training in OpenShift, which was an eye-opener. I’ve used Docker for a good while, and planned to introduce Kubernetes, as well as an imperial buttload of shell scripts to keep it all automated. Thankfully, OpenShift Origin already includes Kubernetes and does the required automation. An OpenShift cluster is now being added to the core infrastructure to do the required care and feeding of the herd of APIs and microservices written over the years. Bunch it together behind an API Management Gateway, and you should be able to label the whole thing “Microservice Architecture”.

I’m not running out of fun things to do for a while.

11 October, 2016 09:17PM

hackergotchi for Daniel Pocock

Daniel Pocock

Outreachy and GSoC 2017 opportunities in Computer Security, Cryptography, PGP and Python

I've proposed the PGP/PKI Clean Room as a topic in Outreachy this year. The topic will also be promoted as part of GSoC 2017.

If you are interested in helping as either an intern or mentor, please follow the instructions there to make contact.

Even if you can't participate, if you have the opportunity to promote the topic in a university or any other environment where potential interns will see it, please do so as this makes a big difference to the success of these programs.

11 October, 2016 06:54PM by Daniel.Pocock

Outreachy and GSoC 2017 opportunities in Multimedia Real-Time Communication

I've proposed Free Real-Time Communication as a topic in Outreachy this year. The topic will also be promoted as part of GSoC 2017.

If you are interested in helping as either an intern or mentor, please follow the instructions there to make contact.

Even if you can't participate, if you have the opportunity to promote the topic in a university or any other environment where potential interns will see it, please do so as this makes a big difference to the success of these programs.

The project could involve anything related to SIP, XMPP, WebRTC or peer-to-peer real-time communication, as long as it emphasizes a specific feature or benefit for the Debian community. If other Outreachy organizations would also like to have a Free RTC project for their community, then this could also be jointly mentored.

11 October, 2016 06:53PM by Daniel.Pocock

hackergotchi for Michal Čihař

Michal Čihař

stardicter 0.10

Stardicter 0.10, the set of scripts to convert some freely available dictionaries to StarDict format, has been released today. There are mostly minor changes and it's time to push them out in an official release.

There is one change worth mentioning though - the original site for the English-Czech dictionary (http://slovnik.zcu.cz/) stopped working and the dictionary has been moved to https://www.svobodneslovniky.cz/. Hopefully this new location will live at least as long as the original one and will bring back new contributors (honestly, the original dictionary gained mostly spam entries in recent months). The dictionary data are now hosted in a Git repository on GitHub.

Filed under: Debian English StarDict | 0 comments

11 October, 2016 04:00PM

Vincent Sanders

The pine stays green in winter... wisdom in hardship.

In December 2015 I saw the kickstarter for the Pine64. The project seemed to have a viable hardware design and after my experience with the hikey I decided it could not be a great deal worse.

Pine64 board in my case design
The system I acquired comprises of:
  • Quad core Allwinner A64 processor clocked at 1.2GHz 
  • 2 Gigabytes of DDR3 memory
  • Gigabit Ethernet
  • two 480Mbit USB 2.0 ports
  • HDMI type A
  • micro SD card for storage.
Hardware-based kickstarter projects are susceptible to several issues, and the usual suspects occurred, causing delays:
  • Inability to scale, several thousand backers instead of the hundred they were aiming for
  • Issues with production
  • Issues with shipping
My personal view is that PINE 64 inc. handled it pretty well, much better than several other projects I have backed and as my Norman Douglas quotation suggests I think they have gained some wisdom from this.

I received my hardware at the beginning of April only a couple of months after their initial estimated shipping date which as these things go is not a huge delay. I understand some people who had slightly more complex orders were just receiving their orders in late June which is perhaps unfortunate but still well within kickstarter project norms.

As an aside: I fear that many people simply misunderstand the crowdfunding model for hardware projects and fail to understand that they are not buying a finished product, on the other side of the debate I think many projects need to learn expectation management much better than they do. Hyping the product to get interest is obviously the point of the crowdfunding platform, but over promising and under delivering always causes unhappy customers.

Pine64 board dimensions
Despite the delays in production and shipping, the information available for the board was (and sadly remains) inadequate. As usual I wanted to case my board, and as there were no useful dimension drawings I had to make my own from direct measurements, together with an STL 3D model.

Also a mental sigh for "yet another poor form factor decision", so another special-case size and design. After putting together a design and fabricating it with the laser cutter I moved on to the software.

This is where, once again, the story turns bleak. We find a very pretty website but no obvious link to the software (hint: scroll to the bottom and find the "support" wiki link). Once you find the wiki you will eventually discover that the provided software is either an Android 5.1.1 image (which failed to start on my board) or relies on some random guy from the forums who has put together his own OS images using a hacked-up Allwinner Board Support Package (BSP) kernel.

Now please do not misunderstand me, I think the work by Simon Eisenmann (longsleep) to get a working kernel and Lenny Raposo to get viable OS images is outstanding and useful. I just feel that Allwinner and vendors like Pine64 Inc. should have provided something much, much better than they have. Even the efforts to get mainline support for this hardware are all completely volunteer community efforts and are making slow progress as a result.

Assuming I wanted to run a useful OS on this hardware and not just use it as a modern work of art I installed a basic Debian arm64 using Lenny Raposo's pine64 pro site downloads. I was going to use the system for compiling and builds so used the "Debian Base" image to get a minimal setup. After generating unique ssh keys, renaming the default user and checking all the passwords and permissions I convinced myself the system was reasonably trustworthy.

The standard Debian Jessie OS runs as expected with few surprises. The main concern I have is that there are a number of unpackaged scripts installed (prefixed with pine64_) which perform several operations from reporting system health (using sysfs entries) to upgrading the kernel and bootloader.

While I understand these scripts have been provided for novice users to reduce the support burden, doing even more of the vendor's job, I would much rather have had proper packages for these scripts, kernel and bootloader which apt could manage. This would have reduced image creation to a simple debootstrap, giving much greater confidence in the images' provenance.
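
For illustration, with everything properly packaged an image could essentially be produced with little more than the following (suite, mirror and target directory are only examples, and the extra packages are hypothetical):

debootstrap --arch=arm64 jessie /mnt/pine64-rootfs http://httpredir.debian.org/debian
# then install the (hypothetical) kernel, bootloader and helper packages with apt inside the chroot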

The 3.10-based kernel is three years old at the time of writing and lacks a great number of features for the aarch64 ARM processors introduced since its release. However I was pleasantly surprised that kvm was apparently available.

# dmesg|grep -i kvm
[    7.592896] kvm [1]: Using HYP init bounce page @b87c4000
[    7.593361] kvm [1]: interrupt-controller@1c84000 IRQ25
[    7.593778] kvm [1]: timer IRQ27
[    7.593801] kvm [1]: Hyp mode initialized successfully

I installed the libvirt packages (and hence all their dependencies like qemu) and created a bridge ready for the virtual machines.
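
The bridge itself was nothing exotic; on Debian the usual approach is an /etc/network/interfaces stanza along these lines (interface names are just an example):

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0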

I needed access to storage for the host disc images and, while I could have gone the route of using USB-attached SATA as with the hikey, I decided to try and use network-attached storage instead. Initially I investigated iSCSI, but it seems the Linux target (iSCSI uses initiator for the client and target for the server) support is either old, broken or unpackaged.

I turned to the network block device (nbd), which is packaged and seems to have reasonable stability out of the box on modern distributions. This appeared to work well; indeed, over the gigabit Ethernet interface I managed to get a sustained 40 megabytes a second read and write rate in basic testing. This is better performance than a USB 2.0 attached SSD on the hikey.
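
The setup is pleasantly small: an export definition on the machine holding the images and a single client command on the Pine64. Roughly, and with names, paths and the export invented for this sketch:

# /etc/nbd-server/config on the storage machine
[generic]
[pine64-vm]
    exportname = /srv/nbd/pine64-vm.img

# on the Pine64, attach the export and use it like any other block device
nbd-client -N pine64-vm storage-host /dev/nbd0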

I fired up the guest and perhaps I should have known better than to expect a 3.10 vendor kernel to cope. The immediate hard crashes despite tuning many variables convinced me that virtualisation was not viable with this kernel.

So, abandoning that approach, I attempted to run the CI workload directly on the system. To my dismay this also proved problematic. The processor has the bad habit of throttling due to thermal issues (despite a substantial heatsink), and because the storage is network attached, throttling the CPU also massively impacts I/O.

The limitations meant that the workload caused the system to move between high performance and almost no progress on a roughly ten-second cycle. This caused a simple NetSurf recompile CI job to take over fifteen minutes. For comparison, the same task takes the armhf builder (CubieTruck) four minutes, and a 64-bit x86 build around a minute.

If the workload is tuned to a single core, which does not trip thermal throttling, the build took seven minutes, which is almost identical to the existing single-core virtual machine instance running on the hikey.

In conclusion, the Pine64 is an interesting bit of hardware with a fatally flawed software offering. Without Simon and Lenny providing their builds to the community the device would be practically useless rather than just performing poorly. There appears to have been no progress whatsoever on the software offering from Pine64 in the six months since I received the device, and no prospect of mainline Allwinner support for the SoC either.

Effectively I have spent around 50 USD (40 for the board and 10 for the enclosure) on a failed experiment. Perhaps in the future the software will improve sufficiently for it to become useful, but I do not hold out much hope that this will come from Pine64 themselves.

11 October, 2016 12:19PM by Vincent Sanders ([email protected])

Craig Small

Changing Jabber IDs

I’ve shuffled some domains around, using less of enc.com.au and more of my new domain dropbear.xyz The website should work with both, but the primary domain is dropbear.xyz

 

Another change is my Jabber ID, which used to be csmall at enc but is now the same username at dropbear.xyz. I think I have done all the required changes in prosody for it to work, even with a certbot certificate!
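
For anyone making a similar move, the prosody side essentially comes down to a VirtualHost for the new domain pointing at the certbot files; a minimal sketch assuming the standard Let's Encrypt paths (not necessarily my exact config):

VirtualHost "dropbear.xyz"
    ssl = {
        key = "/etc/letsencrypt/live/dropbear.xyz/privkey.pem";
        certificate = "/etc/letsencrypt/live/dropbear.xyz/fullchain.pem";
    }

plus matching _xmpp-client and _xmpp-server DNS SRV records so other servers can find the new domain.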

11 October, 2016 11:38AM by Craig

Russ Allbery

remctl 3.13

remctl is a client and server that forms a very simple remote RPC system, normally authenticated with Kerberos, although including a remctl-shell variant that works over ssh.

This release adds forced-command support for remctl-shell, which allows it to work without enabling the setting of environment variables in authorized_keys. This may be a preferable configuration to using it as an actual shell.

Also in this release, the summary configuration option is allowed for commands with subcommands other than ALL, which allows proper generation of command summaries even for users who only have access to a few subcommands of a command. It also adds some build system support for building binaries with -fPIE.

You can get the latest release from the remctl distribution page.

11 October, 2016 03:18AM

rra-c-util 6.1

This is my collection of supporting libraries, Autoconf macros, and test code for various projects.

This release fixes return-value checks for snprintf to avoid a few off-by-one errors (none of which should have been exploitable, but better to be safe and correct). It adds a new RRA_PROG_CC_FLAG macro to test compiler support for a specific flag and a new RRA_PROG_CC_WARNINGS_FLAGS macro to probe for all the flags I use as my standard make warnings target. And it fixes some problems with one utility due to the removal of the current directory from @INC in the latest Perl release.

You can get the latest version from the rra-c-util distribution page.

11 October, 2016 03:00AM

October 10, 2016

hackergotchi for Daniel Pocock

Daniel Pocock

DVD-based Clean Room for PGP and PKI

There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails.

There are also more home networks and small offices who require their own in-house Certificate Authority (CA) to issue TLS certificates for VPN users (e.g. StrongSWAN) or IP telephony.

Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian.

Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with.

Trying it out in VirtualBox

It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside:

At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL.

I've been trying it out with an SPR-532, one of the GnuPG-supported smartcard readers with a pin-pad and the OpenPGP card.

Ready to use today

More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog from Simon Josefsson.

The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline.
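
Under the hood that sort of script is essentially standard Linux software RAID plus a filesystem; a rough manual equivalent, assuming it uses mdadm and with device names that are purely illustrative rather than taken from the actual create-raid script, would be:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/mmcblk0p1 /dev/mmcblk1p1
mkfs.ext4 -L pgp-master /dev/md0
mount /dev/md0 /mnt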

Getting involved

To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools.

If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it.

One way to proceed may be to recruit an Outreachy or GSoC intern to develop the UI. Before they can get started, it would be necessary to more thoroughly document workflow requirements.

10 October, 2016 07:25PM by Daniel.Pocock

Balram Pariyarath

Reproducible builds folks

Reproducible Builds: week 76 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 2 and Saturday October 8 2016:

Media coverage

Events

  • Vagrant Cascadian gave an impromptu talk about reproducible builds at CAT Barcamp on 8th October.

  • Holger discussed Reproducible coreboot at coreboot.berlin. Unlike other projects, coreboot doesn't do binary releases because there have been many instances of people taking some incorrect coreboot binary, flashing it and bricking their machines… The end idea is that coreboot will simply release .buildinfo files (and still no binaries) instead.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

31 package reviews have been added, 27 have been updated and over 20 have been removed this week, adding to our knowledge about identified issues.

3 issue types have been added:

1 issue type has been updated:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (12)

tests.reproducible-builds.org

Debian:

  • The data in reproducible-tracker.json (which is fed to tracker.d.o and DDPO) has been changed to contain data from testing, as the build path variations we introduced for unstable are not yet ready for wider consumption. For testing/stretch we recommend creating reproducible packages by rebuilding in the same path. (h01ger)
  • Various reproducibility statistics for testing/stretch have been added to the dashboard view. (h01ger)
  • The repository comparison page has been improved to only show obsolete packages if they exist (which they currently don't as we have rebuilt everything from the plain Debian repos, except for our modified dpkg due to #138409 and #787980). (h01ger)
  • All armhf boards are now using Linux kernels provided by Debian. (vagrant)

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & Vagrant Cascadian and reviewed by a bunch of Reproducible Builds folks on IRC.

10 October, 2016 11:16AM

Petter Reinholdtsen

Experience and updated recipe for using the Signal app without a mobile phone

In July I wrote how to get the Signal Chrome/Chromium app working without the ability to receive SMS messages (aka without a cell phone). It is time to share some experiences and provide an updated setup.

The Signal app has worked fine for several months now, and I use it regularly to chat with my loved ones. I had a major snag at the end of my summer vacation, when the app completely forgot my setup, identity and keys. The reason behind this major mess was running out of disk space. To avoid that ever happening again I have started storing everything in userdata/ in git, to be able to roll back to an earlier version if the files are wiped by mistake. I had to use it once after introducing the git backup. When rolling back to an earlier version, one needs to use the 'reset session' option in Signal to get going, and notify the people you talk with about the problem. I assume there is some sequence number tracking in the protocol to detect rollback attacks. The git repository is rather big (674 MiB so far), but I have not tried to figure out if some of the content can be added to a .gitignore file due to lack of spare time.

I've also hit the 90-day timeout blocking, and noticed that this makes it impossible to send messages using Signal. I could still receive them, but had to patch the code with a new timestamp to send. I believe the timeout is added by the developers to force people to upgrade to the latest version of the app, even when there are no protocol changes, to reduce the version skew among the user base and thus try to keep the number of support requests down.

Since my original recipe, the Signal source code has changed slightly, making the old patch fail to apply cleanly. Below is an updated patch, including the shell wrapper I use to start Signal. The original version required a new user to locate the JavaScript console and call a function from there. I got help from a friend with more JavaScript knowledge than me to modify the code to provide a GUI button instead. This means that to get started you just need to run the wrapper and click 'Register without mobile phone'. I've also modified the timeout code to always set it to 90 days in the future, to avoid having to patch the code regularly.

So, the updated recipe for Debian Jessie:

  1. First, install the required packages to get the source code and the browser you need. Signal only works with Chrome/Chromium, as far as I know, so you need to install it.
    apt install git tor chromium
    git clone https://github.com/WhisperSystems/Signal-Desktop.git
    
  2. Modify the source code using command listed in the the patch block below.
  3. Start Signal using the run-signal-app wrapper (for example using `pwd`/run-signal-app).
  4. Click on 'Register without mobile phone', fill in a phone number you can receive calls on within the next minute, receive the verification code, enter it into the form field and press 'Register'. Note, the phone number you use will be your Signal username, ie the way others can find you on Signal.
  5. You can now use Signal to contact others. Note, new contacts do not show up in the contact list until you restart Signal, and there is no way to assign names to contacts. There is also no way to create or update chat groups. I suspect this is because the web app does not have an associated contact database.

I am still a bit uneasy about using Signal, because of the way its main author moxie0 rejects federation and accepts dependencies on major corporations like Google (part of the code is fetched from Google) and Amazon (the central coordination point is owned by Amazon). See for example the LibreSignal issue tracker for a thread documenting the author's view on these issues. But the network effect is strong in this case, and several of the people I want to communicate with already use Signal. Perhaps we can all move to Ring once it works on my laptop? It already works on Windows and Android, and is included in Debian and Ubuntu, but is not working on Debian Stable.

Anyway, this is the patch I apply to the Signal code to get it working. It switches to the production servers, disables the timeout, makes registration easier and adds the shell wrapper:

cd Signal-Desktop; cat <<EOF | patch -p1
diff --git a/js/background.js b/js/background.js
index 24b4c1d..579345f 100644
--- a/js/background.js
+++ b/js/background.js
@@ -33,9 +33,9 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
     var SERVER_PORTS = [80, 4433, 8443];
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff --git a/js/expire.js b/js/expire.js
index 639aeae..beb91c3 100644
--- a/js/expire.js
+++ b/js/expire.js
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
 
     window.extension = window.extension || {};
 
diff --git a/js/views/install_view.js b/js/views/install_view.js
index 7816f4f..1d6233b 100644
--- a/js/views/install_view.js
+++ b/js/views/install_view.js
@@ -38,7 +38,8 @@
             return {
                 'click .step1': this.selectStep.bind(this, 1),
                 'click .step2': this.selectStep.bind(this, 2),
-                'click .step3': this.selectStep.bind(this, 3)
+                'click .step3': this.selectStep.bind(this, 3),
+                'click .callreg': function() { extension.install('standalone') },
             };
         },
         clearQR: function() {
diff --git a/options.html b/options.html
index dc0f28e..8d709f6 100644
--- a/options.html
+++ b/options.html
@@ -14,7 +14,10 @@
         <div class='nav'>
           <h1>{{ installWelcome }}</h1>
           <p>{{ installTagline }}</p>
-          <div> <a class='button step2'>{{ installGetStartedButton }}</a> </div>
+          <div> <a class='button step2'>{{ installGetStartedButton }}</a>
+	    <br> <a class="button callreg">Register without mobile phone</a>
+
+	  </div>
           <span class='dot step1 selected'></span>
           <span class='dot step2'></span>
           <span class='dot step3'></span>
--- /dev/null   2016-10-07 09:55:13.730181472 +0200
+++ b/run-signal-app   2016-10-10 08:54:09.434172391 +0200
@@ -0,0 +1,12 @@
+#!/bin/sh
+set -e
+cd $(dirname $0)
+mkdir -p userdata
+userdata="`pwd`/userdata"
+if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
+    (cd $userdata && git init)
+fi
+(cd $userdata && git add . && git commit -m "Current status." || true)
+exec chromium \
+  --proxy-server="socks://localhost:9050" \
+  --user-data-dir=$userdata --load-and-launch-app=`pwd`
EOF
chmod a+rx run-signal-app

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10 October, 2016 09:30AM

Arturo Borrero González

The day I became Debian Developer

Debian

The moment has come. You may contact me now at [email protected] :-)

After almost 6 months of a tough NM process, the waiting is over. I have achieved the goal I set for myself back in 2011: to become a Debian Developer.

This is a professional and personal victory.

I would like to mention many people who have been important for this to happen. But they all know, no need to create a list here. Thanks!

This weekend I was doing some hiking in the mountains and had no internet connection at all. When I arrived back home, I discovered an email from the Debian System Administrators on behalf of the Debian New Maintainer Team, letting me know that my official DD account had been created.

During the last 6 months I have been trying to imagine the moment in which the process would finally be completed (yes, I have been a bit impatient). In the end, the magical moment in the mountains was followed by the joy of the DD account. Curious how things happen sometimes.

Here is a pic of this mountain day, with my adventure friends. I am the first from the left.

pic

10 October, 2016 05:00AM

October 09, 2016

Hideki Yamane

Simplest debian/watch file for GNOME packages

Simplest (two-line) debian/watch file for GNOME-related packages; you can just copy & paste it.
version=4
http://download.gnome.org/sources/@PACKAGE@/([\d\.]+[02468])/@PACKAGE@-@ANY_VERSION@@ARCHIVE_EXT@

09 October, 2016 11:03PM by Hideki Yamane ([email protected])

Bits from Debian

Debian is participating in the next round of Outreachy!

Following the success of the last round of Outreachy, we are glad to announce that Debian will take part in the program for the next round, with internships lasting from the 6th of December 2016 to the 6th of March 2017.

From the official website: Outreachy helps people from groups underrepresented in free and open source software get involved. We provide a supportive community for beginning to contribute any time throughout the year and offer focused internship opportunities twice a year with a number of free software organizations.

Currently, internships are open internationally to women (cis and trans), trans men, and genderqueer people. Additionally, they are open to residents and nationals of the United States of any gender who are Black/African American, Hispanic/Latin@, American Indian, Alaska Native, Native Hawaiian, or Pacific Islander.

If you want to apply to an internship in Debian, you should take a look at the wiki page, and contact the mentors for the projects listed, or seek more information on the (public) debian-outreach mailing-list. You can also contact the Outreach Team directly. If you have a project idea and are willing to mentor an intern, you can submit a project idea on the Outreachy wiki page.

Here's a few words on what the interns for the last round achieved within Outreachy:

  • Tatiana Malygina worked on Continuous Integration for Bioinformatics applications; She has pushed more than a hundred commits to the Debian Med SVN repository over the last months, and has been sponsored for more than 20 package uploads.

  • Valerie Young worked on Reproducible Builds infrastructure, driving a complete overhaul of the database and software behind the tests.reproducible-builds.org website. Her blog contains regular updates throughout the program.

  • ceridwen worked on creating reprotest, an all-in-one tool allowing anyone to check whether a build is reproducible or not, replacing the string of ad-hoc scripts the reproducible builds team used so far. She posted regular updates on the Reproducible Builds team blog.

  • While Scarlett Clark did not complete the internship (as she found a full-time job by the mid-term evaluation!), she spent the four weeks she participated in the program providing patches for reproducible builds in Debian and KDE upstream.

Debian would not be able to participate in Outreachy without the help of the Software Freedom Conservancy, who provides administrative support for Outreachy, as well as the continued support of Debian's donors, who provide funding for the internships. If you want to donate, please get in touch with one of our trusted organizations.

Debian is looking forward to welcoming new interns for the next few months, come join us!

09 October, 2016 05:50PM by Nicolas Dandrimont

hackergotchi for Guido Günther

Guido Günther

Debian Fun in September 2016

Debian LTS

September marked the seventeenth month I contributed to Debian LTS under the Freexian umbrella. I spent 6 hours (out of 7) working on

  • updating Icedove to 45.3 resulting in DLA-640-1
  • finishing my work on bringing rails into shape security wise resulting in DLA-641-1 for ruby-activesupport-3.2 and DLA-642-1 for ruby-activerecord-3.2.
  • enhancing the autopkgtests for qemu a bit

Other Debian stuff

  • Uploaded libvirt 2.3.0~rc1 to experimental
  • Uploaded whatmaps to 0.0.12 in unstable.
  • Uploaded git-buildpackage 0.8.4 to unstable.

Other Free Software activities

  • Ansible: got the foreman callback plugin needed for foreman_ansible merged upstream.
  • Made several improvements to foreman_ansible_inventory (an ansible dynamic inventory querying Foreman): fixing an endless loop when Foreman would miscalculate the number of hosts to process, flake8 cleanliness and some work on python3 support
  • ansible-module-foreman:
    • unbreak defining subnets by setting the default boot mode.
    • add support for configuring realms
  • Foreman: add some robustness to the nice rebuild host feature when DNS entries are already there
  • Released whatmaps 0.0.12.
    • Errors related to a single package don't abort the whole program but rather skip over it now.
    • Systemd user sessions are filtered out
    • The codebase is now checked with flake8.

09 October, 2016 02:59PM

hackergotchi for Ben Armstrong

Ben Armstrong

Annual Hike with Ryan: Salt Marsh Trail, 2016

Once again, Ryan Neily and I met last month for our annual hike. This year, to give our aging knees a break, we visited the Salt Marsh Trail for the first time. For an added level of challenge and to access the trail by public transit, we started with the Shearwater Flyer Trail and finished with the Heritage Trail. It was a perfect day both for hiking and photography: cool with cloud cover and a refreshing coastal breeze. The entire hike was over 25 km and took the better part of the day to complete. Good times, great conversations, and I look forward to visiting these beautiful trails again!

[Photo gallery: Salt Marsh trail hike, 2016. The captions cover the start on the Shearwater Flyer trail, a rail bridge converted to a foot bridge, cranberries, eel grass and other coastal flora along the salt marsh causeways, lunch and a vantage point on the trail, a heron and ducks at the head of the Atlantic View trail where we rested and turned back, nodding ladies tresses, and a short breather on the Heritage Trail.]

Here’s the Strava record of our hike:


09 October, 2016 12:20PM by Ben Armstrong

hackergotchi for Norbert Preining

Norbert Preining

Reload: Android 7.0 Nougat – Root – Pokemon Go

Ok, it turned out that a combination of updates has broken my previous guide on playing Pokemon GO on a rooted Android device. What has happened is that the October security update of Android Nougat has changed the SafetyNet check that is used to detect rooted devices, and at the same time the Magisk rooting system has catapulted itself (hopefully temporarily) into complete irrelevance by removing the working version and providing an "improved" version that neither has SuperSU installed nor the ability to hide root – well done, congratulations.

android-nougat-root-poke

But there is a way around, and I am now back at the latest security patch level, rooted, and playing Pokemon GO (not very often, no time, though!).

My previous guide used Magisk version 6 to root and hide root. But the recent security update of Android Nougat (October 2016) has rendered Magisk-v6 non-working. I first thought that Magisk-v7 could solve the problem, but I was badly disappointed: after reflashing my device to a pristine state and installing Magisk-v7, I was suddenly left with no SuperSU (meaning X-plore, Titanium Backup etc. do not work anymore) and no ability to hide root for Pokemon GO or banking apps. Great update.

Thus, I have decided to remove Magisk completely and make a clean start with SuperSU and suhide (and a GUI for suhide). And it turned out to be more convenient and more standard than Magisk, may it rest in peace (until they fix their stuff together).

In the following I assume you have a clean-slate Android Nougat device; if not, please see one of the previous guides for hints on how to flash back without losing your user data.

Ingredients

One needs the following few items:

Rooting

Unzip the CF-Auto-Root-angler-angler-nexus6p.zip and either use the included programs (root-linux.sh, root-mac.sh, root-windows.bat) to root your device, or simply connect your device to your computer, and run (assuming you have adb and fastboot installed):

adb reboot bootloader
sleep 10
fastboot boot image/CF-Auto-Root-angler-angler-nexus6p.img

After that your device will reboot a few times, and you will finally land in your normal Android screen and a new program SuperSU will be available. At this stage you will not be able to play Pokemon GO anymore.

Updating SuperSU

The version of SuperSU packaged with CF-Auto-Root is unfortunately too old, so one needs to update using the zip file SR1-SuperSU-v2.78-SR1-20160915123031.zip (or later). Here are two options: either you use the TWRP recovery system, or you install FlashFire (main page, app store page), which allows you to flash zip/ota files directly from your Android screen. This time I used the FlashFire method for the first time, and it worked without any problem.

Just press the "+" button in FlashFire, then the "Flash zip/ota" button, select the SR1-SuperSU-v2.78-SR1-20160915123031.zip, click yes twice, and then wait a bit; a few black screens (don't do anything!) later you will be back in your Nougat environment. Opening the SuperSU app should show you on the Settings tab that the version has been updated to 2.78-SR1.

Installing suhide

As with the update of SuperSU, install the suhide zip file: same procedure, nothing special.

After this you will be able to add an application (like Pokemon GO) from the command line (shell), but this is not very convenient. It is better to install the suhide GUI from the app store, start it, scroll to Pokemon GO, add a tick, and you are set.

After that you are free to play Pokemon GO again. At least until the next security update brings problems again. In the long run this is a losing game anyway. Enjoy it while you can.

09 October, 2016 11:01AM by Norbert Preining

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

New phone: Samsung Galaxy S III phone with Replicant

Thanks to the Bazaar effort of The Guardian Project, I’ve been offered a phone to test F-Droid and other free software apps for Android. I accepted the offer, and chose a Samsung Galaxy S III phone with Replicant 4.2.2,  installed and shipped by Tehnoetic.
I’m using it now as my main phone, and since it uses Android 4.x I’m able to install more modern apps than in my old Galaxy Ace (which remains usable with CyanongenMod 7.2 (Android 2.3.7)).
My plans with this new phone are:
  • Test Replicant and free software for Android on it
  • Get more involved in translations of Android apps
  • Get more involved in the F-Droid community
  • Keep an eye on Android tools in Debian
  • Post here in my blog articles about what I’ve been doing (and of course report issues and contributions upstream)

Migration to the new phone

I’ve migrated my stuff from the old phone to this one. Some notes:
  • Wrote down my list of apps
  • Used Slight Backup for contacts, call logs and messages
  • Periodical has its own backup tool
  • Whatsapp has its own backup tool
  • Exported settings in K-9 Mail
  • Exported Kontalk GPG key
  • Simply Do has its own backup tool
  • I don’t use calendars in the phone so I didn’t migrate any events (I have Offline Calendar to ad temporary notes/reminders, but that’s all)
I moved the SIM card and the SD Card to the new phone and tried the restore tool for each app.
I found out that several apps could not find the backups because they were not looking at the SD Card for the files (it seems they were using internal memory locations). So to recover my backups, I made new backups on the new phone with the empty apps, then found out where those backups were created (in the internal memory, /storage/emulated/0), then copied the authentic backup files there (overwriting the dummy ones), and then used each app to restore its backup.
For some apps (K-9) I had to set again the folder for attachments, since the SD was not anymore in /media/sdcard, now it was in /storage/sdcard1.
Apart from that, everything went well.
I was a bit upset that I could not migrate Kontalk conversations (there is no backup/export tool, and I am not sure where are the files/database stored).
I noticed that although Kontalk is ‘registered’ using the phone number, and it uses the phone numbers for contacts, it kept working in the old phone (Whatsapp detects when you change to a new phone and kind of ‘deactivates’ itself in the old one, but that’s not the case for Kontalk: it works as any XMPP client (if it’s open, it can send/receive messages)).

Replicant 4.2 in a Galaxy S III (i9300)

Here I write some particularities that I found in the phone, mostly bugs or problems. But don’t get me wrong: overall I’m very happy with it!
I experienced a problem when using the phone to make/receive calls, it seemed that the proximity sensor was not working well. I thought it was a Replicant issue, but later I realized that there was a Tehnoetic sticker that was partially covering the sensor. I removed the sticker and everything worked well.
The phone came with F-Droid installed, which is nice. I upgraded to the latest alpha and I've been testing the alpha releases since then 🙂
I found that I cannot choose "where" to install apps nor move apps from internal memory to the SD Card: there is no such option in Settings > Apps > Manage Apps (there is such a setting in my CyanogenMod 7.2 phone, though). Since my phone is rooted and I have full access to both the internal memory and the SD Card, and I have plenty of room in the internal memory, I didn't bother too much. I'm not sure if this is a bug, a feature, something related to Android 4, something specific to Replicant, or something specific to this phone model. Pending investigation, but low priority.
Replicant is almost fully translated to Spanish, yay! I only found one untranslated string: you go to Settings > Wireless > Cell Broadcasts, and on the settings page, "Cell Broadcasts" is untranslated (but the settings themselves are). I still need to find where/how to send a patch for this (not sure if it comes from Android, CyanogenMod, or it's something specific to Replicant; also, being Android 4.x, I'm not sure about the usefulness of reporting such a minimal and unimportant patch upstream…).
When I turn on the phone, I get the Samsung S III splash screen, then the Replicant splash screen, then the numeric pad to unlock the SIM card. After that, I see the screen lock, but when I press it to enter the pattern, the screen turns off and on and the screen lock appears again (and I have to press it again to enter the pattern). If after unlocking the SIM card I wait a bit, I see the screen lock, then a black screen, then the screen lock again, so it’s not my tap causing it. Waiting a bit for the phone to show the screen lock for the second time is less annoying, but I wonder why this happens and why I cannot unlock the screen directly on the first attempt. This is also pending research, but low priority.
When the phone boots, I find the splash screens too bright (the “Samsung Galaxy S III” splash, and later the red Replicant one). I don’t know if I can change that. I know that other people have created different ‘Replicant’ splash screens, so maybe I can create one that is almost black, with only the “Replicant” text in very dark grey. But this is obviously a workaround, not a fix. OTOH, it’s only annoying for a few seconds: once the unlock screen is shown, the phone uses the brightness level that I’ve set (usually, the lowest one).
From time to time, I suffer soft reboots:
  1. the phone hangs for 2-3 seconds
  2. then the red “Replicant” splash screen is shown (the phone is not totally rebooted, because I don’t see the “Samsung Galaxy S III” splash screen and the SIM card unlock PIN is not requested)
  3. after unlocking the screen, I see a normal ‘desktop’ (similar to what I see after rebooting the phone: no apps running, and no “last used apps” history. Time and date are ok, wireless or 3G starts correctly etc).
I’ve tried to track the causes of these soft reboots, but I couldn’t find anything specific. They are not frequent at all, and when I decide to launch CatLog to try to catch any hint, the phone works perfectly for hours or days :s
Replicant is currently using the fallback Android EGL implementation, which is incomplete. The missing features of this implementation cause multiple issues, which are described in #705. These are the ones that I experience (or I miss):
  • The phone comes with a video editor preinstalled: Movie Studio. I got excited about it, because I was jealous of the small built-in video editor that comes with Whatsapp, but I became sad because Movie Studio does not work😦
  • The camera does not record video.
  • When I long-press the central button of my phone to see the list of recent apps, I don’t see their thumbnails (only the name, and their icons). This is quite unimportant for me, names and icons are enough.
  • The stock Gallery app does not work well: I cannot see thumbnails of the albums. This is not very important, because I installed Gallery.
  • I cannot use Firefox, Orfox and other derivative web browsers (I usually use the stock browser, and I installed Lightning too).
  • I cannot use barcode or QR scanners.
  • My son cannot play Shattered Pixel Dungeon (nor Pixel Dungeon). Fortunately he now uses my old Android 2.x device for that.
I installed the non-free firmware to be able to use Wifi and tethering, GPS and some other things. This does not fix the graphics problems listed above.

New apps, and translations

Note: when I write about Android apps, I usually link to their pages on the F-Droid website. Here I talk about translations (contributions), so I’ll link to their original website or source code repos. But you can find all those apps in F-Droid too.
As I said before, I installed another gallery app called Gallery and submitted an update to its Spanish translation.
I installed Red Moon to reduce (even more) the screen brightness. At night it’s a relief. Maybe the brightness of the splash screens is not that high, and I just perceive them as annoying because I got accustomed to Red Moon! I contributed some strings to the Spanish translation.
I liked RadioDroid very much, and I translated the app to Spanish.
I translated Wifi Privacy Police and used it for some time, but I got tired of it asking all the time as I walk across my workplace (multiple buildings within the same Wifi network, but quite a lot of access points…).
I keep on contributing to K-9 Mail to make it 100% translated to Spanish. Now with a modern Android I can move to the development branch (5.1xx releases), and I just did.
I submitted a Spanish translation to DAVDroid, although I’m not using it yet (I have to see if my University’s Owncloud instance allows syncing contacts and calendars).
I updated the Spanish translation of PassAndroid, although I don’t use it yet (I tend to print my train/airplane tickets…). I keep it installed on my phone, just in case.

Other apps that I use

I’m testing the OwnCloud/NextCloud and NextCloud Beta clients with my University’s Owncloud and with Davros in my Sandstorm box (with Davros, I could only make it work by installing an old version of the OwnCloud/NextCloud client and then upgrading; see #65).
I didn’t get accustomed to Conversations. Not sure why, though. Maybe it’s just that I got accustomed to Xabber-Classic, so I upgraded to Xabber. It works like a charm, has a dark theme, and I can close it easily when I don’t want to chat.
I fell in love with KDE Connect. Later I realized that I could have been using it on my Android 2.x phone for a long time already…
Sometimes I have fun activating Voice Notification and entering the redeslibres XMPP multi-user chat at salas.mijabber.es, for example while I’m cooking in the kitchen (in that room people talk in Spanish and make many wordplays, mixing Spanish and English, and use tech slang, etc., so it’s really fun to hear the Spanish TTS deal with the conversation there!).

More to come

As I said at the beginning of this long post, my plan is to keep on tinkering with the phone, testing and translating apps, and becoming more involved. So expect some more posts about Android on this blog in the future.
For now, some big things in my TODO:
  • Watching again some videos: DebConf16 videos about Android tools in Debian, FOSDEM talks about Replicant, and some other talks about free software in Android.
  • I track the #fdroid and #fdroid-dev channels in IRC, but I’m not very talkative there. I guess I could do more user support.
  • Participate more in the F-Droid (client, server, data) issue trackers (I send reports when the alpha version crashes, and comment on a few issues, but I don’t triage the issue tracker to find issues that I could reproduce or help to diagnose or contribute to fix).
  • A long time ago I learned to set up an Android development environment and build apps. I would like to re-learn it and maybe do some small fixes in unmaintained or nearly unmaintained apps, and maybe adopt them or join their development teams (I’m thinking, for example, of Puma, an Android client for the pump.io network, the MediaGoblin app, or the DebianDroid app). And ship new versions of unmaintained apps, including Spanish translations.
We’ll see how far I can go!

Comments?

You can comment about this post in this pump.io thread.

Filed under: My experiences and opinion, Tools Tagged: Android, Debian, English, F-Droid

09 October, 2016 10:48AM by larjona

Craig Sanders

Converting to a ZFS rootfs

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1GB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40            2047   1004.0 KiB  EF02  BIOS boot partition
   2            2048         2099199   1024.0 MiB  EF00  EFI System
   3         2099200         6293503   2.0 GiB     8300  Linux filesystem
   4         6293504        14682111   4.0 GiB     8200  Linux swap
   5        14682112       455084031   210.0 GiB   BF07  Solaris Reserved 1
   6       455084032       459278335   2.0 GiB     BF08  Solaris Reserved 2
   7       459278336       537234734   37.2 GiB    BF09  Solaris Reserved 3
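
For reference, roughly the same layout can be created non-interactively with sgdisk. This is an untested sketch (sizes taken from the table above, partition type codes as discussed; sgdisk picks the aligned start sectors itself):

sgdisk --zap-all /dev/sdp
sgdisk -n1:0:+1M   -t1:ef02 -c1:"BIOS boot partition" \
       -n2:0:+1G   -t2:ef00 -c2:"EFI System" \
       -n3:0:+2G   -t3:8300 -c3:"Linux filesystem" \
       -n4:0:+4G   -t4:8200 -c4:"Linux swap" \
       -n5:0:+210G -t5:bf07 -c5:"Solaris Reserved 1" \
       -n6:0:+2G   -t6:bf08 -c6:"Solaris Reserved 2" \
       -n7:0:0     -t7:bf09 -c7:"Solaris Reserved 3" \
       /dev/sdp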

I then cloned the partition table to the other three SSDs with this little script:

clone-partitions.sh

#! /bin/bash

src='sdp'

targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" /dev/"$src"
  sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm array for /boot, the zpool, and the root filesystem.

Most rootfs-on-ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. So on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of the hostname and the rootfs dataset, giving rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs. i.e. ganesh/root

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

create.sh

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'

md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

zfs_part=5

# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm "$md" --create \
    --bitmap=internal \
    --raid-devices=4 \
    --level 1 \
    --metadata=0.90 \
    "${md_parts[@]}"

mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
    mirror "${zmirror1[@]}" \
    mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
# bootfs is a pool property, so the pool name is required as the last argument
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.
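
For example, a dedicated download dataset along those lines might look like this (dataset name and mountpoint are only illustrative):

# a dataset tuned for bit-torrent downloads, to limit COW fragmentation
zfs create -o recordsize=16K -o mountpoint=/data/torrent-incoming ganesh/torrent-incoming
# then point the BT client at /data/torrent-incoming and have it move
# completed downloads to a directory on another dataset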

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
  -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
  -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.

4. rsync my current system to it.

Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).

Then:

hn="$(hostname -s)"
time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail not to be on the new setup) or something.

You can do a (very quick & dirty) performance test now by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s but that’s good enough… the Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

chroot.sh

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and the ext4-on-RAID-1 /boot:

/ganesh/root    /         zfs     defaults                                         0  0
/dev/md0        /boot     ext4    defaults,relatime,nodiratime,errors=remount-ro   0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
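
When I do get around to it, it will probably look something like this (an untested sketch – partition 4 on each SSD, all at equal priority so the kernel stripes across them, plus zswap enabled on the kernel command line):

for part in /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*-part4 ; do
    mkswap "$part"
    echo "$part  none  swap  sw,pri=1  0  0" >> /etc/fstab
done
swapon -a

# zswap: add e.g. zswap.enabled=1 to GRUB_CMDLINE_LINUX and run update-grub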

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest copying them to /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type EF02 partition for it to install itself into.

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/

I fixed that problem with this script:

fix-ata-links.sh

#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot; otherwise you’ll get that error every time you run update-grub in the future.
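
One way to do that is a rule roughly like this (an untested sketch – the match keys are assumptions based on these particular drives, check udevadm info output for yours):

cat > /etc/udev/rules.d/90-ata-links.rules <<'EOF'
# create by-id style "ata-*" symlinks directly under /dev so grub-probe finds them
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_BUS}=="ata", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="Crucial_CT275MX300SSD1_*", SYMLINK+="ata-$env{ID_SERIAL}"
ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_BUS}=="ata", ENV{DEVTYPE}=="partition", ENV{ID_SERIAL}=="Crucial_CT275MX300SSD1_*", SYMLINK+="ata-$env{ID_SERIAL}-part%n"
EOF

# reload the rules and re-trigger block devices to create the links immediately
udevadm control --reload
udevadm trigger --subsystem-match=block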

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /

umount-zfs-root.sh

#! /bin/sh

hn="$(hostname -s)"
md="/dev/md0"

for i in dev/pts dev sys proc ; do
  umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes

  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
      mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \
             ata-Crucial_CT275MX300SSD1_163313AB002C-part6
    

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk
    

10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.

Converting to a ZFS rootfs is a post from: Errata

09 October, 2016 05:57AM by cas

October 08, 2016

Norbert Tretkowski

Gajim plugins packaged for Debian

Wolfgang Borgert started to package some of the available Gajim plugins for Debian. At the time of writing, the OMEMO, HTTP Upload and URL Image Preview plugins are available in testing and unstable.

More plugins will follow.

08 October, 2016 10:00PM

hackergotchi for Joachim Breitner

Joachim Breitner

T430s → T460s

Earlier this week, I finally got my new machine that came with my new position at the University of Pennsylvania: A shiny Thinkpad T460s that now replaces my T430s. (Yes, there is a pattern. It continues with T400 and T41p.) I decided to re-install my Debian system from scratch and copy over only the home directory – a bit of purification does not hurt. This blog post contains some random notes that might be useful to someone, or alternatively where I hope someone can tell me how to fix and improve things.

Installation

The installation (using debian-installer from a USB drive) went mostly smooth, including LVM on an encrypted partition. Unfortunately, it did not set up grub correctly for the UEFI system to boot, so I had to jump through some hoops (using the grub on the USB drive to manually boot into the installed system, and installing grub-efi from there) until the system actually came up.

High-resolution display

This laptop has a 2560×1440 high resolution display. Modern desktop environments like GNOME supposedly handle that quite nicely, but for reasons explained in an earlier post, I do not use a desktop environment but have a minimalistic setup based on Xmonad. I managed to get a decent setup now, by turning lots of manual knobs:

  • For the linux console, setting

    FONTFACE="Terminus"
    FONTSIZE="12x24"

    in /etc/default/console-setup yielded good results.

  • For the few GTK-2 applications that I am still running, I set

    gtk-font-name="Sans 16"

    in ~/.gtkrc-2.0. Similarly, for GTK-3 I have

    [Settings]
    gtk-font-name = Sans 16

    in ~/.config/gtk-3.0/settings.ini.

  • Programs like gnome-terminal, Evolution and hexchat refer to the “System default document font” and “System default monospace font”. I remember that it was possible to configure these in the GNOME control center, but I could not find any way of configuring these using command line tools, so I resorted to manually setting the font for these. With help from Alexandre Franke I figured out that the magic incantation here is:

    gsettings set org.gnome.desktop.interface monospace-font-name 'Monospace 16'
    gsettings set org.gnome.desktop.interface document-font-name 'Serif 16'
    gsettings set org.gnome.desktop.interface font-name 'Sans 16'
  • Firefox seemed to have picked up these settings for the UI, so that was good. To make web pages readable, I set layout.css.devPixelsPerPx to 1.5 in about:config.

  • GVim has set guifont=Monospace\ 16 in ~/.vimrc. The toolbar is tiny, but I hardly use it anyways.

  • Setting the font of Xmonad prompts requires the syntax

    , font = "xft:Sans:size=16"

    Speaking about Xmonad prompts: Check out the XMonad.Prompt.Unicode module that I have been using for years and recently submitted upstream.

  • I launch Chromium (or rather the desktop applications that I use that happen to be Chrome apps) with the parameter --force-device-scale-factor=1.5.

  • Libreoffice seems to be best configured by running xrandr --dpi 194 beforehand. This also seems to be read by Firefox, doubling the effect of the font size in the GTK settings, which is annoying. Luckily I do not work with Libreoffice often, so for now I’ll just set that manually when needed.

I am not quite satisfied. I have the impression that the 16 point size font, e.g. in Evolution, is not really pretty, so I am happy to take suggestions here.

I found the ArchWiki page on HiDPI very useful here.

Trackpoint and Touchpad

One reason for me to stick with Thinkpads is their trackpoint, which I use exclusively. In previous models, I disabled the touchpad in the BIOS, but this did not seem to have an effect here, so I added the following section to /etc/X11/xorg.conf.d/30-touchpad.conf:

Section "InputClass"
        Identifier "SynPS/2 Synaptics TouchPad"
        MatchProduct "SynPS/2 Synaptics TouchPad"
        Option "ignore" "on"
EndSection

At one point I left out the MatchProduct line, disabling all input in the X server. Had to boot into recovery mode to fix that.

Unfortunately, there is something wrong with the trackpoint and the buttons: When I am moving the trackpoint (and maybe if there is actual load on the machine), mouse button press and release events sometimes get lost. This is quite annoying – I try to open a folder in Evolution and accidentally move it.

I installed the latest Kernel from Debian experimental (4.8.0-rc8), but it did not help.

I filed a bug report against libinput although I am not fully sure that that’s the culprit.

Update: According to Benjamin Tissoires it is a known firmware bug and the appropriate people are working on a work-around. Until then I am advised to keep my palm off the touchpad.

Also, I found the trackpoint too slow. I am not sure if it is simply because of the large resolution of the screen, or because some movement events are also swallowed. For now, I simply changed the speed by writing

SUBSYSTEM=="serio", DRIVERS=="psmouse", ATTRS{speed}="120"

to /etc/udev/rules.d/10-trackpoint.rules.

Brightness control

The system would not automatically react to pressing Fn-F5 and Fn-F6, which are the keys to adjust the brightness. I am unsure about how and by what software component it “should” be handled, but the solution that I found was to set

Section "Device"
        Identifier  "card0"
        Driver      "intel"
        Option      "Backlight"  "intel_backlight"
        BusID       "PCI:0:2:0"
EndSection

so that the command line tool xbacklight would work, and then use Xmonad keybinds to perform the action, just as I already do for sound control:

    , ((0, xF86XK_Sleep),       spawn "dbus-send --system --print-reply --dest=org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower.Suspend")
    , ((0, xF86XK_AudioMute), spawn "ponymix toggle")
    , ((0, 0x1008ffb2 {- xF86XK_AudioMicMute -}), spawn "ponymix --source toggle")
    , ((0, xF86XK_AudioRaiseVolume), spawn "ponymix increase 5")
    , ((0, xF86XK_AudioLowerVolume), spawn "ponymix decrease 5")
    , ((shiftMask, xF86XK_AudioRaiseVolume), spawn "ponymix increase 5 --max-volume 200")
    , ((shiftMask, xF86XK_AudioLowerVolume), spawn "ponymix decrease 5")
    , ((0, xF86XK_MonBrightnessUp), spawn "xbacklight +10")
    , ((0, xF86XK_MonBrightnessDown), spawn "xbacklight -10")

The T460s does not actually have a sleep button; that line is a leftover from my T430s. I suspend the machine by pressing the power button now, thanks to HandlePowerKey=suspend in /etc/systemd/logind.conf.

Profile Weirdness

Something strange happened to my environment variables after the move. It is clearly not hardware related, but I simply cannot explain what has changed: All relevant files in /etc look similar enough.

I use ~/.profile to extend the PATH and set some other variables. Previously, these settings were in effect in my whole X session, which is started by lightdm with auto-login, followed by xmonad-session. I could find no better way to fix that than sourcing . ~/.profile early in my ~/.xmonad/xmonad-session-rc. Very strange.

08 October, 2016 09:22PM by Joachim Breitner ([email protected])

hackergotchi for Charles Plessy

Charles Plessy

I just finished to read the Imperial Radch trilogy.

I liked it a lot. There are already many comments on the Internet (thanks Russ for introducing me to these novels), so I will not go into details, and it is hard to summarise without spoiling. In brief:

The first volume, Ancillary Justice, takes us through various worlds and cultures, and gives us an impression of what it feels like to be a demigod. The main culture does not distinguish between the two sexes, and the grammar of its language has no genders. This gives the story an original flavour: for instance, when the hero speaks a foreign language, he has difficulty addressing people correctly without risking offence. Unfortunately the English language itself does not use gender very much, so the literary effect is a bit weakened. Perhaps the French translation (which I have not read) is more interesting in that respect?

The second volume, Ancillary Sword, shows us how one can communicate in a surveillance society without privacy, through subtle variations in how tea is served. Gallons of tea are drunk in this volume, whose main interest lies in the relationships between the characters and their conversations.

The third volume, Ancillary Mercy, asks what makes us human. Among the most interesting characters there is a kind of synthetic human, who acts as ambassador for an alien race. At first he indeed behaves in a completely alien way, but in the end he is not very different from a newborn who happens, by some miracle, to know how to speak: in the beginning the world makes no sense, but step by step, by experimenting, he deduces how it works. This is how this character ends up understanding that what is called "war" is a complex phenomenon, one of whose consequences is a shortage of fish sauce.

I was a bit surprised that no book leads us to the heart of the Radch empire, but I have just seen on Wikipedia that one more novel is in preparation... One can speculate that the central Radch resembles a future dystopian West, in which surveillance of everybody is total and constant, but where people think they are happy, and peace and well-being inside are kept possible thanks to military operations outside, mostly performed by killer robots controlled by artificial intelligences. A not so distant future?

It goes without saying that there does not seem to be any Free Software in the Radch empire. That reminds me that I did not contribute much to Debian while I was reading...

08 October, 2016 03:29PM

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX update October 2016: all of TeX Live and Biber 2.6

Finally a new update of many TeX-related packages: all of the texlive-* packages, including the binary packages, and biber have been updated to the latest release. This upload was delayed by my travels around the world, as well as by the need to package a new Perl module (libdatetime-calendar-julian-perl) required by the new Biber. Also, my new job leaves me only the weekends for packaging. Anyway, the packages are now uploaded and should appear soon on your friendly local server.


There are several highlights: The binaries have been patched with several upstream fixes (tex4ht and XeTeX compatibility, as well as various Japanese TeX engine fixes), updated biber and biblatex, and as usual loads of new and updated packages.

Last but not least I want to thank one particular author: His package was removed from TeX Live due to the addition of a rather unusual clause in the license. Instead of simply uploading new packages to Debian with this rather important package removed, I contacted the author and asked for clarification. And to my great pleasure he immediately answered with an update of the package with a fixed license.

All of us users of these many packages should be grateful to the authors of the packages, who invest loads of their free time into supporting our community. Thanks!

Enough now, here as usual the list of new and updated packages with links to their respective CTAN pages. Enjoy.

New packages

addfont, apalike-german, autoaligne, baekmuk, beamerswitch, beamertheme-cuerna, beuron, biblatex-claves, biolett-bst, cooking-units, cstypo, emf, eulerpx, filecontentsdef, frederika2016, grant, latexgit, listofitems, overlays, phonenumbers, pst-arrow, quicktype, revquantum, richtext, semantic-markup, spalign, texproposal, tikz-page, unfonts-core, unfonts-extra, uspace.

Updated packages

achemso, acmart, acro, adobemapping, alegreya, allrunes, animate, arabluatex, archaeologie, asymptote, attachfile, babel-greek, bangorcsthesis, beebe, biblatex, biblatex-anonymous, biblatex-apa, biblatex-bookinother, biblatex-chem, biblatex-fiwi, biblatex-gost, biblatex-ieee, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-phys, biblatex-realauthor, biblatex-science, biblatex-true-citepages-omit, bibleref, bidi, chemformula, circuitikz, cochineal, colorspace, comment, covington, cquthesis, ctex, drawmatrix, ejpecp, erewhon, etoc, exsheets, fancyhdr, fei, fithesis, footnotehyper, fvextra, geschichtsfrkl, gnuplottex, gost, gregoriotex, hausarbeit-jura, ijsra, ipaex, jfontmaps, jsclasses, jslectureplanner, latexdiff, leadsheets, libertinust1math, luatexja, markdown, mcf2graph, minutes, multirow, mynsfc, nameauth, newpx, newtxsf, notespages, optidef, pas-cours, platex, prftree, pst-bezier, pst-circ, pst-eucl, pst-optic, pstricks, pstricks-add, refenums, reledmac, rsc, shdoc, siunitx, stackengine, tabstackengine, tagpair, tetex, texlive-es, texlive-scripts, ticket, translation-biblatex-de, tudscr, turabian-formatting, updmap-map, uplatex, xebaposter, xecjk, xepersian, xpinyin.

Enjoy.

08 October, 2016 04:45AM by Norbert Preining

October 07, 2016

Petter Reinholdtsen

Isenkram, Appstream and udev make life as a LEGO builder easier

The Isenkram system provides a practical and easy way to figure out which packages support the hardware in a given machine. The command line tool isenkram-lookup and the tasksel options provide a convenient way to list and install packages relevant for the current hardware during system installation, both user space packages and firmware packages. The GUI background daemon, on the other hand, provides a pop-up proposing to install packages when a new dongle is inserted while using the computer. For example, if you plug in a smart card reader, the system will ask if you want to install pcscd if that package isn't already installed, and if you plug in a USB video camera the system will ask if you want to install cheese if cheese is currently missing. This already works just fine.
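
To see what Isenkram would propose for the machine in front of you, you can also run the lookup tool by hand in a terminal; it prints one package name per line (the output below is just an illustration, it obviously depends on the hardware):

% isenkram-lookup
cheese
pcscd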

But Isenkram depends on a database mapping from hardware IDs to package names. When I started, no such database existed in Debian, so I made my own data set and included it with the isenkram package and made isenkram fetch the latest version of this database from git using http. This way the isenkram users would get updated package proposals as soon as I learned more about hardware-related packages.

The hardware is identified using modalias strings. The modalias design is from the Linux kernel, where most hardware descriptors are made available as strings that can be matched using filename-style globbing. It handles USB, PCI, DMI and a lot of other hardware-related identifiers.
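
You can have a look at the modalias strings for your own machine directly in sysfs, for example like this:

# list the unique modalias strings the kernel exposes for this machine
find /sys/devices -name modalias -exec cat {} + | sort -u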

The downside to the Isenkram-specific database is that there is no information about the relevant distribution / Debian version, making isenkram propose obsolete packages too. But along came AppStream, a cross-distribution mechanism to store and collect metadata about software packages. When I heard about the proposal, I contacted the people involved and suggested adding a hardware matching rule using modalias strings in the specification, to be able to use AppStream for mapping hardware to packages. This idea was accepted and AppStream is now a great way for a package to announce the hardware it supports in a distribution-neutral way. I wrote a recipe on how to add such meta-information in a blog post last December. If you have a hardware-related package in Debian, please announce the relevant hardware IDs using AppStream.
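
Once such metadata is in the archive, anyone can query it. A sketch using appstreamcli (the modalias value here is just an illustrative USB ID, not a real device):

# ask AppStream which components declare support for a given modalias
appstreamcli what-provides modalias usb:v0694p0002d0100dc00dsc00dp00ic00isc00ip00in00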

In Debian, almost all packages that can talk to a LEGO Mindstorms RCX or NXT unit announce this support using AppStream. The effect is that when you insert such a LEGO robot controller into your Debian machine, Isenkram will propose to install the packages needed to get it working. The intention is that this should allow the local user to start programming his robot controller right away without having to guess what packages to use or which permissions to fix.

But when I sat down with my son the other day to program our NXT unit using his Debian Stretch computer, I discovered something annoying. The local console user (i.e. my son) did not get access to the USB device for programming the unit. This used to work, but no longer does in Jessie and Stretch. After some investigation and asking around on #debian-devel, I discovered that this was because udev had changed the mechanism used to grant access to local devices. The ConsoleKit mechanism from /lib/udev/rules.d/70-udev-acl.rules no longer applied, because LDAP users were no longer added to the plugdev group during login. Michael Biebl told me that this method was obsolete and the new method used ACLs instead. This was good news, as the plugdev mechanism is a mess when using a remote user directory like LDAP. Using ACLs would make sure a user lost device access when she logged out, even if the user left behind a background process which would retain the plugdev membership with the ConsoleKit setup. Armed with this knowledge I moved on to fix the access problem for the LEGO Mindstorms-related packages.

The new system uses a udev tag, 'uaccess'. It can either be applied directly for a device, or is applied in /lib/udev/rules.d/70-uaccess.rules for classes of devices. As the LEGO Mindstorms udev rules did not have a class, I decided to add the tag directly in the udev rules files included in the packages. Here is one example. For the nqc C compiler for the RCX, the /lib/udev/rules.d/60-nqc.rules file now looks like this:

SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
    SYMLINK+="rcx-%k", TAG+="uaccess"

The key part is the 'TAG+="uaccess"' at the end. I suspect all packages using plugdev in their /lib/udev/rules.d/ files should be changed to use this tag (either directly or indirectly via 70-uaccess.rules). Perhaps a lintian check should be created to detect this?
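
To check that the tag has the intended effect, you can look at the ACL that udev/logind attach to the device node while a user is logged in locally; the device path below is just an example (use lsusb to find the right bus/device numbers):

udevadm info /dev/bus/usb/001/004 | grep TAGS     # should list :uaccess:
getfacl /dev/bus/usb/001/004                      # should contain a user:<login>:rw- entry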

I've been unable to find good documentation on the uaccess feature. It is unclear to me if the uaccess tag is an internal implementation detail like the udev-acl tag used by /lib/udev/rules.d/70-udev-acl.rules. If it is, I guess the indirect method is the preferred way. Michael asked for more documentation from the systemd project and I hope it will make this clearer. For now I use the generic classes when they exist and are already handled by 70-uaccess.rules, and add the tag directly if no such class exists.

To learn more about the isenkram system, please check out my blog posts tagged isenkram.

To help make life easier for LEGO builders in Debian, please join us on our IRC channel #debian-lego and join the Debian LEGO team in the Alioth project we created yesterday. A mailing list has not been created yet, but we are working on it. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

07 October, 2016 07:50AM

October 06, 2016

hackergotchi for Joey Hess

Joey Hess

keysafe with local shares

If your gpg key is too valuable for you to feel comfortable with backing it up to the cloud using keysafe, here's an alternative that might appeal more.

Keysafe can now back up some shares of the key to local media, and other shares to the cloud. You can arrange things so that the key can't be restored without access to some of the local media and some of the cloud servers, as well as your password.

For example, I have 3 USB sticks, and there are 3 keysafe servers. So let's make 6 shares total of my gpg secret key and require any 4 of them to restore it.

I plug in all 3 USB sticks and look at mount to get their paths. Then I run keysafe to back up the key, spread among all 6 locations.

keysafe --backup --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdc1 \
    --add-storage-directory /media/sdd1 \
    --add-storage-directory /media/sde1

Once it's done, I can remove the USB sticks, and distribute them to secure places.

To restore, I need at least one of the USB sticks. (If some of the servers are down, more USB sticks will be needed.) Again I tell keysafe the paths where USB stick(s) are mounted.

keysafe --restore --totalshares 6 --neededshares 4 \
    --add-storage-directory /media/sdb1

Using keysafe this way, physical access to the USB sticks is the first level of defense, and hopefully you'll know if that's breached. The keysafe password is the second level of defense, and cracking that will take a lot of work. Leaving plenty of time to revoke your key, etc, if it comes to that.

I feel this is better than the methods I've been using before to back up my most important gpg keys. With paperkey, physical access to the printout immediately exposes the key. With Shamir Secret Sharing and manual distribution of shares, the only second line of defense is the much easier to crack gpg passphrase. Using OpenPGP smartcards is still a more secure option, but you'd need 3 smartcards to reach the same level of redundancy, and it's easier to get your hands on 3 USB sticks than 3 smartcards.

There's another benefit to using keysafe this way. It means that sometimes, the data stored on the keysafe servers is not sufficient to crack a key. There's no way to tell, so an attacker risks doing a lot of futile work.

If you're not using an OpenPGP smartcard, I encourage you to back up your gpg key with keysafe as described above.

Two of the three necessary keysafe servers are now in operation, and I hope to have a full complement of servers soon.

(This was sponsored by Thomas Hochstein on Patreon.)

06 October, 2016 10:37PM

hackergotchi for Nathan Handler

Nathan Handler

FOSSCON

This post is long past due, but I figured it is better late than never. At the start of the year, I set a goal to get more involved with attending and speaking at conferences. Through work, I was able to attend the Southern California Linux Expo (SCALE) in Pasadena, CA in January. I also got to give a talk at O'Reilly's Open Source Convention (OSCON) in Austin, TX in May. However, I really wanted to give a talk about my experience contributing in the Ubuntu community.

José Antonio Rey encouraged me to submit the talk to FOSSCON. While I've been aware of FOSSCON for years thanks to my involvement with the freenode IRC network (which has had a reference to FOSSCON in the /motd for years), I had never actually attended it before. I also wasn't quite sure how I would handle traveling from San Francisco, CA to Philadelphia, PA. Regardless, I decided to go ahead and apply.

Fast forward a few weeks, and imagine my surprise when I woke up to an email saying that my talk proposal was accepted. People were actually interested in me and what I had to say. I immediately began researching flights. While they weren't crazy expensive, they were still more money than I was comfortable spending. Luckily, José had a solution to this problem as well; he suggested applying for funding through the Ubuntu Community Donations fund. While I've been an Ubuntu Member for over 8 years, I've never used this resource before. However, I was happy when I received a very quick approval.

The conference itself was smaller than I was expecting. However, it was packed with lots of friendly and familiar faces of people I've interacted with online and in person over the years at various Open Source events.

I started off the day by learning from José how to use Juju to quickly set up applications in the cloud. While Juju has definitely come a long way over the last couple of years, and it appears to be quite easy to learn and use, it still appears to be lacking some of the features needed to take full control over how the underlying applications interact with each other. However, I look forward to continuing to watch it grow and mature.

Next up, we had a lunch break. There was no catered lunch at this conference, so we decided to get some cheesesteak at Abner's (is any trip to Philadelphia complete without cheesesteak?).

Following lunch, I took some time to make a few last-minute changes to my presentation and rehearse a bit. Finally, it was time. I got up in front of the audience and gave my presentation. Overall, I was quite pleased. It was not perfect, but for the first time giving the talk, I thought it went pretty well. I will work hard to make it even better for next time.

Following my talk was a series of brief lightning talks prior to the closing keynote. Another long-time friend of mine, Elizabeth Krumbach Joseph, was giving the keynote about listening to the needs of your global open source community. While I have seen her speak on several other occasions, I really enjoyed this particular talk. It was full of great examples and anecdotes that were easy for the audience to relate to and start applying to their own communities.

After the conference, a few of us went off and played tourist, paying the Liberty Bell a visit before concluding our trip in Philadelphia.

Overall, I had a great time at FOSSCON. It was great being reunited with so many friends. A big thank you to José for his constant support and encouragement, and to Canonical and the Ubuntu Community for helping to make it possible for me to attend this conference. Finally, thanks to the terrific FOSSCON staff for volunteering so much time to put on this great event.

06 October, 2016 09:31PM

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, September 2016

I was assigned 12.3 hours of work by Freexian's Debian LTS initiative and carried over 1.45 from last month. I was unwell for much of this month and only worked 6 hours on LTS. I returned 7 hours to the pool and carry over 0.75 hours.

I wrote and sent the DLA for linux 3.2.81-2, and I discussed the handling of various issues on the debian-lts mailing list. Most of my time was spent working on the long backlog of security issues in imagemagick. I hope to complete this and upload a fixed version this month.

06 October, 2016 05:39PM

Arturo Borrero González

About Pacemaker HA stack in Debian Jessie

Debian - Pacemaker

People seem to be unaware of the status of the Pacemaker HA stack in Debian Jessie: most think that they should stick to Debian Wheezy.

Why does this happen? Perhaps because the situation has received little or no publicity.

For some time now, Debian has contained a Pacemaker stack which is ready to use in both Debian Jessie and Debian Stretch.

Anyway, let’s see what we have so far:

  1. The pacemaker stack was updated in Debian unstable around Feb 2016.
  2. The packages migrated to Debian testing around that time as well.
  3. Most of the key packages were backported to jessie-backports (if not all).
  4. Therefore, Stretch is ready for the HA stack, and so is Jessie (using backports).

The packages I’m referring to are those which I consider the key components of the stack; at the time of this blog post, the versions are:

package     jessie-backports   stretch   sid      upstream
corosync    2.3.6              2.3.6     2.3.6    2.4.1
pacemaker   1.1.14             1.1.15    1.1.15   1.1.15
crmsh       2.2.0              2.2.1     2.2.1    2.4.1
libqb       1.0                1.0       1.0      1.0

How can you check this by yourself? Here are some pointers:

  • Debian HA packaging team overview: link
  • Package tracker for corosync: link
  • Package tracker for pacemaker: link
  • Package tracker for crmsh: link
  • Package tracker for libqb: link

I’m sure we even have a chance to improve the packages a bit before the release of Stretch. Some packages are still a bit behind the upstream version.

In any case: yes! You can move from Debian Wheezy to Debian Jessie!
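
For example, on a Jessie box the whole stack can be pulled from jessie-backports with something like this (a sketch; adjust the mirror to taste):

# enable jessie-backports if you have not done so already
echo 'deb http://httpredir.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/backports.list
apt-get update

# install the HA stack from backports
apt-get install -t jessie-backports corosync pacemaker crmsh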

06 October, 2016 02:30PM

Reproducible builds folks

Reproducible Builds: week 75 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday September 25 and Saturday October 1 2016:

Statistics…

For the first time, we reached 91% reproducible packages in our testing framework on testing/amd64 using a deterministic build path. (This is what we recommend to make packages in Stretch reproducible.) For unstable/amd64, where we additionally test for reproducibility across different build paths, we are at almost 76% again.

IRC meetings

We have a poll to set a time for a new regular IRC meeting. If you would like to attend, please input your available times and we will try to accommodate for you.

There was a trial IRC meeting on Friday, 2016-09-30 at 1800 UTC. Unfortunately, we did not activate meetbot. Despite this, participants consider the meeting a success, as several topics were discussed (e.g. changes to IRC notifications of tests.r-b.o) and the meeting stayed within one hour in length.

Upcoming events

Reproduce and Verify Filesystems - Vincent Batts, Red Hat - Berlin (Germany), 5th October, 14:30 - 15:20 @ LinuxCon + ContainerCon Europe 2016.

From Reproducible Debian builds to Reproducible OpenWrt, LEDE & coreboot - Holger "h01ger" Levsen and Alexander "lynxis" Couzens - Berlin (Germany), 13th October, 11:00 - 11:25 @ OpenWrt Summit 2016.

Introduction to Reproducible Builds - Vagrant Cascadian will be presenting at the SeaGL.org Conference in Seattle (USA), November 11th-12th, 2016.

Previous events

GHC Determinism - Bartosz Nitka, Facebook - Nara (Japan), 24th September, ICPF 2016.

Toolchain development and fixes

Michael Meskes uploaded bsdmainutils/9.0.11 to unstable with a fix for #830259 based on Reiner Herrmann's patch. This fixed the locale_dependent_symbol_order_by_lorder issue in the affected packages (freebsd-libs, mmh).

devscripts/2.16.8 was uploaded to unstable. It includes a debrepro script by Antonio Terceiro which is similar in purpose to reprotest but more lightweight; specific to Debian packages and without support for virtual servers or configurable variations.
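
If you want to give it a try, the usage is roughly as follows (a sketch – the package name is just an example; see the debrepro manual page in devscripts for details):

# from inside an unpacked Debian source package
cd mypackage-1.0/
debrepro    # builds the source twice under varied environments and diffs the results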

Packages reviewed and fixed, and bugs filed

The following updated packages have become reproducible in our testing framework after being fixed:

The following updated packages appear to be reproducible now for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • gkrellm/2.3.8-1 by Sandro Tosi
  • glassfish/1:2.1.1-b31g+dfsg1-4 by Emmanuel Bourg

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Reviews of unreproducible packages

77 package reviews have been added, 178 have been updated and 80 have been removed this week, adding to our knowledge about identified issues.

6 issue types have been updated:

Weekly QA work

As part of reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (3)
  • Chris Lamb (12)
  • Lucas Nussbaum (3)
  • Sebastian Reichel (1)

diffoscope development

A new version of diffoscope 61 was uploaded to unstable by Chris Lamb. It included contributions from:

  • Ximin Luo:
    • Improve the CLI --help text and add an --output-empty option.
  • Chris Lamb:
    • Add a progress bar and show it if stdout is a TTY. You can read more about it here. It can also be read by higher-level programs via the --status-fd CLI option.
  • Maria Glukhova:
    • Behaviour improvements in the case of OS-level errors.
  • Mattia Rizzolo:
    • Testing and packaging improvements.

Post-release there were further contributions from:

  • Chris Lamb:
    • Code architecture improvements.
  • Maria Glukhova:
    • Testing improvements.

reprotest development

A new version of reprotest 0.3.2 was uploaded to unstable by Ximin Luo. It included contributions from:

  • Ximin Luo:
    • Add a --diffoscope-arg CLI option to pass extra args to diffoscope.

Post-release there were further contributions from:

  • Chris Lamb:
    • Code quality improvements.

tests.reproducible-builds.org

  • Hans-Christoph Steiner continued work on setting up reproducible tests for F-Droid.
  • Holger cleaned up the script creating the page showing breakages, so that it now also cleans up some of the breakage it finds.
  • IRC notifications about diffoscope crashes and artifacts available for investigations have been dropped; instead the breakages page has a permanent pointer. (h01ger)
  • IRC notifications from the automatic package scheduler and status changes for packages have been moved -- as a temporary trial -- to #debian-reproducible-changes on irc.oftc.net (Mattia).

Misc.

This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

06 October, 2016 02:24PM

hackergotchi for Clint Adams

Clint Adams

Drawers

Ria has the sprue. She keeps her cœliac disease a secret, though, because she works in food service, and customers knowing about her little gluten-sensitive enterology problem would, she feels, damage her credibility.

“The fried chicken is delicious,” she coos. There is nothing gluten-free on the menu, so she does not have first-hand knowledge of this. Instead she is proxying the amalgamated judgments of others.

06 October, 2016 05:24AM