November 19, 2016

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.62.0-1

The BH package on CRAN was updated to version 1.62.0. BH provides a large part of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the upstream version Boost 1.62.0, and adds three new libraries as shown in the brief summary of changes from the NEWS file which follows below.

Special thanks to Kurt Hornik and Duncan Murdoch for help tracking down one abort() call which was seeping into R package builds, and then (re-)testing the proposed fix. We are now modifying one more file ever so slightly to use ::Rf_error(...) instead.

Changes in version 1.62.0-1 (2016-11-15)

  • Upgraded to Boost 1.62 installed directly from upstream source

  • Added Boost property_tree as requested in #29 by Aydin Demircioglu

  • Added Boost scope_exit as requested in #30 by Kirill Mueller

  • Added Boost atomic which we had informally added since 1.58.0

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 November, 2016 01:07PM

hackergotchi for Keith Packard

Keith Packard

AltOS-Lisp-2

Updates to Altos Lisp

I wrote a few days ago about a tiny lisp interpreter I wrote for AltOS.

Really, it's almost "done" now; I just wanted to make a few improvements.

Incremental Collection

I was on a walk on Wednesday when I figured out that I didn't need to do a full collection every time; a partial collection that only scanned the upper portion of memory would often find plenty of free space to keep working for a while.

To recap, the heap is in two pieces; the ROM piece and the RAM piece. The ROM piece is generated during the build process and never changes afterwards (hence the name), so the only piece which is collected is the RAM piece. Collection works like:

chunk_low = heap base
new_top = heap base

For all of the heap
    Find the first 64 live objects above chunk_low
    Compact them all to new_top
    Rewrite references in the whole heap for them
    Set new_top above the new locations
    Set chunk_low above the old locations

top = new_top

The trick is to realize that there's really no need to start at the bottom of the heap; you can start anywhere you like and compact stuff, possibly leaving holes below that location in the heap. As long-lived objects tend to slowly sift down to the beginning of the heap, it's useful to compact only the objects above that stable area, skipping the compaction process for the more stable region of memory.

Each time the whole heap is scanned, the top location is recorded. After that, incremental collects happen starting at that location, and when that doesn't produce enough free space, a full collect is done.
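The chunked compaction loop above can be sketched in C. This is a toy model (an array of ints stands in for tagged heap objects, and the names, sizes, and 4-object chunks are mine, not AltOS code):

```c
#include <string.h>

#define HEAP_SIZE 16
#define CHUNK 4          /* stands in for the 64-object chunks */

/* Toy heap: heap[i] != 0 means a live object of size one. */
static int heap[HEAP_SIZE];
static int top = HEAP_SIZE;

/* Compact live objects downward, CHUNK at a time, starting at
 * 'start' (heap base for a full collect, the last recorded top for
 * an incremental one).  Returns the new top. */
static int compact(int start)
{
    int chunk_low = start;
    int new_top = start;

    while (chunk_low < top) {
        /* find the first CHUNK live objects above chunk_low */
        int live[CHUNK];
        int found = 0, i = chunk_low;
        while (i < top && found < CHUNK)
            if (heap[i++]) live[found++] = i - 1;

        /* compact them all down to new_top (a real collector
         * would rewrite references to them here) */
        for (int j = 0; j < found; j++) {
            int v = heap[live[j]];
            heap[live[j]] = 0;
            heap[new_top + j] = v;
        }
        new_top += found;   /* above the new locations */
        chunk_low = i;      /* above the old locations */
    }
    top = new_top;
    return top;
}
```

In these terms, a full collect is compact(0) and an incremental one is compact(last_recorded_top), falling back to a full collect when that doesn't free enough space.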

The collector now runs a bunch faster on average.

Binary Searches

I had stuck linear searches in a few places in the code. The first was in the collector, when looking to see where an object had moved to; as there are 64 entries, a binary search reduces the average from 32 compares to 6. The second place was in the frame objects, which hold the list of atom/value bindings for each lexical scope (including the global scope). These aren't terribly large, but a binary search is still a fine plan. I wanted to write down here the basic pattern I'm using for binary searches these days, which avoids some of the boundary conditions I've managed to generate in the past:

int find(int needle) {
    int l = 0;
    int r = count - 1;
    while (l <= r) {
        int m = (l + r) >> 1;
        if (haystack[m] < needle)
            l = m + 1;
        else
            r = m - 1;
    }
    return l;    /* lower bound: first index whose value is >= needle */
}

With this version, the caller can then check to see if there's an exact match, and if not, then the returned value is the location in the array where the value should be inserted. If the needle is known to not be in the haystack, and if the haystack is large enough to accept the new value:

void insert(int needle) {
    int l = find(needle);

    /* shift the tail up one slot to make room */
    memmove(&haystack[l+1],
        &haystack[l],
        (count - l) * sizeof (haystack[0]));

    haystack[l] = needle;
    count++;
}

Similarly, if the caller just wants to know if the value is in the array:

bool exists(int needle) {
    int l = find(needle);

    return (l < count && haystack[l] == needle);
}
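Wired together as a self-contained unit (the array size and sample data are mine), the pattern can be sanity-checked like this:

```c
#include <stdbool.h>
#include <string.h>

static int haystack[16];
static int count = 0;

/* lower bound: first index whose value is >= needle,
 * which doubles as the insertion point */
static int find(int needle)
{
    int l = 0, r = count - 1;
    while (l <= r) {
        int m = (l + r) >> 1;
        if (haystack[m] < needle)
            l = m + 1;
        else
            r = m - 1;
    }
    return l;
}

static void insert(int needle)
{
    int l = find(needle);
    /* shift the tail up one slot, then drop the new value in */
    memmove(&haystack[l + 1], &haystack[l],
            (count - l) * sizeof(haystack[0]));
    haystack[l] = needle;
    count++;
}

static bool exists(int needle)
{
    int l = find(needle);
    return l < count && haystack[l] == needle;
}
```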

Call with Current Continuation

Because the execution stack is managed on the heap, it's completely trivial to provide the scheme-like call with current continuation, which constructs an object which can be 'called' to transfer control to a saved location:

> (+ "hello " (call/cc (lambda (return) (setq boo return) (return "foo "))) "world")
"hello foo world"
> (boo "bar ")
"hello bar world"
> (boo "yikes ")
"hello yikes world"

One thing I'd done previously is dump the entire state of the interpreter on any error, and that included a full stack trace. I adapted that code for printing these continuation objects:

boo
[
    expr:   (call/cc (lambda (return) (set (quote boo) return) (return "foo ")))
    state:  val
    values: (call/cc
             [recurse...]
             )
    sexprs: ()
    frame:  {}
]
[
    expr:   (+ "hello " (call/cc (lambda (return) (set (quote boo) return) (return "foo "))) "world")
    state:  formal
    values: (+
             "hello "
             )
    sexprs: ((call/cc (lambda (return) (set (quote boo) return) (return "foo ")))
             "world"
             )
    frame:  {}
]

The top stack frame is about to return from the call/cc spot with a value; supply a value to 'boo' and that's where you start. The next frame is in the middle of computing formals for the + s-expression. It's found the + function, and the "hello " string and has yet to get the value from call/cc or the value of the "world" string. Once the call/cc "returns", that value will get moved to the values list and the sexpr list will move forward one spot to compute the "world" value.

Implementing this whole mechanism took only a few dozen lines of code as the existing stack contexts were already a continuation in effect. The hardest piece was figuring out that I needed to copy the entire stack each time the continuation was created or executed as it is effectively destroyed in the process of evaluation.

I haven't implemented dynamic-wind yet; when I did that for nickle, it was a bit of a pain threading execution through the unwind paths.

Re-using Frames

I decided to try and re-use frames (those objects which hold atom/value bindings for each lexical scope). It wasn't that hard; the only trick was to mark frames which have been referenced from elsewhere as not-for-reuse and then avoid sticking those in the re-use queue. This reduced allocations even further so that for simple looping or tail-calling code, the allocator may never end up being called.

How Big Is It?

I've managed to squeeze the interpreter and all of the rest of the AltOS system into 25kB of Cortex-M0 code. That leaves space for the 4kB boot loader and 3kB of flash to save/restore the 3kB heap across resets.

Adding builtins to control timers and GPIOs would make this a reasonable software load for an Arduino, offering a rather different programming model for those with a taste for adventure. Modern ARM-based Arduino boards have plenty of flash and RAM for this. It might be interesting to get this running on the Arduino Zero. There's no real reason to replace the OS either; porting the lisp interpreter into the bare Arduino environment wouldn't take long.

19 November, 2016 09:20AM

November 18, 2016

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 0.12.8: And more goodies

Yesterday the eighth update in the 0.12.* series of Rcpp made it to the CRAN network for GNU R where the Windows binary has by now been generated too; the Debian package is on its way as well. This 0.12.8 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, and the 0.12.7 release in September --- making it the twelfth release at the steady bi-monthly release frequency. While we are keeping with the pattern, we have managed to include quite a lot of nice stuff in this release. None of it is a major feature, though, and so we have not increased the middle number.

Among the changes in this release are (once again) much improved exception handling (thanks chiefly to Jim Hester), better large vector support (by Qiang), a number of Sugar extensions (mostly Nathan, Qiang and Dan), the beginnings of new DateVector and DatetimeVector classes, and other changes detailed below. We plan to properly phase in the new date(time) classes. For now, you have to use a #define which remains commented-out in Rcpp.h. We plan to switch this on as the new default no earlier than twelve months from now.
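For illustration, opting in early would look something like this in a user's C++ source file; the macro name comes from the NEWS entry further down, while the exact mechanics of where to define it are an assumption on my part:

```cpp
// Opt in to the new date(time) classes ahead of the default switch.
// RCPP_NEW_DATE_DATETIME_VECTORS is the macro named in the NEWS
// entry; defining it before including Rcpp.h is an illustration.
#define RCPP_NEW_DATE_DATETIME_VECTORS 1
#include <Rcpp.h>
```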

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 843 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by eighty-four packages, or a full ten percent, just since the last release in early September!

Again, we are lucky to have such a large group of contributors. Among them, we have invited Nathan Russell to the Rcpp Core team given his consistently excellent pull requests (as well as many outstanding Stackoverflow answers for Rcpp). More details on changes are below.

Changes in Rcpp version 0.12.8 (2016-11-16)

  • Changes in Rcpp API:

    • String and vector elements now use extended R_xlen_t indices (Qiang in PR #560)

    • Hashing functions now return unsigned int (Qiang in PR #561)

    • Added static methods eye(), ones(), and zeros() for select matrix types (Nathan Russell in PR #569)

    • The exception call stack is again correctly reported; print methods and tests added too (Jim Hester in PR #582 fixing #579)

    • Variadic macros no longer use a GNU extension (Nathan in PR #575)

    • Hash index functions were standardized on returning unsigned integers (Also PR #575)

  • Changes in Rcpp Sugar:

    • Added new Sugar functions rowSums(), colSums(), rowMeans(), colMeans() (PR #551 by Nathan Russell fixing #549)

    • Range Sugar now uses the R_xlen_t type for start/end (PR #568 by Qiang Kou)

    • Defining RCPP_NO_SUGAR no longer breaks the build. (PR #585 by Daniel C. Dillon)

  • Changes in Rcpp unit tests:

    • A test for expression vectors was corrected.

    • The constructor test for datetime vectors reflects the new classes, which treat Inf correctly (and still as a non-finite value)

  • Changes in Rcpp Attributes:

    • An 'empty' return was corrected (PR #589 fixing issue #588, and with thanks to Duncan Murdoch for the heads-up)
  • Updated Date and Datetime vector classes:

    • The DateVector and DatetimeVector classes were renamed with a prefix old; they are currently typedef'ed to the existing name (#557)

    • New variants newDateVector and newDatetimeVector were added based on NumericVector (also #557, #577, #581, #587)

    • By defining RCPP_NEW_DATE_DATETIME_VECTORS the new classes can be activated. We intend to make the new classes the default no sooner than twelve months from this release.

    • The capabilities() function can also be used to check for the presence of this feature

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 November, 2016 11:41AM

November 17, 2016

Reproducible builds folks

Reproducible Builds: week 81 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday November 6 and Saturday November 12 2016:

Media coverage

Matthew Garrett blogged about Tor, TPMs and service integrity attestation and how reproducible builds are the base for systems integrity.

The Linux Foundation announced renewed funding for us as part of the Core Infrastructure Initiative. Thank you!

Outreachy updates

Maria Glukhova has been accepted into the Outreachy winter internship and will work with us, the Debian reproducible builds team.

To quote her words:

siamezzze: I've been accepted to #outreachy winter internship - going to
work with Debian reproducible builds team. So excited about that! <3
Debian

Toolchain development and fixes

dpkg:

  • Thanks to a series of dpkg uploads by Guillem Jover, all our toolchain changes are now finally available in sid!
  • This means your packages should now be reproducible without having to use our custom APT repository.
  • Ximin Luo opened #843925 as a reminder that dpkg-buildpackage should sign buildinfo files.
  • We hope to have a detailed post about the new dpkg and the new .buildinfo files on debian-devel-announce soon!

debrebuild:

  • srebuild / debrebuild work was resumed by Johannes Schauer and others in #774415.

Bugs filed

Chris Lamb:

Daniel Shahaf:

Niko Tyni:

Reiner Herrman:

Reviews of unreproducible packages

136 package reviews have been added, 5 have been updated and 7 have been removed in this week, adding to our knowledge about identified issues.

3 issue types have been updated:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (29)
  • Niko Tyni (1)

diffoscope development

A new version of diffoscope 62~bpo8+1 was uploaded to jessie-backports by Mattia Rizzolo.

Meanwhile in git, Ximin Luo greatly improved speed by fixing an O(n²) lookup which was causing diffs of large packages such as GCC and glibc to take many more hours than was necessary. When this commit is released, we should hopefully see full diffs for such packages again. Currently we have 197 source packages which - when built - diffoscope fails to analyse.

buildinfo.debian.net development

  • Submissions with duplicate Installed-Build-Depends entries are rejected now that a bug in dpkg causing them has been fixed. Thanks to Guillem Jover.
  • Add a new page for every (source, version) combination, for example diffoscope 62.
  • DigitalOcean have generously offered to sponsor the hardware buildinfo.debian.net is running on.

tests.reproducible-builds.org

Debian:

  • For privacy reasons, the new dpkg-genbuildinfo includes Build-Path only if it is under /build. HW42 updated our jobs so this is the case for our builds too, so you can see the build path in the .buildinfo files.
  • HW42 also updated our jobs to vary the basename of the source extraction directory. This detects packages that incorrectly assume a $pkg-$version directory naming scheme (which is what dpkg-source -x gives but is not mandated by Debian nor always-true) or that they're being built from a SCM.
  • The new dpkg-genbuildinfo also records a sanitised Environment. This is different in our builds, so HW42, Reiner and Holger updated our jobs to hide these differences from diffoscope output.
  • Package-set improvements:
  • Valerie Young contributed four patches for our long-planned transition from SQLite to PostgreSQL.
  • In anticipation of the freeze, already-tested packages from unstable and testing on amd64 are now scheduled with equal priority.

reproducible-builds.org website

F-Droid was finally added to our list of partner projects. (This was an oversight and they had already been working with us for some time.)

Misc.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

17 November, 2016 07:46PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Book presentation by @arenitasoria: Hacker ethics, security and surveillance

At the beginning of this year, Irene Soria invited me to start a series of talks on the topic of hacker ethics, security and surveillance. I presented a talk titled Cryptography and identity: Not everything is anonymity.

The talk itself was recorded and is available on archive.org (sidenote: I find it amazing that Universidad del Claustro de Sor Juana uses archive.org as their main multimedia publishing platform!)

But as part of this exercise, Irene invited me to write a chapter for a book covering the series. And, yes, she delivered!

So, finally, we will have the book presentation:

I know, not everybody following my posts will be able to join (that means only those at or near Mexico City can). But the good news: the book, as soon as it is presented, will be published under a CC BY-SA license. Of course, I will send out a notification when it is ready.

17 November, 2016 07:24PM by gwolf

Urvika Gola

Reaching out to Outreachy

The past few weeks have been a whirlwind of work with my application process for Outreachy taking my full attention.

When I got to know about Outreachy, I was intrigued as well as dubious.  I had many unanswered questions in my mind. Honestly, I had that fear of failure which prevented me from submitting my partially filled application form. I kept contemplating if the application was ‘good enough’, if my answers were ‘perfect’ and had the right balance of du-uh and oh-ah!
In moments of doubt, it is important to surround yourself with people who believe in you more than you believe in yourself. I’ve been fortunate enough to have two amazing people: my sister Anjali, who is an engineer at Intel, and my friend Pranav Jain, who completed his GSoC 16 with Debian.
They believed in me when I sat staring at my application and encouraged me to click that final button.

When I initially applied for Outreachy, I was given a task of building Lumicall; a subsequent task was to examine a BASH script which solves the DNS-01 challenge.
I implemented the DNS-01 challenge in Java and tested my solution against a server.
Within a limited time frame, I figured things out, wrote my solution in Java and then eagerly waited for the results to come out, going through a full cycle of:

lifecycle.JPG
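For background, the DNS-01 challenge asks you to publish a TXT record derived from the ACME key authorization. A minimal sketch of that computation in Python follows; this is an illustration of the specification, not the BASH script from the task or my Java solution:

```python
import base64
import hashlib

def dns01_txt_value(token: str, thumbprint: str) -> str:
    """TXT record value to publish at _acme-challenge.<domain>:
    unpadded base64url(SHA-256(token "." account-key-thumbprint))."""
    key_authorization = f"{token}.{thumbprint}"
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```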

I was elated when I learned I had been selected for Outreachy to work with Debian. I was excited about open source and found the idea of working on the project fun because of the numerous possibilities of contributing towards a voice, video and chat communication software.

My project mentor, Daniel Pocock, played a pivotal role in the time after I had submitted my application. Like a true mentor, he replied to my queries promptly and guided me towards finding the solutions to problems on my own. He exemplified how to feel comfortable with developing on open source. I felt inspired and encouraged to move along in my work.

Beyond him, the MiniDebConf was when I was finally introduced to the Debian community. It was an overwhelming experience and I felt proud to have come so far. It was pretty cool to see JitsiMeet being used for this video call. I was also introduced to two of my mentors, Juliana Louback and Bruno Magalhães. I am very excited to learn from them.

I am glad I applied for Outreachy which helped me identify my strengths and I am totally excited to be working with Debian on the project and learn as much as I can throughout the period.

I am not a blog person, this is my first blog ever! I would love to share my experience with you all in the hopes of inspiring someone else who is afraid of clicking that final button!


17 November, 2016 06:51PM by urvikagola

hackergotchi for Jonathan Dowland

Jonathan Dowland

Docker lecture

This morning I gave a guest lecture to students at the EPSRC Centre for Doctoral Training in Cloud Computing for Big Data. The subject was a gentle introduction to Docker.

This was the first guest lecture I've given for a couple of years, so I was a little rusty, but I had a good time giving it and hopefully it came across OK.

slides.pdf, including speaker notes in boxes at the bottom of each slide; handouts.pdf (3 slides to a page, with space for notes); demo steps.txt (the steps I followed for some of the demos).

I mentioned a couple of things worth linking to here

There was some discussion about alternatives to Docker, things which were briefly mentioned include

17 November, 2016 04:30PM

Uwe Kleine-König

Installing Debian Stretch on a Turris Omnia

Recently I got "my" Turris Omnia and it didn't take long to replace the original firmware with Debian.

If you want to reproduce, here is what you have to do:

Open the case of the Turris Omnia, connect the hacker pack (or an RS232-to-TTL adapter) to access the U-Boot prompt (see Turris Omnia: How to use the "Hacker pack"). Then download the installer and device tree:

# cd /srv/tftp
# wget https://d-i.debian.org/daily-images/armhf/daily/netboot/vmlinuz
# wget https://d-i.debian.org/daily-images/armhf/daily/netboot/initrd.gz
# wget https://www.kleine-koenig.org/tmp/armada-385-turris-omnia.dtb

(The latter is not included yet in Debian, but I'm working on that.)

and after connecting the Turris Omnia's WAN to a dhcp managed network start it in U-Boot:

dhcp
setenv serverip 192.168.1.17
tftpboot 0x01000000 vmlinuz
tftpboot 0x02000000 armada-385-turris-omnia.dtb
tftpboot 0x03000000 initrd.gz
bootz 0x01000000 0x03000000:$filesize 0x02000000

With 192.168.1.17 being the IPv4 address of the machine on which you have the tftp server running.

I suggest using btrfs as the root filesystem because it works well with U-Boot. Before finishing the installation, put the dtb in the rootfs as /boot/dtb.

To then boot into Debian do in U-Boot:

setenv mmcboot=btrload mmc 0 0x01000000 boot/vmlinuz\; btrload mmc 0 0x02000000 boot/dtb\; btrload mmc 0 0x03000000 boot/initrd.img\; bootz 0x01000000 0x03000000:$filesize 0x02000000
setenv bootargs console=ttyS0,115200 rootfstype=btrfs rootdelay=2 root=/dev/mmcblk0p1 rootflags=commit=5 rw
saveenv
boot

Known issues:

  • rtc doesn't work (workaround: mw 0xf10184a0 0xfd4d4cfa in U-Boot)
  • SFP and switch don't work, MAC addresses are random
  • wifi fails to probe

If you have problems, don't hesitate to contact me.

Also check the Debian Wiki for further details.

17 November, 2016 09:36AM

hackergotchi for Martín Ferrari

Martín Ferrari

Replacing proprietary Play Services with MicroG

For over a year now, I have been using CyanogenMod in my Nexus phone. At first I just installed some bundle that brought all the proprietary Google applications and libraries, but later I decided that I wanted more control over it, so I did a clean install with no proprietary stuff.

This was not so great at the beginning, because the base system lacks the geolocation helpers that allow you to get a position in seconds (using GSM antennas and Wifi APs). And so, every time I opened OsmAnd (a great maps application, free software and free maps), I would have to wait minutes for the GPS radio to locate enough satellites.

Shortly after, I found out about the UnifiedNLP project, which provided a drop-in replacement for the location services, using pluggable location providers. This worked great, and you could choose to use the Apple or Mozilla on-line providers, or off-line databases that you could download or build yourself.

This worked well for most of my needs, and I was happy about it. I also had F-droid for installing free software applications, DAVdroid for contacts and calendar synchronisation, K9 for reading email, etc. I still needed some proprietary apps, but most of the stuff in my phone was Free Software.

But sadly, more and more application developers are buying into the vendor lock-in that is Google Play Services, which is a set of APIs that offer some useful functionality (including the location services), but that require non-free software that is not part of the AOSP (Android Open-Source Project). Mostly, this is because they make push notifications a requirement, or because they want you to be able to buy stuff from the application.

This is not limited to proprietary software. Most notably, the Signal project refuses to work without these libraries, or even to distribute the pre-compiled binaries on any platform that is not Google Play! (This is one of many reasons why I don't recommend Signal to anybody).

And of course, many very useful services that people use every day require you to install proprietary applications, which care much less about your choices of non-standard platforms.

For the most part, I had been able to just get the package files for these applications¹ from somewhere, and have the functionality I wanted.

Some apps would just work perfectly, others would complain about the lack of Play Services, but offer a degraded experience. You would not get notifications unless you re-opened the application, stuff like that. But they worked. Lately, some of the closed-source apps I sometimes use stopped working altogether.

So, tired of all this, I decided to give the MicroG project a try.

MicroG

MicroG is a direct descendant of UnifiedNLP and the NOGAPPS project. I had known about it for a while, but the installation procedures always put me off.

LWN published an article about them recently, and so I decided to spend a good chunk of time making it work. This blog post is to help make this a bit less painful.

Some prerequisites:

  • No Gapps installed.
  • Rooted phone (at least for the mechanism I describe here).
  • Working ADB with root access to the phone.
  • UnifiedNLP needs to be uninstalled (MicroG carries its own version of it).

Signature spoofing

The main problem with the installation is that MicroG needs to pretend to be the original Google bundle. It has to show the same name, but most importantly, it has to spoof its cryptographic signatures so other applications do not realise it is not the real thing.

OmniROM and MarshROM (other alternative firmwares for Android) provide support for signature spoofing out of the box. If you are running these, go ahead and install MicroG, it will be very easy! Sadly, the CyanogenMod developers refused to allow signature spoofing, citing security concerns², so most users will have to go the hard way.

There are basically two options for enabling this feature: either patch some core libraries H4xx0r style, or use the "Xposed framework". Since I still don't really understand what this Xposed thing is, and it is one of these projects that distributes files on XDA forums, I decided to go the dirty way.

Patching the ROM

Note: this method requires a rooted phone, java, and adb.

The MicroG wiki links to three different projects for patching your ROM, but turns out that two of them would not work at all with CyanogenMod 11 and 13 (the two versions I tried), because the system is "odexed" (whatever that means, the Android ecosystem is really annoying).

I actually upgraded CM to version 13 just to try this, so a couple of hours went there, with no results.

The project that did work is Haystack by Lanchon, which seems to be the cleanest and best developed of the three. Also the one with the most complex documentation.

In a nutshell, you need to download a bunch of files from the phone, apply a series of patches to them, and then upload them back.

To obtain the files and place them in the maguro (the codename for my phone) directory:

$ ./pull-fileset maguro

Now you need to apply some patches with the patch-fileset script. The patches are located in the patches directory:

$ ls patches
sigspoof-core
sigspoof-hook-1.5-2.3
sigspoof-hook-4.0
sigspoof-hook-4.1-6.0
sigspoof-hook-7.0
sigspoof-ui-global-4.1-6.0
sigspoof-ui-global-7.0

The patch-fileset script takes these parameters:

patch-fileset <patch-dir> <api-level> <input-dir> [ <output-dir> ]

Note that this requires knowing your Android version and the API level. If you don't specify an output directory, it will append the patch name to the input directory name. You should also check the output of the script for any warnings or errors!

First, you need to apply the patch that hooks into your specific version of Android (6.0, API level 23 in my case):

$ ./patch-fileset patches/sigspoof-hook-4.1-6.0 23 maguro
>>> target directory: maguro__sigspoof-hook-4.1-6.0
[...]
*** patch-fileset: success

Now, you need to add the core spoofing module:

$ ./patch-fileset patches/sigspoof-core 23 maguro__sigspoof-hook-4.1-6.0
>>> target directory: maguro__sigspoof-hook-4.1-6.0__sigspoof-core
[...]
*** patch-fileset: success

And finally, add the UI elements that let you enable or disable the signature spoofing:

$ ./patch-fileset patches/sigspoof-ui-global-4.1-6.0 23 maguro__sigspoof-hook-4.1-6.0__sigspoof-core
>>> target directory: maguro__sigspoof-hook-4.1-6.0__sigspoof-core__sigspoof-ui-global-4.1-6.0
[...]
*** patch-fileset: success

Now, you have a bundle ready to upload to your phone, and you do that with the push-fileset script:

$ ./push-fileset maguro__sigspoof-hook-4.1-6.0__sigspoof-core__sigspoof-ui-global-4.1-6.0

After this, reboot your phone, go to settings / developer settings, and at the end you should find a checkbox for "Allow signature spoofing", which you should now enable.

Installing MicroG

Now that the difficult part is done, the rest of the installation is pretty easy. You can add the MicroG repository to F-Droid and install the rest of the modules from there. Check the installation guide for all the details.

Once all the parts are in place, and after a last reboot, you should find a MicroG settings icon that will check that everything is working correctly, and give you the choice of which components to enable.

So far, other applications believe this phone has nothing weird, I get quick GPS fixes, push notifications seem to work... Not bad at all for such a young project!

Hope this is useful. I would love to hear your feedback in the comments!


  1. Which is a pretty horrible thing, having to download from fishy sites because Google refuses to let you use the marketplace without their proprietary application. I used to use a chrome extension to trick Google Play into believing your phone was downloading it, and so you could get the APK file, but now that I have no devices running stock Android, that does not work any more. ↩

  2. Actually, I am still a bit concerned about this, because it is not completely clear to me how protected this is against malicious actors (I would love to get feedback on this!). ↩

2 comments

17 November, 2016 05:59AM

November 16, 2016

Carl Chenet

Retweet 0.10: Automatically retweet now using regex

Retweet 0.10, a self-hosted Python app to automatically retweet and like tweets from another user-defined Twitter account, was released on November 17th.

With this release Retweet is now able to retweet only if a tweet matches a user-provided regular expression (regex) pattern. This feature was contributed entirely by Vanekjar; lots of thanks to him!
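The idea in miniature looks like the Python sketch below; the function and parameter names are mine, not Retweet's actual code:

```python
import re

def should_retweet(tweet_text, pattern=None):
    """Retweet everything when no pattern is configured; otherwise
    only retweet tweets matching the user-provided regex."""
    if pattern is None:
        return True
    return re.search(pattern, tweet_text) is not None
```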

Retweet 0.10 is already in production for Le Journal du hacker, a French-speaking Hacker News-like website, LinuxJobs.fr, a French-speaking job board and this very blog.

fiesta

What’s the purpose of Retweet?

Let’s face it, it’s more and more difficult to communicate about our projects. Even writing an awesome app is not enough any more. If you don’t appear on a regular basis on social networks, everybody thinks you have quit or that the project has stalled.

But what if you already have built an audience on Twitter for, let’s say, your personal account. Now you want to automatically retweet and like all tweets from the account of your new project, to push it forward.

Sure, you can do it manually, like in the good old 90’s… or you can use Retweet!

Twitter Out Of The Browser

Have a look at my Github account for my other Twitter automation tools:

  • Feed2tweet, a RSS-to-Twitter command-line tool
  • db2twitter, get data from SQL databases (several engines supported), build tweets and send them to Twitter
  • Twitterwatch, monitor the activity of your Twitter timeline and warn you if no new tweet appears

What about you? Do you use tools to automate the management of your Twitter account? Feel free to give me feedback in the comments below.

16 November, 2016 11:00PM by Carl Chenet

hackergotchi for Shirish Agarwal

Shirish Agarwal

The long tail in a common’s man journey to debconf16 – 2

This is an extension of part 1, which I shared a few days ago. This will be a longish one, so please bear with me.

First of all, somebody emailed me this link, so in the future a layover at Doha Airport will be a bit more expensive than before, with approx. INR 700/- added to the ticket cost 😦

Moving on, let me share an experience from one of the last few days I spent in Cape Town –

Singer singing some great oldies from the 60’s, 70’s till the 90’s.

I had booked a place near Long Street, Cape Town, with Bernelle’s help. What I had not known at the time was that near Long Street there are free walking tours every couple of hours. I took part in all the tours and those were nice experiences. Where they start the walk, there was the gentleman pictured above. I was amazed by this gentleman’s rich voice. He strummed a lot of classics from the 60’s, 70’s till the 90’s. I had two coffees and thought I was at a premium rock concert. It was a bitter-sweet experience for me because I could see that he has such prodigious talent and still has to struggle to make ends meet. I did my bit but wish I could have done something more.

Side note – Before I forget, there is one feh trick I use to view images without rendering them at full resolution (especially on my low-end systems) –

┌─[shirish@debian] - [/run/user/1000/gvfs/mtp:host=%5Busb%3A001%2C006%5D/Card/DCIM/Camera] - [4621]
└─[$] feh -g 1350x1000 .

This actually makes it far, far easier to traverse the 1000-odd photos of the trip that I have in my personal archive without any sort of conversion. Btw, it took me time but I was finally able to create an album at gallery.debconf.org. I haven’t been able to upload photos as I came across an error, which I have shared at https://lists.debconf.org/lurker/message/20161113.215659.fce58823.en.html

Moving on, here’s the funny story/experience I wanted to share –

could have been arrested ;)

What happened was this. This is from the Doha Airport. I had seen big buggies (ones similar to golf carts) which were ferrying people from one end of the concourse to the other. I had been walking the whole day and even with the horizontal escalators and everything, it takes a toll. I was half-tired, half-sleepy and saw a buggy stationed. From behind it looked like the buggies I had seen. As there was no other place to park my behind, I entered the buggy and sat there. Around 15-20 minutes later a Doha cop in another buggy came up to me and asked me if something had happened.

I had no clue what he was talking about. He asked me, in a friendly tone, whether I had committed a crime or wanted to report a crime. When I replied in the negative to both, he asked why I was sitting there then. I replied that it was for stretching my legs and that this was the buggy being used to transport people from A to B. He gently told me I had entered the wrong one; it was actually a cop buggy. I couldn’t believe it. He went on his own way as he saw I was dead-tired. After 10-15 minutes, half-believingly, I came out of the buggy and, to my shock, the gentleman was right. There was nothing to do but soldier on to find a spot in this big airport. I shared this with a few friends and family and managed to elicit a few laughs, hence sharing it here.

The somewhat sad one was that I had met a couple with a baby. Now as shared before, most airports, including the Doha Airport, are air-conditioned/climate-controlled, probably in the mid-20s (Celsius), so it was more than cold for me. The couple with the baby were from the Asian sub-continent. From their clothes and the way they were, they were not very well off. I do remember them sharing that they had had a death in the family and hence were travelling. I didn’t know at that point in time that there was something called bereavement fares, or whether they had been able to take advantage of such tickets. But this is beside the point. The issue was that their baby had been running a high fever and the A/C was making matters worse. I had seen a pharmacy but no clinic in the airport. It was much later that I came across http://dohahamadairport.com/airport-guide/facilities-services/medical-emergencies but as can be seen on the web-page, it doesn’t tell whether the services are chargeable or not. I assume they would be paid, although in some of the ‘developed/industrialized’ countries such care is rumoured to be free for simple ailments such as the one the baby was going through. I have no idea if that’s true or not. I also don’t know how it interacts with travel insurance, as most travel insurance is also supposed to help you in situations like these. I was concerned as it was a baby, and babies, as we all know, are very, very fragile. If anybody has an idea or has had a similar experience, I would like to know, specifically in the international-airport environment, as it has ‘transit’ issues, unlike domestic airports where I think things would be a bit easier.

Now coming to my own inadequacies/lack of foresight, which I had mentioned I would share: I had asked for, and got to lead, a Debian installation workshop on the Open Debian Day. I had done a few earlier and had installed Debian a few times on my own system and for friends, relatives and some clients. The only bad experiences I had were to do with UEFI, but even those had got resolved quite a bit in the jessie releases, so I was pretty confident. The day before the Installfest was to happen, ‘Mensah Nyarko Yaa Dufie’ (one full name) of Ghana approached me to install Debian on her system. I had some older version of the Debian DVD, either 8.1 or 8.3, and knew that 8.5 had been released just a few days back. I had seen pretty fast internet (as far as downloading a Debian DVD is concerned), hence I asked her to wait a bit while I downloaded the newest image. I sha256summed it to make sure that the image was bit-for-bit perfect.
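For the record, that verification step can be scripted against the SHA256SUMS file that is published next to the images; here is a minimal sketch, using a throwaway file in place of the multi-gigabyte DVD image so it is self-contained:

```shell
# Stand-in for the downloaded DVD image (the real file is several GB):
printf 'pretend-iso-contents' > image.iso

# Upstream publishes a SHA256SUMS file next to the images; generate a
# local one here so the example is self-contained.
sha256sum image.iso > SHA256SUMS

# The actual verification step: recompute the checksum and compare it
# against the published one; prints "image.iso: OK" on success.
sha256sum -c SHA256SUMS
```

With the real image you would download SHA256SUMS from the same cdimage.debian.org directory instead of generating it yourself.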

Now I hadn’t brought a pen drive/disk from India, as I was under the impression that at such conferences pen drives should not be an issue. I had asked Bernelle privately via e-mail beforehand as well, and she had assured me that some pen drives would be available. She gave me a handful of HP pen drives. The pen drives, as we came to know during use, were somewhat flaky. They would pop out/lose connection with even the slightest nudge to the lappy.

Somehow I was able to transfer the image to the usb disk. As people say, hindsight is 20/20; maybe it was not such a smart move on my part to download the big DVD image, and maybe I should have got the netinstall iso. Be careful, the link I have just shared is of the old version; if you have a good web link and want to try the newest stable netinstall, head to cdimage.debian.org. Apart from that goof-up, I dunno (still) of any way to know whether a copy from an .iso image to usb was successful and done correctly –

I did the following command –

sudo dd if=/path_to/debian-dvd.iso of=/dev/usb-mount-point

which is usually /dev/sdb on all of my systems. Her system was a brand new HP (don’t remember the model details) which she had bought just a few weeks/months before debconf. We tried a few times but it failed at the boot-loader installation stage. I asked Ritesh Raj Saraff (a friend and DD) and while he had some ideas, none of them worked. Ritesh later pointed me to Steve McIntyre and shared that he is part of the Debian-Installer team. At that point in time, I had no clue who Steve McIntyre was, otherwise I probably would not have approached him. He quickly acquiesced to my request and shared that he would be there for the workshop. With the load off my mind a little bit, I apologized to Mensah and asked her to be at the workshop the following day. I had no clue what was wrong at this point, whether it was the iso image on the usb disk or a UEFI issue. This wasn’t good for my confidence either, but as somebody from the Debian-Installer team was there, I was somewhat relaxed.
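As an aside, one way I know of to check whether the raw copy made it onto the disk intact (a sketch; ordinary files stand in for debian.iso and /dev/sdb here) is to read back only the first image-sized chunk of the device and compare it, since the device is larger than the image:

```shell
# Stand-ins for the real ISO and the USB device node:
printf 'pretend-iso-contents' > debian.iso
cp debian.iso usb.img   # in real life this would be /dev/sdb after dd/cat

# Compare only the first $size bytes of the device with the image;
# cmp exits non-zero on any mismatch.
size=$(stat -c%s debian.iso)
if cmp -n "$size" debian.iso usb.img; then
    echo "copy verified"
fi
```

On a real device you would run the cmp against /dev/sdb with sudo; don't compare the whole device, as the bytes past the image are whatever was on the stick before.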

Next day, some more people came for the Installfest. While I had made 2-3 copies, clearly it was not enough as more people came. I was in a frenzy and asked Deven Bansod, Keerthana Krishnan and Prabaharan Jaminy (all the GSoC and Outreachy attendees) to volunteer to help out in making more iso images on usb disks. I introduced Mensah to Steve McIntyre and we tried 2-3 times to get Debian installed on the system, but it didn’t move past the same place. Ritesh shared that dd had a memory leak and hence cat was a better way to do it. So we did –

$ cat debian.iso > /dev/sdb
and soon other machines had Debian sporting on their desktops.

But Mensah’s lappy wouldn’t move past the boot-loader stage. Suddenly Steve had the bright idea (light-bulb moment) that maybe the .iso was corrupted, the usb disk was bad, or something was incomplete. We started on another usb disk.

Now this is where I have a query – While I don’t want to compare, in Ubuntu there was an image self-checking mechanism where, probably behind the scenes, the checksums published in a file are compared with checksums generated from the files on the .iso image. While it does take extra time, the end result is that you know whether there is some issue with the decompressed image on the usb disk. AFAIK we don’t have anything similar. The only two things I know of are the wiki page and of course the various checksums of the image as shared at http://cdimage.debian.org/debian-cd/8.6.0/amd64/iso-cd/ or http://cdimage.debian.org/debian-cd/8.6.0/amd64/iso-dvd/

If anybody knows of any movement or a bug in the BTS which I can follow for the above issue please let me know.

This time Steve was able to install it without any issues. I asked him whether he had to make some specific FAT/exFAT/NTFS partitions, as some new UEFI-based lappies need one or more, but he replied in the negative. While Mensah did get her Debian install, the GUI didn’t come up, although the command prompt was available. Then Steve added backports to the sources.list, got the new kernel and new Intel/Nvidia drivers (I think it was one of those hybrid models, IIRC) and she was able to boot into GNOME on Debian.

I didn’t see any bug reports about checksumming the state of the applications before installation, but I did see a couple of reports about badblocks support and memory checking, and from the action on both bug reports this is also the need of the hour (although the earlier one has been marked as wontfix :( ).

In this whole thing, I liked/appreciated the way Steve handled things. I intuitively understood/knew that he wasn’t just part of the Debian-Installer team but somebody more. I can’t explain it, but it was there. A little investigation in the evening and it turned out that he had been Debian Project Leader for two consecutive years (2008 and 2009). In hindsight it probably was a good thing I didn’t know that before, otherwise I probably wouldn’t have interacted with him, and it would have been my loss. To have been the DPL and still be so humble while being so technically proficient; I was amazed and didn’t know what to make of it.

Here, i.e. in India, if somebody wins even the mohalla elections (neighbourhood elections), the person carries a big chip on her/his shoulder, not just while on the seat but even beyond; and here was an example of a former DPL asking a developer, in a video, whether he could spare time in the next couple of days.

Lastly, last week I was able to report 2 bugs upstream. The first one is for youtube-dl. It’s somewhat complicated, hence I will not go there atm. The second and more surprising one was for ‘nano’, our esteemed text editor. Hopefully the bug will be fixed once a new version comes out.


Filed under: Miscellenous Tagged: #buggy, #cop, #Debconf16, #doha airport, #Installfest, #nano, #singer, #youtube-dl, travel

16 November, 2016 08:10PM by shirishag75

hackergotchi for Joey Hess

Joey Hess

Linux.Conf.Au 2017 presentation on Propellor

On January 18th, I'll be presenting "Type driven configuration management with Propellor" at Linux.Conf.Au in Hobart, Tasmania. Abstract

Linux.Conf.Au is a wonderful conference, and I'm thrilled to be able to attend it again.

16 November, 2016 03:13PM

Bits from Debian

Debian Contributors Survey 2016

The Debian Contributor Survey launched last week!

In order to better understand and document who contributes to Debian, we (Mathieu O'Neil, Molly de Blanc, and Stefano Zacchiroli) have created this survey to capture the current state of participation in the Debian Project through the lens of common demographics. We hope a general survey will become an annual effort, and that each year there will also be a focus on a specific aspect of the project or community. The 2016 edition contains sections concerning work, employment, and labour issues in order to learn about who is getting paid to work on and with Debian, and how those relationships affect contributions.

We want to hear from as many Debian contributors as possible—whether you've submitted a bug report, attended a DebConf, reviewed translations, maintain packages, participated in Debian teams, or are a Debian Developer. Completing the survey should take 10-30 minutes, depending on your current involvement with the project and employment status.

In an effort to reflect our own ideals as well as those of the Debian project, we are using LimeSurvey, an entirely free software survey tool, in an instance of it hosted by the LimeSurvey developers.

Survey responses are anonymous, IP and HTTP information are not logged, and all questions are optional. As it is still likely possible to determine who a respondent is based on their answers, results will only be distributed in aggregate form, in a way that does not allow deanonymization. The results of the survey will be analyzed as part of ongoing research work by the organizers. A report discussing the results will be published under a DFSG-free license and distributed to the Debian community as soon as it's ready. The raw, disaggregated answers will not be distributed and will be kept under the responsibility of the organizers.

We hope you will fill out the Debian Contributor Survey. The deadline for participation is: 4 December 2016, at 23:59 UTC.

If you have any questions, don't hesitate to contact us via email at:

16 November, 2016 02:45PM by Molly de Blanc

Russ Allbery

Review: The Philosopher Kings

Review: The Philosopher Kings, by Jo Walton

Series: Thessaly #2
Publisher: Tor
Copyright: June 2015
ISBN: 0-7653-3267-1
Format: Hardcover
Pages: 345

Despite the cliffhanger at the end of The Just City, The Philosopher Kings doesn't pick up immediately afterwards. Argh. It's a great book (as I'm about to describe), but I really wanted to also read that book that happened in between. Still, this is the conclusion to the problem posed in The Just City, and I wouldn't recommend reading this book on its own (or, really, either book separate from the other).

Despite the unwanted gap, and another change at the very start of the book that I won't state explicitly since it's a spoiler but that made me quite unhappy (despite driving the rest of the plot), this is much closer to the book that I wanted to read. Walton moves away from the SF philosophical question that drove much of the second half of The Just City in favor of going back to arguments about human organization, the nature of justice, choices between different modes of life, and the nature of human relationships. Those were the best parts of The Just City, and they're elaborated here in some fascinating ways that wouldn't have been possible in the hothouse environment of the previous book.

I also thought Apollo was more interesting here than in the previous book. Still somewhat infuriating, particularly near the start, but I felt like I got more of what Walton was trying for, and more of what Apollo was attempting to use this existence to understand. And, once the plot hits its stride towards the center of the book, I started really liking Apollo. I guess it took a book and a half for him to mature enough to be interesting.

A new viewpoint character, Arete, gets most of the chapters in this book, rather than following the pattern of The Just City and changing viewpoint characters every chapter. Her identity is a spoiler for The Just City, so I'll leave that a mystery. She's a bit more matter-of-fact and observational than Maia, but she does that thing that I love in Walton's characters: take an unexpected, fantastic situation, analyze and experiment with it, and draw some practical and matter-of-fact conclusions about how to proceed.

I think that's the best way to describe this entire series: take a bunch of honest, thoughtful, and mostly good people, put them into a fantastic situation (at first Plato's Republic, a thought experiment made real, and then some additional fantasy complexities), and watch them tackle that situation like reasonable human beings. There is some drama, of course, because humans will disagree and will occasionally do awful, or just hurtful, things to each other. But the characters try to defuse the drama, try to be thoughtful and fair and just in their approach, and encourage change, improvement, and forgiveness in others. I don't like everyone in these books, but the vast majority of them are good people (and the few who aren't stand out), and there's something satisfying in reading about them. And the philosophical debate is wonderful throughout this book (which I'm not saying entirely because the characters have a similar reaction to a newly-introduced philosophical system as I did as a reader, although that certainly helps).

I'm not saying much about the plot since so much would spoil the previous book. But Walton adds some well-done complexities and complications, and while I was dubious about them at the start of the book, I definitely came around. I enjoyed watching the characters reinvent some typical human problems, but still come at them from a unique and thoughtful angle and come up with some novel solutions. And the ending took me entirely by surprise, in a very good way. It's better than the best ending I could have imagined for the book, providing some much-needed closure and quite a bit of explanation. (And, thankfully, does not end on another cliffhanger; in fact, I'm quite curious to see what the third book is going to tackle.)

Recommended, including the previous book, despite the bits that irritated me.

Followed by Necessity.

Rating: 9 out of 10

16 November, 2016 04:41AM

November 15, 2016

Antoine Beaupré

The Turris Omnia router: help for the IoT mess?

The Turris Omnia router is not the first FLOSS router out there, but it could well be one of the first open hardware routers to be available. As the crowdfunding campaign is coming to a close, it is worth reflecting on the place of the project in the ecosystem. Beyond that, I got my hardware recently, so I was able to give it a try.

A short introduction to the Omnia project

The Turris Omnia Router

The Omnia router is a followup project on CZ.NIC's original research project, the Turris. The goal of the project was to identify hostile traffic on end-user networks and develop global responses to those attacks across every monitored device. The Omnia is an extension of the original project: more features were added and data collection is now opt-in. Whereas the original Turris was simply a home router, the new Omnia router includes:

  • 1.6GHz ARM CPU
  • 1-2GB RAM
  • 8GB flash storage
  • 6 Gbit Ethernet ports
  • SFP fiber port
  • 2 Mini-PCI express ports
  • mSATA port
  • 3 MIMO 802.11ac and 2 MIMO 802.11bgn radios and antennas
  • SIM card support for backup connectivity

Some models sold had a larger case to accommodate extra hard drives, turning the Omnia router into a NAS device that could actually serve as a multi-purpose home server. Indeed, it is one of the objectives of the project to make "more than just a router". The NAS model is not currently on sale anymore, but there are plans to bring it back along with LTE modem options and new accessories "to expand Omnia towards home automation".

Omnia runs a fork of the OpenWRT distribution called TurrisOS that has been customized to support automated live updates, a simpler web interface, and other extra features. The fork also has patches to the Linux kernel, which is based on Linux 4.4.13 (according to uname -a). It is unclear why those patches are necessary since the ARMv7 Armada 385 CPU has been supported in Linux since at least 4.2-rc1, but it is common for OpenWRT ports to ship patches to the kernel, either to backport missing functionality or perform some optimization.

There has been some pressure from backers to petition Turris to "speedup the process of upstreaming Omnia support to OpenWrt". It could be that the team is too busy with delivering the devices already ordered to complete that process at this point. The software is available on the CZ-NIC GitHub repository and the actual Linux patches can be found here and here. CZ.NIC also operates a private GitLab instance where more software is available. There is technically no reason why you wouldn't be able to run your own distribution on the Omnia router: OpenWRT development snapshots should be able to run on the Omnia hardware and some people have installed Debian on Omnia. It may require some customization (e.g. the kernel) to make sure the Omnia hardware is correctly supported. Most people seem to prefer to run TurrisOS because of the extra features.

The hardware itself is also free and open for the most part. There is a binary blob needed for the 5GHz wireless card, which seems to be the only proprietary component on the board. The schematics of the device are available through the Omnia wiki, but oddly not in the GitHub repository like the rest of the software.

Hands on

I received my own router last week, which is about six months late from the original April 2016 delivery date; it allowed me to do some hands-on testing of the device. The first thing I noticed was a known problem with the antenna connectors: I had to open up the case to screw the fittings tight, otherwise the antennas wouldn't screw in correctly.

Once that was done, I simply had to go through the usual process of setting up the router, which consisted of connecting the Omnia to my laptop with an Ethernet cable, connecting the Omnia to an uplink (I hooked it into my existing network), and go through a web wizard. I was pleasantly surprised with the interface: it was smooth and easy to use, but at the same time imposed good security practices on the user.

Install wizard performing automatic updates

For example, the wizard, once connected to the network, goes through a full system upgrade and will, by default, automatically upgrade itself (including reboots) when new updates become available. Users have to opt in to the automatic updates, and can choose to automate only the downloading and installation of the updates without having the device reboot on its own. Reboots are also performed during user-specified time frames (by default, Omnia applies kernel updates during the night). I also liked the "skip" button that allowed me to completely bypass the wizard and configure the device myself, through the regular OpenWRT systems (like LuCI or SSH) if I needed to.

The Omnia router about to rollback to latest snapshot

Notwithstanding the antenna connectors themselves, the hardware is nice. I ordered the black metal case, and I must admit I love the many LED lights in the front. It is especially useful to have color changes in the reset procedure: no more guessing what state the device is in or if I pressed the reset button long enough. The LEDs can also be dimmed to reduce the glare that our electronic devices produce.

All this comes at a price, however: at $250 USD, it is a much higher price tag than common home routers, which typically go for around $50. Furthermore, it may be difficult to actually get the device, because no orders are being accepted on the Indiegogo site after October 31. The Turris team doesn't actually want to deal with retail sales and has now delegated retail sales to other stores, which are currently limited to European deliveries.

A nice device to help fight off the IoT apocalypse

It seems there isn't a week that goes by these days without a record-breaking distributed denial-of-service (DDoS) attack. Those attacks are more and more caused by home routers, webcams, and "Internet of Things" (IoT) devices. In that context, the Omnia sets a high bar for how devices should be built but also how they should be operated. Omnia routers are automatically upgraded on a nightly basis and, by default, do not provide telnet or SSH ports to run arbitrary code. There is the password-less wizard that starts up on install, but it forces the user to choose a password in order to complete the configuration.

Both the hardware and software of the Omnia are free and open. The automatic update's EULA explicitly states that the software provided by CZ.NIC "will be released under a free software licence" (and it has been, as mentioned earlier). This makes the machine much easier to audit by someone looking for possible flaws, say for example a customs official looking to approve the import in the eventual case where IoT devices end up being regulated. But it also makes the device itself more secure. One of the problems with these kinds of devices is "bit rot": they have known vulnerabilities that are not fixed in a timely manner, if at all. While it would be trivial for an attacker to disable the Omnia's auto-update mechanisms, the point is not to counterattack, but to prevent attacks on known vulnerabilities.

The CZ.NIC folks take it a step further and encourage users to actively participate in a monitoring effort to document such attacks. For example, the Omnia can run a honeypot to lure attackers into divulging their presence. The Omnia also runs an elaborate data collection program, where routers report malicious activity to a central server that collects information about traffic flows, blocked packets, bandwidth usage, and activity from a predefined list of malicious addresses. The exact data collected is specified in another EULA that is currently only available to users logged in at the Turris web site. That data can then be turned into tweaked firewall rules to protect the overall network, which the Turris project calls a distributed adaptive firewall. Users need to explicitly opt-in to the monitoring system by registering on a portal using their email address.

Turris devices also feature the Majordomo software (not to be confused with the venerable mailing list software) that can also monitor devices in your home and identify hostile traffic, potentially leading users to take responsibility over the actions of their own devices. This, in turn, could lead users to trickle complaints back up to the manufacturers that could change their behavior. It turns out that some companies do care about their reputations and will issue recalls if their devices have significant enough issues.

It remains to be seen how effective the latter approach will be, however. In the meantime, the Omnia seems to be an excellent all-around server and router for even the most demanding home or small-office environments that is a great example for future competitors.

Note: this article first appeared in the Linux Weekly News.

15 November, 2016 03:28PM

Enrico Zini

Software quality in 2016

Ansible's default output, including the stderr of failed commands, is JSON encoded, which makes reading Jenkins' output hard.

Ansible, however, has Callback plugins that could be used. On that page it says:

Ansible comes with a number of callback plugins that you can look at for examples. These can be found in lib/ansible/plugins/callback.

That is a link to a git repo with just a pile of Python sources and no, say, README.md index of what they do. Hopefully they have some docstring with a short description of what they do? No.

Actually, some do, but just because someone copypasted the default one and didn't even bother removing its docstring.
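Absent a README, one could at least generate a rough index mechanically by parsing the modules without importing them; a sketch (the local callback/ directory below is a stand-in for a checkout of lib/ansible/plugins/callback):

```python
import ast
import pathlib

# Stand-in for a checkout of lib/ansible/plugins/callback: a tiny
# plugin directory with one documented and one undocumented module.
plugin_dir = pathlib.Path("callback")
plugin_dir.mkdir(exist_ok=True)
(plugin_dir / "minimal.py").write_text('"""Print tasks in a minimal one-line format."""\n')
(plugin_dir / "mystery.py").write_text("x = 1\n")

# Parse each module without importing it (so no Ansible install is
# needed) and print the first docstring line as a makeshift index.
for path in sorted(plugin_dir.glob("*.py")):
    doc = ast.get_docstring(ast.parse(path.read_text()))
    summary = doc.strip().splitlines()[0] if doc else "NO DOCSTRING"
    print(f"{path.name}: {summary}")
```

Run against the real plugin directory, this at least surfaces which modules have a docstring at all, and which ones still carry a copypasted one.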

frustration

15 November, 2016 12:01PM

Héctor Orón Martínez

Open Build Service in Debian needs YOU! ☞

“Open Build Service is a generic system to build and distribute packages from sources in an automatic, consistent and reproducible way.”

 

The openSUSE distributions’ build system is based on a generic framework named Open Build Service (OBS). I have been using these tools in my work environment and I have to say, as a Debian developer, that it is a great tool. In this blog post I want you to learn the very basics of the tool, and I provide a tutorial to get, at least, a Debian package building.

 

Fig 1 – Open Build Service Architecture

The figure above shows the Open Build Service, from now on OBS, software architecture. There are several parts which we should differentiate:

  • Web UI / API (obs-api)
  • Backend (obs-server)
  • Build daemon / worker (obs-worker)
  • CLI tool to manage API (osc)

Each of the above packages can be installed on separate machines as a distributed architecture; it is very easy to split the system across several machines running the services. However, in the tutorial below everything is installed on one machine.


BACKEND

The backend is composed of several scripts written either in shell or Perl. There are several services running in the backend:

  • Source service
  • Repository service
  • Scheduler service
  • Dispatcher service
  • Warden service
  • Publisher service
  • Signer service
  • DoD service

The backend manages source packages (any format, such as RPM, DEB, …) and schedules them for a build on the worker. Once the package is built, it can be published in a repository for the wider audience or kept unpublished and used by other builds.


WORKER

The system can have several worker machines, which are in charge of performing the package builds. There are different options that can be configured (see /etc/default/obsworker), such as the enabling switch, the number of worker instances, and the number of jobs per instance. This part of the system is written in shell and/or Perl.


WEB UI / API

The frontend provides a clickable way to reach most options OBS offers: set up projects, upload/branch/delete packages, submit review requests, etc. As an example, you can see a live instance running at https://build.opensuse.org/

The frontend is really a Ruby on Rails web application. We (mainly Andrew Lee, with help from the Ruby team) have tried to get it running nicely; however, we have had lots of issues due to JavaScript or Rubygems malfunctioning. The current web UI is visible and provides some package status, but most actions do not work properly: configurations cannot be applied because the editor does not save changes, and projects, or packages in a project, are not listed either. If you are a Ruby on Rails expert, or are able to help us out with some of the web UI issues we face in Debian, that would be really appreciated.


OSC CLI

OSC is a management command-line tool, written in Python, that interfaces with the OBS API to perform actions, edit configurations, do package reviews, etc.
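To give a flavour of it, a typical session looks something like this (a sketch from memory, not runnable standalone: it assumes an account configured against an OBS instance, and the project and package names are made up):

```
$ osc checkout home:myuser mypackage   # fetch a working copy of the package
$ cd home:myuser/mypackage
$ osc build Debian_8.0 x86_64          # local test build against a repository
$ osc commit -m "Fix build"            # push changes back, triggering a rebuild
```

The commands deliberately mirror the svn-style checkout/commit workflow, which makes osc easy to pick up.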


 

Now that we have done a general overview of the system, let me introduce you to OBS with a practical tutorial.

TUTORIAL: Build a Debian package against Debian 8.0 using Download On Demand (DoD) service.

15 November, 2016 11:05AM by zumbi

hackergotchi for Keith Packard

Keith Packard

AltOS-Lisp

A Tiny Lisp for AltOS

I took a bit of a diversion over the last week or so when I wondered how small a lisp interpreter I could write, and whether I could fit that into one of the processors that AltOS runs on. It turns out, you can write a tiny lisp interpreter that fits in about 25kB of ram with a 3kB heap for dynamic data.

I decided to target our ChaosKey boards; they're tiny, and I've got a lot of them. That processor offers 28kB of usable flash space (after the 4kB boot loader) and 6kB of ram with the processor running at a steaming 48MHz.

I'm not at all sure this is useful, but I always enjoy doing language implementations, and this one presented some 'interesting' challenges:

  • Limited RAM. I don't have space to do a classic stop/copy collector.

  • Limited stack. A simple lisp implementation uses the C stack for all recursion in execution and memory collection. I don't have enough ram for that.

Iterative Compacting Allocator

I'm betting someone has built one of these before, but I couldn't find one, so I wrote my own.

The basic strategy is to walk the heap to find a subset of the active objects which are allocated sequentially in memory with only unused storage between them. These objects are then compacted in-place, and then the heap is walked again to update all references to the moved objects. Then, the process is restarted to find another subset and move them.

By looking for these subsets starting at the bottom of the heap, and working upwards towards the top, the whole heap can be compacted into a contiguous chunk at the bottom of memory.

Allocation involves moving a pointer along at the top of active memory; when it gets to the top of the heap, collect and see if there's space now.

As always, the hardest part was to make sure all active memory was tied down. The second hardest part was to make sure that all active pointers were updated after any allocation, in case a collect moved the underlying object. That was just bookkeeping, but did consume much of the development time.

One additional trick was to terminate the recursion during heap walking by flagging active cons cell locations in a global bitmap and then walking that separately, iterating until that bitmap is empty. Nested lambdas form another recursion which should probably get the same approach, but I haven't done that yet.

An unexpected "benefit" of the tiny heap is that the collector gets called a lot, so any referencing bugs will have a good chance of being uncovered in even a short program execution.
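
The three-pass shape of one compaction round can be sketched in C. This toy is mine, not AltOS code: it uses invented names, fixed-size cells, and a single whole-heap pass plus a full forwarding table, where the real collector works subset-by-subset and handles many object types:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy sketch of compact-and-fixup: a heap of fixed-size cells, each
 * holding at most one reference (an offset into the same heap, or -1).
 * Live cells slide down to the bottom, then surviving references are
 * rewritten through the forwarding table.
 */

#define HEAP_CELLS 16

static int16_t heap_ref[HEAP_CELLS];  /* payload: a reference or -1 */
static uint8_t live[HEAP_CELLS];      /* mark bits from the walk phase */
static int16_t forward[HEAP_CELLS];   /* old offset -> new offset */

static int compact(int top)
{
    int dest = 0;

    /* Pass 1: assign forwarding addresses by sliding live cells down. */
    for (int i = 0; i < top; i++)
        forward[i] = live[i] ? dest++ : -1;

    /* Pass 2: move the live cells to their new homes. */
    for (int i = 0; i < top; i++)
        if (live[i])
            heap_ref[forward[i]] = heap_ref[i];

    /* Pass 3: rewrite each surviving reference through the table. */
    for (int i = 0; i < dest; i++)
        if (heap_ref[i] >= 0)
            heap_ref[i] = forward[heap_ref[i]];

    return dest;  /* the new allocation pointer */
}
```

On a 3kB heap, a full side table like `forward` would be a noticeable cost, which presumably is part of why the real collector walks the heap again to patch references rather than keeping one.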

ROM-able Lisp

Instead of implementing all of the language in C, I wanted to be able to implement various pieces in Lisp itself. Because of the complex nature of the evaluation process, adding things like 'let' or even 'defun' turns out to be dramatically simpler in Lisp. However, I didn't want to consume bunches of precious RAM to hold these basic functions.

What I did was to create two heaps, one in ROM and the other in RAM. References are tagged to indicate which heap they're in.

16-bit Values

Lisp programs use a pile of references. Using a full 32 bits for each one would mean having a lot less effective storage. So, instead, I use an offset from the base of the heap. The top bit of the offset is used to distinguish between the ROM heap and the RAM heap.

I needed a place to store type information, so I settled on using the bottom two bits of the references. This allows for four direct type values. One of these values is used to indicate an indirect type, where the type is stored in the first byte of the object. The direct types are:

Value   Type
0       Cons cell
1       14-bit int
2       String
3       Other

With 2 tag bits, the allocator needs to work in 32-bit units as the references couldn't point to individual bytes. Finally, I wanted 0 to be nil, so I add four to the offsets within the heaps.

The result is that the ROM and RAM heaps can each cover up to 32k - 4 bytes.

Note that ints are not stored in the heap; instead they are immediate values stored in 14 bits, providing a range of -8192 to 8191. One can imagine wanting more range in ints at some point.
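
The encoding above can be sketched directly (the function names here are mine, not AltOS's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * 16-bit references with two low tag bits; 14-bit immediate ints live
 * in the reference itself rather than in the heap.
 */

#define TAG_CONS   0
#define TAG_INT    1
#define TAG_STRING 2
#define TAG_OTHER  3

static int ref_tag(uint16_t r)
{
    return r & 3;
}

/* Pack a 14-bit int (-8192..8191) into an immediate reference. */
static uint16_t int_to_ref(int16_t v)
{
    return (uint16_t)((uint16_t)v << 2) | TAG_INT;
}

/* Unpack: the arithmetic right shift restores the sign. */
static int16_t ref_to_int(uint16_t r)
{
    return (int16_t)r >> 2;
}
```

This leans on the usual arithmetic right shift of signed values to recover the sign of the 14-bit int.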

Heap-based Evaluator

A simple lisp implementation uses the fact that eval is re-entrant and does the operation on the C stack:

val eval(val exprs) {
    val vals = NIL;

    while (exprs) {
        vals = append(vals, eval(car(exprs)));
        exprs = cdr(exprs);
    }
    return execute(car(vals), cdr(vals));
}

This makes things really simple and provides for a clean framework for implementing various bits of lisp, including control flow and macros. However, it rapidly consumes all of the available memory for a stack, while also requiring separate bookkeeping for the in-use memory in each frame.

I replaced this design with one which keeps the lisp stack on the heap, and then performs eval with a state machine with all state stored in global variables so that the memory manager can reference them directly.

Each eval operation is performed in a separate 'stack' context, which holds the entire eval state except for the current value, which lives in a separate global variable and is used to pass values out of one stack frame and into another. When the last stack context is finished, the evaluation terminates and the value is returned to the caller.

There are nine states in the state machine, each of which is implemented in a separate function, making the state machine a simple matter of pulling the current state from the top of the stack and invoking the associated function:

while (ao_lisp_stack) {
    if (!(*evals[ao_lisp_stack->state])() || ao_lisp_exception) {
        ao_lisp_stack_clear();
        return AO_LISP_NIL;
    }
}
return ao_lisp_v;

Because there's no C recursion involved, catching exceptions is a simple matter of one test at this level.

Primitives like progn, while, cond and eval all take special magic in the state machine to handle; getting all of that working took several attempts before I found the simple loop shown above.

Lexical Scoping

The last time I did a lisp interpreter, I implemented dynamic scoping. Atoms were all global and had values associated directly with them. Evaluating a lambda started by saving all of the existing global values for the parameter atoms and then binding the new values. When finished, the previous values would be restored. This is almost correct, but provides surprising results for things like:

> (setq baz 1)
> (def foo (lambda (bar) (+ baz bar)))
> (def bletch (lambda (baz) (foo baz)))
> (bletch 2)
4

The value that foo gets for 'baz' is 2 instead of 1 under dynamic scoping, which most people find surprising. This time, I was determined to use lexical scoping, and it turned out to be surprisingly easy.

The first trick was to separate the atoms from their 'value'; each atom can have a different value in different lexical scopes. So, each lexical scope gets a 'frame' object, which contains the value of each atom defined in that scope. There's a global scope which holds all of the globally defined values (like baz, foo and bletch above). Each frame points to its enclosing scope, so you can search upwards to find the right value.

The second trick was to realize that the lexical scope of a lambda is the scope in which the lambda itself is evaluated, and that the evaluation of a lambda expression results in a 'function' object, which contains the lambda and its enclosing scope:

> (def foo (lambda (bar bletch)
       ((lambda (baz)
          (+ baz bar))
        bletch)))
> (foo 2 3)
5

In this case, the inner lambda in foo can 'see' the value of bar from the enclosing lambda. More subtly, even if the inner lambda were executed multiple times, it would see the same baz, and could even change it. This can be used to implement all kinds of craziness, including generators:

> (defun make-inc (add)
  ((lambda (base)
     (lambda ()
       (progn
     (setq base (+ base add))
     base)))
   0)
  )

> (setq plus2 (make-inc 2))
> (plus2)
2
> (plus2)
4

The current implementation of each frame is a simple array of atom/value pairs, with a reference to the parent frame to form the full scope. There are dramatically faster implementations of this same concept, but the goal here was small and simple.
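
The frame chain can be sketched like this (a toy with invented names; the real frames are heap objects managed by the collector):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Each frame holds atom/value pairs plus a pointer to the enclosing
 * scope; lookup searches outwards until it finds a binding.
 */

#define FRAME_MAX 4

struct frame {
    struct frame *parent;            /* enclosing lexical scope */
    const char   *atoms[FRAME_MAX];
    int           vals[FRAME_MAX];
    int           n;
};

static int lookup(const struct frame *f, const char *atom, int *val)
{
    for (; f; f = f->parent)
        for (int i = 0; i < f->n; i++)
            if (strcmp(f->atoms[i], atom) == 0) {
                *val = f->vals[i];
                return 1;            /* bound in this scope */
            }
    return 0;                        /* unbound */
}
```

Mirroring the bletch example above: a binding in an inner frame shadows the global one, while unshadowed atoms fall through to the enclosing scope.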

A Tiny Allocator Optimization

With eval consuming heap space for stacks, frames and argument lists, the interpreter was spending a lot of time in the collector. As a simple optimization, I added some free lists for stack frames and cons cells.

Stack frames are never referenced when they're finished, so they can always go on the free list. Cons cells used to construct argument lists for functions are usually free.

Builtin functions have a bit which indicates whether they might hold on to a reference to the argument list. Interpreted lambdas can't get the list while nlambdas, lexprs and macros do.

Each lambda execution creates a new frame, and while it would be possible to discover if that frame 'escapes' the lambda, I decided to not attempt to cache free ones yet.
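
In miniature, the cons free list might look like this (invented names; the real code falls back to the collector rather than a fixed pool):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Freed cells are chained through their own car slot and reused before
 * any new heap space is consumed, which keeps the collector from
 * running as often.
 */

#define POOL_CELLS 8

struct cons {
    struct cons *car;  /* doubles as the free-list link */
    struct cons *cdr;
};

static struct cons pool[POOL_CELLS];
static int pool_used;
static struct cons *free_list;

static struct cons *cons_alloc(void)
{
    if (free_list) {                  /* reuse a freed cell first */
        struct cons *c = free_list;
        free_list = c->car;
        return c;
    }
    if (pool_used < POOL_CELLS)       /* otherwise take fresh space */
        return &pool[pool_used++];
    return NULL;                      /* here the real code would collect */
}

static void cons_free(struct cons *c)
{
    c->car = free_list;               /* push onto the free list */
    free_list = c;
}
```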

Save and Restore

To make the lisp interpreter more useful in tiny computers, I added the ability to save and restore the entire heap to flash. This requires leaving enough space in the flash to preserve the heap, further constraining the amount of flash available for the application.

Code

All of this code is in the 'lisp' branch of my AltOS repository:

AltOS

The lisp interpreter is independent of the rest of AltOS and could be re-purposed for another embedded operating system. It runs fine on ChaosKey hardware, and also on the STM32F042 Nucleo-32 board.

There's also a test framework which runs on Linux, and is how I developed almost all of the code. That's in the src/test directory in the above repository, and is called 'ao_lisp_test'.

Towers of Hanoi

Here's an implementation of the classic recursive Towers of Hanoi game; it shows most of the current features of the language.

;
; Towers of Hanoi
;
; Copyright © 2016 Keith Packard <[email protected]>
;
; This program is free software; you can redistribute it and/or modify
; it under the terms of the GNU General Public License as published by
; the Free Software Foundation, either version 2 of the License, or
; (at your option) any later version.
;
; This program is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
; General Public License for more details.
;

; ANSI control sequences

(defun move-to (col row)
  (patom "\033[" row ";" col "H" nil)
  )

(defun clear ()
  (patom "\033[2J" nil)
  )

(defun display-string (x y str)
  (move-to x y)
  (patom str)
  )

; Here's the pieces to display

(setq stack '("*" "**" "***" "****" "*****" "******" "*******"))

(setq top (+ (length stack) 3))

;
; Here's all of the stacks of pieces
; This is generated when the program is run
;
(setq stacks nil)

; Display one stack, clearing any
; space above it

(defun display-stack (x y clear stack)
  (cond ((= 0 clear)
     (cond (stack (progn
            (display-string x y (car stack))
            (display-stack x (1+ y) 0 (cdr stack))
            )
              )
           )
     )
    (t (progn
         (display-string x y "          ")
         (display-stack x (1+ y) (1- clear) stack)
         )
       )
    )
  )

; This should probably be included in the rom image...

(defun length (list)
  (cond (list (1+ (length (cdr list))))
    (0)
    )
  )

; Position of the top of the stack on the screen
; Shorter stacks start further down the screen

(defun stack-pos (y stack)
  (- y (length stack))
  )

; Display all of the stacks, spaced 20 columns apart

(defun display-stacks (x y stacks)
  (cond (stacks (progn
          (display-stack x 0 (stack-pos y (car stacks)) (car stacks))
          (display-stacks (+ x 20) y (cdr stacks)))
        )
    )
  )

; Display all of the stacks, then move the cursor
; out of the way and flush the output

(defun display ()
  (display-stacks 0 top stacks)
  (move-to 1 21)
  (flush)
  )

; Reset stacks to the starting state, with
; all of the pieces in the first stack and the
; other two empty

(defun reset-stacks ()
  (setq stacks (list stack nil nil))
  (length stack)
  )

; more functions which could usefully
; be in the rom image

(defun min (a b)
  (cond ((< a b) a)
    (b)
    )
  )

(defun nth (list n)
  (cond ((= n 0) (car list))
    ((nth (cdr list) (1- n)))
    )
  )

; Replace a stack in the list of stacks
; with a new value

(defun replace (list pos member)
  (cond ((= pos 0) (cons member (cdr list)))
    ((cons (car list) (replace (cdr list) (1- pos) member)))
    )
  )

; Move a piece from the top of one stack
; to the top of another

(defun move-piece (from to)
  (let ((from-stack (nth stacks from))
    (to-stack (nth stacks to))
    (piece (car from-stack)))
    (setq from-stack (cdr from-stack))
    (setq to-stack (cons piece to-stack))
    (setq stacks (replace stacks from from-stack))
    (setq stacks (replace stacks to to-stack))
    (display)
    (delay 100)
    )
  )

; The implementation of the game

(defun _hanoi (n from to use)
  (cond ((= 1 n)
     (progn
      (move-piece from to)
      nil)
     )
    (t
     (progn
      (_hanoi (1- n) from use to)
      (_hanoi 1 from to use)
      (_hanoi (1- n) use to from)
      )
     )
    )
  )

; A pretty interface which
; resets the state of the game,
; clears the screen and runs
; the program

(defun hanoi ()
  (setq len (reset-stacks))
  (clear)
  (_hanoi len 0 1 2)
  )

15 November, 2016 07:11AM

Russ Allbery

Review: The Broken Kingdoms

Review: The Broken Kingdoms, by N.K. Jemisin

Series: Inheritance #2
Publisher: Orbit
Copyright: November 2010
Printing: September 2011
ISBN: 0-316-04395-8
Format: Mass market
Pages: 395

The Broken Kingdoms is a fairly direct sequel to The Hundred Thousand Kingdoms and depends heavily on the end of that book. It had been a long time since I'd read the previous book (about five years), and I looked up plot summaries to remind myself what happened. It turned out that I probably didn't have to do that; the explanation does come when it's critical. But this book will definitely spoil the end of The Hundred Thousand Kingdoms.

Oree is an artist who sells her work to tourists in Shadow, the city beneath the World Tree. It's a good enough living, particularly for a blind immigrant from Nimaro, the area settled by the survivors of the destruction of Maro. Oree is not strictly entirely blind, since she can see magic, but that's not particularly helpful in daily life. She's content to keep that quiet, along with her private paintings that carry a strange magic not found in her public trinkets.

One of the many godlings who inhabit Shadow is Oree's former lover, so she has some connection to the powerful of the city. But she prefers her quiet life — until, that is, she finds a man at sunrise in a pile of muck and takes him home to clean him up. A man who she ends up taking care of, despite the fact that he never speaks to her, and despite his total lack of desire or apparent capability to take care of himself or avoid any danger. Not that it seems to matter, since he comes back to life every time he dies.

If you've read The Hundred Thousand Kingdoms, you have a pretty good guess at who the man Oree calls Shiny actually is. But that discovery is not the core plot of this book. Someone is killing the godlings. They're not immortal, although they don't age, but killing them should require immense power or the intervention of the Three, the gods who run the metaphysical universe of this series. Neither of those seem to be happening, and still godlings are being murdered. Nahadoth is not amused: the humans and godlings have one month to find the killer before he does something awful to all of them. Then Shiny somehow kills a bunch of priests of Itempas, and the Order is after both him and Oree. Desperate, she turns to her former boyfriend and the godlings for help, and is pulled into the heart of a dark conspiracy.

The Broken Kingdoms adds a few new elements to Jemisin's world-building, although it never quite builds up to the level of metaphysics of the previous book. But it's mostly a book about Oree: her exasperated care of Shiny, her attempts to navigate her rapidly complicating life, and her determination to do the right thing for her friends. It's the sort of book that pits cosmic power and grand schemes against the determined inner morality of a single person who is more powerful than she thinks she is. That part of the story I liked quite a lot.

Shiny, and Oree's complicated relationship with Shiny, I wasn't as fond of. Oree treats him like a broken and possibly healing person, which is true, but he's also passively abusive in his dark moping. Jemisin tries very hard throughout the book to help the reader try to grasp a bit of what must be going through Shiny's head, and she does succeed at times, but I never much cared for what I found there. And neither Nahadoth nor Yeine, when they finally make their appearance, are very likable. (Yeine in particular I found deeply disappointing and not up to her level of ethics in the first book.) Oree is still quite capable of carrying the story single-handed, and I did like her godling friends. But I felt like the ending required liking Shiny a lot more than I did, or being a lot more sympathetic to Nahadoth and Yeine than I was, and it left a bad taste in my mouth. I enjoyed reading about Oree, but I felt like this story gave her a remarkably depressing ending.

This book is also structured with a long middle section where everything seems to get more and more horrible and the antagonists are doing awful things. It's a common structural way to build tension that I rarely like. Even knowing that there's doubtless an upward arc and protagonist triumph coming, those sections are often unpleasant and difficult to read through, and I had that reaction here.

The Broken Kingdoms is less of a weird romance than The Hundred Thousand Kingdoms (although there is some romance), so you may enjoy it more if you thought that angle was overdone. It does have some interesting world-building, particularly at the godling level, and Lil is one of my favorite characters. I think Oree got a raw deal from the story and would have preferred a different ending, but I'm not sorry I read it.

Followed by The Kingdoms of Gods.

Rating: 7 out of 10

15 November, 2016 03:29AM

November 14, 2016

Mike Gabriel

Debian Edu development sprint in Oslo from Nov 25th - Nov 27th 2016

For those of you, who already thought about joining us in Oslo for our Debian Edu sprint, here comes your short reminder for signing up on this wiki page and then booking your travel.

For those of you, who have learned about our upcoming sprint just now, feel heartily invited to meet and join the Debian Edu team (and friends) in Oslo. Check with your family and friends, if they may let you go. Do that now, put your name onto our wiki page and book your journey.

Those of you, who cannot travel to Oslo, but feel like being interested in Debian and educational topics around Free Software, put a note into your calendar, so you don't forget to join us on IRC over that weekend (and any other time if you like): #debian-edu on irc.debian.org.

Looking forward to meeting you at end of November,
Mike (aka sunweaver)

14 November, 2016 08:08PM by sunweaver

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, October 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change this month. We still need a couple of supplementary sponsors to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file 29. The situation improved slightly compared to last month.

Thanks to our sponsors

New sponsors are in bold.


14 November, 2016 05:15PM by Raphaël Hertzog

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, October 2016

I was assigned 13.75 hours of work by Freexian's Debian LTS initiative and worked all of them.

I reviewed the fix for CVE-2016-7796 in wheezy's systemd, which needed substantial changes and a few iterations to get right.

I updated linux to the 3.2.82 stable release (and 3.2.82-rt119 for PREEMPT_RT), and added fixes for several security issues including CVE-2016-5195 "Dirty Cow". I uploaded and issued DLA-670-1.

In my role as Linux 3.2 stable maintainer, I made a 3.2.83 release fixing just that issue, and started to prepare a 3.2.84 release with many more fixes.

I cleaned up my work on imagemagick, but didn't go further through the backlog of issues. I put the partly updated package on people.debian.org for another LTS maintainer to pick up.

14 November, 2016 05:03PM

Dimitri John Ledkov

/boot less LVM rootfs in Zesty


On Ubuntu many of the default boot loaders support booting kernels located on LVM volumes. This includes the following platforms:

  • i686, x86_64 bios grub2
  • arm64, armhf, i686, x86_64 UEFI grub2
  • PReP partitions on IBM PowerPC
  • zipl on IBM zSystems
For all of the above, d-i has been modified in Zesty to create LVM-based installations without a dedicated /boot partition. We shall celebrate this achievement. Hopefully this means one doesn't need to remove kernels as often, or care about sizing the /boot volume appropriately, any more.

If there are more bootloaders in Ubuntu that support booting off LVM, please do get in touch with me. I'm interested in whether I can safely enable the following platforms as well:
  • armhf with u-boot
  • arm64 with u-boot
  • ppc64el with PReP volume

14 November, 2016 03:11PM by Dimitri John Ledkov ([email protected])

November 13, 2016

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2016/40-45

time for a quick update, I guess. here's the list of release-critical bugs in debian I've worked on during the last couple of weeks.

  • #825608 – libnet-jifty-perl: "libnet-jifty-perl: FTBFS: t/006-uploads.t failure"
    patch test for compatibility with newer Encode, not uploaded in the end because the problem is more likely in the code (pkg-perl)
  • #826192 – libmath-gsl-perl: "libmath-gsl-perl: FTBFS with GSL 2"
    import new upstream release (pkg-perl)
  • #828386 – src:libcrypt-openssl-pkcs12-perl: "libcrypt-openssl-pkcs12-perl: FTBFS with openssl 1.1.0"
    sponsor upload with upstream patch (pkg-perl)
  • #828388 – src:libcrypt-openssl-x509-perl: "libcrypt-openssl-x509-perl: FTBFS with openssl 1.1.0"
    cherry-pick 2 commits from upstream git repo (pkg-perl)
  • #828389 – src:libcrypt-smime-perl: "libcrypt-smime-perl: FTBFS with openssl 1.1.0"
    import new upstream release (pkg-perl)
  • #828408 – src:libpoe-filter-ssl-perl: "libpoe-filter-ssl-perl: FTBFS with openssl 1.1.0"
    use openssl 1.0.2, & downgrade severity (pkg-perl)
  • #830280 – src:libfurl-perl: "libfurl-perl: accesses the internet during build"
    disable DNS queries during build (pkg-perl)
  • #834730 – src:libdist-zilla-plugins-cjm-perl: "libdist-zilla-plugins-cjm-perl: FTBFS: Failed 1/9 test programs. 2/126 subtests failed."
    add patch from CPAN RT (pkg-perl)
  • #839200 – libcpanplus-perl: "libcpanplus-perl: FTBFS: Failed test 'Cwd has correct version in report'"
    add patch from upstream git (pkg-perl)
  • #839505 – src:mongodb: "mongodb: FTBFS: Tests failures"
    propose a possible solution
  • #839580 – src:request-tracker4: "request-tracker4: FTBFS in testing (failed tests)"
    prepare a workaround patch
  • #839987 – libcompress-raw-lzma-perl: "libcompress-raw-lzma-perl: Version Mismatch due to new src:xz-utils"
    patch out version check at runtime (pkg-perl)
  • #839992 – libmath-quaternion-perl: "libmath-quaternion-perl: autopkgtest failure: not ok 48 - rotation_angle works"
    some triaging (pkg-perl)
  • #840479 – src:libdbd-firebird-perl: "libdbd-firebird-perl: FTBFS: libfbembed.so not found"
    versioned close (pkg-perl)
  • #840980 – libperinci-sub-normalize-perl: "libperinci-sub-normalize-perl: FTBFS: Can't locate Sah/Schema/rinci/function_meta.pm in @INC"
    add new (build) dependency, after packaging it (pkg-perl)
  • #841545 – src:liborlite-perl: "liborlite-perl: FTBFS: Tests failures"
    fix sqlite commands (pkg-perl)
  • #841562 – src:libvideo-fourcc-info-perl: "libvideo-fourcc-info-perl: FTBFS: dh_auto_build: perl Build returned exit code 255"
    fix sqlite commands (pkg-perl)
  • #841573 – src:libdbix-class-perl: "libdbix-class-perl: FTBFS: Tests failures"
    patch test suite as recommended by upstream (pkg-perl)
  • #842460 – libplack-middleware-csrfblock-perl: "libplack-middleware-csrfblock-perl: FTBFS: missing dependencies on HTML::Parser"
    add missing (build) dependency (pkg-perl)
  • #842462 – libweb-simple-perl: "libweb-simple-perl: FTBFS: Can't locate HTTP/Body.pm in @INC"
    add missing (build) dependency (pkg-perl)
  • #842722 – src:kgb-bot: "kgb-bot: FTBFS (failing tests)"
    fix version comparison in test
  • #843704 – libnet-pcap-perl: "libnet-pcap-perl: FTBFS: t/09-error.t fails with newer libpcap"
    add patch from CPAN RT (pkg-perl)

13 November, 2016 10:23PM

hackergotchi for Thorsten Glaser

Thorsten Glaser

“I don’t like computers”

cnuke@ spotted something on the internet, and shared. Do read this, including the comments. It’s so true. (My car is 30 years old, I use computers mostly for sirc, lynx and ssh, and I especially do not buy any product that needs to be “online” to work.)

Nice parts of the internet, to offset this, though, do exist. IRC as a way of cheap (affordable), mostly reliable, communication that's easy enough to do with TELNET.EXE if necessary. Fanfiction; easy proliferation of people's art (literature, in this case). Fast access to documentation and source code; OpenBSD's AnonCVS was a first, nowadays almost everything (not Tom Dickey's projects (lynx, ncurses, xterm, cdk, …), nor GNU bash, though) is on a public version control system repository. (Now people need to learn to not rewrite history, just commit whatever shit they do, to record thought process, not produce the perfect-looking patch.) Livestreams too, I guess, but ever since live365.com went dead due to a USA law change on 2016-01-02, it got bad.

13 November, 2016 08:28PM by MirOS Developer tg ([email protected])

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

pkgKitten 0.1.4: Creating R Packages that purr


A new minor release 0.1.4 of pkgKitten just hit on CRAN this morning.

One small change is that the package manual page it creates now makes use of the (still new-ish and under-documented and under-used) Rd macros described at the end of Section 2.13 of Writing R Extensions:

See the system.Rd file in share/Rd/macros for more details and macro definitions, including macros \packageTitle, \packageDescription, \packageAuthor, \packageMaintainer, \packageDESCRIPTION and \packageIndices.

By using these macros, and referencing them from the DESCRIPTION file, we can avoid redundancy, or worse, inconsistency, between both files. Or just be plain lazy and describe things just once in the higher-level file: A good thing!

Otherwise we fixed a URL to the manual thanks to a PR, and just added some of the regular polish to some of the corners which R CMD check --as-cran is looking into:

Changes in version 0.1.4 (2016-11-13)

  • Utilize newer R macros in package-default manual page.

  • Repair a link to Writing R Extensions (PR #7 by Josh O'Brien)

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 November, 2016 05:09PM

Andrew Cater

Debian MiniConf, ARM, Cambridge 11/11/16 - Day 2 post 2

It's raining cats and dogs in Cambridge.

Just listening to Lars Wirzenius - who shared an office with Linus Torvalds, owned the computer that first ran Linux, founded the Linux Documentation Project. Living history in more than one sense :)

Live streaming is also happening.

Building work is also happening - so there may be random noise happening occasionally.

13 November, 2016 04:09PM by Andrew Cater ([email protected])

Debian Miniconf ARM Cambridge - 11/11/16 - Day 2 post 1

Lots of chatting at various points  - a couple of extra folk have joined us today.

I've been fighting UEFI network booting: found inconsistencies on various Wiki pages - then found that the laptop I was intending to boot was just too old to support UEFI properly. In the meantime, the person I was intending to help has moved on and hit something else ...

Release team have managed to hammer out a couple of points: various other progress from the video team. It's all going fairly well.

Lunch - and coffee - excellent once again.

Thanks to front desk folks - and ARM folks - who are tolerating the end of their working week being invaded by Debian folk.

13 November, 2016 04:09PM by Andrew Cater ([email protected])

Debian Miniconf, ARM Cambridge 10/11/16 - post 3

Quiet room with occasional outbursts of talking. One of my problems with radio gear was down to a broken cable, so that's OK-ish - I have spares.

And suddenly it's 1720 and we need to leave at about 1730.

2 x radio receivers tried: 2 x transmitters working ready for talk on Saturday.

Spare huge bag of cables appreciated by others - it's amazing how often you find someone else needs stuff :)

13 November, 2016 04:08PM by Andrew Cater ([email protected])

Debian Miniconf ARM, Cambridge 10/11/16 - post 2

Now about 30 people here: the video team are chasing down power leads and cables in readiness for the weekend.

One large open plan room with about 30 small quadrilateral tables - two make a hexagon. Very quiet for open plan: periodically the room falls completely silent - lots of developers quietly coding / reading screens.

Lunch was very good curry: ARM caterers feed us very well indeed :D

The radio I'm trying to program refuses to play with the software: the maintainer is at the back of the room and has offered to sort out a backport to Debian stable. Debian can occasionally seem like a dysfunctional family but it's good to be a member.

The Cubietruck I bought last year is sitting with another developer as I speak - he's going to try adding multiple disks for a RAID array on a machine that only draws 5W or so.

13 November, 2016 04:08PM by Andrew Cater ([email protected])

Debian Miniconf, ARM, Cambridge 10/11/16 - Post 1

More or less just getting going: there are eight of us here.

Release team are round one desk and a couple of others of us are on adjacent tables. Coffee is good, as ever :)

ARM very helpful as ever: they have been able to provide a disabled space for me, though parking here is really, really tight

13 November, 2016 04:07PM by Andrew Cater ([email protected])

Petter Reinholdtsen

Coz profiler for multi-threaded software is now in Debian

The Coz profiler, a nice profiler able to run benchmarking experiments on the instrumented multi-threaded program, finally made it into Debian unstable yesterday. Lluís Vilanova and I have spent many months since I blogged about the coz tool in August working with upstream to make it suitable for Debian. There are still issues with clang compatibility, inline assembly only working on x86, and minimized JavaScript libraries.

To test it, install 'coz-profiler' using apt and run it like this:

coz run --- /path/to/binary-with-debug-info

This will produce a profile.coz file in the current working directory with the profiling information. This is then given to a JavaScript application provided in the package and available from a project web page. To start the local copy, invoke it in a browser like this:

sensible-browser /usr/share/coz-profiler/viewer/index.htm

See the project home page and the USENIX ;login: article on Coz for more information on how it is working.

13 November, 2016 11:30AM

hackergotchi for Daniel Pocock

Daniel Pocock

Are all victims of French terrorism equal?

Some personal observations about the terrorist atrocities around the world based on evidence from Wikipedia and other sources

The year 2015 saw a series of distressing terrorist attacks in France. 2015 was also the 30th anniversary of the French Government's bombing of a civilian ship at port in New Zealand, murdering a photographer who was on board at the time. This horrendous crime has been chronicled in various movies including The Rainbow Warrior Conspiracy (1989) and The Rainbow Warrior (1993).

The Paris attacks are a source of great anxiety for the people of France but they are also an attack on Europe and all civilized humanity as well. Rather than using them to channel more anger towards Muslims and Arabs with another extended (yet ineffective) state of emergency, isn't it about time that France moved on from the evils of its colonial past and "drains the swamp" where unrepentant villains are thriving in its security services?

François Hollande and Ségolène Royal. Royal's brother Gérard Royal allegedly planted the bomb in the terrorist mission to New Zealand. It is ironic that Royal is now Minister for Ecology while her brother sank the Greenpeace flagship. If François and Ségolène had married (they have four children together), would Gérard be the president's brother-in-law or terrorist-in-law?

The question has to be asked: if it looks like terrorism, if it smells like terrorism, if the victim of that French Government atrocity is as dead as the victims of Islamic militants littered across the floor of the Bataclan, shouldn't it also be considered an act of terrorism?

If it was not an act of terrorism, then what is it that makes it differ? Why do French officials refer to it as nothing more than "a serious error", the term used by Prime Minister Manuel Valls during a recent visit to New Zealand in 2016? Was it that the French officials felt it was necessary for Liberté, égalité, fraternité? Or is it just a limitation of the English language that we only have one word for terrorism, while French officials have a different word for such acts carried out by those who serve their flag?

If the French government are sincere in their apology, why have they avoided releasing key facts about the atrocity, like who thought up this plot and who gave the orders? Did the soldiers involved volunteer for a mission with the code name Opération Satanique, or did any other members of their unit quit rather than have such a horrendous crime on their conscience? What does that say about the people who carried out the orders?

If somebody apprehended one of these rogue employees of the French Government today, would they be rewarded with France's highest honour, like those tourists who recently apprehended an Islamic terrorist on a high-speed train?

If terrorism is such an absolute evil, why was it so easy for the officials involved to progress with their careers? Would an ex-member of an Islamic terrorist group be able to subsequently obtain US residence and employment as easily as the French terror squad's commander Louis-Pierre Dillais?

When you consider the comments made by Donald Trump recently, the threats of violence and physical aggression against just about anybody he doesn't agree with, is this the type of diplomacy that the US will practice under his rule commencing in 2017? Are people like this motivated by a genuine concern for peace and security, or are these simply criminal acts of vengeance backed by political leaders with the maturity of schoolyard bullies?

13 November, 2016 10:50AM by Daniel.Pocock

Hideki Yamane

LTS for PowerPC?

Debian 9 "Stretch" drops powerpc as a release architecture, which means a Debian-based powerpc box will need more effort to maintain in the future.

If you use Debian on a powerpc box in a production environment, it may be better to consider joining LTS funding and asking the LTS team to add it to the Jessie LTS architectures. I'm not sure whether they would accept it or not, but if it's okay (as they did for armel and armhf), it may be worth it to reduce your trouble.

13 November, 2016 05:19AM by Hideki Yamane ([email protected])

November 12, 2016

John Goerzen

Morning in the Skies

IMG_8515

This is morning. Time to fly. Two boys, happy to open the hangar door and get the plane ready.

It’s been a year since I passed the FAA exam and became a pilot. Memories like these are my favorite reminders why I did. It is such fun to see people’s faces light up with the joy of flying a few thousand feet above ground, of the beauty and freedom and peace of the skies.

I’ve flown 14 different passengers in that time; almost every flight I’ve taken has been with people, which I enjoy. I’ve heard “wow” or “beautiful” so many times, and said it myself even more times.

IMG_6083

I’ve landed in two state parks, visited any number of wonderful small towns, seen historic sites and placid lakes, ascended magically over forests and plains. I’ve landed at 31 airports in 10 states, flying over 13,000 miles.

airports

Not once have I encountered anyone other than friendly, kind, and outgoing. And why not? After all, we’re working around magic flying carpet machines, right?

IMG_7867_bw

(That’s my brother before a flight with me, by the way)

Some weeks it is easy to be glum. This week has been that way for many, myself included. But then, whether you are in the air or on the ground, if you pay attention, you realize we still live in a beautiful world with many wonderful people.

And, in fact, I got a reminder of that this week. Not long after the election, I got in a plane, pushed in the throttle, and started the takeoff roll down a runway in the midst of an Indiana forest. The skies were the best kind of clear blue, and pretty soon I lifted off and could see for miles. Off in the distance, I could see the last cottony remnants of the morning’s fog, lying still in the valleys, surrounding the little farms and houses as if to give them a loving hug. Wow.

Sometimes the flight is bumpy. Sometimes the weather doesn’t cooperate, and it doesn’t happen at all. Sometimes you can fly across four large states and it feels as smooth as glass the whole way.

Whatever happens, at the end of the day, the magic flying carpet machine gets locked up again. We go home, rest our heads on our soft pillows, and if we so choose, remember the beauty we experienced that day.

Really, this post is not about being a pilot. This post is a reminder to pay attention to all that is beautiful in this world. It surrounds us; the smell of pine trees in the forest, the delight in the faces of children, the gentle breeze in our hair, the kind word from a stranger, the very sunrise.

I hope that more of us will pay attention to the moments of clear skies and wind at our back. Even at those moments when we pull the hangar door shut.

IMG_20160716_093627

12 November, 2016 07:35PM by John Goerzen

Lucy Wayland

Diversity and Inclusion, Debian Redux

So, today at Cambridge MiniDebConf, I was scheduled to do a Birds of a Feather (BoF) about Diversity and Inclusion within Debian. I was expecting a handful of people in the breakout room. Instead it was a full blown workshop in the lecture theatre with me nominally facilitating. It went far, far better than I hoped (although a couple of other people and I had to wrench us back on topic a few times).
There were lots of good ideas, and productive friendly debate (although we were pretty much all coming from the same ball park). There are three points I have taken away from it (others may have different views):
  1. We are damned good at Inclusion, but have a long way to go on the Diversity (which is a problem of the entire tech sector).
  2. Debian is a social project as well as a technical one – our immediately accessible documentation does not reflect this.
  3. We are currently too reactive and passive when it comes to social issues and getting people involved. It is essential that we become more proactive.

Combined with the recent Diversity drive from Debconf 2016, I really believe we can do this. Thank-you all you who attended, contributed, and approached me afterwards.

Edit: Video here – Debian Diversity and Inclusion Workshop

Edit Edit: video link fixed.


12 November, 2016 07:28PM by aardvarkoffnord

hackergotchi for Wouter Verhelst

Wouter Verhelst

New Toy: Nikon D7200

Last month, I was abroad with my trusty old camera, but without its SD cards. Since the old camera has an SD only slot, which does not accept SDHC (let alone SDXC) cards, I cannot use it with cards larger than 2GiB. Today, such cards are not being manufactured anymore. So, I found myself with a few options:

  1. Forget about the camera, just don't take any photos. Given the nature of the trip, I did not fancy this option.
  2. Go on eBay or some such, and find a second-hand 2GiB card.
  3. Find a local shop, and buy a new camera body.

While option 2 would have worked, the lack of certain features on my old camera had meant that I'd been wanting to buy a new camera body for a while, but it just hadn't happened yet; so I decided to go with option 3.

The Nikon D7200 is the latest model in the Nikon D7xxx series of cameras, a DX-format ("APS-C") camera that is still fairly advanced. Slightly cheaper than the D610, the cheapest full-frame Nikon camera (which I considered for a moment until I realized that two of my three lenses are DX-only lenses), it is packed with a similar amount of features. It can shoot photos at shutter speeds of 1/8000th of a second (twice as fast as my old camera), and its sensor can be set to ISO speeds of up to 102400 (64 times as much as the old one) -- although for the two modes beyond 25600, the sensor is switched to black-and-white only, since the amount of color available in such lighting conditions is very very low already.

A camera which is not only ten years more recent than the older one, but also is targeted at a more advanced user profile, took some getting used to at first. For instance, it took a few days until I had tamed the camera's autofocus system, which is much more advanced than the older one, so that it would focus on the things I wanted it to focus on, rather than just whatever object happens to be closest.

The camera shoots photos at up to twice the resolution in both dimensions (which combines to it having four times the amount of megapixels as the old body), which is not something I'm unhappy about. Also, it does turn out that a DX camera with a 24 megapixel sensor ends up taking photos with a digital resolution that is much higher than the optical resolution of my lenses, so I don't think more than 24 megapixels is going to be all that useful.

The builtin WiFi and NFC communication options are a nice touch, allowing me to use Nikon's app to take photos remotely, and see what's going through the lens while doing so. Additionally, the time-lapse functionality is something I've used already, and which I'm sure I'll be using again in the future.

The new camera is definitely a huge step forward from the old one, and while the price over there was a few hundred euros higher than it would have been here, I don't regret buying the new camera.

The result is nice, too:

DSC_1012

All in all, I'm definitely happy with it.

12 November, 2016 08:48AM

November 11, 2016

hackergotchi for Jonathan Dowland

Jonathan Dowland

Vinyl is killing Vinyl (but that's ok)

I started buying vinyl records about 16 years ago, but recently I've become a bit uncomfortable being identified as a "vinyl lover". The market is ascendant, with vinyl album sales growing for 8 consecutive years, at least in the UK. So why am I uncomfortable about it?

A quick word about audio fidelity/quality here. I don't subscribe to the school of thought that audio on vinyl is inherently better than digital audio, far from it. I'm aware of its limitations. For recordings that I love, I try to seek out the best quality version available, which is almost always digital. Some believe that vinyl is immune to the "loudness war" brickwall mastering plaguing some modern releases, but for some of the worst offenders (Depeche Mode's Playing The Angel; Red Hot Chili Pepper's Californication) I haven't found the vinyl masterings to sound any different.

16 years ago

Let's go back to why I started buying vinyl. Back when I started, the world was a very different place to what it is today. You could not buy most music in a digital form: it was 3 more years before the iTunes Store was opened, and it was Mac-only at first, and the music it sold was DRM-crippled for the first 5 or so years afterwards. The iPod had not been invented yet and there was no real market for personal music players. Minidiscs were still around, but Net-MD (the only sanctioned way to get digital music onto them from a computer) was terrible.

old-ish LPs

old-ish LPs

Buying vinyl 16 years ago was a way to access music that was otherwise much harder to reach. There were still plenty of albums, originally recorded and released before CDs, which either had not been re-issued digitally at all, or had been done so early, and badly. Since vinyl was not fashionable, the second hand market was pretty cheap. I bought quite a lot of stuff for pennies at markets and car boot sales.

Some music—such as b-sides and 12" mixes and other mixes prepared especially for the format—remains unavailable and uncollected on CD. (I'm a big fan of the B-side culture that existed prior to CDs. I might write more about that one day.)

10 years ago

modern-ish 7 inches

modern-ish 7 inches

Fast forward to around 10 years ago. Ephemeral digital music is now much more common, the iPod and PMPs are well established. High-street music stores start to close down, including large chains like MOS, Our Price, and Virgin. Streaming hasn't particularly taken off yet, attempts to set up digital radio stations are fought by the large copyright owners. Vinyl is still not particularly fashionable, but it is still being produced, in particular for singles for up-and-coming bands in 7" format. You can buy a 7" single for between £1 and £4, getting the b-side with it. The b-side is often exclusive to the 7" release as an incentive to collectors. I was very prepared to punt £1-2 on a single from a group I was not particularly familiar with just to see what they were like. I discovered quite a lot of artists this way. One of the songs we played at our wedding was such an exclusive: a recording of the Zutons' covering Jackie Wilson's "Higher and Higher", originally broadcast once on Colin Murray's Evening Session radio show.

Now

Vangelis - Blade Runner OST

An indulgence

So, where are we now?

Vinyl album sales are a huge growth market. They are very fashionable. Many purchasers are younger people who are new to the format; it's believed many don't have the means to play the music on the discs. Many (most?) albums are now issued as 12" vinyl in parallel with digital releases. These are usually exactly the same product (track listing, mixes, etc.) and usually priced at exactly twice that of the CD (with digital prices normally a fraction under that).

The second hand market for 12" albums has inflated enormously. Gone are the bargains that could be had, a typical second hand LP is now priced quite close to the digital price for a popular/common album in most places.

The popularity of vinyl has caused a huge inflation in the price of most 7" singles, which average somewhere between £8-£10 each, often without any b-side whatsoever. The good news is—from my observations—the 2nd hand market for 7" singles hasn't been affected quite as much. I guess they are not as desirable to buyers.

The less said about Record Store Day, the better.

So, that's all quite frustrating. But most of the reasons I used to buy vinyl have gone away anyway. Many of the rushed-to-market CD masterings have been reworked and reissued, correcting the earlier problems. B-side compilations are much more common so there are far fewer obscure tracks or mixes, and when the transfer has been done right, you're getting those previously-obscure tracks in a much higher quality. Several businesses exist to sell 2nd hand CDs for rock bottom prices, so it's still possible to get popular music very cheaply.

The next thing to worry about is probably streaming services.

11 November, 2016 05:28PM

hackergotchi for Chris Lamb

Chris Lamb

Awarded Core Infrastructure Initiative grant for Reproducible Builds

I'm delighted to announce that I have been awarded a grant from the Core Infrastructure Initiative (CII) to fund my previously-voluntary work on Reproducible Builds.

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
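The consensus check at the heart of this can be sketched in a few lines of Python. This is purely an illustrative model I'm adding, not part of the actual Reproducible Builds tooling: each party rebuilds from source, hashes the result, and the build is trusted only if every independent rebuild is byte-for-byte identical.

```python
import hashlib
from collections import Counter

def consensus(build_outputs):
    """Each third party builds the package from source and hashes the
    result; the build is considered verified only if every independent
    rebuild produced byte-for-byte identical output."""
    hashes = [hashlib.sha256(out).hexdigest() for out in build_outputs]
    counts = Counter(hashes)
    agreed = len(counts) == 1
    return agreed, counts.most_common(1)[0][0]

# Three independent rebuilds agree:
ok, digest = consensus([b"artifact"] * 3)
assert ok

# A compromised builder stands out:
ok, _ = consensus([b"artifact", b"artifact", b"tampered"])
assert not ok
```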

I'd like to sincerely thank the CII, not only for their material support but also for their recognition of my existing contributions. I am looking forward to working with my co-grantees towards fulfilling our shared goal.

You can read the CII's press release here.

11 November, 2016 05:04PM

November 10, 2016

hackergotchi for Matthew Garrett

Matthew Garrett

Tor, TPMs and service integrity attestation

One of the most powerful (and most scary) features of TPM-based measured boot is the ability for remote systems to request that clients attest to their boot state, allowing the remote system to determine whether the client has booted in the correct state. This involves each component in the boot process writing a hash of the next component into the TPM and logging it. When attestation is requested, the remote site gives the client a nonce and asks for an attestation; the client OS passes the nonce to the TPM, asks it to provide a signed copy of the hashes and the nonce, and sends them (and the log) to the remote site. The remote site then replays the log to ensure it matches the signed hash values, and can examine the log to determine whether the system is trustworthy (whatever trustworthy means in this context).
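The extend-and-replay mechanism can be modelled in a few lines of Python. This is a deliberately simplified sketch: a real TPM maintains multiple PCR banks and the quote is signed by an attestation key, none of which is modelled here.

```python
import hashlib

PCR_SIZE = 32  # modelling a single SHA-256 PCR bank

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend operation: new PCR = H(old PCR || H(component))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def replay(event_log) -> bytes:
    """Recompute the final PCR value from the event log, as a remote
    verifier does before comparing against the signed quote."""
    pcr = b"\x00" * PCR_SIZE  # PCRs start zeroed at reset
    for component in event_log:
        pcr = extend(pcr, component)
    return pcr

log = [b"firmware", b"bootloader", b"kernel"]
quoted_value = replay(log)            # stand-in for the TPM-signed quote
assert replay(log) == quoted_value    # the log matches the quote
assert replay([b"firmware", b"evil", b"kernel"]) != quoted_value
```

Because each step hashes over the previous PCR value, any tampered component changes every subsequent value, which is what lets the verifier spot a modified boot chain from the final quote alone.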

When this was first proposed people were (justifiably!) scared that remote services would start refusing to work for users who weren't running (for instance) an approved version of Windows with a verifiable DRM stack. Various practical matters made this impossible. The first was that, until fairly recently, there was no way to demonstrate that the key used to sign the hashes actually came from a TPM[1], so anyone could simply generate a set of valid hashes, sign them with a random key and provide that. The second is that even if you have a signature from a TPM, you have no way of proving that it's from the TPM that the client booted with (you can MITM the request and either pass it to a client that did boot the appropriate OS or to an external TPM that you've plugged into your system after boot and then programmed appropriately). The third is that, well, systems and configurations vary so much that outside very controlled circumstances it's impossible to know what a "legitimate" set of hashes even is.

As a result, so far remote attestation has tended to be restricted to internal deployments. Some enterprises use it as part of their VPN login process, and we've been working on it at CoreOS to enable Kubernetes clusters to verify that workers are in a trustworthy state before running jobs on them. While useful, this isn't terribly exciting for most people. Can we do better?

Remote attestation has generally been thought of in terms of remote systems requiring that clients attest. But there's nothing that requires things to be done in that direction. There's nothing stopping clients from being able to request that a server attest to its state, allowing clients to make informed decisions about whether they should provide confidential data. But the problems that apply to clients apply equally well to servers. Let's work through them in reverse order.

We have no idea what expected "good" values are

Yes, and this is a problem. CoreOS ships with an expected set of good values, and we had general agreement at the Linux Plumbers Conference that other distributions would start looking at what it would take to do the same. But how do we know that those values are themselves trustworthy? In an ideal world this would involve reproducible builds, allowing anybody to grab the source code for the OS, build it locally and verify that they have the same hashes.

Ok. So we're able to verify that the booted OS was good. But how about the services? The rkt container runtime supports measuring each container into the TPM, which means we can verify which container images were started. If container images are also built in such a way that they're reproducible, users can grab the source code, rebuild the container locally and again verify that it has the same hashes. Users can then be sure that the remote site is running the code they're looking at.

Or can they? Not really - a general purpose OS has all kinds of ways to inject code into containers, so an admin could simply replace the binaries inside the container after it's been measured, or ptrace() the server, or modify rkt so it generates correct measurements regardless of the image or, well, there's lots they could do. So a general purpose OS is probably a bad idea here. Instead, let's imagine an immutable OS that does nothing other than bring up networking and then reads a config file that tells it which container images to download and run. This reduces the amount of code that needs to support reproducible builds, making it easier for a client to verify that the source corresponds to the code the remote system is actually running.

Is this sufficient? Eh sadly no. Even if we know the valid values for the entire OS and every container, we don't know the legitimate values for the system firmware. Any modified firmware could tamper with the rest of the trust chain, making it possible for you to get valid OS values even if the OS has been subverted. This isn't a solved problem yet, and really requires hardware vendor support. Let's handwave this for now, or assert that we'll have some sidechannel for distributing valid firmware values.

Avoiding TPM MITMing

This one's more interesting. If I ask the server to attest to its state, it can simply pass that through to a TPM running on another system that's running a trusted stack and happily serve me content from a compromised stack. Suboptimal. We need some way to tie the TPM identity and the service identity to each other.

Thankfully, we have one. Tor supports running services in the .onion TLD. The key used to identify the service to the Tor network is also used to create the "hostname" of the system. I wrote a pretty hacky implementation that generates that key on the TPM, tying the service identity to the TPM. You can ask the TPM to prove that it generated a key, and that allows you to tie both the key used to run the Tor service and the key used to sign the attestation hashes to the same TPM. You now know that the attestation values came from the same system that's running the service, and that means you know the TPM hasn't been MITMed.
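The tie between key and hostname is concrete: a (v2) .onion name is derived directly from the service's public key, so a key generated on the TPM yields a hostname bound to that TPM. A sketch of the derivation follows; the DER blob here is a placeholder rather than a real key.

```python
import base64
import hashlib

def onion_address(public_key_der: bytes) -> str:
    """Derive a v2 .onion hostname: base32 of the first 80 bits
    (10 bytes) of the SHA-1 hash of the DER-encoded RSA public key."""
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

addr = onion_address(b"placeholder DER-encoded public key")
# 10 bytes = 80 bits = exactly 16 base32 characters, no padding
assert len(addr) == len("xxxxxxxxxxxxxxxx.onion")
assert addr.endswith(".onion")
```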

How do you know it's a TPM at all?

This is much easier. See [1].



There's still various problems around this, including the fact that we don't have this immutable minimal container OS, that we don't have the infrastructure to ensure that container builds are reproducible, that we don't have any known good firmware values and that we don't have a mechanism for allowing a user to perform any of this validation. But these are all solvable, and it seems like an interesting project.

"Interesting" isn't necessarily the right metric, though. "Useful" is. And I think this is very useful. If I'm about to upload documents to a SecureDrop instance, it seems pretty important that I be able to verify that it is a SecureDrop instance rather than something pretending to be one. This gives us a mechanism.

The next few years seem likely to raise interest in ensuring that people have secure mechanisms to communicate. I'm not emotionally invested in this one, but if people have better ideas about how to solve this problem then this seems like a good time to talk about them.

[1] More modern TPMs have a certificate that chains from the TPM's root key back to the TPM manufacturer, so as long as you trust the TPM manufacturer to have kept control of that you can prove that the signature came from a real TPM


10 November, 2016 08:48PM

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

apt-offline 1.7.2 released

I am happy to announce the release of apt-offline 1.7.2. This has arrived in time for the next release of Debian, i.e. Debian Stretch.

A long-standing cosmetic issue in the CLI, with the progress bar's total item count, has been fixed. There are also a bunch of other bug fixes, for which the specifics are present in the git logs.

Also, in this release, we've tried to catch up on the Graphical Interface, adding the GUI equivalents of the features that were added to apt-offline in the recent past.

In 1.7.1, we added the changelog option, and in this release, we've added the GUI equivalent of it.

The 'set' command has gained many new options, so in this release the 'set' command's GUI has those options added.

I hope you like this release. Please test and file bug reports in time so that we have a bug-free version for Debian Stretch.

Future tasks will now focus on porting the GUI to PyQt5 first, and then apt-offline to Python3. Any help is welcome.

You can download apt-offline from the github page or the alioth page. For Debian users, the package will show up in the repository soon.

PS: What is apt-offline ?

Description: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.


10 November, 2016 05:01PM by Ritesh Raj Sarraf

November 09, 2016

Russ Allbery

Some thoughts on the US elections

I apparently am not going to get anything done today until I write this. Some thoughts, in no particular order.

  • The most heart-breaking thing for me this morning, and last night, is the reactions from people I know who are not white, not male. Who are LGBT, or immigrants, or Muslim. They're hurt, and they're scared, and they feel like the United States just slammed the door in their faces.

    A lot of Trump supporters will be offended by this, or dismissive of it. A lot of Trump supporters don't feel like that was what the campaign was about at all. And I strongly believe that many, many people voted for Trump for reasons that have absolutely nothing to do with sexism or racism. But whether or not you believe Trump supported the alt-right, the alt-right supported Trump, and a lot of people are really scared this morning. That feeling is real.

    To all of those people, all I can say is this: the most meaningful inclusiveness is how we all treat each other on a day-to-day basis. How we, as individuals, act towards other individuals. Governments can change a lot of things that matter a great deal in terms of legal recognitions and legal protections, but they can't take away our individual determination to see each other as fellow humans and to treat every person with respect and open-hearted welcome.

    If you believe, as I do, in welcoming and supporting every single person, regardless of race, creed, gender, sexuality, or any other such distinction, now is a really good time to say so, and to act like it. To your friends, to your co-workers, to the people you meet in stores, to the people you see on the street. Whoever you voted for. People are scared. People are hurt. People need to hear that they're not alone, that the world didn't turn on them last night.

As a well-off white man, a member of, supposedly, the winning demographic class of this election last night, I want to say to everyone in the US who is angry and scared and despairing today: I have your back. Nothing has changed for me. Nothing has changed in how I'm going to see you. To the extent that I can contribute to this, the US will continue to become more inclusive, more welcoming, and more supportive at the level of day-to-day interactions between all of us. Workplaces that have a true ethical commitment to diversity will continue to support that. Multicultural, diverse cities that have welcomed everyone in all their wonderful variety will continue to do so.

    An election can cause a lot of damage. I'm scared too. But no matter what, I believe in tolerance, I believe in diversity, I believe love wins, and there are a lot of people out there like me. A lot. And we'll continue to act in accordance with those principles no matter what government is elected.

  • There is going to be a lot of ink spilled over the next few days dissecting this election, and a lot of theories put forward for why it went the way it did. A lot of that is going to come in the form of blaming people, and a lot of that analysis is going to be more of the same insider political horse race analysis. I think we should question that. Sharply.

    Going all the way back to the US primaries, and also looking at votes in other countries like Brexit in the UK, a much more foundational theme leaps out at me.

    The status quo is not working for people.

    Technocratic government by political elites is not working for people. Business as usual is not working for people. Minor tweaks to increasingly arcane systems is not working for people. People are feeling lost in bureaucracy, disaffected by elections that do not present a clear alternate vision, and depressed by a slow slide into increasingly dismal circumstances.

    Government is not doing what we want it to do for us. And people are getting left behind. The left in the United States (of which I'm part) has for many years been very concerned about the way blacks and other racial minorities are systematically pushed to the margins of our economy, and how women are pushed out of leadership roles. Those problems are real. But the loss of jobs in the industrial heartland, the inability of a white, rural, working-class man to support his family the way his father supported him, the collapse of once-vibrant communities into poverty and despair: those problems are real too.

    The status quo is not working for anyone except for a few lucky, highly-educated people on the coasts. People, honestly, like me, and like many of the other (primarily white and male) people who work in tech. We are one of the few beneficiaries of a system that is failing the vast majority of people in this country.

    I don't think right now is the best time to talk about the solutions I favor. For good or bad, the US just asked Trump to try his approach. We'll see how that goes. But I think it's very important to see how important this failure of our institutions and our economy was in the outcome of this election, and to see the echoes of that in Sanders's campaign on the Democratic side, and to think hard about what that means.

    This is something that unites us as a country. The status quo is not working for the vast majority of people in this country, whether black or white or Latinx, whether urban or rural, of any gender.

    Let me talk for a moment to the left in the US. The temptation in human psychology, when one is scared and angry, is to fall back on zero-sum thinking. To try to get back what we feel like was stolen from us by "those people." The left has been criticizing the right in the US for that type of thinking for years now. But you will see the same style of thinking on the left this morning because it's just human psychology. So here's the test. Do we really believe in inclusiveness and in finding a way to escape the zero-sum trap? If so, the way forward isn't to write off half the country as racist, or ignorant, or duped, or otherwise to react out of anger and create more divisions. It's to regroup and rebuild on top of something that unites us.

    The status quo isn't working. We all need something better than incremental tweaks of a broken system by elitist technocrats funded by inherited money and multinational corporations.

  • The result of this election was a huge surprise largely because the voices of a substantial portion of Americans were not heard by the polls. If you talk to those Americans, you will quickly find that they're unsurprised, because they don't feel heard by anything else in our society either.

    This sort of failure is possible because we're not talking to each other. Many Clinton voters do not know a single Trump voter. Many Trump voters do not know a single Clinton voter.

    There are many causes for this, but as someone who works in tech, I think we have to own a large part of this failure. We, as the people who write modern communication tools, have failed our country, and are failing the world.

    The two communication mediums on the rise, the ones that are replacing traditional newspapers and TV news as the source of information for a vast number of Americans, are Facebook and Twitter. Both of them, whatever their merits for other uses, are absolutely awful for our political discussions, for our understanding of each other, and for our democracy.

    Facebook is a closed bubble of people who think like you. It is optimized and designed to expose you to your people: to the people you are the most connected to, to the people you therefore probably agree with, to the people who think the same way and react to the same things. Everything from reactions by your friends down to the news you see on Facebook is filtered to align with your implicit biases as best as Facebook's algorithms can determine them. It isolates you from disagreement by design. You can, of course, reach out intentionally, and families will always cut across political divides to some extent, but Facebook will default you into a bubble in which you are not having thoughtful, intelligent discussions with people who disagree with you.

    Twitter, by contrast, is a public screaming match. To express any controversial political opinion on Twitter, left or right, is to invite an onslaught by a raging mob. A small number of people can manage to heavily filter that environment and have some semblance of a conversation. Almost no one is going to bother. It feels profoundly dangerous. It's terrifying to say something that might attract real attention. Only very unusual people are able to risk opening up their heart and mind on Twitter and being vulnerable enough to possibly change their minds.

    We have to do better than this.

    I don't know how to do better than this. I don't have any grand plan. I'm not the person to start a project. I don't have a start-up idea, or a free software concept. But if we, as programmers and designers and free software developers, cannot do better than this, who will?

    We have to have a way to enable thoughtful conversations between people with real and profound political disagreements in an environment where there is some mutual respect, some foundation of politeness, and a sufficiently supportive environment that people are willing to risk being convinced. And it has to somehow bypass the filter bubble and allow us to come into contact with people who do not think like us, do not come from the same walk of life, the same region, the same race, the same religion, the same economic circumstances.

    This is a profound challenge. But the news media is not going to suddenly revive. TV news is not going to magically become a venue for intelligent and thoughtful discussion. And people largely do not change their minds through being preached at by "thought leaders." People change their minds through contact with other people, through having their assumptions and conclusions questioned in an environment that supports enough of a foundational level of decency that they can get out of the trap of being afraid and defensive.

    We don't have that platform. We need it. Or I fear we're in for a continual whipsaw of zero-sum voting, as factions with no communication channels to each other whip up xenophobia in an attempt to outvote each other.

I don't have any profound conclusions. I'm honestly pretty upset. And pretty scared. But we have to talk to each other. And we have to listen to each other. And we have to persuade each other. And we have to be willing to be persuaded.

And please go tell someone this morning that you have their back.

09 November, 2016 05:11PM

Enrico Zini

On SPF

I woke up this morning with some Django server error mails in my inbox:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 9: ordinal not in range(128)
…
 'REMOTE_USER': u'…[email protected]',

I did what one does in cases like these: I tried to fix the bug and mailed [email protected] asking them to try again and let me know if it works.

I get a bounce:

  <Actual user's email>
    (generated from …[email protected])
    SMTP error from remote mail server after MAIL FROM:<[email protected]> SIZE=3948:
    host … […]: 550 Please see http://www.openspf.org/Why?id=enrico%40enricozini.org&ip=2001%3a41c8%3a1000%3a21%3a%3a21%3a21&receiver=bq :
    Reason: mechanism

I resent the mail to the actual user's address, and it went through. Job done, at least until they get back to me telling me that my fix didn't work.

Lessons learnt:

  • Activating SPF checks breaks receiving email via a forwarding address.
  • Activating SPF checks breaks hiding an email address behind a forwarding address.
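
To make the failure mode concrete, here is a toy sketch (mine, not from the actual mail systems involved): a hypothetical SPF policy that authorises a single IPv4 netblock, checked against the forwarding host's IPv6 address from the bounce above. Nothing matches, so evaluation falls through to `-all` and the receiver rejects with exactly this kind of "mechanism" failure:

```shell
# Hypothetical SPF policy: only 192.0.2.0/24 may send for the domain.
spf="v=spf1 ip4:192.0.2.0/24 -all"
# The forwarding host's address, as reported in the bounce above.
sender_ip="2001:41c8:1000:21::21:21"

# Toy evaluation: no mechanism matches the forwarder, so the policy
# falls through to "-all" and the mail is hard-failed.
case "$spf" in
  *"ip6:${sender_ip}"*) echo "pass" ;;
  *"-all") echo "fail: mechanism" ;;
esac
```

A real SPF implementation does proper CIDR matching and supports many more mechanisms, but the principle is the same: the forwarder transmits with my envelope sender from an address my policy never authorised.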

09 November, 2016 09:10AM

Daniel Pocock

Understanding what lies behind Trump and Brexit

As the US elections finish, many people are scratching their heads wondering what it all means. For example, is Trump serious about the things he has been saying, or is he simply saying whatever was most likely to make a whole bunch of really stupid people crawl out from under their rocks to vote for him? Was he serious about winning at all, or was it just the ultimate reality TV experiment? Will he show up for work in 2017, or, like Australia's billionaire Clive Palmer, will he set a new absence record for an elected official? Ironically, Palmer and Trump have both been dogged by questions over their business dealings; will Palmer's descent towards bankruptcy be replicated in the ongoing fraud trial against Trump University and similar scandals?

While the answer to those questions may not be clear for some time, some interesting observations can be made at this point.

The world has been going racist. In the UK, for example, authorities have started putting up anti-Muslim posters with an eerie resemblance to Hitler's anti-Jew propaganda. It makes you wonder if the Brexit result was really the "will of the people", or were the people deliberately whipped up into a state of irrational fear by a bunch of thugs seeking political power?

Who thought The Man in the High Castle was fiction?

In January 2015, a pilot of The Man in the High Castle, telling the story of a dystopian alternative history where Hitler has conquered America, was the most-watched original series on Amazon Prime.

It appears Trump supporters have already been operating US checkpoints abroad for some time, achieving widespread notoriety when they blocked a family of British Muslims from visiting Disneyland in 2015, ambushing them at the last moment as they were about to board their flight. It is unthinkable how anybody could be so cruel. When you reflect on statements made by Trump and the so-called "security" practices around the world, this would appear to be only a taste of things to come though.

Is it a coincidence that Brexit and Trump both happened in the same year that the copyright on Mein Kampf expired? Ironically, in the chapter on immigration Hitler specifically singles out the U.S.A. for his praise; is that the sort of rave review that Trump aspires to when he talks about making America great again?

US voters have traditionally held concerns about the power of the establishment. The US Federal Reserve has been in the news almost every week since the financial crisis, but did you know that the very concept of central banking was thrown out the window four times in America's history? Is Trump the type of hardliner who will go down this path again, or will it be business as usual? In his book Rich Dad's Guide to Investing in Gold & Silver, Robert Kiyosaki and Michael Maloney encourage people to consider putting most of their wealth into gold and silver bullion. Whether you like the politics of Trump and Brexit or not, are we entering an era where it will be prudent for people to keep at least ten percent of net wealth in this asset class again? Online dealers like BullionVault in Europe already appear to be struggling under the pressure as people rush to claim the free grams of bullion credited to newly opened accounts.

The Facebook effect

In recent times, there has been significant attention on the question of how Facebook and Google can influence elections, some European authorities have even issued alerts comparing this threat to terrorism. Yet in the US election, it was simple email that stole the limelight (or conveniently diverted attention from other threats), first with Clinton's private email server and later with Wikileaks exposing the entire email history of Clinton's chief of staff. The Podesta emails, while being boring for outsiders, are potentially far more damaging as they undermine the morale of Clinton's grass roots supporters. These people are essential for knocking on doors and distributing leaflets in the final phase of an election campaign, but after reading about Clinton's close relationship with big business, many of them may well have chosen to stay home. Will future political candidates seek to improve their technical competance, or will they simply be replaced by candidates who are born hackers and fluent in the language of a digital world?

09 November, 2016 08:23AM by Daniel.Pocock

Jaldhar Vyas

You Know Who Else Won Elections?

[Donald Trump]

You didn't possibly think my streak of serious posts could last did you?

09 November, 2016 07:33AM

November 08, 2016

Uwe Kleine-König

Installing Debian Stretch on an Omnia Turris

Recently I got "my" Omnia Turris and it didn't take long to replace the original firmware with Debian.

If you want to reproduce, here is what you have to do:

Open the case of the Omnia Turris, connect the hacker pack (or an RS232-to-TTL adapter) to access the U-Boot prompt (see Turris Omnia: How to use the "Hacker pack"). Then download the installer and device tree:

# cd /srv/tftp
# wget https://d-i.debian.org/daily-images/armhf/daily/netboot/vmlinuz
# wget https://d-i.debian.org/daily-images/armhf/daily/netboot/initrd.gz
# wget https://www.kleine-koenig.org/tmp/armada-385-turris-omnia.dtb

(The latter is not included yet in Debian, but I'm working on that.)

and after connecting the Omnia Turris's WAN to a DHCP-managed network start it in U-Boot:

dhcp
setenv serverip 192.168.1.17
tftpboot 0x01000000 vmlinuz
tftpboot 0x02000000 armada-385-turris-omnia.dtb
tftpboot 0x03000000 initrd.gz
bootz 0x01000000 0x03000000:$filesize 0x02000000

With 192.168.1.17 being the IPv4 address of the machine on which you have the tftp server running.

I suggest using btrfs as the rootfs because that works well with U-Boot. Before finishing the installation, put the dtb in the rootfs as /boot/dtb.

To then boot into Debian do in U-Boot:

setenv mmcboot 'btrload mmc 0 0x01000000 boot/vmlinuz; btrload mmc 0 0x02000000 boot/dtb; btrload mmc 0 0x03000000 boot/initrd.img; bootz 0x01000000 0x03000000:$filesize 0x02000000'
setenv bootargs console=ttyS0,115200 rootfstype=btrfs rootdelay=2 root=/dev/mmcblk0p1 rootflags=commit=5 rw
saveenv
boot

Known issues:

  • rtc doesn't work (workaround: mw 0xf10184a0 0xfd4d4cfa in U-Boot)
  • SFP and switch don't work, MAC addresses are random
  • wifi fails to probe

If you have problems, don't hesitate to contact me.

08 November, 2016 08:56PM

Jonathan Carter

A few impressions of DebConf 16 in Cape Town

DebConf16 Group Photo by Jurie Senekal.

DebConf16

Firstly, thanks to everyone who came out and added their own uniqueness and expertise to the pool. The feedback received so far has been very positive and I feel that the few problems we did experience were dealt with very efficiently. Having a DebConf in your hometown is a great experience, so consider a bid for hosting a DebConf in your city!

DebConf16 Open Festival (5 August)

The Open Festival (usually Debian Open Day) turned out pretty good. It was a collection of talks, a job fair, and some demos of what can be done with Debian. I particularly liked Hetzner’s stand. I got to show off some 20-year-old-plus Super Mario skills and they had some fun brain teasers as well. It’s really great to see a job stand that’s so interactive and I think many companies can learn from them.

The demo that probably drew the most attention was from my friend Georg, who demoed some LulzBot Mini 3D Printers. They really seem to love Debian, which is great!

DebConf (6 August to 12 August)

If I try to write up all my thoughts and feelings about DC16, I’ll never get this post finished. Instead, here are some tweets from DebConf that others have written:

 

 

 

 

Day Trip

We had 3 day trips:

Brought to you by

DebConf16 Orga Team.

See you in Montréal!

DebConf17 dates:

  • DebCamp:  31 July to 4 August 2017
  • DebConf: 6 August to 12 August 2017
  • More details on the DebConf Wiki.

The DC17 sponsorship brochure contains a good deal of information, please share it with anyone who might be interested in sponsoring DebConf!

Media

08 November, 2016 08:01PM by jonathan

Dirk Eddelbuettel

anytime 0.1.0: New features, some fixes

A new release of anytime is now on CRAN following the four releases in September and October.

anytime aims to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects -- and does so without requiring a format string. See the anytime page for a few examples.

Release 0.1.0 adds several new features. New functions utctime() and utcdate() parse to coordinated universal time (UTC). Several new formats were added. Digit-only formats like 'YYYYMMDD' with or without 'HHMMSS' (or even with fractional seconds 'HHMMSS.ffffff') are supported more thoroughly. Some examples:

R> library(anytime)
R> anytime("20161107 202122")   ## all digits
[1] "2016-11-07 20:21:22 CST"
R> utctime("2016Nov07 202122")  ## UTC parse example
[1] "2016-11-07 14:21:22 CST"
R> 

The NEWS file summarises the release:

Changes in anytime version 0.1.0 (2016-11-06)

  • New functions utctime() and utcdate() were added to parse input as coordinated universal time; the functionality is also available in anytime() and anydate() via a new argument asUTC (PR #22)

  • New (date)time format for RFC822-alike dates, and expanded existing datetime formats to all support fractional seconds (PR #21)

  • Extended functionality to support not only ‘YYYYMMDD’ (without a separator, and not covered by Boost) but also with ‘HHMM’, ‘HHMMSS’ and ‘HHMMSS.ffffff’ (PR #30 fixing issue #29)

  • Extended functionality to support ‘HHMMSS[.ffffff]’ following other date formats.

  • Documentation and tests have been expanded; typos corrected

  • New (unexported) helper functions setTZ, testOutput, setDebug

  • The testFormat (and testOutput) functions cannot be called under RStudio (PR #27 fixing issue #25).

  • More robust support for non-finite values such as NA, NaN or Inf (Fixing issue #16)

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 November, 2016 01:24AM

gettz 0.0.3

A minor release 0.0.3 of gettz arrived on CRAN two days ago.

gettz provides a possible fallback in situations where Sys.timezone() fails to determine the system timezone. That can happen when e.g. the file /etc/localtime somehow is not a link into the corresponding file with zoneinfo data in, say, /usr/share/zoneinfo.
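
To illustrate that failure mode (the helper below is my own sketch, not part of gettz): deriving a zone name from /etc/localtime only works while it remains a symlink into the zoneinfo tree.

```shell
# Illustrative helper (not part of gettz): recover a zone name by
# resolving the localtime symlink into the zoneinfo tree.
tz_from_localtime() {
    target=$(readlink -f "$1" 2>/dev/null) || return 1
    case "$target" in
        */zoneinfo/*) echo "${target##*/zoneinfo/}" ;;
        *) return 1 ;;  # a plain copied file: no zone name recoverable
    esac
}

# On a healthy system this prints e.g. "Europe/Oslo"; when /etc/localtime
# is a bare copy of the zone file, it fails -- the situation where a
# fallback like gettz becomes useful.
tz_from_localtime /etc/localtime || echo "cannot determine timezone from the symlink"
```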

This release adds a second #ifdef to permit builds on Windows for the previous R release (i.e., r-oldrel-windows). No new code or new features.

Courtesy of CRANberries, there is a comparison to the previous release.

More information is on the gettz page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 November, 2016 01:19AM

November 07, 2016

Daniel Pocock

Quickstart SDR with gqrx, GNU Radio and the RTL-SDR dongle

Software Defined Radio (SDR) provides many opportunities for both experimentation and solving real-world problems. It is not exactly a new technology but it has become significantly more accessible due to the increases in desktop computing power (for performing the DSP functions) and simultaneous reduction in the cost of SDR hardware.

Thanks to the availability of a completely packaged gqrx and GNU Radio solution, you can now get up and running in less than half an hour, spending less than fifty dollars/pounds/euros.

We provided a full demo of the Debian Hams gqrx solution at Mini DebConf Vienna (video) and hope to provide a similar demo at MiniDebConf Cambridge on the coming weekend of 12-13 November.

gqrx is also available for Fedora users.

Choosing hardware

There are many different types of hardware, ranging from the low-cost RTL-SDR USB dongles to full duplex multi-transceiver systems.

My recommendation is to start with an RTL-SDR dongle due to its extremely low cost; this will give you an opportunity to reflect on the opportunities of this technology before putting money into one of the transceivers and their accessories. The RTL-SDR dongle also benefits from being a small self-contained solution that you can carry around and experiment with or demo just about anywhere.

Important: Don't buy the cheapest generic RTL TV/radio receivers. It is absolutely essential to buy one of the units that has been explicitly promoted for SDR. These typically have a temperature compensated crystal oscillator (TCXO), which is absolutely essential for the reception of narrowband voice and digital signals. Without this, it is only possible to receive wideband broadcast FM radio and TV channels.

For those who want to try it out with us at MiniDebConf Cambridge, Technofix has UK stock (online ordering); they are about £26.

Getting gqrx up and running fast

Note: to avoid the wrong kernel module being loaded automatically, it is recommended that you don't connect the RTL-SDR dongle before you install the packages. If you did already connect it, you may need to reboot or rmmod dvb_usb_rtl28xxu.
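
If the DVB module keeps claiming the dongle on every plug-in, one way to keep it away permanently (my own suggestion, not something the packages set up for you) is a modprobe blacklist file, for example /etc/modprobe.d/rtl-sdr-blacklist.conf (the file name is arbitrary) containing:

```
# Keep the DVB TV driver from claiming the dongle so SDR tools can use it.
blacklist dvb_usb_rtl28xxu
```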

If you are using a Debian jessie system, you can get all the necessary packages from jessie-backports.

If you haven't already enabled backports, you can do so with a command like this:


$ echo "deb http://ftp.ch.debian.org/debian jessie-backports main" | sudo tee -a /etc/apt/sources.list

Make sure your local index is updated and then install the necessary packages:


$ sudo apt-get update
$ sudo apt-get install -t jessie-backports gqrx-sdr rtl-sdr

Running it for the first time

Once the packages are installed, connect the RTL-SDR dongle to the computer and then start the gqrx GUI from a terminal:


$ gqrx

If the GUI fails to appear, look carefully at the error messages. It may be that the wrong kernel module has been loaded.

The properties window appears, select the RTL-SDR dongle:

Now the main screen will appear. Choose the wideband FM mode "WFM (mono)" and change the frequency to a value in the FM broadcast band such as 100MHz. Click the "Power on" button in the top left corner, just under the "File" menu, to start reception. Click in the middle of a strong signal to tune to that station. If you don't hear anything, check the squelch setting (it should be more negative than the signal strength value) and increase the Gain control at the bottom right hand side of the window.

Looking for ham / amateur radio signals

A popular band for hams is between 144 - 148 MHz (in some countries only a subset of this band is used). This is referred to as the two-meter band, as that is the wavelength at this frequency.
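
The name is easy to verify with a quick back-of-the-envelope calculation (mine, not from the original post): wavelength = c / f, which at the middle of the band comes out at just over two metres.

```shell
# Wavelength at 146 MHz, roughly the middle of the 144-148 MHz band.
c=299792458      # speed of light in m/s
f=146000000      # 146 MHz in Hz
awk -v c="$c" -v f="$f" 'BEGIN { printf "lambda = %.2f m\n", c / f }'
# prints: lambda = 2.05 m
```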

Hams often use the narrowband FM mode in this band, especially with repeater stations. Change the "Mode" setting from "WFM" to "Narrow FM" and change the frequency to a value in the middle of the band. Look for signals in the radio spectrum and click on them to hear them.

If you are not sure which part of the band to look in, search for the two-meter band plan for your country/region and look for the repeater output frequencies in the band plan.

07 November, 2016 07:56PM by Daniel.Pocock

Jaldhar Vyas

New Laptop / Problems with Windows part 896,324

I had mentioned previously that I had been forced to purchase a new laptop. I decided that I didn't want another Thinkpad. The Lenovo ones no longer have the high quality they had in the IBM days, and while support is still pretty good by today's dismal standards, it's not worth the premium price. (If I'm buying it with my own money, that is.) I had heard good things about Dell's Linux support so I looked into their offerings and ended up buying a Precision 7510. Mind you, this model came with Windows 7 installed, but I didn't mind. As I wanted to install Debian according to my own specs anyway, I was OK with just knowing that the hardware would be compatible. So I prepared a Jessie USB installation stick (this model doesn't have a CD/DVD drive) and shrunk down the Windows installation (though I didn't delete it altogether, for reasons to be explained below).

At this point it is traditional to give a long, tortured account of how Heaven and Earth had to be moved to get Linux installed. But that is a thing of the past. The combination of good hardware and the excellent work of the debian-installer team made the setup a breeze, with only a couple of minor bumps in the road. One is that the kernel in the Jessie installer was not quite up to snuff. Downloading 4.6.0 from backports did the trick. Post-install, to get the most out of my nifty new 4K display, I needed the latest, alas non-free, nvidia-drivers. And for stable wifi (I always install over ethernet for this reason) I had to install the firmware-iwlwifi package. Everything else—even my printer—either "just worked" or needed only minor fiddling around.

Having used this machine for a while, the biggest problem I have is with the keyboard. It is nowhere near as tactile and comfortable to use as the old IBM Thinkpads. Even Lenovo Thinkpad keyboards are better. I'm a hunt-and-peck type myself but it is annoying; I think a real touch typist would hate it. The cursor and home, end, page up, page down etc. keys are in the wrong place, and home and end are actually function keys. There is a pointer and a trackpad and two sets of mouse buttons, which seems like a waste of space. In fact much space is wasted everywhere, space which could be used to improve the keyboard. Other than that I like it. The battery life is not the best but fairly good. It's a bit heavier than I was used to, but I've gotten used to it. Although I didn't go with the SSD option, it is not that noisy; again, you can get used to it. All in all, I think it is worth it for the price.


I installed Debian but I only really use it as a base to run VMWare Workstation. I occasionally have to support software across multiple platforms but I don't want the hassle or expense of multiple computers, so I have Windows (the original installation upgraded to Windows 10) and Mac OS X running in VMs. Plus I have another VM running Kubuntu LTS for my day-to-day computing, another Debian install running sid for packaging, and Minix. Backups are as simple as making a snapshot of the VM. If something accidentally gets screwed up, I can easily revert it back to a known good state. Ideally, I would like to replace VMWare with a free solution such as qemu or virtualbox, but as far as I know VMWare is far ahead in emulation capabilities (OpenGL support, for example), which is vital for efficiently using the proprietary OSes.

Things were going swimmingly until a few days ago, which brings me to part two of this post. I booted into the Windows 10 VM only to be greeted by a message from the Windows boot manager that "A component of the operating system has expired." I tried going back to a snapshot from September (when this definitely was working) but I still got the same thing. A bit of googling revealed this has happened to others, and the advice seemed to be to reset the computer's date and reinstall Windows 10. It took several tries but I finally got that done, completed the task I needed to do, and shut it down. At the end of the day I shut the whole laptop down and thought no more of it.

The next day I boot up and...where is grub? It seems that during the Windows reinstall, it had overwritten grub with the Windows boot loader. And while grub is nice enough to add an entry for Windows when detected, Windows does not extend the same courtesy to Linux. OK, time to bring out my trusty USB stick again and reinstall grub. Oops, I've wiped it off to store other things. No matter, download another image and do it again. Reboot and...back in Windows. Fiddle around in the EFI settings until I can get it to boot from USB.

Now I'm in the shell provided by debian-installer so I can mount and chroot my Linux partition and reinstall grub. Except no I can't, because it is LUKS encrypted. OK, apt-get install cryptsetup, open it with my passphrase and now I can mount the partition, chroot into it and reinstall grub. Except no I can't, because it is a logical volume group. Back to apt-get, install lvm2, vgscan (because of course I've forgotten the name of the group,) vgchange and now I can mount, chroot, etc. etc. Except no I can't.


# mount /dev/mapper/vg00-root /mnt
# chroot /mnt
# grub-install /dev/sda
error: cannot find a device for /boot/grub (is /dev mounted?).

sigh


# mount /dev/sda5 /boot
special device /dev/sda5 does not exist.

Well, /dev is mounted, but it indeed does not contain a device called sda5.


# /etc/init.d/udev start
udev requires a mounted procfs.  not started.

Very well then.


# mount -t proc none /proc
# /etc/init.d/udev start

Nope. proc needs sysfs.


# mount -t sysfs none /sys
# /etc/init.d/udev start

Still no. You get a warning about how it is a bad idea to run udev from an interactive shell, and there is still no /dev/sda5. Time to start googling again. It turns out what I should have done is open another shell from the installer environment and do...


# mount --bind /dev/ /mnt/dev

Now I can mount /boot/grub and reinstall grub and it should all work right?

I should be so lucky. Ok back to square one. I now did what I should have done in the first place and searched the Debian wiki. Sure enough there is a page which deals exactly with my predicament. Finally I get everything installed correctly and triumphantly reboot into Linux.

Of course now Windows doesn't work again...

07 November, 2016 07:01PM

Reproducible builds folks

Reproducible Builds: week 80 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 30 and Saturday November 5 2016:

Upcoming events

  • Chris Lamb and Holger Levsen will be presenting at MiniDebConfCambridge 2016 in Cambridge, United Kingdom on November 10th-13th.

  • Vagrant Cascadian will be presenting Introduction to Reproducible Builds at the SeaGL.org Conference in Seattle, USA on November 12th.

  • The next IRC meeting will be held on November 15th.

  • Reproducible Debian Hackathon - A small hackathon organized in Boston, USA on December 3rd and 4th. If you are interested in attending, please contact Valerie Young (spectranaut in the #debian-reproducible IRC channel on irc.oftc.net.)

  • The second Reproducible Builds World Summit will be held in Berlin, Germany on December 13th-15th.

Reproducible work in other projects

Bugs filed

Reviews of unreproducible packages

81 package reviews have been added, 14 have been updated and 43 have been removed this week, adding to our knowledge about identified issues.

3 issue types have been updated:

1 issue type has been removed:

1 issue type has been updated:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (17)

diffoscope development

  • diffoscope 62 was uploaded to unstable by Mattia Rizzolo:

    • Add rudimentary support for OpenDocumentFormat files. (Michel Messerschmidt)
    • Detect JSON order-only differences and print them nicely. (Mattia Rizzolo)

buildinfo.debian.net development

tests.reproducible-builds.org

Reproducible Debian:

  • With thanks to Profitbricks' continued sponsorship, Holger set up two new amd64 build nodes (and the associated Jenkins jobs) with 15/16 cores and 48GB RAM each, for a total of four such amd64 nodes, to double the build capacity of our amd64 build network.

Misc.

Also with thanks to Profitbricks sponsoring the "hardware" resources, Holger created a 13-core machine with 24GB RAM and 100GB SSD-based storage so that Ximin can do further tests and development on GCC and other software on a fast machine.

This week's edition was written by Chris Lamb, Ximin Luo, Vagrant Cascadian, Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

07 November, 2016 05:17PM

Petter Reinholdtsen

How to talk with your loved ones in private

A few days ago I ran a very biased and informal survey to get an idea about what options are being used to communicate with end-to-end encryption with friends and family. I explicitly asked people not to list options only used in a work setting. The background is the uneasy feeling I get when using Signal, a feeling shared by others, as seen in a blog post from Sander Venima about why he no longer recommends Signal (with feedback from the Signal author available from ycombinator). I wanted an overview of the options being used, and hope to include those options in a less biased survey later on. So far I have not taken the time to look into the individual proposed systems. They range from text sharing web pages, via file sharing and email, to instant messaging, VOIP and video conferencing. For those considering which system to use, it is also useful to have a look at the EFF Secure messaging scorecard, which is slightly out of date but still provides valuable information.

So, on to the list. There were some used by many, some used by a few, some rarely used ones, and a few mentioned but without anyone claiming to use them. Notice the grouping is in reality quite random, given the biased, self-selected set of participants. First the ones used by many:

Then the ones used by a few.

Then the ones used by even fewer people.

And finally the ones mentioned but not marked as used by anyone. This might be a mistake; perhaps the person adding the entry forgot to flag it as used?

Given the network effect, it seems obvious to me that we as a society have been divided and conquered by those interested in keeping encrypted and secure communication away from the masses. The finishing remarks from Aral Balkan in his talk "Free is a lie" about the usability of free software really come into effect when you want to communicate in private with your friends and family. We can not expect them to let the usability of a communication tool block their ability to talk to their loved ones.

Note for example the option IRC w/OTR. Most IRC clients do not have OTR support, so in most cases OTR would not be an option, even if you wanted it. In my personal experience, about 1 in 20 of the people I talk to have an IRC client with OTR. For private communication to really be available, most of the people you want to talk to must have the option in their currently used client. I can not simply ask my family to install an IRC client; I would need to guide them through a technical multi-step process of adding extensions to the client to get them going. This is a non-starter for most.

I would like to be able to do video phone calls, audio phone calls, exchange instant messages and share files with my loved ones, without being forced to share with people I do not know. I do not want to share the content of the conversations, and I do not want to share who I communicate with or the fact that I communicate with someone. Without all these factors in place, my private life is being more or less invaded.

07 November, 2016 09:25AM

November 06, 2016

hackergotchi for Shirish Agarwal

Shirish Agarwal

The long tail in a common’s man journey to debconf16 – 1

I was going to write a technical post, but after seeing the discussion from one of the DebConf team meetings I decided to share a novice’s travel experience instead.

Before I start here’s the discussion log http://meetbot.debian.net/debconf-team/2016/debconf-team.2016-10-20-20.01.log.html

and specifically this part, which hit me (using fake names for the discussion, as I haven’t taken permission from the folks to cite them by name):

20:36:52 abcd: $100 CAD is a lot for some, but you’d only need it if you won’t sleep in sponsored accom, which arguably is acceptable.
20:37:04 it would, efgh, fixed sponsorship sum for everybody and allocation of rooms completely decoupled. Hotel gets the money from everybody and the “base fee” from DebConf.
20:37:15 people who can’t afford also have special needs and may be uncomfortable in sharing rooms. That’s quite frequently in our community. Managing each case will be much more complicated.
20:37:31 hijk: we could set aside budget for such special needs, for sure.
20:37:43 I’m talking about managing each case
20:37:46 hijk: yes, but we’ll have the special cases no matter what.
20:37:48 yes, and the special cases need to be catered for regardless of how everybody else is housed
20:38:06 hijk: room allocation already includes this.
20:38:19 people having to expose their personal problems to have us permitting them staying in the hotel
20:38:23 that’s just too weird

It just goes on. I dunno whether I’m weird or not, or whether the experience I’m about to share is just normal; that I will leave for you to decide.

As I have shared before, some friends of mine from the free software community cajoled me last year into applying for a debconf bursary (debconf15), which surprisingly got approved, but as it was late, and given my pre-conceived myths/notions of visas taking a looooong time, I decided not to go further. Many things take a long time to happen in the Indian bureaucratic maze. For instance, I have been in a civil case for almost a decade now, among other things, so I know and accept that things take their own sweet time, otherwise known as ‘Indian patience’😉

I did the application and, again, surprise, surprise: this time too I was approved. Luckily, I had done the bursary application early, so I was a bit positive on the visa front. There was a goof-up at the embassy, but thanks to the people at travel.stackexchange.com, where I asked quite a few questions, I was a bit informed and travel was relatively hassle-free. Internally though, I was nervous as hell. I had been feeling like a ‘conman’ or a ‘fraud’ or an ‘imposter’, because I knew beforehand that the project is so huge, and I had made the mistake of putting up a talk and a workshop where the big guns would be, which again were accepted (not good). The only saving grace I could think of was that there might be some newbies who don’t know about the project at all (on Open Day) and hopefully I could help with that, but as you will see, even there I was fully inadequate.

I live in Pune, which is around 3.5 hours from Mumbai (BOM), from where international flights take off. While Pune has an airport, due to defence considerations there cannot be much improvement for either domestic or international carriers. There have been attempts to build an exclusive civilian airport for a long time (almost a decade), and it would still take a decade or more.

Hence I had decided to take an early morning train from Pune to Mumbai, change a couple of locals, and finally land up at the Mumbai International Airport. Hindsight, as they say, is 20:20; while I do have friends in Mumbai, I also found out about a homestay which is closer to the airport and still relatively budget-friendly.

Anyways, I met a few friends, but as I was paranoid about missing connections I found myself in front of the airport at 20:00 hrs, with 7+ hours to go before my flight. While there is nothing to do around the airport other than hanging around, I just hung around outside, as I knew that inside the airport it would be chilly, and once you go in you cannot come out, or at least it’s an inconvenience to the security therein. The International Airport is on three levels: the basement is for vehicles, the first level is to receive international and domestic passengers, and the upper-most level is exclusively for people flying internationally. This, again, I came to know when I tried to enter through the one meant for domestic and international passengers coming into the city.

I came to the check-in counter at around 02:00 hrs, did the security thing, and just had to wait, as the flight was at 0400 hrs (from my limited search experience, the cheapest flights are at such times, when nobody else (i.e. civilized people) wants to fly). I entered Doha around 5:15 Doha time and saw a much, much bigger airport than either the Mumbai International Airport or the Delhi International Airport. While I have written some negative stuff about Doha, there were two positives that I am sure I had forgotten to share –

a. There were no transit visa fees that I had to pay. Most countries and airports I researched have something called a transit visa, and that can really get expensive, so I saved money on that.

b. The free ride into the city and back, with voluntary tipping of the driver and/or guide (approx. 3-4 hours).

While the second, from what I can tell, is a gimmick, it is something I wish other countries and airports would emulate.

There are hotels in the airport, and I could have had hotel accommodation if I had booked a slightly more expensive ticket, roughly INR 5k/- more each way, which would have given me a bit more legroom as well as a stay, as my layover was more than 24 hours. But this information became known only at the last minute. Qatar Airways has just a toll-free number, and after trying more than a few times I gave up. They don’t have an office in the city. When I reached the check-in counter they said that if I had upgraded to ‘Y’ class I would have had the hotel thing. Changing tickets at the last moment was too expensive, and anyway, for hotel accommodation during layovers they required at least 24 hours notice.

I had to make do with recliners and chairs, which are not really comfortable. There were only a couple of waiting rooms on the air-side which had a view of the aircraft, and these were a bit more pleasing than those on the land-side, which were fully blocked without a view. I wish there were a map of the airport available from within the airport, as even with a single terminal it is really easy to get lost.

Somehow the day and night went by; I took my second flight and reached Cape Town, South Africa. Throughout the journey I had been stressed, as I had to be awake at all times and make sure that nothing got stolen. Having attendants at the toilets was also good, so that there was no possibility of any violence there. So it had been 2 days, no shower and no sleep.

Later I also came to know about airport sleeping pods and shower stalls, but these seem to be few in number and available at only a few airports, and there will always be a bit of a premium attached to them, as airports are a monopoly business.

Anyways, I reached the venue. Throughout the travel there was quite a bit of unnamed fear, which I only came to recognize after seeing ‘The Man who knew Infinity’, the movie about Dr. Ramanujan. It was/is the fear of the unknown; while in the movie it is articulated as the fear of crossing the seven seas, symbolically it is the fear of the unknown.

Now, while I was dead tired, I still pushed myself, as I didn’t want the effects of jet lag to interfere with my normal sleeping and waking patterns. I did freshen up, but didn’t allow myself the luxury of the bath-tub, as I knew that if I went in I would not come out that day. I met all the people, learnt who’s who and where things were happening, and slowly night came. Night came and I was so looking forward to sleep, but sleep was not to be. I later learnt it could have been either of two reasons: ‘travel-induced insomnia’ or what is known as the ‘first night effect’.

It was only on the second day, when I was in the bath-tub for about 2-3 hours, that I could feel the tension leaving my body. I finally realized that I was in Cape Town, South Africa, and could enjoy and be surprised at seeing birds within a few feet of me🙂 .

Now, I don’t know whether I’m the only weird/paranoid one; I do know that it would not have been easy for me, at least for the first night, as I was turning and twisting throughout. I turned on the lights and read for some time, hoping for sleep to take over, but that didn’t work. I tried quite a few things but sleep didn’t come. If I had been sleeping in a room with other people, I dunno how they would have reacted. I myself am a light sleeper (most of the time), and if I had had sleep coming and somebody else acted or behaved the way I was, I wouldn’t have been able to sleep. However much you try, whatever the natural reaction is, it will be. There are still some bits to share, but those will be in part 2.


Filed under: Miscellenous Tagged: #air-travel, #Debconf16, #paranoia, #sleep

06 November, 2016 10:38PM by shirishag75

Russ Allbery

Review: Digger

Review: Digger, by Ursula Vernon

Publisher: Sofawolf
Copyright: October 2013
ISBN: 1-936689-32-4
Format: Graphic novel
Pages: 837

As Digger opens, the eponymous wombat is digging a tunnel. She's not sure why, or where to, since she hit a bad patch of dirt. It happens sometimes, underground: pockets of cave gas and dead air that leave one confused and hallucinating. But this one was particularly bad, it's been days, she broke into a huge cave system, and she's thoroughly lost. Tripping on an ammonite while running from voices in the dark finally helps her come mostly to her senses and start tunneling up, only to break out at the feet of an enormous statue of Ganesh. A talking statue of Ganesh.

Digger is a web comic that ran from 2005 to 2011. The archives are still on the web, so you can read the entire saga for free. Reviewed here is the complete omnibus edition, which collects the entire strip (previously published in six separate graphic novels containing two chapters each), a short story, a bonus story that was published in volume one, a bunch of random illustrated bits about the world background, author's notes from the web version, and all of the full-color covers of the series chapters (the rest of the work is in black and white). Publication of the omnibus was originally funded by a Kickstarter, but it's still available for regular sale. (I bought it normally via Amazon long after the Kickstarter finished.) It's a beautiful and durable printing, and I recommend it if you have the money to buy things you can read for free.

This was a very long-running web comic, but Digger is a single story. It has digressions, of course, but it's a single coherent work with a beginning, middle, and end. That's one of the impressive things about it. Another is that it's a fantasy work involving gods, magic, oracles, and prophecies, but it's not about a chosen one, and it's not a coming of age story. Digger (Digger-of-Needlessly-Convoluted-Tunnels, actually, but Digger will do) is an utterly pragmatic wombat who considers magic to be in poor taste (as do all right-thinking wombats), gods to be irritating underground obstacles that require care and extra bracing, and prophecies to not be worth the time spent listening to them. It's a bit like the famous Middle Earth contrast between the concerns of the hobbits and the affairs of the broader world, if the hobbits were well aware of the broader world, able to deal with it, but just thought all the magic was tacky and irritating.

Magic and gods do not, of course, go away just because one is irritated by them, and Digger eventually has to deal with quite a lot of magic and mythology while trying to figure out where home is and how to get back to it. However, she is drawn into the plot less by any grand danger to the world and more because she keeps managing to make friends with everyone, even people who hate each other. It's not really an explicit goal, but Digger is kind-hearted, sensible, tries hard to do the right thing, and doesn't believe in walking away from problems. In this world, that's a recipe for eventual alliances from everything from warrior hyenas to former pirate shrews, not to mention a warrior cult, a pair of trolls, and a very confused shadow... something. All for a wombat who would rather be digging out a good root cellar. (She does, at least, get a chance to dig out a good root cellar.)

The characters are the best part, but I love everything about this story. Vernon's black and white artwork isn't as detailed as, say, Dave Sim at his best, and some of the panels (particularly mostly dark ones) seemed a bit scribbly. But it's mostly large-panel artwork with plenty of room for small touches and Easter eggs (watch for the snail, and the cave fish graffiti that I missed until it was pointed out by the author's notes), and it does the job of telling the story. Honestly, I like the black and white panels better than the color chapter covers reproduced in the back. And the plot is solid and meaty, with a satisfying ending and some fantastic detours (particularly the ghosts).

I think my favorite bits, though, are the dialogue.

"Do you have any idea how long twelve thousand years is?"
"I know it's not long enough to make a good rock."

Digger is snarky in all the right ways, and sees the world in terms of tunnels, digging, and geology. Vernon is endlessly creative in how she uses that to create comebacks, sayings, analysis, and an entire culture.

This is one of the best long-form comics I've read: a solid fantasy story with great characters, reliably good artwork, a coherent plot arc, wonderful dialogue, a hard-working and pragmatic protagonist (who happens to be female), and a wonderfully practical sense of morality and ethics. I'm sorry it's over. If you've not already read it, I highly recommend it.

Remember tunnel 17!

Rating: 9 out of 10

06 November, 2016 06:49PM

Niels Thykier

Improvements in apt-file 3.1.2

Yesterday I uploaded apt-file 3.1.2 to unstable; it comes with a few things I would like to highlight.

 

  • We fixed an issue where apt-file would not show top-level files in source packages. (bug#676642). Thanks to Paul Wise for the proposed solution.
  • Paul Wise also fixed a bug where apt-file list -I dsc <source-pkg> would fail to list all files in the source package if a given file was also present in other packages.
  • We added --filter-suites / --filter-origins options that can be used to narrow the search space.  Example: apt-file search --filter-suites unstable lintian/checks/

You can also set defaults in the config file – if you want to always search in unstable, simply do:

# echo 'apt-file::Search-Filter::Suite "unstable";' >> /etc/apt/apt-file.conf

For the suite filter, either a code name (“sid”) or a suite name (“unstable”) will work.  Please note that the filters are case-sensitive – suites/code names generally use all lowercase, whereas origins appear to use title-case (i.e. “unstable” vs. “Debian”).
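By analogy with the suite example above, an origin filter default can presumably be set the same way. Note that the "Origin" key name below is my assumption, inferred from the --filter-origins option name; check apt-file(1) for the authoritative spelling before relying on it:

```
# echo 'apt-file::Search-Filter::Origin "Debian";' >> /etc/apt/apt-file.conf
```

As noted above, the value is case-sensitive, so "Debian" (title-case) rather than "debian" would be expected for an origin.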

 


Filed under: apt-file, Debian

06 November, 2016 08:13AM by Niels Thykier

Russell Coker

Is a Thinkpad Still Like a Rolls-Royce

For a long time the Thinkpad has been widely regarded as the “Rolls-Royce of laptops”. Since 2003 one could argue that Rolls-Royce is no longer the Rolls-Royce of cars [1]. The way that IBM sold the Think business unit to Lenovo and the way that Lenovo is producing both Thinkpads and cheaper Ideapads is somewhat similar to the way the Rolls-Royce trademark and car company were separately sold to companies that are known for making cheaper cars.

Sam Varghese has written about his experience with Thinkpads and how he thinks it’s no longer the Rolls-Royce of laptops [2]. Sam makes some reasonable points to support this claim (one of which only applies to touchpad users – not people like me who prefer the Trackpoint), but I think that the real issue is whether it’s desirable to have a laptop that could be compared to a Rolls-Royce nowadays.

Support

The Rolls-Royce car company is known for great reliability and support as well as features that other cars lack (mostly luxury features). The Thinkpad marque (both before and after it was sold to Lenovo) was also known for great support. You could take a Thinkpad to any service center anywhere in the world and if the serial number indicated that it was within the warranty period it would be repaired without any need for paperwork. The Thinkpad service centers never had any issue with repairing a Thinkpad that lacked a hard drive just as long as the problem could be demonstrated. It was also possible to purchase an extended support contract at any time which covered all repairs including motherboard replacement. I know that not everyone had as good an experience as I had with Thinkpad support, but I’ve been using them since 1998 without problems – which is more than I can say for most hardware.

Do we really need great reliability from laptops nowadays? When I first got a laptop, hardly anyone I knew owned one; nowadays laptops are common. Having a copy of important documents on a USB stick is often a good substitute for a reliable laptop, and when you are in an environment where most people own laptops it’s usually not difficult to find someone who will let you use theirs for a while. I think that there is a place for a laptop with RAID-1 and ECC RAM. It’s a little-known fact that Thinkpads have a long history of supporting the replacement of a CD/DVD drive with a second hard drive (I don’t know if this is still supported), but AFAIK they have never supported ECC RAM.

My first Thinkpad cost $3,800. In modern money that would be something like $7,000 or more. For that price you really want something that’s well supported to protect the valuable asset. Sam complains about his new Thinkpad costing more than $1,000 and needing to be replaced after 2.5 years. Mobile phones start at about $600 for the more desirable models (i.e. anything that runs Pokemon Go) and the new Google Pixel phones range from $1,079 to $1,419. Phones aren’t really expected to be used for more than 2.5 years. Phones are usually impractical to service in any way, so for most of the people who read my blog (who tend to buy the more expensive hardware) they are pretty much a disposable item costing $600+. I previously wrote about a failed Nexus 5 and the financial calculations for self-insuring an expensive phone [3]. I think there’s no way that a company can provide extended support/warranty while making a profit and offering a deal that’s good value to customers who can afford to self-insure. The same applies for the $499 Lenovo Ideapad 310 and other cheaper Lenovo products. Thinkpads (the higher end of the Lenovo laptop range) are slightly more expensive than the most expensive phones, but they also offer more potential for the user to service them.

Features

My first Thinkpad was quite underpowered when compared to desktop PCs: it had 32M of RAM and could only be expanded to 96M, at a time when desktop PCs could be expanded to 128M easily and 256M with some expense. It had an 800*600 display when my desktop display was 1280*1024 (37% of the pixels). Nowadays laptops usually start at about 8G of RAM (with a small minority that have 4G) and laptop displays start at about 1366*768 resolution (51% of the pixels in a FullHD display). That compares well to desktop systems and is capable of running most things well. My current Thinkpad is a T420 with 8G of RAM and a 1600*900 display (69% of FullHD); it would be nice to have higher resolution, but this works well and it was going cheap when I needed a new laptop.

Modern Thinkpads don’t have some of the significant features that older ones had. The legendary Butterfly Keyboard is long gone, killed by the wide displays that economies of scale and 16:9 movies have forced upon us. It’s been a long time since Thinkpads had some of the highest resolution displays and since anyone really cared about it (you only need pixels to be small enough that you can’t see them).

For me one of the noteworthy features of the Thinkpads has been the great keyboard. Mechanical keys that feel like a desktop keyboard. It seems that most Thinkpads are getting the rubbery keyboard design made popular by Apple. I guess this is due to engineering factors in designing thin laptops and the fact that most users don’t care.

Matthew Garrett has blogged about the issue of Thinkpad storage configured as “RAID mode” without any option to disable it [4]. This is an annoyance (which incidentally has been worked around) and there are probably other annoyances like it. Designing hardware and an OS are both complex tasks. The interaction between Windows and the hardware is difficult to get right from both sides and the people who design the hardware often don’t think much about Linux support. It has always been this way, the early Thinkpads had no Linux support for special IBM features (like fan control) and support for ISA-PnP was patchy. It is disappointing that Lenovo doesn’t put a little extra effort into making sure that Linux works well on their hardware and this might be a reason for considering another brand.

Service Life

I bought my current Thinkpad T420 in October 2013 [5]. It’s more than 3 years old and has no problems, even though I bought it refurbished with a reduced warranty. This is probably the longest I’ve had a Thinkpad working well, which seems to be a data point against the claim that modern Thinkpads aren’t as good.

I bought a T61 in February 2010 [6]; it started working again (after mysteriously not working for a month in late 2013) and, apart from the battery lasting 5 minutes and a CPU cooling problem, it still works well. If that Thinkpad had cost $3,800 then I would have got it repaired, but as it cost $796 (plus the cost of a RAM upgrade) and a better one was available for $300, it wasn’t worth repairing.

In the period 1998 to 2010 I bought a 385XD, a 600E, a T21, a T43, and a T61 [6]. During that time I upgraded laptops 4 times in 12 years (I don’t have good records of when I bought each one), so my average Thinkpad has lasted 3 years. The first 2 were replaced to get better performance, the 3rd was replaced when an employer assigned me a Thinkpad (and sold it to me when I left), and 4 and 5 were replaced due to hardware problems that could not be fixed economically given the low cost of replacement.

Conclusion

Thinkpads possibly don’t have the benefits over other brands that they used to have. But in terms of providing value for the users it seems that they are much better than they used to be. Until I wrote this post I didn’t realise that I’ve broken a personal record for owning a laptop. It just keeps working and I hadn’t even bothered looking into the issue. For some devices I track how long I’ve owned them while thinking “can I justify replacing it yet”, but the T420 just does everything I want. The battery still lasts 2+ hours, which is a new record too; with every other Thinkpad I’ve owned the battery life has dropped to well under an hour within a year of purchase.

If I replaced this Thinkpad T420 now it would have cost me less than $100 per year (or $140 per year including the new SSD I installed this year); that’s about 3 times better than any previous laptop! I wouldn’t feel bad about replacing it, as I’ve definitely got great value for money from it. But I won’t replace it, as it’s doing everything I want.

I’ve just realised that by every measure (price, reliability, and ability to run all software I want to run) I’ve got the best Thinkpad I’ve ever had. Maybe it’s not like a Rolls-Royce, but I’d much rather drive a 2016 Tesla than a 1980 Rolls-Royce anyway.

06 November, 2016 05:45AM by etbe

November 05, 2016

Russ Allbery

podlators 4.09

This package contains the Pod::Man and Pod::Text formatters for Perl.

This is a bug-fix release that fixes a long-standing problem with Pod::Text on EBCDIC systems. The code to handle non-breaking spaces and soft hyphens hard-coded the ASCII code points and deleted the open bracket character on EBCDIC systems.

The fix here adopts the same fix that was done in Pod::Simple (but with backward compatibility to older versions of Pod::Simple).
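The underlying portability issue is easy to demonstrate outside of Perl: the no-break space and soft hyphen sit at code points 0xA0 and 0xAD in ASCII-superset encodings such as Latin-1, but on completely different bytes in EBCDIC (on code page 1047, for instance, byte 0xAD is the open bracket, which would explain the disappearing brackets). Here is a small Python sketch using the EBCDIC code page 037 codec; it is purely illustrative, as the real fix lives in the Perl source:

```python
# Illustrative sketch: why hard-coding byte values 0xA0/0xAD breaks on EBCDIC.
nbsp = "\N{NO-BREAK SPACE}"  # U+00A0
shy = "\N{SOFT HYPHEN}"      # U+00AD

# In Latin-1 (and other ASCII supersets) the encoded bytes equal the
# Unicode code points, so hard-coded 0xA0/0xAD happen to work:
assert nbsp.encode("latin-1") == b"\xa0"
assert shy.encode("latin-1") == b"\xad"

# On EBCDIC (code page 037 here) the same characters land on different
# bytes, so code that strips bytes 0xA0/0xAD there is actually mangling
# some other character entirely:
assert nbsp.encode("cp037") != b"\xa0"
assert shy.encode("cp037") != b"\xad"

print("EBCDIC cp037 bytes:", nbsp.encode("cp037"), shy.encode("cp037"))
```

The portable approach, as in the adopted fix, is to derive the code points from the native character set at runtime rather than baking in the ASCII values.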

I also made a bit more progress on modernizing the test suite. All of the Pod::Man tests now use a modern coding style, and most of them have been moved to separate snippets, which makes it easier to look at the intended input and output and to create new tests.

You can get the latest version from the podlators distribution page.

05 November, 2016 09:28PM