August 04, 2019

Thorsten Alteholz

My Debian Activities in July 2019

FTP master

After the release of Buster I could start with real work in NEW again. Even the temperature could not keep me from rejecting something. So this month I accepted 279 packages and rejected 15. The overall number of packages that got accepted was 308.

Debian LTS

This was my sixty-first month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.5 hours. During that time I did LTS uploads of:

  • [DLA 1849-1] zeromq3 security update for one CVE
  • [DLA 1833-2] bzip2 regression update for one patch
  • [DLA 1856-1] patch security update for one CVE
  • [DLA 1859-1] bind9 security update for one CVE
  • [DLA 1864-1] patch security update for one CVE

I am glad that I could finish the bind9 upload this month.
I also started to work on ruby-mini-magick and python2.7. Unfortunately, when building both packages (even without new patches), the test suite fails. So I first have to fix that as well.

Last but not least I did ten days of frontdesk duties. This was more than a week as everybody was at DebConf and I seemed to be the only one at home …

Debian ELTS

This month was the fourteenth ELTS month.

During my allocated time I uploaded:

  • ELA-132-2 of bzip2 for an upstream regression
  • ELA-144-1 of patch for one CVE
  • ELA-147-1 of patch for one CVE
  • ELA-148-1 of bind9 for one CVE

I also did some days of frontdesk duties.

Other stuff

This month I reuploaded some Go packages that would not migrate due to being binary uploads.

I also filed rm bugs to remove all alljoyn packages. Upstream is dead, no one is using this software anymore and bugs won’t be fixed.

04 August, 2019 06:30PM by alteholz


Emmanuel Kasper

Debian 9 -> 10 Upgrade report

I upgraded my laptop and VPS to Debian 10. As usual with Debian, everything worked out of the box and the necessary daemons restarted without problems.
I followed my usual upgrade approach, which involves upgrading a backup of the root FS of the server in a container to test the upgrade path, followed by a config file merge.

I had one major problem, though, connecting to my PHP-based DokuWiki website subsole.org, which displayed a rather unwelcoming screen after the upgrade:




I was a bit unsure at first, as I thought I would need to fight my way through the nine different config files of the dokuwiki Debian package in /etc/dokuwiki.

However, the issue was not so complicated: as the Apache PHP module was disabled, apache2 was outputting the source code of DokuWiki instead of executing it. As you can see, I don't PHP that often.

A simple
a2enmod php7.3
systemctl restart apache2


fixed the issue.

I understood the problem after noticing that a simple phpinfo() would not get executed by the server.
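For reference, a quick way to run that check is to drop a throwaway phpinfo() file into the web root and fetch it; the file name and the /var/www/html path below are just examples, not taken from my actual setup:

$ echo '<?php phpinfo(); ?>' > /var/www/html/info.php
$ curl -s http://localhost/info.php | head -n 1
<?php phpinfo(); ?>

While mod_php is disabled, the raw PHP source comes back instead of the rendered HTML page. Remember to delete the test file afterwards, as phpinfo() output is nothing to leave publicly reachable.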

I would have expected the upgrade to automatically enable the new php7.3 module, since the oldstable php7.0 apache module was removed as part of the upgrade, but I am not sure what the Debian policy would recommend here, or if I am missing something else.
If I can reproduce the issue in an upgrade scenario, I'll probably submit a bug to the php package maintainers.

04 August, 2019 03:23PM by Emmanuel Kasper ([email protected])


Debian GSoC Kotlin project blog

Packaging Dependencies Part 2; and plan on how to.

Mapping and packaging dependencies part 1.

Hey all, I had my exams during weeks 8 and 9 so I couldn't update my blog nor get much accomplished; but last week was completely free so I managed to finish packaging all the dependencies from packaging dependencies part 1. Since some of you may not remember how I planned to tackle packaging dependencies, I'll mention it here one more time.

"I split this task into two sub tasks that can be done independently. The 2 subtasks are as follows:
->part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
--->part1.1:package these dependencies
->part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build ; i.e try to recreate whatever is in it."

This is taken from my last blog, which was specifically on packaging dependencies in part 1. Now I am happy to tell all of you that packaging dependencies for part 1 is complete and all the needed packages are either in the NEW queue or already in the sid archive as of 04 August 2019. I would like to thank ebourg, seamlik and andrewsh for helping me with this.

How to build kotlin 1.3.30 after dependency packaging part 1 and design choices.

Before I go into how to build the project as it is now I'll briefly talk of some of the choices I made while packaging dependencies in part 1 and general things you should know.

Two dependencies in part 1 were Jcabi-aether and sonatype-aether; both of these are incompatible with maven-3 and were only used in one single file in the entire dist task graph. Considering the time it would take to migrate these dependencies to maven-3, I chose to patch out the one file that needed both of them; that change is denoted by this [commit](https://salsa.debian.org/m36-guest/kotlin-1.3.30/commit/cb298ba550ca9f727ff66e4ffca0cb73e9ee03f1).

Also it must be noted that so far we are only trying to build the dist task, which only builds the basic Kotlin compiler; it doesn't build the maven artifacts with poms, nor does it build the kotlin-gradle-plugin. Those things are built and installed in the local maven repository (the .m2 directory in the source project when you invoke debuild) using the install task, which I am planning to tackle once we successfully finish building the dist task. Invoking the install task in our master as of Aug 04 2019 will build and install all available maven artifacts into the local maven repo, but this again will not include kotlin-gradle-plugin or the like, since I have removed those subprojects as they aren't needed by the dist task. Keeping them would mean that I have to convert and patch them to Groovy if they are written in .kts, since they are evaluated during the initialization phase.

Now we are ready to build the project. I have written a simple makefile which copies all the needed bootstrap jars and prebuilts to their proper places. All you need to do to build the project is:

1. git clone https://salsa.debian.org/m36-guest/kotlin-1.3.30.git
2. cd kotlin-1.3.30
3. git checkout buildv1
4. debian/pseudoBootstrap bootstrap
5. debuild -b -rfakeroot -us -uc

Note that we only need to do steps 1 through 4 the very first time you build this project. Every time after that, just invoke step 5.

Packaging dependencies part 2.

Now packaging dependencies part 2 involves packaging the dependencies in :buildSrc:prepare-deps:intellij-sdk:build. This is the folder that is taking up the most space in Kotlin-1.3.30-temp-requirements. The sole purpose of this task is to reduce the jars in this folder and substitute them with jars from the Debian environment. I have managed to map out the jars needed from these for the dist task graph; they are:

```
saif@Hope:/srv/chroot/KotlinCh/home/kotlin/kotlin-1.3.30-debian-maintained/buildSrc/prepare-deps/intellij-sdk/repo/kotlin.build.custom.deps/183.5153.4$ ls -R
.:
intellij-core  intellij-core.ivy.xml  intellijUltimate  intellijUltimate.ivy.xml  jps-standalone  jps-standalone.ivy.xml

./intellij-core:
asm-all-7.0.jar  intellij-core.jar  java-compatibility-1.0.1.jar

./intellijUltimate:
lib

./intellijUltimate/lib:
asm-all-7.0.jar  guava-25.1-jre.jar  jna.jar           log4j.jar      openapi.jar    picocontainer-1.2.jar  platform-impl.jar   trove4j.jar
extensions.jar   jdom.jar            jna-platform.jar  lz4-1.3.0.jar  oro-2.0.8.jar  platform-api.jar       streamex-0.6.7.jar  util.jar

./jps-standalone:
jps-model.jar
```

This folder is treated as an Ant repository and the code for that is here. Build.gradle files use this via methods like this, which tell the project to take only the needed jars from the collection. I am planning on replacing this with just plain old Maven repository resolution using a format like compile(groupId:artifactId:version), but we will need the jars to be in our system anyway; at least now we know that this particular file structure can be avoided.

Please note that the jars I listed above are only needed for the dist task; the ones needed for the other subprojects in the original install task can still be found here.

Like I did for packaging part 1, I will post all the needed packages with their source links here in this blog.

So if any of you kind souls want to help me out please kindly take on any of these and package them.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates weekly.

04 August, 2019 11:52AM by Saif Abdul Cassim


Mike Gabriel

MATE 1.22 landed in Debian unstable

Last week, I did a bundle upload of (nearly) all MATE 1.22 related components to Debian unstable. Packages should have been built by now for most of the 24 architectures supported by Debian (I just fixed an FTBFS of mate-settings-daemon on non-Linux host archs). The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Credits

Again a big thanks goes to the packaging team and also to the upstream maintainers of the MATE desktop environment. Martin Wimpress and I worked on most parts of the packaging for the 1.22 release series this time. On the upstream side, a big thanks goes to all developers, esp. Vlad Orlov and Wolfgang Ulbrich for fixing / reviewing many many issues / merge requests. Good work, folks!!! plus Big Thanks!!!

References


light+love,
Mike Gabriel (aka sunweaver)

04 August, 2019 10:55AM by sunweaver

August 03, 2019


Andy Simpkins

Debconf19: Curitiba, Brazil – AV Setup

I write this on Monday whilst sat in the airport in São Paulo, awaiting my onward flight back to the UK and the fun of the change of personnel in Downing Street – something I have fortunately been able to ignore whilst at DebConf.  [Edit: and finishing writing the Saturday after getting home, after much sleep]

Arriving on the first Sunday of DebCamp meant that I was one of the first people to arrive; however most of the video team were either arriving about the same time or had landed before me.  We spent most of our daytime time during DebCamp setting up for the following weeks conference.

Step one was getting a network operational.  We had been offered space for our servers in a university machine room, but chose instead to occupy the two ‘green’ rooms below the main auditorium stage, using one as a makeshift V/NOC and the other as our machine room, as this enabled us continuous and easy access [0] to our servers whilst separating us from the fan noise.  I ran additional network cable between the back of the stage and our makeshift machine room; routing the cable around the back of the stage and into the ceiling void to just outside the V/NOC was relatively simple.  Routing into the V/NOC needed a bit of help to get the cable through a small gap we found where some other cables ran through the ‘fire break’.  Getting a cable between the two ‘green rooms’ however was a PITA.  Many people, including myself, eventually gave up, before I finally returned to the problem and, with the aid of a fully extended server rail gaffer-taped to a clothing rail to make a 4m long pole, was eventually able to deliver a cable through the 3 floor supports / fire breaks that separated the two rooms (and before someone suggests I should have used a ‘fish’ wire, that was what we tried first).  The university were providing us with the backbone network, but it did take a couple of meetings to get our video network in its own separate VLAN and get it to pass traffic unmolested between nodes.

The final network setup (for video that is – the conference was piggy-backing on the university WiFi network and there was also a DebConf network in the National Inn) was to make live the fibre links that had been installed prior to our arrival.  Two links had been pulled through so that we could join the ‘Video Conferencia’ room and the ‘Front Desk’ to the rest of the university network; however, when we came to commission them we discovered that the wrong media converters had been supplied: they should have been for single-mode fibre, but multi-mode converters had been delivered.  Nothing that the university IT department couldn’t solve, and indeed they did as soon as we pointed out the mistake.  They provided us with replacement media converters capable of driving a signal down *both* single- and multi-mode fibre, something I have not seen before.

For the rest of the week Paddatrapper and myself spent most of our time running cables and setting up the three talk rooms that were to be filmed.  Phls had been able to provide us with details of the venue’s AV system AND scale plans of the three talk rooms; this, along with the photos provided by the local team and Tumbleweed’s visit to the site, enabled us to plan the cable runs right down to the location of power sockets.

I am going to add scale plans & photos to the things that we request for all future DebConfs.  They made planning and setup so much easier and faster.  Of course we still ended up running more cables than we originally expected – we ran Ethernet front to back in all three rooms when we originally intended to only do this in Video Conferencia (the small BoF room).  This was because it turned out that the sockets at different ends of the room were on differing packet switches that in turn fed into the university backbone.  We were informed that the backbone is 1Gb/s, which meant that the video LAN would have consumed the entire bandwidth of the backbone with nothing left over.

We have 200Mb/s streams from the OPSIS frame grabbers and a second 200Mb/s output stream from each room.  That equates to exactly 1Gb/s (the Video Conferencia BoF room is small enough that we were always going to run a front/back cable there) and that is before any backups of recordings to our server.  As it turns out that wasn’t true, but by then we had already run the cables and got things working…

I won’t blog about the software setup of the servers, our back-end CDN or the review process – this is not my area of expertise.  You need to thank Olasd, Tumbleweed & Ivo for the on-site system setup and Walter for the review process.  Actually, there are also Carlfk, Ubec, Valhalla and I am sure numerous other people that I am too tired to remember; I apologise for forgetting you…

So back to physical setup.  The main auditorium was operational.  I had re-patched the mixing desk to give a setup as close as possible in all three talk rooms – we are most interested in audio for the stream/recording and so use the main mix output for this, and move the room PA onto a sub-group output.  Unusually for a DebConf, I found that I needed to ensure that there *was* a ground connection at the desk for all output feeds – it appears that there was NO earth in the entire auditorium; well, there was at some point back in time, but it had been systematically removed, either by cutting off the earth pin on power plugs or, unfortunately for us, by cutting and removing cables from any bonding points, behind sockets etc.  Done, probably, because RCDs kept tripping and clearly the problem is that there is an earth present to leak into and not that there is a leak in the first place, or just long cable runs into inductive loads that mean that a different ‘trip curve’ needed to be selected <sigh>.

We still had significant mains hum on the PA system (slightly less than was present before I started the room setup, so nothing I had done).  The venue AV team pointed out that they had a magnetic coupler AND an audio DSP unit in front of the PA amplifier stack – telling me that this was to reduce the hum.  Fortunately for us the venue had 4 equalisers that I could use, one for each of the mics, so I was able to knock out 60Hz and 120Hz and dip the higher harmonics; this again made an improvement.  Apparently we were getting the best results in living memory of the on-site AV team, so at this point I stopped tweaking the setup – “it was good enough”, we could live with the remaining hum.

The other two talk rooms were pretty much the same setup, only the rooms are smaller.  The exception being that whilst we do have a small portable PA in the Video Conferencia room, we only use it for audio from the presenter's laptop – the room was so small there was no point in amplifying presenters…

Right, I could now move on to ‘lighting’.  We were supposed to use the flood ‘work’ lights above the stage, but quite a few of the halogen lamps were blown.  This meant that there were far too many ‘dark’ patches along the stage.  Additionally, the colour temperatures of the different work lights were all over the place, and this would cause havoc with white balance; still, we could have lived with this…  I asked about getting the lamps replaced.  Initially I was told no, but once I pointed out the problem to a more senior member of staff they agreed that the lamps could be replaced and that it would be done the following day.  It wasn’t.  I offered that we could replace the lamps, but was then told that they would now be doing this as part of a service in a few weeks’ time.  I was however told that instead, if I was prepared to rig them myself, we could use the stage lights on the dimmers.  Win!  This would have been my preferred option all along and I suspect we were only offered this having started to build a reasonable working relationship with the site AV team.  I was able to sign out a bunch of lamps from the stores and rig them as I saw fit.  I was given a large wooden step ladder, and shown how to access the catwalk.  I could now rig lights where I wanted them.

Two overhead floods and two spots were used to light the lectern from different angles.  Three overhead floods and three focused cans were used to light the discussion table.  I also hung two forward-facing spots to illuminate someone stood at the question mic, and finally 4 cans (2 focus cans and a pair of 1kW par cans sharing the same plug) to add some light to the front 5 or 6 rows of the audience.  The venue AV team repaired the DMX cable to the lighting dimmers and we were good to go… well, just as soon as I had worked out the DMX addressing / cable patching at the dimmer banks, and then there was a short time whilst I read the instructions for the desk – enough to apply ‘soft patches’ so I could allocate a fader to each dimmer channel we were using.  I then read the instructions a bit further and came back the following day and programmed appropriate scenes so that the table could be lit using one ‘slider’, the lectern by another and so on.  JMW came back later in the week and updated the program again to add a timed fade up or down, and we also set a maximum level on the audience lights to stop us from ‘blinding’ people in the first couple of rows (we set the maximum value of that particular scene to be 20% available intensity).

Lighting in the mini auditorium was from simple overhead ‘domestic’ lamps; I still needed to get some bulbs replaced, and then move / adjust them to best light a speaker stood at the lectern or a discussion panel sat at the table.  Finally, we had no control of lighting in Video Conferencia (about normal for a DebConf).

Later in the week we revisited the hum problem again.  We confirmed that the hum was no longer being emitted out of the desk, so it must have been on the cable run to the stack or in the stack itself.  The hum was still annoying and Kyle wanted to confirm that the DSP at the top of the amp stack was correctly set up – could we improve things?  It took a little persuasion but eventually we were granted permission, and the password, to access the DSP.  The DSP had not been configured properly at all.  Kyle applied a 60Hz notch filter, and this made some difference.  I suggested a comb filter, which Kyle then applied for 60Hz and 5 or 6 orders of harmonics, and that did the trick (thanks Kyle – I wouldn’t have had a clue how to drive the DSP).  There was no longer any perceivable noise coming out of the left hand speakers, but there was still a noticeable, but much lower, hum from the right.  We removed the input cable to the amp stack and yes, the hum was still there, so we were picking up noise between the amps and the speakers!  A quick check with the lighting dimmers turned off and the noise dropped again.  I started chasing the right hand speaker cables – they run up and over the stage along the catwalk, in the same bundle as all the unearthed lighting AND permanent power cables.  We were inducing mains noise directly onto the speaker cables.  The only fix for this would be to properly screen AND separate the speaker feed cables.  Better yet, send a balanced audio feed, separated from the power cables, to the right hand side of the stage and move the right hand amplifiers to that side of the stage.  Nothing we could do – but something that we could point out to the venue AV team, who strangely hadn’t considered this before…


[0] Where continuous access meant “whilst we had access to the site” (the whole campus is closed overnight)

03 August, 2019 06:37PM by andy

Jonas Meurer

debian lts report 2019.07

Debian LTS report for July 2019

This month I was allocated 17 hours. I also had 2 hours left over from June, which makes a total of 19 hours. I spent all of them on the following tasks/issues.

  • DLA-1843-1: Fixed CVE-2019-10162 and CVE-2019-10163 in pdns.
  • DLA-1852-1: Fixed CVE-2019-9948 in python3.4. Also found, debugged and fixed several further regressions in the former CVE-2019-9740 patches.

  • Improved testing of LTS uploads: We had some internal discussion in the Debian LTS team on how to improve the overall quality of LTS security uploads by doing more (semi-)automated testing of the packages before uploading them to jessie-security. I tried to summarize the internal discussion, bringing it to the public debian-lts mailinglist. I also did a lot of testing and worked on Jessie support in Salsa-CI. Now that salsa-ci-team/images MR !74 and ci-team/debci MR !89 got merged, we only have to wait for a new debci release in order to enable autopkgtest Jessie support in Salsa-CI. Afterwards, we can use the Salsa-CI pipeline for (semi-)automatic testing of packages targeted at jessie-security.

03 August, 2019 03:44PM


Dirk Eddelbuettel

RcppCCTZ 0.2.6

A shiny new release 0.2.6 of RcppCCTZ is now at CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages which remains less than ideal.

This version updates to CCTZ release 2.3 from April, plus changes accrued since then. It also switches to tinytest which, among other benefits, permits continued testing of the installed package.

Changes in version 0.2.6 (2019-08-03)

  • Synchronized with upstream CCTZ release 2.3 plus commits accrued since then (Dirk in #30).

  • The package now uses tinytest for unit tests (Dirk in #31).

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

03 August, 2019 12:45PM


Bits from Debian

New Debian Developers and Maintainers (May and June 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Philippe Mengual (jpmengual)
  • Taowa Munene-Tardif (taowa)
  • Georg Faerber (georg)
  • Kyle Robbertze (paddatrapper)
  • Andy Li (andyli)
  • Michal Arbet (kevko)
  • Sruthi Chandran (srud)
  • Alban Vidal (zordhak)
  • Denis Briand (denis)
  • Jakob Haufe (sur5r)

The following contributors were added as Debian Maintainers in the last two months:

  • Bobby de Vos
  • Jongmin Kim
  • Bastian Germann
  • Francesco Poli

Congratulations!

03 August, 2019 08:00AM by Jean-Pierre Giraud

Elana Hashman

My favourite bash alias for git

I review a lot of code. A lot. And an important part of that process is getting to experiment with said code so I can make sure it actually works. As such, I find myself with a frequent need to locally run code from a submitted patch.

So how does one fetch that code? Long ago, when I was a new maintainer, I would add the remote repository I was reviewing to my local repo so I could fetch that whole fork and target branch. Once downloaded, I could play around with that on my local machine. But this was a lot of overhead! There was a lot of clicking, copying, and pasting involved in order to figure out the clone URL for the remote repo, and a bunch of commands to set it up. It felt like a lot of toil that could be easily automated, but I didn't know a better way.

One day, when a coworker of mine saw me struggling with this, he showed me the better way.

Turns out, most hosted git repos with pull request functionality will let you pull down a read-only version of the changeset from the upstream fork using git, meaning that you don't have to set up additional remote tracking to fetch and run the patch or use platform-specific HTTP APIs.

Using GitHub's git references for pull requests

I first learned how to do this on GitHub.

GitHub maintains a copy of pull requests against a particular repo at the pull/NUM/head reference. (More documentation on refs here.) This means that if you have set up a remote called origin and someone submits a pull request #123 against that repository, you can fetch the code by running

$ git fetch origin pull/123/head
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 4 (delta 3), reused 3 (delta 3), pack-reused 1
Unpacking objects: 100% (4/4), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

Woah.

Using pull request references for CI

As a quick aside: This is also handy if you want to write your own CI scripts against users' pull requests. Even better—on GitHub, you can fetch a tree with the pull request already merged onto the top of the current master branch by fetching pull/NUM/merge. (I'm not sure if this is officially documented somewhere, and I don't believe it's widely supported by other hosted git platforms.)
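For example, a CI job might check out the pre-merged tree with something like the following (the remote name and PR number are placeholders, reusing #123 from above):

$ git fetch origin pull/123/merge
$ git checkout FETCH_HEAD    # tree is the current master with PR #123 merged on top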

If you also specify the --depth flag in your fetch command, you can fetch code even faster by limiting how much upstream history you download. It doesn't make much difference on small repos, but it is a big deal on large projects:

elana@silverpine:/tmp$ time git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Enumerating objects: 295, done.
remote: Counting objects: 100% (295/295), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 980446 (delta 148), reused 136 (delta 128), pack-reused 980151
Receiving objects: 100% (980446/980446), 648.95 MiB | 12.47 MiB/s, done.
Resolving deltas: 100% (686795/686795), done.
Checking out files: 100% (20279/20279), done.

real    1m31.035s
user    1m17.856s
sys     0m7.782s

elana@silverpine:/tmp$ time git clone --depth=10 https://github.com/kubernetes/kubernetes.git kubernetes-shallow
Cloning into 'kubernetes-shallow'...
remote: Enumerating objects: 34305, done.
remote: Counting objects: 100% (34305/34305), done.
remote: Compressing objects: 100% (22976/22976), done.
remote: Total 34305 (delta 17247), reused 19060 (delta 10567), pack-reused 0
Receiving objects: 100% (34305/34305), 34.22 MiB | 10.25 MiB/s, done.
Resolving deltas: 100% (17247/17247), done.

real    0m31.495s
user    0m3.941s
sys     0m1.228s

Writing the pull alias

So how can one harness all this as a bash alias? It takes just a little bit of code:

pull() {
    git fetch "$1" pull/"$2"/head && git checkout FETCH_HEAD
}

alias pull='pull'

Then I can check out a PR locally with the short command pull <remote> <num>:

$ pull origin 123
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 5 (delta 4), reused 4 (delta 4), pack-reused 1
Unpacking objects: 100% (5/5), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

You can even add your own commits, save them on a local branch, and push that to your collaborator's repository to build on their PR if you're so inclined... but let's not get too ahead of ourselves.

Changeset references on other git platforms

These pull request refs are not a special feature of git itself, but rather a per-platform implementation detail using an arbitrary git ref format. As far as I'm aware, most major git hosting platforms implement this, but they all use slightly different ref names.

GitLab

At my last job I needed to figure out how to make this work with GitLab in order to set up CI pipelines with our Jenkins instance. Debian's Salsa platform also runs GitLab.

GitLab calls user-submitted changesets "merge requests" and that language is reflected here:

git fetch origin merge-requests/NUM/head

They also have some nifty documentation for adding a git alias to fetch these references. They do so in a way that creates a local branch automatically, if that's something you'd like—personally, I check out so many patches that I would not be able to deal with cleaning up all the extra branch mess!
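If you would rather keep the exact FETCH_HEAD workflow from above instead of creating local branches, a merge-request variant of the earlier bash function might look like this (an untested sketch; the name mrpull is just my own choice):

mrpull() {
    git fetch "$1" merge-requests/"$2"/head && git checkout FETCH_HEAD
}

# usage: mrpull origin 123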

BitBucket

Bad news: as of the time of publication, this isn't supported on bitbucket.org, even though a request for this feature has been open for seven years. (BitBucket Server supports this feature, but that's standalone and proprietary, so I won't bother including it in this post.)

Gitea

While I can't find any official documentation for it, I tested and confirmed that Gitea uses the same ref names for pull requests as GitHub, and thus you can use the same bash/git aliases on a Gitea repo as those you set up for GitHub.

Saved you a click?

Hope you found this guide handy. No more excuses: now that it's just one short command away, go forth and run your colleagues' code locally!

03 August, 2019 04:00AM by Elana Hashman

August 02, 2019

Sven Hoexter

From 30 to 230 docker containers per host

I could not find much information on the interwebs about how many containers you can run per host. So here are mine, and the issues we ran into along the way.

The Beginning

In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled with HPe DL360G10 with 24 physical cores and 128GB of RAM.

THP and Defragmentation

When we moved to the dedicated bare metal hosts we were running Debian/stretch + Linux from stretch-backports. At that time Linux 4.17. These machines were sized to run 95+ containers. Once we were above 55 containers we started to see occasional hiccups. First occurrences lasted only for 20s, then 2min, and suddenly some lasted for around 20min. Our system metrics, as collected by prometheus-node-exporter, could only provide vague hints. The metric export did work, so processes were executed. But the CPU usage and subsequently the network throughput went down to close to zero.

I've seen similar hiccups in the past with Postgresql running on a host with THP (Transparent Huge Pages) enabled. So a good bet was to look into that area. By default /sys/kernel/mm/transparent_hugepage/enabled is set to always, so THP are enabled. We stick to that, but changed the defrag mode /sys/kernel/mm/transparent_hugepage/defrag (since Linux 4.12) from the default madvise to defer+madvise.

This moves page reclaims and compaction for pages which were not allocated with madvise to the background, which was enough to get rid of those hiccups. See also the upstream documentation. Since there is no sysctl like facility to adjust sysfs values, we're using the sysfsutils package to adjust this setting after every reboot.
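With sysfsutils that boils down to a one-line entry; the file name below is our own choice and the attribute path is relative to /sys (a sketch of what such a snippet might look like):

# /etc/sysfs.d/thp.conf -- applied by sysfsutils at boot
kernel/mm/transparent_hugepage/defrag = defer+madvise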

Conntrack Table

Since the default docker networking setup involves a shitload of NAT, it shouldn't be surprising that nf_conntrack will start to drop packets at some point. We're currently fine with setting the sysctl tunable

net.netfilter.nf_conntrack_max = 524288

but that's very much up to your network setup and traffic characteristics.

Inotify Watches and Cadvisor

Along the way cadvisor refused to start at one point. Turned out that the default settings (again sysctl tunables) for

fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

are too low. We increased to

fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768

Ephemeral Ports

We didn't run into an issue with running out of ephemeral ports directly, but dockerd has a constant issue of keeping track of ports in use and we already saw collisions appear regularly. Very unscientifically we set the sysctl

net.ipv4.ip_local_port_range = 11000 60999

NOFILE limits and Nomad

Initially we restricted nomad (via systemd) with

LimitNOFILE=65536

which apparently is not enough for our setup once we crossed the 100-containers-per-host mark. The error message we saw was hard to understand, though:

[ERROR] client.alloc_runner.task_runner: prestart failed: alloc_id=93c6b94b-e122-30ba-7250-1050e0107f4d task=mycontainer error="prestart hook "logmon" failed: Unrecognized remote plugin message:

This was solved by following the official recommendation and setting

LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

The main lead here was looking into the "hashicorp/go-plugin" library source, and understanding that they try to read the stdout of some other process, which sounded roughly like someone would have to open a file at some point.

Running out of PIDs

Once we were close to 200 containers per host (test environment with 256GB RAM per host), we started to experience failures of all kinds because processes could no longer be forked. Since that was also true for completely fresh user sessions, it was clear that we were hitting some global limitation and not something bound to a session via a PAM module.

It's important to understand that most of our workloads are written in Java, and a lot of the other software we use is written in Go. So we have a lot of threads, which in Linux are presented as "Lightweight Processes" (LWP), and every LWP still exists with a distinct PID out of the global PID space.

With /proc/sys/kernel/pid_max defaulting to 32768 we actually ran out of PIDs. We increased that limit vastly, probably way beyond what we currently need, to 500000. The actual limit on 64-bit systems is 2^22 according to man 5 proc.
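For completeness, all of the sysctl tunables mentioned in this post could be persisted in a single drop-in file; the file name is arbitrary and the values are simply the ones we settled on, so adjust them to your own workload:

# /etc/sysctl.d/90-container-hosts.conf
net.netfilter.nf_conntrack_max = 524288
fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768
net.ipv4.ip_local_port_range = 11000 60999
kernel.pid_max = 500000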

02 August, 2019 02:44PM


Vincent Bernat

Securing BGP on the host with origin validation

An increasingly popular design for a datacenter network is BGP on the host: each host ships with a BGP daemon to advertise the IPs it handles and receives the routes to its fellow servers. Compared to a L2-based design, it is very scalable, resilient, cross-vendor and safe to operate.1 Take a look at “L3 routing to the hypervisor with BGP” for a usage example.

Spine-leaf fabric with two spine routers, six leaf routers and nine physical hosts. All links have a BGP session established over them. Some of the servers have a speech balloon showing the IP prefix they want to handle.
BGP on the host with a spine-leaf IP fabric. A BGP session is established over each link and each host advertises its own IP prefixes.

While routing on the host eliminates the security problems related to Ethernet networks, a server may announce any IP prefix. In the above picture, two of them are announcing 2001:db8:cc::/64. This could be a legit use of anycast or a prefix hijack. BGP offers several solutions to improve this aspect and one of them is to reuse the features around the RPKI.

Short introduction to the RPKI

On the Internet, BGP is mostly relying on trust. This contributes to various incidents due to operator errors, like the one that affected Cloudflare a few months ago, or to malicious attackers, like the hijack of Amazon DNS to steal cryptocurrency wallets. RFC 7454 explains the best practices to avoid such issues.

IP addresses are allocated by five Regional Internet Registries (RIR). Each of them maintains a database of the assigned Internet resources, notably the IP addresses and the associated AS numbers. These databases may not be totally reliable but are widely used to build ACLs to ensure peers only announce the prefixes they are expected to. Here is an example of ACLs generated by bgpq3 when peering directly with Apple:2

$ bgpq3 -l v6-IMPORT-APPLE -6 -R 48 -m 48 -A -J -E AS-APPLE
policy-options {
 policy-statement v6-IMPORT-APPLE {
replace:
  from {
    route-filter 2403:300::/32 upto /48;
    route-filter 2620:0:1b00::/47 prefix-length-range /48-/48;
    route-filter 2620:0:1b02::/48 exact;
    route-filter 2620:0:1b04::/47 prefix-length-range /48-/48;
    route-filter 2620:149::/32 upto /48;
    route-filter 2a01:b740::/32 upto /48;
    route-filter 2a01:b747::/32 upto /48;
  }
 }
}

The RPKI (RFC 6480) adds public-key cryptography on top of it to sign the authorization for an AS to be the origin of an IP prefix. Such record is a Route Origination Authorization (ROA). You can browse the databases of these ROAs through the RIPE’s RPKI Validator instance:

Screenshot from an instance of RPKI validator showing the validity of 85.190.88.0/21 for AS 64476
RPKI validator shows one ROA for 85.190.88.0/21

BGP daemons do not have to download the databases or to check digital signatures to validate the received prefixes. Instead, they offload these tasks to a local RPKI validator implementing the “RPKI-to-Router Protocol” (RTR, RFC 6810).

For more details, have a look at “RPKI and BGP: our path to securing Internet Routing.”

Using origin validation in the datacenter

While it is possible to create our own RPKI for use inside the datacenter, we can take a shortcut and use a validator implementing RTR, like GoRTR, and accepting another source of truth. Let’s work on the following topology:

Spine-leaf fabric two spine routers, six leaf routers and nine physical hosts. All links have a BGP session established over them. Three of the physical hosts are validators and RTR sessions are established between them and the top-of-the-rack routers—except their own top-of-the-racks.
BGP on the host with prefix validation using RTR. Each server has its own AS number. The leaf routers establish RTR sessions to the validators.

Let's assume we have a place to maintain a mapping between the private AS numbers used by each host and the allowed prefixes:3

ASN        Allowed prefixes
AS 65005   2001:db8:aa::/64
AS 65006   2001:db8:bb::/64, 2001:db8:11::/64
AS 65007   2001:db8:cc::/64
AS 65008   2001:db8:dd::/64
AS 65009   2001:db8:ee::/64, 2001:db8:11::/64
AS 65010   2001:db8:ff::/64

From this table, we build a JSON file for GoRTR, assuming each host can announce the provided prefixes or longer ones (like 2001:db8:aa::42:d9ff:fefc:287a/128 for AS 65005):

{
  "roas": [
    {
      "prefix": "2001:db8:aa::/64",
      "maxLength": 128,
      "asn": "AS65005"
    }, {
      "…": "…"
    }, {
      "prefix": "2001:db8:ff::/64",
      "maxLength": 128,
      "asn": "AS65010"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65006"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65009"
    }
  ]
}

This file is deployed to all validators and served by a web server. GoRTR is configured to fetch it and update it every 10 minutes:

$ gortr -refresh=600 \
        -verify=false -checktime=false \
        -cache=http://127.0.0.1/rpki.json
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

The refresh time could be lowered but GoRTR can be notified of an update using the SIGHUP signal. Clients are immediately notified of the change.
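A minimal way to trigger that reload, assuming a single gortr process running under that name on the validator:

$ kill -HUP $(pidof gortr)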

The next step is to configure the leaf routers to validate the received prefixes using the farm of validators. Most vendors support RTR:

Platform        Over TCP?   Over SSH?
Juniper JunOS   ✔️
Cisco IOS XR    ✔️          ✔️
Cisco IOS XE    ✔️
Cisco IOS       ✔️
Arista EOS
BIRD            ✔️          ✔️
FRR             ✔️          ✔️
GoBGP           ✔️

Configuring JunOS

JunOS only supports plain-text TCP. First, let’s configure the connections to the validation servers:

routing-options {
    validation {
        group RPKI {
            session validator1 {
                hold-time 60;         # session is considered down after 1 minute
                record-lifetime 3600; # cache is kept for 1 hour
                refresh-time 30;      # cache is refreshed every 30 seconds
                port 8282;
            }
            session validator2 { /* OMITTED */ }
            session validator3 { /* OMITTED */ }
        }
    }
}

By default, at most two sessions are randomly established at the same time. This provides a good way to load-balance them among the validators while maintaining good availability. The second step is to define the policy for route validation:

policy-options {
    policy-statement ACCEPT-VALID {
        term valid {
            from {
                protocol bgp;
                validation-database valid;
            }
            then {
                validation-state valid;
                accept;
            }
        }
        term invalid {
            from {
                protocol bgp;
                validation-database invalid;
            }
            then {
                validation-state invalid;
                reject;
            }
        }
    }
    policy-statement REJECT-ALL {
        then reject;
    }
}

The policy statement ACCEPT-VALID turns the validation state of a prefix from unknown to valid if the ROA database says it is valid. It also accepts the route. If the prefix is invalid, the prefix is marked as such and rejected. We have also prepared a REJECT-ALL statement to reject everything else, notably unknown prefixes.

A ROA only certifies the origin of a prefix. A malicious actor can therefore prepend the expected AS number to the AS path to circumvent the validation. For example, AS 65007 could announce 2001:db8:dd::/64, a prefix allocated to AS 65008, by advertising it with the AS path 65007 65008. To avoid that, we define an additional policy statement to reject AS paths with more than one AS:

policy-options {
    as-path EXACTLY-ONE-ASN "^.$";
    policy-statement ONLY-DIRECTLY-CONNECTED {
        term exactly-one-asn {
            from {
                protocol bgp;
                as-path EXACTLY-ONE-ASN;
            }
            then next policy;
        }
        then reject;
    }
}

The last step is to configure the BGP sessions:

protocols {
    bgp {
        group HOSTS {
            local-as 65100;
            type external;
            # export [ … ];
            import [ ONLY-DIRECTLY-CONNECTED ACCEPT-VALID REJECT-ALL ];
            enforce-first-as;
            neighbor 2001:db8:42::a10 {
                peer-as 65005;
            }
            neighbor 2001:db8:42::a12 {
                peer-as 65006;
            }
            neighbor 2001:db8:42::a14 {
                peer-as 65007;
            }
        }
    }
}

The import policy rejects any AS path longer than one AS, accepts any validated prefix and rejects everything else. The enforce-first-as directive is also pretty important: it ensures the first (and, here, only) AS in the AS path matches the peer AS. Without it, a malicious neighbor could inject a prefix using an AS different than its own, defeating our purpose.4

Let’s check the state of the RTR sessions and the database:

> show validation session
Session                                  State   Flaps     Uptime #IPv4/IPv6 records
2001:db8:4242::10                        Up          0   00:16:09 0/9
2001:db8:4242::11                        Up          0   00:16:07 0/9
2001:db8:4242::12                        Connect     0            0/0

> show validation database
RV database for instance master

Prefix                 Origin-AS Session                                 State   Mismatch
2001:db8:11::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::10                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::11                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::10                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::11                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::10                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::11                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::10                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::11                       valid

  IPv4 records: 0
  IPv6 records: 18

Here is an example of accepted route:

> show route protocol bgp table inet6 extensive all
inet6.0: 11 destinations, 11 routes (8 active, 0 holddown, 3 hidden)
2001:db8:bb::42/128 (1 entry, 0 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd050470
                Next-hop reference count: 4
                Source: 2001:db8:42::a12
                Next hop: 2001:db8:42::a12 via em1.0, selected
                Session Id: 0x0
                State: <Active NotInstall Ext>
                Local AS: 65006 Peer AS: 65000
                Age: 12:11
                Validation State: valid
                Task: BGP_65000.2001:db8:42::a12+179
                AS path: 65006 I
                Accepted
                Localpref: 100
                Router ID: 1.1.1.1

A rejected route would be similar with the reason “rejected by import policy” shown in the details and the validation state would be invalid.

Configuring BIRD

BIRD supports both plain-text TCP and SSH. Let’s configure it to use SSH. We need to generate keypairs for both the leaf router and the validators (they can all share the same keypair). We also have to create a known_hosts file for BIRD:

(validatorX)$ ssh-keygen -qN "" -t rsa -f /etc/gortr/ssh_key
(validatorX)$ echo -n "validatorX:8283 " ; \
              cat /etc/bird/ssh_key_rtr.pub
validatorX:8283 ssh-rsa AAAAB3[…]Rk5TW0=
(leaf1)$ ssh-keygen -qN "" -t rsa -f /etc/bird/ssh_key
(leaf1)$ echo 'validator1:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ echo 'validator2:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ cat /etc/bird/ssh_key.pub
ssh-rsa AAAAB3[…]byQ7s=
(validatorX)$ echo 'ssh-rsa AAAAB3[…]byQ7s=' >> /etc/gortr/authorized_keys

GoRTR needs additional flags to allow connections over SSH:

$ gortr -refresh=600 -verify=false -checktime=false \
      -cache=http://127.0.0.1/rpki.json \
      -ssh.bind=:8283 \
      -ssh.key=/etc/gortr/ssh_key \
      -ssh.method.key=true \
      -ssh.auth.user=rpki \
      -ssh.auth.key.file=/etc/gortr/authorized_keys
INFO[0000] Enabling ssh with the following authentications: password=false, key=true
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

Then, we can configure BIRD to use these RTR servers:

roa6 table ROA6;
template rpki VALIDATOR {
   roa6 { table ROA6; };
   transport ssh {
     user "rpki";
     remote public key "/etc/bird/known_hosts";
     bird private key "/etc/bird/ssh_key";
   };
   refresh keep 30;
   retry keep 30;
   expire keep 3600;
}
protocol rpki VALIDATOR1 from VALIDATOR {
   remote validator1 port 8283;
}
protocol rpki VALIDATOR2 from VALIDATOR {
   remote validator2 port 8283;
}

Unlike JunOS, BIRD doesn’t have a feature to only use a subset of validators. Therefore, we only configure two of them. As a safety measure, if both connections become unavailable, BIRD will keep the ROAs for one hour.

We can query the state of the RTR sessions and the database:

> show protocols all VALIDATOR1
Name       Proto      Table      State  Since         Info
VALIDATOR1 RPKI       ---        up     17:28:56.321  Established
  Cache server:     rpki@validator1:8283
  Status:           Established
  Transport:        SSHv2
  Protocol version: 1
  Session ID:       0
  Serial number:    1
  Last update:      before 25.212 s
  Refresh timer   : 4.787/30
  Retry timer     : ---
  Expire timer    : 3574.787/3600
  No roa4 channel
  Channel roa6
    State:          UP
    Table:          ROA6
    Preference:     100
    Input filter:   ACCEPT
    Output filter:  REJECT
    Routes:         9 imported, 0 exported, 9 preferred
    Route change stats:     received   rejected   filtered    ignored   accepted
      Import updates:              9          0          0          0          9
      Import withdraws:            0          0        ---          0          0
      Export updates:              0          0          0        ---          0
      Export withdraws:            0        ---        ---        ---          0

> show route table ROA6
Table ROA6:
    2001:db8:11::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:11::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:aa::/64-128 AS65005  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:bb::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:cc::/64-128 AS65007  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:dd::/64-128 AS65008  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ee::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ff::/64-128 AS65010  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)

Like in the JunOS case, a malicious actor could try to work around the validation by building an AS path where the last AS number is the legitimate one. BIRD is flexible enough to allow us to use any AS to check the IP prefix. Instead of checking the origin AS, we ask it to check the peer AS with this function, without looking at the AS path:

function validated(int peeras) {
   if (roa_check(ROA6, net, peeras) != ROA_VALID) then {
      print "Ignore invalid ROA ", net, " for ASN ", peeras;
      reject;
   }
   accept;
}

The BGP instance is then configured to use the above function as the import policy:

protocol bgp PEER1 {
   local as 65100;
   neighbor 2001:db8:42::a10 as 65005;
   ipv6 {
      import keep filtered;
      import where validated(65005);
      # export …;
   };
}

You can view the rejected routes with show route filtered, but BIRD does not store information about the validation state in the routes. You can also watch the logs:

2019-07-31 17:29:08.491 <INFO> Ignore invalid ROA 2001:db8:bb::40:/126 for ASN 65005

Currently, BIRD does not reevaluate the prefixes when the ROAs are updated. There is work in progress to fix this. If this feature is important to you, have a look at FRR instead: it also supports the RTR protocol and triggers a soft reconfiguration of the BGP sessions when ROAs are updated.


  1. Notably, the data flow and the control plane are separated. A node can remove itself by notifying its peers without losing a single packet. ↩︎

  2. People often use AS sets, like AS-APPLE in this example, as they are convenient if you have multiple AS numbers or customers. However, there is currently nothing preventing a rogue actor from adding arbitrary AS numbers to their AS set. ↩︎

  3. We are using 16-bit AS numbers for readability. Because we need to assign a different AS number for each host in the datacenter, in an actual deployment, we would use 32-bit AS numbers. ↩︎

  4. Cisco routers and FRR enforce the first AS by default. It is a tunable value to allow the use of route servers: they distribute prefixes on behalf of other routers. ↩︎

02 August, 2019 09:16AM by Vincent Bernat


Junichi Uekawa

Started wanting to move stuff to docker.

Started wanting to move stuff to docker. Especially around build systems. If things are mutable they will go bad and fixing them is annoying.

02 August, 2019 04:55AM by Junichi Uekawa

August 01, 2019


Mike Gabriel

My Work on Debian LTS/ELTS (July 2019)

In July 2019, I have worked on the Debian LTS project for 15.75 hours (of 18.5 hours planned) and on the Debian ELTS project for another 12 hours (as planned) as a paid contributor.

LTS Work

  • Upload to jessie-security: libssh2 (DLA 1730-3) [1]
  • Upload to jessie-security: libssh2 (DLA 1730-4) [2]
  • Upload to jessie-security: glib2.0 (DLA 1866-1) [3]
  • Upload to jessie-security: wpa (DLA 1867-1) [4]

The Debian Security package archive only has arch-any buildds attached, so source packages that build at least one arch-all bin:pkg must include the arch-all DEBs from a local build. So, ideally, we upload source + arch-all builds and leave the arch-any builds to the buildds. However, this seems to be problematic when doing the builds using sbuild. So, I spent a little time on...

  • sbuild: Try to understand the mechanism of building arch-all + source package (i.e. omit arch-any uploads)... Unfortunately, there is no "-g" option (like in dpkg-buildpackage). Neither does the parameter combination "--source --arch-all --no-arch-any" result in a source + arch-all build. More investigation / communication with the developers of sbuild required here. To be continued...
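
For reference, a sketch of the two invocations in question (the .dsc name is a placeholder); the sbuild line is the one that did not behave as hoped:

dpkg-buildpackage -g                               # source + arch-all, i.e. --build=source,all
sbuild --source --arch-all --no-arch-any foo.dsc   # expected to do the same, but does not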

ELTS Work

  • Upload to wheezy-lts: freetype (ELA 149-1) [5]
  • Upload to wheezy-lts: libssh2 (ELA 99-3) [6]

References

01 August, 2019 06:24PM by sunweaver

hackergotchi for Gunnar Wolf

Gunnar Wolf

Goodbye, pgp.gwolf.org

I started running an SKS keyserver a couple of years ago (don't really remember, but I think it was around 2014). I am, as you probably expect me to be given my lines of work, a believer in the Web-of-Trust model upon which the PGP network is built. I have published a couple of academic papers (Strengthening a Curated Web of Trust in a Geographically Distributed Project, with Gina Gallegos, Cryptologia 2016, and Insights on the large-scale deployment of a curated Web-of-Trust: the Debian project’s cryptographic keyring, with Victor González Quiroga, Journal of Internet Services and Applications, 2018) and presented at several conferences on aspects of it, mainly in relation to the Debian project.

Even in light of the recent flooding attacks (more info by dkg, Daniel Lange, Michael Altfield, others available; GnuPG task tracker), I still believe in the model. But I have had enough of the implementation's brittleness. I don't know how much to blame SKS and how much to blame myself, but I cannot devote more time to fiddling around to try to get it to work as it should — I was providing an unstable service. Besides, this year I had to rebuild the database three times already due to it getting corrupted... And yesterday I just could not get past segfaults when importing.

So, I have taken the unhappy decision to shut down my service. I have contacted both the SKS mailing list and the servers I was peering with. Due to the narrow scope of a single SKS server, possibly this post is not needed... But it won't hurt, so here it goes.

01 August, 2019 03:25PM by gwolf

hackergotchi for Thomas Goirand

Thomas Goirand

My work during DebCamp / DebConf

Lots of uploads

Grepping my IRC log for the BTS bot output shows that I uploaded roughly 244 times in Curitiba.

Removing Python 2 from OpenStack by uploading OpenStack Stein in Sid

Most of these uploads were uploading OpenStack Stein from Experimental to Sid, with a record of 96 uploads in a single day. As the work for Python 2 removal was done before the Buster release (uploads in Experimental), this effectively removed a lot of Python 2 support.

Removing Python 2 from Django packages

But once that was done, I started uploading some Django packages. Indeed, since Django 2.2 was uploaded to Sid with the removal of Python 2 support, a lot of dangling python-django-* packages needed to be fixed. Not only did Python 2 support need to be removed from them, but often patches were needed in order to fix at least the unit tests, since Django 2.2 removed a lot of things that had been deprecated since a few earlier versions. I went through all of the Django packages we have in Debian, and I believe I fixed most of them. I made 43 uploads of Django packages, fixing 39 packages.

Removing Python 2 support from non-django or OpenStack packages

During the Python BoF at Curitiba, we collectively decided it was time to remove Python 2, and that we’ll try to do as much of that work as possible before Bullseye. Details of this will come from our dear leader p1otr, so I’ll let him write the document and won’t comment (yet) on how we’re going to proceed. Anyway, we already have a “python2-rm” release tracker. After the Python BoF, I then also started removing Python 2 support on a few packages with more generic usage. Hopefully, touching only leaf packages, without breaking things. I’m not sure of the total count of packages that I touched, probably a bit less than a dozen.

Horizon broken in Sid since the beginning of July

Unfortunately, Horizon, the OpenStack dashboard, is currently still broken in Debian Sid. Indeed, since Django 1.11, the login() function in views.py has been deprecated in favor of a LoginView class, and in Django 2.2 the support for the function has been removed. As a consequence, since the 9th of July, when Django 2.2 was uploaded, Horizon’s openstack_auth/views.py is broken. Upstream says they are targeting Django 2.2 for next February. That’s way too late. Hopefully, someone will be able to fix this situation with me (it’s probably a bit too much for my Django skills). Once this is fixed, I’ll be able to work on all the Horizon plugins which are still in Experimental. Note that I already fixed all of Horizon’s reverse dependencies in Sid, but some of the patches need to be upstreamed.
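
For illustration, a minimal sketch (not Horizon’s actual code; the template name is made up) of the kind of change needed in a urls.py when moving from the removed function-based view to the class-based one:

from django.urls import path
from django.contrib.auth import views as auth_views

urlpatterns = [
    # old, function-based style that no longer works:
    #   url(r'^login/$', auth_views.login, {'template_name': 'auth/login.html'}),
    path('login/',
         auth_views.LoginView.as_view(template_name='auth/login.html'),
         name='login'),
]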

Next work (from home): fixing piuparts

I’ve already written a first attempt at a patch for piuparts, so that it uses Python 3 and not Python 2 anymore. That patch is already submitted as a merge request on Salsa, though I haven’t had the time to test it yet. What’s remaining to do is: actually test piuparts with this patch, and fix debian/control so that it switches to Python 3.

01 August, 2019 11:34AM by Goirand Thomas

Sylvain Beucler

Debian LTS - July 2019

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) project, which extends the security support for past Debian releases, as a paid contributor.

In July, the monthly sponsored hours were split evenly among contributors depending on their max availability - I declared max 30h and got 18.5h.

My time was mostly spent on Front-Desk duties, as well as improving our scripts & docs.

Current vulnerabilities triage:

  • CVE-2019-13117/libxslt CVE-2019-13118/libxslt: triage (affected, dla-needed)
  • CVE-2019-12781/python-django: triage (affected)
  • CVE-2019-12970/squirrelmail: triage (affected)
  • CVE-2019-13147/audiofile: triage (postponed)
  • CVE-2019-12493/poppler: jessie triage (postponed)
  • CVE-2019-13173/node-fstream: jessie triage (node-* not supported)
  • exiv2: jessie triage (5 CVEs, none to fix - CVE-2019-13108 CVE-2019-13109 CVE-2019-13110 CVE-2019-13112 CVE-2019-13114)
  • CVE-2019-13207/nsd: jessie triage (affected, postponed)
  • CVE-2019-11272/libspring-security-2.0-java: jessie triage (affected, dla-needed)
  • CVE-2019-13312/ffmpeg: (libav) jessie triage (not affected)
  • CVE-2019-13313/libosinfo: jessie triage (affected, postponed)
  • CVE-2019-13290/mupdf: jessie triage (not-affected)
  • CVE-2019-13351/jackd2: jessie triage (affected, postponed)
  • CVE-2019-13345/squid3: jessie triage (2 XSS: 1 unaffected, 1 reflected affected, dla-needed)
  • CVE-2019-11841/golang-go.crypto: jessie triage (affected, dla-needed)
  • Call for triagers for the upcoming weeks

Past undetermined issues triage:

  • libgig: contact maintainer about 17 pending undetermined CVEs
  • libsixel: contact maintainer about 6 pending undetermined CVEs
  • netpbm-free - actually an old Debian-specific fork: contact original reporter for PoCs and attach them to BTS; CVE-2017-2579 and CVE-2017-2580 not-affected, doubts about CVE-2017-2581

Documentation:

Tooling - bin/lts-cve-triage.py:

  • filter out 'undetermined' but explicitly 'ignored' packages (e.g. jasperreports)
  • fix formatting with no-colors output, hint that color output is available
  • display lts' nodsa sub-states
  • upgrade unsupported packages list to jessie

01 August, 2019 08:41AM

hackergotchi for Steve Kemp

Steve Kemp

Building a computer - part 3

This is part three in my slow journey towards creating a home-brew Z80-based computer. My previous post demonstrated writing some simple code, and getting it running under an emulator. It also described my planned approach:

  • Hookup a Z80 processor to an Arduino Mega.
  • Run code on the Arduino to emulate RAM reads/writes and I/O.
  • Profit, via the learning process.

I expect I'll have to get my hands dirty with a breadboard and naked chips in the near future, but for the moment I decided to start with the least effort. Erturk Kocalar has a website where he sells "shields" (read: expansion-boards) which contain a Z80, and which is designed to plug into an Arduino Mega with no fuss. This is a simple design; I've seen a bunch of people demonstrate how to wire one up by hand, for example in this post.

Anyway I figured I'd order one of those, and get started on the easy-part, the software. There was some sample code available from Erturk, but it wasn't ideal from my point of view because it mixed driving the Z80 with doing "other stuff". So I abstracted the core code required to interface with the Z80 and packaged it as a simple library.

The end result is that I have a z80 retroshield library which uses an Arduino mega to drive a Z80 with something as simple as this:

#include <z80retroshield.h>


//
// Our program, as hex.
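// (The bytes below are Z80 machine code: load each character of "Hello",
// plus a newline, into A, write it to I/O port 1 via "OUT (1),A", and then
// finish in a tight jump-to-self loop.)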
//
unsigned char rom[32] =
{
    0x3e, 0x48, 0xd3, 0x01, 0x3e, 0x65, 0xd3, 0x01, 0x3e, 0x6c, 0xd3, 0x01,
    0xd3, 0x01, 0x3e, 0x6f, 0xd3, 0x01, 0x3e, 0x0a, 0xd3, 0x01, 0xc3, 0x16,
    0x00
};


//
// Our helper-object
//
Z80RetroShield cpu;


//
// RAM I/O function handler.
//
char ram_read(int address)
{
    return (rom[address]) ;
}


// I/O function handler.
void io_write(int address, char byte)
{
    if (address == 1)
        Serial.write(byte);
}


// Setup routine: Called once.
void setup()
{
    Serial.begin(115200);


    //
    // Setup callbacks.
    //
    // We have to setup a RAM-read callback, otherwise the program
    // won't be fetched from RAM and executed.
    //
    cpu.set_ram_read(ram_read);

    //
    // Then we setup a callback to be executed every time an "out (x),y"
    // instruction is encountered.
    //
    cpu.set_io_write(io_write);

    //
    // Configured.
    //
    Serial.println("Z80 configured; launching program.");
}


//
// Loop function: Called forever.
//
void loop()
{
    // Step the CPU.
    cpu.Tick();
}

All the logic of the program is contained in the Arduino-sketch, and all the use of pins/ram/IO is hidden away. As a recap the Z80 will make requests for memory-contents, to fetch the instructions it wants to execute. For general purpose input/output there are two instructions that are used:

IN A, (1)   ; Read a character from STDIN, store in A-register.
OUT (1), A  ; Write the character in A-register to STDOUT

Here 1 is the I/O address, and this is an 8-bit number. At the moment I've just configured the callback such that any write to I/O address 1 is dumped to the serial console.

Anyway I put together a couple of examples of increasing complexity, allowing me to prove that RAM read/writes work, and that I/O reads and writes work.

I guess the next part is where I jump in complexity:

  • I need to wire a physical Z80 to a board.
  • I need to wire a PROM to it.
    • This will contain the program to be executed - hardcoded.
  • I need to provide power, and a clock to make the processor tick.

With a bunch of LEDs I'll have a Z80-system running, but it'll be isolated and hard to program, since I'll need to reflash the RAM/ROM-chip.

The next step would be getting it hooked up to a serial-console of some sort. And at that point I'll have a genuinely programmable standalone Z80 system.

01 August, 2019 06:00AM

hackergotchi for Kurt Kremitzki

Kurt Kremitzki

Summer Update for FreeCAD & Debian Science Work

Hello, and welcome to my "summer update" on my free software work on FreeCAD and the Debian Science team. I call it a summer update because it was winter when I last wrote, and quite some time has elapsed since I fell out of the monthly update habit. This is a high-level summary of what I've been working on since March.

FreeCAD 0.18 Release & Debian 10 Full Freeze Timing

/images/freecadsplash.png


The official release date of FreeCAD 0.18 (release notes) is March 12, 2019, although the git tag for it wasn't pushed until March 14th. This timing was a bit unfortunate as the full freeze for Debian 10 went into effect March 12th, with a de-facto freeze date of March 2nd due to the 10-day testing migration period. To compound things, since this was my first Debian release as a packaging contributor, I didn't do things quite right, so although I probably could have gotten FreeCAD 0.18 into Debian 10, I didn't. Instead, what's available is a pre-release version from about a month before the release which is missing a few bugfixes and refinements.

On the positive side, this is an impetus for me to learn about Debian Backports, a way to provide non-bugfix updates to Debian Stable users. The 0.18 release line has already had several bugfix releases; I've currently got Debian Testing/Unstable as well as the Ubuntu Stable PPA up-to-date with version 0.18.3. As soon as I'm able, I'll get this version into Debian Backports, too.
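
Once such a backport is uploaded and accepted, a buster user would pull it in roughly like this (a sketch; the freecad backport itself does not exist yet at the time of writing):

echo 'deb http://deb.debian.org/debian buster-backports main' | \
    sudo tee /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -t buster-backports freecad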

FreeCAD PPA Improvements

Another nice improvement I've recently made is migrating the packaging for the Ubuntu Stable and Daily PPAs to Debian's GitLab instance at https://salsa.debian.org/science-team/freecad by creating the ppa/master and ppa/daily branches. Having all the Debian and Ubuntu packaging in one place means that propagating updates has become a matter of git merging and pushing. Once any changes are in place, I simply have to trigger an import and build on Launchpad for the stable releases. For the daily builds, changes are automatically synced and the debian directory from Salsa is combined with the latest synced upstream source from GitHub, so daily builds no longer have to be triggered manually. However, this has uncovered another problem in our process which is being worked on at the FreeCAD forums. (Thread: Finding a solution for the 'version.h' issue)

Science Team Package Updates

/images/bunny.png


The main Science Team packages I've been working on recently have been OpenCASCADE, Netgen, Gmsh, and OpenFOAM.

For OpenCASCADE, I have uploaded the third bugfix release in the 7.3.0 series. Unfortunately, their versioning scheme is a bit unusual, so this version is tagged 7.3.0p3. This is unfortunate because dpkg --compare-versions 7.3.0p3+dfsg1 gt 7.3.0+dfsg1 evaluates to false. As such, I've uploaded this package as 7.3.3, with plans to contact upstream to discuss their bugfix release versioning scheme. Currently, version 7.4.0 has an upstream target release date for the end of August, so there will be an opportunity to convince them to release 7.4.1 instead of 7.4.0p1. If you're interested in the changes contained in this upload, you can refer to the upstream git log for more information.
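
To see the quirk concretely (dpkg --compare-versions signals the result via its exit status):

$ dpkg --compare-versions 7.3.0p3+dfsg1 gt 7.3.0+dfsg1 && echo true || echo false
false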

In collaboration with Nico Schlömer and Anton Gladky, the newest Gmsh, version 4.4.1, has been uploaded to wait in the Debian NEW queue. See the upstream changelog for more information on what's new.

I've also prepared the package for the newest version of Netgen, 6.2.1905. Unfortunately, uploading this is blocked because 6.2.1810 is still in Debian NEW. However, I've tested compiling FreeCAD against Netgen, and I've been able to get the integration with it working again, so once I'm able to do this upload, I'll be able to upload a new and improved FreeCAD with the power of Netgen meshing.

I've also begun working on packaging the latest OpenFOAM release, 1906. I've gotten a little sidetracked, though, as a peculiarity in the way upstream prepares their tarballs seems to be triggering a bug in GNU tar. I should have this one uploaded soon. For a preview of what'll be coming, see the release notes for version 1906.

GitLab CI Experimentation with salsa.debian.org

Some incredibly awesome Debian contributors have set up the ability to use GitLab CI to automate the testing of Debian packages (see the documentation).

I did a bit of experimentation with it. Unfortunately, both OpenCASCADE and FreeCAD exceeded the 2 hour time limit. There's a lot of promise in it for smaller packages, though!

Python 2 Removal in Debian Underway

/images/deadsnakes.jpeg


Per pythonclock.org, Python 2 has less than 5 months until its end of life, so the task of removing it for the next version of Debian has begun. For now, it's mainly limited to leaf packages with nothing depending on them. As such, I've uploaded Python 3-only packages for new upstream releases of python-fluids (a fluid dynamics engineering & design library) and python-ulmo (provides clean & simple access to public hydrology and climatology data).

Debian Developer Application

I've finally applied to become a full Debian Developer, which is an exciting prospect. I'll be more able to enact improvements without having to bug, well, mostly Anton, Andreas, and Tobias. (Thanks!) I'm also looking forward to having access to more resources to improve my packages on other architectures, particularly arm64 now that the Raspberry Pi 4 is out and potentially a serious candidate for a low-powered FreeCAD workstation.

The process is slow and calculating, as it should be, so it'll be some time before I'm officially in, but it sure will be cause for celebration.

Google Summer of Code Mentoring

/images/gsoc.png

CC-BY-SA-4.0, Aswinshenoy.


I'm mentoring a Google Summer of Code project for FreeCAD this year! (See forum thread.) My student is quite new to FreeCAD and Debian/Ubuntu, so the first half of the project has involved the relatively deep-end topics of using Debian packaging to distribute bugfixes for FreeCAD and of learning by exploring related packages in its ecosystem. In particular, focus was given to OpenCAMLib, since there is a lot of user and developer interest in FreeCAD's potential for generating toolpaths for machining and manufacturing the models created in the program.

Now that he's officially swimming and not sinking, the next phase is working on making development and packaging-related improvements for FreeCAD on Windows, which is in even rougher shape than Debian/Ubuntu, but more his area of familiarity. Stay tuned for the final results!

Thanks to my sponsors

This work is made possible in part by contributions from readers like you! You can send moral support my way via Twitter @thekurtwk. Financial support is also appreciated at any level and possible on several platforms: Patreon, Liberapay, and PayPal.

01 August, 2019 04:47AM by Kurt Kremitzki

Paul Wise

FLOSS Activities July 2019

Changes

Issues

Review

Administration

  • apt-xapian-index: migrated repo to Salsa, merged some branches and patches
  • Debian: redirect user support request, answer porterbox access query,
  • Debian wiki: ping team member, re-enable accounts, unblock IP addresses, whitelist domains, whitelist email addresses, send unsubscribe info, redirect support requests
  • Debian QA services: deploy changes
  • Debian PTS: deploy changes
  • Debian derivatives census: disable cron job due to design flaws

Communication

Sponsors

The File::LibMagic, purple-discord, librecaptcha & harmony work was sponsored by my employer. All other work was done on a volunteer basis.

01 August, 2019 02:20AM

July 31, 2019

hackergotchi for Jonathan Carter

Jonathan Carter

Free Software Activities (2019-07)

DC19 Group Photo

Group photo above taken at DebConf19 by Agairs Mahinovs.

2019-07-03: Upload calamares-settings-debian (10.0.20-1) (CVE 2019-13179) to debian unstable.

2019-07-05: Upload calamares-settings-debian (10.0.25-1) to debian unstable.

2019-07-06: Debian Buster Live final ISO testing for release, also attended Cape Town buster release party.

2019-07-08: Sponsor package ddupdate (0.6.4-1) for debian unstable (mentors.debian.net request, RFS: #931582)

2019-07-08: Upload package btfs (2.19-1) to debian unstable.

2019-07-08: Upload package calamares (3.2.11-1) to debian unstable.

2019-07-08: Request update for util-linux (BTS: #931613).

2019-07-08: Upload package gnome-shell-extension-dashtodock (66-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-multi-monitors (18-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-system-monitor (38-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-tilix-dropdown (7-1) to debian unstable.

2019-07-08: Upload package python3-aniso8601 (7.0.0-1) to debian unstable.

2019-07-08: Upload package python-3-flask-restful (0.3.7-2) to debian unstable.

2019-07-08: Upload package xfce4-screensaver (0.1.6) to debian unstable.

2019-07-09: Sponsor package wordplay (8.0-1) (mentors.debian.net request).

2019-07-09: Sponsor package blastem (0.6.3.2-1) (mentors.debian.net request) (Closes RFS: #931263).

2019-07-09: Upload gnome-shell-extension-workspaces-to-dock (50-1) to debian unstable.

2019-07-09: Upload bundlewrap (3.6.1-2) to debian unstable.

2019-07-09: Upload connectagram (1.2.9-6) to debian unstable.

2019-07-09: Upload fracplanet (0.5.1-5) to debian unstable.

2019-07-09: Upload fractalnow (0.8.2-4) to debian unstable.

2019-07-09: Upload gnome-shell-extension-dash-to-panel (19-2) to debian unstable.

2019-07-09: Upload powerlevel9k (0.6.7-2) to debian unstable.

2019-07-09: Upload speedtest-cli (2.1.1-2) to debian unstable.

2019-07-11: Upload tetzle (2.1.4+dfsg1-2) to debian unstable.

2019-07-11: Review mentors.debian.net package hipercontracer (1.4.1-1).

2019-07-15 – 2019-07-28: Attend DebCamp and DebConf!

My DebConf19 mini-report:

There is really too much to write about that happened at DebConf; I hope to get some time and write separate blog entries on those really soon.

  • Participated in Bursaries BoF, I was chief admin of DebConf bursaries in this cycle. Thanks to everyone who already stepped up to help with next year.
  • Gave a lightning talk titled “Can you install Debian within a lightning talk slot?” where I showed off Calamares on the latest official live media. Spoiler alert: it barely doesn’t fit in the allotted time, something to fix for bullseye!
  • Participated in a panel called “Surprise, you’re a manager!“.
  • Hosted “Debian Live BoF” – we made some improvements for the live images during the buster cycle, but there’s still a lot of work to do so we held a session to cut out our initial work for Debian 11.
  • Got the debbug and missed the day trip, I hope to return to this part of Brazil one day, so much to explore in just the surrounding cities.
  • The talk selection this year was good; there’s a lot that I learned and caught up on that I probably wouldn’t have done if it wasn’t for DebConf. Talks are recorded (http archive, YouTube). PS: If you find something funny, please link (with time stamp) on the FunnyMoments wiki page (that page is way too bare right now).

31 July, 2019 06:51PM by jonathan

hackergotchi for Chris Lamb

Chris Lamb

Free software activities in July 2019

Here is my monthly update covering what I have been doing in the free software world during July 2019 (previous month):


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom. Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.


This month:

I spent a significant amount of time working on our website this month, including:

  • Split out our non-fiscal sponsors with a description [...] and make them non-display three-in-a-row [...].
  • Correct references to "1&1 IONOS" (née Profitbricks). [...]
  • Lets not promote yet more ambiguity in our environment names! [...]
  • Recreate the badge image, saving the .svg alongside it. [...]
  • Update our fiscal sponsors. [...][...][...]
  • Tidy the weekly reports section on the news page [...], fixup the typography on the documentation page [...] and make all headlines stand out a bit more [...].
  • Drop some old CSS files and fonts. [...]
  • Tidy news page a bit. [...]
  • Fixup a number of issues in the report template and previous reports. [...][...][...][...][...][...]

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [...].
  • If a command fails to execute but does not print anything to standard error, try and include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) [...]
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have a sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers. [...] but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [...].
  • Cease ignoring test failures in stable-backports. [...]
  • Add missing textual DESCRIPTION headers for .zip and "Mozilla"-optimised .zip files. [...]
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. [...]
  • Update some reporting:
    • Re-add "return code" noun to "Command foo exited with X" error messages. [...]
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. [...]
    • Skip the extra newline in Output:\nfoo. [...]
  • Add some explicit return values to appease Pylint, etc. [...]
  • Also include the python3-tlsh in the Debian test dependencies. [...]
  • Released and uploaded versions 116, 117, 118, 119 & 120. [...][...][...][...][...]


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Support OpenJDK ".jmod" files. [...]
  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. [...]
  • Pass --no-sandbox if necessary to bypass seccomp-enabled version of file(1) which was causing a huge number of regressions in our testing framework.
  • Don't just run the tests but build the Debian package instead using Salsa's centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [...][...]
  • Update tests:
    • Don't build release Git tags on salsa.debian.org. [...]
    • Merge the debian branch into the master branch to simplify testing and deployment [...] and update debian/gbp.conf to match [...].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. [...]

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS (ELTS) project.


Uploads

I also made "sourceful" uploads to unstable to ensure migration to testing after recent changes that prevent maintainer-supplied packages entering bullseye for bfs (1.5-3), redis (5:5.0.5-2), lastpass-cli (1.3.3-2), python-daiquiri (1.5.0-3) and I finally performed a sponsored upload of elpy (1.29.1+40.gb929013-1).


FTP Team

As a Debian FTP assistant I ACCEPTed 19 packages: aiorwlock, bolt, caja-mediainfo, cflow, cwidget, dgit, fonts-smc-gayathri, gmt, gnuastro, guile-gcrypt, guile-sqlite3, guile-ssh, hepmc3, intel-gmmlib, iptables, mescc-tools, nyacc, python-pdal & scheme-bytestructures. I additionally filed a bug against scheme-bytestructures for having a seemingly-incomplete debian/copyright file. (#932466)

31 July, 2019 03:31PM

hackergotchi for Michael Prokop

Michael Prokop

Some useful bits about Linux hardware support and patched Kernel packages

Disclaimer: I started writing this blog post in May 2018, when Debian/stretch was the current stable release of Debian, but published this article in August 2019, so please keep the version information (Debian releases + kernels not being up2date) in mind.

The kernel version of Debian/stretch (4.9.0) didn’t support the RAID controller as present in Lenovo ThinkSystem SN550 blade servers yet. The RAID controller was known to be supported with Ubuntu 18.10 using kernel v4.15 as well as with Grml ISOs using kernel v4.15 and newer. Using a more recent Debian kernel version wasn’t really an option for my customer, as there was no LTS kernel version that could be relied on. Using the kernel version from stretch-backports could have been an option, though only as a last resort, since the customer this applies to controls the Debian repositories in use, and we’d have to track security issues more closely, test new versions of the kernel on different kinds of hardware more often,… whereas the kernel version from Debian/stable is known to be working fine and is less in flux than the ones from backports. Alright, so it doesn’t support this new hardware model yet, but how to identify the relevant changes in the kernel to have a chance to get it supported in the stable Debian kernel?

Some bits about PCI IDs and related kernel drivers

We start by identifying the relevant hardware:

root@grml ~ # lspci | grep 'LSI.*RAID'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
root@grml ~ # lspci -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)

Which driver gets used for this device?

root@grml ~ # lspci -k -s '08:00.0'
08:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID Tri-Mode SAS3404 (rev 01)
        Subsystem: Lenovo ThinkSystem RAID 530-4i Flex Adapter
        Kernel driver in use: megaraid_sas
        Kernel modules: megaraid_sas

So it’s the megaraid_sas driver, let’s check some version information:

root@grml ~ # modinfo megaraid_sas | grep version
version:        07.703.05.00-rc1
srcversion:     442923A12415C892220D5F0
vermagic:       4.15.0-1-grml-amd64 SMP mod_unload modversions

But how does the kernel know which driver should be used for this device? We start by listing further details about the hardware device:

root@grml ~ # lspci -n -s 0000:08:00.0
08:00.0 0104: 1000:001c (rev 01)

The 08:00.0 describes the hardware slot information ([domain:]bus:device.function), the 0104 describes the class (with 0104 being of type RAID bus controller, also see /usr/share/misc/pci.ids by searching for 'C 01' -> '04'), the (rev 01) obviously describes the revision number. We’re interested in the 1000:001c though. The 1000 identifies the vendor:

% grep '^1000' /usr/share/misc/pci.ids
1000  LSI Logic / Symbios Logic

The `001c` finally identifies the actual model. Having this information available, we can check the mapping of the megaraid_sas driver, using the `modules.alias` file of the kernel:

root@grml ~ # grep -i '1000.*001c' /lib/modules/$(uname -r)/modules.alias
alias pci:v00001000d0000001Csv*sd*bc*sc*i* megaraid_sas
root@grml ~ # modinfo megaraid_sas | grep -i 001c
alias:          pci:v00001000d0000001Csv*sd*bc*sc*i*

Bingo! Now we can check this against the Debian/stretch kernel, which doesn’t support this device yet:

root@stretch:~# modinfo megaraid_sas | grep version
version:        06.811.02.00-rc1
srcversion:     64B34706678212A7A9CC1B1
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
root@stretch:~# modinfo megaraid_sas | grep -i 001c
root@stretch:~#

No match here – bingo²! Now we know for sure that the ID 001c is relevant for us. How do we identify the corresponding change in the Linux kernel though?

The file drivers/scsi/megaraid/megaraid_sas.h of the kernel source lists the PCI device IDs supported by the megaraid_sas driver. Since we know that kernel v4.9 doesn’t support it yet, while it’s supported with v4.15 we can run "git log v4.9..v4.15 drivers/scsi/megaraid/megaraid_sas.h" in the git repository of the kernel to go through the relevant changes. It’s easier to run "git blame drivers/scsi/megaraid/megaraid_sas.h" though – then we’ll stumble upon our ID from before – `0x001C` – right at the top:

[...]
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   59) #define PCI_DEVICE_ID_LSI_VENTURA                 0x0014
754f1bae0f1e3 (Shivasharan S              2017-10-19 02:48:49 -0700   60) #define PCI_DEVICE_ID_LSI_CRUSADER                0x0015
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   61) #define PCI_DEVICE_ID_LSI_HARPOON                 0x0016
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   62) #define PCI_DEVICE_ID_LSI_TOMCAT                  0x0017
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   63) #define PCI_DEVICE_ID_LSI_VENTURA_4PORT               0x001B
45f4f2eb3da3c (Sasikumar Chandrasekaran   2017-01-10 18:20:43 -0500   64) #define PCI_DEVICE_ID_LSI_CRUSADER_4PORT      0x001C
[...]

Alright, the relevant change was commit 45f4f2eb3da3c:

commit 45f4f2eb3da3cbff02c3d77c784c81320c733056
Author: Sasikumar Chandrasekaran […]
Date:   Tue Jan 10 18:20:43 2017 -0500

    scsi: megaraid_sas: Add new pci device Ids for SAS3.5 Generic Megaraid Controllers
    
    This patch contains new pci device ids for SAS3.5 Generic Megaraid Controllers
    
    Signed-off-by: Sasikumar Chandrasekaran […]
    Reviewed-by: Tomas Henzl […]
    Signed-off-by: Martin K. Petersen […]

diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index fdd519c1dd57..cb82195a8be1 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -56,6 +56,11 @@
 #define PCI_DEVICE_ID_LSI_INTRUDER_24          0x00cf
 #define PCI_DEVICE_ID_LSI_CUTLASS_52           0x0052
 #define PCI_DEVICE_ID_LSI_CUTLASS_53           0x0053
+#define PCI_DEVICE_ID_LSI_VENTURA                  0x0014
+#define PCI_DEVICE_ID_LSI_HARPOON                  0x0016
+#define PCI_DEVICE_ID_LSI_TOMCAT                   0x0017
+#define PCI_DEVICE_ID_LSI_VENTURA_4PORT                0x001B
+#define PCI_DEVICE_ID_LSI_CRUSADER_4PORT       0x001C
[...]

Custom Debian kernel packages for testing

Now that we identified the relevant change, what’s the easiest way to test this change? There’s an easy way how to build a custom Debian package, based on the official Debian kernel but including further patch(es), thanks to Ben Hutchings. Make sure to have a Debian system available (I was running this inside an amd64 system, building for amd64), with according deb-src entries in your apt’s sources.list and enough free disk space, then run:

% sudo apt install dpkg-dev build-essential devscripts fakeroot
% apt-get source -t stretch linux
% cd linux-*
% sudo apt-get build-dep linux
% bash debian/bin/test-patches -f amd64 -s none 0001-scsi-megaraid_sas-Add-new-pci-device-Ids-for-SAS3.5-.patch

This generates something like a linux-image-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb for you (next to further Debian packages like linux-headers-4.9.0-6-amd64_4.9.88-1+deb9u1a~test_amd64.deb + linux-image-4.9.0-6-amd64-dbg_4.9.88-1+deb9u1a~test_amd64.deb), ready for installing and testing on the affected system. The Kernel Handbook documents this procedure as well, I just wasn’t aware of this handy `debian/bin/test-patches` so far though.

JFTR: sadly the patch with the additional PCI_DEVICE_ID* was not enough (also see #900349); we seem to need further patches from the changes between v4.9 and v4.15, though this turned out to be no longer relevant for my customer, and it’s also working with Debian/buster nowadays.

31 July, 2019 07:00AM by mika

July 29, 2019

Candy Tsai

Outreachy Week 8 – Week 9: Remote or In-Office Working

The Week 9 blog prompt recommended by Outreachy was to write about my career goals. To be honest, this is a really hard topic for me. As long as a career path involves some form of coding, creating and learning new things, I’m willing to take it on. The best situation could be that it is also doing something good for society. This might be because that “something that I am too passionate for” doesn’t yet exist in my life. For now, I wish I’d still be coding 5 years from now. It’s just that simple. The only thing that I would like to see improvement upon is gender balance for this industry.

As for working environment, I would like to share some thoughts after having experienced both extremes of totally remote work and complete in-office work. There are a lot of articles out there comparing the pros and cons. Here are just my opinions on the time spent not working:

  • Dozing off
  • Socializing

Dozing off

Our concentration time is limited and there definitely will be times when we doze off a bit. Just a list of things that I had done before in both places. I think I’m being too honest here 🙁

Office:

  • Browsing random pages
  • Checking useless e-mails
  • Talk to someone else also dozing off
  • Using social apps (e.g. Messenger)

Hoping people don’t think I’m doing these things for the whole day.

Remote:

  • Cook something to eat
  • Laundry or other house chores
  • Watch videos
  • Have a German lesson for an hour

I actually don’t take breaks between meals when working remotely.

In conclusion, I think dozing off in an office really really fits the definition of purely wasting time. You have peer pressure to look productive the whole 8 hours, which just simply isn’t human. The things I do when I’m working remotely are actually things done after work from office. So I’ll give a vote for remote here.

Socializing

Office:

I had colleagues that I would love to go out with outside of work when I had an office job. One of the reasons that I stayed in a job is because of my colleagues. They were wonderful people and also great “friends”.

Remote:

The main means of communication is either text or video chat. Usually, they are for “work” purposes. I think my mentors are already kind enough to be there to support me whenever I’m stuck and I’m grateful for that! I don’t want to let them feel like they need to spend that much time on me. Although this might be different from “real” remote work, I think it probably won’t be too distant from what I’m experiencing right now. I wouldn’t really want to specifically open a video chat just to talk about our daily lives through it.

I would vote for an office environment in this case since you can work and make friends at the same time which is pretty convenient for an introvert like me. If I don’t feel like making new friends, then probably I would choose remote work. I think I probably will change my preference as I get older.

Last but not least, as always my progress report for debci.

Video Report of the Internship

Link: https://youtu.be/89r4HqJL8KE

Week 8

  • Filming and editing my video for sharing the debci project for DebConf 2019
  • Fixing merge requests

Week 9

29 July, 2019 09:54AM by Candy Tsai

Russ Allbery

Review: All the Birds in the Sky

Review: All the Birds in the Sky, by Charlie Jane Anders

Publisher: Tor
Copyright: January 2016
ISBN: 1-4668-7112-1
Format: Kindle
Pages: 315

When Patricia was six years old, she rescued a wounded bird, protected it from her sister, discovered that she could talk to animals, and found her way to the Parliament Tree. There, she was asked the Endless Question, which she didn't know how to answer, and was dumped back into her everyday life. Her magic apparently disappeared again, except not quite entirely.

Laurence liked video games and building things. From schematics he found on the Internet, he built a wrist-watch time machine that could send him two seconds forward into the future. That was his badge of welcome, the thing that marked him as part of the group of cool scientists and engineers, when he managed to sneak away to visit a rocket launch.

Patricia and Laurence meet in junior high school, where both of them are bullied and awkward and otherwise friendless. They strike up an unlikely friendship based on actually listening to each other, Patricia getting Laurence out of endless outdoor adventures arranged by his parents, and the supercomputer Laurence is building in his closet. But it's not clear whether that friendship can survive endless abuse, the attention of an assassin, and their eventual recruitment into a battle between magic and technology of which they're barely aware.

So, first, the world-building in All the Birds in the Sky is subtly brilliant. I had been avoiding this book because I'd gotten the impression it was surreal and weird, which often doesn't work for me. But it's not, and that's due to careful and deft authorial control. This is a book in which two kids are sitting in a shopping mall watching people's feet go by on an escalator and guessing at their profession, and this happens:

The man in black slippers and worn gray socks was an assassin, said Patricia, a member of a secret society of trained killers who stalked their prey, looking for the perfect moment to strike and kill them undetected.

"It's amazing how much you can tell about people from their feet," said Patricia. "Shoes tell the whole story."

"Except us," said Laurence. "Our shoes are totally boring. You can't tell anything about us."

"That's because our parents pick out our shoes," said Patricia. "Just wait until we're grown up. Our shoes will be insane."

In fact, Patricia had been correct about the man in the gray socks and black shoes. His name was Theodolphus Rose, and he was a member of the Nameless Order of Assassins. He had learned 873 ways to murder someone without leaving even a whisper of evidence, and he'd had to kill 419 people to reach the number nine spot in the NOA hierarchy. He would have been very annoyed to learn that his shoes had given him away, because he prided himself on blending with his surroundings.

Anders maintains that tone throughout the book: dry, a little wry, matter-of-fact with a quirked smile, and utterly certain. The oddity of this world is laid out on the page without apologies, clear and comprehensible and orderly even when it's wildly strange. It's very easy as a reader to just start nodding along with magical academies and trans-dimensional experiments because Anders gives you the structure, pacing, and description that you need to build a coherent image.

The background work is worthy of this book's Nebula award. I just wish I'd liked the story better.

The core of my dislike is the characters, although for two very different reasons. Laurence is straight out of YA science fiction: geeky, curious, bullied, desperate to belong to something, loyal, and somewhere between stubborn and indecisive. But below that set of common traits, I never connected with him. He was just... there, doing predictable Laurence things and never surprising me or seeming to grow very much.

Laurence eventually goes to work for the Ten Percent Project, which is trying to send 10% of the population into space because clearly the planet is doomed. The blindness of that goal, and the degree to which the founder of that project resembled Elon Musk, was a bit too real to be funny. I kept waiting for Anders to either make a sharper satirical point or to let Laurence develop his own character outside of the depressing reality of techno-utopianism, but the story stayed finely balanced on that knife edge until it stopped being funny and started being awful.

Patricia, on the other hand, I liked from the very beginning. She's independent, determined, angry, empathetic, principled, and thoughtful, and immediately became the character I was cheering for. And every other major character in this novel is absolutely horrific to her.

The sheer amount of abusive gaslighting Patricia is subjected to in this book made me ill. Everyone from her family to her friends to her fellow magicians demean her, squash her, ignore her, trivialize her, shove her into boxes, try to get her to stop believing in things that happened to her, and twist every bit of natural ambition she has into new forms of prison. Even Laurence participates in this; although he's too clueless to be a major source of it, he's set up as her one port in the storm and then basically abandons her. I started the book feeling sorry for her; by the end of the book, I wanted Patricia to burn her life down with fire and start over with a completely new batch of humans. There's no way that she could do worse.

I want to be clear: I think this is an intentional authorial choice. I think Anders is entirely aware of how awful people are being, and the story of Laurence and Patricia barely managing to keep their heads above water despite them is the story she chose to write. A lot of other people loved it; this is more of a taste mismatch with the book than a structural flaw. But there are only so many paternalistic, abusive assholes passing themselves off as authority figures I can take in one book, and this book flew past my threshold and just kept going. Patricia and Laurence are mostly helpless against these people and have to let their worlds be shaped by them even when they know it's wrong, which makes it so, so much harder to bear.

The place where I think Anders did lose control of the plot, at least a little, is the ending. I can't fairly say that it came out of nowhere, since Anders was dropping hints throughout the book, but I did feel like it robbed the characters of agency in a way that I found emotionally unsatisfying as a reader, particularly since everyone in the book had been trying to take away Patricia's agency from nearly the first page. To have the ending then do the same thing added insult to injury in a way that I couldn't stomach. I can see the levels of symbolism knit together by this choice of endings, but, at least in my opinion, it would have been so much more satisfying, and somewhat redeeming of all the shit that Patricia had to go through, if she had been in firm control of how the symbolism came together.

This one's going to be a matter of taste, I think, and the world-building is truly excellent and much better than I had been expecting. But it's firmly in the "not for me" pile.

Rating: 5 out of 10

29 July, 2019 03:47AM

July 28, 2019

hackergotchi for Keith Packard

Keith Packard

snekboard-0.2

Snekboard v0.2 Update

I've built six prototypes of snekboard version 0.2. They're working great and I'm happy with the design.

New Motor Driver

Having discovered that the TI DRV8838 wasn't up to driving the Lego Power Functions Medium motor (8883) because of its start-up current draw, I went back and reworked the snekboard circuit to use the TI DRV8800 instead. That controller can provide up to 2.8A and doesn't have any trouble with this motor.

The DRV8800 is larger than the DRV8838, so it took a bit of re-wiring to fit them on the circuit board.

New Power Source Selector

In version 0.1, I was using two DFLS130L Schottky diodes to automatically select between the on-board lithium polymer battery and USB to power the board. That "worked", except that there was enough leakage back through them that when the USB connector was unplugged, the battery charge indicator LEDs both lit up, which left me with the choice of disabling those indicators or draining the battery.

To fix that, I found an automatic power selector (with current limit!) part, the TPS2121. This should avoid frying the board when you short the motor controller outputs, although those also have current limiting circuits. Defense in depth!

One issue I found was that this circuit draws current even when the output is disconnected, so I changed the power switch from an SPST to a DPST and now control USB and battery power separately.

CircuitPython

I included a W25Q16 2MB NOR flash chip on the board so that it could also run CircuitPython. Before finalizing the design, I thought it might be a good idea to actually get that running.

I've submitted a pull request with the necessary changes. I hope to see that merged at some point, which will allow users to select between CircuitPython and snek.

Smoothing Speed Changes

While the 9V supply on snekboard is designed to supply plenty of current for the motors, if you ask it to suddenly change how much it is producing, it places a huge load on the battery. When this happens, the battery voltage drops below the brown-out value for the SoC and the board resets.

I experimented with how to resolve this by ramping the power up and down in the snek application. That worked great; the motors could easily switch from full speed in one direction to full speed in the other direction.

Instead of having users add code to every snek application, I decided to move this functionality down into the snek implementation. I did this by modifying the PWM and direction pin values in a function called from the timer interrupt. This lets the application continue to run at full speed, while the motor controller slowly adjusts its output. No more resets when switching from full forward to full reverse.
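
A rough sketch of the idea (not the actual snek source; the motor helper functions and the step size are illustrative):

#include <stdint.h>

/* hypothetical hardware helpers: a direction pin and a PWM duty register */
extern void motor_set_direction(int forward);
extern void motor_set_pwm(uint8_t duty);

static volatile int16_t target_speed;   /* -255 .. 255, set by the application */
static int16_t current_speed;           /* value actually handed to the driver */

#define RAMP_STEP 4                     /* duty-cycle change per timer tick */

/* Called from the periodic timer interrupt: nudge the real output a small
 * step toward the requested speed, so the 9V supply never sees an abrupt
 * change in load. */
void motor_ramp_tick(void)
{
    int16_t delta = target_speed - current_speed;

    if (delta > RAMP_STEP)
        delta = RAMP_STEP;
    else if (delta < -RAMP_STEP)
        delta = -RAMP_STEP;
    current_speed += delta;

    motor_set_direction(current_speed >= 0);
    motor_set_pwm((uint8_t)(current_speed >= 0 ? current_speed : -current_speed));
}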

Future Plans

I've got the six v0.2 prototypes that I'll be able to use in for the upcoming class year, but I'm unsure of whether there would be enough interest in the broader community to have more of them made. Let me know if you'd be interested in purchasing snekboards; if I get enough responses, I'll look at running them through Crowd Supply or similar.

28 July, 2019 08:20PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.3.5

A new release of the anytime package is arriving on CRAN. This is the sixteenth release, and comes a good month after the 0.3.4 release.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release brings a reworked fallback mechanism enabled via the useR=TRUE option. Because Windows remains a challenging platform which, among other more important ailments, also does not provide timezone information, we no longer rely on the RApiDatetime package which exposes parts of the R API. This works everywhere where timezone information is available, but less so on Windows. Instead, we now use Rcpp::Function to call directly back into R. This received a considerable amount of testing, and the package should now work even better when either a timezone is set, or the Windows fallback is used, or both. My thanks to Christoph Sax for patiently testing and helping to debug this, as well as for his two pull requests contributing to this release (even if one of these is now redundant as we no longer use RApiDatetime).
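
For illustration, a couple of calls in the package's usual style (a hedged example; consult the package documentation for details):

library(anytime)
anytime("2019-07-28 10:15:00")               # character -> POSIXct, no format string
anydate(20190728)                            # integer   -> Date
anytime("2019-07-28 10:15:00", useR = TRUE)  # use the R-level fallback discussed above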

The full list of changes follows.

Changes in anytime version 0.3.5 (2019-07-28)

  • Fix use of Rcpp::Function-accessed Sys.setenv(), name all arguments in call to C++ (Christoph Sax in #95).

  • Relax constraint on Windows testing in several test files (Christoph Sax in #97).

  • Fix an issue related to TZ environment variable setting (Dirk in #101).

  • Change useR=TRUE behaviour by directly calling R via Rcpp (Dirk in #103 fixing #96).

  • Several updates to unit testing files aiming for more robust behaviour across platforms.

  • Updated documentation in manual pages, README and vignette.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker tracker off the GitHub repo can be use for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 July, 2019 03:37PM

hackergotchi for Joachim Breitner

Joachim Breitner

Custom firmware for the YQ8003 bicycle light

This blog post is about 18 months late, but better late than never...

The YQ8003

1½ years ago, when I was still a daredevil biking in Philly, I got interested in these fancy strips of LED lights that you put into your bike wheel and that form a stable image when you ride fast enough, both because of the additional visibility and safety and because they seem to be fun gadgets.

There are brands like Monkey Lights, but they are pretty expensive, and there are cheaper similar no-name products available, such as the YQ8003, which you can either order from China or hope to find on eBay for around $30 per piece.

The YQ8003 bike light

The YQ8003 bike light

Sucky software

The hardware is nice: water proof, easy to install, bright, long-lasting battery. But the software, oh my!

You need Windows to load your own pictures onto the device, and the application is really unpleasant to use: you can't easily save your edits and sequences of images, and so on.

But the software on the device itself (which sports a microcontroller) was also unsatisfying: the transformation it applies to the image assumes that the bar of LEDs goes through the center of the wheel. Obviously that is wrong, as there is the hub. With a small hub the difference is not so bad, but I have rather large hubs (a generator in the front hub, and internal gears in the rear hub), and this makes the image unstable: it jumps back and forth a bit.

Time to DIY!

So obviously I had to do something about it. At first I planned to just find out how to load my own pictures onto the hardware, using the existing software on the device. So I needed to find out the protocol.

I was running their program on Windows in VirtualBox, and quickly noticed that the USB connection that you use to load your data onto the YQ8003 is actually a serial-over-USB port. I found a sniffer for serial communication and used that to dump what the Windows app sent to the device. That was all pretty hairy, and I only did it once (and deleted the Windows setup soon), but luckily one dump was sufficient.

I did not find out where in the data sent to the light the image was encoded. But I did find that the protocol used to talk to the device is a standard protocol to talk to microcontrollers, something called “STC ISP”. With that information, I could find out that the microcontroller is a STC12LE5A60S2 with 22MHz and 60KB of RAM, and that it is “8051 compatible”, whatever that means.

So this is how I, for the first and so far only time, ventured into microcontroller territory. It was pretty straight-forward to get a toolchain to compile programs for this microcontroller (using sdcc) and to upload code to it (using stcgal), and I could talk to my code over the serial port. This is promising!

Reverse engineering

I also quickly found out how the magnet (which the device uses to notice when the wheel has done one rotation) is accessed: It triggers interrupt 0.
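
A hedged sketch (sdcc syntax, names made up) of what hooking that magnet interrupt looks like on an 8051-style part:

#include <8051.h>

static volatile unsigned int ticks_per_rev;  /* length of the last wheel revolution */
static volatile unsigned int tick_counter;   /* incremented elsewhere by a timer ISR */

/* External interrupt 0 fires once per revolution, when the magnet passes. */
void magnet_isr(void) __interrupt(0)
{
    ticks_per_rev = tick_counter;  /* remember how long this revolution took */
    tick_counter = 0;              /* start timing the next one */
}

void magnet_init(void)
{
    IT0 = 1;   /* edge-triggered external interrupt 0 */
    EX0 = 1;   /* enable external interrupt 0 */
    EA  = 1;   /* global interrupt enable */
}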

But finding out how to actually access the LEDs and make them light up was very tricky. This kind of information is not specific to the microcontroller (STC12LE5A60S2), for which I could find documentation, but really depends on how it is wired up.

I was able to extract, from the serial port communication dump mentioned earlier, the firmware in a way I could send it to the microcontroller. So I could always go back to a working state. Moreover I could disassemble that code, and try to make sense of it. But I could not make sense of it, i.e. I could not understand what the code was doing.

So if thinking does not help, maybe brute force does? I wrote a program that would take the working firmware, zero out parts of it. Then I would try that firmware and note if it still works. This way, my program would zero out ever more of the firmware, until only a few instructions are left that would still make the LEDs light up.

In the end I had, I think, 13 instructions left that made the LEDs light up lightly. Success! Or so I thought … the resulting program was pretty nonsensical. It essentially increments a value and writes another value to the address stored in the first value. So it just spews data all over the address range, wrapping around when at the end. No surprise it triggers the LEDs somewhere along the way…

(Still, I published the program to minimize binary data under the name bisect-binary – maybe you’ll find it useful for something.)
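
The core of that approach is simple enough to sketch in a few lines of Python. This is only an illustration of the idea, not the actual bisect-binary code; the still_works callback, which would flash the candidate image onto the device and check whether the LEDs still light up, is made up here.

def minimize(firmware: bytes, still_works) -> bytearray:
    """Zero out ever larger parts of the firmware while it keeps working.

    still_works is a callback (made up for this sketch) that tries the
    candidate firmware on the device and reports whether the LEDs
    still light up.
    """
    data = bytearray(firmware)
    chunk = len(data) // 2
    while chunk >= 1:
        offset = 0
        while offset < len(data):
            candidate = bytearray(data)
            candidate[offset:offset + chunk] = b"\x00" * min(chunk, len(data) - offset)
            if still_works(bytes(candidate)):
                data = candidate  # this part was not needed, keep it zeroed
            offset += chunk
        chunk //= 2  # try again with smaller pieces
    return data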

I actually don’t remember how I eventually figured out what to do, and which bytes and bits to toggle in which order. Maybe more reading, and some advice from people who know more about LEDs.

bSpokeLight

With that knowledge I could finally write my own firmware and user application. The part that goes onto the device is written in C and compiled with sdcc. And the part that runs on your computer is a command line application written in Haskell, that takes the pictures and animations you want, applies the necessary transformations (now taking the width of your hub into account!) and embeds that into the compiled C code to produce a firmware file that you can load onto your device using stcgal.

It supports images in all common formats, produces 8 colors and can store up to 8 images on the device, which it then cycles through according to the timing you specify. I dubbed the software bSpokeLight.

The light in action with more lights at the GPN19 (The short shutter speed of the camera prevents the visual effect in the eye that allows you to see the images)

It actually supports reading GIF animations, but I found that they are much harder to recognize later, unless I rotate the wheel very fast and you know what to look for. I am not sure if this is a limitation of the hardware (and our eyes), a problem with my code or a problem with the particular animations I have tried. Will need to experiment more.

Can you see the swing dancing couple?

As always, I am sharing the code in the hope that others find it useful as well. Thanks to Haskell, Nix and the iohk-nix project I can easily provide pre-compiled binaries for Windows and Linux, statically compiled for the latter for distribution-independence. Let me know if you try to use it and how that went.

28 July, 2019 09:30AM by Joachim Breitner ([email protected])

hackergotchi for Holger Levsen

Holger Levsen

20190728-minidebcamp-fosdem

Mini DebCamp Fosdem 2020?

So someone from Belgium just brought up the excellent idea of having a Mini DebCamp before and/or after FOSDEM 2020. I like it! What do you think?

On Monday after FOSDEM there will again be the Copyleft-Event from SFC, so maybe 3 days of hacking before FOSDEM would be better, but still, whatever; for planning these details there's now #debconf-fosdem on OFTC ;)

It's just an idea, but seriously, we'd only need to rent/find a room for 23-42 hackers nearby, and we'd be set. Debian people are good at self organizing, if they have network and a roof.

Also, there might be beer in Belgium, someone from Belgium just confirmed.

28 July, 2019 03:23AM

July 27, 2019

hackergotchi for Bits from Debian

Bits from Debian

DebConf19 closes in Curitiba and DebConf20 dates announced

DebConf19 group photo - click to enlarge

Today, Saturday 27 July 2019, the annual Debian Developers and Contributors Conference came to a close. Hosting more than 380 attendees from 50 different countries over a combined 145 event talks, discussion sessions, Birds of a Feather (BoF) gatherings, workshops, and activities, DebConf19 was a large success.

The conference was preceded by the annual DebCamp, held 14 July to 19 July, which focused on individual work and team sprints for in-person collaboration toward developing Debian, and which hosted a 3-day packaging workshop where new contributors were able to start on Debian packaging.

The Open Day, held on July 20 with over 250 attendees, featured presentations and workshops of interest to the wider audience, a Job Fair with booths from several of the DebConf19 sponsors, and a Debian install fest.

The actual Debian Developers Conference started on Sunday 21 July 2019. Together with plenaries such as the traditional 'Bits from the DPL', lightning talks, live demos and the announcement of next year's DebConf (DebConf20 in Haifa, Israel), there were several sessions related to the recent release of Debian 10 buster and some of its new features, as well as news updates on several projects and internal Debian teams, discussion sessions (BoFs) from the language, ports, infrastructure, and community teams, along with many other events of interest regarding Debian and free software.

The schedule was updated each day with planned and ad-hoc activities introduced by attendees over the course of the entire conference.

For those who were not able to attend, most of the talks and sessions were recorded and live-streamed, with videos made available through the Debian meetings archive website. Almost all of the sessions facilitated remote participation via IRC messaging apps or online collaborative text documents.

The DebConf19 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf20 will be held in Haifa, Israel, from 23 August to 29 August 2020. Following tradition, before the next DebConf the local organizers in Israel will start the conference activities with DebCamp (16 August to 22 August), with a particular focus on individual and team work toward improving the distribution.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Anti-Harassment team) are available to help so that both on-site and remote participants get the best experience at the conference, and find solutions to any issue that may arise. See the web page about the Code of Conduct on the DebConf19 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf19, particularly our Platinum Sponsors: Infomaniak, Google and Lenovo.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential open source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Argentina, and Bosnia and Herzegovina. More information about DebConf is available from https://debconf.org/.

About Infomaniak

Infomaniak is Switzerland's largest web-hosting company, also offering backup and storage services, solutions for event organizers, live-streaming and video on demand services. It wholly owns its datacenters and all elements critical to the functioning of the services and products provided by the company (both software and hardware).

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and is also a Debian partner sponsoring parts of Salsa's continuous integration infrastructure within Google Cloud Platform.

About Lenovo

As a global technology leader manufacturing a wide portfolio of connected products, including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions, Lenovo understands how critical open systems and platforms are to a connected world.

Contact Information

For further information, please visit the DebConf19 web page at https://debconf19.debconf.org/ or send mail to [email protected].

27 July, 2019 09:40PM by Laura Arjona Reina and Donald Norwood

hackergotchi for Ben Hutchings

Ben Hutchings

Debian LTS work, July 2019

I was assigned 18.5 hours of work by Freexian's Debian LTS initiative and worked all those hours this month.

I prepared and released Linux 3.16.70 with various fixes from upstream. I then rebased jessie's linux package on this. Later in the month, I picked the fix for CVE-2019-13272, uploaded the package, and issued DLA-1862-1. I also released Linux 3.16.71 with just that fix.

I backported the latest security update for Linux 4.9 from stretch to jessie and issued DLA-1863-1.

27 July, 2019 01:40PM

Talk: What's new in the Linux kernel (and what's missing in Debian)

As planned, I presented my annual talk about Linux kernel changes at DebConf on Monday—remotely. (I think this was a DebConf first.)

A video recording is already available (high quality, low quality). The slides are linked from my talks page and from the DebConf event page.

Thanks again to the video team for taking the time to work out video and audio routing with me.

27 July, 2019 01:24PM

hackergotchi for Laura Arjona Reina

Laura Arjona Reina

A new home for Debian in the Mastodon / ActivityPub fediverse: follow @[email protected] (and possible future moves)

TL;DR

Recent events in the fediverse in general, and related to the fosstodon.org instance in particular, have made me rethink the place where I’d like to handle the @debian account in the Mastodon/GNU Social/ActivityPub fediverse.
I couldn’t decide on a “final” place yet, but I’m exploring options (including self-hosting).

For now, I’ve moved the account to @[email protected] – Please follow @debian there. Thank you Framasoft for administering and providing the service.

(Some) context

Note: This paragraph was updated (2019-07-28); thanks to the people who pointed out to me that it was unclear. I hope this new wording and the added details clarify my position.

For a summary of what happened, plus some thoughts thrown onto the table, you can read this article by Brandon ‘LinuxLiaison’ Nolet and this one by ’emsenn’. I’ve been thinking about all this, and I decided to leave the fosstodon.org instance because I believe there are underlying issues that the provided apology does not solve and that do not help to foster the welcoming, diverse and inclusive environment where I’d like to be, for me and for this non-official Debian account. There is more info out there and several different personal opinions, so I guess people interested in learning more about the context can find it by themselves.

Roadmap

  • Starting 2019-07-28 I’ll post the micronews.debian.org RSS feed in @[email protected]
  • I will continue posting the micronews.debian.org RSS feed to @[email protected] too, to give time for this news to spread and people to move.
  • I will pin a toot linking to this blog post in both accounts, because  @[email protected] may be temporary (or not; we’ll see).
  • On 1 September I will stop sending the micronews feed to @[email protected] and will only post a toot linking to this blog post from time to time.
  • On 1 October I will stop posting anything from @[email protected] and close the account or make it dormant or whatever.
  • I don’t think I will take a new decision about a final or future move before October. I will try to put time into exploring options from September until the end of the year. Depending on my availability and the available help from Debian friends, the final home of the @debian account in the fediverse will be settled sooner or later… you know, “when it’s ready”.

Thanks for understanding, and for your help

All this caught me at a “bad moment” (very busy with Debian and non-Debian stuff, plus, personally, lower energy than usual). I apologise for not giving many details and also for not reacting more quickly.

I appreciate if you can spread this news so people follow the new account easily.
I would like to thank the friends who gave me a heads up about what was happening, helped me to understand at a time when I did not have much time to read everything, and were also patient in waiting for me to take a decision.

Reminder: the account, wherever it’s hosted, is a mirror of micronews.debian.org

Finally, I would like to remind everybody that the @debian account in the fediverse, wherever it is hosted, is not official. It just posts the RSS feed provided by https://micronews.debian.org, which is one of the official sources of news about Debian. Micronews includes short news produced or selected by the Debian Publicity team and also broadcasts links to the longer official announcements posted in the other official channels: the Debian blog, the Debian website, and the Debian announce and news mailing lists.

27 July, 2019 12:13PM by larjona

Enrico Zini

Opinion Sort

«Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.

This discrepancy is common in public life, where people are frequently impelled— whether by their own propensities or by the demands of others—to speak extensively about matters of which they are to some degree ignorant.

Closely related instances arise from the widespread conviction that it is the responsibility of a citizen in a democracy to have opinions about everything, or at least everything that pertains to the conduct of his country’s affairs.

The lack of any significant connection between a person’s opinions and his apprehension of reality will be even more severe, needless to say, for someone who believes it his responsibility, as a conscientious moral agent, to evaluate events and conditions in all parts of the world.»

(From Harry G. Frankfurt's On Bullshit)

Opinion Sort

In a world where it is more important to have a quick opinion than a thorough understanding, I propose this novel sorting algorithm.

from typing import Any, Callable, List

def opinion_sort(list: List[Any], post: Callable[[List[Any]], None]):
    """
    list: a list of elements to sort in place
    post: a callable that requires a sorted list as input and does
          proper error checking, as they should do
    """
    if list[0] > list[1]:
        list[0], list[1] = list[1], list[0]
    while True:
        try:
            # Assert opinion: "It is a sorted list!"
            post(list)
        except NotSortedException as e:
            # Someone disagrees, and they have a good point
            list[e.unsorted_idx_1], list[e.unsorted_idx_2] = \
                list[e.unsorted_idx_2], list[e.unsorted_idx_1]
        else:
            break
    # The list is now sorted, and the callable has to agree

This algorithm is the most efficient sorting algorithm, because it can sort a list by only looking at the first two elements.
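
For the sake of argument, a possible post callable and the exception it raises could look like this; both are made up here just to make the sketch runnable, they are not part of the algorithm itself.

class NotSortedException(Exception):
    """Raised by a conscientious callable that spots two out-of-order elements."""
    def __init__(self, i, j):
        super().__init__("elements %d and %d are out of order" % (i, j))
        self.unsorted_idx_1 = i
        self.unsorted_idx_2 = j

def post(lst):
    # proper error checking, as they should do
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            raise NotSortedException(i, i + 1)

numbers = [3, 1, 4, 1, 5, 9, 2, 6]
opinion_sort(numbers, post)
print(numbers)  # the callable eventually has to agree: [1, 1, 2, 3, 4, 5, 6, 9]

Since every accepted objection swaps one adjacent inversion, the number of inversions strictly decreases, so even this opinionated loop eventually terminates with a sorted list.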

27 July, 2019 09:29AM


July 26, 2019

hackergotchi for Eddy Petrișor

Eddy Petrișor

Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?

Update 2019-Jul-27: In the code below my StackVec type was more complicated than it had to be, I had been using StackVec<'a, &'a mut T> instead of StackVec<'a, T> where T: 'a. I am unsure how I ended up making the type so complicated, but I suspect the lifetimes mismatch errors and the attempt to implement IntoIterator were the reason why I made the original mistake.

Corrected code accordingly.



I'm trying to go through Sergio Benitez's CS140E class and I am currently at Implementing StackVec. StackVec is something that currently looks like this:

/// A contiguous array type backed by a slice.
///
/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`
/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`
/// requires no memory allocation as it is backed by a user-supplied slice. As a
/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This
/// results in `push` being fallible: if `push` is called when the vector is
/// full, an `Err` is returned.
#[derive(Debug)]
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
    capacity: usize,
}
The initial skeleton did not contain the derive Debug and the capacity field, I added them myself.

Now I am trying to understand what needs to happen behind:
  1. IntoIterator
  2. when in no_std
  3. with a custom type which has generics
  4. and has to use lifetimes
I don't know what I'm doing, but I might have managed to do it:

pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {
    type Item = &'a mut T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop();
        self.index += 1;

        result
    }
}

Corrected code as of 2019-Jul-27:
pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, T> {
    type Item = T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop().clone();
        self.index += 1;

        result
    }
}



I was really struggling to understand what should the returned iterator type be in my case, since, obviously, std::vec is out because a) I am trying to do a no_std implementation of something that should look a little like b) a std::vec.

That was until I found this wonderful example on a custom type without using any already implemented Iterator, but defining the helper PixelIntoIterator struct and its associated impl block:

struct Pixel {
    r: i8,
    g: i8,
    b: i8,
}

impl IntoIterator for Pixel {
    type Item = i8;
    type IntoIter = PixelIntoIterator;

    fn into_iter(self) -> Self::IntoIter {
        PixelIntoIterator {
            pixel: self,
            index: 0,
        }

    }
}

struct PixelIntoIterator {
    pixel: Pixel,
    index: usize,
}

impl Iterator for PixelIntoIterator {
    type Item = i8;
    fn next(&mut self) -> Option<Self::Item> {
        let result = match self.index {
            0 => self.pixel.r,
            1 => self.pixel.g,
            2 => self.pixel.b,
            _ => return None,
        };
        self.index += 1;
        Some(result)
    }
}


fn main() {
    let p = Pixel {
        r: 54,
        g: 23,
        b: 74,
    };
    for component in p {
        println!("{}", component);
    }
}
The part in bold was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.

Note that, once I had only one new thing left, the generics - luckily the lifetime part seemed to simply be considered part of the generic thing - everything was easier to navigate.


Still, the fact that there are so many new things at once, one of them being lifetimes - which cannot be taught, only experienced, @oli_obk - makes things very confusing.

Even if I think I managed it for IntoIterator, I am similarly confused about implementing "Deref for StackVec" for the same reasons.

I think I am experiencing first-hand what Oliver Scherer was saying about how big infodumps at the beginning are not the way to go. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, Oli?

All that aside, what should be the signature of the impl? Is this OK?

impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {
    type Target = T;

    fn deref(&self) -> &Self::Target;
}
Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it's unclear, at least to me, at this point. And because of that I am unsure what the implementation should even look like.

I don't know what I'm doing, but I hope things will become clear with more exercise.

26 July, 2019 11:49PM by eddyp ([email protected])

Jonathan Wiltshire

Daisy and George at Debian’s Conference Dinner

Daisy and George have spent the week at the Debian Conference. Tonight is the conference dinner.

The menu is more complicated than usual, because it is in both Portuguese and English.

Daisy and George have made many friends this week.

Dinner is over. It’s time for some serious work.

26 July, 2019 11:31PM by Jon

Giovanni Mascellani

My take on OpenPGP best practices

After having seen a few talks at DebConf on GnuPG and related things, I would like to document here how I currently manage my OpenPGP keys, in the hope they can be useful for other people or for discussion. This is not a tutorial, meaning that I do not give you the commands to do what I am saying, otherwise it would become way too long. If there is the need to better document how to implement these best practices, I will try to write another post.

I actually do have two OpenPGP certificates, D9AB457E and E535FA6D. The first one is RSA 4096 and the second one is Curve25519. The reason for having two certificates is algorithm diversity: I don't know which one between RSA and Curve25519 will be the first to be considered less secure or insecure, therefore I would like to be ready for both scenarios. Having two certificates already allows me to do signature hunting on both, in such a way that it is easy to transition from one to the other as soon as there is the need.

The key I currently use is the RSA one, which is also the one available in the Debian keyring.

(If you search on the keyservers you will find many other keys with my name; they are obsolete, meant for my internal usage or otherwise not in use; just ignore them!)

Even if the two primary keys are different, their subkeys are the same (apart from some older cruft now revoked), meaning that they have the same key material. This is useful, because I can use the same hardware token for both keys (most hardware tokens only have three key slots, one for each subkey capability, so to have two primary keys ready for use you would need two tokens, unless the two keys share their subkeys). I have one subkey for each subkey capability (sign, encrypt and authentication), which are Curve25519 keys and are stored in a Nitrokey Start token. I also have, but tend to not use, one RSA subkey for each capability, which are stored on an OpenPGP card. Thanks to some date tweaking, both certificates are configured in such a way that Curve25519 subkeys are always preferred over RSA subkeys, but I also want to retain the RSA keys for corner cases where Curve25519 is not available.

The reason to choose Curve25519 over RSA for default usage is that they are faster and generate smaller signatures. I have no idea which one is considered more secure, but I believe that neither of them is the weak link in my security chain.

The primary keys have an expiration date, which is always my birthday. Such a choice is for remembering, a couple of months in advance, to extend it by one year, so that the key remains valid. Choosing the update interval here is of course a compromise between security and convenience. One year seems fine. I see no advantage in setting an expiration date on subkeys, since I can always use the primary key to revoke them. It might be useful to set an expiration date if I had a subkey rotation strategy, but I don't, and unfortunately with OpenPGP it is a bit difficult to have one, since all subkeys are stored forever in the certificate, which would quickly become bloated.

The primary keys' private material is stored on an external disk that is normally disconnected from any computer, so completely inaccessible from the Internet. I connect it to my computer when I need to do operations that require the primary key, like signing other keys, managing subkeys or extending the key validity. This setup is not ideal, because it would be better to only connect the external storage to a machine that is always offline (and therefore is less likely to have been compromised). But that would require maintaining another machine, and as usual one has to compromise between security and convenience. Also, that external disk contains other data too, so it gets connected to my laptop for operations other than working with OpenPGP certificates as well. I could improve here, but it is still better than keeping the primary key as a file on my computer.

I also have copies of my keys' private material (both for primary keys and subkeys) and revocation certificates on a bunch of paper sheets hidden somewhere in my house, just in case the external disk should fail. A common tool for this step is paperkey, although I did follow this tutorial to encode the secret key in a number of data matrices.

Overall, while my setup is perfectible, I believe it is also reasonably secure for my use case, and quite convenient to use.

26 July, 2019 09:30PM by Giovanni Mascellani

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Vote craziness

Of all the things I've seen in Debian, spamming DDs with a vote that's not a vote (“which of these terrible things the DPL did are the worst causes of everything that's wrong in the world”) has to be among the craziest. (I won't link to it here.)

26 July, 2019 06:00PM

hackergotchi for Michael Prokop

Michael Prokop

Debian buster: changes in coreutils #newinbuster

Debian buster is there, and similar to what we had with #newinwheezy, #newinjessie and #newinstretch it’s time for #newinbuster!

One package that isn’t new but its tools are used by many of us is coreutils, providing many essential system utilities. We have coreutils v8.26-3 in Debian/stretch and coreutils v8.30-3 in Debian/buster. Compared to the changes between jessie and stretch there are no new tools, but there are some new options available that I’d like to point out.

New features/options

b2sum + md5sum + sha1sum + sha224sum + sha256sum + sha384sum + sha512sum (compute and check message digest):

  -z, --zero           end each output line with NUL, not newline, and disable file name escaping

cp (copy files and directories):

  Use --reflink=never to ensure a standard copy is performed.

env (run a program in a modified environment):

  -C, --chdir=DIR      change working directory to DIR
  -S, --split-string=S  process and split S into separate arguments;
                        used to pass multiple arguments on shebang lines
  -v, --debug          print verbose information for each processing step

ls (list directory contents), dir + vdir (list directory contents):

  --hyperlink[=WHEN]     hyperlink file names; WHEN can be 'always' (default if omitted), 'auto', or 'never'

This --hyperlink option is especially worth mentioning if you’re using a recent terminal emulator (especially based on VTE), see Hyperlinks (a.k.a. HTML-like anchors) in terminal emulators for further information.

rm (remove files or directories):

  --preserve-root=all   do not remove '/' (default); with 'all', reject any command line argument on a separate device from its parent

split (split a file into pieces):

  -x                      use hex suffixes starting at 0, not alphabetic
  --hex-suffixes[=FROM]  same as -x, but allow setting the start value

timeout (run a command with a time limit):

  -v, --verbose  diagnose to stderr any signal sent upon timeout

Changes:

date (print or set the system date and time):

--rfc-2822 (AKA -R) was renamed to --rfc-email, while --rfc-2822 is still supported

nl (write each FILE to standard output, with line numbers added):

Old default options: -bt        -fn -hn -i1 -l1 -nrn   -sTAB   -v1 -w6 
New default options: -bt -d'\:' -fn -hn -i1 -l1 -n'rn' -s<tab> -v1 -w6

26 July, 2019 04:45PM by mika

Debian buster: changes in util-linux #newinbuster

Debian buster is there, and similar to what we had with #newinwheezy, #newinjessie and #newinstretch it’s time for #newinbuster!

Update on 2019-07-26 22:55 UTC: Cyril Brulebois pointed out that findmnt (find a filesystem) was already available in Debian/stretch as part of the mount package; updated the blog post accordingly.

One package that isn’t new but its tools are used by many of us is util-linux, providing many essential system utilities. We have util-linux v2.29.2-1+deb9u1 in Debian/stretch and util-linux v2.33.1-0.1 in Debian/buster. There are many new options available and we also have a few new tools available.

Tools that have been taken over from / moved to other packages

  • cfdisk + fdisk + sfdisk (tools to display or manipulate a disk partition table) were moved from util-linux to fdisk
  • findmnt (find a filesystem) is no longer shipped via the mount binary package (of util-linux source package) but part of the util-linux binary package itself nowadays
  • setpriv (run a program with different Linux privilege settings) is no longer shipped as separate binary package of util-linux but part of the util-linux binary package itself nowadays
  • su (change user ID or become superuser) was moved from login package (kudos to Andreas Henriksson for this!)

Deprecated / removed tools

Tools that are no longer shipped with util-linux as of Debian/buster:

  • line binary (copies one line (up to a newline) from standard input to standard output), the head binary is its suggested replacement
  • pg binary (browse pagewise through text files), it’s marked deprecated in POSIX since 1997
  • tailf binary (follow the growth of a log file), it was deprecated in 2017 and `tail -f` from coreutils works fine
  • tunelp binary (set various parameters for the lp device), parallel port printers are suspected to be extinct by now

New tools

blkzone (run zone command on a device):

Usage:
 blkzone <command> [options] <device>

Run zone command on the given block device.

Commands:
 report       Report zone information about the given device
 reset        Reset a range of zones.

Options:
 -o, --offset <sector>  start sector of zone to act (in 512-byte sectors)
 -l, --length <sectors> maximum sectors to act (in 512-byte sectors)
 -c, --count <number>   maximum number of zones
 -v, --verbose          display more details

 -h, --help             display this help
 -V, --version          display version

For more details see blkzone(8).

chmem (configure memory, set a particular size or range of memory online or offline):

Usage:
 chmem [options] [SIZE|RANGE|BLOCKRANGE]

Set a particular size or range of memory online or offline.

Options:
 -e, --enable       enable memory
 -d, --disable      disable memory
 -b, --blocks       use memory blocks
 -z, --zone <name>  select memory zone (see below)
 -v, --verbose      verbose output
 -h, --help         display this help
 -V, --version      display version

Supported zones:
 DMA
 DMA32
 Normal
 Highmem
 Movable
 Device

For more details see chmem(8).

choom (display and adjust OOM-killer score):

Usage:
 choom [options] -p pid
 choom [options] -n number -p pid
 choom [options] -n number command [args...]]

Display and adjust OOM-killer score.

Options:
 -n, --adjust <num>     specify the adjust score value
 -p, --pid <num>        process ID

 -h, --help             display this help
 -V, --version          display version

For more details see choom(1).

fincore (count pages of file contents in core):

Usage:
 fincore [options] file...

Options:
 -J, --json            use JSON output format
 -b, --bytes           print sizes in bytes rather than in human readable format
 -n, --noheadings      don't print headings
 -o, --output <list>   output columns
 -r, --raw             use raw output format

 -h, --help            display this help
 -V, --version         display version

Available output columns:
       PAGES  file data resident in memory in pages
        SIZE  size of the file
        FILE  file name
         RES  file data resident in memory in bytes

For more details see fincore(1).

lsmem (list the ranges of available memory with their online status):

Usage:
 lsmem [options]

List the ranges of available memory with their online status.

Options:
 -J, --json           use JSON output format
 -P, --pairs          use key="value" output format
 -a, --all            list each individual memory block
 -b, --bytes          print SIZE in bytes rather than in human readable format
 -n, --noheadings     don't print headings
 -o, --output <list>  output columns
     --output-all     output all columns
 -r, --raw            use raw output format
 -S, --split <list>   split ranges by specified columns
 -s, --sysroot <dir>  use the specified directory as system root
     --summary[=when] print summary information (never,always or only)

 -h, --help           display this help
 -V, --version        display version

Available output columns:
      RANGE  start and end address of the memory range
       SIZE  size of the memory range
      STATE  online status of the memory range
  REMOVABLE  memory is removable
      BLOCK  memory block number or blocks range
       NODE  numa node of memory
      ZONES  valid zones for the memory range

For more details see lsmem(1).

New features/options

agetty + getty (alternative Linux getty):

  --list-speeds          display supported baud rates

blkid (locate/print block device attributes) gained a bunch of long options:

Options:

  --cache-file          same as -c 
  --no-encoding         same as -d
  --garbage-collect     same as -g
  --output              same as -o
  --list-filesystems    same as -k
  --match-tag           same as -s
  --match-token         same as -t
  --list-one            same as -l
  --label               same as -L
  --uuid                same as -U

Low-level probing options:

  --probe               same as -p
  --info                same as -i
  --size                same as -S
  --offset              same as -O
  --usages              same as -u
  --match-types         same as -n

dmesg (print or control the kernel ring buffer):

  -p, --force-prefix          force timestamp output on each line of multi-line messages

fallocate (preallocate or deallocate space to a file):

  -i, --insert-range   insert a hole at range, shifting existing data
  -x, --posix          use posix_fallocate(3) instead of fallocate(2)

findmnt (find a filesystem):

  --output-all       output all available columns
  --pseudo           print only pseudo-filesystems
  --real             print only real filesystems
  --tree             enable tree format output if possible

fstrim (discard unused blocks on a mounted filesystem):

  -A, --fstab         trim all supported mounted filesystems from /etc/fstab
  -n, --dry-run       does everything, but trim

hwclock (read or set the hardware clock (RTC)):

  -l                 same as --localtime
  --delay <sec>      delay used when set new RTC time
  -v, --verbose      display more details

lsblk (list block devices):

Options:

  -z, --zoned          print zone model
  -T, --tree           use tree format output
  --sysroot <dir>  use specified directory as system root

Available output columns:

  PATH     path to the device node
  FSAVAIL  filesystem size available
  FSSIZE   filesystem size
  FSUSED   filesystem size used
  FSUSE%   filesystem use percentage
  PTUUID   partition table identifier (usually UUID)
  PTTYPE   partition table type
  ZONED    zone model

lscpu (display information about the CPU architecture):

  -J, --json              use JSON for default or extended format

lslocks (list local system locks):

Options:

  -b, --bytes            print SIZE in bytes rather than in human readable format
      --output-all       output all columns

Available output columns:

  TYPE  kind of lock

lslogins (display information about known users in the system):

Options:

      --output-all         output all columns

Available output columns:

  PWD-METHOD  password encryption method

lsns (list namespaces):

Options:

      --output-all       output all columns
  -W, --nowrap           don't use multi-line representation

Available output columns:

  NETNSID  namespace ID as used by network subsystem
     NSFS  nsfs mountpoint (usually used network subsystem)

nsenter (run program with namespaces of other processes):

  -a, --all              enter all namespaces
      --output-all     output all columns
  -S, --sector-size <num>  overwrite sector size
      --list-types     list supported partition types and exit

rename.ul (rename files):

  -n, --no-act        do not make any changes
  -o, --no-overwrite  don't overwrite existing files
  -i, --interactive   prompt before overwrite

runuser (run a command with substitute user and group ID):

  -w, --whitelist-environment <list>  don't reset specified variables
  -P, --pty                       create a new pseudo-terminal

setsid (run a program in a new session):

  -f, --fork     always fork

setterm (set terminal attributes):

  --resize                          reset terminal rows and columns

unshare (run program with some namespaces unshared from parent):

  --kill-child[=<signame>]  when dying, kill the forked child (implies --fork), defaults to SIGKILL

wipefs (wipe a signature from a device):

Options:

  -i, --noheadings    don't print headings
  -J, --json          use JSON output format
  -O, --output <list> COLUMNS to display (see below)

Available output columns:
     UUID  partition/filesystem UUID
    LABEL  filesystem LABEL
   LENGTH  magic string length
     TYPE  superblok type
   OFFSET  magic string offset
    USAGE  type description
   DEVICE  block device name

zramctl (set up and control zram devices):

  -a, --algorithm lzo|lz4|lz4hc|deflate|842   compression algorithm to use (new compression algorithms lz4hc, deflate + 842)
       --output-all          output all columns

Deprecated and removed options

hwclock (read or set the hardware clock (RTC)):

  --badyear        ignore RTC's year because the BIOS is broken
  -c, --compare    periodically compare the system clock with the CMOS clock
  --getepoch       print out the kernel's hardware clock epoch value
  --setepoch       set the kernel's hardware clock epoch value to the value given with --epoch

unshare (run program with some namespaces unshared from parent):

  -s     (use --setgroups instead)

26 July, 2019 04:43PM by mika

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.2: Small Polish

The second maintenance release of Rcpp, following up on the 10th anniversary and the 1.0.0. release, was prepared last Saturday and released to both the Rcpp drat repo and CRAN. Following all the manual inspection (including a false positive result from reverse dependencies), it has finally arrived on CRAN earlier today. The corresponding Debian package was also uploaded, and binaries have since been built.

Just like for Rcpp 1.0.1, we have a four month gap between releases which seems appropriate given both the changes still being made (see below) and the relative stability of Rcpp. It still takes work to release this as we run multiple extensive sets of reverse dependency checks so maybe one day we will switch to six month cycle.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1713 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 176 in BioConductor. Per the (partial) logs of CRAN downloads, we have had over one million downloads a month following the previous release.

This release features a number of different pull requests by four different contributors as detailed below.

Changes in Rcpp version 1.0.2 (2019-07-20)

  • Changes in Rcpp API:

    • Files in src/ are now consistently lowercase (Dirk in #956).

    • The Rcpp 'API Version' is now accessible via getRcppVersion() (Dirk in #963).

  • Changes in Rcpp Attributes:

    • The second END wrapper macro also gets UNPROTECT and a variable reference suppressing compiler warnings (Dirk in #953 fixing #951).

    • Default function arguments are parsed correctly (Pierrick Roger in #977 fixing #975)

  • Changes in Rcpp Sugar:

    • Added decreasing parameter to sort_unique() (James Balamuta in #958 addressing #950).
  • Changes in Rcpp Deployment:

    • Travis CI unit tests are now always running irrespective of the package version (Dirk in #954).
  • Changes in Rcpp Documentation:

    • The Rcpp-modules vignette now covers the RCPP_EXPOSED_* macros, and the Rcpp-extending vignette references it (Ralf Stubner in #959 fixing #952)

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 July, 2019 12:56AM

July 25, 2019

hackergotchi for Jonathan Dowland

Jonathan Dowland

Beatrice Dowland

My second daughter, Beatrice Dowland, was born in the last week or so; we are all healthy and happy (but tired). I'm taking most of August off from work (and similar activities). See you soon!

(previously)

25 July, 2019 02:26PM

July 24, 2019

Jose M. Calhariz

at daemon 3.2.0

There is a new version of at daemon, 3.2.0. Some new features were implemented, hence the bump in the minor version.

You can download the source and the signature from http://software.calhariz.com/at/

The changelog:

at 3.2.0 (2019-07-24):
  Jose M Calhariz
        Print time of new job before the input of the commands, Closes #863045
        Do not drop seconds on -t option, Closes #792040
        Start using nice levels from 0 instead of 2. Closes #519716
        Correctly handle DST when specifying a UTC time. Closes #364975
  Gerhard Poul:
        Add flag to send email to other user. MR 5

24 July, 2019 11:38PM by Jose M Calhariz

Hideki Yamane

mmdebstrap is a nice tool, but the newest debootstrap is not so bad :)

mmdebstrap is fast because it uses apt for package dependency resolution and download. Yeah, it's true, almost right - but most of the reason for "fast" is just about "downloading packages", I guess.

debootstrap uses wget to download packages; it's serial execution, so it waits for each download, while mmdebstrap's apt does not. If you use the "--cache-dir" option for debootstrap, the execution time is almost the same.

$ time sudo mmdebstrap unstable unstable-chroot
(snip)
real 2m58.670s
user 0m23.559s
sys 0m26.387s

$ time sudo debootstrap sid sid
(snip)
real 7m22.955s
user 0m57.450s
sys 0m37.894s
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid
(snip)
real 2m44.752s

user 0m54.504s
sys 0m33.666s

Anyway, I should consider a "--use-apt" option or something for debootstrap - for a future release :)

24 July, 2019 01:02PM by Hideki Yamane ([email protected])

Elana Hashman

How to grant (Tom Marble) Debian Maintainer access

I run the Debian Clojure Team, which means that occasionally folks volunteer to help out with Clojure packaging. This is awesome! Since I'm lazy, I don't want to have to sponsor every package upload for folks who have proven their aptitude at packaging. Hence, sometimes I need to grant Debian Maintainers upload access to team packages.

Folks typically point at this email as documentation of how to grant DM access on packages. However, I have zero desire to hand-craft artisanal dak commands. So, I try to leverage some existing tools I already have installed on my system to help me out—namely, the dcut tool from the dput-ng package.

The commands

Tom Marble wanted DM access to the libjava-jdbc-clojure package, after I suggested he try doing a new version upload for it. I previously gave him DM access to maintain shimdandy and com-hypirion-io-clojure. But I couldn't remember exactly how I did it...

According to the dcut manpage, this should be as simple as running

dcut dm --uid "Tom Marble" --allow libjava-jdbc-clojure

However, there is a slight problem: I don't normally run dput (or dcut) on a machine with my Debian key present, since I keep my only copy on my laptop. For various reasons (mostly related to inertia, external monitors, and wifi drivers), I run Linux Mint on my laptop, and the version of dcut available there doesn't actually work properly, so I can't just run dcut locally...

What to do about this?

It turns out that there is an undocumented flag, -S or --save, that will save the generated commands locally.

dcut -s -S dm --uid "Tom Marble" --allow libjava-jdbc-clojure

The -s flag, or --simulate, ensures that we don't try to upload the file to the archive just yet. This will produce a file in the current directory with a name similar to ehashman-1564016122.dak-commands. Take a look:

ehashman@corn-syrup:~$ cat ehashman-1564016122.dak-commands

Archive: ftp.upload.debian.org
Uploader: Elana Hashman <[email protected]>

Action: dm
Fingerprint: 884A52C4AC8ABB931D158FA840BFEE868B055D9A
Allow: libjava-jdbc-clojure

Now is a good time to verify that the key and package are correct. You can then sign this file:

gpg --sign --armour --clearsign ehashman-1564016122.dak-commands

And use dcut to upload it:

dcut upload -f ehashman-1564016122.dak-commands

Once the file has been processed, check the FTP Master DM log to make sure your DM changes have been set correctly.

See you on the next episode of "me creating problems for myself with scary Debian tools" 👋

24 July, 2019 04:00AM by Elana Hashman

July 23, 2019

hackergotchi for Aigars Mahinovs

Aigars Mahinovs

Debconf 19 photos

The main feed for my photos from Debconf 19 in Curitiba, Brazil is currently in my GPhoto album. I will later also sync it to Debconf git share.

The first batch is up, but now the hardest part comes - the group photo will be happening a bit later today :)

Update: the group photo is ready! The smaller version is in the GPhoto album, but full version is linked from DebConf/19/Photos

Update 2: The day trip photos are up and the photos are also in the Debconf Git LFS share.

23 July, 2019 04:02PM by aigarius

Molly de Blanc

Free software activities (June 2019)

I know this is almost a month late, but I am sharing it nonetheless. My June was dominated by my professional and personal life, leaving little time for expansive free software activities. I’ll write a little more in my OSI report for June.

A photo of a multi-use path with trees in the background. There is a short pole in the foreground with a "Caution Newt Crossing" sign.

Activities (Personal)

  • The biggest thing I did was head over to the Other Cambridge (a.k.a. Cambridge Prime, a.k.a. Cambridge, UK) for a Debian sprint with the Debian Project Leader, Debian Account Managers, and Debian Anti-Harassment team.
  • We had some Anti-Harassment meetings.
  • We had some Outreach meetings.
  • I helped both teams prep for DebConf.

Activities (Professional)

  • Worked on organizing sponsorships for GUADEC. If you’re interested in attending or sponsoring GUADEC, I highly recommend it!
  • Wrote profiles of members of the GNOME community for the GNOME Engagement blog. I also wrote a newsletter for Friends of GNOME. You can see both online.
  • Attended Diversity & Inclusion team meetings, participated in the Engagement team discussions, and spoke with several GUADEC organizers.

23 July, 2019 02:14PM by mollydb

hackergotchi for David Bremner

David Bremner

Yet another buildinfo database.

What?

I previously posted about my extremely quick-and-dirty buildinfo database using buildinfo-sqlite. This year at DebConf, I re-implemented this using a PostgreSQL backend, and added some new features.

There are already buildinfo and buildinfos. I was informed I need to think up a name that clearly distinguishes it from those two. Thus I give you builtin-pho.

There's a README for how to set up a local database. You'll need 12GB of disk space for the buildinfo files and another 4GB for the database (pro tip: you might want to move the location of your PostgreSQL data_directory, depending on how roomy your /var is).

Demo 1: find things build against old / buggy Build-Depends

select distinct p.source,p.version,d.version, b.path
from
      binary_packages p, builds b, depends d
where
      p.suite='sid' and b.source=p.source and
      b.arch_all and p.arch = 'all'
      and p.version = b.version
      and d.id=b.id and d.depend='dh-elpa'
      and d.version < debversion '1.16'

Demo 2: find packages in sid without buildinfo files

select distinct p.source,p.version
from
      binary_packages p
where
      p.suite='sid'
except
        select p.source,p.version
from binary_packages p, builds b
where
      b.source=p.source
      and p.version=b.version
      and ( (b.arch_all and p.arch='all') or
            (b.arch_amd64 and p.arch='amd64') )
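
If you'd rather run these from a script than from psql, a minimal Python sketch using psycopg2 might look like the following. The database name and connection parameters are assumptions here; use whatever you set up while following the README.

import psycopg2

# Connection parameters are assumptions; adjust to your local setup.
conn = psycopg2.connect(dbname="buildinfo", host="localhost")

query = """
select distinct p.source, p.version
from binary_packages p
where p.suite = 'sid'
except
select p.source, p.version
from binary_packages p, builds b
where b.source = p.source
  and p.version = b.version
  and ((b.arch_all and p.arch = 'all') or
       (b.arch_amd64 and p.arch = 'amd64'))
"""

with conn, conn.cursor() as cur:
    cur.execute(query)  # demo 2: packages in sid without buildinfo files
    for source, version in cur.fetchall():
        print(source, version)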

Disclaimer

Work in progress by an SQL newbie.

23 July, 2019 03:00AM

July 22, 2019

Candy Tsai

Outreachy Week 6 – Week 7: Getting Code Merge

Already half way through the internship! I have implemented some features and opened a merge request. So… what now? Let’s get those changes merged once and for all! Since I’m already at mid-point, there’s also a video shared on what I’ve done so far in this project.

  • Breaking large merge request into smaller pieces
  • Thoughts on remote pair programming
  • Video sharing for the current progress with the project

Making that video was probably the most time-consuming part. Paying great respects to all YouTubers out there!

Breaking The Merge Request

When I looked back at my merge request, it actually started out quite small and precise. After discussions in the merge request, I started to fix things in the same merge request and then it just got bigger and bigger, and we had to separate out the “mergeable parts” to make actual progress in this project.

Remote Pair Programming

You can’t overhear what others are doing or learn something about your colleagues through gossip over lunch break when working remotely. So after being stuck for quite a bit, terceiro suggested that we try pair programming.

After our first remote pair programming session, I don't think it is much different from pair programming in person. We shared the same terminal, looked at the same code and discussed just like people standing side by side.

Through our pair programming session, I found out that I had a bad habit. I didn’t run tests on my code that often, so when I had failing tests that didn’t fail before, I spent more time debugging than I should have. Pair programming gave insight to how others work and I think little improvements go a long way.

Week 6

And then I took almost a week off, so my week 7 was delayed.

Week 7

I found out that I can make small merge requests and list the merge requests each one depends on. GitLab will automatically handle the rest for me once a request is merged.

  • finally finished breaking down my large merge request
  • added the history section

22 July, 2019 10:20AM by Candy Tsai

hackergotchi for Daniel Lange

Daniel Lange

Cleaning a broken GnuPG (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: SC  
     Vertrauen: unbekannt     Gültigkeit: unbekannt
sub  ed25519/0xA83CAE94D3DC3873
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: S  
sub  cv25519/0xAA24CC81B8AED08B
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: E  
sub  rsa3072/0xDC0F82625FA6AADE
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: E  
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3)  Robert J. Hansen <rob@hansen.engineering>

User-ID "Robert J. Hansen <[email protected]>": 49705 Signaturen entfernt
User-ID "Robert J. Hansen <[email protected]>": 49704 Signaturen entfernt
User-ID "Robert J. Hansen <[email protected]>": 49701 Signaturen entfernt

pub  rsa3072/0x1DCBDC01B44427C7
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: SC  
     Vertrauen: unbekannt     Gültigkeit: unbekannt
sub  ed25519/0xA83CAE94D3DC3873
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: S  
sub  cv25519/0xAA24CC81B8AED08B
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: E  
sub  rsa3072/0xDC0F82625FA6AADE
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: E  
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely usable 3835 byte file of the clean public key. If you supply a keyring instead of --no-default-keyring, it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).
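For illustration, a minimal sketch of that variant (my assumption, not part of the original run): clean against your regular keyring so that third-party signatures made by keys you already hold locally survive. This presumes the key has already been imported there:

$ gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit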

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...); see the sketch after this list for what the current syntax looks like
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to clean all keyserver filesystems of the message-board-over-pgp-keyservers keys, too
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.
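For reference, a hedged sketch of what the existing drop-sig import filter invocation looks like; --import-filter and drop-sig are documented gpg options, but the exact property name and expression below are my reading of the gpg(1) FILTER EXPRESSIONS section and should be double-checked against your gpg version:

$ gpg --import-options import-clean \
      --import-filter drop-sig='sig_created > 1546300800' \
      --import broken_key.asc

Having to express "drop third-party signatures made after 2019-01-01" as a raw Unix timestamp illustrates why the list above asks for a more useful syntax (proper date ranges, non-reciprocal signatures, ...).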

Update

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

  * gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
  * gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
  * gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
  * gpg: New import option "self-sigs-only".
  * gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
  * dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
  * dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
  * dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
  * dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
  * gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend to not run gpg 2.2.17 in production environments without extensive testing as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.
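A quick sketch of that WKD refresh path, untested just like the feature itself; the address is a placeholder:

$ gpg --locate-external-key someone@example.org

According to the release notes quoted above, this looks the key up via the Web Key Directory (or whatever --auto-key-locate is configured to use) instead of the SKS keyserver network.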

22 July, 2019 01:16AM by Daniel Lange

Security is hard, open source security unnecessarily harder

Now it is a commonplace that security is hard. It involves advanced mathematics and a single, tiny mistake or omission in implementation can spoil everything.

And the only sane IT security can be open source security. Because you need to assess the algorithms and their implementation and you need to be able to completely verify the implementation. You simply can't if you don't have the code and can compile it yourself to produce a trusted (ideally reproducible) build. A no-brainer for everybody in the field.

But we make it unbelievably hard for people to use security tools. Because these have grown over decades fostered by highly intelligent people with no interest in UX.
"It was hard to write, so it should be hard to use as well."
And then complain about adoption.

PGP / gpg has received quite some fire this year and the good news is this has resulted in funding for the sole gpg developer. Which will obviously not solve the UX problem.

But the much worse offender is OpenSSL. It is so hard to use that even experienced hackers fail.

IRC wallop on hackint

Now, securely encrypting a mass communication media like IRC is not possible at all. Read Trust is not transitive: or why IRC over SSL is pointless1.
Still it makes wiretapping harder and that may be a good thing these days.

LibreSSL has forked the OpenSSL code base "with goals of modernizing the codebase, improving security, and applying best practice development processes". No UX improvement. A cleaner code for the chosen few. Duh.

I predict the re-implementations and gradual improvement scenarios will fail. The nearly-impossible-to-use-right situation with both gpg and (much more importantly) OpenSSL cannot be fixed by gradual improvements and code reviews, however thorough.

Now the "there's an App for this" security movement won't work out on a grand scale either:

  1. Most often not open source. Notable exceptions: ChatSecure, TextSecure.
  2. No reference implementations with excellent test servers and well documented test suites but products. "Use my App.", "No, use MY App!!!".
  3. Only secures chat or email. So the VC-powered ("next WhatsApp") mass-adoption markets but not the really interesting things to improve upon (CA, code signing, FDE, ...).
  4. While everybody is focusing on mobile adoption the heavy lifting is still on servers. We need sane libraries and APIs. No App for that.

So we need a new development, a new code base, a new open source product. Sadly, the Core Infrastructure Initiative so far only funds existing open source projects in dire need and people bug hunting.

It basically makes the bad solutions of today a bit more secure and ensures maintenance of decade old crufty code bases. That way it extends the suffering of everybody using the inadequate solutions of today.

That's inevitable until we have a better stack but we need to look into getting rid of gpg and OpenSSL and replacing it with something new. Something designed well from the ground up, technically and from a user experience perspective.

Now who's in for a five year funding plan? $3m2 annually. ROCE 0. But a very good chance to get the OBE awarded.

Keep calm and enjoy the silence

Updates:

21.07.19: A current essay on "The PGP problem" is making the rounds and lists some valid issues with the file format, RFCs and the gpg implementation. The GnuPG-users mailing list has a discussion thread on the issues listed in the essay.

19.01.19: Daniel Kahn Gillmor, a Senior Staff Technologist at the ACLU, tried to get his gpg key transition correct. He put a huge amount of thought and preparation into the transition. To support Autocrypt (another try to get GPG usable for more people than a small technical elite), he specifically created different identities for him as a person and his two main email addresses. Two days later he had to invalidate his new gpg key and back off to less "modern" identity layouts because many of the brittle pieces of infrastructure around gpg, from emacs to gpg signature management frontends to mailing list managers, fell over dead.

28.11.18: Changed the Quakenet link on why encrypting IRC is useless to an archive.org one as they have removed the original content.

13.03.17: Chris Wellons writes about why GPG is a failure and created a small portable application Enchive to replace it for asymmetric encryption.

24.02.17: Stefan Marsiske has written a blog article: On PGP. He argues about adversary models and when gpg is "probably" 3 still good enough to use. To me a security tool can never be a sane choice if the UI is so convoluted that only a chosen few stand at least a chance of using it correctly. Doesn't matter who or what your adversary is.
Stefan concludes his blog article:

PGP for encryption as in RFC 4880 should be retired, some sunk-cost-biases to be coped with, but we all should rejoice that the last 3-4 years had so much innovation in this field, that RFC 4880 is being rewritten[Citation needed] with many of the above in mind and that hopefully there'll be more and better tools. [..]

He gives an extensive list of tools he considers worth watching in his article. Go and check whether something in there looks like a possible replacement for gpg to you. Stefan also gave a talk on the OpenPGP conference 2016 with similar content, slides.

14.02.17: James Stanley has written up a nice account of his two hour venture to get encrypted email set up. The process is speckled with bugs and inconsistent nomenclature capable of confusing even a technically inclined person. There has been no progress in the last ~two years since I wrote this piece. We're all still riding dead horses. James summarizes:

Encrypted email is nothing new (PGP was initially released in 1991 - 26 years ago!), but it still has a huge barrier to entry for anyone who isn't already familiar with how to use it.

04.09.16: Greg Kroah-Hartman ends an analysis of the Evil32 PGP keyid collisions with:

gpg really is horrible to use and almost impossible to use correctly.

14.11.15:
Scott Ruoti, Jeff Andersen, Daniel Zappala and Kent Seamons of BYU, Utah, have analysed the usability [local mirror, 173kB] of Mailvelope, a webmail PGP/GPG add-on based on a Javascript PGP implementation. They describe the results as "disheartening":

In our study of 20 participants, grouped into 10 pairs of participants who attempted to exchange encrypted email, only one pair was able to successfully complete the assigned tasks using Mailvelope. All other participants were unable to complete the assigned task in the one hour allotted to the study. Even though a decade has passed since the last formal study of PGP, our results show that Johnny has still not gotten any closer to encrypt his email using PGP.

  1. Quakenet has removed that article citing "near constant misrepresentation of the presented argument" sometime in 2018. The contents (not misrepresented) are still valid so I have added an archive.org Wayback Machine link instead. 

  2. The estimate was $2m until end of 2018. The longer we wait, the more expensive it'll get. And - obviously - ever harder. E.g. nobody needed to care about sidechannel attacks on big-LITTLE five years ago. But now they start to hit servers and security-sensitive edge devices. 

  3. Stefan says "probably" five times in one paragraph. Probably needs an editor. The person not the application. 

22 July, 2019 01:15AM by Daniel Lange

Giovanni Mascellani

Bootstrappable Debian BoF

Greetings from DebConf 19 in Curitiba! Just a quick reminder that I will run a Bootstrappable Debian BoF on Tuesday 23rd, at 13.30 Brasilia time (which is 16.30 UTC, if I am not mistaken). If you are curious about bootstrappability in Debian, why we want it and where we are right now, you are welcome to come in person if you are at DebConf or to follow the streaming.

22 July, 2019 12:30AM by Giovanni Mascellani

July 21, 2019

hackergotchi for Vincent Bernat

Vincent Bernat

A Makefile for your Go project (2019)

My most loathed feature of Go was the mandatory use of GOPATH: I do not want to put my own code next to its dependencies. I was not alone and people devised tools or crafted their own Makefile to avoid organizing their code around GOPATH.

Hopefully, since Go 1.11, it is possible to use Go’s modules to manage dependencies without relying on GOPATH. First, you need to convert your project to a module:1

$ go mod init hellogopher
go: creating new go.mod: module hellogopher
$ cat go.mod
module hellogopher

Then, you can invoke the usual commands, like go build or go test. The go command resolves imports by using versions listed in go.mod. When it runs into an import of a package not present in go.mod, it automatically looks up the module containing that package using the latest version and adds it.

$ go test ./...
go: finding github.com/spf13/cobra v0.0.5
go: downloading github.com/spf13/cobra v0.0.5
?       hellogopher     [no test files]
?       hellogopher/cmd [no test files]
ok      hellogopher/hello       0.001s
$ cat go.mod
module hellogopher

require github.com/spf13/cobra v0.0.5

If you want a specific version, you can either edit go.mod or invoke go get:

$ go get github.com/spf13/[email protected]
go: finding github.com/spf13/cobra v0.0.4
go: downloading github.com/spf13/cobra v0.0.4
$ cat go.mod
module hellogopher

require github.com/spf13/cobra v0.0.4

Add go.mod to your version control system. Optionally, you can also add go.sum as a safety net against overridden tags. If you really want to vendor the dependencies, you can invoke go mod vendor and add the vendor/ directory to your version control system.
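Spelled out, under the assumption that git is the version control system in use:

$ go mod vendor
$ git add go.mod go.sum vendor/
$ git commit -m 'Vendor dependencies'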

Thanks to the modules, in my opinion, Go’s dependency management is now on a par with other languages, like Ruby. While it is possible to run day-to-day operations—building and testing—with only the go command, a Makefile can still be useful to organize common tasks, a bit like Python’s setup.py or Ruby’s Rakefile. Let me describe mine.

Using third-party tools

Most projects need some third-party tools for testing or building. We can either expect them to be already installed or compile them on the fly. For example, here is how code linting is done with Golint:

BIN = $(CURDIR)/bin
$(BIN):
    @mkdir -p $@
$(BIN)/%: | $(BIN)
    @tmp=$$(mktemp -d); \
       env GO111MODULE=off GOPATH=$$tmp GOBIN=$(BIN) go get $(PACKAGE) \
        || ret=$$?; \
       rm -rf $$tmp ; exit $$ret

$(BIN)/golint: PACKAGE=golang.org/x/lint/golint

GOLINT = $(BIN)/golint
lint: | $(GOLINT)
    $(GOLINT) -set_exit_status ./...

The first block defines how a third-party tool is built: go get is invoked with the package name matching the tool we want to install. We do not want to pollute our dependency management and therefore, we are working in an empty GOPATH. The generated binaries are put in bin/.

The second block extends the pattern rule defined in the first block by providing the package name for golint. Additional tools can be added by just adding another line like this.

The last block defines the recipe to lint the code. The default linting tool is the golint built using the first block but it can be overridden with make GOLINT=/usr/bin/golint.
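In day-to-day use this gives, for instance (a sketch; the system path is just an example):

$ make lint                         # builds bin/golint on first use, then lints
$ make GOLINT=/usr/bin/golint lint  # use an already installed golint instead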

Tests

Here are some rules to help running tests:

TIMEOUT  = 20
PKGS     = $(or $(PKG),$(shell env GO111MODULE=on $(GO) list ./...))
TESTPKGS = $(shell env GO111MODULE=on $(GO) list -f \
            '{{ if or .TestGoFiles .XTestGoFiles }}{{ .ImportPath }}{{ end }}' \
            $(PKGS))

TEST_TARGETS := test-default test-bench test-short test-verbose test-race
test-bench:   ARGS=-run=__absolutelynothing__ -bench=.
test-short:   ARGS=-short
test-verbose: ARGS=-v
test-race:    ARGS=-race
$(TEST_TARGETS): test
check test tests: fmt lint
    go test -timeout $(TIMEOUT)s $(ARGS) $(TESTPKGS)

A user can invoke tests in different ways:

  • make test runs all tests;
  • make test TIMEOUT=10 runs all tests with a timeout of 10 seconds;
  • make test PKG=hellogopher/cmd only runs tests for the cmd package;
  • make test ARGS="-v -short" runs tests with the specified arguments;
  • make test-race runs tests with race detector enabled.

go test includes a test coverage tool. Unfortunately, it only handles one package at a time and you have to explicitly list the packages to be instrumented; otherwise the instrumentation is limited to the currently tested package. If you provide too many packages, the compilation time will skyrocket. Moreover, if you want an output compatible with Jenkins, you need some additional tools.

COVERAGE_MODE    = atomic
COVERAGE_PROFILE = $(COVERAGE_DIR)/profile.out
COVERAGE_XML     = $(COVERAGE_DIR)/coverage.xml
COVERAGE_HTML    = $(COVERAGE_DIR)/index.html
test-coverage-tools: | $(GOCOVMERGE) $(GOCOV) $(GOCOVXML) # ❶
test-coverage: COVERAGE_DIR := $(CURDIR)/test/coverage.$(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
test-coverage: fmt lint test-coverage-tools
    @mkdir -p $(COVERAGE_DIR)/coverage
    @for pkg in $(TESTPKGS); do \ # ❷
        go test \
            -coverpkg=$$(go list -f '{{ join .Deps "\n" }}' $$pkg | \
                    grep '^$(MODULE)/' | \
                    tr '\n' ',')$$pkg \
            -covermode=$(COVERAGE_MODE) \
            -coverprofile="$(COVERAGE_DIR)/coverage/`echo $$pkg | tr "/" "-"`.cover" $$pkg ;\
     done
    @$(GOCOVMERGE) $(COVERAGE_DIR)/coverage/*.cover > $(COVERAGE_PROFILE)
    @go tool cover -html=$(COVERAGE_PROFILE) -o $(COVERAGE_HTML)
    @$(GOCOV) convert $(COVERAGE_PROFILE) | $(GOCOVXML) > $(COVERAGE_XML)

First, we define some variables to let the user override them. In ❶, we require the following tools—built like golint previously:

  • gocovmerge merges profiles from different runs into a single one;
  • gocov-xml converts a coverage profile to the Cobertura format, for Jenkins;
  • gocov is needed to convert a coverage profile to a format handled by gocov-xml.

In ❷, for each package to test, we run go test with the -coverprofile argument. We also explicitly provide the list of packages to instrument to -coverpkg by using go list to get a list of dependencies for the tested package and keeping only our own.
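Outside of the Makefile, the same dependency query can be run by hand to see what ends up in -coverpkg; hellogopher/hello is the example package from earlier in the post, and the module name stands in for $(MODULE):

$ go list -f '{{ join .Deps "\n" }}' hellogopher/hello | grep '^hellogopher/'

Only the matching packages (plus the tested package itself) are passed to -coverpkg, which keeps compilation times reasonable.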

Build

Another useful recipe is to build the program. While this could be done with just go build, it is not uncommon to have to specify build tags, additional flags, or to execute supplementary build steps. In the following example, the version is extracted from Git tags. It will replace the value of the Version variable in the hellogopher/cmd package.

VERSION ?= $(shell git describe --tags --always --dirty --match=v* 2> /dev/null || \
            echo v0)
all: fmt lint | $(BIN)
    go build \
        -tags release \
        -ldflags '-X hellogopher/cmd.Version=$(VERSION)' \
        -o $(BIN)/hellogopher main.go

The recipe also runs code formatting and linting.


The excerpts provided in this post are a bit simplified. Have a look at the final result for more perks, including fancy output and integrated help!


  1. For an application not meant to be used as a library, I prefer to use a short name instead of a name derived from a URL, like github.com/vincentbernat/hellogopher. It makes it easier to read import sections:

    import (
            "fmt"
            "os"
    
            "hellogopher/cmd"
    
            "github.com/pkg/errors"
            "github.com/spf13/cobra"
    )
    

    ↩︎

21 July, 2019 07:20PM by Vincent Bernat

hackergotchi for Bits from Debian

Bits from Debian

DebConf19 starts today in Curitiba

DebConf19 logo

DebConf19, the 20th annual Debian Conference, is taking place in Curitiba, Brazil from July 21 to 28, 2019.

Debian contributors from all over the world have come together at Federal University of Technology - Paraná (UTFPR) in Curitiba, Brazil, to participate and work in a conference exclusively run by volunteers.

Today the main conference starts with over 350 attendees expected and 121 activities scheduled, including 45- and 20-minute talks and team meetings ("BoF"), workshops, a job fair, as well as a variety of other events.

The full schedule at https://debconf19.debconf.org/schedule/ is updated every day, including activities planned ad-hoc by attendees during the whole conference.

If you want to engage remotely, you can follow the video streaming available from the DebConf19 website of the events happening in the three talk rooms: Auditório (the main auditorium), Miniauditório and Sala de Videoconferencia. Or you can join the conversation about what is happening in the talk rooms: #debconf-auditorio, #debconf-miniauditorio and #debconf-videoconferencia (all those channels in the OFTC IRC network).

You can also follow the live coverage of news about DebConf19 on https://micronews.debian.org or the @debian profile in your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. During the conference, several teams (Front Desk, Welcome team and Anti-Harassment team) are available to help so that both on-site and remote participants have the best possible experience at the conference, and find solutions to any issue that may arise. See the web page about the Code of Conduct on the DebConf19 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf19, particularly our Platinum Sponsors: Infomaniak, Google and Lenovo.

21 July, 2019 07:10PM by Laura Arjona Reina

hackergotchi for Holger Levsen

Holger Levsen

20190721-piuparts-was-not-down

piuparts.debian.org was not down for maintenance

I hadn't shut down piuparts.debian.org for maintenance; I just said so to make you attend my talk, as my last call for help at DebConf17 was attended by only 3 people...

So please join the session about piuparts(d.o.) today at 14:30 localtime.

Please help help help!

21 July, 2019 05:31PM

Sylvain Beucler

Planet clean-up

planet.gnu.org logo

I did some clean-up / resync on the planet.gnu.org setup :)

  • Fix issue with newer https websites (SNI)
  • Re-sync Debian base config, scripts and packaging, update documentation; the planet-venus package is still in bad shape, though: it's not officially orphaned, but the maintainer is unreachable AFAICS
  • Fetch all Savannah feeds using https
  • Update feeds with redirections, which seem to mess up caching

21 July, 2019 04:57PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RPushbullet 0.3.2

RPpushbullet demo

A new release 0.3.2 of the RPushbullet package is now on CRAN. RPushbullet is interfacing the neat Pushbullet service for inter-device messaging, communication, and more. It lets you easily send alerts like the one to the left to your browser, phone, tablet, … – or all at once.

This is the first new release in almost 2 1/2 years, and it once again benefits greatly from contributed pull requests by Colin (twice!) and Chan-Yub – see below for details.

Changes in version 0.3.2 (2019-07-21)

  • The Travis setup was robustified with respect to the token needed to run tests (Dirk in #48)

  • The configuration file is now readable only by the user (Colin Gillespie in #50)

  • At startup initialization is now more consistent (Colin Gillespie in #53 fixing #52)

  • A new function to fetch prior posts was added (Chanyub Park in #54).

Courtesy of CRANberries, there is also a diffstat report for this release. More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

21 July, 2019 02:28PM

July 20, 2019

Jose M. Calhariz

New release of switchconf 0.0.16

I have not touched switchconf for a long time. Being at DebCamp19 was a good time to work on it.

I have moved the development of switchconf from a private svn repo to a git repo in salsa: https://salsa.debian.org/debian/switchconf. I created a virtual host called http://software.calhariz.com where I will publish the sources of the software that I take care of. I updated the Makefile for the git repo and released version 0.0.16.

You can download the latest version of switchconf from here: http://software.calhariz.com/switchconf

20 July, 2019 11:56PM by Jose M Calhariz

John Goerzen

Alas, Poor PGP

Over in The PGP Problem, there’s an extended critique of PGP (and also specifics of the GnuPG implementation) in a modern context. Robert J. Hansen, one of the core GnuPG developers, has an interesting response:

First, RFC4880bis06 (the latest version) does a pretty good job of bringing the crypto angle to a more modern level. There’s a massive installed base of clients that aren’t aware of bis06, and if you have to interoperate with them you’re kind of screwed: but there’s also absolutely nothing prohibiting you from saying “I’m going to only implement a subset of bis06, the good modern subset, and if you need older stuff then I’m just not going to comply.” Sequoia is more or less taking this route — more power to them.

Second, the author makes a couple of mistakes about the default ciphers. GnuPG has defaulted to AES for many years now: CAST5 is supported for legacy reasons (and I’d like to see it dropped entirely: see above, etc.).

Third, a couple of times the author conflates what the OpenPGP spec requires with what it permits, and with how GnuPG implements it. Cleaner delineation would’ve made the criticisms better, I think.

But all in all? It’s a good criticism.

The problem is, where does that leave us? I found the suggestions in the original author’s article (mainly around using IM apps such as Signal) to be unworkable in a number of situations.

The Problems With PGP

Before moving on, let’s tackle some of the problems identified.

The first is an assertion that email is inherently insecure and can’t be made secure. There are some fairly convincing arguments to be made on that score; as it currently stands, there is little ability to hide metadata from prying eyes. And any format that is capable of talking on the network — as HTML is — is just begging for vulnerabilities like EFAIL.

But PGP isn’t used just for this. In fact, one could argue that sending a binary PGP message as an attachment gets around a lot of that email clunkiness — and would be right, at the expense of potentially more clunkiness (and forgetfulness).

What about the web-of-trust issues? I’m in agreement. I have never really used WoT to authenticate a key, only in rare instances trusting an introducer I know personally and from personal experience understand how stringent they are in signing keys. But this is hardly a problem for PGP alone. Every encryption tool mentioned has the problem of validating keys. The author suggests Signal. Signal has some very strong encryption, but you have to have a phone number and a smartphone to use it. Signal’s strength when setting up a remote contact is as strong as SMS. Let that disheartening reality sink in for a bit. (A little social engineering could probably get many contacts to accept a hijacked SIM in Signal as well.)

How about forward secrecy? This is protection against a private key that gets compromised in the future, because an ephemeral session key (or more than one) is negotiated on each communication, and the secret key is never stored. This is a great plan, but it really requires synchronous communication (or something approaching it) between the sender and the recipient. It can’t be used if I want to, for instance, burn a backup onto a Bluray and give it to a friend for offsite storage without giving the friend access to its contents. There are many, many situations where synchronous key negotiation is impossible, so although forward secrecy is great and a nice enhancement, we should not assume it to be always applicable.

The saltpack folks have a more targeted list of PGP message format problems. Both they, and the article I link above, complain about the gpg implementation of PGP. There is, no doubt, truth to these. Among them is a complaint that gpg can emit unverified data. Well sure, because it has a streaming mode. It exits with a proper error code and warnings if a verification fails at the end — just as gzcat does. This is a part of the API that the caller needs to be aware of. It sounds like some callers weren’t handling this properly, but it’s just a function of a streaming tool.
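To make the streaming point concrete, a minimal shell sketch (file names are placeholders) of what a careful caller has to do: consume the stream, then check gpg's exit status before trusting the output:

$ gpg --decrypt signed-and-encrypted.gpg > plaintext.out || { echo 'verification failed' >&2; rm -f plaintext.out; }

Without that check, the partial plaintext already written out is exactly the "unverified data" the critics complain about.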

Suggested Solutions

The Signal suggestion is perfectly reasonable in a lot of cases. But the suggestion to use WhatsApp — a proprietary application from a corporation known to brazenly lie about privacy — is suspect. It may have great crypto, but if it uploads your address book to a suspicious company, is it a great app?

Magic Wormhole is a pretty neat program I hadn’t heard of before. But it should be noted it’s written in Python, so it’s probably unlikely to be using locked memory.

How about backup encryption? Backups are a lot more than just filesystems; maybe somebody has a 100GB MySQL or zfs send stream. How should this be encrypted?
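For what it's worth, the usual answer today is to pipe the stream through gpg; a hedged sketch with placeholder dataset and file names, using symmetric encryption only to keep the example short (a real setup would more likely encrypt to a public key with -e -r KEYID):

$ zfs send tank/data@backup-2019-07-20 | gpg --symmetric --cipher-algo AES256 -o backup.zfs.gpg

Whether that counts as a good solution is, of course, the very question being asked here.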

My current estimate is that there’s no magic solution right now. The Sequoia PGP folks seem to have a good thing going, as does Saltpack. Both projects are early in development, so as a privacy-concerned person, should you trust them more than GPG with appropriate options? That’s really hard to say.

Additional Discussions

20 July, 2019 11:15PM by John Goerzen