June 13, 2018

Sean Whitton

Debian Policy call for participation -- June 2018

I’d like to push a substantive release of Policy but I’m waiting for DDs to review and second patches in the following bugs. I’d be grateful for your involvement!

If a bug already has two seconds, or three seconds if the proposer of the patch is not a DD, please consider reviewing one of the others, instead, unless you have a particular interest in the topic of the bug.

If you’re not a DD, you are welcome to review, but it might be a more meaningful contribution to spend your time writing patches for bugs that lack them instead.

#786470 [copyright-format] Add an optional “License-Grant” field

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#880920 Document Rules-Requires-Root field

#891216 Requre d-devel consultation for epoch bump

#897217 Vcs-Hg should support -b too

13 June, 2018 02:15PM

Enrico Zini

Progress bar for file descriptors

I ran gzip on an 80GB file; it's processing, but who knows how much it has done so far, or when it will finish? I wish gzip had a progressbar. Or MySQL. Or…

Ok. Now every program that reads a file sequentially can have a progressbar:

https://gitlab.com/spanezz/fdprogress

fdprogress

Print progress indicators for programs that read files sequentially.

fdprogress monitors file descriptor offsets and prints progressbars comparing them to file sizes.

The pattern can be any glob expression.

usage: fdprogress [-h] [--verbose] [--debug] [--pid PID] [pattern]

show progress from file descriptor offsets

positional arguments:
  pattern            file name to monitor

optional arguments:
  -h, --help         show this help message and exit
  --verbose, -v      verbose output
  --debug            debug output
  --pid PID, -p PID  PID of process to monitor
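
The mechanism behind this can be sketched in a few lines of shell; this is only an illustration of the idea (watching a running gzip, with the input file assumed to be on descriptor 3), not how fdprogress itself is implemented:

#!/bin/sh
# Rough sketch: compare a process's read offset with the input file's size.
pid=$(pidof gzip)                         # process to watch (assumed to be running)
fd=3                                      # guess; check ls -l /proc/$pid/fd for the real one
file=$(readlink "/proc/$pid/fd/$fd")
size=$(stat -c %s "$file")
[ "$size" -gt 0 ] || exit 1               # nothing to show for empty files
while kill -0 "$pid" 2>/dev/null; do
    # /proc/<pid>/fdinfo/<fd> exposes the current offset as a "pos:" line
    pos=$(awk '/^pos:/ {print $2}' "/proc/$pid/fdinfo/$fd")
    printf '\r%s: %d%%' "$file" "$((100 * pos / size))"
    sleep 1
done
echo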

pv

pv has a --watchfd option that does most of what fdprogress is trying to do: use that instead.
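
For example, to watch a running gzip process, something along these lines should work (the process name is just an example):

pv --watchfd "$(pidof gzip)"        # watch all of the process's open file descriptors
pv --watchfd "$(pidof gzip)":3      # or watch one specific descriptor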

13 June, 2018 11:43AM

Norbert Preining

Microsoft fixed the Open R Debian package

I just got notice that Microsoft has updated the Debian packaging of Open R to properly use dpkg-divert. I checked the Debian packaging scripts and they now properly divert R and Rscript, and revert back to the Debian provided (r-base) version after removal of the packages.

Version 3.5.0 has been re-released. If you downloaded it from MRAN, you will need to download it again and be careful to use the new file, since the file name of the download is unchanged.

Thanks, Microsoft, for the quick fix; it is good news that those playing with Open R will not be left with a hosed system.

PS: I guess this post will not get anywhere near the incredible attention the first one got 😉

13 June, 2018 10:09AM by Norbert Preining

June 12, 2018

Jonathan McDowell

Hooking up Home Assistant to Alexa + Google Assistant

I have an Echo Dot. Actually I have two; one in my study and one in the dining room. Mostly we yell at Alexa to play us music; occasionally I ask her to set a timer, tell me what time it is or tell me the news. Having set up Home Assistant, it seemed reasonable to try to enable control of the light in the dining room via Alexa.

Perversely I started with Google Assistant, even though I only have access to it via my phone. Why? Because the setup process was a lot easier. There are a bunch of hoops to jump through that are documented on the Google Assistant component page, but essentially you create a new home automation component in the Actions on Google interface, connect it with the Google OAuth stuff for account linking, and open up your Home Assistant instance to the big bad internet so Google can connect.

This final step is where I differed from the provided setup. My instance is accessible internally at home, but I haven’t wanted to expose it externally yet (and I suspect I never will, but will instead rely on the ability to VPN back in for access, or similar). The default instructions need you to open up API access publicly and configure Google with your API password, which allows access to everything. I’d rather not.

So, firstly I configured up my external host with an Apache instance and a Let’s Encrypt cert (luckily I have a static IP, so this was actually the base host that the Home Assistant container runs on). Rather than using this to proxy the entire Home Assistant setup I created a unique /external/google/randomstring proxy just for the Google Assistant API endpoint. It looks a bit like this:

<VirtualHost *:443>
  ServerName my.external.host

  ProxyPreserveHost On
  ProxyRequests off

  RewriteEngine on

  # External access for Google Assistant
  ProxyPassReverse /external/google/randomstring http://hass-host:8123/api/google_assistant
  RewriteRule ^/external/google/randomstring$ http://hass-host:8123/api/google_assistant?api_password=myapipassword [P]
  RewriteRule ^/external/google/randomstring/auth$ http://hass-host:8123/api/google_assistant/auth?%{QUERY_STRING}&&api_password=myapipassword [P]

  SSLEngine on
  SSLCertificateFile /etc/ssl/my.external.host.crt
  SSLCertificateKeyFile /etc/ssl/private/my.external.host.key
  SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
</VirtualHost>

This locks down the external access to just being the Google Assistant end point, and means that Google have a specific shared secret rather than the full API password. I needed to configure up Home Assistant as well, so configuration.yaml gained:

google_assistant:
  project_id: homeautomation-8fdab
  client_id: oFqHKdawWAOkeiy13rtr5BBstIzN1B7DLhCPok1a6Jtp7rOI2KQwRLZUxSg00rIEib2NG8rWZpH1cW6N
  access_token: l2FrtQyyiJGo8uxPio0hE5KE9ZElAw7JGcWRiWUZYwBhLUpH3VH8cJBk4Ct3OzLwN1Fnw39SR9YArfKq
  agent_user_id: [email protected]
  api_key: nyAxuFoLcqNIFNXexwe7nfjTu2jmeBbAP8mWvNea
  exposed_domains:
    - light

Setting up Alexa access is more complicated. Amazon Smart Home skills must call an AWS Lambda - the code that services the request is essentially a small service run within Lambda. Home Assistant supports all the appropriate requests, so the Lambda code is a very simple proxy these days. I used Haaska which has a complete setup guide. You must do all 3 steps - the OAuth provider, the AWS Lambda and the Alexa Skill. Again, I wanted to avoid exposing the full API or the API password, so I forked Haaska to remove the use of a password and instead use a custom URL. I then added the following additional lines to the Apache config above:

# External access for Amazon Alexa
ProxyPassReverse /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home
RewriteRule /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home?api_password=myapipassword [P]

In the config.json I left the password field blank and set url to https://my.external.host/external/amazon/stringrandom. configuration.yaml required less configuration than the Google equivalent:

alexa:
  smart_home:
    filter:
      include_entities:
        - light.dining_room_lights
        - light.living_room_lights
        - light.kitchen
        - light.snug

(I’ve added a few more lights, but more on the exact hardware details of those at another point.)

To enable in Alexa I went to the app on my phone, selected the “Smart Home” menu option, enabled my Home Assistant skill and was able to search for the available devices. I can then yell “Alexa, turn on the snug” and magically the light turns on.

Aside from being more useful (due to the use of the Dot rather than pulling out a phone) the Alexa interface is a bit smoother - the command detection is more reliable (possibly due to the more limited range of options it has to work out?) and adding new devices is a simple rescan. Adding new devices with Google Assistant seems to require unlinking and relinking the whole setup.

The only problem with this setup so far is that it’s only really useful for the room with the Alexa in it. Shouting from the living room in the hope the Dot will hear is a bit hit and miss, and I haven’t yet figured out a good alternative method for controlling the lights there that doesn’t mean using a phone or a tablet device.

12 June, 2018 08:21PM

John Goerzen

Syncing with a memory: a unique use of tar --listed-incremental

I have a Nextcloud instance that various things automatically upload photos to. These automatic folders sync to a directory on my desktop. I wanted to pull things out of that directory without deleting them, and only once. (My wife might move them out of the directory on her computer, and I might arrange them into targets on my end.)

In other words, I wanted to copy a file from a source to a destination, but remember what had been copied before so it only ever copies once.

rsync doesn’t quite do this. But it turns out that tar’s listed-incremental feature can do exactly that. Ordinarily, it would delete files that were deleted on the source. But if we make the tar file with the incremental option, but extract it without, it doesn’t try to delete anything at extract time.

Here’s my synconce script:

#!/bin/bash

set -e

if [ -z "$3" ]; then
    echo "Syntax: $0 snapshotfile sourcedir destdir"
    exit 5
fi

SNAPFILE="$(realpath "$1")"
SRCDIR="$2"
DESTDIR="$(realpath "$3")"

cd "$SRCDIR"
if [ -e "$SNAPFILE" ]; then
    cp "$SNAPFILE" "${SNAPFILE}.new"
fi
tar "--listed-incremental=${SNAPFILE}.new" -cpf - . | \
    tar -xf - -C "$DESTDIR"
mv "${SNAPFILE}.new" "${SNAPFILE}"

Just have the snapshotfile be outside both the sourcedir and destdir and you’re good to go!
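
For example (the paths here are made up for illustration), pulling new photos out of the synced folder into a staging directory, either by hand or from cron:

./synconce ~/.synconce-photos.snar ~/Nextcloud/InstantUpload /srv/photos/incoming

# or periodically via a crontab entry:
*/15 * * * * $HOME/bin/synconce $HOME/.synconce-photos.snar $HOME/Nextcloud/InstantUpload /srv/photos/incoming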

12 June, 2018 11:27AM by John Goerzen

Dirk Eddelbuettel

R 3.5.0 on Debian and Ubuntu: An Update

Overview

R 3.5.0 was released a few weeks ago. As it changes some (important) internals, packages installed with a previous version of R have to be rebuilt. This was known and expected, and we took several measured steps to get R binaries to everybody without breakage.

The question of but how do I upgrade without breaking my system was asked a few times, e.g., on the r-sig-debian list as well as in this StackOverflow question.

Debian

Core Distribution As usual, we packaged R 3.5.0 as soon as it was released – but only for the experimental distribution, awaiting a green light from the release masters to start the transition. A one-off repository drr35 (https://github.com/eddelbuettel/drr35) was created to provide R 3.5.0 binaries more immediately; this was used, e.g., by the r-base Rocker Project container / the official R Docker container which we also update after each release.

The actual transition was started last Friday, June 1, and concluded this Friday, June 8. Well over 600 packages have been rebuilt under R 3.5.0, and are now ready in the unstable distribution from which they should migrate to testing soon. The Rocker container r-base was also updated.

So if you use Debian unstable or testing, these are ready now (or will be soon once migrated to testing). This should include most Rocker containers built from Debian images.

Contributed CRAN Binaries Johannes also provided backports with a -cran35 suffix in his CRAN-mirrored Debian backport repositories, see the README.

Ubuntu

Core (Upcoming) Distribution Ubuntu, for the upcoming 18.10, has undertaken a similar transition. Few users access this release yet, so the next section may be more important.

Contributed CRAN and PPA Binaries Two new Launchpad PPA repositories were created as well. Given the rather large scope of thousands of packages, multiplied by several Ubuntu releases, this too took a moment but is now fully usable and should get mirrored to CRAN ‘soon’. It covers the most recent and still supported LTS releases as well as the current release 18.04.

One PPA contains base R and the recommended packages, RRutter3.5. This is the source of the packages that will soon be available on CRAN. The second PPA (c2d4u3.5) contains over 3,500 packages mainly derived from CRAN Task Views. Details on updates can be found at Michael’s R Ubuntu Blog.

This can be used for, e.g., Travis if you manage your own sources as Dirk’s r-travis does. We expect to use this relatively soon, possibly as an opt-in via a variable upon which run.sh selects the appropriate repository set. It will also be used for Rocker releases built based off Ubuntu.

In both cases, you may need to adjust the sources list for apt accordingly.
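
On Ubuntu 18.04, for example, that amounts to something like the following; the Launchpad owner shown here (marutter) is my assumption, so please verify the PPA names before adding them:

sudo add-apt-repository ppa:marutter/rrutter3.5   # base R plus recommended packages (assumed PPA name)
sudo add-apt-repository ppa:marutter/c2d4u3.5     # CRAN Task View packages (assumed PPA name)
sudo apt-get update
sudo apt-get install r-base r-base-dev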

Others

There may also be ongoing efforts within Arch and other Debian-derived distributions, but we are not really aware of what is happening there. If you use those, and coordination is needed, please feel free to reach out via the r-sig-debian list.

Closing

In case of questions or concerns, please consider posting to the r-sig-debian list.

Dirk, Michael and Johannes, June 2018

12 June, 2018 01:27AM

June 11, 2018

Reproducible builds folks

Reproducible Builds: Weekly report #163

Here’s what happened in the Reproducible Builds effort between Sunday June 3 and Saturday June 9 2018:

Development work

Upcoming events

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

In addition, Mattia Rizzolo has been working on a large refactor of the Python part of the setup.

Documentation updates

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo, Santiago Torres, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

11 June, 2018 09:45PM

Sune Vuorela

Kirigaming – Kolorfill

Last time, I was doing a recipe manager. This time I’ve been doing a game with javascript and QtQuick, and for the first time dipping my feet into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

Kolorfill

The end goal is to make the board into one color in as few steps as possible. The way to do it is “Paint bucket”-tool from top left corner with various colors.

But enough talk. Let’s see some code:
https://cgit.kde.org/scratch/sune/kolorfill.git/

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches welcome. Or pointers to QML components that can help me with that.

11 June, 2018 07:07PM by Sune Vuorela

Shashank Kumar

Google Summer of Code 2018 with Debian - Week 4

After working on designs and getting my hands dirty with KIVY for the first 3 weeks, I became comfortable with my development environment and was able to deliver features within a couple of days with UI, tests, and documentation. In this blog, I explain how I converted all my Designs into Code and what I've learned along the way.

The Sign Up

New Contributor Wizard - SignUp

In order to implement the above design in KIVY, the best way is to use kv-lang. It involves writing a kv file which contains the widget tree of the layout and a lot more. One can learn more about kv-lang from the documentation. To begin with, let us look at the simplest kv file.

BoxLayout:
    Label:
        text: 'Hello'
    Label:
        text: 'World'
KV Language

In KIVY, widgets are used to build the UI, and the Widget base class is derived to create all other UI elements like layouts, buttons and labels. Indentation is used in kv just like in Python to define children. In our kv file above, we're using BoxLayout, which allows us to arrange all its children in either horizontal (by default) or vertical orientation. So both Labels will be oriented horizontally, one after another.

Just like child widgets, one can also set values of properties, like Hello for the text of the first Label in the code above. More information about which properties can be defined for BoxLayout and Label can be found in their API documentation. All that remains is importing this .kv file (say sample.kv) from the module which runs the KIVY app. You might notice that for now Language and Timezone are kept static. The reason is that the Language support architecture is yet to be finalized, and both options would require a drop-down list, the design and implementation of which will be handled separately.

In order for me to build the UI following the design, I had to experiment with widgets. When all was done, signup.kv file contained the resultant UI.

Validations

Now, the good part is that we have a UI the user can input data into. And the bad part is that the user can input any data! So, it's very important to validate whether the user is submitting data in the correct format or not. Specifically for the Sign Up module, I had to validate the Email, Passwords and Full Name submitted by the user. The validation module can be found here; it contains classes and methods for what I intended to do.

It's important that the user gets feedback after validation if something is wrong with the input. This is done by replacing the Label's text with an error message and its color with bleeding red, by calling prompt_error_message on unsuccessful validation.

Updating The Database

After successful validation, the Sign Up module steps forward to update the database via the sqlite3 module. But before that, the Email and Full Name are cleaned of any unnecessary whitespace, tabs and newline characters. A universally unique identifier, or uuid, is generated for the user_id. The plain-text Password is turned into a sha256 hash string for security. Finally, sqlite3 is integrated into updatedb.py to update the database. The SQLite database is stored in a single file named new_contributor_wizard.db. For user information, the table named USERS is created, if not already present, during initialization of the UpdateDB instance. Finally, the information is stored, or an error is returned if the Email already exists. This is what the USERS schema looks like:

id VARCHAR(36) PRIMARY KEY,
email UNIQUE,
pass VARCHAR(64),
fullname TEXT,
language TEXT,
timezone TEXT
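
For illustration, the same table can be created and inspected with the sqlite3 command-line shell (the application itself does this from Python in updatedb.py):

sqlite3 new_contributor_wizard.db "CREATE TABLE IF NOT EXISTS USERS (
    id VARCHAR(36) PRIMARY KEY,
    email UNIQUE,
    pass VARCHAR(64),
    fullname TEXT,
    language TEXT,
    timezone TEXT
);"
sqlite3 new_contributor_wizard.db "SELECT email, fullname FROM USERS;"   # list existing accounts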

After the Database is updated, i.e. after successful account creation, the natural flow is to take the user to the Dashboard screen. In order to keep this feature atomic, integration with the Dashboard will be done once all 3 features (SignUp, SignIn, and Dashboard) are merged. So, in order to showcase a successful sign-up, I've used a text confirmation. Below is a screencast of how the feature looks and what changes it makes in the database.

The Sign In

New Contributor Wizard - SignIn

If you compare the UI of the SignIn module with that of the SignUp, you might notice a few changes.

  • The New Contributor Wizard is now right-aligned
  • Instead of 2 columns taking user information, here we have just one with Email and Password

Hence, the UI experiences only a little change, and the result can be seen in signin.py.

Validations

Just like in the Sign Up module, we are not trusting the user's input to be sane. Hence, we validate whether the user is giving us an Email and Password in a good format. The resultant validations of the Sign In module can be seen in validations.py.

Updating The Database

After successful validation, the next step is cleaning the Email and hashing the Password entered by the user. Here we have two possibilities for an unsuccessful sign-in:

  • Either the Email entered by the user doesn't exist in the database
  • Or the Password entered by the user is not correct

Otherwise, the user is signed in successfully. For an unsuccessful sign-in, I have created an exceptions.py module to report the error properly. updatedb.py contains the database operations for the Sign In module.

The Exceptions

exceptions.py of Sign In contains Exception classes, defined as follows:

  • UserError - this class is used to throw an exception when Email doesn't exist
  • PasswordError - this class is used to throw an exception when Password doesn't match the one saved in the database with the corresponding email.

All these modules are integrated with signin.py and the resultant feature can be seen in action in the screencast below. Also, here's the merge request for the same.

The Dashboard

New Contributor Wizard - Dashboard

The Dashboard is completely different from the above two modules. If the New Contributor Wizard is the culmination of different user stories and interactive screens, then the Dashboard is the protagonist of all the other features. A successful SignIn or SignUp will direct the user to the Dashboard. All the tutorials and tools will be available to the user from here on.

The UI

There are 2 segments of the Dashboard screen: one for all the menu options on the left, and another for the tutorials and tools of the selected menu option on the right. So the screen on the right needs to change whenever a menu option is selected. KIVY provides a widget named Screen Manager to handle such an issue gracefully, but in order to control the transition of just a part of the screen rather than the entire screen, one has to dig deep into the API and work it out. Here I remembered a sentence from the Zen of Python, "Simple is better than complex", and chose the simple way of changing the screen, i.e. the add/remove widget functions.

In dashboard.py, I'm overriding the on_touch_down function to check which menu option the user clicks on, and calling enable_menu accordingly.

The menu options on the left are not Button widgets. I had the option of using Button directly, but it would have needed customization to look pretty. Instead, I used BoxLayout and Label to provide a button-like feature. In enable_menu I only check which option the user is clicking on, using the touch API. Then all I have to do is highlight the selected option and unfocus all the other options. The final UI can be seen here in dashboard.kv.

Courseware

Along with highlighting the selected option, the Dashboard also changes the courseware, i.e. the tools and tutorials for the selected option, on the right. To give the application a modular structure, all these options are built as separate modules and then integrated into the Dashboard. Here are all the courseware modules built for the Dashboard:

  • blog - Users will be given tools to create and deploy their blogs and also learn the best practices.
  • cli - Understanding Command Line Interface will be the goal with all the tutorials provided in this module.
  • communication - Communication module will have tutorials for IRC and mailing lists and showcase best communication practices. The tools in this module will help user subscribe to the mailing lists of different open source communities.
  • encryption - Encrypting communication and data will be taught using this module.
  • how_to_use - This would be an introductory module for the user to understand how to use this application.
  • vcs - Version Control Systems like git are important while working on a project, whether personal or with a team and everything in between.
  • way_ahead - This module will help users reach out to different open source communities and organizations. It will also showcase open source projects to the user according to their preferences, along with information about programs like Google Summer of Code and Outreachy.

Settings

Below the menu are the options for settings. These settings also have separate modules, just like the courseware. Specifically, they are:

  • application_settings - Would help the user manage settings which are specific to the KIVY application, like resolution.
  • theme_settings - User can manage theme-related settings, like the color scheme, using this option.
  • profile_settings - Would help the user manage information about themselves

The merge request which incorporates the Dashboard feature in the project can be seen in action in the screencast below.

The Conclusion

Week 4 was satisfying for me, as I felt like I was adding value to the project with these merge requests. As soon as the merge requests are reviewed and merged into the repository, I'll work on integrating all these features together to create the seamless experience the user should have. There are a few necessary modifications to be made to the features, like supporting multiple languages and adding the gradient to the background as seen in the design. I'll create issues on redmine for these and work on them as soon as the integration is done. My next task will be designing how tutorials and tasks will look in the right segment of the Dashboard.

11 June, 2018 06:30PM by Shashank Kumar

Norbert Preining

Microsoft’s failed attempt on Debian packaging

Just recently Microsoft Open R 3.5 was announced as an open source implementation of R with some improvements. Binaries are available for Windows, Mac, and Linux. I dared to download and play around with the files, only to be shocked at how incompetent Microsoft is at packaging.

From the microsoft-r-open-mro-3.5.0 postinstall script:

#!/bin/bash

#TODO: Avoid hard code VERSION number in all scripts
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk  -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}"

echo $VERSION

ln -s "${INSTALL_PREFIX}/lib64/R/bin/R" /usr/bin/R
ln -s "${INSTALL_PREFIX}/lib64/R/bin/Rscript" /usr/bin/Rscript

rm /bin/sh
ln -s /bin/bash /bin/sh

First of all, the ln -s calls will fail if the standard R package is installed, but much worse, forcibly relinking /bin/sh to bash is something I didn’t expect to see.

Then, looking at the prerm script, it gets even funnier:

#!/bin/bash

VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk  -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}/"

rm /usr/bin/R
rm /usr/bin/Rscript
rm -rf "${INSTALL_PREFIX}/lib64/R/backup"

Stop, wait, you are removing /usr/bin/R without even checking that it points to the R you have installed???

I guess Microsoft should read up a bit, in particular about dpkg-divert and proper packaging. What I saw here was such an exhibition of incompetence that I can only assume they are doing it on purpose.

PostScriptum: A short look at the man page of dpkg-divert gives a nice example of how it should be done.
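
For reference, a diversion-based approach along the lines of that man page example might look roughly like this in the maintainer scripts (a sketch only, reusing the package name and paths from the scripts above):

# postinst: move any distribution-provided binaries out of the way, then install our symlinks
dpkg-divert --package microsoft-r-open-mro-3.5.0 --rename \
    --divert /usr/bin/R.distrib --add /usr/bin/R
dpkg-divert --package microsoft-r-open-mro-3.5.0 --rename \
    --divert /usr/bin/Rscript.distrib --add /usr/bin/Rscript
ln -sf "${INSTALL_PREFIX}/lib64/R/bin/R" /usr/bin/R
ln -sf "${INSTALL_PREFIX}/lib64/R/bin/Rscript" /usr/bin/Rscript

# postrm: remove our symlinks and restore whatever was diverted
rm -f /usr/bin/R /usr/bin/Rscript
dpkg-divert --package microsoft-r-open-mro-3.5.0 --rename --remove /usr/bin/R
dpkg-divert --package microsoft-r-open-mro-3.5.0 --rename --remove /usr/bin/Rscript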

PPS: I first reported these problems in the R Open Forums and later got an answer that they are looking into it.

11 June, 2018 09:13AM by Norbert Preining

John Goerzen

Running Digikam inside Docker

After my recent complaint about AppImage, I thought I’d describe how I solved my problem. I needed a small patch to Digikam, which was already in Debian’s 5.9.0 package, and the thought of rebuilding the AppImage was… unpleasant.

I thought – why not just run it inside Buster in Docker? There are various sources on the Internet for X11 apps in Docker. It took a little twiddling to make it work, but I did.

My Dockerfile was pretty simple:

FROM debian:buster
MAINTAINER John Goerzen 

RUN apt-get update && \
    apt-get -yu dist-upgrade && \
    apt-get --install-recommends -y install firefox-esr digikam digikam-doc \
         ffmpegthumbs imagemagick minidlna hugin enblend enfuse minidlna pulseaudio \
         strace xterm less breeze && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN adduser --disabled-password --uid 1000 --gecos "John Goerzen" jgoerzen && \
    rm -r /home/jgoerzen/.[a-z]*
RUN rm /etc/machine-id
CMD /usr/bin/docker

RUN mkdir -p /nfs/personalmedia /run/user/1000 && chown -R jgoerzen:jgoerzen /nfs /run/user/1000

I basically create the container and my account in it.
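
Building the image is then the usual docker build, tagged to match the name used in the run script below (run from the directory containing the Dockerfile):

docker build -t jgoerzen/digikam .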

Then this script starts up Digikam:

#!/bin/bash

set -e

# This will be unnecessary with docker 18.04 theoretically....  --privileged see
# https://stackoverflow.com/questions/48995826/which-capabilities-are-needed-for-statx-to-stop-giving-eperm
# and https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1755250

docker run -ti \
       -v /tmp/.X11-unix:/tmp/.X11-unix -v "/run/user/1000/pulse:/run/user/1000/pulse" -v /etc/machine-id:/etc/machine-id \
       -v /etc/localtime:/etc/localtime \
       -v /dev/shm:/dev/shm -v /var/lib/dbus:/var/lib/dbus -v /var/run/dbus:/var/run/dbus -v /run/user/1000/bus:/run/user/1000/bus  \
       -v "$HOME:$HOME" -v "/nfs/personalmedia/Pictures:/nfs/personalmedia/Pictures" \
     -e DISPLAY="$DISPLAY" \
     -e XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" \
     -e DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" \
     -e LANG="$LANG" \
     --user "$USER" \
     --hostname=digikam \
     --name=digikam \
     --privileged \
     --rm \
     jgoerzen/digikam "$@"  /usr/bin/digikam

The goal here was not total security isolation; if it had been, then all the dbus mounting and $HOME mounting would have been a poor idea. But as an alternative to AppImage — well, it worked perfectly. I could even get security updates if I wanted.

11 June, 2018 07:35AM by John Goerzen

June 10, 2018

Michal Čihař

Weblate 3.0.1

Weblate 3.0.1 has been released today. It contains several bug fixes, most importantly a fix for a possible migration issue affecting users when migrating from 2.20. There was no data corruption; some of the foreign keys were just possibly not properly migrated. Upgrading from 3.0 to 3.0.1 will fix this, as will going directly from 2.20 to 3.0.1.

Full list of changes:

  • Fixed possible migration issue from 2.20.
  • Localization updates.
  • Removed obsolete hook examples.
  • Improved caching documentation.
  • Fixed displaying of admin documentation.
  • Improved handling of long language names.

If you are upgrading from older version, please follow our upgrading instructions, the upgrade is more complex this time.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as an official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

10 June, 2018 08:15PM

Dirk Eddelbuettel

RcppZiggurat 0.1.5

ziggurats

A maintenance release 0.1.5 of RcppZiggurat is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator, which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator, improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure, where the Ziggurat generator from this package dominates the implementations accessed from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.* just as we did with the most recent pinp release two days ago. Other changes that had been pending were also included: a minor rewrite of DOIs in DESCRIPTION, a corrected state setter thanks to a PR by Ralf Stubner, and a tweak for function registration to have user_norm_rand() visible.

The NEWS file entry below lists all changes.

Changes in version 0.1.5 (2018-06-10)

  • Description rewritten using doi for references.

  • Re-setting the Ziggurat generator seed now correctly re-sets state (Ralf Stubner in #7 fixing #3)

  • Dynamic registration reverts to manual mode so that user_norm_rand() is visible as well (#7).

  • The vignette was updated to accomodate pandoc 2* [CRAN request].

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 June, 2018 06:27PM

RcppGSL 0.3.6

A maintenance update 0.3.6 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.* just as we did with the most recent pinp release two days ago. No other changes were made. The (this time really boring) NEWS file entry follows:

Changes in version 0.3.6 (2018-06-10)

  • The vignette was updated to accomodate pandoc 2* [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 June, 2018 06:20PM

RcppClassic 0.9.10

A maintenance release RcppClassic 0.9.10 is now at CRAN. This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.* just as we did with the most recent pinp release two days ago. No other changes were made.

CRANberries also reports the changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 June, 2018 04:36PM

Ben Hutchings

Debian LTS work, May 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and worked all those hours.

I uploaded the pending changes to linux at the beginning of the month, one of which had been embargoed. I prepared and released another update to the Linux 3.2 longterm stable branch (3.2.102). I then made a final upload of linux based on that.

10 June, 2018 03:05PM

John Goerzen

Please stop making the library situation worse with attempts to fix it

I recently had a simple-sounding desire. I would like to run the latest stable version of Digikam. My desktop, however, runs Debian stable, which has 5.3.0, not 5.9.0.

This is not such a simple proposition.


$ ldd /usr/bin/digikam | wc -l
396

And many of those were required at versions that weren’t in stable.

I had long thought that AppImage was a rather bad idea, but I decided to give it a shot. I realized it was worse than I had thought.

The problems with AppImage

About a year ago, I wrote about the problems with Docker security. I go into much more detail there, but the summary for AppImage is quite similar. How can I trust that all the components in the (for instance) Digikam AppImage image are being kept secure? Are they using the latest libssl and libpng, to avoid security issues? How will I get notified of a security update? (There seems to be no mechanism for this right now.) An AppImage user that wants to be secure has to manually answer every one of those questions for every application. Ugh.

Nevertheless, the call of better facial detection beckoned, and I downloaded the Digikam AppImage and gave it a whirl. The darn thing actually fired up. But when it would play videos, there was no sound. Hmmmm.

I found errors like this:

Cannot access file ././/share/alsa/alsa.conf

Nasty. I spent quite some time trying to make ALSA work, before a bunch of experimentation showed that if I ran alsoft-conf on the host, and selected only the PulseAudio backend, then it would work. I reported this bug to Digikam.

Then I thought it was working — until I tried to upload some photos. It turns out that SSL support in Qt in the AppImage was broken, since it was trying to dlopen an incompatible version of libssl or libcrypto on the host. More details are in the bug I reported about this also.

These are just two examples. In the rather extensive Googling I did about these problems, I came across issue after issue people had with running Digikam in an AppImage. These issues are not limited to the ALSA and SSL issues I describe here. And they are not occurring due to some lack of skill on the part of Digikam developers.

Rather, they’re occurring because AppImage packaging for a complex package like this is hard. It’s hard because it’s based on a fiction — the fiction that it’s possible to make an AppImage container for a complex desktop application act exactly the same, when the host environment is not exactly the same. Does the host run PulseAudio or ALSA? Where are its libraries stored? How do you talk to dbus?

And it’s not for lack of trying. The scripts to build the Digikam AppImage run to over 1000 lines of code in the AppImage directory, plus another 1300 lines of code (at least) in CMake files that handle much of the work, and another 3000 lines or so of patches to 3rd-party packages. That’s over 5000 lines of code! By contrast, the Debian packaging for the same version of Digikam, including Debian patches but excluding the changelog and copyright files, amounts to 517 lines. Of course, it is reusing OS packages for the dependencies that were already built, but that amounts to a much simpler build.

Frankly I don’t believe that AppImage really lives up to its hype. Requiring reinventing a build system and making some dangerous concessions on security for something that doesn’t really work in the end — not good in my book.

The library problem

But of course, AppImage exists for a reason. That reason is that it’s a real pain to deal with so many levels of dependencies in software. Even if we were to compile from source like the old days, and even if it was even compatible with the versions of the dependencies in my OS, that’s still a lot of work. And if I have to build dependencies from source, then I’ve given up automated updates that way too.

There’s a lot of good that ELF has brought us, but I can’t help thinking that it wasn’t really designed for a world in which a program links 396 libraries (plus dlopens a few more). Further, this world isn’t the corporate Unix world of the 80s; Open Source developers aren’t big on maintaining backwards compatibility (heck, the KDE and Qt libraries under digikam have both been entirely rewritten in incompatible ways more than once!). The farther you get from libc, the less people seem to care about backwards compatibility. And really, who can blame volunteers? You want to work on new stuff, not support binaries from 5 years ago, right?

I don’t really know what the solution is here. Build-from-source approaches like FreeBSD and Gentoo have plenty of drawbacks too. Is there some grand solution I’m missing? Some effort to improve this situation without throwing out all the security benefits that individually-packaged libraries give us in distros like Debian?

10 June, 2018 08:31AM by John Goerzen

June 09, 2018

Lars Wirzenius

Hacker Noir developments

I've been slowly writing on my would-be novel, Hacker Noir. See also my Patreon post. I've just pushed out a new public chapter, Assault, to the public website, and a patron-only chapter to Patreon: "Ambush", where the Team is ambushed, and then something bad happens.

The Assault chapter was hard to write. It's based on something that happened to me earlier this year. The Ambush chapter was much more fun.

09 June, 2018 06:47PM

New chapter of Hacker Noir on Patreon

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't finished it. I had a year long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll be putting that chapter on the book's public site (noir.liw.fi), and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

09 June, 2018 06:45PM

Dirk Eddelbuettel

RcppDE 0.1.6

Another maintenance release, now at version 0.1.6, of our RcppDE package is now on CRAN. It follows the most recent (unblogged, my bad) 0.1.5 release in January 2016 and the 0.1.4 release in September 2015.

RcppDE is a "port" of DEoptim, a popular package for derivative-free optimisation using differential evolution, to C++. By using RcppArmadillo, the code becomes a lot shorter and more legible. Our other main contribution is to leverage some of the excellence we get for free from using Rcpp, in particular the ability to optimise user-supplied compiled objective functions, which can make things a lot faster than repeatedly evaluating interpreted objective functions as DEoptim (and, in fairness, just like most other optimisers) does.

That is also what led to this upload: Kyle Baron noticed an issue when nesting a user-supplied compiled function inside a user-supplied compiled objective function -- and when using the newest Rcpp. This has to do with some cleanups we made for how RNG state is, or is not, set and preserved. Kevin Ushey was (once again) a real trooper here and added a simple class to Rcpp (in what is now the development version 0.12.17.2 available on the Rcpp drat repo), which is used here to (selectively) restore behaviour similar to what we had in Rcpp before (but which had created another issue for another project). So all that is good now in all use cases. We also have some other changes contributed by Yi Kang some time ago for both JADE-style randomization and some internal tweaks. Some packaging details were updated, and that sums up release 0.1.6.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

09 June, 2018 05:11PM

June 08, 2018

Manuel A. Fernandez Montecelo

Talk about the Debian GNU/Linux riscv64 port at RISC-V workshop

About a month ago I attended the RISC-V workshop (conference, congress) co-organised by the Barcelona Supercomputing Center (BSC) and Universitat Politècnica de Catalunya (UPC).

There I presented a talk with the (unimaginative) name of “Debian GNU/Linux Port for RISC-V 64-bit”, talking about the same topic as many other posts of this blog.

There are 2-3 such RISC-V Workshop events per year, one somewhere in Silicon Valley (initially at UC Berkeley, its birthplace) and the others spread around the world.

The demographics of this gathering are quite different from those of planet-debian; the people attending usually know a lot about hardware and often Linux, GNU toolchains and other FOSS, but sometimes very little about the inner workings of FOSS organisations such as Debian. My talk had these demographics as its target, so a lot of its content will not teach anything new to most readers of planet-debian.

Still, I know that some readers are interested in parts of this, now that the slides and videos are published, so here it is:

Also very relevant is that they were using Debian (our very own riscv64 port, recently imported into debian-ports infra) in two of the most important hardware demos in the corridors. The rest were mostly embedded distros to showcase FPS games like Quake2, Doom or similar.


All the feedback that I received from many of the attendees about the availability of the port was very positive and they were very enthusiastic, basically saying that they and their teams were really delighted to be able to use Debian to test their different prototypes and designs, and to drive development.

Also, many used Debian daily in their work and research for other purposes, for example a couple of people were proudly showing to me Debian installed on their laptops.

For me, this feedback is a testament to how much what we do every day matters to the world out there.


For the historical curiosity, I also presented a similar talk in a previous workshop (2 years back) at CSAIL / MIT.

At that time the port was in a much more incipient state, mostly a proof of concept (for example the toolchain had not even started to be upstreamed). Links:

08 June, 2018 09:20PM by Manuel A. Fernandez Montecelo

Junichi Uekawa

Recently I'm not writing any code.

Recently I'm not writing any code.

08 June, 2018 08:58PM by Junichi Uekawa

Erich Schubert

Elsevier CiteScore™ missing the top conference in data mining

Elsevier Scopus is crap.

It’s really time to abandon Elsevier. German universities have canceled their subscriptions. Sweden apparently has now begun to do so, too, because Elsevier (and, to a lesser extent, other publishers) overcharge universities badly.

Meanwhile, Elsevier still struggles to pretend it offers additional value. For example with the "horribly incomplete" Scopus database. For computer science, Scopus etc. are outright useless.

Elsevier just advertised (spammed) their “CiteScore™ metrics”. “Establishing a new standard for measuring serial citation impact”. Not.

"Powered by Scopus, CiteScore metrics are a comprehensive, current, transparent and…" horribly incomplete for computer science.

An excerpt from Elsevier CiteScore™:

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Scopus coverage years: from 2002 to 2003, from 2005 to 2015 (coverage discontinued in Scopus)

ACM SIGKDD is the top conference for data mining (there are others like NIPS with more focus on machine learning - I’m referring to the KDD subdomain).

But for Elsevier, it does not seem to be important.

Forget Elsevier. Also forget Thomson Reuter’s ISI Web of Science. It’s just the same publisher-oriented crap.

Communications of the ACM: Research Evaluation For Computer Science

Niklaus Wirth, Turing Award winner, appears for minor papers from indexed publications, not his seminal 1970 Pascal report. Knuth’s milestone book series, with an astounding 15,000 citations in Google Scholar, does not figure. Neither do Knuth’s three articles most frequently cited according to Google.

Yes, if you ask Elsevier or Thomson Reuter’s, Donald Knuth’s “the art of computer programming” does not matter. Because it is not published by Elsevier.

They also ignore the fact that open access is quickly gaining importance. Many very influential papers, such as “word2vec”, were first published on the open-access preprint server arXiv. Some were never even published anywhere else.

According to Google Scholar, the top venue for artificial intelligence is arXiv cs.LG, and stat.ML is ranked 5. And the top venue for computational linguistics is arXiv cs.CL. In databases and information systems the top venue WWW publishes via ACM, but using open-access links from their web page. The second, VLDB, operates their own server to publish PVLDB as open-access. And number three is arXiv cs.SI, number five is arXiv cs.DB.

Time to move to open-access, and away from overpriced publishers. If you want your paper to be read and cited, publish open-access and not with expensive walled gardens like Elsevier.

08 June, 2018 02:01PM by Erich Schubert

June 07, 2018

Thorsten Alteholz

My Debian Activities in May 2018

FTP master

This month I accepted 304 packages and rejected 20 uploads. The overall number of packages that got accepted this month was 420.

Debian LTS

This was my forty-seventh month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 24.25h. During that time I did LTS uploads of:

    [DLA 1387-1] cups security update for one CVE
    [DLA 1388-1] wireshark security update for 9 CVEs

I continued to work on the bunch of wireshark CVEs and sorted out all those that did not affect Jessie or Stretch. In the end I sent my debdiff with patches for 20 Jessie CVEs and 38 Stretch CVEs to Moritz so that he could compare them with his own work. Unfortunately he didn’t use all of them.

The CVEs for krb5 were marked as no-dsa by the security team, so there was no upload for Wheezy. Building the package for cups was a bit annoying as the test suite didn’t want to run in the beginning.

I also tested the apache2 package from Roberto twice and let the package do a second round before the final upload.

Last but not least I did a week of frontdesk duties and prepared my new working environment for Jessie LTS and Wheezy ELTS.

Other stuff

During May I did uploads of …

  • libmatthew-java to fix a FTBFS with Java 9 due to a disappearing javah. In the end it resulted in a new upstream version.

I also prepared the next libosmocore transition by uploading several osmocom packages to experimental. This will continue in June.

Further, I sponsored some glewlwyd packages for Nicolas Mora. He is well on his way to becoming a Debian Maintainer.

Last but not least I uploaded the new package libterm-readline-ttytter-per, which is needed to bring readline functionality to oysttyer, a command line twitter client.

07 June, 2018 10:51PM by alteholz

Brett Parker

The Psion Gemini

So, I backed the Gemini and received my shiny new device just a few months after they said that it'd ship; not bad for an indiegogo project! Out of the box, I flashed it, using the non-approved linux flashing tool at that time, and failed to back up the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was fairly much unusable (which was marginally upsetting, but hey). After a few more hours / days of playing around I got the IMEI number back into the Gemini and put the stock android image back on. I didn't at this point have working bluetooth or wifi, which was a bit of a pain too; it turns out the mac addresses for those are also stored in the nvram (doh!). That's now mostly working through a bit of collaboration with another Gemini owner; my Gemini currently uses the mac addresses from his device... which I'll need to fix in the next month or so, else we'll have a mac address collision, probably.

Overall, it's not a bad machine, the keyboard isn't quite as good as I was hoping for, the phone functionality is not bad once you're on a call, but not great until you're on a call, and I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full time phone. It is however really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you, the keyboard is better than using the on screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes, I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

07 June, 2018 01:04PM by Brett Parker ([email protected])

Mario Lang

Debian on a synthesizer

Bela is a low-latency optimized platform for audio applications built using Debian and Xenomai, running on a BeagleBone Black. I recently stumbled upon this platform while skimming through a modular synthesizer related forum. Bela has teamed up with the guys at Rebel Technologies to build a Bela based system in eurorack module format, called Salt. Luckily enough, I managed to secure a unit for my modular synthesizer.

Picture of the front panel of a Salt and Salt+ module

Inputs and Outputs

Salt features 2 audio (44.1kHz) in, 2 audio out, 8 analog (22kHz) in, 8 analog out, and a number of digital I/Os. And it also features a USB host port, which is what I need to connect a Braille display to it.

Accessible synthesizers

do not really exist. Complex devices like sequencers, or basically anything with an elaborate menu structure, are usually not usable by the blind. However, Bela, or more specifically Salt, is actually a game changer. I was able to install brltty and libbrlapi-dev (and a number of C++ libraries I like to use) with just a simple apt invocation.
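
The extra C++ libraries aside, that is roughly nothing more than:

sudo apt install brltty libbrlapi-dev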

Programmable module

Salt is marketed as a programmable module. To make life easy for creative people, the Bela platform provides integration for well-known audio processing systems like PureData, SuperCollider and (recently) Csound. This is great for getting started. However, it also allows you to write your own C++ applications, which is what I am doing right now, since I want to implement full Braille integration. So the display of my synthesizer is going to be tactile!

A stable product

Bought in May 2018, Salt shipped with Debian Stretch preinstalled. This means I get to use GCC 6.4 (C++14). Nice to see stable ship in commercial products.

Pepper

pepper is an obvious play on words. The goal for this project is to provide a Bela application for braille display users.

As a proof of concept, I already managed to successfully run a number of LV2 plugins via pepper on my Salt module. In the upcoming days, I hope I can manage to secure enough spare time to actually make more progress with this programming project.

07 June, 2018 10:00AM by Mario Lang

Norbert Preining

Git and Subversion collaboration

Git is great, we all know that, but there are use cases where its completely distributed development model does not shine (see here and here). And while my old git svn mirror of the TeX Live subversion repository was working well, git pull and git svn rebase didn’t work well together, re-pulling the same changes again and again. Finally, I took the time to experiment and fix this!

Most of the material in this blog post is already written up elsewhere; the best sources I found are here and here. Practically everything is written down there, but when one gets down to business some things work out a bit differently. So here we go.

Aim

The aim of the setup is to allow several developers to work on a git-svn mirror of a central Subversion repository. “Work” here means:

  • pull from the git mirror to get the latest changes
  • normal git workflows: branch, develop new features, push new branches to the git mirror
  • commit to the subversion repository using git svn dcommit

and all that with as much redundancy removed as possible.

One solution would be for each developer to create their own git-svn mirror. While this is fine in principle, it is error prone, costs lots of time, and everyone has to do git svn rebase etc. We want to be able to use normal git workflows as far as possible.

Layout

The basic layout of our setup is as follows:

The following entities are shown in the above graphics:

  • SvnRepo: the central subversion repository
  • FetchingRepo: the git-svn mirror which does regular fetches and pushes to the BareRepo
  • BareRepo: the central repository which is used by all developers to pull and collaborate
  • DevRepo: normal git clones of the BareRepo on the developers’ computer

The flow of data is also shown in the above diagram:

  • git svn fetch: the FetchingRepo is updated regularly (using cron) to fetch new revisions and new branches/tags from the SvnRepo
  • git push (1): the FetchingRepo pushes changes regularly (using cron) to the BareRepo
  • git pull: developers pull from the BareRepo, can check out remote branches and do normal git workflows
  • git push (2): developers push changes to and creation of new branches to the BareRepo
  • git svn dcommit: developers rebase-merge their changes into the main branch and commit from there to the SvnRepo

Besides the requirement to use git svn dcommit for submitting the changes to the SvnRepo, and the requirement by git svn to have linear histories, everything else can be done with normal workflows.

Procedure

Let us for the following assume that SVNREPO points to the URI of the Subversion repository, and BAREREPO points to the URI of the BareRepo. Furthermore, we refer to the path on the system (server, local) with variables like $BareRepo etc.

Step 1 – preparation of authors-file

To get consistent entries for committers, we need to set up an authors file, giving a mapping from Subversion users to names/emails:

svnuser1 = AAA BBB 
svnuser2 = CCC DDD 
...

Let us assume that AUTHORSFILE environment variable points to this file.

Step 2 – creation of fetching repository

This step creates a git-svn mirror, please read the documentation for further details. If the Subversion repository follows the standard layout (trunk, branches, tags), then the following line will work:

git svn clone --prefix="" --authors-file=$AUTHORSFILE -s $SVNREPO

The important part here is the --prefix one. The documentation of git svn says here:

Setting a prefix (with a trailing slash) is strongly encouraged in any case, as your SVN-tracking refs will then be located at “refs/remotes/$prefix/”, which is compatible with Git’s own remote-tracking ref layout (refs/remotes/$remote/). Setting a prefix is also useful if you wish to track multiple projects that share a common repository. By default, the prefix is set to origin/.

Note: Before Git v2.0, the default prefix was “” (no prefix). This meant that SVN-tracking refs were put at “refs/remotes/*”, which is incompatible with how Git's own remote-tracking refs are organized. If you still want the old default, you can get it by passing --prefix "" on the command line.

While one might be tempted to use a prefix of “svn” or “origin”, both of which I have done, this will complicate (make impossible?) later steps, in particular the synchronization of git pull with git svn fetch.

The original blogs I mentioned in the beginning were written before the switch to default=”origin” was made, so this was the part that puzzled me and I didn’t understand why the old descriptions didn’t work anymore.

Step 3 – cleanup of the fetching repository

By default, git svn creates and checks out a master branch. In this case, the Subversion repository's “master” is the “trunk” branch, and we want to keep it like that. Thus, let us check out the trunk branch and remove master; after entering the FetchingRepo, do

cd $FetchingRepo
git checkout trunk
git checkout -b trunk
git branch -d master

The two checkouts are necessary because the first one will leave you with a detached head. In fact, not checking anything out would be fine, too, but git svn does not work on bare repositories, so we need to check out some branch.

Step 4 – init the bare BareRepo

This is done in the usual way, I guess you know that:

git init --bare $BareRepo

Step 5 – setup FetchingRepo to push all branches and push them

The cron job we will introduce later will fetch all new revisions, including new branches. We want to push all branches to the BareRepo. This is done by adjusting the fetch and push configuration; after changing into the FetchingRepo:

cd $FetchingRepo
git remote add origin $BAREREPO
git config remote.origin.fetch '+refs/remotes/*:refs/remotes/origin/*'
git config remote.origin.push 'refs/remotes/*:refs/heads/*'
git push origin

What has been done is that fetch updates the remote-tracking branches, and push sends the remote-tracking branches to the BareRepo as normal branches. This ensures that new Subversion branches (or tags, which are nothing else than branches in Subversion) are also pushed to the BareRepo.

Step 6 – adjust the default checkout branch in the BareRepo

By default git clones/checks out the master branch, but we don't have a master branch; “trunk” plays its role. Thus, let us adjust the default in the BareRepo:

cd $BareRepo
git symbolic-ref HEAD refs/heads/trunk

Step 7 – developers branch

Now we are ready to use the bare repo, and clone it onto one of the developers' machines:

git clone $BAREREPO

But before we can actually use this clone, we need to make sure that git commits sent to the Subversion repository carry the same user name and email for the committer everywhere. The reason for this is that the commit hash is computed from various pieces of information, including the name/email (see details here). Thus we need to make sure that the git svn dcommit in the DeveloperRepo and the git svn fetch in the FetchingRepo create the very same hash! To that end, each developer needs to set up an authors file with at least their own entry:

cd $DeveloperRepo
echo 'mysvnuser = My Name '  > .git/usermap
git config svn.authorsfile '.git/usermap'

Important: the line for mysvnuser must exactly match the one in the original authorsfile from Step 1!

The final step is to allow the developer to commit to the SvnRepo by adding the necessary information to the git configuration:

git svn init -s $SVNREPO

Warning: Here we rely on two things: first, that git clone initializes the default remote name origin, and second, that git svn init uses the default prefix “origin”, as discussed above.

If this is too shaky for you, the other option is to define the remote name during clone, and use that for the prefix:

git clone -o mirror $BAREREPO
git svn init --prefix=mirror/ -s $SVNREPO

This way the default remote will be “mirror” and all is fine.

Note: Upon your first git svn usage in the DeveloperRepo, as well as always after a pull, you will see messages like:

Rebuilding .git/svn/refs/remotes/origin/trunk/.rev_map.c570f23f-e606-0410-a88d-b1316a301751 ...
rNNNN = 1bdc669fab3d21ed7554064dc461d520222424e2
rNNNM = 2d1385fdd8b8f1eab2a95d325b0d596bd1ddb64f
...

This is a good sign, meaning that git svn does not re-fetch the whole set of revisions, but reuses the ones pulled from the BareRepo and only rebuilds the mapping, which should be fast.

Updating the FetchingRepo

Updating the FetchingRepo should be done automatically using cron; the necessary steps are:

cd $FetchingRepo
git svn fetch --all
git push

This fetches all new revisions and pushes the configured refs, that is all remote heads, to the BareRepo.
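
A minimal sketch of how this could be wired into cron (the repository path, script location and schedule below are assumptions, adjust them to your setup):

#!/bin/sh
# update-svn-mirror: fetch new Subversion revisions and push them to the BareRepo
set -e
FetchingRepo=/srv/git/texlive-mirror   # assumed path of the FetchingRepo
cd "$FetchingRepo"
git svn fetch --all
git push

# example crontab entry for the fetching user, running every 15 minutes:
# */15 * * * * /usr/local/bin/update-svn-mirror >>/var/log/update-svn-mirror.log 2>&1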

Note: If a developer first commits a change to the SvnRepo using git svn dcommit and then, before the FetchingRepo has updated the BareRepo (i.e., before the next cron run), also does a git pull, they will see something like:

$ git pull
From preining.info:texlive2
 + 10cc435f163...953f9564671 trunk      -> origin/trunk  (forced update)
Already up to date.

This is due to the fact that the remote head is still behind the local head, which can easily be seen by looking at the output of git log: before the FetchingRepo has updated the BareRepo, one would see something like:

$ git log
commit 3809fcc9aa6e0a70857cbe4985576c55317539dc (HEAD -> trunk)
Author: ....

commit eb19b9e6253dbc8bdc4e1774639e18753c4cd08f (origin/trunk, origin/HEAD)
...

and afterwards all three refs would point to the same top commit. This is nothing to worry about and is normal behavior; in fact, the default setup for fetching remotes is to force-update them.

Protecting the trunk branch

I sometimes found myself wrongly pushing to trunk instead of using git svn dcommit. This can be avoided by imposing restrictions on pushing. With gitolite, simply add a rule

- refs/heads/trunk = USERID

to the repo stanza of your mirror. When using Git(Lab|Hub) there are options to protect branches.

A more advanced restriction policy would be to require that branches created by users are within a certain namespace. For example, the gitolite rules

repo yoursvnmirror
    RW+      = fetching-user
    RW+ dev/ = USERID
    R        = USERID

would allow only the FetchingRepo (identified by fetching-user) to push everywhere, while I (USERID) could push/rewind/delete only branches starting with “dev/”, but read everything.

Workflow for developers

The recommended workflow compatible with this setup is

  • use git pull to update the local developers repository
  • use only branches that are not created/update via git-svn
  • at commit time, (1) rebase your branch onto trunk, (2) merge (fast forward) your branch into trunk, (3) commit your changes with git svn dcommit, as sketched below
  • rinse and repeat
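
A minimal sketch of that commit-time sequence (the branch name dev/myfeature is just a placeholder):

cd $DeveloperRepo
git checkout dev/myfeature
git rebase trunk                     # (1) rebase the feature branch onto trunk
git checkout trunk
git merge --ff-only dev/myfeature    # (2) fast-forward trunk to the feature branch
git svn dcommit                      # (3) send the new commits to the Subversion repository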

The more detailed discussion and safety measures laid out in the git-svn documentation apply as well and are worth reading!

07 June, 2018 02:28AM by Norbert Preining

June 06, 2018

Athos Ribeiro

Running OBS Workers and OBS staging instance

This is my third post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

  • Post 1: My Google Summer of Code 2018 project
  • Post 2: Setting up a local OBS development environment

About the stuck OBS workers

As I mentioned in my last post, OBS workers were hanging on my local installation. I finally got to the point where the only missing piece of my local installation (to have a raw OBS install which can build Debian packages) was to figure out this issue with the OBS workers.

06 June, 2018 08:10PM

Sylvain Beucler

Best GitHub alternative: us

Why try to choose the host that sucks less, when hosting a single-file (S)CGI gets you decentralized git-like + tracker + wiki?

Fossil

https://www.fossil-scm.org/

We gotta take the power back.

06 June, 2018 06:16PM

hackergotchi for Joey Hess

Joey Hess

the single most important criteria when replacing Github

I could write a lot of things about the Github acquisition by Microsoft. About Github's embrace and extend of git, and how it passed unnoticed by people who now fear the same thing now that Microsoft is in the picture. About the stultifying effects of Github's centralization, and its retardant effect on general innovation in spaces around git and software development infrastructure.

Instead I'd rather highlight one simple criteria you can consider when you are evaluating any git hosting service, whether it's Gitlab or something self-hosted, or federated, or P2P[1], or whatever:

Consider all the data that's used to provide the value-added features on top of git. Issue tracking, wikis, notes in commits, lists of forks, pull requests, access controls, hooks, other configuration, etc.
Is that data stored in a git repository?

Github avoids doing that and there's a good reason why: By keeping this data in their own database, they lock you into the service. Consider if Github issues had been stored in a git repository next to the code. Anyone could quickly and easily clone the issue data, consume it, write alternative issue tracking interfaces, which then start accepting git pushes of issue updates and syncing all around. That would have quickly become the de-facto distributed issue tracking data format.

Instead, Github stuck it in a database, with a rate-limited API, and while this probably had as much to do with expediency, and a certain centralized mindset, as intentional lock-in at first, it's now become such good lock-in that Microsoft felt Github was worth $7 billion.

So, if whatever thing you're looking at instead of Github doesn't do this, it's at worst hoping to emulate that, or at best it's neglecting an opportunity to get us out of the trap we now find ourselves in.


[1] Although in the case of a P2P system which uses a distributed data structure, that can have many of the same benefits as using git. So, git-ssb, which stores issues etc as ssb messages, is just as good, for example.

06 June, 2018 04:40PM

Russell Coker

BTRFS and SE Linux

I’ve had problems with systems running SE Linux on BTRFS losing the XATTRs used for storing the SE Linux file labels after a power outage.

Here is the link to the patch that fixes this [1]. Thanks to Hans van Kranenburg and Holger Hoffstätte for the information about this patch which was already included in kernel 4.16.11. That was uploaded to Debian on the 27th of May and got into testing about the time that my message about this issue got to the SE Linux list (which was a couple of days before I sent it to the BTRFS developers).

The kernel from Debian/Stable still has the issue. So using a testing kernel might be a good option to deal with this problem at the moment.

Below is the information on reproducing this problem. It may be useful for people who want to reproduce similar problems. Also all sysadmins should know about “reboot -nffd”, if something really goes wrong with your kernel you may need to do that immediately to prevent corrupted data being written to your disks.

The command “reboot -nffd” (kernel reboot without flushing kernel buffers or writing status) when run on a BTRFS system with SE Linux will often result in /var/log/audit/audit.log being unlabeled. It also results in some systemd-journald files like /var/log/journal/c195779d29154ed8bcb4e8444c4a1728/system.journal being unlabeled but that is rarer. I think that the same problem afflicts both systemd-journald and auditd but it's a race condition that on my systems (both production and test) is more likely to affect auditd.

root@stretch:/# xattr -l /var/log/audit/audit.log 
security.selinux: 
0000   73 79 73 74 65 6D 5F 75 3A 6F 62 6A 65 63 74 5F    system_u:object_ 
0010   72 3A 61 75 64 69 74 64 5F 6C 6F 67 5F 74 3A 73    r:auditd_log_t:s 
0020   30 00                                              0.

SE Linux uses the xattr “security.selinux”, you can see what it’s doing with xattr(1) but generally using “ls -Z” is easiest.

If this issue just affected “reboot -nffd” then a solution might be to just not run that command. However this affects systems after a power outage.

I have reproduced this bug with kernel 4.9.0-6-amd64 (the latest security update for Debian/Stretch which is the latest supported release of Debian). I have also reproduced it in an identical manner with kernel 4.16.0-1-amd64 (the latest from Debian/Unstable). For testing I reproduced this with a 4G filesystem in a VM, but in production it has happened on BTRFS RAID-1 arrays, both SSD and HDD.

#!/bin/bash 
set -e 
COUNT=$(ps aux|grep [s]bin/auditd|wc -l) 
date 
if [ "$COUNT" = "1" ]; then 
 echo "all good" 
else 
 echo "failed" 
 exit 1 
fi

Firstly, the above is the script /usr/local/sbin/testit. I test for auditd running because it aborts if the context on its log file is wrong: when SE Linux is in enforcing mode an incorrect/missing label on the audit.log file causes auditd to abort.

root@stretch:~# ls -liZ /var/log/audit/audit.log 
37952 -rw-------. 1 root root system_u:object_r:auditd_log_t:s0 4385230 Jun  1 
12:23 /var/log/audit/audit.log

Above is before I do the tests.

while ssh stretch /usr/local/sbin/testit ; do 
 ssh stretch "reboot -nffd" > /dev/null 2>&1 & 
 sleep 20 
done

Above is the shell code I run to do the tests. Note that the VM in question runs on SSD storage which is why it can consistently boot in less than 20 seconds.

Fri  1 Jun 12:26:13 UTC 2018 
all good 
Fri  1 Jun 12:26:33 UTC 2018 
failed

Above is the output from the shell code in question. After the first reboot it fails. The probability of failure on my test system is greater than 50%.

root@stretch:~# ls -liZ /var/log/audit/audit.log  
37952 -rw-------. 1 root root system_u:object_r:unlabeled_t:s0 4396803 Jun  1 12:26 /var/log/audit/audit.log

Now the result. Note that the Inode has not changed. I could understand a newly created file missing an xattr, but this is an existing file which shouldn't have had its xattr changed. But somehow it gets corrupted.

The first possibility I considered was that SE Linux code might be at fault. I asked on the SE Linux mailing list (I haven't been involved in SE Linux kernel code for about 15 years) and was informed that this isn't likely at all. There have been no problems like this reported with other filesystems.

06 June, 2018 11:07AM by etbe

hackergotchi for Evgeni Golov

Evgeni Golov

Not-So-Self-Hosting

I planned to write about this for quite some time now (last time end of April), and now, thanks to the GitHub acquisition by Microsoft and all that #movingtogitlab traffic, I am finally sitting here and writing these lines.

This post is not about Microsoft, GitHub or GitLab, and it's neither about any other SaaS solution out there, the named companies and products are just examples. It's more about "do you really want to self-host?"

Every time a big company acquires, shuts down or changes an online service (SaaS - Software as a Service), you hear people say "told you so, you should better have self-hosted from the beginning". And while I do run quite a lot of own infrastructure, I think this statement is too general and does not work well for many users out there.

Software as a Service

There are many code-hosting SaaS offerings: GitHub (proprietary), GitLab (open core), Pagure (FOSS) to name just a few. And while their licenses, ToS, implementations and backgrounds differ, they have a few things in common.

Benefits:

  • (sort of) centralized service
  • free (as in beer) tier available
  • high number of users (and potential collaborators)
  • high number of hosted projects
  • good (fsvo "good") connection from around the globe
  • no maintenance required from the users

Limitations:

  • dependency on the interest/goodwill of the owner to continue the service
  • some features might require signing up for a paid tier

Overall, SaaS is handy if you're lazy, just want to get the job done and benefit from others being able to easily contribute to your code.

Hosted Solutions

All of the above mentioned services also offer a hosted solution: GitHub Enterprise, GitLab CE and EE, Pagure.

As those are software packages you can install essentially everywhere, you can host the service "in your basement", in the cloud or in any data center you have hardware or VMs running.

However, with self-hosting, the above list of common things shifts quite a bit.

Benefits:

  • the service is configured and secured exactly like you need it
  • the data remains inside your network/security perimeter if you want it

Limitations:

  • requires users to create an account of their own on your instance for collaboration
  • probably low number of users (and potential collaborators)
  • connection depends on your hosting connection
  • infrastructure (hardware, VM, OS, software) requires regular maintenance
  • dependency on your (free) time to keep the service running
  • dependency on your provider (network/hardware/VM/cloud)

I think especially the first and last points are very important here.

First, many contributions happen because someone sees something small and wants to improve it, be it a typo in the documentation, a formatting error in the manpage or a trivial improvement of the code. But these contributions only happen when the complexity to submit it is low. Nobody not already involved in OpenStack would submit a typo-fix to their Gerrit which needs a Launchpad account… A small web-edit on GitHub or GitLab on the other hand is quickly done, because "everybody" has an account anyways.

Second, while it is called "self-hosting", in most cases it's more of a "self-running" or "self-maintaining" as most people/companies don't own the whole infrastructure stack.

Let's take this website as an example (even though it does not host any Git repositories): the webserver runs in a container (LXC) on a VM I rent from netcup. In the past, netcup used to get their infrastructure from Hetzner - however I am not sure that this is still the case. So worst case, the hosting of this website depends on me maintaining the container and the container host, netcup maintaining the virtualization infrastructure and Hetzner maintaining the actual data center. This also implies that I have to trust those companies and their suppliers as I only "own" the VM upwards, not the underlying infrastructure and not the supporting infrastructure (network etc).

SaaS vs Hosted

There is no silver bullet to that. One important question is "how much time/effort can you afford?" and another "which security/usability constraints do you have?".

Hosted for a dedicated group

If you need a solution for a dedicated group (your work, a big FOSS project like Debian or a social group like riseup), a hosted solution seems like a good idea. Just ensure that you have enough infrastructure and people to maintain it as a 24x7 service or at least close to that, for a long time, as people will depend on your service.

The same also applies if you need/want to host your code inside your network/security perimeter.

Hosted for an individual

Contrary to a group, I don't think a hosted solution makes sense for an individual most of the time. The burden of maintenance quite often outweighs the benefits, especially as you'll have to keep track of (security) updates for the software and the underlying OS as otherwise the "I own my data" benefit becomes "everyone owns me" quite quickly. You also have to pay for the infrastructure, even if the OS and the software are FOSS.

You're also probably missing out on potential contributors, which might have an account on the common SaaS platforms, but won't submit a pull-request for a small change if they have to register on your individual instance.

SaaS for a dedicated group

If you don't want to maintain an own setup (resources/costs), you can also use a SaaS platform for a group. Some SaaS vendors will charge you for some features (they have to pay their staff and bills too!), but it's probably still cheaper than having the right people in-house unless you have them anyways.

You also benefit from a networking effect, as other users of the same SaaS platform can contribute to your projects "at no cost".

SaaS for an individual

For an individual, a SaaS solution is probably the best fit as it's free (as in beer) in the most cases and allows the user to do what they intend to do, instead of shaving yaks and stacking turtles (aka maintaining infrastructure instead of coding).

And you again get the networking effect of the drive-by contributors who would not sign up for a quick fix.

Selecting the right SaaS

When looking for a SaaS solution, try to answer the following questions:

  • Do you trust the service to be present next year? In ten years? Is there a sustainable business model?
  • Do you trust the service with your data?
  • Can you move between SaaS and hosted easily?
  • Can you move to a different SaaS (or hosted solution) easily?
  • Does it offer all the features and integrations you want/need?
  • Can you leverage the network effect of being on the same platform as others?

Selecting the right hosted solution

And answer these when looking for a hosted one:

  • Do you trust the vendor to ship updates next year? In ten years?
  • Do you understand the involved software stack and are you willing to debug it when things go south?
  • Can you get additional support from the vendor (for money)?
  • Does it offer all the features and integrations you want/need?

So, do you really want to self-host?

I can't speak for you, but for my part, I don't want to run a full-blown Git hosting just for my projects, GitHub is just fine for that. And yes, GitLab would be equally good, but there is little reason to move at the moment.

And yes, I do run my own Nextcloud instance, mostly because I don't want to backup the pictures from my phone to "a cloud". YMMV.

06 June, 2018 09:54AM by evgeni

hackergotchi for Thomas Lange

Thomas Lange

FAI 5.7

The new FAI release 5.7 is now available. Packages are uploaded to unstable and are available from the fai-project.org repository. I've also created new FAI ISO images and the special Ubuntu only installation FAI CD is now installing Ubuntu 18.04 aka Bionic. The FAI.me build service is also using the new FAI release.

In summary, the process for this release went very smoothly and I am happy that the update of the ISO images and the FAI.me service happened very shortly after the new release.

06 June, 2018 06:33AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Disaster a-Brewing

I brewed two new batches of beer last March and I've been so busy since then that I haven't had time to share how much of a failure it was.

See, after three years I thought I was getting better at brewing beer and the whole process of mashing, boiling, fermenting and bottling was supposed to be all figured out by now.

Turns out I was both greedy and unlucky and - woe is me! - one of my carboys exploded. Imagine 15 liters (out of a 19L batch) spilling out in my bedroom at 1AM with such force that the sound of the rubber bung shattering on the ceiling woke me up in a panic. I legitimately thought someone had been shot in my bedroom.

This carboy was full to the brim prior to the beerxplosion

The aftermath left the walls, the ceiling and the wooden floor covered in thick semi-sweet brown liquid.

This was the first time I tried a "new" brewing technique called parti-gyle. When doing a parti-gyle, you reuse the same grains twice to make two different batches of beer: typically, the first batch is strong, whereas the second one is pretty low in alcohol. Parti-gyle used to be the way beer was brewed a few hundred years ago. The Belgian monks made their Tripels with the first mash, the Dubbels with the second mash, and the final mash was brewed with funky yeasts to make lighter beers like Saisons.

The reason for my carboy exploding was twofold. First of all, I was greedy and filled the carboy too much for the high-gravity porter I was brewing. When your wort is very sweet, the yeast tends to degas a whole lot more and needs more head space not to spill over. At this point, any homebrewer with experience will revolt and say something like "Why didn't you use a blow-off tube you dummy!". A blow-off tube is a tube that comes out the airlock into a large tub of water and helps contain the effects of violent primary fermentation. With a blow-off tube, instead of having beer spill out everywhere (or worse, having your airlock completely explode), the mess is contained to the water vessel the tube is in.

The thing is, I did use a blow-off tube. Previous experience taught me how useful they can be. No, the real reason my carboy exploded was that my airlock clogged up and let pressure build up until the bung gave way. The particular model of airlock I used was a three piece airlock with a little cross at the end of the plastic tube1. Turns out that little cross accumulated yeast and when that yeast dried up, it created a solid plug. Needless to say, my airlocks don't have these little crosses anymore...

On a more positive note, it was also the first time I dry-hopped with full cones instead of pellets. I had some leftover cones in the freezer from my summer harvest and decided to use them. The result was great as the cones make for less trub than pellets when dry-hopping.

Recipes

What was left of the porter came out great. Here's the recipe if you want to try to replicate it. The second mash was also surprisingly good and turned out to be a very drinkable brown beer.

Closeup shot of hops floating in my carboy

Party Porter (first mash)

The target boil volume is 23L and the target batch size 17L. Mash at 65°C and ferment at 19°C.

Since this is a parti-gyle, do not sparge. If you don't reach the desired boil size in the kettle, top it off with water until you reach 23L.

Black Malt gives very nice toasty aromas to this porter, whereas the Oat Flakes and the unmalted Black Barley make for a nice black and foamy head.

Malt:

  • 5.7 kg x Superior Pale Ale
  • 450 g x Amber Malt
  • 450 g x Black Barley (not malted)
  • 400 g x Oat Flakes
  • 300 g x Crystal Dark
  • 200 g x Black Malt

Hops:

  • 13 g x Bravo (15.5% alpha acid) - 60 min Boil
  • 13 g x Bramling Cross (6.0% alpha acid) - 30 min Boil
  • 13 g x Challenger (7.0% alpha acid) - 30 min Boil

Yeast:

  • White Labs - American Ale Yeast Blend - WLP060

Party Brown (second mash)

The target boil volume is 26L and the target batch size 18L. Mash at 65°C for over an hour, sparge slowly and ferment at 19°C.

The result is a very nice table beer.

Malt:

same as for the Party Porter, since we are doing a parti-gyle.

Hops:

  • 31 g x Northern Brewer (9.0% alpha acid) - 60 min Boil
  • 16 g x Kent Goldings (5.5% alpha acid) - 15 min Boil
  • 13 g x Kent Goldings (5.5% alpha acid) - 5 min Boil
  • 13 g x Chinook (cones) - Dry Hop

Yeast:

  • White Labs - Nottingham Ale Yeast - WLP039

  1. The same kind of cross you can find in sinks to keep you from inadvertently dropping objects down the drain. 

06 June, 2018 04:00AM by Louis-Philippe Véronneau

June 05, 2018

hackergotchi for Thomas Goirand

Thomas Goirand

Using a dummy network interface

For a long time, I've been very much annoyed by network setups on virtual machines. Either you choose a bridge interface (which is very easy with something like Virtualbox), or you choose NAT. The issue with NAT is that you can't easily get into your VM (for example, virtualbox doesn't expose the gateway to your VM). With bridging, you can get into trouble because your VM will attempt to get DHCP from the outside network, which means that first, you'll get a different IP depending on where your laptop runs, and second, the external server may refuse your VM because it's not authenticated (for example because of a MAC address filter, or 802.1X auth).

But there’s a solution to it. I’m now very happy with my network setup, which is using a dummy network interface. Let me share how it works.

In the modern Linux kernel, there's a “fake” network interface provided by a module called “dummy”. To add such an interface, simply load the kernel module (ie: “modprobe dummy”) and start playing. Then you can bridge that interface, tap it, and plug your VM into it. Since the dummy interface really lives in your computer, you do have access to this internal network with a route to it.

I’m using this setup for connecting both KVM and Virtualbox VMs, you can even mix both. For Virtualbox, simply use the dropdown list for the bridge. For KVM, use something like this in the command line: -device e1000,netdev=net0,mac=08:00:27:06:CF:CF -netdev tap,id=net0,ifname=mytap0,script=no,downscript=no

Here’s a simple script to set that up, with on top, masquerading for both ip4 and ipv6:

# Load the dummy interface module
modprobe dummy

# Create a dummy interface called mynic0
ip link set name mynic0 dev dummy0

# Set its MAC address
ifconfig mynic0 hw ether 00:22:22:dd:ee:ff

# Add a tap device
ip tuntap add dev mytap0 mode tap user root

# Create a bridge, and bridge to it mynic0 and mytap0
brctl addbr mybr0
brctl addif mybr0 mynic0
brctl addif mybr0 mytap0

# Set an IP addresses to the bridge
ifconfig mybr0 192.168.100.1 netmask 255.255.255.0 up
ip addr add fd5d:12c9:2201:1::1/64 dev mybr0

# Make sure all interfaces are up
ip link set mybr0 up
ip link set mynic0 up
ip link set mytap0 up

# Set basic masquerading for both ipv4 and 6
iptables -I FORWARD -j ACCEPT
iptables -t nat -I POSTROUTING -s 192.168.100.0/24 -j MASQUERADE
ip6tables -I FORWARD -j ACCEPT
ip6tables -t nat -I POSTROUTING -s fd5d:12c9:2201:1::/64 -j MASQUERADE
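
Inside a guest attached to this bridge, a static configuration along these lines should then give you connectivity (the interface name ens3 and the .10 / ::10 addresses are just examples):

# run inside the VM
ip addr add 192.168.100.10/24 dev ens3
ip route add default via 192.168.100.1
ip addr add fd5d:12c9:2201:1::10/64 dev ens3
ip -6 route add default via fd5d:12c9:2201:1::1
# point /etc/resolv.conf at any DNS resolver reachable through the NAT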

05 June, 2018 08:45PM by Goirand Thomas

hackergotchi for Daniel Pocock

Daniel Pocock

Public Money Public Code: a good policy for FSFE and other non-profits?

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it also apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24, votes: 0 for, 21 against, 2 abstentions)

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

05 June, 2018 08:40PM by Daniel.Pocock

hackergotchi for Jonathan McDowell

Jonathan McDowell

Getting started with Home Assistant

Having set up some MQTT sensors and controllable lights the next step was to start tying things together with a nicer interface than mosquitto_pub and mosquitto_sub. I don’t yet have enough devices setup to be able to do some useful scripting (turning on the snug light when the study is cold is not helpful), but a web control interface makes things easier to work with as well as providing a suitable platform for expansion as I add devices.

There are various home automation projects out there to help with this. I’d previously poked openHAB and found it quite complex, and I saw reference to Domoticz which looked viable, but in the end I settled on Home Assistant, which is written in Python and has a good range of integrations available out of the box.

I shoved the install into a systemd-nspawn container (I have an Ansible setup which makes spinning one of these up with a basic Debian install simple, and it makes it easy to cleanly tear things down as well). One downside of Home Assistant is that it decides it's going to install various Python modules once you actually configure some of its integrations. This makes me a little uncomfortable, but I set it up with its own virtualenv to make it easy to see what had been pulled in. Additionally I separated out the logs, config and state database, all of which normally go in ~/.homeassistant/. My systemd service file went in /etc/systemd/system/home-assistant.service and looks like:

[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=hass
ExecStart=/srv/hass/bin/hass -c /etc/homeassistant --log-file /var/log/homeassistant/homeassistant.log

MemoryDenyWriteExecute=true
ProtectControlGroups=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Moving the state database needs an edit to /etc/homeassistant/configuration.yaml (a default will be created on first startup, I’ll only mention the changes I made here):

recorder:
  db_url: sqlite:///var/lib/homeassistant/home-assistant_v2.db

I disabled the Home Assistant cloud piece, as I’m not planning on using it:

# cloud:

And the introduction card:

# introduction:

The existing MQTT broker was easily plumbed in:

mqtt:
  broker: mqtt-host
  username: hass
  password: !secret mqtt_password
  port: 8883
  certificate: /etc/ssl/certs/ca-certificates.crt

Then the study temperature sensor (part of the existing sensor block that had weather prediction):

sensor:
  - platform: mqtt
    name: "Study Temperature"
    state_topic: "collectd/mqtt.o362.us/mqtt/temperature-study"
    value_template: "{{ value.split(':')[1] }}"
    device_class: "temperature"
    unit_of_measurement: "°C"

The templating ability let me continue to log into MQTT in a format collectd could parse, while also being able to pull the information into Home Assistant.

Finally the Sonoff controlled light:

light:
  - platform: mqtt
    name: snug
    command_topic: 'cmnd/sonoff-snug/power'

I set http_password (to prevent unauthenticated access) and mqtt_password in /etc/homeassistant/secrets.yaml. Then systemctl start home-assistant brought the system up on http://hass-host:8123/, and the default interface presented the study temperature and a control for the snug light, as well as the default indicators of whether the sun is up or not and the local weather status.

I do have a few niggles with Home Assistant:

  • Single password for access: There’s one password for accessing the API endpoint, so no ability to give different users different access or limit what an external integration can do.
  • Wants an entire subdomain: This is a common issue with webapps; they don’t want to live in a subdirectory under a main site (I also have this issue with my UniFi controller and Keybase, who don’t want to believe my main website is old skool with /~noodles/). There’s an open configurable webroot feature request, but no sign of it getting resolved. Sadly it involves work to both the backend and the frontend - I think a modicum of hacking could fix up the backend bits, but have no idea where to start with a Polymer frontend.
  • Installs its own software: I don’t like the fact the installation of Python modules isn’t an up front thing. I’d rather be able to pull a dependency file easily into Ansible and lock down the installation of new things. I can probably get around this by enabling plugins, allowing the modules to be installed and then locking down permissions but it’s kludgy and feels fragile.
  • Textual configuration: I’m not really sure I have a good solution to this, but it’s clunky to have to do all the configuration via a text file (and I love scriptable configuration). This isn’t something that’s going to work out of the box for non-technical users, and even for those of us happy hand editing YAML there’s a lot of functionality that’s hard to discover without some digging. One of my original hopes with Home Automation was to get a better central heating control and if it’s not usable by any household member it isn’t going to count as better.

Some of these are works in progress, some are down to my personal preferences. There's active development, which is great to see, and plenty of documentation - both official on the project website, and in the community forums. And one of the nice things about tying everything together with MQTT is that if I do decide Home Assistant isn't the right thing down the line, I should be able to drop in anything else that can deal with an MQTT broker.

05 June, 2018 07:05PM

Reproducible builds folks

Reproducible Builds: Weekly report #162

Here’s what happened in the Reproducible Builds effort between Sunday May 27 and Saturday June 2 2018:

Packages reviewed and fixed, and bugs filed

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

reproducible-builds.org website updates

There were a number of changes to the reproducible-builds.org website this week too, including:

Chris Lamb also updated the diffoscope.org website, including adding a progress bar animation as well as making the “try it online” link more prominent and correcting the source tarball link.

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

05 June, 2018 07:47AM

Russ Allbery

Review: The Obelisk Gate

Review: The Obelisk Gate, by N.K. Jemisin

Series: The Broken Earth #2
Publisher: Orbit
Copyright: August 2016
ISBN: 0-316-22928-8
Format: Kindle
Pages: 448

The Obelisk Gate is the sequel to The Fifth Season and picks up right where it left off. This is not a series to read out of order.

The complexity of The Fifth Season's three entwined stories narrows down to two here, which stay mostly distinct. One follows Essun, who found at least a temporary refuge at the end of the previous book and now is split between learning a new community and learning more about the nature of the world and orogeny. The second follows Essun's daughter, whose fate had been left a mystery in the first book. This is the middle book of a trilogy, and it's arguably less packed with major events than the first book, but the echoing ramifications of those events are vast and provide plenty to fill a novel. The Obelisk Gate never felt slow. The space between major events is filled with emotional processing and revelations about the (excellent) underlying world-building.

We do finally learn at least something about the stone-eaters, although many of the details remain murky. We also learn something about Alabaster's goals, which were the constant but mysterious undercurrent of the first book. Mixed with this is the nature of the Guardians (still not quite explicit, but much clearer now than before), the purpose of the obelisks, something of the history that made this world such a hostile place, and the underlying nature of orogeny.

The last might be a touch disappointing to some readers (I admit it was a touch disappointing to me). There are enough glimmers of forgotten technology and alternative explanations that I was wondering if Jemisin was setting up a quasi-technological explanation for orogeny. This book makes it firmly clear that she's not: this is a fantasy, and it involves magic. I have a soft spot in my heart for apparent magic that's some form of technology, so I was a bit sad, but I do appreciate the clarity. The Obelisk Gate is far more open with details and underlying systems (largely because Essun is learning more), which provides a lot of meat for the reader to dig into and understand. And it remains a magitech world that creates artifacts with that magic and uses them (or, more accurately, used them) to build advanced civilizations. I still see some potential pitfalls for the third book, depending on how Jemisin reconciles this background with one quasi-spiritual force she's introduced, but the world building has been so good that I have high hopes those pitfalls will be avoided.

The world-building is not the best part of this book, though. That's the characters, and specifically the characters' emotions. Jemisin manages the feat of both giving protagonists enough agency that the story doesn't feel helpless while still capturing the submerged rage and cautious suspicion that develops when the world is not on your side. As with the first book of this series, Jemisin captures the nuances, variations, and consequences of anger in a way that makes most of fiction feel shallow.

I realized, while reading this book, that so many action-oriented and plot-driven novels show anger in only two ways, which I'll call "HULK SMASH!" and "dark side" anger. The first is the righteous anger when the protagonist has finally had enough, taps some heretofore unknown reservoir of power, and brings the hurt to people who greatly deserved it. The second is the Star Wars cliche: anger that leads to hate and suffering, which the protagonist has to learn to control and the villain gives into. I hadn't realized how rarely one sees any other type of anger until Jemisin so vividly showed me the vast range of human reaction that this dichotomy leaves out.

The most obvious missing piece is that both of those modes of anger are active and empowered. Both are the anger of someone who can change the world. The argument between them is whether anger changes the world in a good way or a bad way, but the ability of the angry person to act on that anger and for that anger to be respected in some way by the world is left unquestioned. One might, rarely, see helpless anger, but it's usually just the build-up to a "HULK SMASH!" moment (or, sometimes, leads to a depressing sort of futility that makes me not want to read the book at all).

The Obelisk Gate felt like a vast opening-up of emotional depth that has a more complicated relationship to power: hard-earned bitterness that brings necessary caution, angry cynicism that's sometimes wrong but sometimes right, controlled anger, anger redirected as energy into other actions, anger that flares and subsides but doesn't disappear. Anger that one has to live with, and work around, and understand, instead of getting an easy catharsis. Anger with tradeoffs and sacrifices that the character makes consciously, affected by emotion but not driven by it. There is a moment in this book where one character experiences anger as an overwhelming wave of tiredness, a sharp realization that they're just so utterly done with being angry all the time, where the emotion suddenly shifts into something more introspective. It was a beautifully-captured moment of character depth that I don't remember seeing in another book.

This may sound like it would be depressing and exhausting to read, but at least for me it wasn't at all. I didn't feel like I was drowning in negative emotions — largely, I think, because Jemisin is so good at giving her characters agency without having the world give it to them by default. The protagonists are self-aware. They know what they're angry about, they know when anger can be useful and when it isn't, and they know how to guide it and live with it. It feels more empowering because it has to be fought for, carved out of a hostile world, earned with knowledge and practice and stubborn determination. Particularly in Essun, Jemisin is writing an adult whose life is full of joys and miseries, who doesn't forget her emotions but also isn't controlled by them, and who doesn't have the luxury of either being swept away by anger or reaching some zen state of unperturbed calm.

I think one key to how Jemisin pulls this off is the second-person perspective used for Essun's part of the book (and carried over into the other strand, which has the same narrator but a different perspective since this story is being told to Essun). That's another surprise, since normally this style strikes me as affected and artificial, but here it serves the vital purpose of giving the reader a bit of additional distance from Essun's emotions. Following an emotionally calmer retelling of someone else's perspective on Essun made it easier to admire what Jemisin is doing with the nuances of anger without getting too caught up in it.

It helps considerably that the second-person perspective here has a solid in-story justification (not explicitly explained here, but reasonably obvious by the end of the book), and is not simply a gimmick. The answers to who is telling this story and why they're telling it to a protagonist inside the story are important, intriguing, and relevant.

This series is doing something very special, and I'm glad I stuck to it through the confusing and difficult parts in the first book. There's a reason why every book in it was nominated for the Hugo and The Obelisk Gate won in 2017 (and The Fifth Season in 2016). Despite being the middle book of a trilogy, and therefore still leaving unresolved questions, this book was even better than The Fifth Season, which already set a high bar. This is very skillful and very original work and well worth the investment of time (and emotion).

Followed by The Stone Sky.

Rating: 9 out of 10

05 June, 2018 03:22AM

hackergotchi for Norbert Preining

Norbert Preining

Hyper Natural Deduction

After quite some years of research, the paper on Hyper Natural Deduction by my colleague Arnold Beckmann and myself has finally been published in the Journal of Logic and Computation. This paper was the difficult but necessary first step in our program to develop a Curry-Howard style correspondence between standard Gödel logic (and its Hypersequent calculus) and some kind of parallel computation.

The results of this article were first announced at the LICS (Logic in Computer Science) conference in 2015, but the current version is much more intuitive due to the switch to an inductive definition, the use of a graph representation for proofs, and finally the fix of a serious error. The abstract of the current article reads:

We introduce a system of Hyper Natural Deduction for Gödel Logic as an extension of Gentzen’s system of Natural Deduction. A deduction in this system consists of a finite set of derivations which uses the typical rules of Natural Deduction, plus additional rules providing means for communication between derivations. We show that our system is sound and complete for infinite-valued propositional Gödel Logic, by giving translations to and from Avron’s Hypersequent Calculus. We provide conversions for normalization extending usual conversions for Natural Deduction and prove the existence of normal forms for Hyper Natural Deduction for Gödel Logic. We show that normal deductions satisfy the subformula property.

The article (preprint version) by itself is rather long (around 70 pages including the technical appendix), but for those interested, the first 20 pages give a nice introduction and the inductive definition of our system, which suffices for building upon this work. The rest of the paper is dedicated to an extensional definition – not an inductive definition but one via clearly defined properties of the final object – and the proof of normalization.

The starting point of our investigations were Arnon Avron's comments on parallel computations and communication when he introduced the Hypersequent calculus (Hypersequents, Logical Consequence and Intermediate Logics for Concurrency, Ann.Math.Art.Int. 4 (1991) 225-248):

The second, deeper objective of this paper is to contribute towards a better understanding of the notion of logical consequence in general, and especially its possible relations with parallel computations.

We believe that these logics […] could serve as bases for parallel λ-calculi.

The name “communication rule” hints, of course, at a certain intuitive interpretation that we have of it as corresponding to the idea of exchanging information between two multiprocesses: […]

In working towards a Curry-Howard (CH) correspondence between Gödel logics and some kind of process calculus, we are guided by the original path, as laid out in the above graphic: starting from Intuitionistic Logic (IL) and its sequent calculus (LJ), a natural deduction system (ND) provided the link to the λ-calculus. We started from Gödel logics (GL) and their Hypersequent calculus (HLK) and in this article developed a Hyper Natural Deduction with similar properties as the original Natural Deduction system.

Curry-Howard correspondences provide deep conceptual links between formal proofs and computational programs. A whole range of such CH correspondences have been identified and explored. The most fundamental one is between the natural deduction proof formalism for intuitionistic logic and a foundational programming language called the lambda calculus. This CH correspondence interprets formulas in proofs as types in programs, and proof transformations like cut-reduction as computation steps like beta-reduction in the lambda calculus. These insights have led to applications of logical tools to programming language technology, and the development of programming languages like CAML and of proof assistants like Coq.

CH correspondences are well established for sequential programming, but are far less clear for parallel programming. Current approaches to establish such links for parallel programming always start from established models of parallel programming like process algebra (CSP, CCS, pi-calculus) and define a related logical formalism. For example the linear logic proof formalism is inspired by the pi-calculus process algebra. Although some links between linear logic and pi-calculus have been established, a deep, inspiring connection is missing. Another problem is that logical formalisms established in this way often lack a clear semantics which is independent of the related computational model on which they are based. Thus, despite years of intense research on this topic, we are far from having a clear and useful answer which leads to strong applications in programming language technology as we have seen for the fundamental CH correspondence for sequential programming.

Although it was a long and tiring path to the current status, it is only the beginning.

05 June, 2018 12:40AM by Norbert Preining

June 04, 2018

Shashank Kumar

Google Summer of Code 2018 with Debian - Week 3

By the third week of GSoC it already felt like part of my daily schedule. Daily updates to mentors, reviews and evaluations on merge/pull requests, and a constant learning process kept my schedule full of adrenaline. Here's what I worked on!

How A Project Is Made with Sanyam Khurana

Building an idea into a project seems like a lot of work and excitement, right? You can do all sorts of crazy stuff with your code to make it as amazing as possible, and use all sorts of cool tools in the hope of making something out of it at the end. But this is where the problem lies: diving into the sea of amusement and uncertainty never promises a good ending. And hence, my mentor Sanyam Khurana and I sat down for my intervention, in the hope of structuring the tasks for good. And this is how a project begins. Sanyam, with his experience in Open Source as well as industry, taught me the importance of dividing the work into tasks which are atomic and which clearly define, in plain English, what we are trying to achieve. For example, when you are trying to make a blogging website, you don't create a pull request with all of the functionality needed for the blog. First, think about the atomic tasks which can be done independently. Now create a series of these tasks (we call them tickets/issues/features). So, you have a ticket for, say, setting up a Pelican blog. Another for creating a theme for your blog. Another for adding analytics to your blog, and so on.

Now, you can also create boards or tables with columns which define the state each of these tasks is in. A task may be in the development, testing or review phase. This makes it easier to visualise what needs to be done, what has been done already and which tasks should currently be in focus. This methodology, in a broader sense and within a proper framework and with discipline in action, is known as Agile Software Development.

Dividing Project into Tasks

After learning much about how to proceed, I sketched out the way in which I can separate out the atomic features needed for the project. We're using Debian hosted Redmine for our project management and I started jotting down the issues, to begin with. Here are the issues which shape the beginning of the project.

  • Create Design Guideline - The first issue in order to create a reference GUI design guideline for the application.
  • Design GUI for Sign Up - Design mockup following the guideline to describe how Sign Up module should look like on the GUI.
  • Design GUI for Sign In - Design mockup following the guideline to describe how Sign In module should look like on the GUI.
  • Create Git repository for the project - Project mentor Daniel created this issue as the first step which marks the beginning of the project.
  • Initializing skeleton KIVY application - After a dedicated repository has been created for our project, a KIVY application has to be set up, which should also include tests, documentation, and a changelog.
  • Create SignUp Feature - After the skeleton is setup, sign up modules can be implemented which should present a GUI to the user in order to create the account to access the application. This screen should be the first interaction with the user after they run the application for the first time.
  • Create SignIn Feature - If the user is already signed up for an account on the application, this screen will be the medium with which they can Sign In with the credentials.
  • Add a license to Project Repository - Being an open source project, picking a license is a fairly elaborate process where we also have to look at all the dependencies our application has and other parameters. Hence, this issue is more of a discussion which will conclude by adding a License file to the project repository.

These were some of the key issues which came up after my discussion with Sanyam (except creating the git repo, which Daniel kickstarted). These issues were enough to begin with, and as we progress we can create more issues on Redmine. During the first couple of weeks of GSoC I had already completed the first 3 design issues, and I also wrote a blog post explaining my process and the outcome. So, for the third week, I started with initializing the skeleton KIVY application.

The First Merge Request

KIVY APP

Don't be confused if you are a Github native: since we are using the Debian-hosted Gitlab (called Salsa), it has Merge Requests in place of Pull Requests.

The issue which I was trying to solve in my first Merge Request was Initializing skeleton KIVY application. It was just to create a boilerplate from scratch so that development from now on would be smooth. I set out to achieve the following things in my Merge Request

  • Add a KIVY application which can create a sample window with sample text on it to showcase that KIVY is working just fine
  • Create the project structure to fit documentation, ui, tests and modules
  • Add pipenv support for virtual environment and dependency management
  • Integrate pylint to test Python code for PEP8 compliance
  • Integrate pytest and write tests for unit and integration testing
  • Add Gitlab CI support
  • Add a README.md file and write a general description of the project and all its components
  • Add documentation for end users to help them easily run the application and understand all its features
  • Add documentation for developers to help them build the application from source
  • Add documentation for contributors to share some of the best practices while contributing to this application

Here's the Merge Request which resulted in all of the above additions to the project. It was a lot of pain getting CI to work for the first time, but once you get a green tick, you know what makes CI tick. Throughout my development process Sanyam helped me with reviews, and the Merge Request finally got merged into the repository by Daniel.
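For anyone who wants to reproduce the CI checks locally before pushing, the loop boils down to something like the sketch below. The module and test paths are assumptions for illustration, not necessarily the exact names used in the repository.

# install runtime and development dependencies into a pipenv-managed virtualenv
pipenv install --dev

# check the code for PEP8 compliance and common mistakes (module name is an assumption)
pipenv run pylint new_contributor_wizard

# run the unit and integration tests
pipenv run pytest tests/

Getting the same steps to pass in .gitlab-ci.yml is then mostly a matter of running them inside a Python Docker image.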

Conclusion

This week kickstarted the main development process for New Contributor Wizard and gave me a chance to learn about project/software management. I will be creating more issues and will share what I'm working on in next week's GSoC blog.

04 June, 2018 06:30PM by Shashank Kumar

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in May 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Debian LTS

This was my twenty-seventh month as a paid contributor and I have been paid to work 24.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 21.05.2018 until 27.05.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in glusterfs, tomcat7, zookeeper, imagemagick, strongswan, radare2, batik, mupdf and graphicsmagick.
  • I drafted an announcement for Wheezy’s EOL that was later released as DLA-1393-1 and as official Debian news.
  • DLA-1384-1. I reviewed and uploaded xdg-utils for Abhijith PA.
  • DLA-1381-1. Issued a security update for imagemagick/Wheezy fixing 3 CVEs.
  • DLA-1385-1. Issued a security update for batik/Wheezy fixing 1 CVE.
  • Prepared a backport of Tomcat 7.0.88 for Jessie which fixes all open CVEs (5) in Jessie. From now on we intend to provide the latest upstream releases for a specific Tomcat branch. We hope this will improve the user experience. It also allows Debian users to get more help from Tomcat developers directly because there is no significant Debian-specific delta anymore. The update is pending review by the security team.
  • Prepared a security update for graphicsmagick fixing 19 CVEs. I also investigated CVE-2017-10794 and CVE-2017-17913 and came to the conclusion that the Jessie version is not affected. I merged and reviewed another update by László Böszörményi. At the moment the update is pending review by the security team. Together these updates will fix the most important issues in Graphicsmagick/Jessie.
  • DSA-4214-1. Prepared a security update for zookeeper fixing 1 CVE.
  • DSA-4215-1. Prepared a security update for batik/Jessie fixing 1 CVE.
  • Prepared a security update for memcached in Jessie and Stretch fixing 2 CVEs. This update is also pending review by the security team.
  • Finished the security update for JRuby (Jessie and Stretch) fixing 5 and 7 CVEs respectively. However we discovered that JRuby fails to build from source in Jessie and a fix or workaround will most likely break reverse-dependencies. Thus we have decided to mark JRuby as end-of-life in Jessie, also because the version is already eight years old.

Misc

  • I reviewed and sponsored xtrkcad for Jörg Frings-Fürst.

Thanks for reading and see you next time.

04 June, 2018 05:46PM by Apo

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in May 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

distro-tracker

With the disappearance of many alioth mailing lists, I took the time to finish proper support of a team email in distro-tracker. There’s no official documentation yet but it’s already used by a bunch of teams. If you look at the pkg-security team on tracker.debian.org it has used “pkg-security” as its unique identifier and it has thus inherited from [email protected] as an email address that can be used in the Maintainer field (and it can be used to communicate between all team subscribers that have the contact keyword enabled on their team subscription).

I also dealt with a few merge requests:

I also filed ticket #7283 on rt.debian.org to have local_part_suffix = “+” for tracker.debian.org’s exim config. This will let us bounce emails sent to invalid email addresses. Right now all emails are delivered in a Maildir, valid messages are processed and the rest is silently discarded. At the time of processing, it’s too late to send bounces back to the sender.

pkg-security team

This month my activity is limited to sponsorship of new packages:

  • grokevt_0.5.0-2.dsc fixing one RC bug (missing build-dep on python3-distutils)
  • dnsrecon_0.8.13-1.dsc (new upstream release)
  • recon-ng_4.9.3-1.dsc (new upstream release)
  • wifite_2.1.0-1.dsc (new upstream release)
  • aircrack-ng (add patch from upstream git)

I also interacted multiple times with Samuel Henrique who started to work on the Google Summer of Code porting Kali packages to Debian. He mainly worked on getting some overview of the work to do.

Misc Debian work

I reviewed multiple changes submitted by Hideki Yamane on debootstrap (on the debian-boot mailing list, and also in MR 2 and MR 3). I reviewed and merged some changes on live-boot too.

Extended LTS

I spent a good part of the month dealing with the setup of the Wheezy Extended LTS program. Given the lack of interest of the various Debian teams, it’s hosted on a Freexian server and not on any debian.org infrastructure. But the principle is basically the same as Debian LTS except that the package list is reduced to the set of packages used by Extended LTS sponsors. But the updates prepared in this project are freely available for all.

It’s not too late to join the program, you can always contact me at [email protected] with a source package list that you’d like to see supported and I’ll send you back an estimation of the cost.

Thanks to an initial contribution from Credativ, Emilio Pozuelo Monfort has prepared a merge request making it easy for third parties to host their own security tracker that piggy-backs on Debian's. For Extended LTS, we thus have our own tracker.

Thanks

See you next month for a new summary of my activities.


04 June, 2018 04:56PM by Raphaël Hertzog

Andrew Cater

Colour me untrusting

... but a leopard doesn't change its spots. My GitHub account - opened eight years ago and not used - is now deleted. [email protected] should not be associated with me in any way, shape or form from here on in.

04 June, 2018 04:33PM by Andrew Cater ([email protected])

hackergotchi for Daniel Pocock

Daniel Pocock

Free software, GSoC and ham radio in Kosovo

After the excitement of OSCAL in Tirana, I travelled up to Prishtina, Kosovo, with some of Debian's new GSoC students. We don't always have so many students participating in the same location. Being able to meet with all of them for a coffee each morning gave some interesting insights into the challenges people face in these projects and things that communities can do to help new contributors.

On the evening of 23 May, I attended a meeting at the Prishtina hackerspace where a wide range of topics, including future events, were discussed. There are many people who would like to repeat the successful Mini DebConf and Fedora Women's Day events from 2017. A wiki page has been created for planning but no date has been confirmed yet.

On the following evening, 24 May, we had a joint meeting with SHRAK, the ham radio society of Kosovo, at the hackerspace. Acting director Vjollca Caka gave an introduction to the state of ham radio in the country and then we set up a joint demonstration using the equipment I brought for OSCAL.

On my final night in Prishtina, we had a small gathering for dinner: Debian's three GSoC students, Elena, Enkelena and Diellza, Renata Gegaj, who completed Outreachy with the GNOME community and Qendresa Hoti, one of the organizers of last year's very successful hackathon for women in Prizren.

Promoting free software at Doku:tech, Prishtina, 9-10 June 2018

One of the largest technology events in Kosovo, Doku:tech, will take place on 9-10 June. It is not too late for people from other free software communities to get involved, please contact the FLOSSK or Open Labs communities in the region if you have questions about how you can participate. A number of budget airlines, including WizzAir and Easyjet, now have regular flights to Kosovo and many larger free software organizations will consider requests for a travel grant.

04 June, 2018 08:06AM by Daniel.Pocock

Russ Allbery

rra-c-util 7.2

rra-c-util is my collection of portability and utility code that I reuse in all the C and Perl projects I maintain.

Most of the changes in this release are Autoconf macro improvements prompted by Julien ÉLIE. This release incorporates his work on RRA_PROG_PYTHON, RRA_LIB_PYTHON, and RRA_PYTHON_MODULE macros, which support both Python 2 and Python 3. It also improves the RRA_PROG_PERL macro to make PERL a substitution variable and to check that building against libperl actually works. Finally, RRA_LIB_BDB, RRA_LIB_OPENSSL, RRA_LIB_SASL, and RRA_LIB_ZLIB now check that the headers for the library are found as well as the library itself (based on Julien's work in INN).

The docs/urls test, which was always misnamed, is now style/obsolete-strings, since its role is to check for obsolete patterns in my code (old URLs, that sort of thing). It now checks for my old RRA_MAINTAINER_TESTS environment variable, which I replaced with the Perl Lancaster Consensus environment variables a long time ago.

This release also fixes a few more minor issues with test code and the script to update the version of all Perl modules in a package.

You can get the latest release from the rra-c-util distribution page.

04 June, 2018 02:52AM

wallet 1.4

wallet is a secret management system that I developed at Stanford, primarily to distribute keytab management. As mentioned in an earlier post, I'm not entirely sure it has significant advantages over Vault, but it does handle Kerberos natively and we're still using it for some things, so I'm still maintaining it.

This release incorporates a bunch of improvements to the experimental support for managing keytabs for Active Directory principals, all contributed by Bill MacAllister and Dropbox. Anyone using the previous experimental Active Directory support should read through the configuration options, since quite a lot has changed (for the better).

Also fixed in this release are some stray strlcpy and strlcat references that were breaking systems that include them in libc, better krb5.conf configuration handling, better support for Perl in non-standard locations, and a bunch of updates and modernization to the build and test frameworks.

You can get the latest release from the wallet distribution page.

04 June, 2018 02:15AM

June 03, 2018

Free software log (May 2018)

The wonders of a week of vacation that was spent mostly working on free software! The headline releases were remctl 3.15, which fixes a long-standing correctness bug on the server and adds more protocol validation and far better valgrind support, and podlators 4.11, which fixes a bunch of long-standing bugs in Pod::Text and its subclasses.

In support of those releases, I also released new versions of my three major development infrastructure packages:

On the Debian front, I realized that I had intended to donate libnet-duo-perl to the Debian Perl team but never finished uploading the package I had prepared (and even signed). I merged that with some other pending changes in Git and actually uploaded it. (I'm still hanging on to maintenance of the upstream Net::Duo Perl module because I'm kicking around the idea of using Duo on a small scale for some personal stuff, although at the moment I'm not using the module at all and therefore am not making changes to it.)

I also finally started working on wallet again, although I'm of two minds about the future of that package. It needs a ton of work — the Perl style and general backend approach is all wrong, and I've learned far better ways to do equivalent things since. And one could make a pretty solid argument that Vault does essentially the same thing, has a lot more resources behind it, and has a ton of features that I haven't implemented or may never implement. I think I still like my ACL model better, and of course there's the Kerberos support (which is probably superior to Vault), but I haven't looked at Vault closely enough to be sure and it may be that it's better in those areas as well.

I don't use wallet for my personal stuff, but we still do use it in a few places at work. I kind of want to overhaul the package and fix it, since I like the concept, but in the broader scheme of things it's probably a "waste" of my time to do this.

Free software seems full of challenges like this. I'll at least put out another release, and then probably defer making a decision for a while longer.

03 June, 2018 05:08PM

David Kalnischkies

APT for package self-builders

One of the main jobs of a package manager like apt is to download packages (ideally in a secure way) from a repository so that they can be processed further – usually installed. FSVO "normal user" this is all there ever is to it in terms of getting packages.

Package maintainers and other users rolling their own binary packages on the other hand tend to have the packages they want to install and/or play-test with already on their disk. For them, it seems like an additional hassle to push their packages to a (temporary) repository, so apt can download data from there again… for the love of supercow, there must be a better way… right?

For the sake of a common start lets say I want to modify (and later upload) hello, so I acquire the source via apt source hello. Friendly as apt is it ran dpkg-source for me already, so I have (at the time of writing) the files hello_2.10.orig.tar.gz, hello_2.10-1.debian.tar.xz and hello_2.10-1.dsc in my working directory as well as the extracted tarballs in the subdirectory hello-2.10.

Anything slightly more complex than hello probably has a bunch of build-dependencies, so what I should do next is install build-dependencies: Everyone knows apt build-dep hello and that works in this case, but given that you have a dsc file we could just as well use that and free us from our reliance on the online repository: apt build-dep ./hello_2.10-1.dsc. We still depend on having a source package built previously this way… but wait! We have the source tree and this includes the debian/control file so… apt build-dep ./hello-2.10 – the latter is especially handy if you happen to add additional build-dependencies while hacking on your hello.

So now that we can build the package, have fun hacking on it! You probably have your preferred way of building packages, but for simplicity let's just continue using apt for now: apt source hello -b. If all worked out well we should now (if you are on an amd64 machine) also have a hello_2.10-1_amd64.changes file as well as two binary packages named hello_2.10-1_amd64.deb and hello-dbgsym_2.10-1_amd64.deb (you will also get a hello_2.10-1_amd64.buildinfo which you can hang onto, but apt currently has no way of making use of it, so I ignore it for the moment).

Everyone should know by now that you can install a deb via apt install ./hello_2.10-1_amd64.deb but that quickly gets boring with increasing numbers, especially if the packages you want to install have tight relations. So feel free to install all debs included in a changes file with apt install ./hello_2.10-1_amd64.changes.

So far so good, but that all might be a bit much. What about installing only some debs of a changes file? Here it gets interesting, as if you play your cards right you can test upgrades this way as well. So let's add a temporary source of metadata (and packages) – but before you get your preferred repository builder set up and your text editor ready: you just have to add an option to your apt call. Coming back to our last example of installing packages via a changes file, let's say we just want to install hello and not hello-dbgsym: apt install --with-source ./hello_2.10-1_amd64.changes hello.

That will install hello just fine, but if you happen to have hello installed already… apt is going to tell you it already has the latest version installed. You can look at this situation e.g. with apt policy --with-source ./hello_2.10-1_amd64.changes hello. See, the Debian repository ships a binary-only rebuild as 2.10-1+b1 at the moment, which is a higher version than the one we have locally built. Your usual apt knowledge will tell you that you can force apt to install your hello with apt install --with-source ./hello_2.10-1_amd64.changes hello=2.10-1, but that isn't why I went down this path: as you have seen, metadata inserted via --with-source participates as usual in the candidate selection process, so you can actually perform upgrade tests this way: apt upgrade --with-source ./hello_2.10-1_amd64.changes (or full-upgrade).

The hello example reaches its limits here, but if you consider time travel a possibility we will jump back into a time in which hello-debhelper existed. To be exact: Right to the moment its maintainer wanted to rename hello-debhelper to hello. Most people consider package renames hard. You need to get file overrides and maintainer scripts just right, but at least with figuring out the right dependency relations apt can help you a bit. How you can feed in changes files we have already seen, so let's imagine you deal with multiple packages from different sources – or just want to iterate quickly! In that case you want to create a Packages file which you would normally find in a repository. You can write those by hand of course, but it's probably easier to just call dpkg-scanpackages . > Packages (if you have dpkg-dev installed) or apt-ftparchive packages . > Packages (available via apt-utils) – they behave slightly differently, but for our purposes it's all the same. Either way, ending up with a Packages file nets you another file you can feed to --with-source (sorry, you can't install a Packages file). This also allows you to edit the dependency relations of multiple packages in a single file without constant "fiddle and build" loops of the included packages – just make sure to run as non-root & in simulation mode (-s) only or you will make dpkg (and in turn apt) very sad.

Of course upgrade testing is only complete if you can easily influence what is installed on your system before you try to upgrade. You can with apt install --with-source ./Packages hello=2.10-1 -s -o Dir::state::status=/dev/null (it will look like nothing is installed) or by feeding a self-crafted file (or some compressed /var/backups/dpkg.status file from days past), but to be fair that gets a bit fiddly, so at some point it's probably easier to write an integration test for apt – those are just little shell scripts in which (nearly) everything is possible – but that might be the topic of another post some day.
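Putting the pieces together, a typical "fiddle and build" iteration with such a local index could look like the following sketch – hello is just the running example, and the version strings are whatever your build produced:

# rebuild the binary packages after editing debian/control & friends
(cd hello-2.10 && dpkg-buildpackage -us -uc -b)

# regenerate the local metadata from the debs in the current directory
apt-ftparchive packages . > Packages

# simulate installing a specific version from the local index…
apt install --with-source ./Packages hello=2.10-1 -s

# …or simulate an upgrade test against it
apt upgrade --with-source ./Packages -s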

Q: How long do I have to wait to use this?

A: I think I have implemented the later parts of this in the 1.3 series; the earlier parts have been in since 1.0. Debian stable (stretch) has the 1.4 series, so… you can use it now. Otherwise use your preferred package manager to upgrade your system to the latest stable release. I hope it is clear which package manager that should be… 😉︎

Q: Does this only work with apt?

A: This works just the same with apt-cache (where the --with-source option is documented in the manpage btw) and apt-get. Everything else using libapt (so aptitude included) does not at the moment, but potentially can and probably will in the future. If you feel like typing a little bit more you can at least replicate the --with-source examples by using the underlying generic option: aptitude install -s hello-dbgsym -o APT::Sources::With::=./hello_2.10-1_amd64.changes (That is all you really need anyhow, the rest is syntactic sugar). Before you start running off to report bugs: Check before reporting duplicates (and don't forget to attach patches)!

Q: Why are you always typing ./packages.deb?

A: With the --with-source option the ./ is not needed actually, but for consistency I wrote it everywhere. In the first examples we need it as apt needs to know somehow if the string it sees here is a package name, a glob, a regex, a task, … or a filename. The string "package.deb" could be a regex after all. And any string could be a directory name… Combine this with picking up files and directories in the current directory and you would have a potential security risk looming here if you start apt in /tmp (No worries, we hadn't realized this from the start either).

Q: But, but, but … security anyone?!?

The files are on your disk and apt expects that you have verified that they aren't some system-devouring malware. How should apt verify that, after all, as there is no trust path? So don't think that downloading a random deb suddenly became a safe thing to do because you used apt instead of dpkg -i. If the dsc or changes files you use are signed and you verified them, you can rest assured that apt is verifying that the hashes mentioned in those files apply to the files they index. That doesn't help you at all if the files are unsigned or other users are able to modify the files after you verified them, but apt will check the hashes in those cases anyhow.

Q: I ❤︎ u, 🍑︎ tl;dr

Just 🏃︎ those, you might 😍︎ some of them:

apt source hello
apt build-dep ./hello-*/ -s
apt source -b hello
apt install ./hello_*.deb -s
apt install ./hello_*.changes -s
apt install --with-source ./hello_*.changes hello -s
apt-ftparchive packages . > ./Packages
apt upgrade --with-source ./Packages -s

P.S.: If you expected this post to be published sometime in the last two months… welcome to the club! I thought I would do it, too. Let's see how long I will need for the next one… I have it partly written already, but that was the case for this one as well… we will see.

03 June, 2018 12:01PM

Michael Stapelberg

Looking for a new Raspberry Pi image maintainer

(Cross-posting this message I sent to pkg-raspi-maintainers for broader visibility.)

I started building Raspberry Pi images because I thought there should be an easy, official way to install Debian on the Raspberry Pi.

I still believe that, but I’m not actually using Debian on any of my Raspberry Pis anymore¹, so my personal motivation to do any work on the images is gone.

On top of that, I realize that my commitments exceed my spare time capacity, so I need to get rid of responsibilities.

Therefore, I’m looking for someone to take up maintainership of the Raspberry Pi images. Numerous people have reached out to me with thank you notes and questions, so I think the user interest is there. Also, I’ll be happy to answer any questions that you might have and that I can easily answer. Please reply here (or in private) if you’re interested.

If I can’t find someone within the next 7 days, I’ll put up an announcement message in the raspi3-image-spec README, wiki page, and my blog posts, stating that the image is unmaintained and looking for a new maintainer.

Thanks for your understanding,

① just in case you’re curious, I’m now running cross-compiled Go programs directly under a Linux kernel and minimal userland, see https://gokrazy.org/

03 June, 2018 06:43AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Let's migrate away from GitHub

As many of you heard today, Microsoft is acquiring GitHub. What this means for the future of GitHub is not yet clear, but the folks at Gitlab think Microsoft's end goal is to integrate GitHub in their Azure empire. To me, this makes a lot of sense.

Even though I still reluctantly use GitHub for some projects, I migrated all my personal repositories to Gitlab instances a while ago¹. Now is the time for you to do the same and ditch GitHub.

Microsoft loven't Linux

Some people might be fine with Microsoft's takeover, but to me it's the straw that breaks the camel's back. For a few years now, MS has been running a large marketing campaign on how they love Linux and suddenly decided to embrace Free Software in all of its forms. More like MS BS to me.

Let us take a moment to remind ourselves that:

  • Windows is still a huge proprietary monster that rips billions of people from their privacy and rights every day.
  • Microsoft is known for spreading FUD about "the dangers" of Free Software in order to keep governments and schools from dropping Windows in favor of FOSS.
  • To secure their monopoly, Microsoft hooks up kids on Windows by giving out "free" licences to primary schools around the world. Drug dealers use the same tactics and give out free samples to secure new clients.
  • Microsoft's Azure platform - even though it can run Linux VMs - is still a giant proprietary hypervisor.

I know moving git repositories around can seem like a pain in the ass, but the folks at Gitlab are riding the wave of people leaving GitHub and made the migration easy by providing a GitHub importer.

If you don't want to use Gitlab's main instance (gitlab.com), here are two other alternative instances you can use for Free Software projects:

Friends don't let friends use GitHub anymore.


  1. Gitlab is pretty good, but it should not be viewed as a panacea: it's still an open-core product made by a for-profit enterprise that could one day be sold to a large corp like Oracle or Microsoft. 

  2. See the Salsa FAQ for more details. 

03 June, 2018 04:00AM by Louis-Philippe Véronneau

June 02, 2018

Elana Hashman

I'm hosting a small Debian BSP in Brooklyn

The time has come for NYC Debian folks to gather. I've bravely volunteered to host a local bug squashing party (or BSP) in late June.

Details

  • Venue: 61 Local: 61 Bergen St., Brooklyn, NY, USA
  • More about the venue: website, good vegetarian options available
  • Date: Sunday, June 24, 2018
  • Start: 3pm
  • End: 8pm or so
  • RSVP: Please RSVP! Click here

I'm an existing contributor, what should I work on?

The focus of this BSP is to give existing contributors some dedicated time to work on their projects. I don't have a specific outcome in mind. I do not plan on tagging bugs specifically for the BSP, but that shouldn't stop you from doing so if you want to.

Personally, I am going to spend some time on fixing the alternatives logic in the clojure and clojure1.8 packages.

If you don't really have a project you want to work on, but you're interested in helping mentor new contributors that show up, please get in touch.

I'm a new contributor and want to join but I have no idea what I'm doing!

At some point, that was all of us!

Even though this BSP is aimed at existing contributors, you are welcome to attend! We'll have a dedicated mentor available to coordinate and help out new contributors.

If you've never contributed to Debian before, I recommend you check out "How can you help Debian?" and the beginner's HOWTO for BSPs in advance of the BSP. I also wrote a tutorial and blog post on packaging that might help. Remember, you don't have to code or build packages to make valuable contributions!

See you there!

Happy hacking.

02 June, 2018 07:20PM by Elana Hashman

hackergotchi for Holger Levsen

Holger Levsen

20180602-lts-201805

My LTS work in May 2018

Organizing the MiniDebConf 2018 in Hamburg definitely took more time than planned, and then some things didn't work out as I had imagined, so I could only start working on LTS at the end of May, and then there was this Alioth2Salsa migration too… But at least I managed to get started working on LTS again \o/

I managed to spend 6.5h working on:

  • reviewing the list of open CVEs against tiff and tiff3 in wheezy
  • prepare tiff 4.0.2-6+deb7u21, test and upload to wheezy-security, fixing CVE-2017-11613 and CVE-2018-5784.

  • review procps 1:3.3.3-3+deb7u1 by Abhijith PA, spot an error, re-review, quick test and upload to wheezy-security, then re-upload after building with -sa :) This upload fixes CVE-2018-1122, CVE-2018-1123, CVE-2018-1124, CVE-2018-1125 and CVE-2018-1126.

  • write and release DLA-1390-1 and DLA-1301 for those two uploads.

I still need to mark CVE-2017-9815 as fixed in wheezy, as the fix for CVE-2017-9403 also fixes this issue.

02 June, 2018 05:55PM

Sylvain Beucler

Reproducible Windows builds

I'm working again on making reproducible .exe-s. I thought I'd share my process:

Pros:

  • End users get a bit-for-bit reproducible .exe, known not to contain trojans and auditable from source
  • Point releases can reuse the exact same build process and avoid introducing bugs

Steps:

  • Generate a source tarball (non reproducibly)
  • Debian Docker as a base, with fixed version + snapshot.debian.org sources.list
    • Dockerfile: install packaged dependencies and MXE(.cc) from a fixed Git revision
    • Dockerfile: compile MXE with SOURCE_DATE_EPOCH + fix-ups
  • Build my project in the container with SOURCE_DATE_EPOCH and check SHA256
  • Copy-on-release

Result:

git.savannah.gnu.org/gitweb/?p=freedink/dfarc.git;a=tree;f=autobuild/dfarc-w32-snapshot

Generate a source tarball (non reproducibly)

This is not reproducible due to using non-reproducible tools (gettext, automake tarballs, etc.) but it doesn't matter: only building from source needs to be reproducible, and the source is the tarball.

It would be better if the source tarball were perfectly reproducible, especially for large generated content (./configure, wxGlade-generated GUI source code...), but that can be a second step.

Debian Docker as a base

AFAIU the Debian Docker images are made by Debian developers but are in no way official images. That's a pity, and to be 100% safe I should start anew from debootstrap, but Docker is providing a very efficient framework to build images, notably with caching of every build step, immediate fresh containers, and a public image repository.

This means with a single:

sudo -g docker make

you get my project reproducibly built from scratch with nothing to set up at all.

I avoid using a :latest tag, since it will change, and also backports, since they can be updated anytime. Here I'm using stretch:9.4 and no backports.

Using snapshot.debian.org in sources.list makes sure the installed packaged dependencies won't change at the next build. For a dot release however (not for a rebuild), they should be updated in case there was a security fix that has an effect on the built software (rare, but it happens).

Last but not least, APT::Install-Recommends "false"; for better dependency control.
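Spelled out as shell commands (as run during the image build), the pinning amounts to roughly the following sketch – the snapshot timestamp is only an example and has to match whatever state you actually want to freeze:

# point apt at a fixed snapshot so later rebuilds see identical package versions
echo 'deb http://snapshot.debian.org/archive/debian/20180601T000000Z/ stretch main' > /etc/apt/sources.list

# Release files served by the snapshot archive are old, so their Valid-Until has expired
echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/80snapshot

# keep the dependency set minimal and predictable
echo 'APT::Install-Recommends "false";' > /etc/apt/apt.conf.d/80recommends

apt-get update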

MXE

mxe.cc is a compilation environment to get MinGW (GCC for Windows) and selected dependencies rebuilt unattended with a single make. Doing this manually would be tedious because every other day, upstream breaks MinGW cross-compilation, and debugging an hour-long build process takes ages. Been there, done that.

MXE has a reproducible-boosted binutils with a patch for SOURCE_DATE_EPOCH that avoids getting date-based and/or random build timestamps in the PE (.exe/.dll) files. It's also compiled with --enable-deterministic-archives to avoid timestamp issues in .a files (but no automatic ordering).

I set SOURCE_DATE_EPOCH to the fixed Git commit date and I run MXE's build.
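In shell terms that step is roughly the following; the MXE package list is purely illustrative and depends on what your project links against:

# use the committer date of the pinned MXE revision as the reference timestamp
cd mxe
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)

# build the cross toolchain plus the selected dependencies unattended
make MXE_TARGETS='i686-w64-mingw32.static' gcc wxwidgets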

This does not apply to GCC however, so I needed to e.g. patch a __DATE__ in wxWidgets.

In addition, libstdc++.a has a file ordering issue (said ordering surprisingly stays stable between a container and a host build, but varies when using a different computer with the same distros and tools versions). I hence re-archive libstdc++.a manually.
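The re-archiving itself is nothing fancy – roughly the following sketch, with the path to libstdc++.a depending on the MXE target directory:

# unpack the archive, then rebuild it in deterministic mode (D) with a sorted member list
mkdir tmp-ar && cd tmp-ar
ar x ../libstdc++.a
rm ../libstdc++.a
ar Dcr ../libstdc++.a $(ls *.o | LC_ALL=C sort)
cd .. && rm -r tmp-ar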

It's worth noting that PE files don't have issues with build paths (and varying BuildID-s - unlike ELF... T_T).

Again, for a dot release, it makes sense to update the MXE Git revision so as to catch security fixes, but at least I have the choice.

Build project

With this I can start a fresh Docker container and run the compilation process inside, as a non-privileged user just in case.

I set SOURCE_DATE_EPOCH to the release date at 00:00UTC, or the Git revision date for snapshots.

This rebuild framework is excluded from the source tarball, so the latter stays stable during build tuning. I see it as a post-release tool, hence not part of the release (just like distros packaging).

The generated .exe is statically compiled which helps getting a stable result (only the few needed parts of dependencies get included in the final executable).

Since MXE is not itself reproducible, differences may come from MXE itself, which may need fixes as explained above. This is annoying and hopefully will be easier once they ship GCC6. To debug I unzip the different .zip-s, upx -d my .exe-s, and run diffoscope.

I use various tricks (stable ordering, stable timestamping, metadata cleaning) to make the final .zip reproducible as well. Post-processing tools would be an alternative if they were fixed.
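For the curious, the zip normalisation boils down to something like this sketch (dist/ stands in for whatever directory holds the files that end up in the archive):

# clamp all file timestamps to the reference build date
find dist -exec touch --no-dereference -d "@$SOURCE_DATE_EPOCH" {} +

# feed the file list in a stable order, strip extra metadata (-X) and derive DOS times from UTC
find dist -type f | LC_ALL=C sort | TZ=UTC zip -q -X ../release.zip -@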

reprotest

Any process is moot if it can't be tested.

reprotest helps by running 2 successive compilations with varying factors (build path, file system ordering, etc.) and checking that we get the exact same binary. As a trade-off, I don't run it on the full build environment, just on the project itself. I plugged reprotest into the Docker container by running an sshd on the fly. I have another Makefile target to run reprotest on my host system where I also installed MXE, so I can compare results and sometimes find differences (e.g. due to using a different filesystem). In addition this is faster for debugging, since changing anything in the early Dockerfile steps otherwise means a full 1h rebuild.

Copy-on-release

At release time I make a copy of the directory that contains all the self-contained build scripts and the Dockerfile, and rename it after the new release version. I'll continue improving upon the reproducible build system in the 'snapshot' directory, but the versioned directory will stay as-is and can be used in the future to get the same bit-for-bit identical .exe anytime.

This is the technique I used in my Android Rebuilds project.

Other platforms

For now I don't control the build process for other platforms: distros have their own autobuilders, so does F-Droid. Their problem :P

I have plans to make reproducible GNU/Linux AppImage-based builds in the future though. I should be able to use a finer-grained, per-dependency process rather than the huge MXE-based chunk I currently do.

I hope this helps other projects provide reproducible binaries directly! Comments/suggestions welcome.

02 June, 2018 05:12PM

hackergotchi for Steve Kemp

Steve Kemp

A brief metric-update, and notes on golang-specific metrics

My previous post briefly described the setup of system-metric collection. (At least the server-side setup required to receive the metrics submitted by various clients.)

When it came to the clients I was complaining that collectd was too heavyweight, as installing it pulled in a ton of packages. A kind twitter user pointed out that you can get most of the stuff you need via the collectd-core package:

 # apt-get install collectd-core

I guess I should have known that! So for the moment that's what I'm using to submit metrics from my hosts. In the future I will spend more time investigating telegraf, and other "modern" solutions.

Still, with collectd-core installed we've got the host-system metrics pretty well covered. Some other things I've put together also support metric submission, so that's good.
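For completeness, the only client-side configuration needed is to point collectd's network plugin at the central host. A minimal sketch, assuming the server accepts collectd's native network protocol on the default port 25826 (the hostname is just a placeholder, and your server side may instead use something like the graphite write plugin):

cat > /etc/collectd/collectd.conf <<'EOF'
# collect a few basic host metrics and ship them to the central server
LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin network

<Plugin network>
  Server "metrics.example.com" "25826"
</Plugin>
EOF

service collectd restart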

I hacked up a quick package for automatically submitting metrics to a remote server, specifically for golang applications. To use it simply add an import to your golang application:

  import (
    ..
    _ "github.com/skx/golang-metrics"
    ..
  )

Add the import, and rebuild your application and that's it! Configuration is carried out solely via environmental variables, and the only one you need to specify is the end-point for your metrics host:

$ METRICS=metrics.example.com:2003 ./foo

Now your application will be running as usual and will also be submitting metrics to your central host every 10 seconds or so. Metrics include the number of running goroutines, application-uptime, and memory/cpu stats.

I've added a JSON file to import as a grafana dashboard, and you can see an example of what it looks like there too.

02 June, 2018 03:30AM

June 01, 2018

Vincent Sanders

You can't make a silk purse from a sow's ear

Pile of network switches
I needed a small Ethernet network switch in my office so went to my pile of devices and selected an old Dell PowerConnect 2724 from the stack. This seemed the best candidate as the others were intended for data centre use and known to be very noisy.

I installed it into place and immediately ran into a problem: the switch was not quiet enough; in fact I could not concentrate at all with it turned on.

Graph of quiet office sound pressure
Believing I could not fix what I could not measure I decided to download an app for my phone that measured raw sound pressure. This would allow me to empirically examine what effects any changes to the switch made.

The app is not calibrated so can only be used to examine relative changes so a reference level is required. I took a reading in the office with the switch turned off but all other equipment operating to obtain a baseline measurement.

All measurements were made with the switch and phone in the same positions about a meter apart. The resulting yellow curves are the average for a thirty second sample period with the peak values in red.

The peak between 50Hz and 500Hz initially surprised me but after researching how a human perceives sound it appears we must apply the equal loudness curve to correct the measurement.

Graph of office sound pressure with switch turned on
With this in mind we can concentrate on the data between 200Hz and 6000Hz as the part of the frequency spectrum with the most impact. So in the reference sample we can see that the audio pressure is around the -105dB level.

I turned the switch on and performed a second measurement, which showed a level around -75dB with peaks at the -50dB level. This is a difference of some 30dB; if we assume our reference is a "calm room" at 25dB(SPL), then the switch is raising the ambient noise level to something similar to a "normal conversation" at 55dB(SPL).

Something had to be done if I were to keep using this device so I opened the switch to examine the possible sources of noise.

Dell PowerConnect 2724 with replacement Noctua fan
There was a single 40x40x20mm 5v high capacity sunon brand fan in the rear of the unit. I unplugged the fan and the noise level immediately returned to ambient indicating that all the noise was being produced by this single device, unfortunately the switch soon overheated without the cooling fan operating.

I thought the fan might be defective so purchased a high quality "quiet" NF-A4x20 replacement from Noctua. The fan has rubber mounting fixings to further reduce noise and I was hopeful this would solve the issue.

Graph of office sound pressure with modified switch turned on
The initial results were promising, with noise above 2000Hz largely being eliminated. However the way the switch enclosure was designed caused airflow to make sound which produced a level of around 40dB(SPL) between 200Hz and 2000Hz.

I had the switch in service for several weeks in this configuration; eventually the device proved impractical on several points:

  • The management interface was dreadful to use.
  • The network performance was not very good especially in trunk mode.
  • The lower frequency noise became a distraction for me in an otherwise quiet office.

In the end I purchased an 8-port Zyxel switch which is passively cooled, otherwise silent in operation, and has none of the other drawbacks.

From this experience I have learned some things:

  • Higher frequency noise (2000Hz and above) is much more difficult to ignore than other types of noise.
  • As I have become older my tolerance for equipment noise has decreased and it actively affects my concentration levels.
  • Some equipment has a design which means its audio performance cannot be improved sufficiently.
  • Measuring and interpreting noise sources is quite difficult.

01 June, 2018 12:27PM by Vincent Sanders ([email protected])

hackergotchi for Michal Čihař

Michal Čihař

Weblate 3.0

Weblate 3.0 has been released today. It contains a brand new access control module and 61 fixed issues.

Full list of changes:

  • Rewritten access control.
  • Several code cleanups that lead to moved and renamed modules.
  • New addon for automatic component discovery.
  • The import_project management command has now slightly different parameters.
  • Added basic support for Windows RC files.
  • New addon to store contributor names in PO file headers.
  • The per component hook scripts are removed, use addons instead.
  • Add support for collecting contributor agreements.
  • Access control changes are now tracked in history.
  • New addon to ensure all components in a project have same translations.
  • Support for more variables in commit message templates.
  • Add support for providing additional textual context.

If you are upgrading from older version, please follow our upgrading instructions, the upgrade is more complex this time.

You can find more information about Weblate on https://weblate.org; the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who have helped so far! The roadmap for next release is just being prepared, you can influence this by expressing support for individual issues either by comments or by providing bounty for them.

Filed under: Debian English phpMyAdmin SUSE Weblate

01 June, 2018 12:00PM

bisco

Second GSoC Report

A lot has happened since the last report. The main change in nacho was probably the move to integrate django-ldapdb. This abstracts a lot of operations one would otherwise have to do on the directory using bare ldap, and it also provides the possibility of having the LDAP objects in the Django admin interface, as those are addressed as Django models. By using django-ldapdb i was able to remove around 90% of the self-written ldap logic. The only functionality where i still have to use the ldap library directly is the password operations. It would be possible to implement these features with django-ldapdb, but then i would have to integrate password hashing functionality into nacho and, above all, i would have to adjust the hashing function for every ldap server with a different hashing algorithm setting. This way the ldap server does the hashing and i won't have to set the algorithm in two places.

This led to the next feature i implemented, which was the password reset functionality. It works as known from most other sites: one enters a username and gets an email with a password reset link. Related to this is also the modification operation of the mail attribute: i wasn't sure if the email address should be changeable right away or if a new address should be confirmed with a token sent by mail. We talked about this during our last mentors-student meeting and both formorer and babelouest said it would be good to have a confirmation for email addresses. So that was another feature i implemented.

Two more attributes that weren’t part of nacho up until now were SSH keys and a profile image. Especially the ssh keys led to a redesign of the profile page, because there can be multiple ssh keys. So i changed the profile container to be a bootstrap card and the individual areas are tabs in this card:

Screenshot of the profile page

For the image i had to create a special upload form that saves the bytestream of the file directly to ldap, which stores it as base64-encoded data. The display of the jpegPhoto field is then done via

<img src=data:image/png;base64,...

This way we don’t have to store the image files on the server at all.

A short note about the ssh key schema

We are using this openssh-ldap schema. To include the schema in the slapd installation it has to be converted to an ldif file. For that i had to create a temporary file, let's call it schema_convert.conf, with the line

include /path/to/openssh-ldap.schema

using

sudo slaptest -f schema_convert.conf -F /tmp/temporaryfolder

one gets a folder containing the ldif file in /tmp/temporaryfolder/cn=config/cn=schema/cn={0}openssh-ldap.ldif. This file has to be edited (remove the metadata) and can then be added to ldap using:

ldapadd -Y EXTERNAL -H ldapi:/// -f openssh-ldap.ldif
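The "remove the metadata" step means stripping the operational attributes that slaptest adds and fixing up the dn. A hedged sketch of that edit (the exact attribute list and the generated cn may differ between slapd versions and schema files):

# copy the generated file under a simpler name, then strip the operational attributes
cp '/tmp/temporaryfolder/cn=config/cn=schema/cn={0}openssh-ldap.ldif' openssh-ldap.ldif
sed -i \
  -e '/^structuralObjectClass:/d' \
  -e '/^entryUUID:/d' \
  -e '/^creatorsName:/d' \
  -e '/^createTimestamp:/d' \
  -e '/^entryCSN:/d' \
  -e '/^modifiersName:/d' \
  -e '/^modifyTimestamp:/d' \
  openssh-ldap.ldif

# replace the numbered dn/cn with a plain schema entry under cn=config
sed -i \
  -e 's/^dn: cn={0}openssh-ldap$/dn: cn=openssh-ldap,cn=schema,cn=config/' \
  -e 's/^cn: {0}openssh-ldap$/cn: openssh-ldap/' \
  openssh-ldap.ldif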

What else happened

Another big improvement is the admin site. Using django-ldapdb i have a model view on selected ldap tree areas and can manage them using the webinterface. Using the group mapping feature of django-auth-ldap i was able to give management permissions to groups that are also stored in ldap.

I updated the nacho debian package. Now that django-ldapdb is in testing, all the dependencies can be installed from Debian packages. I started to use the salsa issue tracker for the issues, which makes it a lot easier to keep track of things to do. I took a whole day to start getting into unit tests and i started writing some. On day two of the unit test experience i started using the gitlab continuous integration feature of salsa. Now every commit is being checked against the test suite. But there are only around 20 tests at the moment and they only cover registration, login and password reset - i guess there are around 100 test cases for all the other stuff that i still have to write ;)

01 June, 2018 06:28AM

Paul Wise

FLOSS Activities May 2018

Changes

Issues

Review

Administration

  • iotop: merge patch
  • Debian: buildd check, install package, redirect support, fix space in uid/gid, reboot lock workaround
  • Debian mentors: reboot for security updates
  • Debian wiki: whitelist email addresses,
  • Openmoko: web server restart

Communication

Sponsors

The tesseract/purple-discord work, bug reports for samba/git-lab/octotree/dh-make-golang and AutomaticPackagingTools change were sponsored by my employer. All other work was done on a volunteer basis.

01 June, 2018 12:39AM