<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:dc="http://purl.org/dc/elements/1.1/"
     xmlns:wfw="http://wellformedweb.org/CommentAPI/"
     >
  <channel>
    <title>Mister Muffin Blog</title>
    <link>http://blog.mister-muffin.de</link>
    <description>Your Blog's short description</description>
    <pubDate>Sun, 18 Dec 2016 16:34:55 GMT</pubDate>
    <generator>Blogofile</generator>
    <sy:updatePeriod>hourly</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <item>
      <title>Looking for self-hosted filesharing software</title>
      <link>http://blog.mister-muffin.de/2016/12/18/looking-for-self-hosted-filesharing-software</link>
      <pubDate>Sun, 18 Dec 2016 10:18:00 CET</pubDate>
      <category><![CDATA[blog]]></category>
      <category><![CDATA[debian]]></category>
      <guid>MnOgVF-DNRLL66n78CS5NoEfSW8=</guid>
      <description>Looking for self-hosted filesharing software</description>
      <content:encoded><![CDATA[<p>The owncloud package was <a href="https://tracker.debian.org/news/764369">removed</a> from
Debian unstable and testing. I am thus now looking for an alternative.
Unfortunately, finding such a replacement seems to be harder than I initially
thought, even though I only use a very small subset of what owncloud provides.
What I require is some software which allows me to:</p>
<ol>
<li>upload a directory of files of any type to my server (no "distributed" filesharing where I have to stay online with my laptop)</li>
<li>share the content of that directory via HTTP (no requirement to install any additional software other than a web browser)</li>
<li>let the share-links be private (no possibility to infer the location of other shares)</li>
<li>allow users to browse that directory (image thumbnails or a photo gallery would be nice)</li>
<li>allow me to allow anonymous users to upload their own content into that directory (also only requiring their web browser)</li>
<li>already in Debian or easy to package and maintain due to low complexity (I don't have enough time to become the next "owncloud maintainer")</li>
</ol>
<p>I thought this was a pretty simple task to solve, but I am unable to find any
software that fits the above criteria.</p>
<p>The table below shows the result of my research into what's currently available.
The columns mark whether the respective software fulfills each of the six
criteria from above.</p>
<table border=1>
<tr><th>Software</th>           <th>1</th><th>2</th><th>3</th><th>4</th><th>5</th><th>6</th></tr>
<tr><td>owncloud</td>           <td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✘</td></tr>
<tr><td>sparkleshare</td>       <td>✔</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✔</td></tr>
<tr><td>dvcs-autosync</td>      <td>✔</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✔</td></tr>
<tr><td>git annex assistant</td><td>✔</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✔</td></tr>
<tr><td>syncthing</td>          <td>✔</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✔</td></tr>
<tr><td>pydio</td>              <td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✘</td></tr>
<tr><td>seafile</td>            <td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✘</td></tr>
<tr><td>sandstorm.io</td>       <td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✔</td><td>✘</td></tr>
<tr><td>ipfs</td>               <td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td><td>✘</td></tr>
<tr><td>bozon</td>              <td>✔</td><td>✔</td><td>✔</td><td>✘</td><td>✘</td><td>✔</td></tr>
<tr><td>droppy</td>             <td>✔</td><td>✔</td><td>✔</td><td>✘</td><td>✘</td><td>✔</td></tr>
</table>

<p>Pydio, seafile and sandstorm.io look promising, but they seem to be beasts
similar in complexity to owncloud as they bring features like version tracking,
office integration, wikis, synchronization across multiple devices or online
editing of files, none of which I need.</p>
<p>I would already be very happy if there was a script which would make it easy to
create a hard-to-guess symlink to a directory with data tracked by git annex
under my www-root and then generate some static HTML to provide a thumbnails
view or a photo gallery. Unfortunately, even that solution would not be
sufficient as it would still not allow uploads by the people whom I give the
link to...</p>
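<p>As a rough illustration, the sharing part of such a script could look like the
sketch below; all paths are placeholders and <code>generate-gallery</code> stands in for
whatever static gallery generator one would actually use:</p>
<pre><code># create a hard-to-guess share name
token=$(head -c 18 /dev/urandom | base64 | tr '+/' '-_')
mkdir -p "/var/www/shares/$token"
# expose a directory tracked by git annex below the www-root
ln -s /home/josch/annex/photos "/var/www/shares/$token/photos"
# generate a static thumbnail/gallery view next to it (hypothetical tool)
generate-gallery "/var/www/shares/$token/photos" "/var/www/shares/$token/gallery"
echo "share this link: https://example.com/shares/$token/gallery/"
</code></pre>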
<p>If you know some software that meets my criteria or would like to submit
corrections to the above table, please shoot an email to josch@debian.org. Thanks!</p>]]></content:encoded>
    </item>
    <item>
      <title>Let's Encrypt with Pound on Debian</title>
      <link>http://blog.mister-muffin.de/2015/11/04/let's-encrypt-with-pound-on-debian</link>
      <pubDate>Wed, 04 Nov 2015 13:26:00 CET</pubDate>
      <category><![CDATA[debian]]></category>
      <guid>HOJ6Gk61oL0NX2gTJJTdF_TFjOg=</guid>
      <description>Let's Encrypt with Pound on Debian</description>
      <content:encoded><![CDATA[<p>TLDR: mister-muffin.de (and all its subdomains), bootstrap.debian.net and
binarycontrol.debian.net are now finally signed by "Let's Encrypt Authority X1"
\o/</p>
<p><em>EDIT2</em>: I created this post when Let's Encrypt was still in beta. For a recipe
of how to use letsencrypt with pound and without superuser privileges, read the
very last section at the bottom.</p>
<p>I just tried out the letsencrypt client Debian packages prepared by Harlan
Lieberman-Berg which can be found here:</p>
<ul>
<li>python-acme <a href="https://anonscm.debian.org/cgit/letsencrypt/python-acme.git/">git</a> <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=801356">ITP</a></li>
<li>python-letsencrypt (needs python-acme) <a href="https://anonscm.debian.org/cgit/letsencrypt/letsencrypt.git/">git</a> <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=774387">ITP</a></li>
</ul>
<p>My server setup uses <a href="http://www.apsis.ch/pound">Pound</a> as a reverse proxy in
front of a number of LXC based containers running the actual services.
Furthermore, letsencrypt only supports Nginx and Apache for now, so I had to
set things up manually anyway. Here is how.</p>
<p>After installing the Debian packages I built from above git repositories, I ran
the following commands:</p>
<pre><code>$ mkdir -p letsencrypt/etc letsencrypt/lib letsencrypt/log
$ letsencrypt certonly --authenticator manual --agree-dev-preview \
    --server https://acme-v01.api.letsencrypt.org/directory --text \
    --config-dir letsencrypt/etc --logs-dir letsencrypt/log \
    --work-dir letsencrypt/lib --email josch@mister-muffin.de \
    --domains mister-muffin.de --domains blog.mister-muffin.de \
    --domains [...]
</code></pre>
<p>I created the <code>letsencrypt</code> directory structure to be able to run <code>letsencrypt</code>
as a normal user. Otherwise, running this command would require access to
<code>/etc/letsencrypt</code> and others. Having to set this up and pass all these
parameters is a bit bothersome but there is an <a href="https://github.com/letsencrypt/letsencrypt/issues/973">upstream
issue</a> about making this
easier when using the "certonly" option, which in principle should not require
superuser privileges.</p>
<p>The <code>--server</code> option is necessary for now because "Let's Encrypt" is <a href="https://community.letsencrypt.org/t/beta-program-announcements/1631">still in
beta and one needs to register for
it</a>.
Without the <code>--server</code> option one will get an untrusted certificate from the
"happy hacker fake CA".</p>
<p>The <code>letsencrypt</code> program will then ask me for my agreement to the Terms of
Service and then, for each domain I specified with the <code>--domains</code> option,
present me with the token content and the location under that domain where it
expects to find this content. Each time, this looks like the following:</p>
<pre><code>-------------------------------------------------------------------------------
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running letsencrypt in manual mode on a machine that is
not your server, please ensure you're okay with that.

Are you OK with your IP being logged?
-------------------------------------------------------------------------------
(Y)es/(N)o: Y
Make sure your web server displays the following content at
http://mister-muffin.de/.well-known/acme-challenge/XXXX before continuing:

{"header": {"alg": "RS256", "jwk": {"e": "AQAB", "kty": "RSA", "n": "YYYY"}}, "payload": "ZZZZ", "signature": "QQQQ"}

Content-Type header MUST be set to application/jose+json.

If you don't have HTTP server configured, you can run the following
command on the target server (as root):

mkdir -p /tmp/letsencrypt/public_html/.well-known/acme-challenge
cd /tmp/letsencrypt/public_html
echo -n '{"header": {"alg": "RS256", "jwk": {"e": "AQAB", "kty": "RSA", "n": "YYYY"}}, "payload": "ZZZZ", "signature": "QQQQ"}' &gt; .well-known/acme-challenge/XXXX
# run only once per server:
$(command -v python2 || command -v python2.7 || command -v python2.6) -c \
"import BaseHTTPServer, SimpleHTTPServer; \
SimpleHTTPServer.SimpleHTTPRequestHandler.extensions_map = {'': 'application/jose+json'}; \
s = BaseHTTPServer.HTTPServer(('', 80), SimpleHTTPServer.SimpleHTTPRequestHandler); \
s.serve_forever()" 
Press ENTER to continue
</code></pre>
<p>For brevity I replaced any large base64 encoded chunks of the messages with
<code>YYYY</code>, <code>ZZZZ</code> and <code>QQQQ</code>. The token location is abbreviated with <code>XXXX</code>.</p>
<p>After temporarily stopping Pound on my webserver I created the directory
<code>/tmp/letsencrypt/public_html/.well-known/acme-challenge</code> and then opened two
shells on my server, both at <code>/tmp/letsencrypt/public_html</code>. In one, I kept a
tiny HTTP server running (like the suggested Python SimpleHTTPServer, which
works as long as Python is installed). In the other I copy-pasted the <code>echo</code>
line that the <code>letsencrypt</code> program suggested I run.</p>
<p>I had to copy and paste that <code>echo</code> command for each domain I wanted to verify. This
could easily be automated, so I <a href="https://github.com/letsencrypt/letsencrypt/issues/1321">filed an issue about
this</a> with upstream.</p>
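<p>Until that is implemented, the copy-pasting can be scripted away with a small
loop, assuming one first collects the token names and contents that
<code>letsencrypt</code> prints into a file (here a hypothetical <code>tokens.txt</code> with one
"name content" pair per line):</p>
<pre><code>while read -r name content; do
    printf '%s' "$content" &gt; ".well-known/acme-challenge/$name"
done &lt; tokens.txt
</code></pre>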
<p>It seems that the letsencrypt servers query each of these tokens twice: once
immediately after I hit enter at the message shown above, and a second time
once all tokens are in place.</p>
<p>At the end of this ordeal I get:</p>
<pre><code>2015-11-04 11:12:18,409:WARNING:letsencrypt.client:Non-standard path(s), might not work with crontab installed by your operating system package manager

IMPORTANT NOTES:
 - If you lose your account credentials, you can recover through
   e-mails sent to josch@mister-muffin.de.
 - Congratulations! Your certificate and chain have been saved at
   letsencrypt/etc/live/mister-muffin.de/fullchain.pem. Your cert will
   expire on 2016-02-02. To obtain a new version of the certificate in
   the future, simply run Let's Encrypt again.
 - Your account credentials have been saved in your Let's Encrypt
   configuration directory at letsencrypt/etc. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Let's
   Encrypt so making regular backups of this folder is ideal.
</code></pre>
<p>I can now scp the content of <code>letsencrypt/etc/live/mister-muffin.de/*</code> to my
server. Unfortunately, Pound (and also my ejabberd XMPP server) requires the
private key to be in the same file as the certificate and the chain, so on the
server I also had to do:</p>
<pre><code>cat /etc/ssl/private/privkey.pem /etc/ssl/private/fullchain.pem &gt; /etc/ssl/private/private_fullchain.pem
</code></pre>
<p>And edit the Pound config to use <code>/etc/ssl/private/private_fullchain.pem</code>. But
that's all, folks!</p>
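<p>A minimal <code>ListenHTTPS</code> section using the combined file could then look
roughly like this (address and port are placeholders for whatever the listener
actually uses):</p>
<pre><code>ListenHTTPS
        Address 0.0.0.0
        Port    443
        Cert    "/etc/ssl/private/private_fullchain.pem"
End
</code></pre>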
<p><em>EDIT</em></p>
<p>It seems that manually copying over the echo commands as I described above is
not necessary. Instead of using the <code>manual</code> authenticator, I can use the <code>webroot</code>
authenticator. That plugin takes the <code>--webroot-path</code> option and will copy the
tokens there. Since my webroot is on a remote machine, I could just mount it
locally via sshfs and pass the mountpoint as <code>--webroot-path</code>.</p>
<p>That I didn't realize that the webroot plugin does what I want (rather than the
manual one) is easily explained: the only documentation of the webroot plugin
in the help output, and in the man page generated from it, is the string
"Webroot Authenticator", which is not very helpful.</p>
<p>Another user seems to have run into <a href="https://github.com/letsencrypt/letsencrypt/issues/1190">similar
problems</a>. Better
documenting the plugins so that these situations can be prevented in the future
is tracked in <a href="https://github.com/letsencrypt/letsencrypt/issues/1137">this upstream
bug</a>.</p>
<p><em>EDIT2</em></p>
<p>Now that letsencrypt is out for everybody, let's update the instructions with
what I learned. Firstly, since we don't want a long downtime, we add the
following section to <code>/etc/pound/pound.cfg</code>:</p>
<pre><code>Service
        URL "^/.well-known/acme-challenge/"
        BackEnd
                Address 127.0.0.1
                Port 8000
        End
End
</code></pre>
<p>This will make sure that all requests to <code>/.well-known/acme-challenge/</code> and
below are forwarded to a server running on port 8000. That service will be a
temporary webserver which we will only switch on for the purpose of retrieving
new certificates. So on my server I run:</p>
<pre><code>$ mkdir ~/letsencrypt
$ (cd ~/letsencrypt &amp;&amp; python3 -m http.server 8000)
</code></pre>
<p>Now on my laptop I mount that directory via sshfs locally:</p>
<pre><code>$ sshfs fulda:/root/letsencrypt ~/letsencrypt/fulda
</code></pre>
<p>And finally I use the <code>webroot</code> authenticator to automatically retrieve and
validate all my certificates. No manual intervention needed anymore:</p>
<pre><code>$ letsencrypt certonly --authenticator webroot --text \
    --config-dir letsencrypt/etc --logs-dir letsencrypt/log \
    --work-dir letsencrypt/lib --email josch@mister-muffin.de \
    --webroot-path ~/letsencrypt/fulda --domains mister-muffin.de \
    --domains [...]
</code></pre>
<p>Now I can quit the python webserver running on my server and copy the generated
certificates into their right locations.</p>]]></content:encoded>
    </item>
    <item>
      <title>unshare without superuser privileges</title>
      <link>http://blog.mister-muffin.de/2015/10/25/unshare-without-superuser-privileges</link>
      <pubDate>Sun, 25 Oct 2015 18:44:00 CET</pubDate>
      <category><![CDATA[code]]></category>
      <category><![CDATA[debian]]></category>
      <category><![CDATA[linux]]></category>
      <guid>ZnNil9Fg7tIaib8VcibEfeqYBCk=</guid>
      <description>unshare without superuser privileges</description>
      <content:encoded><![CDATA[<p>TLDR: With the help of Helmut Grohne I finally figured out most of the bits
necessary to unshare everything without becoming root (though one might say
that this is still cheating because the suid root tools <code>newuidmap</code> and <code>newgidmap</code>
are used). I wrote a Perl script which documents how this is done in practice.
This script is nearly equivalent to using the existing commands <code>lxc-usernsexec
[opts] -- unshare [opts] -- COMMAND</code> except that these two together cannot be
used to mount a new proc. Apart from this problem, this Perl script might also
be useful by itself because it is architecture independent and easily
inspectable for the curious mind without resorting to sources.debian.net (it is
heavily documented at nearly 2 lines of comments per line of code on average).
It can be retrieved here at
<a href="https://gitlab.mister-muffin.de/josch/user-unshare/blob/master/user-unshare">https://gitlab.mister-muffin.de/josch/user-unshare/blob/master/user-unshare</a></p>
<p>Long story: Nearly two years after my <a href="/2014/01/11/why-do-i-need-superuser-privileges-when-i-just-want-to-write-to-a-regular-file/">last rant about everything needing
superuser privileges in
Linux</a>,
I'm still interested in techniques that let me do more things without becoming
root. Helmut Grohne had been telling me for a while about unshare() and user namespaces
as the right way to have things like chroot without root. There are also
reports of LXC containers working without root privileges but they are hard to
come by. A couple of days ago I had some time again, so Helmut helped me to get
through the major blockers that were so far stopping me from using unshare in a
meaningful way without executing everything with <code>sudo</code>.</p>
<p>My main motivation at that point was to let <code>dpkg-buildpackage</code>, when executed
by <code>sbuild</code>, run with an unshared network namespace and thus without network
access (except for the loopback interface) because like pbuilder I wanted
sbuild to enforce the rule not to access any remote resources during the build.
After several evenings of investigating and doctoring on the Perl script I
mentioned initially, I came to the conclusion that the only place that can
unshare the network namespace without disrupting anything is schroot itself.
This is because unsharing <em>inside</em> the chroot will fail because
dpkg-buildpackage is run with non-root privileges and thus the user namespace
has to be unshared. But this then will destroy all ownership information. But
even if that wasn't the case, the chroot itself is unlikely to have (and also
should not) tools like <code>ip</code> or <code>newuidmap</code> and <code>newgidmap</code> installed. Unsharing
the schroot call itself also will not work. Again we first need to unshare the
user namespace and then schroot will complain about wrong ownership of its
configuration file <code>/etc/schroot/schroot.conf</code>. Luckily, when contacting Roger
Leigh about this wishlist feature in
<a href="http://bugs.debian.org/802849">bug#802849</a> I was told that this was already
implemented in its git master \o/. So this particular problem seems to be taken
care of and once the next schroot release happens, sbuild will make use of it
and have <code>unshare --net</code> capabilities just like <code>pbuilder</code> already had since
last year.</p>
<p>With the sbuild case taken care of, the rest of this post will introduce <a href="https://gitlab.mister-muffin.de/josch/user-unshare/blob/master/user-unshare">the
Perl script I wrote</a>.
The name <code>user-unshare</code> is really arbitrary. I just needed some identifier for
the git repository and a filename.</p>
<p>The most important discovery I made was that Debian disables unprivileged user
namespaces by default with the patch
<code>add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by-default.patch</code> to the
Linux kernel. To enable it, one has to first either do</p>
<pre><code>echo 1 | sudo tee /proc/sys/kernel/unprivileged_userns_clone &gt; /dev/null
</code></pre>
<p>or</p>
<pre><code>sudo sysctl -w kernel.unprivileged_userns_clone=1
</code></pre>
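<p>To make this setting survive a reboot, it can also be persisted in a file under
<code>/etc/sysctl.d/</code> (the file name below is arbitrary):</p>
<pre><code>echo kernel.unprivileged_userns_clone=1 | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system
</code></pre>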
<p>The tool tries to be like unshare(1) but with the power of lxc-usernsexec(1) to
map more than one id into the new user namespace by using the programs
<code>newgidmap</code> and <code>newuidmap</code>. Or in other words: This tool tries to be like
lxc-usernsexec(1) but with the power of unshare(1) to unshare more than just
the user and mount namespaces. It is nearly equal to calling:</p>
<pre><code>lxc-usernsexec [opts] -- unshare [opts] -- COMMAND
</code></pre>
<p>Its main reasons for existence are:</p>
<ul>
<li>as a project for me to learn how unprivileged namespaces work</li>
<li>written in Perl which means:<ul>
<li>architecture independent (same executable on any architecture)</li>
<li>easily inspectable by other curious minds</li>
</ul>
</li>
<li>tons of code comments to let others understand how things work</li>
<li>no need to install the lxc package in a minimal environment (perl itself
    might not be called minimal either but is present in every Debian
    installation)</li>
<li>not suffering from being unable to mount proc</li>
</ul>
<p>I hoped that <code>systemd-nspawn</code> could do what I wanted but it seems that its
requirement for being run as root will <a href="http://lists.freedesktop.org/archives/systemd-devel/2015-February/028139.html">not change any time
soon</a>.</p>
<p>Another tool in Debian that offers to do chroot without superuser privileges is
<code>linux-user-chroot</code> but that one cheats by being suid root.</p>
<p>Had I found <code>lxc-usernsexec</code> earlier I would've probably not written this. But
after I found it I happily used it to get an even better understanding of the
matter and further improve the comments in my code. I started writing my own
tool in Perl because that's the language sbuild is written in and, as mentioned
initially, I intended to use this script with sbuild. Now that the sbuild
problem is taken care of, this is not so important anymore, but I like it if I can
read the code of simple programs I run directly from /usr/bin without having to
retrieve the source code first or use sources.debian.net.</p>
<p>The only thing I wasn't able to figure out is how to properly mount proc into
my new mount namespace. I found a workaround that works by first mounting a new
proc to <code>/proc</code> and then bind-mounting <code>/proc</code> to whatever new location for
proc is requested. I didn't figure out how to do this without mounting to
<code>/proc</code> first partly also because this doesn't work at all when using
<code>lxc-usernsexec</code> and <code>unshare</code> together. In this respect, this perl script is a
bit more powerful than those two tools together. I suppose that the reason is
that <code>unshare</code> wasn't written with being called without superuser
privileges in mind. If you have an idea what could be wrong, the code has a big
<code>FIXME</code> about this issue.</p>
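<p>In shell terms, the workaround corresponds to roughly the following two mount
calls, executed inside the new namespaces (the target path is just an example):</p>
<pre><code>mount -t proc proc /proc
mount --bind /proc /tmp/buildroot/proc
</code></pre>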
<p>Finally, here is a demonstration of what my script can do. Because of the <code>/proc</code>
bug, <code>lxc-usernsexec</code> and <code>unshare</code> together are not able to do this but it
might also be that I'm just not using these tools in the right way. The
following will give you an interactive shell in an environment created from one
of my sbuild chroot tarballs:</p>
<pre><code>$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net \
    --uts --mount --fork -- sh -c 'ip link set lo up &amp;&amp; ip addr &amp;&amp; \
    hostname hoothoot-chroot &amp;&amp; \
    tar -C /tmp/buildroot -xf /srv/chroot/unstable-amd64.tar.gz; \
    /usr/sbin/chroot /tmp/buildroot /sbin/runuser -s /bin/bash - josch &amp;&amp; \
    umount /tmp/buildroot/proc &amp;&amp; rm -rf /tmp/buildroot'
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ whoami
josch
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ hostname
hoothoot-chroot
(unstable-amd64-sbuild)josch@hoothoot-chroot:/$ ls -lha /proc | head
total 0
dr-xr-xr-x 218 nobody nogroup    0 Oct 25 19:06 .
drwxr-xr-x  22 root   root     440 Oct  1 08:42 ..
dr-xr-xr-x   9 root   root       0 Oct 25 19:06 1
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 15
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 16
dr-xr-xr-x   9 root   root       0 Oct 25 19:06 7
dr-xr-xr-x   9 josch  josch      0 Oct 25 19:06 8
dr-xr-xr-x   4 nobody nogroup    0 Oct 25 19:06 acpi
dr-xr-xr-x   6 nobody nogroup    0 Oct 25 19:06 asound
</code></pre>
<p>Of course, instead of running this long command, we can also write a small
shell script and execute that. The following does the same things as the long
command above but adds some comments for further explanation:</p>
<div class="pygments_murphy"><pre><span></span><span class="ch">#!/bin/sh</span><br/><br/><span class="nb">set</span> -exu<br/><br/><span class="c1"># I&#39;m using /tmp because I have it mounted as a tmpfs</span><br/><span class="nv">rootdir</span><span class="o">=</span><span class="s2">&quot;/tmp/buildroot&quot;</span><br/><br/><span class="c1"># bring the loopback interface up</span><br/>ip link <span class="nb">set</span> lo up<br/><br/><span class="c1"># show that the loopback interface is really up</span><br/>ip addr<br/><br/><span class="c1"># make use of the UTS namespace being unshared</span><br/>hostname hoothoot-chroot<br/><br/><span class="c1"># extract the chroot tarball. This must be done inside the user namespace for</span><br/><span class="c1"># the file permissions to be correct.</span><br/><span class="c1">#</span><br/><span class="c1"># tar will fail to call mknod and to change the permissions of /proc but we are</span><br/><span class="c1"># ignoring that</span><br/>tar -C <span class="s2">&quot;</span><span class="nv">$rootdir</span><span class="s2">&quot;</span> -xf /srv/chroot/unstable-amd64.tar.gz <span class="o">||</span> <span class="nb">true</span><br/><br/><span class="c1"># run chroot and inside, immediately drop permissions to the user &quot;josch&quot; and</span><br/><span class="c1"># start an interactive shell</span><br/>/usr/sbin/chroot <span class="s2">&quot;</span><span class="nv">$rootdir</span><span class="s2">&quot;</span> /sbin/runuser -s /bin/bash - josch<br/><br/><span class="c1"># unmount /proc and remove the temporary directory</span><br/>umount <span class="s2">&quot;</span><span class="nv">$rootdir</span><span class="s2">/proc&quot;</span><br/>rm -rf <span class="s2">&quot;</span><span class="nv">$rootdir</span><span class="s2">&quot;</span><br/></pre></div>

<p>and then:</p>
<pre><code>$ mkdir -p /tmp/buildroot/proc
$ ./user-unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./chroot.sh
</code></pre>
<p>As mentioned in the beginning, the tool is nearly equivalent to calling
<code>lxc-usernsexec [opts] -- unshare [opts] -- COMMAND</code> but because of the problem
with mounting proc (mentioned earlier), <code>lxc-usernsexec</code> and <code>unshare</code> cannot
be used with the above example. If one tries anyway, one will only get:</p>
<pre><code>$ lxc-usernsexec -m b:0:1000:1 -m b:1:558752:1 -- unshare --mount-proc=/tmp/buildroot/proc --ipc --pid --net --uts --mount --fork -- ./chroot.sh
unshare: mount /tmp/buildroot/proc failed: Invalid argument
</code></pre>
<p>I'd be interested in finding out why that is and how to fix it.</p>]]></content:encoded>
    </item>
    <item>
      <title>new sbuild release 0.66.0</title>
      <link>http://blog.mister-muffin.de/2015/10/04/new-sbuild-release-0.66.0</link>
      <pubDate>Sun, 04 Oct 2015 11:00:00 CEST</pubDate>
      <category><![CDATA[debian]]></category>
      <guid>DYPJb7R9YYzgxwTOQQzWwup3GBY=</guid>
      <description>new sbuild release 0.66.0</description>
      <content:encoded><![CDATA[<p>I just released sbuild 0.66.0-1 into unstable. It fixes a whopping 30 bugs!
Thus, I'd like to use this platform to:</p>
<ul>
<li>kindly ask all sbuild users to report any new bugs introduced with this
   release</li>
<li>give a big thank you to everybody who supplied the patches that made fixing
   this many bugs possible (in alphabetical order): Aurelien Jarno, Christian
   Kastner, Christoph Egger, Colin Watson, Dima Kogan, Guillem Jover, Luca
   Falavigna, Maria Valentina Marin Rordrigues, Miguel A. Colón Vélez, Paul
   Tagliamonte</li>
</ul>
<p>And a super big thank you to Roger Leigh who, despite having resigned from
Debian, was always available to give extremely helpful hints, tips, opinion and
guidance with respect to sbuild development. Thank you!</p>
<p>Here is a list of the major changes since the last release:</p>
<ul>
<li>add option <code>--arch-all-only</code> to build <code>arch:all</code> packages</li>
<li>environment variable <code>SBUILD_CONFIG</code> allows specifying a custom
     configuration file</li>
<li>add option <code>--build-path</code> to set a deterministic build path</li>
<li>fix crossbuild dependency resolution</li>
<li>add option <code>--extra-repository-key</code> for extra apt keys</li>
<li>add option <code>--build-dep-resolver=aspcud</code> for aspcud based resolver</li>
<li>allow complex commands as sbuild hooks</li>
<li>new external command <code>%SBUILD_SHELL</code> produces an interactive shell</li>
<li>add options <code>--build-deps-failed-commands</code>, <code>--build-failed-commands</code> and
     <code>--anything-failed-commands</code> for more hooks</li>
</ul>]]></content:encoded>
    </item>
    <item>
      <title>I became a Debian Developer</title>
      <link>http://blog.mister-muffin.de/2015/02/04/i-became-a-debian-developer</link>
      <pubDate>Wed, 04 Feb 2015 18:00:00 CET</pubDate>
      <category><![CDATA[blog]]></category>
      <category><![CDATA[debian]]></category>
      <guid>CZZVOyt2Jcec0I9N5k6GfsyF5Ck=</guid>
      <description>I became a Debian Developer</description>
      <content:encoded><![CDATA[<p><img width="75%" src="/images/josch_dd.jpg" /></p>
<p>Thanks to akira for the confetti to celebrate the occasion!</p>]]></content:encoded>
    </item>
    <item>
      <title>simple email setup</title>
      <link>http://blog.mister-muffin.de/2014/11/30/simple-email-setup</link>
      <pubDate>Sun, 30 Nov 2014 16:39:00 CET</pubDate>
      <category><![CDATA[config]]></category>
      <category><![CDATA[debian]]></category>
      <guid>AR6-RLeG5cWSDhlS6t1hUQEQOoY=</guid>
      <description>simple email setup</description>
      <content:encoded><![CDATA[<p>I was unable to find a good place that describes how to create a simple
self-hosted email setup. The most surprising discovery was, how much already
works after:</p>
<pre><code>apt-get install postfix dovecot-imapd
</code></pre>
<p>Right after having finished the installation I was able to receive email (but
only into <code>/var/mail</code> in mbox format) and send email (but not from any other
host). So while I expected a pretty complex setup, it turned out to boil down
to just adjusting some configuration parameters.</p>
<h1 id="postfix">Postfix</h1>
<p>The two interesting files to configure postfix are <code>/etc/postfix/main.cf</code> and
<code>/etc/postfix/master.cf</code>. A commented version of the former exists in
<code>/usr/share/postfix/main.cf.dist</code>. Alternatively, there is the ~600k word
strong man page postconf(5). The latter file is documented in master(5).</p>
<h2 id="etcpostfixmaincf">/etc/postfix/main.cf</h2>
<p>I changed the following in my <code>main.cf</code></p>
<div class="pygments_murphy"><pre><span></span><span class="gu">@@ -37,3 +37,9 @@</span><br/> mailbox_size_limit = 0<br/> recipient_delimiter = +<br/> inet_interfaces = all<br/><span class="gi">+</span><br/><span class="gi">+home_mailbox = Mail/</span><br/><span class="gi">+smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated</span><br/><span class="gi">+smtpd_sasl_type = dovecot</span><br/><span class="gi">+smtpd_sasl_path = private/auth</span><br/><span class="gi">+smtp_helo_name = my.reverse.dns.name.com</span><br/></pre></div>

<p>At this point, also make sure that the parameters <code>smtpd_tls_cert_file</code> and
<code>smtpd_tls_key_file</code> point to the right certificate and private key file. So
either change these values or replace the content of
<code>/etc/ssl/certs/ssl-cert-snakeoil.pem</code> and
<code>/etc/ssl/private/ssl-cert-snakeoil.key</code>.</p>
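<p>Changing the two values can for example be done with <code>postconf</code>; the paths
here are only placeholders for wherever the real certificate and key live:</p>
<pre><code>postconf -e smtpd_tls_cert_file=/etc/ssl/certs/mail.example.com.pem
postconf -e smtpd_tls_key_file=/etc/ssl/private/mail.example.com.key
</code></pre>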
<p>The <code>home_mailbox</code> parameter sets the default path for incoming mail. Since
there is no leading slash, this puts mail into <code>$HOME/Mail</code> for each user. The
trailing slash is important as it specifies "qmail-style delivery", which
means maildir.</p>
<p>The default of the <code>smtpd_recipient_restrictions</code> parameter is
<code>permit_mynetworks reject_unauth_destination</code> so this just adds the
<code>permit_sasl_authenticated</code> option. This is necessary to allow users to send
email when they successfully verified their login through dovecot.  The dovecot
login verification is activated through the <code>smtpd_sasl_type</code> and
<code>smtpd_sasl_path</code> parameters.</p>
<p>I found it necessary to set the <code>smtp_helo_name</code> parameter to the reverse DNS
name of my server because many other email servers would only accept email from
a server with a valid reverse DNS entry. My hosting provider charges USD 7.50
per month to change the default reverse DNS name, so the easy solution is to
instead just adjust the name announced in the SMTP <code>helo</code>.</p>
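<p>The reverse DNS name to announce can be looked up with a simple query (the
address below is a placeholder for the server's public IP):</p>
<pre><code>$ dig +short -x 203.0.113.42
</code></pre>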
<h2 id="etcpostfixmastercf">/etc/postfix/master.cf</h2>
<p>The file <code>master.cf</code> is used to enable the <code>submission</code> service. The following
diff just removes the comment character from the appropriate section.</p>
<div class="pygments_murphy"><pre><span></span><span class="gu">@@ -13,12 +13,12 @@</span><br/> #smtpd     pass  -       -       -       -       -       smtpd<br/> #dnsblog   unix  -       -       -       -       0       dnsblog<br/> #tlsproxy  unix  -       -       -       -       0       tlsproxy<br/><span class="gd">-#submission inet n       -       -       -       -       smtpd</span><br/><span class="gd">-#  -o syslog_name=postfix/submission</span><br/><span class="gd">-#  -o smtpd_tls_security_level=encrypt</span><br/><span class="gd">-#  -o smtpd_sasl_auth_enable=yes</span><br/><span class="gd">-#  -o smtpd_client_restrictions=permit_sasl_authenticated,reject</span><br/><span class="gd">-#  -o milter_macro_daemon_name=ORIGINATING</span><br/><span class="gi">+submission inet n       -       -       -       -       smtpd</span><br/><span class="gi">+  -o syslog_name=postfix/submission</span><br/><span class="gi">+  -o smtpd_tls_security_level=encrypt</span><br/><span class="gi">+  -o smtpd_sasl_auth_enable=yes</span><br/><span class="gi">+  -o smtpd_client_restrictions=permit_sasl_authenticated,reject</span><br/><span class="gi">+  -o milter_macro_daemon_name=ORIGINATING</span><br/> #smtps     inet  n       -       -       -       -       smtpd<br/> #  -o syslog_name=postfix/smtps<br/> #  -o smtpd_tls_wrappermode=yes<br/></pre></div>

<h1 id="dovecot">Dovecot</h1>
<p>Since the above configuration changes made postfix store email in a different
location and format than the default, dovecot has to be informed about these
changes as well. This is done in <code>/etc/dovecot/conf.d/10-mail.conf</code>. The second
configuration change enables postfix to authenticate users through dovecot in
<code>/etc/dovecot/conf.d/10-master.conf</code>. For SSL one should look into
<code>/etc/dovecot/conf.d/10-ssl.conf</code> and either adapt the parameters <code>ssl_cert</code>
and <code>ssl_key</code> or store the correct certificate and private key in
<code>/etc/dovecot/dovecot.pem</code> and <code>/etc/dovecot/private/dovecot.pem</code>,
respectively.</p>
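<p>If one adapts the parameters instead, the corresponding lines in
<code>10-ssl.conf</code> would look something like the following; note the leading
<code>&lt;</code>, which tells dovecot to read the value from a file (the paths are
placeholders):</p>
<pre><code>ssl_cert = &lt;/etc/ssl/certs/mail.example.com.pem
ssl_key = &lt;/etc/ssl/private/mail.example.com.key
</code></pre>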
<p>The <code>dovecot-core</code> package (which <code>dovecot-imapd</code> depends on) ships tons of
documentation. The file
<code>/usr/share/doc/dovecot-core/dovecot/documentation.txt.gz</code> gives an overview of
what resources are available. The path
<code>/usr/share/doc/dovecot-core/dovecot/wiki</code> contains a snapshot of the dovecot
wiki at http://wiki2.dovecot.org/. The example configurations seem to be the
same files as in <code>/etc/</code> which are already well commented.</p>
<h2 id="etcdovecotconfd10-mailconf">/etc/dovecot/conf.d/10-mail.conf</h2>
<p>The following diff changes the default email location in <code>/var/mail</code> to a
maildir in <code>~/Mail</code> as configured for postfix above.</p>
<div class="pygments_murphy"><pre><span></span><span class="gu">@@ -27,7 +27,7 @@</span><br/> #<br/> # &lt;doc/wiki/MailLocation.txt&gt;<br/> #<br/><span class="gd">-mail_location = mbox:~/mail:INBOX=/var/mail/%u</span><br/><span class="gi">+mail_location = maildir:~/Mail</span><br/> <br/> # If you need to set multiple mailbox locations or want to change default<br/> # namespace settings, you can do it by defining namespace sections.<br/></pre></div>

<h2 id="etcdovecotconfd10-masterconf">/etc/dovecot/conf.d/10-master.conf</h2>
<p>And this enables the authentication socket for postfix:</p>
<div class="pygments_murphy"><pre><span></span><span class="gu">@@ -93,9 +93,11 @@</span><br/>   }<br/> <br/>   # Postfix smtp-auth<br/><span class="gd">-  #unix_listener /var/spool/postfix/private/auth {</span><br/><span class="gd">-  #  mode = 0666</span><br/><span class="gd">-  #}</span><br/><span class="gi">+  unix_listener /var/spool/postfix/private/auth {</span><br/><span class="gi">+    mode = 0660</span><br/><span class="gi">+    user = postfix</span><br/><span class="gi">+    group = postfix</span><br/><span class="gi">+  }</span><br/> <br/>   # Auth process is run as this user.<br/>   #user = $default_internal_user<br/></pre></div>

<h1 id="aliases">Aliases</h1>
<p>Now email will automatically be put into the <code>~/Mail</code> directory of the receiver.
So a user has to be created for whom one wants to receive mail...</p>
<pre><code>$ adduser josch
</code></pre>
<p>...and any aliases for it have to be configured in <code>/etc/aliases</code>.</p>
<div class="pygments_murphy"><pre><span></span><span class="gu">@@ -1,2 +1,4 @@</span><br/><span class="gd">-# See man 5 aliases for format</span><br/><span class="gd">-postmaster:    root</span><br/><span class="gi">+root:       josch</span><br/><span class="gi">+postmaster: josch</span><br/><span class="gi">+hostmaster: josch</span><br/><span class="gi">+webmaster:  josch</span><br/></pre></div>

<p>After editing <code>/etc/aliases</code>, the command</p>
<pre><code>$ newaliases
</code></pre>
<p>has to be run. More can be read in the aliases(5) man page.</p>
<h1 id="finishing-up">Finishing up</h1>
<p>Everything is done and now postfix and dovecot have to be informed about the
changes. There are many ways to do that. Either restart the services, reboot or
just do:</p>
<pre><code>$ postfix reload
$ doveadm reload
</code></pre>
<h1 id="spf">SPF</h1>
<pre><code>$ apt-get install postfix-policyd-spf-python
</code></pre>
<h2 id="etcpostfixmaincf_1">/etc/postfix/main.cf</h2>
<div class="pygments_murphy"><pre><span></span>policy-spf_time_limit = 3600s<br/></pre></div>

<h2 id="etcpostfixmastercf_1">/etc/postfix/master.cf</h2>
<pre><code>policy-spf  unix  -       n       n       -       -       spawn
     user=nobody argv=/usr/bin/policyd-spf
</code></pre>
<p>Then add a DNS TXT record for the domain with the value:</p>
<pre><code>v=spf1 ip4:62.75.219.19 -all
</code></pre>
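<p>Whether the record is visible can be checked with a simple DNS query:</p>
<pre><code>$ dig +short TXT mister-muffin.de
</code></pre>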
<h2 id="etcpostfix-policyd-spf-pythonpolicyd-spfconf">/etc/postfix-policyd-spf-python/policyd-spf.conf</h2>
<div class="pygments_murphy"><pre><span></span>debugLevel = 1 <br/>defaultSeedOnly = 1<br/><br/>HELO_reject = SPF_Not_Pass<br/>Mail_From_reject = Fail<br/><br/>PermError_reject = False<br/>TempError_Defer = False<br/><br/>skip_addresses = 127.0.0.0/8,::ffff:127.0.0.0//104,::1//128<br/></pre></div>

<p>FIXME: the <code>skip_addresses</code> field should also list all hosts that I get email
forwarded from. For example if I get my josch@debian.org email forwarded to
this server, then I should list the debian.org mail relay servers. A list of
these can be found by doing:</p>
<pre><code>ldapsearch -x -LLL -b dc=debian,dc=org -h db.debian.org 'purpose=mail relay' ipHostNumber
</code></pre>
<p>Otherwise, senders with an SPF record with only their own IP and a final <code>-all</code>
will see their mail rejected by the server. This is because the email was
forwarded by the debian.org relay but that IP was not in their SPF record.</p>
<h1 id="dkim">DKIM</h1>
<pre><code>$ apt-get install opendkim opendkim-tools
$ mkdir /etc/mail
$ cd /etc/mail
$ opendkim-genkey -t -s mail -d mister-muffin.de
$ cat mail.txt
</code></pre>
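<p>The file <code>mail.txt</code> shown by the last command contains the TXT record that has
to be published under <code>mail._domainkey.mister-muffin.de</code>. Once the zone is
updated, the record can be checked with:</p>
<pre><code>$ dig +short TXT mail._domainkey.mister-muffin.de
</code></pre>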
<h2 id="etcopendkimconf">/etc/opendkim.conf</h2>
<pre><code>Domain                  mister-muffin.de
KeyFile                 /etc/mail/mail.private
Selector                mail
Canonicalization        relaxed/relaxed
</code></pre>
<h2 id="etcdefaultopendkim">/etc/default/opendkim</h2>
<p>SOCKET="inet:8891@localhost"</p>
<h2 id="etcpostfixmaincf_2">/etc/postfix/main.cf</h2>
<pre><code>milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
</code></pre>
<pre><code>$ service opendkim restart
$ service postfix restart
</code></pre>]]></content:encoded>
    </item>
    <item>
      <title>automatically suspending cpu hungry applications</title>
      <link>http://blog.mister-muffin.de/2014/11/07/automatically-suspending-cpu-hungry-applications</link>
      <pubDate>Fri, 07 Nov 2014 08:51:00 CET</pubDate>
      <category><![CDATA[config]]></category>
      <guid>DsbKrmxh5qvv6imdp5zBVwUcK_s=</guid>
      <description>automatically suspending cpu hungry applications</description>
      <content:encoded><![CDATA[<p>TLDR: Using the <a href="http://awesome.naquadah.org">awesome window manager</a>: how to automatically send
<code>SIGSTOP</code> and <code>SIGCONT</code> to application windows when they get unfocused or
focused, respectively, to let the application not waste CPU cycles when not in
use.</p>
<p>I don't require any fancy looking GUI, so my desktop runs no full-blown desktop
environment like Gnome or KDE but instead only awesome as a light-weight window
manager. Usually, the only application windows I have open are rxvt-unicode as
my terminal emulator and firefox/iceweasel with the <a href="http://5digits.org/pentadactyl">pentadactyl</a> extension as my
browser. Thus, I would expect that CPU usage of my idle system would be pretty
much zero but instead firefox decides to constantly eat 10-15%. Probably to
update some GIF animations or JavaScript (or nowadays even HTML5 video
animations).  But I don't need it to do that when I'm not currently looking at
my browser window. Disabling all JavaScript is not an option because some websites
that I need for uni or work are just completely broken without JavaScript, so I
have to enable it for those websites.</p>
<p>Solution: send <code>SIGSTOP</code> when my firefox window loses focus and send <code>SIGCONT</code>
once it gains focus again.</p>
<p>The following addition to my <code>/etc/xdg/awesome/rc.lua</code> does the trick:</p>
<div class="pygments_murphy"><pre><span></span><span class="kd">local</span> <span class="n">capi</span> <span class="o">=</span> <span class="p">{</span> <span class="n">timer</span> <span class="o">=</span> <span class="n">timer</span> <span class="p">}</span><br/><span class="n">client</span><span class="p">.</span><span class="n">add_signal</span><span class="p">(</span><span class="s2">&quot;</span><span class="s">focus&quot;</span><span class="p">,</span> <span class="k">function</span><span class="p">(</span><span class="n">c</span><span class="p">)</span><br/>  <span class="k">if</span> <span class="n">c</span><span class="p">.</span><span class="n">class</span> <span class="o">==</span> <span class="s2">&quot;</span><span class="s">Iceweasel&quot;</span> <span class="k">then</span><br/>    <span class="n">awful</span><span class="p">.</span><span class="n">util</span><span class="p">.</span><span class="n">spawn</span><span class="p">(</span><span class="s2">&quot;</span><span class="s">kill -CONT &quot;</span> <span class="o">..</span> <span class="n">c</span><span class="p">.</span><span class="n">pid</span><span class="p">)</span><br/>  <span class="k">end</span><br/><span class="k">end</span><span class="p">)</span><br/><span class="n">client</span><span class="p">.</span><span class="n">add_signal</span><span class="p">(</span><span class="s2">&quot;</span><span class="s">unfocus&quot;</span><span class="p">,</span> <span class="k">function</span><span class="p">(</span><span class="n">c</span><span class="p">)</span><br/>  <span class="k">if</span> <span class="n">c</span><span class="p">.</span><span class="n">class</span> <span class="o">==</span> <span class="s2">&quot;</span><span class="s">Iceweasel&quot;</span> <span class="k">then</span><br/>    <span class="kd">local</span> <span class="n">timer_stop</span> <span class="o">=</span> <span class="n">capi</span><span class="p">.</span><span class="n">timer</span> <span class="p">{</span> <span class="n">timeout</span> <span class="o">=</span> <span class="mi">10</span> <span class="p">}</span><br/>    <span class="kd">local</span> <span class="n">send_sigstop</span> <span class="o">=</span> <span class="k">function</span> <span class="p">()</span><br/>      <span class="n">timer_stop</span><span class="p">:</span><span class="n">stop</span><span class="p">()</span><br/>      <span class="k">if</span> <span class="n">client</span><span class="p">.</span><span class="n">focus</span><span class="p">.</span><span class="n">pid</span> <span class="o">~=</span> <span class="n">c</span><span class="p">.</span><span class="n">pid</span> <span class="k">then</span><br/>        <span class="n">awful</span><span class="p">.</span><span class="n">util</span><span class="p">.</span><span class="n">spawn</span><span class="p">(</span><span class="s2">&quot;</span><span class="s">kill -STOP &quot;</span> <span class="o">..</span> <span class="n">c</span><span class="p">.</span><span class="n">pid</span><span class="p">)</span><br/>      <span class="k">end</span><br/>    <span class="k">end</span><br/>    <span class="n">timer_stop</span><span class="p">:</span><span class="n">add_signal</span><span class="p">(</span><span class="s2">&quot;</span><span class="s">timeout&quot;</span><span class="p">,</span> <span class="n">send_sigstop</span><span class="p">)</span><br/>    <span class="n">timer_stop</span><span class="p">:</span><span class="n">start</span><span class="p">()</span><br/>  <span class="k">end</span><br/><span 
class="k">end</span><span class="p">)</span><br/></pre></div>

<p>Since I'm running Debian, the class is "Iceweasel" and not "Firefox". When the
window gains focus, a <code>SIGCONT</code> is sent immediately. I'm executing <code>kill</code>
because I don't know how to send UNIX signals from lua directly.</p>
<p>When the window loses focus, the <code>SIGSTOP</code> signal is only sent after a
10-second timeout. This is done for several reasons:</p>
<ul>
<li>I don't want firefox to stop in cases where I'm just quickly switching back and forth between it and other application windows</li>
<li>When firefox starts, it doesn't have a window for a short time. So without a timeout, the process would start but immediately get stopped as there is no window to have focus.</li>
<li>when using the X paste buffer, the application behind the source window must not be stopped while pasting content from it. I assume that I will not spend more than 10 seconds between marking a string in firefox and pasting it into another window</li>
</ul>
<p>With this change, when I now open <code>htop</code>, the process consuming most CPU
resources is htop itself. Success!</p>
<p>Another cool advantage is that firefox can now be moved completely into swap
space in case I run otherwise memory-hungry applications, without it requiring
any memory from swap until I really use it again.</p>
<p>I haven't encountered any disadvantages of this setup yet. If 10 seconds prove
to be too short to copy and paste I can easily extend this delay. Even clicking
on links in my terminal works flawlessly - the new tab will just load once
firefox gets focused again.</p>
<p>EDIT: thanks to Helmut Grohne for suggesting to compare the pid instead of the
raw client instance to prevent misbehaviour when firefox opens additional
windows like the preferences dialog.</p>]]></content:encoded>
    </item>
    <item>
      <title>bootstrap.debian.net temporarily not updated</title>
      <link>http://blog.mister-muffin.de/2014/07/29/bootstrap.debian.net-temporarily-not-updated</link>
      <pubDate>Tue, 29 Jul 2014 10:37:00 CEST</pubDate>
      <category><![CDATA[debian]]></category>
      <guid>075G4otz2SDdBLla2zHKih1l97E=</guid>
      <description>bootstrap.debian.net temporarily not updated</description>
      <content:encoded><![CDATA[<p>I'll be moving places twice within the next month and as I'm hosting the
machine that generates the data, I'll temporarily suspend the
<a href="http://bootstrap.debian.net">bootstrap.debian.net</a> service until maybe around
September. Until then, bootstrap.debian.net will not be updated and retain the
status as of 2014-07-28. Sorry if that causes any inconvenience. You can write
to me if you need help with manually generating the data bootstrap.debian.net
provided.</p>]]></content:encoded>
    </item>
    <item>
      <title>botch updates</title>
      <link>http://blog.mister-muffin.de/2014/06/05/botch-updates</link>
      <pubDate>Thu, 05 Jun 2014 07:59:00 CEST</pubDate>
      <category><![CDATA[debian]]></category>
      <guid>rGXjyXwaoTtT-kPJx5nPHtm0ObA=</guid>
      <description>botch updates</description>
      <content:encoded><![CDATA[<p>My last update about ongoing development of botch, the bootstrap/build ordering
tool chain, was four months ago and about <a href="/2014/02/06/botch-updates">several incremental updates</a>.
This post will be of similar nature. The most interesting news is probably the
additional data that <a href="http://bootstrap.debian.net">bootstrap.debian.net</a> now provides. This is listed in
the next section. All subsequent sections then list the changes under the hood
that made the additions to bootstrap.debian.net possible.</p>
<h2 id="bootstrapdebiannet">bootstrap.debian.net</h2>
<p>The <a href="http://bootstrap.debian.net">bootstrap.debian.net service</a> used to have botch as a git submodule
but now runs botch from its Debian package. This at least proves that the botch
Debian package is mature enough to do useful stuff with it. In addition to the
bootstrapping results by architecture, bootstrap.debian.net now also hosts the
following additional services:</p>
<ul>
<li><a href="http://bootstrap.debian.net/history.html">History of graph size</a> shows how the dependency graph developed over time for a normal self-contained repository plus for both minimizing strategies, updated every five days</li>
<li><a href="http://bootstrap.debian.net/cross.html">Crossbuild dependency satisfaction</a> gives an overview of reasons why the crossbuild dependency situation cannot yet be analyzed with bug numbers where applicable</li>
<li><a href="http://bootstrap.debian.net/cross_cheated.html">Crossbuild order</a> modifies the metadata of a repository so that the crossbuild dependency situation can be analyzed and then outputs an <a href="http://bootstrap.debian.net/cross_cheated_stats.html">overview page</a> as it is done for every architecture on the main page</li>
<li><a href="http://bootstrap.debian.net/importance_metric.html">Source package importance</a> calculates the <a href="https://lists.debian.org/20131127175834.2752.85430@hoothoot">port metric for source packages</a> on a daily basis</li>
</ul>
<p>Further improvements concern how dependency cycles are now presented in the
html overviews. While before, vertices in a cycle were separated by commas as
if they were simple package lists, vertices are now connected by unicode
arrows. Dashed arrows indicate build dependencies while solid arrows indicate
builds-from relationships. For what it's worth, installation set vertices now
contain their installation set in their <code>title</code> attribute.</p>
<h2 id="debian-package">Debian package</h2>
<p>Botch has long depended on features of an unreleased version of <code>dose3</code> which
in turn depended on an unreleased version of <code>libcudf</code>. Both projects have
recently made new releases so that I was now able to drop the <code>dose3</code> git
submodule and rely on the host system's <code>dose3</code> version instead. This also made
it possible to create a Debian package of botch which currently sits at <a href="https://mentors.debian.net/package/botch">Debian
mentors</a>.  Writing the package also finally made me create a usable
<code>install</code> target in the <code>Makefile</code> as well as adding stubs for the manpages of
the 44 applications that botch currently ships. The actual content of these
manpages still has to be written. The only documentation botch currently ships
in the <code>botch-doc</code> package is an offline version of the <a href="https://gitorious.org/debian-bootstrap/pages/Home">wiki on gitorious</a>.
The new page <a href="https://gitorious.org/debian-bootstrap/pages/ExamplesGraphs">ExamplesGraphs</a> even includes pictures.</p>
<h2 id="cross">Cross</h2>
<p>By default, botch analyzes the native bootstrapping phase. That is, assume that
the initial set of <code>Essential:yes</code> and <code>build-essential</code> packages magically
exists and find out how to bootstrap the rest from there through native
compilation. But part of the bootstrapping problem is also to create the set of
<code>Essential:yes</code> and <code>build-essential</code> packages from nothing via cross
compilation. Botch is unable to analyze the cross phase because too many
packages cannot satisfy their crossbuild dependencies due to multiarch
conflicts. This problem is only about the dependency metadata and not about
whether a given source package actually crosscompiles fine in practice.</p>
<p>Helmut Grohne has done great work with <a href="https://wiki.debian.org/HelmutGrohne/rebootstrap">rebootstrap</a> which is regularly
<a href="https://jenkins.debian.net/view/rebootstrap/">run by jenkins.debian.net</a>. He convinced me that we need an overview of
what packages are blocking the analysis of the cross case and that it was
useful to have a crossbuild order even if that was a fake order just to have a
rough overview of the current situation in Debian Sid.</p>
<p>I wrote a couple of scripts which would run <code>dose-builddebcheck</code> on a
repository, analyze which packages fail to satisfy their crossbuild
dependencies and why, fix those cases by adjusting package metadata accordingly
and repeat until all relevant source packages satisfy their crossbuild
dependencies. The result of this can then be used to identify the packages that
need to be modified as well as to generate a crossbuild order.</p>
<p>The fixes to the metadata are done in an automatic fashion and do not
necessarily reflect the real fix that would solve the problem. Nevertheless, I
ended up agreeing that it is better to have a slightly wrong overview than no
overview at all.</p>
<h2 id="minimizing-the-dependency-graph-size">Minimizing the dependency graph size</h2>
<p>Installation sets in the dependency graph are calculated independent from each
other. If two binary packages provide <code>A</code>, then dependencies on <code>A</code> in
different installation sets might choose different binary packages as providers
of <code>A</code>. The same holds for disjunctive dependencies. If a package depends on <code>A
| C</code> and another package depends on <code>C | A</code> then there is no coordination to
choose <code>C</code> so to minimize the overall amount of vertices in the graph.  I
implemented two methods to minimize the impact of cases where the dependency
solver has multiple options to satisfy a dependency through <code>Provides</code> and
dependency disjunctions.</p>
<p>The first method is inspired by Helmut Grohne. An algorithm goes through all
disjunctive binary dependencies and removes all virtual packages, leaving only
real packages. Of the remaining real packages, the first one is selected. For
build dependencies, the algorithm drops all but the first package in every
disjunction. This is also what sbuild does. Unfortunately this solution
produces an unsatisfiable dependency situation in most cases. This is because
oftentimes it is necessary to select the virtual disjunctive dependency because
of a conflict relationship introduced by another package.</p>
<p>The second method involves <code>aspcud</code>, a cudf solver which can optimize a
solution by a criteria. This solution is based on an idea by Pietro Abate who
implemented the basis for this idea back in 2012. In contrast to a usual cudf
problem, binary packages now also depend on the source packages they build
from. If we now ask <code>aspcud</code> to find an installation set for one of the base
source packages (I chose <code>src:build-essential</code>) then it will return an
installation set that includes source packages. As an optimization criteria the
number of source packages in the installation set is minimized. This solution
would be flawless if there were no conflicts between binary packages. Due to
conflicts, not all binary packages that have to be coinstallable for this
strategy to work can actually be coinstalled. The quick and dirty solution is
to remove all conflicts before passing the cudf universe to <code>aspcud</code>. But this
also means that the solution sometimes does not work in practice.</p>
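<p>The preprocessing of the cudf universe can be sketched roughly like this. The
stanzas are plain dictionaries in the spirit of cudf packages; the
<code>source</code> field and the <code>src-</code> prefix are assumptions made for this
illustration:</p>
<pre><code>def prepare_universe(stanzas):
    for pkg in stanzas:
        src = pkg.get("source")
        if src:
            # let every binary package depend on the source package it
            # builds from, so that the installation set computed by
            # aspcud also contains source packages
            deps = pkg.get("depends", [])
            deps.append(["src-" + src])
            pkg["depends"] = deps
        # quick and dirty: drop all conflicts so that packages which are
        # not coinstallable in reality do not make the problem unsolvable
        pkg.pop("conflicts", None)
    return stanzas
</code></pre>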
<h2 id="test-cases">Test cases</h2>
<p>Botch now finally has a <code>test</code> target in its <code>Makefile</code>. The <code>test</code> target
tests two code paths of the <code>native.sh</code> script and the <code>cross.sh</code> script.
Running these two scripts covers testing most parts of botch. Given that I did
lots of refactoring in the past weeks, the test cases greatly helped to ensure
that I didn't break anything in the process.</p>
<p>I also added <a href="http://dep.debian.net/deps/dep8/">autopkgtests</a> to the Debian packaging which test the same
things as the <code>test</code> target but naturally run the installed version of botch
instead. The autopkgtests were a great help in weeding out some last bugs
which made botch depend on being executed from its source directory.</p>
<h2 id="python-3">Python 3</h2>
<p>Reading the <a href="https://www.debian.org/doc/packaging-manuals/python-policy/ch-python3.html">suggestions in the Debian python policy</a> I evaluated the
possibility to use Python 3 for the Python scripts in botch.  While I was at it
I added transparent decompression for gzip, bz2 and xz based on the file magic,
replaced python-apt with python-debian because of <a href="https://bugs.debian.org/748922">bug#748922</a> and added
<code>argparse</code> argument parsing to all scripts.</p>
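<p>The file-magic based decompression boils down to something like the following
sketch (using the Python 3 <code>lzma</code> module here; the actual helper in botch may
look different):</p>
<pre><code>import bz2
import gzip
import lzma

def open_compressed(path):
    # peek at the first bytes and choose a decompressor based on the
    # file magic instead of trusting the file extension
    with open(path, "rb") as f:
        magic = f.read(6)
    if magic.startswith(b"\x1f\x8b"):
        return gzip.open(path, "rb")       # gzip
    if magic.startswith(b"BZh"):
        return bz2.BZ2File(path, "rb")     # bzip2
    if magic.startswith(b"\xfd7zXZ\x00"):
        return lzma.LZMAFile(path, "rb")   # xz
    return open(path, "rb")                # not compressed
</code></pre>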
<p>Unfortunately I had to find out that Python 3 support does not yet seem to be
possible for botch for the following reasons:</p>
<ul>
<li>no soap module for Python 3 in Debian (needed for bts access)</li>
<li>hash randomization is turned on by default in Python 3 and therefore the graph output of networkx is not deterministic anymore (<a href="https://bugs.debian.org/749710">bug#749710</a>)</li>
</ul>
<p>Thus I settled on changing the code such that it would be compatible with
Python 2 as well as with Python 3. Because of the changed string handling and
<code>sys.stdout</code> properties in Python 3 this <a href="http://stackoverflow.com/questions/23944976/">proved to be tricky</a>. On the other
hand, this exposed bugs in my code where I was wrongly relying on
deterministic dictionary key traversal.</p>]]></content:encoded>
    </item>
    <item>
      <title>mapbender - maps for long-distance travels</title>
      <link>http://blog.mister-muffin.de/2014/04/03/mapbender---maps-for-long-distance-travels</link>
      <pubDate>Thu, 03 Apr 2014 11:20:00 CEST</pubDate>
      <category><![CDATA[openstreetmap]]></category>
      <guid>iPGVNi647hbmwpzOaAWJcDgoUi0=</guid>
      <description>mapbender - maps for long-distance travels</description>
      <content:encoded><![CDATA[<p>Back in 2007 I stumbled over the "Plus Fours Routefinder", an invention of the
1920s. It's worn on the wrist and allows the user to scroll through a map of
the route they planned to take, rolled up on little wooden rollers.</p>
<p><a href="/images/plus_fours_routefinder.jpg">
<img src="/thumbs/plus_fours_routefinder.jpg" />
</a></p>
<p>At that point I thought: that's awesome for long trips where you either don't
want to take electronics with you or where you are without any electricity for
a long time. And creating such rollable custom maps of your route automatically
using openstreetmap data should be a breeze! Nevertheless it seems nobody
picked up the idea.</p>
<p>Years passed and in a few weeks I'll go on a biking trip along the Weser, a
river in northern Germany. For my last multi-day trip (which was through the
Odenwald, an area in southern Germany) I printed a big map from openstreetmap
data which contained the whole route. Openstreetmap data is fantastic for this
because, in contrast to commercial maps, it doesn't only allow you to print
just the area you need but also lets you highlight your planned route and
objects you would probably not find on most commercial maps, for example
supermarkets to stock up on supplies or bicycle repair shops.</p>
<p>Unfortunately such big maps have the disadvantage that to show everything in
the amount of detail that you want along your route, they have to be pretty
huge and thus easily become an inconvenience because the local plotter can't
handle paper as large as DIN A0 or because it's a pain to repeatedly fold and
unfold the whole thing every time you want to look at it. Strong winds are also
no fun with a huge sheet of paper in your hands. One solution would be to print
DIN A4 sized map regions at the desired scale. But that has the disadvantage
that you either find yourself going back and forth between subsequent pages
because you happen to be right at the border between two of them, or you have
to print sufficiently large overlaps, resulting in many duplicate map pieces
and more pages of paper than you would like to carry with you.</p>
<p>It was then that I remembered the "Plus Fours Routefinder" concept. Given a
predefined route it only shows what's important to you: all things close to the
route you plan to travel along. Since it's a long continuous roll of paper
there is no problem with folding because as you travel along the route you
unroll one end and roll up the other. And because it's a long continuous map
there is also no need for flipping pages or large overlap regions. There is not
even the problem of not finding a big enough sheet of paper because multiple
DIN A4 sheets can easily be glued together at their ends to form a long roll.</p>
<p><a href="/images/mapbender1.png">
<img width="50%" style="float:right" src="/thumbs/mapbender1.png" />
</a></p>
<p>On the left you see the route we want to take: the bicycle route along the
Weser river. If I wanted to print that map on a scale that allows me to see
objects in sufficient detail along our route, then I would also see objects in
Hamburg (upper right corner) in the same amount of detail. Clearly a waste of
ink and paper as the route is never even close to Hamburg.</p>
<div style="clear:both"></div>

<p><a href="/images/mapbender2.png">
<img width="50%" style="float:right" src="/thumbs/mapbender2.png" />
</a></p>
<p>As the first step, a smooth approximation of the route has to be found. It
seems that the best way to do that is to calculate a B-Spline curve
approximating the input data with a given smoothness. On the right you can see
the approximated curve with a smoothing value of 6. The curve is sampled into
20 linear segments. I calculated the B-Spline using the FITPACK library to
which scipy offers a Python binding.</p>
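<p>In terms of scipy the fitting step looks roughly like this (the input
coordinates are made up; smoothing value and number of segments as above):</p>
<pre><code>import numpy as np
from scipy.interpolate import splprep, splev

# coordinates of the planned route; real input would come from a GPX
# track or an openstreetmap relation
x = np.array([8.9, 9.0, 9.2, 9.1, 9.3, 9.5, 9.4, 9.6])
y = np.array([51.8, 52.0, 52.1, 52.4, 52.6, 52.9, 53.1, 53.3])

# fit a smoothing B-Spline through the route (FITPACK via scipy)
tck, u = splprep([x, y], s=6)

# sample the spline into 20 linear segments (21 points)
unew = np.linspace(0, 1, 21)
sx, sy = splev(unew, tck)
</code></pre>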
<div style="clear:both"></div>

<p><a href="/images/mapbender3.png">
<img width="50%" style="float:right" src="/thumbs/mapbender3.png" />
</a></p>
<p>The next step is to expand each of the line segments into quadrilaterals. The
distance between the vertices of the quadrilaterals and the ends of the line
segment they belong to is the same along the whole path and obviously has to be
big enough such that every point along the route falls into one quadrilateral.
In this example, I draw only 20 quadrilaterals for visual clarity. In practice
one wants many more for a smoother approximation.</p>
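<p>Computing the corners of those quadrilaterals can be sketched as follows. For
simplicity this uses one normal per segment; to make adjacent quadrilaterals
share their vertices one would average the normals at the joints, and the
half-width used here is an arbitrary value:</p>
<pre><code>import numpy as np

def expand_to_quads(points, halfwidth):
    # points: sequence of (x, y) vertices of the sampled spline
    pts = np.asarray(points, dtype=float)
    # unit direction of each segment
    d = np.diff(pts, axis=0)
    d /= np.linalg.norm(d, axis=1)[:, None]
    # unit normal of each segment (direction rotated by 90 degrees)
    n = np.column_stack([-d[:, 1], d[:, 0]])
    quads = []
    for i in range(len(d)):
        p0, p1 = pts[i], pts[i + 1]
        off = halfwidth * n[i]
        # corners: left and right of the segment start and end
        quads.append((p0 + off, p1 + off, p1 - off, p0 - off))
    return quads

pts = [[0, 0], [1, 0.2], [2, 0.1], [3, 0.5]]
print(expand_to_quads(pts, 0.25)[0])
</code></pre>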
<div style="clear:both"></div>

<p><a href="/images/mapbender5.png">
<img width="12.5%" style="float:right" src="/thumbs/mapbender5.png" />
</a>
<a href="/images/mapbender4.png">
<img width="12.5%" style="float:right" src="/thumbs/mapbender4.png" />
</a></p>
<p>Using a simple transform, each point of the original map and the original path
in each quadrilateral is then mapped to a point inside the corresponding
"straight" rectangle. Each target rectangle has the height of the line segment
it corresponds to. It can be seen that while the large scale curvature of the
path is lost in the result, fine details remain perfectly visible. The
assumption here is that, while travelling a path several hundred kilometers
long, it does not matter that large-scale curvature, which one is not able to
perceive anyway, is not preserved.</p>
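<p>One way to implement such a transform is to walk over the pixels of the
straight target rectangle and to look up the corresponding source pixel by
bilinearly interpolating between the corners of the quadrilateral. A simplified
nearest-neighbour sketch without bounds checking (the actual code may differ):</p>
<pre><code>import numpy as np

def warp_quad(src, quad, width, height):
    # src: image as a numpy array of shape (h, w, channels)
    # quad: corners (bottom-left, bottom-right, top-right, top-left)
    # in source pixel coordinates; returns a (height, width) patch
    bl, br, tr, tl = [np.asarray(c, dtype=float) for c in quad]
    out = np.zeros((height, width, src.shape[2]), dtype=src.dtype)
    for j in range(height):
        v = j / float(height - 1)
        left = bl + v * (tl - bl)    # point on the left edge of the quad
        right = br + v * (tr - br)   # point on the right edge of the quad
        for i in range(width):
            u = i / float(width - 1)
            sx, sy = left + u * (right - left)
            out[j, i] = src[int(round(sy)), int(round(sx))]
    return out
</code></pre>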
<p>The transformation is done on a Mercator projection of the map itself as well
as the data of the path. Therefore, this method probably doesn't work if you
plan to travel to one of the poles.</p>
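<p>For reference, the underlying (spherical) Mercator projection is simply the
following; the factor <code>R</code> only scales the result:</p>
<pre><code>import math

def mercator(lon_deg, lat_deg, R=6378137.0):
    # x grows linearly with longitude, y diverges towards the poles,
    # which is why this approach breaks down near them
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

print(mercator(9.0, 52.0))
</code></pre>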
<p>Currently I transform openstreetmap bitmap data. This is not quite optimal as
it leads to text on the map being distorted. It would be just as easy to apply
the necessary transformations to raw openstreetmap XML data but unfortunately I
didn't find a way to render the resulting transformed map data as a raster image
without setting up a database. I would've thought that it would be possible to
have a standalone program reading openstreetmap XML and dumping out raster or
svg images without a round trip through a database. Furthermore,
<a href="https://www.mapbox.com/tilemill/">tilemill</a>, one of the programs that seems to
be among the least hasslesome to set up for producing raster images, is stuck in
<a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=644767">an ITP</a> and the
existing packaging attempt fails to produce a non-empty binary package. Since I
have no clue about nodejs packaging, <a href="http://lists.alioth.debian.org/pipermail/pkg-javascript-devel/2014-April/007311.html">I wrote about
this</a>
to the pkg-javascript-devel list. Maybe I can find a kind soul to help me with
it.</p>
<div style="clear:both"></div>

<p><a href="/images/mapbender6.png">
<img width="50%" style="float:right" src="/thumbs/mapbender6.png" />
</a></p>
<p>Here is a side-by-side overview that doesn't include the underlying map data
but only the path. It shows how small details are preserved in the transformation.</p>
<p>The code that produced the images in this post is very crude, unoptimized and
kinda messy. If you don't care, then it can be accessed
<a href="https://github.com/josch/mapbender">here</a>.</p>
<div style="clear:both"></div>]]></content:encoded>
    </item>
  </channel>
</rss>
