January 11, 2017

Dirk Eddelbuettel

nanotime 0.1.0: Now on Windows

Last month, we released nanotime, a package to work with nanosecond timestamps. See the initial release announcement for some background material and a few first examples.

nanotime relies on the RcppCCTZ package for high(er) resolution time parsing and formatting: R itself stops a little short of a microsecond. And it uses the bit64 package for the actual arithmetic: time at this granularity is commonly represented as (integer) increments at nanosecond resolution relative to an offset, for which the standard epoch of January 1, 1970 is used. int64 types are a perfect match here, and bit64 gives us an integer64. Naysayers will point out some technical limitations with R's S3 classes, but it works pretty much as needed here.
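The representation is easy to illustrate outside R: a nanotime value is nothing but a signed 64-bit count of nanoseconds since the epoch. A quick sketch using GNU date and bash's (64-bit) shell arithmetic:

```shell
# Seconds since January 1, 1970 for a given civil time (GNU date)
secs=$(date -u -d '2017-01-11 00:00:00' +%s)

# The same instant at nanosecond resolution: still comfortably inside
# the int64 range (max ~9.2e18, which runs out in the year 2262)
ns=$(( secs * 1000000000 ))
echo "$ns"
```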

The one thing we did not have was Windows support. RcppCCTZ and the CCTZ library it uses need real C++11 support, and the g++-4.9 compiler used on Windows falls a little short, lacking inter alia a suitable std::get_time() implementation. Enter Dan Dillon, who ported this from LLVM's libc++, which led to Sunday's RcppCCTZ 0.2.0 release.

And now we have all our ducks in a row: everything works on Windows too. The following summarizes the changes for both this release and the initial one last month:

Changes in version 0.1.0 (2017-01-10)

  • Added Windows support thanks to expanded RcppCCTZ (closes #6)

  • Added "mocked up" demo with nanosecond delay networking analysis

  • Added 'fmt' and 'tz' options to output functions, expanded format.nanotime (closing #2 and #3)

  • Added data.frame support

  • Expanded tests

Changes in version 0.0.1 (2016-12-15)

  • Initial CRAN upload.

  • Package is functional and provides examples.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 January, 2017 12:49AM

January 10, 2017

Bálint Réczey

Debian Developer Game of the Year

I have just finished level one, fixing all RC bugs in packages under my name, even in team-maintained ones. 🙂

The next level is no unclassified bug reports, which is gonna be harder since I have just adopted shadow with 70+ open bugs. :-\

Luckily I can still go on bonus tracks, which is fixing (RC) bugs in others’ packages, but one should not spend all one's time on those tracks before finishing level 1!

PS: Last time I tried playing a conventional game I ended up fixing it in a few minutes instead.

10 January, 2017 10:03PM by Réczey Bálint

Vincent Fourmond

Version 2.1 of QSoas is out

I have just released QSoas version 2.1. It brings in a new solve command to solve arbitrary non-linear equations of one unknown; I took advantage of this command in the figure to solve the equation for . It also provides a new way to reparametrize fits using the reparametrize-fit command; a new series of fits to model the behaviour of an adsorbed 1- or 2-electron catalyst on an electrode (these fits are discussed in great detail in our recent review, DOI: 10.1016/j.coelec.2016.11.002); improvements in various commands; the possibility to compile using Ruby 2.3 and the most recent version of the GSL library; and sketches for an Emacs major mode, which you can activate (for QSoas script files, ending in .cmds) using the following snippet in $HOME/.emacs:

(autoload 'qsoas-mode "$HOME/Prog/QSoas/misc/qsoas-mode.el" nil t)
(add-to-list 'auto-mode-alist '("\\.cmds$" . qsoas-mode))

Of course, you'll have to adapt the path $HOME/Prog/QSoas/misc/qsoas-mode.el to the actual location of qsoas-mode.el.

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too. Enjoy!

10 January, 2017 07:47AM by Vincent Fourmond ([email protected])

January 09, 2017

Sean Whitton

jan17vcspkg

There have been two long threads on the debian-devel mailing list about the representation of the changes to upstream source code made by Debian maintainers. Here are a few notes for my own reference.

I spent a lot of time defending the workflow I described in dgit-maint-merge(7) (which was inspired by this blog post). However, I came to be convinced that there is a case for a manually curated series of patches for certain classes of package. It will depend on how upstream uses git (rebasing or merging) and on whether the Debian delta from upstream is significant and/or long-standing. I still think that we should be using dgit-maint-merge(7) for leaf or near-leaf packages, because it saves so much volunteer time that can be better spent on other things.

When upstream does use a merging workflow, one advantage of the dgit-maint-merge(7) workflow is that Debian’s packaging is just another branch of development.

Now consider packages where we do want a manually curated patch series. It is very hard to represent such a series in git. The only natural way to do it is to continually rebase the patch series against an upstream branch, but public branches that get rebased are not a good idea. The solution that many people have adopted is to represent their patch series as a folder full of .diff files, and then use gbp pq to convert this into a rebasing branch. This branch is not shared. It is edited, rebased, and then converted back to the folder of .diff files, the changes to which are then committed to git.
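The gbp pq round-trip described above looks roughly like this (a sketch; it assumes the conventional debian/patches layout on the packaging branch):

```shell
# Build a throwaway local patch-queue branch from debian/patches
gbp pq import

# Edit, reorder or drop commits on the patch-queue branch as needed;
# when a new upstream version lands, rebase the queue onto it:
gbp pq rebase

# Write the branch back out as files in debian/patches ...
gbp pq export

# ... and commit the regenerated series to the packaging branch
git add debian/patches
git commit -m 'Refresh patch queue'
```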

One of the advantages of dgit is that there now exists an official, non-rebasing git history of uploads to the archive. It would be nice if we could represent curated patch series as branches in the dgit repos, rather than as folders full of .diff files. But as I just described, this is very hard. However, Ian Jackson has the beginnings of a workflow that just might fit the bill.

09 January, 2017 08:14PM

Shirish Agarwal

The Great Indian Digital Tamasha

Indian Railways

This is an extension to last month’s article/sharing, where I had shared the changes that had transpired over the last 2-3 months. Now I am in a position to share the kind of issues a user can go through in case he is looking for support from IRCTC to help him/her go cashless. If you are a new user of IRCTC services you wouldn’t go through this trouble.

For those who might have TL;DR issues, it’s about how hard it can become to get digital credentials fixed with IRCTC (Indian Railway Catering and Tourism Corporation) –

a. Two months back, the Indian Prime Minister gave a call incentivizing people to use digital means for any commercial activity. One of the big organizations taking part is IRCTC, which handles the responsibility of e-ticketing millions of rail tickets for common people. In India, a massive percentage of people move by train, as it’s cheaper than going by air.

A typical fare from, say, Pune – Delhi (the capital of India) by second class sleeper would be INR 645/- for a distance of roughly 1600-odd kms. These are monopoly rates – there are no private trains, and I’m not suggesting anything of that sort, just making sure that people know.

An economy class air ticket for the same distance would be anywhere between INR 2500-3500/- for a 2-hour flight, depending on the airline. Last I checked there are around 8 mainstream airlines, including flag-carrier Air India.

About 30% of the population lives on less than a dollar and a half a day, which comes to around INR 100/-.

There was a comment some six months back about getting more people out of poverty. But as there is a lot of manipulation of the numbers over who counts as above or below the poverty line in India, and a lot of it has to do with politics, it’s not something which would be easily fixable.

There is a lot to be said in that arena, but this is not the appropriate blog post for that.

All in all, it’s only 3-5% of the population at most who can travel by air if the situation demands it, and around 1-2% who might be frequent business or leisure travellers.

Now while I can thankfully afford an air ticket if the situation so demands, my mother gets motion sickness, so together we can only travel by train.

b. With the above background: I had registered with IRCTC a few years ago with another number (dual-SIM) I had purchased, thinking that I would be using it long-term (seems to be my first big mistake, hindsight 50:50). This was somewhere in 2006/2007.

c. A few months later I found that the other service provider wasn’t giving good service or was not up to the mark. I was using IDEA (my main mobile operator) throughout those times.

d. As I didn’t need the service that much, I didn’t think to inform them that I wanted to change to another service provider at that point in time (possibly the biggest mistake, hindsight 50:50).

e. In July 2016 itself, IRCTC cut service fees.

f. This was shared as a NEW news item/policy decision at November-end 2016.

g. While I have done all that has been asked by irctc-care, I still haven’t got the issues resolved 😦 IRCTC’s e-mail id – [email protected]

Now in detail –

This is my first e-mail sent to IRCTC in June 2016 –

Dear Customer care,

I had applied and got username and password sometime back . The
number I had used to register with IRCTC was xxxxxxxxxx (BSNL mobile number not used anymore) . My mobile was lost and along with that the number was also lost. I had filed a complaint with the police and stopped that number as well. Now I have an another mobile number but have forgotten both the password and the security answer that I had given when I had registered . I do have all the conversations I had both with the [email protected] as well as [email protected] if needed to prove my identity.

The new number I want to tie it with is xxxxxxxxxx (IDEA number in-use for last 10 years)

I see two options :-

a. Tie the other number with my e-mail address

b. Take out the e-mail address from the database so that I can fill in
as a new applicant.

Looking forward to hear from you.

There was a lot of back and forth with various individuals at IRCTC, and this is the final e-mail I got from them, somewhere in August 2016. He writes –

Dear Customer,

We request you to send mobile bill of your mobile number if it is post paid or if it is prepaid then contact to your service provider and they will give you valid proof of your mobile number or they will give you in written on company head letter so that we may update your mobile number to update so that you may reset your password through mobile OTP.
and Kindly inform you that you can update your profile by yourself also.

1.login on IRCTC website
2.after login successfully move courser on “my profile” tab.
3.then click on “update profile”
4.re-enter your password then you can update your profile
5.click on user-profile then email id.
6. click on update.

Still you face any problem related to update profile please revert to us with the screen shots of error message which you will get at the time of update profile .

Thanks & Regards

Parivesh Patel
Executive, Customer Care
[email protected]
http://www.irctc.co.in
[#3730034]

IRCTC’s response seemed responsible and valid, and I thought it would be a cake-walk, as private providers are supposed to be much more efficient than public ones. The experience proved how wrong I was to trust them to do the right thing –

1. First I tried the twitter route, to see how IDEA uses their twitter handle.

2. The IDEA customer care twitter handle was mild in its response.

3. After some time I realized that the only way out of this quagmire would perhaps be to go to a brick-and-mortar shop and get it resolved face-to-face. I went twice or thrice, but each time something or the other would happen.

On the fourth and final time, I was able to get to the big ‘Official’ shop, only to be told they couldn’t do anything about this and that I would have to write to the appellate body to get a reply.

The e-mail address which they shared was wrong (as I found out later). I sent a somewhat longish e-mail sharing all the details and got bounce-backs. The correct e-mail address for the IDEA Maharashtra appellate body is – [email protected]

I searched online and after a bit of hit and miss finally got the relevant address. Then, on 30th December 2016, I wrote a short email to the service provider as follows –

Dear Sir,
I have been using prepaid mobile connection –

number – xxxxxxx

taken from IDEA for last 10 odd years.

I want to register myself with IRCTC for online railway booking using
my IDEA mobile number.

Earlier, I was having a BSNL connection which I discontinued 4 years back,

For re-registering myself with IRCTC, I have to fulfill their latest
requirements as shown in the email below .

It is requested that I please be issued a letter confirming my
credentials with your esteemed firm.

I contacted your local office at corner of Law College Road and
Bhandarkar Road, Pune (reference number – Q1 – 84786060793) who
refused to provide me any letter and have advised me to contact on the
above e-mail address, hence this request is being forwarded to you.

Please do the needful at your earliest.

Few days later I got this short e-mail from them –

Dear Customer,

Greetings for the day!

This is with reference to your email regarding services.

Please accept our apologies for the inconvenience caused to you and delay in response.

We regret to inform you that we are unable to provide demographic details from our end as provision for same is not available with us.

Should you need any further assistance, please call our Customer Service help line number 9822012345 or email us at [email protected] by mentioning ten digit Idea mobile number in subject line.

Thanks & Regards,

Javed Khan

Customer Service Team

IDEA Cellular Limited- Maharashtra & Goa Circle.

Now I was almost at my wit’s end. A few days before, I had re-affirmed my e-mail address with IDEA. I went to the IDEA care site and registered with my credentials. While the https connection to the page is weak, let’s not dwell on that atm.

I logged into the site, went through all the drop-down menus and came across a My Account > Raise a request link, which I clicked on. This led to a page where I could raise requests for various things. One of the options given there was Bill Delivery. As I was a prepaid rather than a postpaid user, I didn’t know if that would work, but I still clicked on it. It said it would take 4 days. I absently filed it away, somewhat sure that nothing would happen, going by my previous experience with IDEA. But this time the IDEA support staff came through and shared a toll-free SMS number and message format that I could use to generate call details for the last 6 months.

The toll-free number from IDEA is 12345 and the message format is EBILL MON (short-form for month so if it’s January would be jan, so on and so forth).

After gathering all the required credentials, I sent my last mail to IRCTC about a week to 10 days back –

Dear Mr. Parivesh Patel,

I was out-of-town and couldn’t do the needful so sorry for the delay.
Now that I’m back in town, I have been able to put together my prepaid
bills of last 6 months which should make it easy to establish my
identity.

As had shared before, I don’t remember my old password and the old
mobile number (BSNL number) is no longer accessible so can’t go
through that route.

Please let me know the next steps in correcting the existing IRCTC
account (which I haven’t operated ever) so I can start using it to
book my tickets.

Look forward to hearing from you.

I haven’t heard anything from them since, apart from the generated token number that comes each time you send a reply. This time it was #4763548

The whole sequence of events throws up a lot of troubling questions –

a. Could IRCTC have done a better job of articulating their needs to me, instead of the run-around I was given?

b. Shouldn’t there be a time limit on accounts from which no transactions have been done? I hadn’t done a single transaction since registering. When cell service providers, including BSNL, take a number out of service after a year of non-use, why is that account active for so long?

c. As that account didn’t have OTP at registration, dunno if it’s being used for illegal activities or something.

Update – This doesn’t seem to be a unique thing at all. Just sampling some of the tweets by people at @IRCTC_LTD –

  • https://twitter.com/praveen4al/status/775614978258718721
  • https://twitter.com/vis_nov25/status/786062572390932480
  • https://twitter.com/ShubhamDevadiya/status/794241443950948352
  • https://twitter.com/rajeshhindustan/status/798028633759584256
  • https://twitter.com/ameetsangita/status/810081624343908352
  • https://twitter.com/grkisback/status/813733835213078528
  • https://twitter.com/gbalaji_/status/804230235625394177
  • https://twitter.com/chandhu_nr/status/800675627384721409

All of this just goes to show how un-unique the situation really is.


Filed under: Miscellenous Tagged: #customer-service, #demonetization, #IDEA-aditya birla, #IRCTC, #web-services, rant

09 January, 2017 01:38PM by shirishag75

Petter Reinholdtsen

Where did that package go? — geolocated IP traceroute

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (193.156.90.3)  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (195.0.242.39)  0.662 ms  0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23)  0.622 ms
 7  89.191.10.146 (89.191.10.146)  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *
[...]

This shows the DNS names and IP addresses of (at least some of) the network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where two different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (which can do ICMP, UDP and TCP) and scapy (a python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.
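For reference, typical invocations of the three (the scapy one runs in its interactive shell and usually needs root):

```shell
traceroute -I www.stortinget.no        # ICMP echo probes instead of UDP
mtr --report --tcp www.stortinget.no   # TCP SYN probes, non-interactive report
sudo scapy                             # then: traceroute(["www.stortinget.no"])
```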

This time around, I wanted to know the geographic location of the different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today will often ask the browser to fetch from many servers the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content. This will leak information about the visit to those controlling these servers and to anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.
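A rough sketch of that extraction; netsniff.js ships among PhantomJS' examples, and the grep/cut pipeline is a crude stand-in for a proper JSON parser of the HAR file:

```shell
# Capture everything the page loads, in HAR format
phantomjs examples/netsniff.js http://www.stortinget.no > stortinget.har

# Pull out the list of contacted URLs from the HAR entries
grep -o '"url": *"[^"]*"' stortinget.har | cut -d'"' -f4 | sort -u
```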

map of combined traces for URLs used by www.stortinget.no using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.
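The core of such a wrapper is small. A hypothetical sketch (the placemark helper below is mine, not from kmltraceroute; a real version would feed it coordinates from a GeoIP lookup, for example geoiplookup from Debian's geoip-bin):

```shell
# Emit one KML placemark per hop; note KML wants longitude before latitude
emit_placemark() {   # usage: emit_placemark NAME LAT LON
    printf '<Placemark><name>%s</name><Point><coordinates>%s,%s</coordinates></Point></Placemark>\n' \
        "$1" "$3" "$2"
}

emit_placemark uio-gw10.uio.no 59.94 10.72
```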

scapy traceroute graph for URLs used by www.stortinget.no

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. The graph makes it possible to see that the information is made available at least to UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for www.stortinget.no

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names and various location databases, and finally using latency times to rule out candidate locations, seemed to do a very good job of guessing correct geolocations. But it could only do one trace at a time, did not have a sensor in Norway, and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he has had time to clean it up), but he was interested in providing the geolocations in a machine-readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service, thanks to a sensor node set up by the NUUG association, and to get the trace in KML format for further processing.

map of combined traces for URLs used by www.stortinget.no using geotraceroute

Here we can see that a lot of traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where the Snowden documents confirmed the traffic is read by various actors without your best interest as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask GeoTraceroute for the KML file of each, and create a combined KML file with all the traces (unfortunately only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces from GeoTraceroute using IP numbers instead of DNS names). That might be the next step in this project.
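The merge itself can even be scripted. A sketch, assuming each per-host trace was saved as a .kml file with its placemarks on their own lines inside a single <Document>:

```shell
{
    printf '<?xml version="1.0" encoding="UTF-8"?>\n'
    printf '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
    # Splice in every trace's placemarks, dropping each file's own wrapper
    sed -n '/<Placemark>/,/<\/Placemark>/p' *.kml
    printf '</Document></kml>\n'
} > combined.kml
```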

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure Swedish Signal Intelligence (FRA) is listening, as GCHQ does in Britain and the NSA in the USA and at cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

09 January, 2017 11:20AM

Guido Günther

Debian Fun in December 2016

Debian LTS

November marked the 20th month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated which I used by:

  • some rather quiet frontdesk days
  • updating icedove to 45.5.1 resulting in DLA-752-1 fixing 7 CVEs
  • looking whether Wheezy is affected by xsa-202, xsa-203, xsa-204 and handling the communication with credativ for these (update not yet released)
  • Assessing cURL/libcURL CVE-2016-9586
  • Assessing whether Wheezy's QEMU is affected by security issues in 9pfs "proxy" and "handle" code
  • Releasing DLA-776-1 for samba fixing CVE-2016-2125

Other Debian stuff

Some other Free Software activities

09 January, 2017 08:24AM

Riku Voipio

20 years of being a debian maintainer


fte (0.44-1) unstable; urgency=low

* initial Release.

-- Riku Voipio Wed, 25 Dec 1996 20:41:34 +0200
Welp, I seem to have spent the holidays of 1996 doing my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January before I could actually upload the created package.

uid Riku Voipio
sig 89A7BF01 1996-12-15 Riku Voipio
sig 4CBA92D1 1997-02-24 Lars Wirzenius
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later, an alternative process of phone-calling prospective DDs would be added.

09 January, 2017 08:01AM by Riku Voipio ([email protected])

Dirk Eddelbuettel

RcppCCTZ 0.2.0

A new version, now at 0.2.0, of RcppCCTZ is now on CRAN. And it brings a significant change: Windows builds! Thanks to Dan Dillon, who dug deep enough into the libc++ sources from LLVM to port the std::get_time() function that is missing from the 4.* series of g++. And with Rtools being fixed at g++-4.9.3, this was missing for us here. Now we can parse dates for use by RcppCCTZ on Windows as well. That is important not only for RcppCCTZ but particularly for the one package (so far) depending on it: nanotime.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone database which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo -- and for which R on Windows ships its own copy we can use. RcppCCTZ connects this library to R by relying on Rcpp.
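The absolute/civil distinction is the same one GNU date can demonstrate against that zoneinfo database: one absolute instant, several civil renderings depending on the zone rules:

```shell
t=1483920000                       # one absolute time, as Unix seconds

TZ=UTC              date -d @$t    # civil time in UTC
TZ=Europe/Oslo      date -d @$t    # same instant, one hour ahead on the wall
TZ=America/New_York date -d @$t    # same instant, five hours behind UTC
```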

The RcppCCTZ page has a few usage examples, as does the post announcing the previous release.

The changes in this version are summarized here:

Changes in version 0.2.0 (2017-01-08)

  • Windows compilation was enabled by defining OFFSET() and ABBR() for MinGW (#10 partially addressing #9)

  • Windows use completed with backport of std::get_time from LLVM's libc++ to enable strptime semantics (Dan Dillon in #11 completing #9)

  • Timezone information on Windows is supplied via R's own copy of zoneinfo with TZDIR set (also #10)

  • The interface to formatDouble was cleaned up

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

09 January, 2017 12:49AM

January 08, 2017

Bits from Debian

New Debian Developers and Maintainers (November and December 2016)

The following contributors got their Debian Developer accounts in the last two months:

  • Karen M Sandler (karen)
  • Sebastien Badia (sbadia)
  • Christos Trochalakis (ctrochalakis)
  • Adrian Bunk (bunk)
  • Michael Lustfield (mtecknology)
  • James Clarke (jrtc27)
  • Sean Whitton (spwhitton)
  • Jerome Georges Benoit (calculus)
  • Daniel Lange (dlange)
  • Christoph Biedl (cbiedl)
  • Gustavo Panizzo (gefa)
  • Gert Wollny (gewo)
  • Benjamin Barenblat (bbaren)
  • Giovani Augusto Ferreira (giovani)
  • Mechtilde Stehmann (mechtilde)
  • Christopher Stuart Hoskin (mans0954)

The following contributors were added as Debian Maintainers in the last two months:

  • Dmitry Bogatov
  • Dominik George
  • Gordon Ball
  • Sruthi Chandran
  • Michael Shuler
  • Filip Pytloun
  • Mario Anthony Limonciello
  • Julien Puydt
  • Nicholas D Steeves
  • Raoul Snyman

Congratulations!

08 January, 2017 11:30PM by Jean-Pierre Giraud

Steve Kemp

Patching scp and other updates.

I use openssh every day, be it the ssh command for connecting to remote hosts, or the scp command for uploading/downloading files.

Once a day, or more, I forget that scp uses the non-obvious -P flag for specifying the port, not the -p flag that ssh uses.
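Side by side, the inconsistency in question (in scp, lowercase -p is already taken, meaning "preserve modification times"):

```shell
ssh -p 2222 example.com             # ssh: lowercase -p selects the port
scp -P 2222 file example.com:dir/   # scp: uppercase -P, as -p means "preserve times"
```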

Enough is enough. I shall not file a bug report against the Debian openssh-client package, because no doubt compatibility with both upstream and other distributions is important. But damnit I've had enough.

apt-get source openssh-client shows the appropriate code:

    fflag = tflag = 0;
    while ((ch = getopt(argc, argv, "dfl:prtvBCc:i:P:q12346S:o:F:")) != -1)
          switch (ch) {
          ..
          ..
            case 'P':
                    addargs(&remote_remote_args, "-p");
                    addargs(&remote_remote_args, "%s", optarg);
                    addargs(&args, "-p");
                    addargs(&args, "%s", optarg);
                    break;
          ..
          ..
            case 'p':
                    pflag = 1;
                    break;
          ..
          ..
          ..

Swapping those two flags around, and updating the format string appropriately, was sufficient to do the necessary.

In other news I've done some hardware development, using both Arduino boards and the WeMos D1-mini. I'm still at the stage where I'm flashing lights, and doing similarly trivial things.

I have more complex projects planned for the future, but these are on-hold until the appropriate parts are delivered:

  • MP3 playback.
  • Bluetooth-speakers.
  • Washing machine alarm.
  • LCD clock, with time set by NTP, and relay control.

Even with a few LEDs though I've had fun, for example writing a trivial binary display.
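A binary display really is that small: one bit mask per LED. A sketch in shell rather than Arduino C, printing eight imaginary LEDs as 0/1:

```shell
value=42
for bit in 7 6 5 4 3 2 1 0; do             # MSB first
    printf '%d' $(( (value >> bit) & 1 ))  # 1 = LED on
done
printf '\n'                                # 42 -> 00101010
```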

08 January, 2017 02:45PM

Steinar H. Gunderson

SpeedHQ decoder

I reverse-engineered a video codec. (And then the CTO of the company making it became really enthusiastic, and offered help. Life is strange sometimes.)

I would have talked about this and some related stuff at FOSDEM, but there's a scheduling conflict, so I will be in Ås that weekend, not Brussels.

08 January, 2017 12:06PM

Jonas Meurer

debian lts report 2016.12

Debian LTS report for December 2016

December 2016 was my fourth month as a Debian LTS team member. I was allocated 12 hours. Unfortunately I turned out to have far less time for Debian and LTS work than expected, so I only spent 5.25 of those hours on the following tasks:

  • DLA 732-1: backported CSRF protection to monit 1:5.4-2+deb7u1
  • DLA 732-2: fix a regression introduced in last monit security update
  • DLA 732-3: fix another regression introduced in monit security update
  • Nagios3: port 3.4.1-3+deb7u2 and 3.4.1-3+deb7u3 updates to wheezy-backports
  • DLA-760-1: fix two reflected XSS vulnerabilities in spip

08 January, 2017 10:18AM


hackergotchi for Keith Packard

Keith Packard

embedded-arm-libc

Finding a Libc for tiny embedded ARM systems

You'd think this problem would have been solved a long time ago. All I wanted was a C library to use in small embedded systems -- those with a few kB of flash and even fewer kB of RAM.

Small system requirements

A small embedded system has a different balance of needs:

  • Stack space is limited. Each thread needs a separate stack, and it's pretty hard to move them around. I'd like to be able to reliably run with less than 512 bytes of stack.

  • Dynamic memory allocation should be optional. I don't like using malloc on a small device because failure is likely and usually hard to recover from. Just make the linker tell me if the program is going to fit or not.

  • Stdio doesn't have to be awesomely fast. Most of our devices communicate over full-speed USB, which maxes out at about 1MB/sec. A stdio setup designed to write to the page cache at memory speeds is over-designed, and likely involves lots of buffering and fancy code.

  • Everything else should be fast. A small CPU may run at only 20-100MHz, so it's reasonable to ask for optimized code. They also have very fast RAM, so cycle counts through the library matter.

Available small C libraries

I've looked at:

  • μClibc. This targets embedded Linux systems, and also appears dead at this time.

  • musl libc. A more lively project; still, definitely targets systems with a real Linux kernel.

  • dietlibc. Hasn't seen any activity for the last three years, and it isn't really targeting tiny machines.

  • newlib. This seems like the 'normal' embedded C library, but it expects a fairly complete "kernel" API and the stdio bits use malloc.

  • avr-libc. This has lots of Atmel assembly language, but is otherwise ideal for tiny systems.

  • pdclib. This one focuses on small source size and portability.

Current AltOS C library

We've been using pdclib for a couple of years. It was easy to get running, but it really doesn't match what we need. In particular, it uses a lot of stack space in the stdio implementation as there's an additional layer of abstraction that isn't necessary. In addition, pdclib doesn't include a math library, so I've had to 'borrow' code from other places where necessary. I've wanted to switch for a while, but there didn't seem to be a great alternative.

What's wrong with newlib?

The "obvious" embedded C library is newlib. Designed for embedded systems with a nice way to avoid needing a 'real' kernel underneath, newlib has a lot going for it. Most of the functions have a good balance between speed and size, and many of them even offer two implementations depending on what trade-off you need. Plus, the build system 'just works' on multi-lib targets like the family of cortex-m parts.

The big problem with newlib is the stdio code. It absolutely requires dynamic memory allocation and the amount of code necessary for 'printf' is larger than the flash space on many of our devices. I was able to get a cortex-m3 application compiled in 41kB of code, and that used a smattering of string/memory functions and printf.

How about avr libc?

The Atmel world has it pretty good -- avr-libc is small and highly optimized for Atmel's 8-bit AVR processors. I've used this library with success in a number of projects, although nothing we've ever sold through Altus Metrum.

In particular, the stdio implementation is quite nice -- a 'FILE' is effectively a struct containing pointers to putc/getc functions. The library does no buffering at all. And it's tiny -- the printf code lacks a lot of the fancy new stuff, which saves a pile of space.
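That design can be sketched in a few lines of plain C. Everything here (names, fields, the example sink) is illustrative rather than the actual avr-libc definitions; the point is that a stream is just a pair of function pointers, and output funnels each character straight to the device with no buffering:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for avr-libc's FILE: no buffer, just hooks. */
struct tiny_file {
    int (*put)(char c, struct tiny_file *f);  /* write one character */
    int (*get)(struct tiny_file *f);          /* read one character  */
    void *udata;                              /* per-device state    */
};

/* Unbuffered string output: every character goes straight through
 * the device's put hook. */
static int tiny_puts(struct tiny_file *f, const char *s)
{
    int n = 0;

    while (*s) {
        if (f->put(*s++, f) < 0)
            return -1;
        n++;
    }
    return n;
}

/* Example "device": append characters into a RAM buffer instead of
 * poking a UART. */
static char sink_buf[32];
static int  sink_len;

static int sink_put(char c, struct tiny_file *f)
{
    (void)f;
    if (sink_len >= (int)sizeof(sink_buf) - 1)
        return -1;
    sink_buf[sink_len++] = c;
    sink_buf[sink_len] = '\0';
    return 0;
}
```

A device driver only has to supply the two hooks; there is nothing to allocate and nothing to flush.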

However, many of the places where performance is critical are written in assembly language, making it pretty darn hard to port to another processor.

Mixing code together for fun and profit!

Today, I decided to try an experiment to see what would happen if I used the avr-libc stdio bits within the newlib environment. There were only three functions written in assembly language: two of them were just stubs, while the third was a simple ultoa function with a weird interface. With those coded up in C, I managed to get them wedged into newlib.
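For reference, the textbook shape of ultoa, converting an unsigned long to a string in an arbitrary base, looks like the sketch below. This is a generic version for illustration; it does not reproduce the quirky interface of the avr-libc-internal one:

```c
#include <assert.h>
#include <string.h>

/* Generic unsigned-long-to-ASCII conversion: digits are produced
 * least-significant first into a scratch buffer, then reversed into
 * the caller's buffer. */
static char *ultoa_sketch(unsigned long value, char *buf, unsigned base)
{
    static const char digits[] = "0123456789abcdefghijklmnopqrstuvwxyz";
    char tmp[sizeof(unsigned long) * 8 + 1];
    int i = 0, j = 0;

    do {
        tmp[i++] = digits[value % base];
        value /= base;
    } while (value != 0);
    while (i > 0)
        buf[j++] = tmp[--i];
    buf[j] = '\0';
    return buf;
}
```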

Figuring out the newlib build system was the only real challenge; it's pretty awful having generated files in the repository and a mix of autoconf 2.64 and 2.68 version dependencies.

The result is pretty usable though; my STM32L Discovery board demo application needs only 14kB of flash, while the original newlib stdio bits needed 42kB, and that was still missing all of the 'syscalls', like read, write and sbrk.

Here's gitweb pointing at the top of the tiny-stdio tree:

gitweb

And, of course you can check out the whole thing

git clone git://keithp.com/git/newlib

'master' remains a plain upstream tree, although I do have a fix on that branch. The new code is all on the tiny-stdio branch.

I'll post a note on the newlib mailing list once I've managed to subscribe and see if there is interest in making this option available in the upstream newlib releases. If so, I'll see what might make sense for the Debian libnewlib-arm-none-eabi packages.

08 January, 2017 07:32AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp now used by 900 CRAN packages

[Graph: growth of Rcpp usage on CRAN over time]

Today, Rcpp passed another milestone: 900 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time.

The easiest way to compute this is to use the reverse_dependencies_with_maintainers() function from a helper scripts file on CRAN. This still yields one or two false positives: packages declaring a dependency without actually containing C++ code, and the like. There is also a helper function revdep() in the devtools package, but it includes Suggests:, which does not firmly imply usage and hence inflates the count. I have always opted for the tighter count, with corrections.

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July and 800 packages last October. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of last year, seven percent just before Christmas, eight percent this summer, and nine percent in mid-December.

900 user packages is a really large number. This puts more than some responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

At the rate things are going, the big 1000 may be hit some time in April.

And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 January, 2017 03:42AM

January 07, 2017

hackergotchi for Lars Wirzenius

Lars Wirzenius

Hacker Noir, chapter 1: Negotiation

I participated in Nanowrimo in November, but I failed to actually finish the required 50,000 words during the month. Oh well. I plan on finishing the book eventually, anyway.

Furthermore, as an open source exhibitionist I thought I'd publish a chapter each month. This will put a bit of pressure on me to keep writing, and hopefully I'll get some nice feedback too.

The working title is "Hacker Noir". I've put the first chapter up on http://noir.liw.fi/.

07 January, 2017 08:53PM

hackergotchi for Simon Richter

Simon Richter

Crossgrading Debian in 2017

So, once again I had a box that had been installed with the kind-of-wrong Debian architecture, in this case, powerpc (32 bit, bigendian), while I wanted ppc64 (64 bit, bigendian). So, crossgrade time.

If you want to follow this, be aware that I use sysvinit. I doubt this can be done this way with systemd installed, because systemd has a lot more dependencies for PID 1, and there is also a dbus daemon involved that cannot be upgraded without a reboot.

To make this a bit more complicated, ppc64 is an unofficial port, so it is even less synchronized across architectures than sid normally is (I would have used jessie, but there is no jessie for ppc64).

Step 1: Be Prepared

To work around the archive synchronisation issues, I installed pbuilder and created 32 and 64 bit base.tgz archives:

pbuilder --create --basetgz /var/cache/pbuilder/powerpc.tgz
pbuilder --create --basetgz /var/cache/pbuilder/ppc64.tgz \
    --architecture ppc64 \
    --mirror http://ftp.ports.debian.org/debian-ports \
    --debootstrapopts --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
    --debootstrapopts --include=debian-ports-archive-keyring

Step 2: Gradually Heat the Water so the Frog Doesn't Notice

Then, I added the sources to sources.list, and added the architecture to dpkg:

deb [arch=powerpc] http://ftp.debian.org/debian sid main
deb [arch=ppc64] http://ftp.ports.debian.org/debian-ports sid main
deb-src http://ftp.debian.org/debian sid main

dpkg --add-architecture ppc64
apt update

Step 3: Time to Go Wild

apt install dpkg:ppc64

Obviously, that didn't work: in my case, libattr1 and libacl1 weren't in sync, so there was no valid way to install the powerpc and ppc64 versions in parallel. I used pbuilder to compile the current version from sid for whichever architecture wasn't up to date (IIRC, one for powerpc and one for ppc64).

Manually installed the libraries, then tried again:

apt install dpkg:ppc64

Woo, it actually wants to do that. Now, that only half works, because apt calls dpkg twice: once to remove the old version, and once to install the new one. Your options at this point are

apt-get download dpkg:ppc64
dpkg -i dpkg_*_ppc64.deb

or if you didn't think far enough ahead, cursing followed by

cd /tmp
ar x /var/cache/apt/archives/dpkg_*_ppc64.deb
cd /
tar -xJf /tmp/data.tar.xz
dpkg -i /var/cache/apt/archives/dpkg_*_ppc64.deb

Step 4: Automate That

Now, I wanted to make this a bit more convenient, so I repeated the same dance with apt and aptitude and their dependencies. Thanks to pbuilder, this wasn't too bad.

With the aptitude resolver, it was then simple to upgrade a test package

aptitude install coreutils:ppc64 coreutils:powerpc-

The resolver did its thing, and asked whether I really wanted to remove an Essential package. I did, and it replaced the package just fine.

So I asked dpkg for a list of all installed powerpc packages (since it's now a ppc64 dpkg, it reports powerpc as foreign), massaged that into shape with grep and sed, and gave the result to aptitude as a command line.

Some time later, aptitude finished, and I had a shiny 64 bit system: crossgraded through an ssh session that remained open the whole time, and without a reboot. After closing the ssh session, the last 32 bit binary was deleted, as it was no longer in use.

There were a few minor hiccups during the process where dpkg refused to overwrite "shared" files with different versions, but these could be solved easily by manually installing the offending package with

dpkg --force-overwrite -i ...

and then resuming what aptitude was doing, using

aptitude install

So, in summary, this still works fairly well.

07 January, 2017 08:45PM

Thorsten Alteholz

My Debian Activities in December 2016

FTP assistant

This month I marked 367 packages for accept and rejected 45 packages. This time I only sent 10 emails to maintainers asking questions.

Debian LTS

This was my thirtieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 13.50h. During that time I did uploads of

  • [DLA 739-1] jasper security update for nine CVEs
  • [DLA 749-1] php5 security update for 14 CVEs
  • [DLA 771-1] hdf5 security update for four CVEs

Other stuff

The Debian Med Advent Calendar was really successful this year. As announced in [1], this year saw the second-highest number of bugs closed during the bug squashing:

year  bugs closed
2011       63
2012       28
2013       73
2014        5
2015      150
2016       95

Well done everybody who participated!

In December I also uploaded new upstream versions of duktape, fixed bugs in openzwave, did a binary upload of mpb on mipsel, and sponsored openzwave-controlpanel, sidedoor and printrun. Thanks to lamby, openzwave-controlpanel and sidedoor even made it into Stretch.

Last but not least I want to wish everybody a Happy New Year.

[1] https://lists.debian.org/debian-med/2016/12/msg00180.html

07 January, 2017 07:00PM by alteholz

Enrico Zini

Teamwork

When I saw this video or this video I thought of this article.

When I feel part of a tightly coordinated and synchronized team I feel proud for the achievements of the team as a whole, which I see as bigger than what I could have achieved alone.

I also don't feel at risk of taking bad decisions. I feel less responsible. If I do what I'm told, I can't be blamed for doing the wrong things. I find it relaxing, every once in a while, to not have to be in charge.

I guess this could be part of the allure of a totalitarian regime: being freed from the burden of growing up.

Thinking about this, and reading those articles about romantic relationships, I see quite a few parallels with organising cooperation and teamwork.

It looks like I ended up making parallels between Polyamory, Anarchism, and Free Software again. If you think there should traditionally be also a mention of BDSM, go back to "I find it relaxing, every once in a while, to not have to be in charge".

07 January, 2017 01:38PM

January 06, 2017

Vincent Fourmond

RFH: screen that hurts my eyes

Short summary:

My eyes hurt when I use my home desktop computer, but only with this computer. This has been very long and frustrating for me, so if you think you can help, read the whole story just below, or skip to what I've tried and what I suspect might be the problem, and post comments below, or send me a mail (my address should be quite obvious on this page).

Whole story:

Two years ago, I bought myself a new fancy motherboard (an Asus B85M-G C2) with a new fancy Intel-based processor with built-in graphics (an Intel Core i7-4770) and memory to go with it. I installed it in place of my old AMD-based motherboard, keeping everything else (my hoard of hard drives and such) except the graphics card, which was no longer needed. I immediately noticed my eyes were aching when using the computer. I was quite surprised, since I had been using the screen very heavily for almost 10 years before that without any problems. I attributed that to the Intel graphics, so I tried putting back the old graphics card, but it did not help. The situation was very frustrating, since working on the computer for an hour or so made my eyes hurt for several days. The problem was specific to this computer: I could keep on using my computer at work and my laptop without problems.

I could use the computer over SSH from my laptop, so I could profit from the faster processor, but, hey, that wasn't how it was meant to happen. I bought another screen, and also tried one from work, without any change. I tried using two screens at the same time (this is what I have at work), also without success, so I just kept not using the computer directly. I moved recently to a new place and tried to get it working again, but had no luck. Frustrated, I got another desktop computer and another screen, and I still have the same problem! I also tried remounting the old motherboard with the AMD processor and the old graphics card, but that didn't bring any improvement. I just don't get it. This situation is rather frustrating for me, and it's been holding me back in my software projects for two years now (which partly explains my lack of involvement in Debian over the past few years). This post is here in the hope someone will have an idea, but also for me to keep track of what I've done and what I should try.

What is puzzling me is that the computer I had before was perfectly fine, and that I have a very very similar setup at work (also with a NVIDIA graphics card) that doesn't hurt my eyes at all.

What I've tried:

Here is what I tried. Keep in mind that when a trial fails, my eyes keep on hurting for several days, which might trigger false positives.
  • putting back the old (NVIDIA) graphics card, buying a new one (NVIDIA as well);
  • putting back the old motherboard (though with a new OS, and maybe my eyes were too sore for a clean test);
  • using another screen (a new one from the same brand, Samsung), a Dell and an HP from my work, and a brand new Philips;
  • using two screens at the same time;
  • using a completely different (new, based on Xeon processors and a NVIDIA graphics card) computer (with new mouse, keyboard, hard drives and so on);
  • changing house, including changing the lighting conditions, the desk, and the internet provider (no, I didn't do that just because of my computer problems!);
  • changing the way I drive the screens between VGA, DVI and HDMI;
  • copying the system I have in my workplace to the new computer and booting from that system (after a few adaptations, though).

As you can guess, none of those brought any improvement.

Wild hypotheses:

  • Is that a software thing? Is there something wrong for me in the versions of Debian dating from August 2014 and after?
  • Is that a BIOS problem with recent computers?
  • Is that linked to some waves (Bluetooth shouldn't be on, but maybe I didn't check well enough)?
  • Is that linked to EFI (though I also have the problem when I use legacy BIOS for booting)?
  • Something weird in my home?
  • Anything else?

Any help will be greatly appreciated, but please don't advise me to see a doctor: I don't see how this could be a medical condition specific to my home desktop computer, unless it is a very specific psychosomatic problem.

06 January, 2017 07:25PM by Vincent Fourmond ([email protected])

hackergotchi for Urvika Gola

Urvika Gola

Outreachy- Week 3 Progress

In my previous blog I had tried to explain what White Labelling is and my approach of implementing it in Lumicall.

This week I went ahead with the implementation by using the productFlavors feature in Android. I baked my cake!


I’ve created two flavors :

  1. Lumicall (which runs like the default version)
  2. Whitelabel

To switch to the desired version, there is a Build Variants window at the bottom left of Android Studio; open it to change the current active flavor to one of the ones you defined.

So the idea is that there would be a different flavor for each client, and all the client-specific resources would go under the src/(flavorname) folder.

The client specific resources could be:

  1. Application name
  2. Client’s logo (Drawable)
  3. Details about the client’s organization which would go under “about” tab
  4. Colors / Themes (Colors.xml)
  5. Strings (Strings.xml)
  6. Additional Files which include new features

To understand how to modify these resources in the flavored version, let's take an example in which we would like to change the application name from Lumicall to, let's say, "ClientApp".

  1. Go to the file /res/values/strings.xml, which has the application name: <string name="app_name">Lumicall</string>
  2. To replace it with the client's app name, you'd have to create a new strings.xml file (note: same name as that of the existing 'strings.xml') in the directory LumicallWhitelabel/src/whitelabel/res/values/strings.xml

    So, in our project, there are two strings.xml files: the default /res/values/strings.xml and one flavor-specific file in LumicallWhitelabel/src/whitelabel/res/values/strings.xml.

    Only add the values which would be different in the flavored version. If there are particular values you'd like to keep the same, there is no need to add them again in the flavored file: just define the values you want to replace, not the ones that should stay the same.

    Gradle will take care of the overriding while merging the resources when you run the flavored version.

Now, the most important thing is changing the application ID. In my previous blog I explained the difference between the application ID and the package name.

I also added a snippet to my build.gradle file which suffixes ".whitelabel" onto the end of the original application ID. So, to configure the application ID for each flavor, add applicationIdSuffix 'suffix_you_want'.
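The relevant build.gradle fragment would look roughly like this (a sketch following the description above; the actual Lumicall build file may differ in detail, and the example suffix is illustrative):

```groovy
android {
    productFlavors {
        lumicall {
            // default build: applicationId stays unchanged
        }
        whitelabel {
            // appends to the base applicationId,
            // e.g. com.example.app -> com.example.app.whitelabel
            applicationIdSuffix '.whitelabel'
        }
    }
}
```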

Link to the cake: https://github.com/Urvika-gola/LumicallWhitelabel

Thanks for reading,
U


06 January, 2017 06:27PM by urvikagola

Elena 'valhalla' Grandi

Candy from Strangers

Candy from Strangers

A few days ago I gave a talk at ESC https://www.endsummercamp.org/ about some reasons why I think that using software, and especially libraries, from the packages of a community-managed distribution is important and much better than alternatives such as PyPI, npm etc. This article is a translation of what I planned to say, before forgetting bits of it and luckily adding them back as an answer to a question :)

When I was young, my parents taught me not to accept candy from strangers, unless they were present and approved of it, because there was a small risk of very bad things happening. It was of course a simplistic rule, but it had to be easy enough to follow for somebody who wasn't proficient (yet) in the subtleties of social interactions.

One of the reasons why it worked well was that following it wasn't a big burden: at home candy was plenty and actual offers were rare: I only remember missing one piece of candy because of it, and while it may have been a great one, the ones I could have at home were also good.

Contrary to candy, offers of gratis software from random strangers are quite common: from suspicious looking websites to legit and professional looking ones, to platforms that are explicitly designed to allow developers to publish their own software with little or no checks.

Just like candy, there is also a source of trusted software: the Linux distributions, especially those led by a community. I mention mostly Debian because it's the one I know best, but the same principles apply to Fedora and, to some measure, to most of the other distributions. Like good parents, distributions can be wrong, and they do leave room for older children (and proficient users) to make their own choices, but they still provide a safe default.

Among the unsafe sources there are many different cases, and while they do share some of the risks, they have different targets with different issues; for brevity the scope of this article is limited to the ones that mostly concern software developers: language-specific package managers and software distribution platforms like PyPI, npm, rubygems, etc.

These platforms are extremely convenient both for the writers of libraries, who can publish their work with minor hassle, and for the people who use those libraries, because they provide an easy way to install and use a huge amount of code. They are of course also an excellent place for distributions to find new libraries to package and distribute, and this I agree is a good thing.

What I however believe is that getting code from such sources and using it without carefully checking it is even more risky than accepting candy from a random stranger on the street in an unfamiliar neighbourhood.

The risks aren't trivial: while you probably won't be taken hostage for ransom, your data could be, or your devices and those of the people who run your programs could be used in some criminal act, causing at least some monetary damage both to yourself and to society at large.

If you're writing code that should be maintained over time there are also other risks, even when no malice is involved, because each package on these platforms has a different policy with regard to updates, their backwards compatibility, and what can be expected in case an old version is found to have security issues.

The very fact that everybody can publish anything on such platforms is both their biggest strength and their main source of vulnerability: while most of the people who publish their libraries do so with good intentions, attacks have been described and publicly tested, such as the fun typo-squatting http://incolumitas.com/2016/06/08/typosquatting-package-managers/ one (archived on http://web.archive.org/web/20160801161807/http://incolumitas.com/2016/06/08/typosquatting-package-managers) that published harmless proof-of-concept code under common typos of famous libraries.

Contrast this with Debian, where everybody can contribute, but before they are allowed full unsupervised access to the archive they have to establish a relationship with the rest of the community, which includes meeting other developers in real life, at the very least to get their gpg keys signed.

This doesn't prevent malicious people from introducing software, but it raises significantly the effort required to do so, and once caught, people can usually be prevented from repeating it much more effectively than a simple ban on an online-only account would allow.

It is true that not every Debian maintainer does a full code review of everything they allow into the archive, and in some cases it would be unreasonable to expect it, but in most cases they are at least familiar enough with the code to do bug triage, and most importantly they are in an excellent position to establish a relationship of mutual trust with the upstream authors.

Additionally, package maintainers don't work in isolation: a growing number of packages are maintained by a team of people, and most importantly there are aspects that potentially involve the whole community, from the fact that new packages entering the distribution are publicly announced on a mailing list to the various distribution-wide QA efforts.

Going back to the language-specific distribution platforms, sometimes even the people who manage the platform itself can't be fully trusted to do the right thing: I believe everybody in the field remembers the npm fiasco https://lwn.net/Articles/681410/ where a lawyer's letter requesting the removal of a package started a series of events that ended up potentially breaking a huge number of automated build systems.

Here, some of the problems were caused by technical policies that made the whole ecosystem especially vulnerable, but one big issue was the fact that the managers of the npm platform are a private entity with no oversight from the user community.

Here not all distributions are equal, but contrast this with Debian, where the distribution is managed by a community that is based on a social contract (https://www.debian.org/social_contract) and is governed via democratic procedures established in its constitution (https://www.debian.org/devel/constitution).

Additionally, the long history of the distribution model means that many issues have already been met, the errors have already been done, and there are established technical procedures to deal with them in a better way.

So, shouldn't we use language-specific distribution platforms at all? No! As developers we aren't children: we are adults who have the skills to distinguish between safe and unsafe libraries just as well as the average distribution maintainer can. What I believe we should do is stop treating them as a safe source that can be used blindly, reserving that status for actually trustworthy sources like Debian, and fall back on the language-specific platforms only when strictly needed. In that case:

  • actually check carefully what we are using, both by reading the code and by analysing the development and community practices of the authors;
  • if possible, share that work by becoming maintainers of that library in our favourite distribution ourselves, to prevent duplication of effort and to give back to the community whose work we take advantage of.

Edit: fixed broken typosquatting url

06 January, 2017 04:11PM by Elena ``of Valhalla''

hackergotchi for Joachim Breitner

Joachim Breitner

TikZ aesthetics

Every year since 2012, I typeset the problems and solutions for the German math event Tag der Mathematik, which is organized by the Zentrum für Mathematik and reaches 1600 students from various parts of Germany. For that, I often reach for the LaTeX drawing package TikZ, and I really like the sober aesthetics of a nicely done TikZ drawing. So mostly for my own enjoyment, I collect the prettiest ones here. On a global scale they are still rather mundane; for really impressive and instructive examples, I recommend the TikZ Gallery.

06 January, 2017 03:08PM by Joachim Breitner ([email protected])

Mark Brown

OpenTAC sprint

This weekend Toby Churchill kindly hosted a hacking weekend for OpenTAC: myself, Michael Grzeschik, Steve McIntyre and Andy Simpkins got together to bring up the remaining bits of the hardware on the current board revision and to get some of the low-level tooling, like production flashing for the FTDI serial ports on the board, up and running. It was a very productive weekend; we verified that everything was working, with only a few small mods needed for the board. Personally, the main thing I worked on was getting most of an initial driver for the EMC1701 written. That was the one component without Linux support, and it allowed us to verify that the power switching and measurement for the systems under test was working well.

There’s still at least one more board revision and quite a bit of software work to do (I’m hoping to get the EMC1701 driver upstream for v4.8), but it was great to finally see all the physical components of the system working well and to see it managing a system under test. This board revision should support all the software development that’s going to be needed for the final board.

Thanks to all who attended, Pengutronix for sponsoring Michael’s attendance and Toby Churchill for hosting!

Team at work
Group photo

06 January, 2017 01:09PM by broonie

hackergotchi for Jonathan McDowell

Jonathan McDowell

2016 in 50 Words

Idea via Roger. Roughly chronological order. Some things were obvious inclusions but it was interesting to go back and look at the year to get to the full 50 words.

Speaking at BelFOSS. Earthlings birthday. ATtiny hacking. Speaking at ISCTSJ. Dublin Anomaly. Co-habiting. DebConf. Peak Lion. Laura’s wedding. Christmas + picnic. Engagement. Car accident. Car write off. Tennent’s Vital. Dissertation. OMGWTFBBQ. BSides. New job. Rachel’s wedding. Digital Privacy talk. Graduation. All The Christmas Dinners. IMDB Top 250. Shay leaving drinks.

(This also serves as a test to see if I’ve correctly updated Planet Debian to use https and my new Hackergotchi that at least looks a bit more like I currently do.)

06 January, 2017 08:03AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppTOML 0.1.0

Big news: RcppTOML now works on Windows too!

This package had an uneventful 2016 without a single update. Release 0.0.5 had come out in late 2015 and we had no bugs or issues to fix. We use the package daily in production: a key part of our parameterisation is in TOML files.

In the summer, I took one brief stab at building on Windows, now that R itself sports a proper C++11 compiler on Windows too. I got stuck over the not-uncommon problem of incomplete POSIX and/or C++11 support with MinGW and g++-4.9. And sadly ... it appears I wasn't quite awake enough to realize that the missing functionality was right there, exposed by Rcpp! Having updated that date / datetime functionality very recently, I was in a better position to realize this when Devin Pastoor asked two days ago. I was able to make a quick suggestion which he tested, and which I then refined ... here we are: RcppTOML on Windows too! (For the impatient: CRAN has reported that it has built the Windows binaries; they should hit mirrors such as this CRAN package for RcppTOML shortly.)

So what is this TOML thing, you ask? A file format, very suitable for configurations, meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML -- though sadly these may be too ubiquitous now. But TOML is making good inroads with newer and more flexible projects. The Hugo static blog compiler is one example; the Cargo system of Crates (aka "packages") for the Rust language is another example.

The new release updates the included cpptoml template header by Chase Geigle, brings the aforementioned Windows support and updates the Travis configuration. We also added a NEWS file for the first time so here are all changes so far:

Changes in version 0.1.0 (2017-01-05)

  • Added Windows support by relying on Rcpp::mktime00() (#6 and #8 closing #5 and #3)

  • Synchronized with cpptoml upstream (#9)

  • Updated Travis CI support via newer run.sh

Changes in version 0.0.5 (2015-12-19)

  • Synchronized with cpptoml upstream (#4)

  • Improved and extended examples

Changes in version 0.0.4 (2015-07-16)

  • Minor update of upstream cpptoml.h

  • More explicit call of utils::str()

  • Properly cope with empty lists (#2)

Changes in version 0.0.3 (2015-04-27)

  • First CRAN release after four weeks of initial development

Courtesy of CRANberries, there is a diffstat report for this release.

More information and examples are on the RcppTOML page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 January, 2017 01:57AM

hackergotchi for Joey Hess

Joey Hess

the cliff

Falling off the cliff is always a surprise. I know it's there; I've been living next to it for months. I chart its outline daily. Avoiding it has become routine, and so comfortable, and so failing to avoid it surprises.

Monday evening around 10 pm, the laptop starts draining down from 100%. House battery, which has been steady around 11.5-10.8 volts since well before Winter solstice, and was in the low 10's, has plummeted to 8.5 volts.

With the old batteries, the cliff used to be at 7 volts, but now, with new batteries but fewer, it's in a surprising place, something like 10 volts, and I fell off it.

Weather forecast for the week ahead is much like the previous week: Maybe a couple sunny afternoons, but mostly no sun at all.

Falling off the cliff is not all bad. It shakes things up. It's a good opportunity to disconnect, to read paper books, and think long winter thoughts. It forces some flexibility.

I have an auxiliary battery for these situations. With its own little portable solar panel, it can charge the laptop and run it for around 6 hours. But it takes several days of winter sun to charge back up.

That's enough to get me through the night. Then I take a short trip, and glory in one sunny afternoon. But I know that won't get me out of the hole, the batteries need a sunny week to recover. This evening, I expect to lose power again, and probably tomorrow evening too.

Luckily, my goal for the week was to write slides for two talks, and I've been able to do that despite being mostly offline, and sometimes decomputered.

And, in a few days I will be jetting off to Australia! That should give the batteries a perfect chance to recover.

Previously: battery bank refresh late summer

06 January, 2017 12:12AM

January 05, 2017

Jamie McClelland

End-to-End Encrypted group chats via XMPP

It's been over a year since my colleagues and I at the Progressive Technology Project abandoned Skype, first for IRC and soon after for XMPP. Thanks to the talented folks maintaining conversations.im, it's been a breeze to get everyone set up with accounts (8 Euros/year is quite worth it) and a group chat going.

However, our group chats have not been using end-to-end encryption... until now. It wasn't exactly painless, so I'm sharing some tips and tricks.

  • Use either Conversations for Android (F-Droid or Play) or Gajim for Windows or Linux. At the time of this writing, these are the only two applications I know of that support OMEMO, the XMPP extension that supports end-to-end encryption. ChatSecure for iOS, however, is just a release away. We managed to get things working with most of us using both Gajim and Conversations. It would probably have been much easier and smoother if everyone were only using Conversations, because OMEMO support is built into its core, rather than Gajim, where it is provided via a plugin.
  • If you are using Gajim... After installing the OMEMO plugin in Gajim, fully restart Gajim. Similarly, if you add or remove a contact from your group, it seems you have to fully restart Gajim. Not sure why. If something is not working in Gajim, try restarting it.
  • Ensure that everyone in your group has added everyone else in the group to their roster. This was the single biggest and most confusing part of the process. If you are missing just one contact in your roster, then messages you type into the group chat will not show up, with no indication (at least in Gajim) as to what happened or why. Take this step first or prepare for confusing failures. Remember: everyone has to have everyone else in their roster.
  • Create the group in the android Conversations app, not in Gajim. There are strict requirements for how the group needs to be setup (private, members only and non-anonymous). I tried creating the group in Gajim and followed the directions but couldn't get it to work. Creating the group in Conversations worked right away. Remember: don't add members to the group unless everyone has them in their roster!
  • You can give your group an easy-to-remember name in your Gajim bookmarks, but under the hood it will be assigned a random name. Conversations will show you the random name via "Conference Details" and Gajim will show it under the tab in the Messages window. When inviting people to the group you may need to select the random name.
  • Trust on First Use. In our experiment, we created a group for four people and we were all on a video and voice chat while we set things up. Three out of the four of us had both Gajim and Conversations in play. That meant 4 different people had to verify between 5 and 6 fingerprints each. We decided to use Trust on First Use rather than go through the process of reading out all the fingerprints (for the record, it still took us an hour and 15 minutes to get it all working). See Daniel Gultsch's interesting article on Trust on First Use.
  • If you get an error "This is not a group chat" it may be because you accidentally added the group as a contact to your roster. Click View -> Offline contacts. And if you see your group listed, delete it and close the tab in your Messages window (if one is open for it). You may also need to restart Gajim. Repeat until it no longer shows up in your roster.

Anyone interested in secure XMPP may also find the Riseup XMPP page useful.

05 January, 2017 02:10PM

hackergotchi for Michal Čihař

Michal Čihař

Gammu 1.38.1

Today Gammu 1.38.1 has been released. This is a bugfix release fixing several minor bugs discovered in 1.38.0.

The Windows binaries will be available shortly. These are built using AppVeyor and will help bring Windows users back to the latest versions.

Full list of changes and new features can be found on Gammu 1.38.1 release page.

Would you like to see more features in Gammu? You can support further Gammu development at Bountysource salt or by direct donation.

Filed under: Debian English Gammu | 1 comments

05 January, 2017 01:00PM

January 04, 2017

Carl Chenet

My Free Software activities in December 2016

My Monthly report for December 2016 gives an extended list of what were my Free Software related activities during this month.

Personal projects:

That’s all folks! See you next month!

04 January, 2017 11:00PM by Carl Chenet

hackergotchi for Michal Čihař

Michal Čihař

Seven tools that help us develop Weblate

Weblate probably would not exist (or at least would be much harder to manage) without several services that help us to develop, improve and fix bugs in our code base.

Over time, the development world has become heavily reliant on cloud services. As with every change, this has two sides - you don't have to run the service, but you also don't have control over it. Personally I'd prefer to use more free-software services; on the other hand, I really love this comfort and I'm too lazy to set up things which I can get for free.

The list was written down mostly to show how we work; the services are not listed in any particular order. All of the services provide free offerings for free software projects or for limited usage.

GitHub

I guess there is not much to say here, it has become the standard place to develop software - it has Git repositories, issue tracker, pull requests and several other features.

Travis CI

Running tests on every commit is something that will make you feel confident that you didn't break anything. Of course you still need to write the tests, but having them run automatically is a really great help. It's especially great for automatically checking pull requests.

AppVeyor

Continuous integration on Windows - it's still a widely used platform with its quirks, so it's a really good idea to test there as well. With AppVeyor you can do that and it works pretty nicely.

Codecov

When running tests it's good to know how much of your code is covered by them. Codecov is one of the best interfaces I've seen for this. They are also able to merge coverage reports from multiple builds and platforms (for example for wlc we have combined coverage for Linux, OSX and Windows coming from Travis CI and AppVeyor builds).

SauceLabs

Unit testing is good, but frontend testing in the browser is also important. We run Selenium tests in several browsers in SauceLabs to verify that we haven't screwed up something in the user interface.

Read the Docs

Documentation is necessary for every project, and having it built automatically is a nice bonus.

Landscape

Doing code analysis is a way to avoid some problems that are not spotted during testing. These can be code paths not covered by tests or simply coding-style issues. There are several such services, but Landscape is my favorite one right now.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

04 January, 2017 05:00PM

Dominique Dumont

New with cme: a GUI to configure Systemd services

Hello

Systemd is powerful, but creating a new service is a task that requires creating several files in non-obvious locations (like /etc/systemd/system or ~/.local/share/systemd/user/). Each file features 2 or more sections (e.g. [Unit], [Service]). And each section supports a lot of parameters.

Creating such Systemd configuration files can be seen as a daunting task for beginners.

cme project aims to make this task easier by providing a GUI that:

  • shows all existing services in a single screen
  • shows all possible sections and parameters with their documentation
  • validates the content of each parameter (if possible)
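The per-parameter validation idea can be sketched in a few lines of standard-library Python. This is a toy illustration, not how cme is implemented; it checks a single parameter, CPUShares, against the 2..262144 range that systemd documents for it (the same limit cme reports later in this post):

```python
import configparser

# A unit file with a deliberately out-of-range CPUShares value.
UNIT = """
[Unit]
Description=Tunnel IMAPS connections to Free with Systemd

[Service]
StandardInput=socket
CPUShares=1000000
"""

parser = configparser.ConfigParser()
parser.read_string(UNIT)

# Validate one parameter: systemd accepts CPUShares between 2 and 262144.
shares = parser.getint("Service", "CPUShares")
if not 2 <= shares <= 262144:
    print(f"CPUShares value {shares} outside allowed range 2..262144")
```

cme does this kind of check for every parameter it has a model for, across all sections of all unit files at once.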

For instance, on my laptop, the command cme edit systemd-user shows 2 custom services (“free-imap-tunnel@” and “gmail-imap-tunnel@”):

cme_edit_systemd_001

The GUI above shows the units for my custom systemd files:

$ ls ~/.config/systemd/user/
[email protected]
free-imap-tunnel.socket
[email protected]
gmail-imap-tunnel.socket
sockets.target.wants

and the units installed by Debian packages:

$ find /usr/lib/systemd/user/ -maxdepth 1 \
  '(' -name '*.service' -o -name '*.socket' ')' \
  -printf '%f\n' |sort |head -15
at-spi-dbus-bus.service
colord-session.service
dbus.service
dbus.socket
dirmngr.service
dirmngr.socket
glib-pacrunner.service
gpg-agent-browser.socket
gpg-agent-extra.socket
gpg-agent.service
gpg-agent.socket
gpg-agent-ssh.socket
obex.service
pulseaudio.service
pulseaudio.socket

The screenshot above shows the content of the service defined by the following file:

$ cat ~/.config/systemd/user/[email protected]
[Unit]
Description=Tunnel IMAPS connections to Free with Systemd

[Service]
StandardInput=socket
# no need to install corkscrew
ExecStart=-/usr/bin/socat - PROXY:127.0.0.1:imap.free.fr:993,proxyport=8888

Note that empty parameters are not shown because the “hide empty value” checkbox on top right is enabled.

Likewise, cme is able to edit system files just like user files, with sudo cme edit systemd:

cme_edit_systemd_001

For more details on how to use the GUI to edit systemd files, please see:

Using a GUI may not be your cup of tea. cme can also be used as a validation tool. Let’s add a parameter with an excessive value to my service:

$ echo "CPUShares = 1000000" >> ~/.local/share/systemd/user/[email protected]

And check the file with cme:

$ cme check systemd-user 
cme: using Systemd model
loading data
Configuration item 'service:"free-imap-tunnel@" Service CPUShares' has a wrong value:
        value 1000000 > max limit 262144

ok, let’s fix this with cme. The wrong value can either be deleted:

$ cme modify systemd-user 'service:"free-imap-tunnel@" Service CPUShares~'
cme: using Systemd model

Changes applied to systemd-user configuration:
- service:"free-imap-tunnel@" Service CPUShares: '1000000' -> ''

Or modified:

$ cme modify systemd-user 'service:"free-imap-tunnel@" Service CPUShares=2048'
cme: using Systemd model

Changes applied to systemd-user configuration:
- service:"free-imap-tunnel@" Service CPUShares: '1000000' -> '2048'

You can also view the specification of a service using cme:

$ cme dump systemd-user 'service:"free-imap-tunnel@"'
---
Service:
  CPUShares: 2048
  ExecStart:
    - '-/usr/bin/socat -  PROXY:127.0.0.1:imap.free.fr:993,proxyport=8888'
  StandardInput: socket
Unit:
  Description: Tunnel IMAPS connections to Free with Systemd

The output above matches the content of the service configuration file:

$ cat ~/.local/share/systemd/user/[email protected]
## This file was written by cme command.
## You can run 'cme edit systemd-user' to modify this file.
## You may also modify the content of this file with your favorite editor.

[Unit]
Description=Tunnel IMAPS connections to Free with Systemd

[Service]
StartupCPUWeight=100
CPUShares=2048
StartupCPUShares=1024
StandardInput=socket
# no need to install corkscrew now
ExecStart=-/usr/bin/socat -  PROXY:127.0.0.1:imap.free.fr:993,proxyport=8888

Last but not least, you can use cme shell if you want an interactive UI but cannot use a graphical interface:

$ cme shell systemd-user 
cme: using Systemd model
 >:$ cd service:"free-imap-tunnel@"  Service  
 >: service:"free-imap-tunnel@" Service $ ll -nz Exec*
name      │ type │ value                                                             
──────────┼──────┼───────────────────────────────────────────────────────────────────
ExecStart │ list │ -/usr/bin/socat -  PROXY:127.0.0.1:imap.free.fr:993,proxyport=8888

 >: service:"free-imap-tunnel@" Service $ ll -nz
name             │ type    │ value                                                             
─────────────────┼─────────┼───────────────────────────────────────────────────────────────────
StartupCPUWeight │ integer │ 100                                                               
CPUShares        │ integer │ 2048                                                              
StartupCPUShares │ integer │ 1024                                                              
StandardInput    │ enum    │ socket                                                            
ExecStart        │ list    │ -/usr/bin/socat -  PROXY:127.0.0.1:imap.free.fr:993,proxyport=8888

 >: service:"free-imap-tunnel@" Service $ set CPUShares=1024
 >: service:"free-imap-tunnel@" Service $ ll -nz CPUShares 
name      │ type    │ value
──────────┼─────────┼──────
CPUShares │ integer │ 1024 

 >: service:"free-imap-tunnel@" Service $ quit


Changes applied to systemd-user configuration:
- service:"free-imap-tunnel@" Service CPUShares: '2048' -> '1024'

write back data before exit ? (Y/n)

Currently, only service, socket and timer units are supported. Please create a bug report on github if you need more.

Installation instructions are detailed at the beginning of Managing Systemd configuration with cme wiki page.

Like all software, cme probably has bugs. Please report any issue you might have with it.

For more information:

All in all, systemd is quite complex to set up. I hope I made it a little bit easier to deal with.

All the best


Tagged: config-model, configuration, Perl, systemd

04 January, 2017 04:55PM by dod

Petter Reinholdtsen

Introducing ical-archiver to split out old iCalendar entries

Do you have a large iCalendar file with lots of old entries, and would like to archive them to save space and resources? At least those of us using KOrganizer know that turning an event set on and off becomes slower and slower the more entries are in the set. While working on migrating our calendars to a Radicale CalDAV server on our Freedombox server, my loved one wondered if I could find a way to split up the calendar file she had in KOrganizer, and I set out to write a tool. I spent a few days writing and polishing the system, and it is now ready for general consumption. The code for ical-archiver is publicly available from a git repository on github. The system is written in Python and depends on the vobject Python module.

To use it, locate the iCalendar file you want to operate on and give it as an argument to the ical-archiver script. This will generate a set of new files, one file per component type per year for all components expiring more than two years in the past. The vevent, vtodo and vjournal entries are handled by the script. The remaining entries are stored in a 'remaining' file.
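The core idea — bucket each component by type and by DTSTART year, then write one file per bucket — can be sketched as follows. This is a simplified stdlib-only illustration, not the actual script, which uses the vobject module and handles expiry rules and edge cases properly:

```python
import re
from collections import defaultdict

def split_ical(text):
    """Group VEVENT/VTODO/VJOURNAL components by (type, DTSTART year)."""
    buckets = defaultdict(list)
    # Match each component from its BEGIN line to the matching END line.
    for m in re.finditer(r"BEGIN:(VEVENT|VTODO|VJOURNAL)\r?\n.*?END:\1",
                         text, re.DOTALL):
        comp = m.group(0)
        kind = m.group(1).lower()
        # Pull the four-digit year out of the DTSTART property, if any.
        year = re.search(r"DTSTART[^:]*:(\d{4})", comp)
        key = (kind, year.group(1)) if year else ("remaining", None)
        buckets[key].append(comp)
    return buckets

sample = ("BEGIN:VEVENT\nDTSTART:20040101T120000Z\nSUMMARY:Old\nEND:VEVENT\n"
          "BEGIN:VEVENT\nDTSTART:20160101T120000Z\nSUMMARY:New\nEND:VEVENT\n")
buckets = split_ical(sample)
assert ("vevent", "2004") in buckets and ("vevent", "2016") in buckets
```

Each bucket would then be wrapped back into a VCALENDAR envelope and written out as its own .ics file, which is essentially what the per-year subset files below are.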

This is what a test run can look like:

% ical-archiver t/2004-2016.ics 
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file. If you are happy with their content, the *-remaining.ics file can replace the original, and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

04 January, 2017 11:20AM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

My Free Software Activities in December 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 10 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • I released DLA-741-1 on unzip. This was an easy update.
  • I reviewed Roberto Sanchez’s patch for CVE-2014-9911 in ICU.
  • I released DLA-759-1 on nss in collaboration with Antoine Beaupré. I merged and updated Guido’s work to enable the testsuite during build and to add DEP-8 tests.
  • I created a git repository for php5 maintenance in Debian LTS and started to work on an update. I added patches for two CVE (CVE-2016-3141, CVE-2016-2554) and added some binary files required by (currently failing) tests.

Misc packaging

With the stretch freeze approaching, I had some customer requests to push packages into Debian and/or to fix packages that were in danger of being removed from stretch.

While trying to bring back uwsgi into testing I filed #847095 (libmongoclient-dev: Should not conflict with transitional mongodb-dev) and #847207 (uwsgi: FTBFS on multiple architectures with undefined references to uwsgi_* symbols) and interacted on some of the RC bugs that were keeping the package out of testing.

I also worked on a few new packages (lua-trink-cjson, lua-inotify, lua-sandbox-extensions) that enhance hindsight in some use cases and sponsored a rozofs update in experimental to fix a file conflict with inn2 (#846571).

Misc Debian work

Debian Live. I released two live-build updates. The second update added more options to customize the grub configuration (we use it in Kali to override the theme and add more menu entries) both for EFI boot and normal boot.

Misc bug reports. #846569 on libsnmp-dev to accommodate the libssl transition (I noticed the package was not maintained, so I asked for new maintainers on debian-devel). #847168 on devscripts for debuild, which started failing when lintian was failing (an unexpected regression). #847318 on lintian to not emit spurious errors for Kali packages (which was annoying with the debuild regression above). #847436 for an upgrade problem I got with tryton-server. #847223 on firefoxdriver as it was still depending on iceweasel instead of firefox.

Sponsorship. I sponsored a new version of asciidoc (#831965) and of ssldump 0.9b3-6 (for libssl transition). I also uploaded a new version of mutter to fix #846898 (it was ready in SVN already).

Distro Tracker

Not much happening; I fixed #814315 by switching a few remaining URLs to https. I merged patches from efkin to fix the functional test suite (#814315), which was a really useful contribution! The same contributor started to tackle another ticket (#824912) about adding an API to retrieve action items. This is a larger project and needs some thought. I still have to respond to him on his latest patches (after two rounds already).

Misc stuff

I updated the letsencrypt-sh salt formula for version 0.3.0 and added the possibility to customize the hook script to reload the webserver.

The @planetdebian twitter account is no longer working since twitterfeed.com closed doors and the replacement (dlvr.it) is unhappy about the RSS feed of planet.debian.org. I filed bug #848123 against planet-venus since it does not preserve the isPermalink attribute in the guid tag.

Thanks

See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

04 January, 2017 09:48AM by Raphaël Hertzog

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.11

A new minor release with version number 0.6.11 of the digest package is now on CRAN and in Debian.

It is mostly a maintenance release. Sometime last spring we were asked to consider changing the license from GPL-2 to GPL (>= 2). Having gotten the agreement of all copyright holders, this could finally happen. But it so happens that the last yay came in just after the last release, so it took another cycle. In other changes, I also made the makeRaw function fully generic and added documentation. The pull request by Jim Hester to add covr support was finally folded in, leading as always to some gaming and improvement of the coverage metrics. A polite request by Radford Neal to support a nosharing option in base::serialize was also honoured; this should help for use with his interesting pqR variant of R.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

04 January, 2017 12:39AM

January 03, 2017

Reproducible builds folks

Reproducible Builds: week 88 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday December 25 and Saturday December 31 2016:

Media coverage

Reproducible bugs filed

Chris West:

Chris Lamb:

Rob Browning:

Reviews of unreproducible packages

7 package reviews have been added, 12 have been updated and 14 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris West (19)
  • Chris Lamb (7)
  • Rob Browning (1)

diffoscope development

strip-nondeterminism development

try.diffoscope.org development

  • Chris Lamb:
    • Show progress bar and position in queue, etc.
    • Promote command-line client with PyPI instructions.
    • Increase comparison time limit to 90 seconds.

tests.reproducible-builds.org

  • Run half of the arm64 build nodes in the future. (h01ger)
  • Resume testing scheduling (on i386 and armhf) now #846564 and #844701 bugs in dpkg are fixed in that suite. (h01ger)

Misc.

This week's edition was written by Chris Lamb, Holger Levsen and was reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

03 January, 2017 02:15PM

Russ Allbery

Review: Castle of Wizardry

Review: Castle of Wizardry, by David Eddings

Series: The Belgariad #4
Publisher: Del Rey
Copyright: May 1984
Printing: September 1991
ISBN: 0-345-33570-8
Format: Mass market
Pages: 373

Castle of Wizardry is the fourth book of the Belgariad and very much the middle of the story. Despite coming after an intermediate climax, this isn't the sort of series you can start in the middle.

The problem with intermediate climaxes in a long series is that the next book can be a bit of a let-down as the characters do the necessary regrouping and reorienting and determine next steps. I think that hurts Castle of Wizardry quite a lot. The best bits are at the beginning, as the party escapes the consequences of Magician's Gambit, collects one more party member, shows us a lot more of Errand (who is always delightful), and confounds Relg's life and world view considerably. (Although there's a good bit of authorial fiat in the last.) This builds into a major story event, which would normally help avoid the let-down after the climax, but it's the major story event that is so frequently and obviously foreshadowed that you'd have to be as dumb as, well, Garion to not know what's coming. That gives a certain "yes, yes, we know already" tone to proceedings that robs it of its ability to rebuild tension.

That said, the appeal of this series continues to be in the small details. While the first major event of this book goes pretty much as expected (including Ce'Nedra's reaction, which is just as irritating as you might be expecting), my favorite part was the endless, bubbly enthusiasm of the incredibly powerful artifact that features heavily. Usually epic fantasy will treat such world-breaking objects with seriousness and awe, as treasures to be admired and sacred (or terrifying) great works. See, for instance, the ur-example of Tolkien's rings, both the One Ring and the elven rings of power. Eddings manages a mix of awe and bemusement that doesn't undermine their power but that adds a delightful human element. This series pulls off treating a powerful magical artifact like an over-enthusiastic puppy without making it feel any less dangerous. It's a very neat, and I think underappreciated, trick to pull off.

Another part of this book I liked, if a more stock one, is Garion's reactions after the big story event. This isn't the first book to portray basic decency and thoughtfulness as a major feature in people from humble backgrounds elevated to great power, but I always enjoy seeing that. Garion stops whining (mostly) and starts acting like a decent, level-headed person who doesn't assume he has the right to arrange other people's lives, and is rewarded for it. Real life is often not that fair or ethical, but that's why one reads wish-fulfillment fantasy like this: for a world in which being a good person is rewarded.

However, Eddings does have some structural issues here. The narrative arc of this book, as a stand-alone entity, is odd. Its most dramatic event is in the middle, followed by a long traveling section that's, by comparison, much less exciting. The events of that section feel more like random encounters than a coherent part of the story, and are preceded by the most utterly ridiculous temper tantrums. I think the tantrums were meant to be pure humor, but my reaction was primarily eye-rolling. I have a hard time reconciling a screaming fit and breaking furniture with the long life experience and thoughtful planning of the character in question.

And then there is the Ce'Nedra section that closes this book, and Ce'Nedra in this book more generally.

To be fair, Castle of Wizardry is clearly intended to be Ce'Nedra's moment to grow as a person and stop being a childish brat. This does happen somewhat, and there are moments in the last section of the book where she does admirable things. But I couldn't quite believe in the mechanism, and it doesn't help that it's one of the most ham-handed bits of pre-ordained success in a book that has a tendency towards them. That undermines the real attempts Eddings makes to ground that success in Ce'Nedra's actual skills. Also undermining this is that those skills are manipulating people shamelessly, which Eddings seems to think is charming and attractive and I... don't.

But the real problem is that I flatly disbelieve in Ce'Nedra as a character, or, given the apparent existence of such a creature, the level of tolerance that other characters show her. If I'd been Polgara, within fifteen minutes of meeting her I would have been seriously debating whether the destruction of the world might be a small price to pay for the satisfaction of dumping her down the nearest well. And not only is she awful by herself but Garion also descends to the same level whenever he's around her, until both of them are behaving like blithering idiots.

I suspect part of my issue is that, to the extent that she is realistic at all, Ce'Nedra is the sort of intensely high-drama person who I have some amount of life experience with, and that life experience says "do not let this person anywhere near your life." Red flags all over everything. Garion needs to nope the hell out, because this will not end well. (Except, of course, it will, because it's that sort of series and the power of the author is strong.)

I want female characters with real agency in my fantasy, and I want a female protagonist who is doing things of equal importance as the male protagonist (not that Eddings attempts to go that far). But Ce'Nedra reads like a fictional character written by someone who had never met a woman, but had extensively studied female supporting characters in books about junior-high social cliques and then tried to reconcile that research with the stereotype of women as manipulative seductresses. Yes, this series is full of stereotypes and characters painted in broad strokes, but Ce'Nedra is several tiers below every other supporting character in the book in both believability and in my desire to read about her.

It's not that Eddings doesn't know how to write women at all. Polgara still falls into a few stereotyped categories, but she's sensible, opinionated, and has clear agency throughout the story. Taiba is delightful, if minor here. Poledra is absolutely wonderful whenever she appears. Some of the queens are obviously practical and sensible. And this book features a surprisingly good resolution to the subplot around Barak's wife, although the mechanism is a bit eye-rollingly cliched. Ce'Nedra's character is unusual for the series and almost certainly a deliberate authorial choice, and this book is supposed to be her coming of age. But I am baffled by that choice, and there's very little about it that I enjoyed reading.

One more minor complaint: Silk gets a "tragic secret" in this book, and I really wish he hadn't. More time with Silk is always a feature, and I still love the character, but his oddities were already adequately explained by both his innate character and his way of dealing with a particularly awkward court situation. (One that ties into Eddings's habit of using some bad relationship stereotypes, but that's a rant for another day.) I think this additional tragic secret was gratuitous and really unnecessary, not to mention weirdly implausible and oddly cruel towards the other character involved.

I was hoping that Magician's Gambit had turned a corner for the series, but Castle of Wizardry, despite having some neat moments, has some serious flaws. One more book to go, in which we learn that some of the eastern races have redeeming qualities!

Followed by Enchanter's End Game.

Rating: 6 out of 10

03 January, 2017 02:29AM

End of 2016 haul

May as well start 2017 with a burst of recorded optimism: the last books I bought in 2016 that I'm queuing up to read. The hope is that this year I'll actually read more of them!

Becky Chambers — A Closed and Common Orbit (sff)
T. Kingfisher — The Raven and the Reindeer (sff)
Joseph R. Lallo — The Book of Deacon Anthology (sff)
M. Louisa Locke — Maids of Misfortune (historical)
Rebecca Solnit — Hope in the Dark (nonfiction)
K.B. Spangler — Maker Space (sff)
K.B. Spangler — State Machine (sff)
Steven W. White — New World (sff)

Most of these are various StoryBundle add-ons that I'd somehow missed downloading the first time (and hence are fairly low priority on the reading list). The rest is a mixed bag of Kindle purchases.

I started A Closed and Common Orbit today and could barely put it down. An auspicious start to the new year.

03 January, 2017 01:25AM

hackergotchi for Maria Glukhova

Maria Glukhova

Getting to know diffoscope better

I apologize to all potential readers of this blog for not writing a comprehensive “Introduction” post with details of the project I am taking part in during my internship, as well as some story about how I ended up there.

Let me just say that I was a Debian user for years when I discovered it is taking part in Outreachy as one of the organisations. Their Reproducible Builds effort has a noble goal and a bunch of great people behind it - I had no chance not to get excited by it. Looking for a place where my skills could be of any use, I discovered diffoscope - the tool for in-depth comparison of files, archives etc. My mentor, Mattia Rizzolo, supported my decision to work on it, so now I am concentrating my efforts on improving diffoscope.

As my first steps, I am doing the small (but hopefully still somewhat important) job of fixing existing bugs. It helps me to better understand how diffoscope works, and it introduces me to the workflow of open-source development.

During December, I have done several small contributions, mostly fixing bugs.

Test data and jessie-backports

The first of them could be called cleaning up after my own mistake, although that mistake wasn’t trivial. During the application period, I fixed a bug with diffoscope failing while comparing symlinks to directories. That was a small change, but I included some tests for that case anyway.

…And that actually caused problems. With these tests, I included test data: two folders with symlinks. All was good in the unstable version of Debian, but in jessie-backports, that commit caused the build to fail. After some digging, I discovered the problem was caused by the build process copying that data. That was done using the shutil Python module, and the older version of that module included in jessie could not handle copying symlinks to directories properly.

Thanks to my mentor for giving me a hint on how to resolve this: using temporary folders and creating these symlinks at runtime. That way, we ensured the tests run without problems during the build process on jessie.
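
A minimal sketch of that runtime-fixture idea, with a hypothetical `make_symlink_fixture` helper (not the actual diffoscope test code):

```python
import os
import tempfile

def make_symlink_fixture(tmpdir):
    # Create the target directory and the symlink pointing at it at
    # runtime, instead of shipping the symlink as static test data
    # that jessie's older shutil may fail to copy during the build.
    target = os.path.join(tmpdir, "dir")
    os.mkdir(target)
    link = os.path.join(tmpdir, "link")
    os.symlink(target, link)
    return link

with tempfile.TemporaryDirectory() as tmp:
    link = make_symlink_fixture(tmp)
    assert os.path.islink(link) and os.path.isdir(link)
```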

What have I learned: A great deal, actually. I spent too much time on that one, but I learned how to build packages, what happens during dpkg-buildpackage run and what debhelper tools are for. I also learned a bit about what chroot is and how to use it for testing.

ICC profile files and file type recognizing regexp

Another one was also about failing tests and, therefore, a failing build. The failing tests were all due to ICC files not being recognized by diffoscope. It turned out libmagic had gotten an update which changed the description of ICC profile files. Diffoscope was relying on a regexp applied to the file type description to recognize the file, so I changed the regexp to reflect the changes in libmagic.

What have I learned: How diffoscope “recognizes” file types. Got me thinking: maybe there is a better way? That regexp-based approach is doomed to cause problems with every file type description change. I have this question still lingering in my mind - maybe I will come up with an idea later.
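
To illustrate why matching on libmagic's prose descriptions is fragile, here is a toy sketch; the description strings and patterns are made up, not the ones diffoscope actually uses:

```python
import re

# Made-up description strings standing in for what libmagic might
# return before and after an update.
old_desc = "color profile"
new_desc = "ICC color profile"

# A pattern anchored to the exact old wording breaks on the new one...
strict_re = re.compile(r"^color profile$")
# ...while a looser pattern survives the wording change.
loose_re = re.compile(r"\bcolor profile\b")

assert strict_re.match(old_desc)
assert not strict_re.match(new_desc)
assert loose_re.search(old_desc) and loose_re.search(new_desc)
```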

Order-like difference in text files

Next, I decided to do something a bit bigger and fulfilled a feature request. That request was for detecting order-like difference in text files (when files have the same lines, but in a different order). I did it by collecting the “added” and “removed” lines from the diff output into lists, sorting them, and then comparing them.
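
The approach can be sketched like this (a toy version; the function name is hypothetical and this is not the actual diffoscope code):

```python
def order_only_difference(removed_lines, added_lines):
    # If the sorted "removed" lines equal the sorted "added" lines,
    # the two files contain the same lines in a different order.
    return sorted(removed_lines) == sorted(added_lines)

# Lines a diff would report as removed from one file and added to
# the other:
removed = ["banana", "apple", "cherry"]
added = ["apple", "cherry", "banana"]
assert order_only_difference(removed, added)
assert not order_only_difference(removed, ["apple", "cherry"])
```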

Sadly, I forgot about one particular case - when one of the files is missing the newline at the end of the file. I was kindly reminded of that quite soon in comments on the bug tracker (thanks danielsh!) and have already fixed that. I also received feedback on how to implement it better, deeper in diffoscope - not using the results of diff, but rather comparing sums of hashes of the lines directly in the difference module. I am yet to try that.
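
That suggested hash-based variant might look something like this sketch (my own toy code, assuming SHA-256 per line; not what the diffoscope module does):

```python
import hashlib

def line_hash_sum(lines):
    # A sum of per-line hashes is independent of line order, so two
    # files with the same lines in a different order get the same
    # value, without collecting and sorting the full line lists.
    return sum(
        int.from_bytes(hashlib.sha256(line.encode()).digest()[:8], "big")
        for line in lines
    )

assert line_hash_sum(["a", "b", "c"]) == line_hash_sum(["c", "a", "b"])
assert line_hash_sum(["a", "b"]) != line_hash_sum(["a", "z"])
```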

What have I learned: That a call to diff is actually the slowest part of the diffoscope run when done on two big text files. Could it help somehow in speeding it up? I don’t know yet.

I also learned to comment on bugs in the Debian bug tracker and was surprised by how much feedback I got. Thanks to my mentor for pushing me to do that - I definitely need to overcome my fear of communication to be more effective!

Random FTBFS

There was also a very nasty bug that caused diffoscope to fail to be built from source randomly, failing with the non-informative Fatal Python error: deallocated None. It already seemed strange when it was first reported; it only got stranger when that bug suddenly ceased to be reproducible. We hoped that would mean the bug was caused by some external tool, and was fixed there. Turns out it was not that easy. I tested this on two separate computers and on a virtual machine; I used different versions of diffoscope. Well. It seems like the bug is still somehow tied to the diffoscope version and not to some external tool's version - I can still do git checkout 64 and be able to reproduce the bug (still randomly, though).

Although I spent quite a lot of time on that one, the only result was the information about the connection between bug appearances and the diffoscope version. I still wasn’t able to get to the root of the problem - hopefully, someone else will be able to, given the information I found.

What have I learned: git-bisect! Thanks to my friend for pointing me to it; that tool came in handy in that situation. I also got some experience in catching nasty bugs like that (a pity I got no experience in squashing them).

I had some extra time commitments in December, one of them (Reproducible Builds Summit II) connected to my internship and one (my exam session at university) not. In January, I should be able to allocate more time to this work - I hope it will help me achieve more significant results.

Many thanks to Mattia Rizzolo, Chris Lamb, Holger Levsen and all other folks of Reproducible Builds project - I cannot stress enough how important your support is to me.

Wish you all a great 2017!

03 January, 2017 12:00AM

Elizabeth Ferdman

4 Week Progress Update for PGP Clean Room

Happy New Year Everyone!

Aside from taking some time off for the holidays, I set up a Debian-Sid USB stick in order to test gnupg version 2.1.16-3, the version to be included in Debian Stretch. For now, I’m using the package rng-tools to speed up the key creation for the purpose of testing gpg commands. By running sudo rngd -r /dev/urandom before the gpg command, you can create the keys in about a second.

Here are some of the sources that I’ve been using that inform the workflow and secure practices for gpg that we’ll be including in the Clean Room:

Some feature suggestions that were made by Neal Walfield that could be included in the workflow:

  1. Use a smartcard for the primary key and a smartcard for the subkeys

  2. Support subkey rotation: the creation of new subkeys

  3. Upon finishing a session, write a script to the USB that sends mails with the signed keys and imports the user’s public keys.

03 January, 2017 12:00AM

January 02, 2017

hackergotchi for Shirish Agarwal

Shirish Agarwal

India Tourism, E-Visa and Hong Kong

A Safe and Happy New Year to all.

While Debconf India is still a pipe-dream as of now, I did see that India has gradually been making it easier for tourists and casual business visitors to visit. This I take as a very positive development for India itself.

The 1st condition is itself good for anybody visiting India –

Eligibility

International Travellers whose sole objective of visiting India is recreation , sight-seeing , casual visit to meet friends or relatives, short duration medical treatment or casual business visit.

https://indianvisaonline.gov.in/visa/tvoa.html

That this facility is being given to 130 odd countries is better still –

Albania, Andorra, Anguilla, Antigua & Barbuda, Argentina, Armenia, Aruba, Australia, Austria, Bahamas, Barbados, Belgium, Belize, Bolivia, Bosnia & Herzegovina, Botswana, Brazil, Brunei, Bulgaria, Cambodia, Canada, Cape Verde, Cayman Island, Chile, China, China- SAR Hong-Kong, China- SAR Macau, Colombia, Comoros, Cook Islands, Costa Rica, Cote d’lvoire, Croatia, Cuba, Czech Republic, Denmark, Djibouti, Dominica, Dominican Republic, East Timor, Ecuador, El Salvador, Eritrea, Estonia, Fiji, Finland, France, Gabon, Gambia, Georgia, Germany, Ghana, Greece, Grenada, Guatemala, Guinea, Guyana, Haiti, Honduras, Hungary, Iceland, Indonesia, Ireland, Israel, Jamaica, Japan, Jordan, Kenya, Kiribati, Laos, Latvia, Lesotho, Liberia, Liechtenstein, Lithuania, Luxembourg, Madagascar, Malawi, Malaysia, Malta, Marshall Islands, Mauritius, Mexico, Micronesia, Moldova, Monaco, Mongolia, Montenegro, Montserrat, Mozambique, Myanmar, Namibia, Nauru, Netherlands, New Zealand, Nicaragua, Niue Island, Norway, Oman, Palau, Palestine, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Poland, Portugal, Republic of Korea, Republic of Macedonia, Romania, Russia, Saint Christopher and Nevis, Saint Lucia, Saint Vincent & the Grenadines, Samoa, San Marino, Senegal, Serbia, Seychelles, Singapore, Slovakia, Slovenia, Solomon Islands, South Africa, Spain, Sri Lanka, Suriname, Swaziland, Sweden, Switzerland, Taiwan, Tajikistan, Tanzania, Thailand, Tonga, Trinidad & Tobago, Turks & Caicos Island, Tuvalu, UAE, Ukraine, United Kingdom, Uruguay, USA, Vanuatu, Vatican City-Holy See, Venezuela, Vietnam, Zambia and Zimbabwe.

This should make it somewhat easier for any Indian organizer as well as for participants from any of the countries listed. There is a possibility that this list will get even longer, provided we are able to scale our airports and any other infrastructure needed for international visitors to have a good experience.

What has been particularly interesting is to know which ports of call are being used by International Visitors as well as overall growth rate –

The Percentage share of Foreign Tourist Arrivals (FTAs) in India during November, 2016 among the top 15 source countries was highest from USA (15.53%) followed by UK (11.21%), Bangladesh (10.72%), Canada (4.66%), Russian Fed (4.53%), Australia (4.04%), Malaysia (3.65%), Germany (3.53%), China (3.14%), France (2.88%), Sri Lanka (2.49%), Japan (2.49%), Singapore (2.16%), Nepal (1.46%) and Thailand (1.37%).

And port of call –

The Percentage share of Foreign Tourist Arrivals (FTAs) in India during November 2016 among the top 15 ports was highest at Delhi Airport (32.71%) followed by Mumbai Airport (18.51%), Chennai Airport (6.83%), Bengaluru Airport (5.89%), Haridaspur Land check post (5.87%), Goa Airport (5.63%), Kolkata Airport (3.90%), Cochin Airport (3.29%), Hyderabad Airport (3.14%), Ahmadabad Airport (2.76%), Trivandrum Airport (1.54%), Trichy Airport (1.53%), Gede Rail (1.16%), Amritsar Airport (1.15%), and Ghojadanga land check post (0.82%) .

The Ghojadanga land check post seems to be between West Bengal, India and Bangladesh. Gede Railway Station is in West Bengal as well. So any overlanders could take any of those routes. Even the Haridaspur land check post lies on the Bengal-Bangladesh border.

Among the airports, Delhi Airport seems to be attracting a lot more business than Mumbai Airport. Part of the reason, I *think*, is the direct link from Delhi Airport to NDLS via the Delhi Airport Express Line. When the same happens in Mumbai, it should be a game-changer for that city too.

Now if you are wondering why I have suddenly been talking about visas and airports in India, it came about because Hong Kong is going to Withdraw Visa Free Entry Facility For Indians. Although, as rightly pointed out in the article, it doesn’t make sense from an economic POV and seems to be somewhat politically motivated. Not that I or anybody else can do anything about that.

Seeing that, I thought it was a good opportunity to see how good/bad our Government is doing, and it seems to be on the right path. The hawks (intelligence and counter-terrorist agencies) will probably become a bit more paranoid, though, as their work becomes tougher.


Filed under: Miscellenous Tagged: #Airport Metro Line 3, #CSIA, #Incredible India, #India, #International Tourism

02 January, 2017 11:21PM by shirishag75

hackergotchi for Ross Gammon

Ross Gammon

Happy New Year – My Free Software activities in December 2016

So that was 2016! Here’s a summary of what I got up to on my computer(s) in December, a check of how I went against my plan, and the TODO list for the next month or so.

With a short holiday to Oslo, Christmas holidays, Christmas parties (at work and with Alexander at school, football etc.), travelling to Brussels with work, birthdays (Alexander & Antje), I missed a lot of deadlines, and failed to reach most of my Free Software goals (including my goals for new & updated packages in Debian Stretch – the soft freeze is in a couple of days). To top it all off, I lost my grandmother at the ripe old age of 93. Rest in peace Nana. I wish I could have made it to the funeral, but it is sometimes tough living on the other side of the world to your family.

Debian

Ubuntu

  • Added the Ubuntu Studio testsuites to the package tracker, and blogged about running the Manual Tests.

Other

Plan status & update for next month

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

For the Debian Stretch release:

Ubuntu

  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages. – Done
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Still to do
  • Reapply to become a Contributing Developer. – Still to do
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Still to do
  • Start testing & bug triaging Ubuntu Studio packages.
  • Test Len’s work on ubuntustudio-controls

Other

  • Continue working to convert my Family History website to Jekyll – Done
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.
  • Give JMRI a good try out and look at what it would take to package it.

02 January, 2017 10:58PM by Ross Gammon

hackergotchi for Santiago García Mantiñán

Santiago García Mantiñán

ScreenLock on Jessie's systemd

Something I was used to, and which came as standard on wheezy if you installed acpi-support, was screen locking when you were suspending, hibernating, ...

This is something that I still haven't found on Jessie, and which somebody had pointed me to solve via /lib/systemd/system-sleep/whatever hacking, but that didn't seem quite right. So I gave it another look, and this time I was able to add some config files at /etc/systemd and then a script which does what acpi-support used to do before.

Edit: Michael Biebl has suggested on my google+ post that this is an ugly hack and that one shouldn't use this solution; instead what we should use are solutions with direct support for logind, like desktops with built-in support or xss-lock. The reasons for this being ugly are pointed out in this bug

Edit (2): I've just done the recommended thing for LXDE but it should be similar for any other desktop or window manager lacking logind integration, you just need to apt-get install xss-lock and then add @xss-lock -- xscreensaver-command --lock to .config/lxsession/LXDE/autostart or do it through lxsession-default-apps on the autostart tab. Oh, btw, you don't need acpid or the acpi-support* packages with this setup, so you can remove them safely and avoid weird things.

The main thing here is this little config file: /etc/systemd/system/screenlock.service

[Unit]
Description=Lock X session
Before=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/screenlock.sh

[Install]
WantedBy=sleep.target

This config file is activated by running: systemctl enable screenlock

As you can see that config file calls /usr/local/sbin/screenlock.sh which is this little script:

#!/bin/sh
# This depends on acpi-support being installed
# and on /etc/systemd/system/screenlock.service
# which is enabled with: systemctl enable screenlock
test -f /usr/share/acpi-support/state-funcs || exit 0
. /etc/default/acpi-support
. /usr/share/acpi-support/power-funcs
if [ x$LOCK_SCREEN = xtrue ]; then
    . /usr/share/acpi-support/screenblank
fi

The script of course needs execution permissions. I tend to combine this with my power button making the machine hibernate, which was also easier to do before and which is now done at /etc/systemd/logind.conf (doesn't the name already tell you?) where you have to set: HandlePowerKey=hibernate

And that's all.

02 January, 2017 09:07PM by Santiago García Mantiñán ([email protected])

hackergotchi for Markus Koschany

Markus Koschany

My Free Software Activities in December 2016

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android

Debian Games

  • We have entered the final straight for Stretch, so I kept a close eye on new game releases and bug reports in packages which I think should be part of the next stable release. Bzflag, a tank battling game that can be played in the first-person perspective, is certainly one of them, and it has arrived in version 2.4.8. I also packaged new releases of trigger-rally, a racing game, Renpy, pygame-sdl2 and Minetest.
  • Bálint Réczey introduced libopenhmd to Debian a while ago and asked me in #845657 to enable OpenHMD support for neverball. Neverball is now the first game in the archive, at least as far as I know, that is ready for virtual reality. I have never tried it though because I don’t own the necessary gear from Oculus myself but it sounds like a cool feature.
  • A user of caveexpress reported a bug (#847147) in one level that prevented him from finishing it. I forwarded this one to upstream and he was able to quickly fix the issue and I could release 2.4+git20160609-3 later.
  • I triaged several RC bugs which were reported against our D language games and it turned out that the bug was in gdc (#845377).
  • I also made some small improvements to monopd‘s packaging and applied a patch from Laurent Bigonville to Freeciv that corrected a problem with AppData files (#848720).
  • I worked around another RC FTBFS bug in spring (#846921) which is apparently a regression in binutils (#847356) but its maintainer does not consider this to be release critical.
  • I tried to fix #848063 in ri-li but it seems to surface again under special circumstances. Since compilation works on all buildds for all release architectures and on my systems I downgraded the severity to important.
  • I uploaded Bullet 2.85.1 to experimental. It is currently waiting in the NEW queue due to the SONAME bump and because I decided to simplify the packaging. I don’t think it is worth it any longer to provide several standalone binary packages. All Bullet 2 and 3 core libraries can be found in libbullet2.85 now while all the extra stuff is part of libbullet-extras2.85.
  • Last but not least I released debian-games 1.7 and updated the list of games. Castle Combat was removed this month from Debian.

Debian Java

Debian LTS

This was my tenth month as a paid contributor and I have been paid to work 13.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 12 December until 18 December I was in charge of our LTS frontdesk. I triaged bugs in jasper, openjdk-6, bluez, game-music-emu, simplesamlphp, imagemagick, nagios3, most, rabbitmq-server, html5lib and dcmtk.
  • DLA-742-1. Issued a security update for chrony fixing 1 CVE. This update was prepared by Vincent Blut.
  • DLA-745-1. Issued a security update for most fixing 1 CVE.
  • DLA-746-1. Issued a security update for tomcat6 fixing 1 CVE and two regressions from previous updates which were reported to Debian’s bug tracker.
  • DLA-747-1. Issued a security update for libupnp fixing 1 CVE.
  • DLA-748-1. Issued a security update for libupnp4 fixing 1 CVE.
  • DLA-746-2. Issued a regression update for tomcat6.
  • DLA-753-1. Issued a security update for tomcat7 fixing 1 CVE and three regressions that were similar in nature to the ones fixed in Tomcat 6.
  • DLA-761-1. Issued a security update for python-bottle fixing 1 CVE.
  • DLA-763-1. Issued a security update for squid3 fixing 1 CVE.
  • DLA-766-1. Issued a security update for libcrypto++ fixing 1 CVE.
  • I also worked on two CVEs for Asterisk, an Open Source PBX and telephony toolkit. The work is done and can currently be found at this location. I asked on the debian-lts mailing list for feedback and testing and already got some positive feedback. I will wait a few more days before I release the security update.

Non-maintainer uploads

  • I did two NMUs this month. I sponsored an upload of libtorrent for Peter Pentchev fixing #828414 and I fixed a trivial bug in gnash myself (#845847).

02 January, 2017 07:23PM by Apo

Dimitri John Ledkov

Ubuntu Archive and CD/USB images complete migration to 4096 RSA signing keys


Enigma machine photo by Alessandro Nassiri [CC BY-SA 4.0], via Wikimedia Commons

Ubuntu Archive and CD/USB image use OpenPGP cryptography for verification and integrity protection. In 2012, a new archive signing key was created and we have started to dual-sign everything with both old and new keys.

In April 2017, Ubuntu 12.04 LTS (Precise Pangolin) will go end of life. Precise was the last release that was signed with just the old signing key. Thus when Zesty Zapus is released as Ubuntu 17.04, there will no longer be any supported Ubuntu release that requires the 2004 signing keys for validation.

The Zesty Zapus release is now signed with just the 2012 signing key, which is a 4096-bit RSA key. The old 2004 signing keys, which were 1024-bit DSA, have been removed from the default keyring and are no longer trusted by default in Zesty and up. The old keys are available in the removed keys keyring in the ubuntu-keyring package, for example in case one wants to verify things from old-releases.ubuntu.com.

Thus the signing key transition is coming to an end. Looking forward, I hope that by the 18.04 LTS time-frame the SHA-3 algorithm will make its way into the OpenPGP spec and that we will possibly start a transition to 8192-bit RSA keys. But this is just wishful thinking, as the current key strength, algorithm, and hashsums are deemed to be sufficient.

02 January, 2017 01:54PM by Dimitri John Ledkov ([email protected])

January 01, 2017

Russ Allbery

2016 Book Reading in Review

So, I did not accomplish my reading goal for 2016 (reading and reviewing more books in 2016 than I did in 2015). Many things contributed to that, but the root cause was that I didn't make enough time for reading. Much of the time that could have gone to reading went to playing Hearthstone (a good thing) and obsessing over the 2016 US election (mostly a waste of time and particularly energy, although I'm not sure I could have stopped). That said, I did get quite a lot of reading done at the end of the year, and I'm hoping to keep up that momentum for next year.

In 2016, I did a lot of re-reading and comfort reading. I'm probably going to continue with some of the re-reading in 2017, since I'm enjoying it, but my reading goal for the year is to get back to reading award nominees and previous award winners. There's so much great new stuff being published that I want to discover. I'm not going to set an explicit goal around number of books, but I am going to make an effort to carve out more time in my schedule for reading books (and less for reading on-line news).

This was another year with two 10 out of 10 books. One of them was a re-read: Lord of Emperors, the second book of Guy Gavriel Kay's Sarantine Mosaic. (I also re-read the first book this year, Sailing to Sarantium, and gave it a 9.) I like nearly all of Kay's historical fantasies, but this duology is one of my personal favorites.

The second 10 out of 10 book was a complete surprise: A Man Called Ove by Fredrik Backman (translated by Henning Koch). My mother found this book and suggested it to me, and I loved every moment of it. I will definitely be reading more of Backman's work.

There were two more fiction standouts this year: Digger by Ursula Vernon, and The Philosopher Kings by Jo Walton. The first is a graphic novel about a wombat who is trying to make her way home from an unexpected detour into a mess of magic and gods. The second is the middle book in a trilogy about an attempt to construct Plato's Just City and all of the philosophical and social problems that ensue (with some bonus science fiction and fantasy elements). Both of them are excellent. Walton is consistently one of my favorite authors, and Ursula Vernon was my great discovery of a new author to read this year. (Not that I've followed through on that much, the year in reading being what it was, but I will be doing so.)

My favorite non-fiction book of the year continues my interest in time management in general and Mark Forster's approaches in particular. Secrets of Productive People was the last book I reviewed this year (just a coincidence, not any intentional attempt to set things up for next year) and the best version of his overall approach to date. If you've not read any of Forster based on my previous recommendations, this is a good place to start.

Also worth mentions were Jeffrey Toobin's The Run of His Life, on the O.J. Simpson case, and Andrew Groen's The Empires of EVE, on the history of player empires in the EVE Online MMORPG. I Kickstarted the latter and didn't regret it.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2017 11:42PM

Tim Retout

Happy New Year!

Happy New Year!

Apparently I failed to write a blog entry in all of 2016, and almost all of 2015. Probably says something profound about the rise of social media, or perhaps I was just very busy. I bet my writing has suffered.

I have spent the last few days tidying up and clearing out clothes, bits of paper, and wires. I think there's light at the end of the tunnel.

01 January, 2017 11:17PM

Lucy Wayland

The Red Shoes

Just been watching the video for Kate Bush “The Red Shoes” (I have actually seen the 1948 film). I came to a strange realisation. Activism, especially LGBTQ activism, is like the Red Shoes. When you put them on, you dance their dance, and you can never take them off.

I wonder how many other people have had this happen to them, and understand.


01 January, 2017 04:49PM by aardvarkoffnord

hackergotchi for Junichi Uekawa

Junichi Uekawa

Tried writing an app with WebAudio.

Tried writing an app with WebAudio. I haven't found a way to easily write musical things with it, and so far I have only seen raw-Hz and millisecond-wait kinds of APIs. It feels pretty raw. My test app that asks you what note was played is here.
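
Since the API speaks raw frequencies, an app like this has to do the note-to-Hz conversion itself. A minimal sketch of the standard equal-temperament formula (my own illustration, not code from the app):

```python
def note_to_hz(midi_note):
    # Equal-temperament tuning: A4 (MIDI note 69) is 440 Hz, and each
    # semitone multiplies the frequency by 2 ** (1/12).
    return 440.0 * 2 ** ((midi_note - 69) / 12)

assert round(note_to_hz(69)) == 440        # A4
assert round(note_to_hz(60), 1) == 261.6   # middle C
```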

01 January, 2017 12:26PM by Junichi Uekawa

hackergotchi for Joey Hess

Joey Hess

p2p dreams

In one of the good parts of the very mixed bag that is "Lo and Behold: Reveries of the Connected World", Werner Herzog asks his interviewees what the Internet might dream of, if it could dream.

The best answer he gets is along the lines of: The Internet of before dreamed a dream of the World Wide Web. It dreamed some nodes were servers, and some were clients. And that dream became current reality, because that's the essence of the Internet.

Three years ago, it seemed like perhaps another dream was developing post-Snowden, of dissolving the distinction between clients and servers, connecting peer-to-peer using addresses that are also cryptographic public keys, so authentication and encryption and authorization are built in.

Telehash is one hopeful attempt at this; others include snow, cjdns, i2p, etc. So far, none of them seem to have developed into a widely used network, although any of them still might get there. There are a lot of technical challenges due to the current Internet dream/nightmare, where the peers on the edges face multiple barriers to connecting to other peers.

But, one project has developed something similar to the new dream, almost as a side effect of its main goals: Tor's onion services.

I'd wanted to use such a thing in git-annex, for peer-to-peer sharing and syncing of git-annex repositories. On November 13th, I started building it, using Tor, and I'm releasing it concurrently with this blog post.

git-annex's Tor support replaces its old hack of tunneling the git protocol over XMPP. That hack was unreliable (it needed a TCP-over-XMPP layer), but worse, the XMPP server could see all the data being transferred. And there are fewer large XMPP servers these days, so fewer users could use it at all. If you were using XMPP with git-annex, you'll need to switch to either Tor or a server accessed via ssh.

Now git-annex can serve a repository as a Tor onion service, and that can then be accessed as a git remote, using an url like tor-annex::tungqmfb62z3qirc.onion:42913. All the regular git, and git-annex commands, can be used with such a remote.

Tor has a lot of goals for protecting anonymity and privacy. But the important things for this project are just that it has end-to-end encryption, with addresses that are public keys, and allows P2P connections. Building an anonymous file exchange on top of Tor is not my goal -- if you want that, you probably don't want to be exchanging git histories that record every edit to the file and expose your real name by default.

Building this was not without its difficulties. Tor onion services were originally intended to run hidden websites, not to connect peers to peers, and this kind of shows.

Tor does not cater to end users setting up lots of onion services. Either root has to edit the torrc file, or the Tor control port can be used to ask it to set one up. But the control port is not enabled by default, so you still need to su to root to enable it. Also, it's difficult to find a place to put the hidden service's unix socket file that's writable by a non-root user. So I had to code around this, with a git annex enable-tor command that su's to root and sets it all up for you.

One interesting detail about the implementation of the P2P protocol in git-annex is that it uses two Free monads to build up actions. There's a Net monad which can be used to send and receive protocol messages, and a Local monad which allows only the necessary modifications to files on disk. Interpreters for Free monad actions can choose exactly which actions to allow for security reasons.

For example, before a peer has authenticated, the P2P protocol is being run by an interpreter that refuses to run any Local actions whatsoever. Other interpreters for the Net monad could be used to support other network transports than Tor.
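The effect of swapping interpreters can be sketched with a loose Python analogue (git-annex itself is Haskell, and the action and function names here are made up purely for illustration):

```python
# Protocol handlers emit abstract actions; the chosen interpreter
# decides which of them it is willing to execute.

class NetAction:
    """Send a protocol message over the network."""
    def __init__(self, message):
        self.message = message

class LocalAction:
    """Modify a file on local disk."""
    def __init__(self, path, data):
        self.path = path
        self.data = data

def unauthenticated_interpreter(action):
    """Before authentication: allow network messages, refuse any
    modification of local state."""
    if isinstance(action, LocalAction):
        raise PermissionError("local actions refused before authentication")
    return "sent: " + action.message

def authenticated_interpreter(action, disk):
    """After authentication: both kinds of action are allowed."""
    if isinstance(action, LocalAction):
        disk[action.path] = action.data  # simulated write to disk
        return "wrote: " + action.path
    return "sent: " + action.message
```

The protocol code only ever emits actions; which of them actually run is entirely the chosen interpreter's decision, so the security policy lives in one place.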

When two peers are connected over Tor, one knows it's talking to the owner of a particular onion address, but the other peer knows nothing about who's talking to it, by design. This makes authentication harder than it would be in a P2P system with a design like Telehash. So git-annex does its own authentication on top of Tor.

With this authentication scheme, users would need to exchange absurdly long addresses (over 150 characters) to connect their repositories. One very convenient thing about using XMPP was that a user already had connections to their friends' accounts, so it was easy to share with them. Exchanging long addresses by hand is too hard.

This is where Magic Wormhole saved the day. It's a very elegant way to get any two peers in touch with each other, and the users only have to exchange a short code phrase, like "2-mango-delight", which can only be used once. Magic Wormhole makes some security tradeoffs for this simplicity. It's got vulnerabilities to DoS attacks, and its MITM resistance could be improved. But I'm lucky it came along just in time.

So, it takes only installing Tor and Magic Wormhole, running two git-annex commands, and exchanging short code phrases with a friend, perhaps over the phone or in an encrypted email, to get your git-annex repositories connected and syncing over Tor. See the documentation for details. Also, the git-annex webapp allows setting the same thing up point-and-click style.

The Tor project blog has throughout December been featuring all kinds of projects that are using Tor. Consider this a late bonus addition to that. ;)

I hope that Tor onion services will continue to develop to make them easier to use for peer-to-peer systems. We can still dream a better Internet.


This work was made possible by all my supporters on Patreon.

01 January, 2017 03:59AM

Russ Allbery

Review: Secrets of Productive People

Review: Secrets of Productive People, by Mark Forster

Publisher: Teach Yourself
Copyright: 2015
ISBN: 1-4736-0885-6
Format: Kindle
Pages: 289

Regular readers of my reviews will know that Mark Forster is my favorite writer on time management and productivity. That's mostly because of his flexible toolkit approach that talks about theory and overall goals and then describes multiple ways to get there, rather than presenting a single system that will solve all your problems. There are a lot of writers who explain productivity tips and tricks or describe systems that work for them. There are fewer who can explain why those tricks work (and why they sometimes don't work), and even fewer who can put them into a meaningful analytical framework for thinking about productivity.

Forster has several books, but they're a mixed bag. His clearest and most coherent book prior to this one was Do It Tomorrow. Secrets of Productive People is organized differently, chopped up into small bite-sized chunks with synopses to an extent that it felt a bit choppy to me, but apart from that it's the closest I've seen to an updating of Do It Tomorrow. His other books can be slight (The Pathway to Awesomeness) or downright weird (How to Make Your Dreams Come True). I still have a soft spot for Do It Tomorrow, and I like how it was organized a bit better than this one, but I think Secrets of Productive People has become my new recommendation for where to start with Forster.

Secrets of Productive People is divided into five sections: The basics of productivity, the productive attitude, productive projects, aids to productivity, and productivity in action. Each part is divided into several small chapters, which open with a generous helping of quotes about some productivity topic (and I'm going to go back and save some of those), present an easily digestible set of related thoughts (usually with an exercise), and conclude with a summary. Each chapter is about the length of a long blog post. I think this structure interferes with developing an idea at greater length, but it does make for good reading material in an environment where you're regularly interrupted or only have five or ten minutes.

I think I've mentioned in every review that Forster won me over by being willing to talk about the problems with attempting to do too much, not just presenting a system to allow one to accomplish more. Some other books, such as David Allen's famous Getting Things Done, seem to assume you already know what you need to get done, or will easily be able to figure that out when you think for a while, and just need a system to manage all the things you've decided to do. Forster takes the opposite approach, and this book is the clearest yet on this point: most productivity problems are not from being insufficiently efficient, but from doing the wrong things and too many things. You don't need more time; everyone gets the same amount of time. You need to do the right things with the time you have, and that usually means doing fewer things.

Readers of Forster's previous books will recognize many of the themes here. Some of the techniques from Do It Tomorrow and some of the exercises from Get Everything Done show up again here. But Forster has streamlined and focused the advice, discarded some things, made his task management recommendations less elaborate and more focused, and spends much more time hammering home the point that the only prioritization that really matters is whether you commit to doing something or don't.

The specific task management system he recommends here is one of the variations he's been talking about in his blog and is much simpler than the Do It Tomorrow system: pick five tasks, work on them until you've finished three of them, and then refill to five tasks. I've been using it since reading this book, replacing one of Forster's more elaborate systems from his blog, and it's surprisingly effective. He breaks down in some detail why this works and how to extract additional useful feedback information from it, and now I want to do some of the additional exercises he describes. (That said, I'm still dubious about his advice to not keep any larger to-do list and only rely on your analysis at the time of refilling the list to decide what to do. I use this system with a supplemental, longer list of ideas for future tasks, and that works better for me, although I do have to fight forming a sense of obligation about the things on the longer list. David Allen still has a point that if you don't write down a complete inventory of the things you're worrying about, your brain will try to obsess over them to keep from forgetting them.)

The productive projects section told me some things I needed to hear about time commitments. Projects take regular, focused attention, and starting numerous things without giving them that attention is much worse than doing far fewer things but doing them regularly and reliably. I think Forster's coaching on focus and persistence is very valuable; the trick, of course, is building up your willpower for it and learning to say no. This is in-line with recent psychological analysis of multi-tasking and its various negative effects. The working to-do list capped at five things provides enough variety to mentally shift gears if one runs out of steam on some specific project while maintaining enough focus to not leave things behind half-done.

The productivity aids section provides a more random collection of tips and tricks, many of which I've seen before in his previous books. I've mostly not tried these, so I can't say much about how effective they are, but Forster's ideas almost always sound interesting and plausible.

I thought the weakest part of the book was the last section, on applied productivity. Here, Forster takes various life areas (exercise, parenting, finances, writing, etc.) and talks about how the principles of this book can be applied to them. Each area could be a book in itself, and the short essay format of these chapters doesn't do justice to large topics. The result is a rather repetitive section that just stresses analysis, metrics, and repeated, focused attention — all valid points, but ones you can pick up from the previous chapters. I don't think these case studies added much value.

Do It Tomorrow is still my favorite Forster book to read, but I think Secrets of Productive People is now the best and most polished overview of his time management and productivity approach. If you're interested in this topic and not already sick of Forster from my previous recommendations, this would be a great place to start. I got a lot out of it even with all the time management reading I've previously done, and will probably re-read sections of it and try more of the exercises.

Rating: 9 out of 10

01 January, 2017 03:54AM

Sam Hartman

2016

I was in such a different place at the beginning of 2016: I was poised to continue to work to help the world find love. Professionally, I was ready to make a much needed transition and find new projects to work on.

The year 2016 sucked. It feels like the year was filled with many different versions of the universe saying "Not interested in what you have to offer." At the beginning of the year, I had the energy to try and reach across large disagreements and help find common ground even when compromise was not possible. Now, my blog lies fallow because I cannot find the strength to be vulnerable enough to write what I would choose to say. Certainly a lot of the global changes of the last year have felt like a strong rejection of the world I'd like to see. However, many of the rejections have been personal. Beyond that, most of the people who stood as pillars of support in my life, together helping me find the strength to be vulnerable, are no longer available.

When the universe sends such strong messages, it's a good idea to ask whether you are on the right path. I certainly have discovered training I need and things I need to improve in order to avoid making costly mistakes that hurt others. However, among the rejections were clear demonstrations of the value of reaching out with love and compassion. Besides, this is what I'm called to do. It's what I want to do. I certainly will not force it on anyone. But it looks like the next few years may be a hard struggle to find pockets of people interested in that work, finding people who will choose love even in the current world, along with some difficult training to learn from challenges of the last year.

Amongst all this, my life is filled with love. There are new connections even as old connections are strained. There is always the hope of finding new ways to connect when the old ones are no longer right. I will rebuild and regain safety. I have the tools to do that. The process is just long and complicated.

01 January, 2017 12:49AM

December 31, 2016

Enrico Zini

Steve Kemp

So I'm gonna start doing arduino-things

Since I've got a few weeks off I've decided I need to find a project, or two, to occupy me. Happily the baby is settling in well, mostly he sleeps for 4-5 hours, then eats, before the cycle repeats. It could have been so much worse.

My plan is to start exploring Arduino-related projects. It has been years since I touched hardware, with the exception of building a new PC for myself every 12-48 months.

There are a few "starter kits" you can buy, consisting of a board, and some discrete components such as a bunch of buttons, an LCD-output screen, some sensors (pressure, water, tilt), etc.

There are also some nifty little pre-cooked components you can buy.

The appeal of the former is that I can get the hang of marrying hardware with software, and the appeal of the latter is that the whole thing is pre-built, so I don't need to worry about anything complex. Looking over similar builds people have made, the process is more akin to building with Lego than to real hardware assembly.

So, for the next few weeks my plan is to :

  • Explore the various sensors, and tutorials, via the starter-kit.
  • Wire the MP3-playback device to a wireless D1-mini-board.
    • Which will allow me to listen to (static) music stored on an SD-card.
    • And sending "next", "previous", "play", "volume-up", etc, via a mobile.

The end result should be that I will be able to listen to music in my living room. Albeit in a constrained fashion (if I want to change the music I'll have to swap out the files on the SD-card). But it's something that's vaguely useful, and something that I think is within my capability, even as a beginner.

I'm actually not sure what else I could usefully do, but I figured I could probably wire up a vibration sensor to another wireless board. The device can sit on the top of my washing machine:

  • If vibration is sensed move into the "washing is on" state.
    • If vibration stops after a few minutes move into the "washing machine done" state.
      • Send an HTTP GET request, which will trigger an SMS or similar.

There's probably more to it than that, but I expect that a simple vibration sensor will be sufficient to allow me to get an alert of some kind when the washing machine is ready to be emptied - and I don't need to poke inside the guts of the washing machine, nor hang reed-switches off the door, etc.
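As a rough illustration (hypothetical state names and quiet-period threshold; a real Arduino build would express the same logic in C++ inside loop()), the state transitions above might look like:

```python
# Washing-machine watcher: enter "washing" when vibration is sensed,
# and "done" once no vibration has been seen for a quiet period.

IDLE, WASHING, DONE = "idle", "washing", "done"
QUIET_PERIOD = 180  # ticks (seconds) without vibration before we call it done

def step(state, vibrating, quiet_ticks):
    """Advance the state machine one tick.

    Returns (new_state, new_quiet_ticks, notify), where notify is True
    exactly once: when the HTTP GET / SMS alert should fire.
    """
    if vibrating:
        return WASHING, 0, False          # any shake resets the quiet timer
    if state == WASHING:
        quiet_ticks += 1
        if quiet_ticks >= QUIET_PERIOD:
            return DONE, quiet_ticks, True  # machine has gone quiet: alert
        return WASHING, quiet_ticks, False
    return state, quiet_ticks, False       # idle or already done: nothing to do
```

Debouncing the sensor this way means a brief pause mid-cycle (e.g. between wash and spin) doesn't trigger a false "done" alert.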

Anyway the only downside to my plan is that no doubt shipping the toys from AliExpress will take 2-4 weeks. Oops.

31 December, 2016 07:11PM

Jonathan McDowell

IMDB Top 250: Complete. Sort of.

Back in 2010, inspired by Juliet, I set about doing 101 things in 1001 days. I had various levels of success, but one of the things I did complete was the aim of watching half of the IMDB Top 250. I didn’t stop at that point, but continued to work through it at a much slower pace until I realised that through the Queen’s library I had access to quite a few DVDs of things I was missing, and that it was perfectly possible to complete the list by the end of 2016. So I did.

I should point out that I didn’t set out to watch the list because I’m some massive film buff. It was more a mixture of watching things that I wouldn’t otherwise choose to, and also watching things I knew were providing cultural underpinnings to films I had already watched and enjoyed. That said, people have asked for some sort of write up when I was done. So here are some random observations, which are almost certainly not what they were looking for.

My favourite film is not in the Top 250

First question anyone asks is “What’s your favourite film?”. That depends a lot on what I’m in the mood for really, but fairly consistently my answer is The Hunt for Red October. This has never been in the Top 250 that I’ve noticed. Which either says a lot about my taste in films, or the Top 250, or both. Das Boot was in the list and I would highly recommend it (but then I like all submarine movies it seems).

The Shawshank Redemption is overrated

I can’t recall a time when The Shawshank Redemption was not top of the list. It’s a good film, and I’ve watched it many times, but I don’t think it’s good enough to justify its seemingly unbroken run. I don’t have a suggestion for a replacement, however.

The list is constantly changing

I say I’ve completed the Top 250, but that’s working from a snapshot I took back in 2010. Today the site is telling me I’ve watched 215 of the current list. Last night it was 214 and I haven’t watched anything in between. Some of those are films released since 2010 (in particular new releases often enter high and then fall out of the list over a month or two), but the current list has films as old as 1928 (The Passion of Joan of Arc) that weren’t there back in 2010. So keeping up to date is not simply a matter of watching new releases.

The best way to watch the list is terrestrial TV

There were various methods I used to watch the list. Some I’d seen in the cinema when they came out (or was able to catch that way anyway - the QFT showed Duck Soup, for example). Netflix and Amazon Video had some films, but overall a very disappointing percentage. The QUB Library, as previously mentioned, had a good number of DVDs on the list (especially the older things). I ended up buying a few (Dial M for Murder on 3D Bluray was well worth it; it’s beautifully shot and unobtrusively 3D), borrowed a few from friends and ended up finishing off the list by a Lovefilm one month free trial. The single best source, however, was UK terrestrial TV. Over the past 6 years Freeview (the free-to-air service here) had the highest percentage of the list available. Of course this requires some degree of organisation to make sure you don’t miss things.

Films I enjoyed

Not necessarily my favourite, but things I wouldn’t have necessarily watched and was pleasantly surprised by. No particular order, and I’m leaving out a lot of films I really enjoyed but would have got around to watching anyway.

  • Clint Eastwood films - Gran Torino and Million Dollar Baby were both excellent but neither would have appealed to me at first glance. I hated Unforgiven though.
  • Jimmy Stewart. I’m not a fan of It’s a Wonderful Life (which I’d already watched because it’s Lister’s favourite film), but Harvey is obviously the basis of lots of imaginary friend movies and Rear Window explained a Simpsons episode (there were a lot of Simpsons episodes explained by watching the list).
  • Spaghetti Westerns. I wouldn’t have thought they were my thing, but I really enjoyed the Sergio Leone films (A Fistful of Dollars etc.). You can see where Tarantino gets a lot of his inspiration.
  • Foreign language films. I wouldn’t normally seek these out. And in general it seems I cannot get on with Italian films (except Life is Beautiful), but Amores Perros, Amelie and Ikiru were all better than expected.
  • Kind Hearts and Coronets. For some reason I didn’t watch this until almost the end; I think the title always put me off. Turned out to be very enjoyable.

Films I didn’t enjoy

I’m sure these mark me out as not being a film buff, but there are various things I would have turned off if I’d caught them by accident rather than setting out to watch them.

I’ve kept the full list available, if you’re curious.

31 December, 2016 04:01PM

Chris Lamb

Free software activities in December 2016

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Celebrated my 10-year anniversary of contributing to Debian. An excerpt of this post was quoted on LWN.
  • Made a number of improvements to AptFS, my FUSE-based filesystem that provides a view on unpacked Debian source packages as regular folders, including move from the popen2 Python module to subprocess and correcting the parsing of package lists.
  • Corrected an UnboundLocalError exception in the Finnish social security number generator in faker, a tool to generate test data in Python applications. (#441)
  • Made a small change to travis.debian.net (my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change) to fix an issue with malformed YAML.
  • Added the ability to specify the clone target to gbp-import-dsc etc. in git-buildpackage, a tool to build Debian packages using Git. (commit)
  • Filed three issues against the Redis key-value database:
    • Tests fail on the alpha architecture due to "memory efficiency". (#3666)
    • Please update hiredis (#3687)
    • Correct "whenever" typo. (#3652)

Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:


I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Optimisations:
    • Avoid unnecessary string manipulation writing --text output (~20x speedup).
    • Avoid n iterations over archive files (~8x speedup).
    • Don't analyse .debs twice when comparing .changes files (2x speedup).
    • Avoid shelling out to colordiff by implementing color support directly.
    • Memoize calls to distutils.spawn.find_executable to avoid excessive stat(2) syscalls.
  • Progress bar:
    • Show current file / ELF section under analysis etc. in progress bar.
    • Move the --status-fd output to use JSON and to include the current filename.
  • Code tidying:
    • Split out the try.diffoscope.org client so that it can be released separately on PyPI.
    • Completely rework the diffoscope and diffoscope.comparators modules, grouping similar utilities into their own modules, etc.
  • Miscellaneous:
    • Update dex_expected_diffs test to ensure compatibility with enjarify ≥ 1.0.3.
    • Ensure that running from Git will always use that checkout's Python modules.
    • Add a simple profiling framework.
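The memoization mentioned above is a standard pattern: cache the result of an expensive lookup so repeated calls for the same name cost nothing. A small sketch (not diffoscope's actual code; shutil.which stands in for distutils.spawn.find_executable so it runs on current Pythons, and the call counter is only there to show the cache working):

```python
import functools
import shutil

calls = {"n": 0}  # instrumentation: counts cache misses

@functools.lru_cache(maxsize=None)
def find_executable(name):
    """Look up an executable on $PATH, caching the answer."""
    calls["n"] += 1  # each miss walks $PATH, stat()ing candidates
    return shutil.which(name)
```

After the first lookup for a given name, every further call is answered from the cache without touching the filesystem.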

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Makefile.PL: Change NAME argument to a Perl package name.
  • Ensure our binaries are available in autopkgtest tests.

try.diffoscope.org

trydiffoscope is a web-based version of the diffoscope in-depth and content-aware diff utility. Continued thanks to Bytemark for sponsoring the hardware.

  • Show progress bar and position in queue, etc. (#25 & #26)
  • Promote command-line client with PyPI instructions.
  • Increase comparison time limit to 90 seconds.

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Added support for version 0.2 .buildinfo files. (#15)

Debian

Debian LTS


This month I have been paid to work 13½ hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 733-1 for openafs, fixing an information leak vulnerability. Due to incomplete initialization or clearing of reused memory, directory objects could contain 'dead' directory entry information.
  • Issued DLA 734-1 for mapserver closing an information leakage vulnerability.
  • Issued DLA 737-1 for roundcube preventing arbitrary remote code execution by sending a specially crafted email.
  • Issued DLA 738-1 for spip patching a cross-site scripting (XSS) vulnerability.
  • Issued DLA 740-1 for libgsf fixing a null pointer dereference via a crafted .tar file.

Debian Uploads

  • redis:
    • 3.2.5-5 — Add RunTimeDirectory=redis to systemd .service files.
    • 3.2.5-6 — Add missing Depends on lsb-base for /lib/lsb/init-functions usage in redis-sentinel's initscript.
    • 3.2.6-1 — New upstream release.
    • 4.0-1 & 4.0-rc2-1 — New upstream experimental releases.
  • aptfs: 0.9-1 & 0.10-1 — New upstream releases.


Debian FTP Team


As a Debian FTP assistant I ACCEPTed 107 packages: android-platform-libcore, compiz, debian-edu, dehydrated, dh-cargo, gnome-shell-extension-pixelsaver, golang-1.8, golang-github-btcsuite-btcd-btcec, golang-github-elithrar-simple-scrypt, golang-github-pelletier-go-toml, golang-github-restic-chunker, golang-github-weaveworks-mesh, golang-google-genproto, igmpproxy, jimfs, kpmcore, libbio-coordinate-perl, libdata-treedumper-oo-perl, libdate-holidays-de-perl, libpgobject-type-bytestring-perl, libspecio-library-path-tiny-perl, libterm-table-perl, libtext-hogan-perl, lighttpd, linux, linux-signed, llmnrd, lua-geoip, lua-sandbox-extensions, lua-systemd, node-cli-cursor, node-command-join, node-death, node-detect-indent, node-domhandler, node-duplexify, node-end-of-stream, node-first-chunk-stream, node-from2, node-glob-stream, node-has-binary, node-inquirer, node-interpret, node-is-negated-glob, node-is-unc-path, node-lazy-debug-legacy, node-lazystream, node-load-grunt-tasks, node-merge-stream, node-object-assign-sorted, node-orchestrator, node-pkg-up, node-resolve-from, node-resolve-pkg, node-rx, node-sorted-object, node-stream-shift, node-streamtest, node-string.prototype.codepointat, node-strip-bom-stream, node-through2-filter, node-to-absolute-glob, node-unc-path-regex, node-vinyl, openzwave, openzwave-controlpanel, pcb-rnd, pd-upp, pg-partman, postgresql-common, pybigwig, python-acora, python-cartopy, python-codegen, python-efilter, python-flask-sockets, python-intervaltree, python-jsbeautifier, python-portpicker, python-pretty-yaml, python-protobix, python-sigmavirus24-urltemplate, python-sqlsoup, python-tinycss, python-watson-developer-cloud, python-zc.customdoctests, python-zeep, r-cran-dbitest, r-cran-dynlm, r-cran-mcmcpack, r-cran-memoise, r-cran-modelmetrics, r-cran-plogr, r-cran-prettyunits, r-cran-progress, r-cran-withr, ruby-clean-test, ruby-gli, ruby-json-pure, ruby-parallel, rustc, sagemath, sbuild, scram, sidedoor, toolz & yabasic.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against jimfs, compiz, python-efilter & ruby-json-pure.

31 December, 2016 10:40AM

Sean Whitton

Burkeman on time management

Burkeman: Why time management is ruining our lives

Over the past semester I’ve been trying to convince one graduate student and one professor in my department to use Inbox Zero to get a better handle on their e-mail inboxes. The goal is not to be more productive. The two of them get far more academic work done than I do. However, both of them are far more stressed than I am. And in the case of the graduate student, I have to add items to my own to-do list to chase up e-mails that I’ve sent him, which only spreads this stress and tension around.

The graduate student sent me this essay by Oliver Burkeman about how these techniques can backfire, creating more stress, tension and anxiety. It seems to me that this happens when we think of these techniques as having anything to do with productivity. Often people will say “use this technique and you’ll be less stressed, more productive, and even more productive because you’re less stressed.” Why not just say “use this technique and you’ll be less anxious and stressed”? This is a refusal to treat lower anxiety as merely a means to some further end. People can autonomously set their own ends, and they’ll probably do a better job of this when they’re less anxious. Someone offering a technique to help with their sense of being overwhelmed need not tell them what to do with their new calm.

It might be argued that this response to Burkeman fails to address the huge sense of obligation that an e-mail inbox can generate. Perhaps the only sane response to this infinite to-do list is to let it pile up. If we follow a technique like Inbox Zero, don’t we invest our inbox with more importance than it has? Like a lot of areas of life, the issue is that the e-mails that will advance truly valuable projects and relationships, projects of both ourselves and of others, are mixed in with reams of stuff that doesn’t matter. We face this situation whenever we go into a supermarket, or wonder what to do during an upcoming vacation. In all these situations, we have a responsibility to learn how to filter the important stuff out, just as we have a responsibility to avoid reading celebrity gossip columns when we are scanning through a newspaper. Inbox Zero is a technique to do that filtering in the case of e-mail. Just letting our inbox pile up is an abdication of responsibility, rather than an intelligent response to a piece of technology that most of the world abuses.

31 December, 2016 08:34AM

Russ Allbery

Review: Magician's Gambit

Review: Magician's Gambit, by David Eddings

Series: The Belgariad #3
Publisher: Del Rey
Copyright: June 1983
Printing: February 1990
ISBN: 0-345-33545-7
Format: Mass market
Pages: 305

Magician's Gambit is the third book of the Belgariad, and although it's the best, you probably can't start here. Too much setup has already happened by this point of the story.

In retrospect, it's my memories of this book that led me to hang on to this series all these years and pick it up for a re-read. Garion has finally gotten past his whining (mostly), the party puts Ce'Nedra in a hole and leaves her there for a bunch of the book, and the characters finally do something concrete (and significant) in advancement of the plot. Eddings does a good job of avoiding the series problem of saving up all the climactic moments for the end of the story and instead adds a very good middle climax to the story that wraps up the main plot line of the first three books.

I think this is also the book that best displays the interestingly quirky ways that Eddings approaches the epic fantasy genre. (Now to figure out how to talk about them without spoiling the story....)

One of my favorite bits in this series is the dry voice in the back of Garion's head. I don't want to say too much about it, since discovering the nature of that voice slowly over the series is part of the fun. But this is the book where most of that mystery is revealed, and it's a rather different spin on the wise advisor trope. Garion has more than the usual number of mentor figures, and like many books of this type they function partly as plot devices to force the story down the correct path. One of the things that makes plot devices irritating is that they feel forced on the story from outside. Eddings here pulls a trick to embed that device inside the story, so it feels less like the hand of the author and more like an aspect of the mythology. I'm not sure it would work for everyone, and the execution isn't that sophisticated, but it's entertained me every time I read this series.

It helps that the voice is an interesting character in its own right, and one of the few characters in the book who takes Garion seriously and explains things to him. By comparison, Belgarath and Polgara are still annoyingly high-handed and uncommunicative (particularly Polgara).

Another highlight of this book is a lovely interlude in Belgarath's home, where we meet some of his colleagues (Beldin is a delight) and Garion finally starts experimenting with his own power. I mostly read this series for the supporting characters, not for Garion, who spends far too much time whining and most of the rest of the time being just neutrally there. But in this book Eddings finally gives him some moments of real awe and discovery, using one of my favorite approaches to the powerful chosen one character: making normally-hard things easy but normally-easy things unintuitive because they think about magic differently than anyone else. Most of the moments when Garion does something that shouldn't be possible are highlights.

We also get nearly the final characters for Garion's band, and they're good ones. Relg is mostly amusing in some of his later reactions to specific characters and is a bit tedious in this book, but Errand is a unique idea and one of my favorite characters of the series. He fits well with the tendency of this series to not take itself, or power, or mythology, too seriously. Eddings avoids the trap of making everything Significant and Fraught; characters snark and do things on a whim, power has funny side effects, and the most powerful object in the story is treated in a way that has one both laughing and flinching at the same time. (It's a nice twist on and homage to some of the trappings around the One Ring, while making it far more light-hearted.)

There are a few things I could complain about (the gender stereotyping between Garion and Ce'Nedra is so strong, for instance), but they're more minor here than in previous books. I enjoy reading about characters doing things they're very good at, and there's quite a lot of that here. (That said, I do feel like I should mention somewhere that one of the characters is very good at killing Murgos, who are supposedly human. I suppose it wouldn't be epic fantasy without a bit of casual genocide, and to be fair the Murgos in question are mostly trying to track down and kill the party, but the way that's played partly for laughs gets distasteful if one thinks about it too much.)

This series is at its best when it balances dramatic prophecy with irreverent poking, gives the protagonists moments of odd and very human joy (Garion's colt!), and celebrates the unlikely and unexpected collection of people (or fable characters) who are drawn together by the story. I think this book hits those notes well, and although the flaws of the previous books are still there, they're muted and don't get as much screen time. Instead, we get some fun foreshadowing, a sense of power and growth, a few moments of delight and awe, and a lot of the sort of exasperated wise-cracking common to experienced people who trust each other and are working together on something hard.

Magician's Gambit, and this whole series, will not be to everyone's taste, but it's the first book of the series I can comfortably recommend if you like this sort of thing.

Followed by Castle of Wizardry.

Rating: 8 out of 10

31 December, 2016 03:34AM

December 30, 2016

Antoine Beaupré

My free software activities, November and December 2016

Debian Long Term Support (LTS)

Those were my 8th and 9th months working on Debian LTS, the initiative started by Raphael Hertzog at Freexian. I had trouble resuming work in November, as I had taken a long break during the month and started looking at issues only during its last week.

Imagemagick, again

I have, again, spent a significant amount of time fighting the ImageMagick (IM) codebase. About 15 more vulnerabilities were found since the last upload, which resulted in DLA-756-1. In the advisory, I unfortunately forgot to mention CVE-2016-8677 and CVE-2016-9559, something that was noticed by my colleague Roberto after the upload... More details about the upload are available in the announcement.

When you consider that I worked on IM back in October, which led to an upload near the end of November covering around 80 more vulnerabilities, it doesn't look good for the project at all. Of the 15 vulnerabilities I worked on, only 6 had CVEs assigned, and I had to request CVEs for the other 9 plus 11 more that were still unassigned. This led to the assignment of 25 distinct CVE identifiers, as a lot of issues were found to be distinct enough to warrant their own CVEs.

One could also question how many of those issues affect the fork, GraphicsMagick. A lot of the vulnerabilities were found through fuzzing and may not have been tested against GraphicsMagick. It seems clear to me that a public corpus of test data should be available to test for regressions and cross-project vulnerabilities. It's already hard enough to track issues within IM itself; I can't imagine what it is like for the fork to keep track of them, especially since upstream doesn't systematically request CVEs for the issues they find, a questionable practice considering the number of issues we all need to keep track of.

Nagios

I have also worked on the Nagios package and produced DLA 751-1, which fixed two fairly major issues (CVE-2016-9565 and CVE-2016-9566) that could allow remote root access under certain conditions. Fortunately, the restricted permissions set up by default in the Debian package limited both exploits to information disclosure, and to privilege escalation only if the debug log is enabled.

This says a lot about how proper Debian packaging can help in limiting the attack surface of certain vulnerabilities. It was also "interesting" to have to re-learn dpatch to add patches to the package: I regret not converting it to quilt, as the operation is simple and quilt is so much easier to use.

People new to Debian packaging may be curious to learn about the staggering number of patching systems historically used in Debian. On that topic, I started a conversation about how much we want to reuse existing frameworks when we work on those odd packages, and the feedback was interesting. Basically, the answer is "it depends"...

NSS

I had already worked on the package in November and continued the work in December. Most of the work was done by Raphael, who fixed a lot of issues with the test suite. I tried to wrap this up by fixing CVE-2016-9074, the build on armel, and the test suite. Unfortunately, I had to stop again because I ran out of hours and the FIPS test suite was still failing, but fortunately Raphael was able to complete the work with DLA-759-1.

As things stand now, the package is in better shape than in other suites: tests (Debian bug #806639) and autopkgtest support (Debian bug #806207) are still not shipped in the sid or stable releases.

Other work

For the second time, I forgot to formally assign myself a package before working on it, which meant that I wasted part of my hours working on the monit package. Those hours, of course, were not counted in my regular hours. I still spent some time reviewing mejo's patch to ensure it was done properly and it turned out we both made similar patches working independently, always a good sign.

As I reported in my preliminary November report, I have also triaged issues in libxml2, ntp, openssl and tiff.

Finally, I should mention my short review of the phpMyAdmin upload, among the many posts I sent to the LTS mailing list.

Other free software work

One reason why I had so much trouble getting paid work done in November is that I was busy with unpaid work...

manpages.debian.org

A major time hole for me was trying to tackle the manpages.debian.org service, which had been offline since August. After a thorough evaluation of the available codebases, I figured the problem space wasn't so hard and it was worth trying to do an implementation from scratch. The result is a tool called debmans.

It took, obviously, way longer than I expected, as I experimented with Python libraries I had been keeping an eye on for a while. For the command-line interface, I used the click library, which is really a breeze to use, but a bit heavy for smaller scripts. For a web search service prototype, I looked at flask, which was also very interesting, as it is light and simple enough that I could get started quickly. It also, surprisingly, fares pretty well in the global TechEmpower benchmarking tests. Those interested in those tools may want to look at the source code, in particular the main command (using an interesting pattern itself, __main__.py) and the search prototype.
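The `__main__.py` pattern mentioned above can be sketched with nothing but the standard library: a package whose entry point lives in its `__main__.py` file becomes runnable with `python -m pkgname`. The package name `demopkg` below is made up for illustration, and the real debmans entry point uses click rather than this bare-bones version.

```python
import os
import subprocess
import sys
import tempfile

def demo():
    """Build a throwaway package on disk and invoke it with ``python -m``."""
    with tempfile.TemporaryDirectory() as tmp:
        pkg = os.path.join(tmp, "demopkg")
        os.mkdir(pkg)
        # An empty __init__.py makes the directory an importable package.
        open(os.path.join(pkg, "__init__.py"), "w").close()
        # __main__.py is what runs when the package is invoked with -m.
        with open(os.path.join(pkg, "__main__.py"), "w") as f:
            f.write("print('hello from demopkg')\n")
        # ``python -m demopkg`` finds the package because -m puts the
        # current working directory on sys.path.
        out = subprocess.run(
            [sys.executable, "-m", "demopkg"],
            cwd=tmp, capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

print(demo())  # -> hello from demopkg
```

The appeal of the pattern is that the same code path serves both `python -m demopkg` during development and the installed console script, so there is a single entry point to test.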

Debmans is the first project for which I have tried the CII Best Practices Badge program, an interesting questionnaire to review best practices in software engineering. It is an excellent checklist I recommend every project manager and programmer get familiar with.

I still need to complete my work on Debmans: as I write this, I couldn't get access to the new server the DSA team set up for this purpose. It was a bit of a frustrating experience to wait for all the bits to fall into place while I had a product ready to test. In the end, the existing manpages.d.o maintainer decided to deploy the existing codebase on the new server while the necessary dependencies are installed and access is granted. There's obviously still a bunch of work to be done for this to run in production, so I have postponed all of it to January.

My hope is that this tool can be reused by other distributions, but after talking with Ubuntu folks, I am not holding my breath: it seems everyone has something that is "good enough" and that they don't want to break it...

Monkeysign

I spent a good chunk of time giving the Monkeysign project a kick, with the 2.2.2 release, which features contributions from two other developers, possibly a record for a single release.

I am especially happy to have adopted a new code of conduct - it has been an interesting process to adapt the code of conduct for such a relatively small project. Monkeysign is becoming a bit of a template on how to do things properly for my Python projects: documentation on readthedocs.org including a code of conduct, support and contribution information, and so on. Even though the code now looks a bit old to me and I am embarrassed to read certain parts, I still think it is a solid project that is useful for a lot of people. I would love to have more time to spend on it.

LWN publishing

As you may have noticed if you follow this blog, I have started publishing articles for the LWN magazine, filed here under the lwn tag. It is a way for me to actually get paid for some of my blogging work that used to be done for free. Reports like this one, for example, take up a significant amount of my time and are done without being paid. Converting parts of this work into paid work is part of my recent effort to reduce the amount of time I spend on the computer.

A funny note: I always found the layout of the site a bit odd, until I looked at my articles posted there in a different web browser, one without my usual ad blocker configuration. It turns out LWN uses ads, Google ones too, which surprised me. I definitely didn't want to publish my work under banner ads, and will never do so on this blog. But it seems fair that, since I get paid for this work, there is some sort of revenue stream associated with it. If you prefer to see my work without ads, you can wait for it to be published here, or become a subscriber, which allows you to get rid of the ads on the site.

My experience with LWN has been great: they're great folks, and very supportive. It's my first experience with a real editor, and it really pushed me to improve my writing and produce better articles than I normally would here. Thanks to the LWN folks for their support! Expect more of those quality articles in 2017.

Debian packaging

I have added a few packages to the Debian archive:

  • magic-wormhole: easy file-transfer tool, co-maintained with Jamie Rollins
  • slop: screenshot tool
  • xininfo: utility used by teiler
  • teiler (currently in NEW): GUI for screenshot and screencast tools

I have also updated sopel and atheme-services.

Other work

Against my better judgment, I worked again on the borg project. This time I tried to improve the documentation, after a friend asked me for help on "how to make a quick backup". I realized I didn't have any good primer to send regular, non-sysadmin users to and figured that, instead of writing a new one, I could improve the upstream documentation instead.

I generated a surprising 18 commits of documentation during that time, mainly to fix display issues and streamline the documentation. My final attempt at refactoring the docs eventually failed, unfortunately, again reminding me of the difficulty I have in collaborating on that project. I am not sure I succeeded in making the project more attractive to non-technical users, but maybe that's okay too: borg is a fairly advanced project and not currently aimed at such an audience. This is yet another project I am thinking of creating: a meta-backup program like backupninja that would implement the vision laid out by liw in his A vision for backups in Debian post, which was discarded by the Borg project.

GitHub also tells me that I opened 19 issues in 14 different repositories in November. I would particularly like to bring your attention to the linkchecker project, which seems to be dead upstream and for which I am looking for collaborators in order to create a healthy fork.

Finally, I started on reviving the stressant project and changing all my passwords, stay tuned for more!

30 December, 2016 10:00PM