December 31, 2017

hackergotchi for Gregor Herrmann

Gregor Herrmann

RC bugs 2017/30-52

for some reason I'm still keeping track of the release-critical bugs I touch, even though it's a long time since I systematically tried to fix them. & since I have the list, I thought I might as well post it here, for the third (& last) time this year:

  • #720666 – src:libxml-validate-perl: "libxml-validate-perl: FTBFS: POD coverage test failure"
    don't run POD tests (pkg-perl)
  • #810655 – libembperl-perl: "libembperl-perl: postinst fails when libapache2-mod-perl2 is not installed"
    upload backported fix to jessie
  • #825011 – libdata-alias-perl: "libdata-alias-perl: FTBFS with Perl 5.24: undefined symbol: LEAVESUB"
    upload new upstream release (pkg-perl)
  • #826465 – texlive-latex-recommended: "texlive-latex-recommended: Unescaped left brace in regex is deprecated"
    propose patch
  • #851506 – cpanminus: "cpanminus: major parts of upstream sources with compressed white-space"
    take tarball from github (pkg-perl)
  • #853490 – src:libdomain-publicsuffix-perl: "libdomain-publicsuffix-perl: ftbfs with GCC-7"
    apply patch from ubuntu (pkg-perl)
  • #853499 – src:libopengl-perl: "libopengl-perl: ftbfs with GCC-7"
    new upstream release (pkg-perl)
  • #867514 – libsolv: "libsolv: find_package called with invalid argument "2.7.13+""
    propose a patch, later upload to DELAYED/1, then patch included in a maintainer upload
  • #869357 – src:libdigest-whirlpool-perl: "libdigest-whirlpool-perl FTBFS on s390x: test failure"
    upload to DELAYED/5
  • #869360 – slic3r: "slic3r: missing dependency on perlapi-*"
    upload to DELAYED/5
  • #869576 – src:gimp-texturize: "gimp-texturize: Local copy of intltool-* fails with perl 5.26"
    add patch, QA upload
  • #869578 – src:gdmap: "gdmap: Local copy of intltool-* fails with perl 5.26"
    provide a patch
  • #869579 – src:granule: "granule: Local copy of intltool-* fails with perl 5.26"
    add patch, QA upload
  • #869580 – src:teg: "teg: Local copy of intltool-* fails with perl 5.26"
    provide a patch
  • #869583 – src:gnome-specimen: "gnome-specimen: Local copy of intltool-* fails with perl 5.26"
    provide a patch
  • #869884 – src:chemical-mime-data: "chemical-mime-data: Local copy of intltool-* fails with perl 5.26"
    provide a patch, upload to DELAYED/5 later
  • #870213 – src:pajeng: "pajeng FTBFS with perl 5.26"
    provide a patch, uploaded by maintainer
  • #870821 – src:esys-particle: "esys-particle FTBFS with perl 5.26"
    propose patch
  • #870832 – src:libmath-prime-util-gmp-perl: "libmath-prime-util-gmp-perl FTBFS on big endian: Failed 2/31 test programs. 8/2885 subtests failed."
    upload new upstream release (pkg-perl)
  • #871059 – src:fltk1.3: "fltk1.3: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/(\${ <-- HERE _IMPORT_PREFIX}/lib)(?!/x86_64-linux-gnu)/ at debian/fix-fltk-targets-noconfig line 6, <> line 1."
    propose patch
  • #871159 – texlive-extra-utils: "pstoedit: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/\\([a-zA-Z]+){([^}]*)}{ <-- HERE ([^}]*)}/ at /usr/bin/latex2man line 1327."
    propose patch
  • #871307 – libmimetic0v5: "libmimetic0v5: requires rebuild against GCC 7 and symbols/shlibs bump"
    implement reporter's recipe (thanks!)
  • #871335 – src:smlnj: "smlnj: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/~?\\begin{ <-- HERE (small|Bold|Italics|Emph|address|quotation|center|enumerate|itemize|description|boxit|Boxit|abstract|Figure)}/ at mltex2html line 1411, <DOCUMENT> line 1."
    extend existing patch, QA upload
  • #871349 – src:ispell-uk: "ispell-uk: FTBFS: The encoding pragma is no longer supported at ../../bin/verb_reverse.pl line 12."
    propose patch
  • #871357 – src:packaging-tutorial: "packaging-tutorial: FTBFS: Unescaped left brace in regex is illegal here in regex; marked by <-- HERE in m/\\end{ <-- HERE document}/ at /usr/share/perl5/Locale/Po4a/TransTractor.pm line 643."
    analyze and propose a possible patch
  • #871367 – src:fftw: "fftw: FTBFS: Unescaped left brace in regex is deprecated here (and will be fatal in Perl 5.30), passed through in regex; marked by <-- HERE in m/\@(\w+){ <-- HERE ([^\{\}]+)}/ at texi2html line 1771."
    propose patch
  • #871818 – src:debian-zh-faq: "debian-zh-faq FTBFS with perl 5.26"
    propose patch
  • #872275 – slic3r-prusa: "slic3r-prusa: Loadable library and perl binary mismatch"
    propose patch
  • #873697 – src:libtext-bibtex-perl: "libtext-bibtex-perl FTBFS on arm*/ppc64el: t/unlimited.t (Wstat: 11 Tests: 4 Failed: 0)"
    upload new upstream release prepared by smash (pkg-perl)
  • #875627 – libauthen-captcha-perl: "libauthen-captcha-perl: Random failure due to bad images"
    upload package with fixed png prepared by xguimard (pkg-perl)
  • #877841 – src:libxml-compile-wsdl11-perl: "libxml-compile-wsdl11-perl: FTBFS Can't locate XML/Compile/Tester.pm in @INC"
    add missing build dependency (pkg-perl)
  • #877842 – src:libxml-compile-soap-perl: "libxml-compile-soap-perl: FTBFS: Can't locate Test/Deep.pm in @INC"
    add missing build dependencies (pkg-perl)
  • #880777 – src:pdl: "pdl build depends on removed libgd2*-dev provides"
    update build dependency (pkg-perl)
  • #880787 – src:libhtml-formatexternal-perl: "libhtml-formatexternal-perl build depends on removed transitional package lynx-cur"
    update build dependency (pkg-perl)
  • #880843 – src:libperl-apireference-perl: "libperl-apireference-perl FTBFS with perl 5.26.1"
    change handling of 5.26.1 API (pkg-perl)
  • #881058 – gwhois: "gwhois: please switch Depends from lynx-cur to lynx"
    update dependency, upload to DELAYED/15
  • #882264 – src:libtemplate-declare-perl: "libtemplate-declare-perl FTBFS with libhtml-lint-perl 2.26+dfsg-1"
    add patch for compatibility with newer HTML::Lint (pkg-perl)
  • #883673 – src:libdevice-cdio-perl: "fix build with libcdio 1.0"
    add patch from doko (pkg-perl)
  • #885541 – libtest2-suite-perl: "libtest2-suite-perl: file conflicts with libtest2-asyncsubtest-perl and libtest2-workflow-perl"
    add Breaks/Replaces/Provides (pkg-perl)

31 December, 2017 01:04AM

December 30, 2017

hackergotchi for Chris Lamb

Chris Lamb

Favourite books of 2017

Whilst I managed to read just over fifty books in 2017 (down from sixty in 2016), here are ten of my favourites, in no particular order.

Disappointments this year included Doug Stanhope's This Is Not Fame, a barely coherent collection of bar stories that felt especially weak after Digging Up Mother, but I might still listen to the audiobook as I would enjoy his extemporisation on a phone book. Ready Player One left me feeling contemptuous, as did Charles Stross' The Atrocity Archives.

The worst book I finished this year was Adam Mitzner's Dead Certain, beating Dan Brown's Origin, a poor Barcelona tourist guide at best.







Year of Wonders

Geraldine Brooks

Teased by Hilary Mantel's BBC Reith Lecture appearances and not content with her short story collection, I looked to others for my fill of historical fiction whilst awaiting the final chapter in the Wolf Hall trilogy.

This book, Year of Wonders, subtitled A Novel of the Plague, is written from the point of view of Anna Frith, recounting what she and her Derbyshire village experience when they nobly quarantine themselves in order to prevent the disease from spreading further.

I found it initially difficult to get to grips with the artificially aged vocabulary — and I hate to be "that guy" — but do persist until the chapter where Anna takes over the village apothecary.



The Second World Wars

Victor Davis Hanson

If the pluralisation of "Wars" is an affectation, it certainly is an accurate one: whilst we might consider the Second World War to be a unified conflict today, Hanson reasonably points out that this is a post hoc simplification of different conflicts from the late-1910s through 1945.

Unlike most books that attempt to cover the entirety of the war, this book is organised by topic instead of chronology. For example, there are two or three adjacent chapters comparing and contrasting naval strategy before moving onto land armies, contrasting and comparing Germany's eastern and western fronts, etc. This approach leads to a readable and surprisingly gripping book despite its lengthy 720 pages.

Particular attention is given to the interplay between the various armed services and how this tended to lead to overall strategic victory. This, as well as the economics of materiel and simple rates-of-replacement, combined with the irrationality and caprice of the Axis, would be a fair summary of the author's general thesis — this is no Churchill, Hitler & The Unnecessary War.

Hanson is not afraid to ask "what if" questions, but only where they provide meaningful explanation or deeper rationale rather than an indulgent flight of fancy. His answers to such questions are invariably that much the same outcome would have come about.

Whilst the author is a US citizen, he does not spare his homeland from criticism, but where Hanson's background as a classical-era historian lets him down is in contrived comparisons to the Peloponnesian War and other ancient conflicts. His Napoleonic references do not feel as forced, especially due to Hitler's own obsessions. Recommended.



Everybody Lies

Seth Stephens-Davidowitz

Vying for the role of Freakonomics for the "Big Data" generation, Everybody Lies is essentially a compendium of counter-arguments, refuting commonly-held beliefs about the internet and society in general based on large-scale observations. For example:

Google searches reflecting anxiety—such as "anxiety symptoms" or "anxiety help"—tend to be higher in places with lower levels of education, lower median incomes and where a larger portion of the population lives in rural areas. There are higher search rates for anxiety in rural, upstate New York than in New York City.

Or:

On weekends with a popular violent movie when millions of Americans were exposed to images of men killing other men, crime dropped. Significantly.

Some methodological anecdotes are included: a correlation was once noticed between teens being adopted and the use of drugs and skipping school. Subsequent research found this correlation was explained entirely by the 20% of the self-reported adoptees not actually being adopted...

Although replete with the kind of factoids that force you to announce them out loud to anyone "lucky" enough to be in the same room as you, Everybody Lies is let down by a chronic lack of structure — a final conclusion that is so self-aware of its limitations that it readily and repeatedly admits to them is still a weak conclusion.



The Bobiverse Trilogy

Dennis Taylor

I'm really not a "science fiction" person, at least not in the sense of reading books catalogued as such, with all their indulgent meta-references and stereotypical cover art.

However, I was really taken by the conceit and execution of the Bobiverse trilogy: Robert "Bob" Johansson perishes in an automobile accident the day after agreeing to have his head cryogenically frozen upon death. 117 years later he finds that he has been installed in a computer as an artificial intelligence. He subsequently clones himself multiple times, resulting in chapters written from the various Bobs' locations, timelines and perspectives around the galaxy.

One particular thing I liked about the books was their complete disregard for a film tie-in; Ready Player One was almost cynically written with this in mind, but the Bobiverse cheerfully handicaps itself by including Homer Simpson and other unlicensable characters.

Whilst the opening world-building book is the most immediately rewarding, the series kicks into gear after this — as the various "Bobs" unfold with differing interests (exploration, warfare, pure science, anthropology, etc.) an engrossing tapestry is woven together with a generous helping of humour and, funnily enough, believability.



Homo Deus: A Brief History of Tomorrow

Yuval Noah Harari

After a number of strong recommendations I finally read Sapiens, this book's prequel.

I was gripped, especially given its revisionist insight into various stages of Man. The idea that wheat domesticated us (and not the other way around) and how adoption of this crop led to truncated and unhealthier lifespans particularly intrigued me: we have an innate bias towards chronocentrism, so to be reminded that progress isn't a linear progression from "bad" to "better" is always useful.

The sequel, Homo Deus, continues this trend by discussing the future potential of our species. I was surprised just how humorous the book was in places. For example, here is Harari on the anthropocentric nature of religion:

You could never convince a monkey to give you a banana by promising him limitless bananas after death in monkey heaven.

Or even:

You can't settle the Greek debt crisis by inviting Greek politicians and German bankers to a fist fight or an orgy.

The chapters on AI and the cheap remarks about the impact of social media did not score many points with me, but I certainly preferred the latter book in that the author takes more risks with his own opinion, so it's less dry and more thought-provoking, even if one disagrees.



La Belle Sauvage: The Book of Dust Volume One

Philip Pullman

I have extremely fond memories of reading (and re-reading, etc.) the author's Dark Materials as a teenager despite being started on the second book by a "supply" English teacher.

La Belle Sauvage is a prequel to this original trilogy and the first of another trio. Ms Lyra Belacqua is present as a baby but the protagonist here is Malcolm Polstead who is very much part of the Oxford "town" rather than "gown".

Alas, Pullman didn't make a study of Star Wars and thus relies a little too much on the existing canon, wary of adding new, original features. This results in an excess of Magisterium and Mrs Coulter (a superior Dolores Umbridge, by the way), and the protagonist is a little too redolent of Will...

There is also a very out-of-character chapter where the magical rules of the novel temporarily multiply, resulting in a confusion that was almost certainly not the author's intention. You'll spot it when you get to it, which you should.

(I also enjoyed the slender Lyra's Oxford, essentially a short story set just a few years after The Amber Spyglass.)



Open: An Autobiography

Andre Agassi

Sporting personalities certainly exist, but they are rarely revealed by their "authors", so upon friends' enquiries as to what I was reading I frequently caught myself qualifying my response with «It's a sports autobiography, but...».

It's naturally difficult to know what we can credit to Agassi or his (truly excellent) ghostwriter but this book is a real pleasure to read. This is no lost Nabokov or Proust, but the level of wordsmithing went beyond supererogatory. For example:

For a man with so many fleeting identities, it's shocking, and symbolic, that my initials are A. K. A.

Or:

I understand that there's a tax on everything in America. Now, I discover that this is the tax on success in sports: fifteen seconds of time for every fan.

As with all good books that revolve around a subject, readers do not need to know or have any real interest in the topic at hand, so even non-tennis fans will find this an engrossing read. Dark themes abound — Agassi is deeply haunted by his father, a topic I wish he had gone into more, but perhaps he has not done the "work" himself yet.



The Complete Short Stories

Roald Dahl

I distinctly remember reading Roald Dahl's The Wonderful Story of Henry Sugar and Six More collection of short stories as a child, some characters still etched in my mind: the 'od carrier and fingersmith of The Hitchhiker, or the protagonist polishing his silver Trove in The Mildenhall Treasure.

Instead of re-reading this collection I embarked on reading his complete short stories, curious whether the rest of his œuvre was at the same level. After reading two entire volumes, I can say it mostly is — Dahl's typical humour and descriptive style are present throughout with only a few show-off sentences such as:

"There's a trick that nearly every writer uses of inserting at least one long obscure word into each story. This makes the reader think that the man is very wise and clever. I have a whole stack of long words stored away just for this purpose." "Where?" "In the 'word-memory' section," he said, epexegetically.

There were perhaps too many of his early, mostly-factual war tales that were lacking an interesting conceit, and I might still recommend the Henry Sugar collection for the uninitiated, but I would heartily recommend either of these two volumes, starting with the second.



Watching the English

Kate Fox

Written by a social anthropologist, this book dissects "English" behaviour for the layman, providing an insight into British humour, rites of passage and dress/language codes, amongst others.

A must-read for anyone who is in — or considering... — a relationship with an Englishman, it is also a curious read for the native Brit: a kind of horoscope for folks, like me, who believe they are above them.

It's not perfect: Fox tediously repeats that her "rules" or patterns are not rules in the strict sense of being observed by 100% of the population; there will always be people who do not, as well as others whose defiance of a so-called "rule" only reinforces the concept. Most likely this reiteration is to sidestep wearisome criticisms but it becomes ponderous and patronising over time.

Her general conclusions (that the English are repressed, risk-averse and, above all, hypocrites) invariably oversimplify, but taken as a series of vignettes rather than a scientifically accurate and coherent whole, the book is worth your investment.

(Ensure you locate the "revised" edition — it not only contains more content, it also proffers valuable counter-arguments to rebuttals Fox received since the original publication.)



What Does This Button Do?

Bruce Dickinson

In this entertaining autobiography we are thankfully spared a litany of Iron Maiden gigs, successes and reproaches of the inevitable bust-ups and are instead treated to an introspective insight into just another "everyman" who could very easily be your regular drinking buddy if it weren't for a need to fulfill a relentless inner drive for... well, just about anything.

The frontman's antics as a schoolboy stand out, as do his later sojourns into Olympic fencing and being a commercial pilot. These latter exploits sound bizarre out of context but despite their non-sequitur nature they make a perfect foil (hah!) to the heavy metal.

A big follower of Maiden in my teens, I fell off the wagon as I didn't care for their newer albums, so I was blindsided by Dickinson's sobering cancer diagnosis in the closing chapters. Furthermore, whilst Bruce's book fails George Orwell's test that an autobiography is only to be trusted when it reveals something disgraceful, it is tour de force enough to distract from any concept of integrity.

(I have it on excellent authority that the audiobook, which is narrated by the author, is definitely worth one's time.)

30 December, 2017 06:56PM

hackergotchi for Sune Vuorela

Sune Vuorela

Aubergine – Playing with emoji

Playing with emojis

At some point, I needed to copy paste emojis, but couldn’t find a good way to do it. So what does a good hacker do?
Scratch your own itch. As I wrote about in the past, all these projects should be shared with the rest of the world.
So here it is: https://cgit.kde.org/scratch/sune/aubergine.git/

It looks like this with the Symbola font for emojis: [screenshot]

It basically lets you search for emojis by their description, and by clicking on an emoji, it gets inserted into the clipboard.

As such, I’m not sure the application is really interesting, but there might be two interesting bits in the source code:

  • A parser for the Unicode data text file /usr/share/unicode/NamesList.txt is placed in lib/parser.{h,cpp} (a rough sketch of the file format appears after this list)
  • A class that tries to expose QClipboard as QML objects placed in app/clipboard.{h,cpp}. I’m not yet sure if this is the right approach for that, but it is the one that currently makes most sense in my mind. If I’m receiving proper feedback, I might be able to extend/finish it and submit it to Qt.
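The data lines in NamesList.txt pair a hexadecimal code point with the character's name, separated by a tab; annotation lines start with a tab and block headers with '@'. As a rough, standalone illustration of that format (only a sketch in plain C, not the project's actual Qt-based parser), extracting the code point/name pairs could look like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/usr/share/unicode/NamesList.txt", "r");
    char line[512];

    if (!f) {
        perror("NamesList.txt");
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        char *tab, *name;
        unsigned long cp;

        /* Skip block headers ('@'), comments (';') and annotation
           lines, which start with a tab. */
        if (line[0] == '@' || line[0] == ';' || line[0] == '\t')
            continue;
        tab = strchr(line, '\t');
        if (!tab)
            continue;
        *tab = '\0';
        name = tab + 1;
        name[strcspn(name, "\n")] = '\0';   /* strip trailing newline */
        cp = strtoul(line, NULL, 16);       /* code point is hexadecimal */
        printf("U+%04lX %s\n", cp, name);
    }
    fclose(f);
    return 0;
}

Grepping the output of something like this already gives a poor man's search-by-description.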

And of course, now it is simple to describe fancy cooking:

🍆 🔪 🔥
(aubergine) (hocho) (fire)

I ❣ emoji

30 December, 2017 06:33PM by Sune Vuorela

Russ Allbery

Review: Saving Francesca

Review: Saving Francesca, by Melina Marchetta

Series: Francesca #1
Publisher: Alfred A. Knopf
Copyright: 2003
Printing: 2011
ISBN: 0-307-43371-4
Format: Kindle
Pages: 245

Francesca is in Year Eleven in St. Sebastian's, in the first year that the school opened to girls. She had a social network and a comfortable place at her previous school, but it only went to Year Ten. Most of her friends went to Pius, but St. Sebastian's is a better school. So Francesca is there, one of thirty girls surrounded by boys who aren't used to being in a co-ed school, and mostly hanging out with the three other girls who had gone to her previous school. She's miserable, out of place, and homesick for her previous routine.

And then, one morning, her mother doesn't get out of bed. Her mother, the living force of energy, the one who runs the household, who pesters Francesca incessantly, who starts every day with a motivational song. She doesn't get out of bed the next day, either. Or the day after that. And the bottom falls out of Francesca's life.

I come at this book from a weird angle because I read The Piper's Son first. It's about Tom Mackee, one of the supporting characters in this book, and is set five years later. I've therefore met these people before: Francesca, quiet Justine who plays the piano accordion, political Tara, and several of the Sebastian boys. But they are much different here as their younger selves: more raw, more childish, and without the understanding of settled relationships. This is the story of how they either met or learned how to really see each other, against the backdrop of Francesca's home life breaking in entirely unexpected ways.

I think The Piper's Son was classified as young adult mostly because Marchetta is considered a young adult writer. Saving Francesca, by comparison, is more fully a young adult novel. Instead of third person with two tight viewpoints, it's all first person: Francesca telling the reader about her life. She's grumpy, sad, scared, anxious, and very self-focused, in the way of a teenager who is trying to sort out who she is and what she wants. The reader follows her through the uncertainty of maybe starting to like a boy who may or may not like her and is maddeningly unwilling to commit, through realizing that the friends she had and desperately misses perhaps weren't really friends after all, and into the understanding of what friendship really means for her. But it's all very much caught up in Francesca's head. The thoughts of the other characters are mostly guesswork for the reader.

The Piper's Son was more effective for me, but this is still a very good book. Marchetta captures the gradual realization of friendship, along with the gradual understanding that you have been a total ass, extremely well. I was somewhat less convinced by Francesca's mother's sudden collapse, but depression does things like that, and by the end of the book one realizes that Francesca has been somewhat oblivious to tensions and problems that would have made this less surprising. And the way that Marchetta guides Francesca to a deeper understanding of her father and the dynamics of her family is emotionally realistic and satisfying, although Francesca's lack of empathy occasionally makes one want to have a long talk with her.

The best part of this book is the friendships. I didn't feel the moments of catharsis as strongly here as in The Piper's Son, but I greatly appreciated Marchetta's linking of the health of Francesca's friendships to the health of her self-image. Yes, this is how this often works: it's very hard to be a good friend until you understand who you are inside, and how you want to define yourself. Often that doesn't come in words, but in moments of daring and willingness to get lost in a moment. The character I felt the most sympathy for was Siobhan, who caught the brunt of Francesca's defensive self-absorption in a way that left me wincing even though the book never lingers on her. And the one who surprised me the most was Jimmy, who possibly shows the most empathy of anyone in the book in a way that Francesca didn't know how to recognize.

I'm not unhappy about reading The Piper's Son first, since I don't think it needs this book (and says some of the same things in a more adult voice, in ways I found more powerful). I found Saving Francesca a bit more obvious, a bit less subtle, and a bit more painful, and I think I prefer reading about the more mature versions of these characters. But this is a solid, engrossing psychological story with a good emotional payoff. And, miracle of miracles, even a bit of a denouement.

Followed by The Piper's Son.

Rating: 7 out of 10

30 December, 2017 04:15AM

December 29, 2017

Antoine Beaupré

An overview of KubeCon + CloudNativeCon

The Cloud Native Computing Foundation (CNCF) held its conference, KubeCon + CloudNativeCon, in December 2017. There were 4000 attendees at this gathering in Austin, Texas, more than at any previous KubeCon, which shows the rapid growth of the community building around the tool that was announced by Google in 2014. Large corporations are also taking a larger part in the community, with major players in the industry joining the CNCF, which is a project of the Linux Foundation. The CNCF now features three of the largest cloud hosting businesses (Amazon, Google, and Microsoft), but also emerging companies from Asia like Baidu and Alibaba.

In addition, KubeCon saw an impressive number of diversity scholarships, which "include free admission to KubeCon and a travel stipend of up to $1,500, aimed at supporting those from traditionally underrepresented and/or marginalized groups in the technology and/or open source communities", according to Neil McAllister of CoreOS. The diversity team raised an impressive $250,000 to bring 103 attendees to Austin from all over the world.

We have looked into Kubernetes in the past but, considering the speed at which things are moving, it seems time to make an update on the projects surrounding this newly formed ecosystem.

The CNCF and its projects

The CNCF was founded, in part, to manage the Kubernetes software project, which was donated to it by Google in 2015. From there, the number of projects managed under the CNCF umbrella has grown quickly. It first added the Prometheus monitoring and alerting system, and then quickly went up from four projects in the first year, to 14 projects at the time of this writing, with more expected to join shortly. The CNCF's latest additions to its roster are Notary and The Update Framework (TUF, which we previously covered), both projects aimed at providing software verification. Those add to the already existing projects which are, bear with me, OpenTracing (a tracing API), Fluentd (a logging system), Linkerd (a "service mesh", which we previously covered), gRPC (a "universal RPC framework" used to communicate between pods), CoreDNS (DNS and service discovery), rkt (a container runtime), containerd (another container runtime), Jaeger (a tracing system), Envoy (another "service mesh"), and Container Network Interface (CNI, a networking API).

This is an incredible diversity, if not fragmentation, in the community. The CNCF made this large diagram depicting Kubernetes-related projects—so large that you will have a hard time finding a monitor that will display the whole graph without scaling it (seen below, click through for larger version). The diagram shows hundreds of projects, and it is hard to comprehend what all those components do and if they are all necessary or how they overlap. For example, Envoy and Linkerd are similar tools yet both are under the CNCF umbrella—and I'm ignoring two more such projects presented at KubeCon (Istio and Conduit). You could argue that all tools have different focus and functionality, but it still means you need to learn about all those tools to pick the right one, which may discourage and confuse new users.

Cloud Native landscape

You may notice that containerd and rkt are both projects of the CNCF, even though they overlap in functionality. There is also a third Kubernetes runtime called CRI-O built by RedHat. This kind of fragmentation leads to significant confusion within the community as to which runtime they should use, or if they should even care. We'll run a separate article about CRI-O and the other runtimes to try to clarify this shortly.

Regardless of this complexity, it does seem the space is maturing. In his keynote, Dan Kohn, executive director of the CNCF, announced "1.0" releases for 4 projects: CoreDNS, containerd, Fluentd and Jaeger. Prometheus also had a major 2.0 release, which we will cover in a separate article.

There were significant announcements at KubeCon for projects that are not directly under the CNCF umbrella. Most notable for operators concerned about security is the introduction of Kata Containers, which is basically a merge of runV from Hyper.sh and Intel's Clear Containers projects. Kata Containers, introduced during a keynote by Intel's VP of the software and services group, Imad Sousou, are virtual-machine-based containers, or, in other words, containers that run in a hypervisor instead of under the supervision of the Linux kernel. The rationale here is that containers are convenient but all run on the same kernel, so the compromise of a single container can leak into all containers on the same host. This may be unacceptable in certain environments, for example for multi-tenant clusters where containers cannot trust each other.

Kata Containers promises the "best of both worlds" by providing the speed of containers and the isolation of VMs. It does this by using minimal custom kernel builds, to speed up boot time, and parallelizing container image builds and VM startup. It also uses tricks like same-page memory sharing to deduplicate memory across virtual machines. It currently works only on x86 and KVM, but it integrates with Kubernetes, Docker, and OpenStack. There was a talk explaining the technical details; that page should eventually feature video and slide links.

Industry adoption

As hinted earlier, large cloud providers like Amazon Web Services (AWS) and Microsoft Azure are adopting the Kubernetes platform, or at least its API. The keynotes featured AWS prominently; Adrian Cockcroft (AWS vice president of cloud architecture strategy) announced the Fargate service, which introduces containers as "first class citizens" in the Amazon infrastructure. Fargate should run alongside, and potentially replace, the existing Amazon EC2 Container Service (ECS), which is currently the way developers would deploy containers on Amazon by using EC2 (Elastic Compute Cloud) VMs to run containers with Docker.

This move by Amazon has been met with skepticism in the community. The concern here is that Amazon could pull the plug on Kubernetes when it hinders the bottom line, like it did with the Chromecast products on Amazon. This seems to be part of a changing strategy by the corporate sector in adoption of free-software tools. While historically companies like Microsoft or Oracle have been hostile to free software, they are now not only using free software but also releasing free software. Oracle, for example, released what it called "Kubernetes Tools for Serverless Deployment and Intelligent Multi-Cloud Management", named Fn. Large cloud providers are getting certified by the CNCF for compliance with the Kubernetes API and other standards.

One theory to explain this adoption is that free-software projects are becoming on-ramps to proprietary products. In this strategy, as explained by InfoWorld, open-source tools like Kubernetes are merely used to bring consumers over to proprietary platforms. Sure, the client and the API are open, but the underlying software can be proprietary. The data and some magic interfaces, especially, remain proprietary. Key examples of this include the "serverless" services, which are currently not standardized at all: each provider has its own incompatible framework that could be a deliberate lock-in strategy. Indeed, a common definition of serverless, from Martin Fowler, goes as follows:

Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service or "BaaS") or on custom code that's run in ephemeral containers (Function as a Service or "FaaS").

By designing services that explicitly require proprietary, provider-specific APIs, providers ensure customer lock-in at the core of the software architecture. One of the upcoming battles in the community will be exactly how to standardize this emerging architecture.

And, of course, Kubernetes can still be run on bare metal in a colocation facility, but those costs are getting less and less affordable. In an enlightening talk, Dmytro Dyachuk explained that unless cloud costs hit $100,000 per month, users may be better off staying in the cloud. Indeed, that is where a lot of applications end up. During an industry roundtable, Hong Tang, chief architect at Alibaba Cloud, posited that the "majority of computing will be in the public cloud, just like electricity is produced by big power plants".

The question, then, is how to split that market between the large providers. And, indeed, according to a CNCF survey of 550 conference attendees: "Amazon (EC2/ECS) continues to grow as the leading container deployment environment (69%)". CNCF also notes that on-premise deployment decreased for the first time in the five surveys it has run, to 51%, "but still remains a leading deployment". On premise, which is a colocation facility or data center, is the target for these cloud companies. By getting users to run Kubernetes, the industry's bet is that it makes applications and content more portable, thus easier to migrate into the proprietary cloud.

Next steps

As the Kubernetes tools and ecosystem stabilize, major challenges emerge: monitoring is a key issue as people realize it may be more difficult to diagnose problems in a distributed system compared to the previous monolithic model, which people at the conference often referred to as "legacy" or the "old OS paradigm". Scalability is another challenge: while Kubernetes can easily manage thousands of pods and containers, you still need to figure out how to organize all of them and make sure they can talk to each other in a meaningful way.

Security is a particularly sensitive issue as deployments struggle to isolate TLS certificates or application credentials from applications. Kubernetes makes big promises in that regard and it is true that isolating software in microservices can limit the scope of compromises. The solution emerging for this problem is the "service mesh" concept pioneered by Linkerd, which consists of deploying tools to coordinate, route, and monitor clusters of interconnected containers. Tools like Istio and Conduit are designed to apply cluster-wide policies to determine who can talk to what and how. Istio, for example, can progressively deploy containers across the cluster to send only a certain percentage of traffic to newly deployed code, which allows detection of regressions. There is also work being done to ensure standard end-to-end encryption and authentication of containers in the SPIFFE project, which is useful in environments with untrusted networks.

Another issue is that Kubernetes is just a set of nuts and bolts to manage containers: users get all the parts and it's not always clear what to do with them to get a platform matching their requirements. It will be interesting to see how the community moves forward in building higher-level abstractions on top of it. Several tools competing in that space were featured at the conference: OpenShift, Tectonic, Rancher, and Kasten, though there are many more out there.

The 1.9 Kubernetes release should be coming out in early 2018; it will stabilize the Workloads API that was introduced in 1.8 and add Windows containers (for those who like .NET) in beta. There will also be three KubeCon conferences in 2018 (in Copenhagen, Shanghai, and Seattle). Stay tuned for more articles from KubeCon Austin 2017 ...

This article first appeared in the Linux Weekly News.

29 December, 2017 06:23PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

The VR Show

One of the things I had been looking forward to, if I had got the visa on time for Debconf 15 (Germany), apart from the conference itself, was the attention on VR (Virtual Reality) and AR (Augmented Reality). I had heard the hype for so many years that I wanted to experience it, and I knew that Debianites, who are perhaps a bit better at crystal-gazing, would have had more of an idea than I had then. The only VR I knew about was from Hollywood movies and some VR videos, but that doesn't tell you anything. Also, while movies like Chota-Chetan and others clicked, they were far less immersive than true VR has to be.

I was glad that it didn't happen after the fact: in 2016, while going to the South African Debconf, I experienced VR at Qatar Airport in a Samsung showroom. I was quite surprised at how heavy the headset was, and also by how little content they had. Something which has been hyped for 20-odd years had not much to show for itself. I was also able to trick the VR equipment, as the eye/motion tracking was not good enough: if you shook your head fast enough, it couldn't keep up with you.

I shared the above because I was invited to another VR conference by a web-programmer/designer friend, Mahendra, a couple of months ago, here in Pune itself. We attended the conference and were shown quite a few success stories. One of the stories liked by the geek in me was Framestore's 360 Mars VR Experience on a bus: the link shows how the Framestore developers mapped Mars, or part of Mars, onto Washington D.C. streets, and how kids were able to experience how it would feel to be on Mars, without any of the risks the astronauts or pioneers would have to face if we do get the money, the equipment and the technology to send people to Mars. In reality we are still decades away from making such a trip while keeping people safe on the way to Mars and back, or having them live on Mars for the rest of their lives.

If my understanding is correct, the gravity of Mars is half of Earth's, and once people settle there, they, or at least a generation born on Mars, would have skeletons no longer able to support Earth's gravity.

An interesting take on how things might turn out is shown in 'The Expanse'.

But this is taking away from the topic at hand. While I saw the newer-generation VR headsets, they are still a ways off. It would be interesting once the headset becomes similar to eye-glasses and you do not have to be tethered to a power unit or lug a heavy backpack full of dangerous lithium-ion batteries. The battery chemistry, or some sort of self-powered unit, would need to be much safer and lighter.

While being at the conference and seeing the various scenarios being played out between potential developers and marketeers, it crossed my mind that people were not at all thinking of safeguarding users' privacy: everything from what games or choices you make to your biometric and other body-sensitive information has a high chance of being misused by companies and individuals.

There were also questions about how Sony and other developers are asking insane amounts for the use of their SDKs to develop content, while these should be free, as games and other content are only going to enhance the marketability of their own ecosystems. Neither the above questions (privacy and security, asked by me) nor the SDK-related questions asked by some of the potential developers were really answered.

At the end, they also showed AR, or Augmented Reality, which to my mind has much more potential to be used for reskilling and upskilling young populations such as India's and those of other young, populous countries. It was interesting to note that both China and the U.S. are inching towards older demographics, while India will remain a relatively young country for another 20-30 odd years. Most of the other young countries (by median age) seem to be on the African continent, and I believe (it might be a myth) that they are young because most of those countries are still tribal-like and perhaps still have a lot of civil wars over resources.

I was underwhelmed by what they displayed in Augmented Reality, though I do understand that there may be a lot of people or companies working on their own IP who hence didn't want to share, or only showed very rough work, so that their ideas don't get stolen.

I was also hoping somebody would talk about motion sickness, or the motion displacement similar to what people feel when they are train-lagged or jet-lagged. I am surprised that Wikipedia still doesn't have an article on train-lag, as millions of Indians go through the process every year. The kind most pronounced on Indian Railways is motion being felt but not seen.

There are both challenges and opportunities provided by VR and AR, but until complexity, support burden and costs come down (for both the deployer and the user), it will remain a distant dream.

There are scores of ideas that could be used or done. For instance, the whole of North India is one big palace in the sense that there are palaces built by kings and queens which have accumulated their own myth and lore over centuries. A story-teller could take a modern story and use, say, something like Chota Imambara and/or Bara Imambara, where there have been lots of stories of people getting lost in the alleyways.

Such lore, myths and mysteries are found all over India. The Ramayana and the Mahabharata are just two of the epics which show how grand the tales can be spun. The history of the Indus Valley Civilization to date, and the modern contestations to it, are others which come to my mind.

Even the humble Panchatantra can be reborn and retold to generations who have forgotten it. I can't express the variety of stories and contrasts on offer much better than bolokids does, or than SRK did in the opening of IFFI. Even something like Khakee, which is based on true incidents and a real-life inspector, could be retold in so many ways. Even Mukti Bhavan, which I saw a few months ago (coincidentally, before I became ill), tells complex stories in which each person has their own rich background that could be explored much more in VR.

Even titles such as the ever-famous Harry Potter or the ever-beguiling RAMA could be shared and retooled for generations to come. The Shiva Trilogy is another one which comes to my mind that could be retold as well. There was another RAMA trilogy by the same author, and a competing one by an author called PJ Annan comes out in 2018.

We would need to work out the complexities of hardware, bandwidth and the technologies, but stories and content waiting to be developed are aplenty.

Once upon a time I had the opportunity to work on, develop and understand make-believe walk-throughs (2-D blueprints animated/brought to life and shown to investors/clients) for potential home owners in a housing society (this was in the heydays and heavy days of growth, circa y2k). It was a 2D or 2.5D environment, the tools were a lot more complex, and I was the most inept person there as I had no idea what camera positioning and light sources meant.

Apart from the gimmickry that was shown, I thought it would have been interesting if people had shared both the creative and the budget constraints of working in immersive technologies while bringing something good enough to the client. There was some discussion, in a ham-handed way, but not enough: there was considerable interest from youngsters in trying this new medium, but many lacked the opportunities, knowledge, equipment and software stack to make it a reality.

Lastly, as far as literature goes, I have just shared bits and pieces of Indian English literature. There are 16 recognized Indian languages and all of them have a vibrant literature scene. Just to take an example, Bengal has long been a bedrock of new Bengali detective stories. I think I had shared the history of Bengali crime fiction some time back as well, but nevertheless here it is again.

So apart from games and galleries, 3-D interactive visual novels with alternative endings could make for some interesting immersive experiences, provided we are able to shed the costs and overcome the technical challenges to make them a reality.


Filed under: Miscellenous Tagged: #Augmented Reality, #Debconf South Africa 2016, #Epics, #framastore, #indian literature, #Mars trip, #median age population inded, #motion sickness, #Palaces, #planet-debian, #Pune VR Conference, #RAMA, #RAMA trilogy, #Samsung VR, #Shiva Trilogy, #The Expanse, #Virtual Reality, #VR Headsets, #walkthroughs, Privacy

29 December, 2017 04:39PM by shirishag75

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Compute rescaling progress

My Lanczos rescaling compute shader for Movit is finally nearing usable performance improvements:

BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/640/360         3149 us      69.7767M pixels/s
BM_ResampleEffectInt8/Fragment/Int8Downscale/1280/720/320/180         2720 us      20.1983M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/640/360      3777 us      58.1711M pixels/s
BM_ResampleEffectHalf/Fragment/Float16Downscale/1280/720/320/180      3269 us      16.8054M pixels/s

BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/640/360          2007 us      109.479M pixels/s  [+ 56.9%]
BM_ResampleEffectInt8/Compute/Int8Downscale/1280/720/320/180          1609 us      34.1384M pixels/s  [+ 69.0%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/640/360       2057 us      106.843M pixels/s  [+ 56.7%]
BM_ResampleEffectHalf/Compute/Float16Downscale/1280/720/320/180       1633 us      33.6394M pixels/s  [+100.2%]

Some tuning and bugfixing still needed; this is on my Haswell (the NVIDIA results are somewhat different). Upscaling also on its way. :-)

29 December, 2017 01:18PM

hackergotchi for Martin-&#201;ric Racine

Martin-Éric Racine

Jackpot

I have no idea whatsoever how I achieved this, but there you go. This citizen's legal draft is moving forward to the Finnish parliament.

29 December, 2017 11:11AM by Martin-Éric

Russ Allbery

Review: One Fell Sweep

Review: One Fell Sweep, by Ilona Andrews

Series: Innkeeper Chronicles #3
Publisher: NYLA
Copyright: 2016
ISBN: 1-943772-71-1
Format: Kindle
Pages: 326

This is the third book of the Innkeeper Chronicles series, and this isn't the sort of series to read out of order. Each book contains substantial spoilers for the previous books, and the characterization and plot benefits from the foundation of previous installments.

Sean has not fully recovered from the events of Sweep in Peace. Dina is still unsure about the parameters of their friendship, or whatever it is. But some initial overtures at processing that complexity are cut off by a Ku with far more enthusiasm than sense arriving in the neighborhood on a boost bike at two in the morning. A Ku with a message from Dina's sister, asking for help.

One Fell Sweep moves farther and farther from urban fantasy in setting, although it still uses the urban fantasy style of first-person narration and an underground group of misfits who know the "real truth" about how the world works. This story opens with a rescue mission to another planet (aided by Dina calling on favors from previous books), and then segues into the main plot: a hunted species of aliens approach Dina for aid in accessing a solution to their plight, one they've already paid dearly for. The result is an episodic and escalating series of threats, both inside the inn and in some away missions. This is much more entertaining than it had any right to be given the repetitive structure. There isn't a great deal of plot here, and much of it is predictable, but there's a lot of competence porn. And I like these people and enjoy reading about them.

This series isn't philosophically deep by any stretch, but Andrews does do a good job of avoiding pitfalls and keeping it entertaining. For example, the aliens are being hunted by religious fanatics who think killing them will send their executioners directly to paradise, but this isn't as close of an analogy to real-world stereotypes as it may seem in a brief description. Andrews mixes enough different sources together and gives the aliens enough unique characterization that the real-world analogies are muted at best. If there is a common theme, it's a suspicion of hierarchical religions, or just about any other hierarchical structure. (I suspect it's obviously American.)

The main new characters in this entry are Dina's sister and her sister's daughter, both of whom are a delight. Andrews is very good at the feeling of family: Dina and her sister are very different people with very different interests, but they have a family similarity and mutual knowledge that comes from growing up in the same house with the same parents. And Dina's sister is just as competent as she is, albeit in somewhat different ways. She's spent much of her life with what this series calls vampires (more like religious Klingons with some of their own unique traditions), and she's raising a half-vampire child who managed to entirely escape my normal dislike of child characters in adult books.

It's also a refreshing change from a lot of urban fantasy that Andrews doesn't drag out the love triangle established in the first book. For once, the resolution obvious to the reader appears to also be obvious to the characters.

I would say that this book is again a notch above the previous books in the series. Sadly, the climax involves a deeply irritating section that sidelines Dina in a way that I found totally out of character. Key parts of the conclusion happen to her rather than because of her. Given that the agency of the protagonist is one of the things I like the most about this series, I found that disappointing and difficult to read, and thought the way that event resolved was infuriatingly dumb. Andrews is mostly above that sort of thing, but occasionally slips into banal tropes. The grand revelation about the hunted alien race was also just a little too neat. In both cases, I would have appreciated more nuanced messiness and internal courage, and less after-school-special morality.

But other than some missteps at the end, this is another surprisingly good book in a series that is much more fun than I had expected. It's darker and more serious than Clean Sweep, but still the sort of book in which you can be fairly certain nothing truly bad is going to happen to the protagonists. Just the sort of thing when one is in the mood for highly competent characters showing a creatively wide range of villains why they shouldn't be underestimated.

There's a clear setup for a sequel, but neither the title nor the publication date have been announced yet, although there's apparently an upcoming novella about Dina's sister.

Rating: 8 out of 10

29 December, 2017 02:40AM

December 28, 2017

hackergotchi for Sean Whitton

Sean Whitton

Debian Policy call for participation -- December 2017

Yesterday we released Debian Policy 4.1.3.0, containing patches from numerous different contributors, some of them first-time contributors. Thank you to everyone who was involved!

Please consider getting involved in preparing the next release of Debian Policy, which is likely to be uploaded sometime around the end of January.

Consensus has been reached and help is needed to write a patch

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the ’-e’ argument to x-terminal-emulator works like ’–’

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#742364 Document debian/missing-sources

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

28 December, 2017 10:47PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Get rid of the backpack

In 2008 I read a blog post by Mark Pilgrim which made a profound impact on me, although I didn't realise it at the time. It was

  1. Stop buying stuff you don’t need
  2. Pay off all your credit cards
  3. Get rid of all the stuff that doesn’t fit in your house/apartment (storage lockers, etc.)
  4. Get rid of all the stuff that doesn’t fit on the first floor of your house (attic, garage, etc.)
  5. Get rid of all the stuff that doesn’t fit in one room of your house
  6. Get rid of all the stuff that doesn’t fit in a suitcase
  7. Get rid of all the stuff that doesn’t fit in a backpack
  8. Get rid of the backpack

At the time I first read it, I think I could see (and concur with) the logic behind the first few points, but not further. Revisiting it now I can agree much further along the list and I'm wondering if I'm brave enough to get to the last step, or anywhere near it.

Mark was obviously going on a journey, and another stopping-off point for him on that journey was to delete his entire online persona, which is why I've linked to the Wayback Machine copy of the blog post.

28 December, 2017 10:43PM

Successive Heresies

I prefer the book The Hobbit to The Lord Of The Rings.

I much prefer the Hobbit movies to the LOTR movies.

I like the fact the Hobbit movies were extended with material not in the original book: I'm glad there are female characters. I love the additional material with Radagast the Brown. I love the singing and poems and sense of fun preserved from what was a novel for children.

I find the foreshadowing of Sauron in The Hobbit movies to more effectively convey a sense of dread and power than actual Sauron in the LOTR movies.

Whilst I am generally bored by large CGI battles, I find the skirmishes in The Hobbit movies to be less boring than the epic-scale ones in LOTR.

28 December, 2017 01:37PM

Reproducible builds folks

Reproducible Builds: Weekly report #139

Here's what happened in the Reproducible Builds effort between Sunday December 17 and Saturday December 23 2017:

Packages reviewed and fixed, and bugs filed

Bugs filed in Debian:

Bugs filed in openSUSE:

  • Bernhard M. Wiedemann:
    • WindowMaker (merged) - use modification date of ChangeLog, upstreamable
    • ntp (merged) - drop date
    • bzflag - version upgrade to include already-upstreamed SOURCE_DATE_EPOCH patch

Reviews of unreproducible packages

20 package reviews have been added, 36 have been updated and 32 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (6)
  • Matthias Klose (8)

diffoscope development

strip-nondeterminism development

disorderfs development

reprotest development

reproducible-website development

  • Chris Lamb:
    • rws3:
      • Huge number of formatting improvements, typo fixes, capitalisation
      • Add section headings to make splitting up easier.
  • Holger Levsen:
    • rws3:
      • Add a disclaimer that this part of the website is a Work-In-Progress.
      • Split notes from each session into separate pages (6 sessions).
      • Other formatting and style fixes.
      • Link to Ludovic Courtès' notes on GNU Guix.
  • Ximin Luo:
    • rws3:
      • Format agenda.md to look like previous years', and other fixes
      • Split notes from each session into separate pages (1 session).

jenkins.debian.net development

Misc.

This week's edition was written by Ximin Luo and Bernhard M. Wiedemann & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

28 December, 2017 12:55PM

hackergotchi for Vincent Bernat

Vincent Bernat

(Micro)benchmarking Linux kernel functions

Usually, the performance of a Linux subsystem is measured through an external (local or remote) process stressing it. Depending on the input point used, a large portion of code may be involved. To benchmark a single function, one solution is to write a kernel module.

Minimal kernel module

Let’s suppose we want to benchmark the IPv4 route lookup function, fib_lookup(). The following kernel function executes 1,000 lookups for 8.8.8.8 and returns the average value.1 It uses the get_cycles() function to compute the execution “time.”

/* Execute a benchmark on fib_lookup() and put
   result into the provided buffer `buf`. */
static int do_bench(char *buf)
{
    unsigned long long t1, t2;
    unsigned long long total = 0;
    unsigned long i;
    unsigned count = 1000;
    int err = 0;
    struct fib_result res;
    struct flowi4 fl4;

    memset(&fl4, 0, sizeof(fl4));
    fl4.daddr = in_aton("8.8.8.8");

    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        total += t2 - t1;
    }
    if (err != 0)
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    return scnprintf(buf, PAGE_SIZE, "avg=%llu\n", total / count);
}

Now, we need to embed this function in a kernel module. The following code registers a sysfs directory containing a pseudo-file run. When a user queries this file, the module runs the benchmark function and returns the result as content.

#define pr_fmt(fmt) "kbench: " fmt

#include <linux/kernel.h>
#include <linux/version.h>
#include <linux/module.h>
#include <linux/inet.h>
#include <linux/timex.h>
#include <net/ip_fib.h>

/* When a user fetches the content of the "run" file, execute the
   benchmark function. */
static ssize_t run_show(struct kobject *kobj,
                        struct kobj_attribute *attr,
                        char *buf)
{
    return do_bench(buf);
}

static struct kobj_attribute run_attr = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &run_attr.attr,
    NULL
};
static struct attribute_group bench_attr_group = {
    .attrs = bench_attributes,
};
static struct kobject *bench_kobj;

int init_module(void)
{
    int rc;
    /* ❶ Create a simple kobject named "kbench" in /sys/kernel. */
    bench_kobj = kobject_create_and_add("kbench", kernel_kobj);
    if (!bench_kobj)
        return -ENOMEM;

    /* ❷ Create the files associated with this kobject. */
    rc = sysfs_create_group(bench_kobj, &bench_attr_group);
    if (rc) {
        kobject_put(bench_kobj);
        return rc;
    }

    return 0;
}

void cleanup_module(void)
{
    kobject_put(bench_kobj);
}

/* Metadata about this module */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Microbenchmark for fib_lookup()");

In ❶, kobject_create_and_add() creates a new kobject named kbench. A kobject is the abstraction behind the sysfs filesystem. This new kobject is visible as the /sys/kernel/kbench/ directory.

In ❷, sysfs_create_group() attaches a set of attributes to our kobject. These attributes are materialized as files inside /sys/kernel/kbench/. Currently, we declare only one of them, run, with the __ATTR_RO macro. The attribute is therefore read-only (0444) and when a user tries to fetch the content of the file, the run_show() function is invoked with a buffer of PAGE_SIZE bytes as last argument and is expected to return the number of bytes written.

For more details, you can look at the documentation in the kernel and the associated example. Beware, random posts found on the web (including this one) may be outdated.2

The following Makefile will compile this example:

# Kernel module compilation
KDIR = /lib/modules/$(shell uname -r)/build
obj-m += kbench_mod.o
kbench_mod.ko: kbench_mod.c
    make -C $(KDIR) M=$(PWD) modules

After executing make, you should get a kbench_mod.ko file:

$ modinfo kbench_mod.ko
filename:       /home/bernat/code/…/kbench_mod.ko
description:    Microbenchmark for fib_lookup()
license:        GPL
depends:
name:           kbench_mod
vermagic:       4.14.0-1-amd64 SMP mod_unload modversions

You can load it and execute the benchmark:

$ insmod ./kbench_mod.ko
$ ls -l /sys/kernel/kbench/run
-r--r--r-- 1 root root 4096 déc.  10 16:05 /sys/kernel/kbench/run
$ cat /sys/kernel/kbench/run
avg=75

The result is a number of cycles. You can get an approximate time in nanoseconds if you divide it by the frequency of your processor in gigahertz (25 ns if you have a 3 GHz processor).3

Configurable parameters

The module hard-codes two constants: the number of loops and the destination address to test. We can make these parameters user-configurable by exposing them as attributes of our kobject and defining a pair of functions to read/write them:

static unsigned long loop_count      = 5000;
static u32           flow_dst_ipaddr = 0x08080808;

/* A mutex is used to ensure we are thread-safe when altering attributes. */
static DEFINE_MUTEX(kb_lock);

/* Show the current value for loop count. */
static ssize_t loop_count_show(struct kobject *kobj,
                               struct kobj_attribute *attr,
                               char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%lu\n", loop_count);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for loop count. */
static ssize_t loop_count_store(struct kobject *kobj,
                                struct kobj_attribute *attr,
                                const char *buf,
                                size_t count)
{
    unsigned long val;
    int err = kstrtoul(buf, 0, &val);
    if (err < 0)
        return err;
    if (val < 1)
        return -EINVAL;
    mutex_lock(&kb_lock);
    loop_count = val;
    mutex_unlock(&kb_lock);
    return count;
}

/* Show the current value for destination address. */
static ssize_t flow_dst_ipaddr_show(struct kobject *kobj,
                                    struct kobj_attribute *attr,
                                    char *buf)
{
    ssize_t res;
    mutex_lock(&kb_lock);
    res = scnprintf(buf, PAGE_SIZE, "%pI4\n", &flow_dst_ipaddr);
    mutex_unlock(&kb_lock);
    return res;
}

/* Store a new value for destination address. */
static ssize_t flow_dst_ipaddr_store(struct kobject *kobj,
                                     struct kobj_attribute *attr,
                                     const char *buf,
                                     size_t count)
{
    mutex_lock(&kb_lock);
    flow_dst_ipaddr = in_aton(buf);
    mutex_unlock(&kb_lock);
    return count;
}

/* Define the new set of attributes. They are read/write attributes. */
static struct kobj_attribute loop_count_attr      = __ATTR_RW(loop_count);
static struct kobj_attribute flow_dst_ipaddr_attr = __ATTR_RW(flow_dst_ipaddr);
static struct kobj_attribute run_attr             = __ATTR_RO(run);
static struct attribute *bench_attributes[] = {
    &loop_count_attr.attr,
    &flow_dst_ipaddr_attr.attr,
    &run_attr.attr,
    NULL
};

The IPv4 address is stored as a 32-bit integer but displayed and parsed using the dotted quad notation. The kernel provides the appropriate helpers for this task.

After this change, we have two new files in /sys/kernel/kbench. We can read the current values and modify them:

# cd /sys/kernel/kbench
# ls -l
-rw-r--r-- 1 root root 4096 déc.  10 19:10 flow_dst_ipaddr
-rw-r--r-- 1 root root 4096 déc.  10 19:10 loop_count
-r--r--r-- 1 root root 4096 déc.  10 19:10 run
# cat loop_count
5000
# cat flow_dst_ipaddr
8.8.8.8
# echo 9.9.9.9 > flow_dst_ipaddr
# cat flow_dst_ipaddr
9.9.9.9

We still need to alter the do_bench() function to make use of these parameters:

static int do_bench(char *buf)
{
    /* … */
    mutex_lock(&kb_lock);
    count = loop_count;
    fl4.daddr = flow_dst_ipaddr;
    mutex_unlock(&kb_lock);

    for (i = 0; i < count; i++) {
        /* … */

Meaningful statistics

Currently, we only compute the average lookup time. This value is usually inadequate:

  • A small number of outliers can raise this value quite significantly. An outlier can happen because we were preempted off the CPU while executing the benchmarked function. This doesn’t happen often if the function execution time is short (less than a millisecond), but when it does, the outliers can be off by several milliseconds, which is enough to make the average misleading when most values are several orders of magnitude smaller. For this reason, the median usually gives a better view.

  • The distribution may be asymmetrical or have several local maxima. It’s better to keep several percentiles or even a distribution graph.

To be able to extract meaningful statistics, we store the results in an array.

static int do_bench(char *buf)
{
    unsigned long long *results;
    /* … */

    results = kmalloc(sizeof(*results) * count, GFP_KERNEL);
    if (!results)
        return scnprintf(buf, PAGE_SIZE, "msg=\"no memory\"\n");

    for (i = 0; i < count; i++) {
        t1 = get_cycles();
        err |= fib_lookup(&init_net, &fl4, &res, 0);
        t2 = get_cycles();
        results[i] = t2 - t1;
    }

    if (err != 0) {
        kfree(results);
        return scnprintf(buf, PAGE_SIZE, "err=%d msg=\"lookup error\"\n", err);
    }
    /* Compute and display statistics */
    display_statistics(buf, results, count);

    kfree(results);
    return strnlen(buf, PAGE_SIZE);
}

Then, we need a helper function to compute percentiles:

static unsigned long long percentile(int p,
                                     unsigned long long *sorted,
                                     unsigned count)
{
    int index = p * count / 100;
    int index2 = index + 1;
    if (p * count % 100 == 0)
        return sorted[index];
    if (index2 >= count)
        index2 = index - 1;
    if (index2 < 0)
        index2 = index;
    return (sorted[index] + sorted[index2]) / 2;
}

This function needs a sorted array as input. The kernel provides a heapsort function, sort(), for this purpose. Another useful value to have is the deviation from the median. Here is a function to compute the median absolute deviation:4

static unsigned long long mad(unsigned long long *sorted,
                              unsigned long long median,
                              unsigned count)
{
    unsigned long long *dmedian = kmalloc(sizeof(unsigned long long) * count,
                                          GFP_KERNEL);
    unsigned long long res;
    unsigned i;

    if (!dmedian) return 0;
    for (i = 0; i < count; i++) {
        if (sorted[i] > median)
            dmedian[i] = sorted[i] - median;
        else
            dmedian[i] = median - sorted[i];
    }
    sort(dmedian, count, sizeof(unsigned long long), compare_ull, NULL);
    res = percentile(50, dmedian, count);
    kfree(dmedian);
    return res;
}

With these two functions, we can provide additional statistics:

static void display_statistics(char *buf,
                               unsigned long long *results,
                               unsigned long count)
{
    unsigned long long p95, p90, p50;

    sort(results, count, sizeof(*results), compare_ull, NULL);
    if (count == 0) {
        scnprintf(buf, PAGE_SIZE, "msg=\"no match\"\n");
        return;
    }

    p95 = percentile(95, results, count);
    p90 = percentile(90, results, count);
    p50 = percentile(50, results, count);
    scnprintf(buf, PAGE_SIZE,
          "min=%llu max=%llu count=%lu 95th=%llu 90th=%llu 50th=%llu mad=%llu\n",
          results[0],
          results[count - 1],
          count,
          p95,
          p90,
          p50,
          mad(results, p50, count));
}
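
The sort() calls above rely on a compare_ull() comparator that these snippets don't show. A minimal sketch of what such a comparator could look like (the name comes from the calls above; the actual implementation in the module may differ):

static int compare_ull(const void *a, const void *b)
{
    const unsigned long long x = *(const unsigned long long *)a;
    const unsigned long long y = *(const unsigned long long *)b;

    if (x < y)
        return -1;
    if (x > y)
        return 1;
    return 0;
}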

We can also append a graph of the distribution function (and of the cumulative distribution function):

min=72 max=33364 count=100000 95th=154 90th=142 50th=112 mad=6
    value │                      ┊                         count
       72 │                                                   51
       77 │▒                                                3548
       82 │▒▒░░                                             4773
       87 │▒▒░░░░░                                          5918
       92 │░░░░░░░                                          1207
       97 │░░░░░░░                                           437
      102 │▒▒▒▒▒▒░░░░░░░░                                  12164
      107 │▒▒▒▒▒▒▒░░░░░░░░░░░░░░                           15508
      112 │▒▒▒▒▒▒▒▒▒▒▒░░░░░░░░░░░░░░░░░░░░░░               23014
      117 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░             6297
      122 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░              905
      127 │▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░           3845
      132 │▒▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░       6687
      137 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░     4884
      142 │▒▒░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░   4133
      147 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  1015
      152 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░  1123

Benchmark validity

While the benchmark produces some figures, we may question their validity. There are several traps when writing a microbenchmark:

dead code
The compiler may optimize away our benchmark because the result is not used. In our example, we make sure to combine the results into a variable to avoid this.
warmup phase
One-time initializations may negatively affect the benchmark. This is less likely to happen with C code since there is no JIT. Nonetheless, you may want to add a small warmup phase.
too small dataset
If the benchmark runs with the same input parameters over and over, the input data may fit entirely in the L1 cache, which artificially improves the results. Therefore, it is important to iterate over a large dataset.
too regular dataset
A regular dataset may still artificially improve the benchmark despite its size. While the whole dataset will not fit into the L1/L2 cache, the previous run may have loaded most of the data needed for the current run. In the route lookup example, as route entries are organized in a tree, it’s important not to scan the address space linearly. The address space could instead be explored randomly; a simple linear congruential generator brings reproducible randomness (a small sketch follows this list).
large overhead
If the benchmarked function runs in a few nanoseconds, the overhead of the benchmark infrastructure may be too high. Typically, the overhead of the method presented here is around 5 nanoseconds. get_cycles() is a thin wrapper around the RDTSC instruction: it returns the number of cycles for the current processor since the last reset. It’s also virtualized with low overhead in case you run the benchmark in a virtual machine. If you want to measure a function with greater precision, you need to wrap it in a loop. However, the loop itself adds to the overhead, notably if you need to compute a large input set (in this case, the input can be prepared). Compilers also like to mess with loops. Lastly, a loop hides the result distribution.
preemption
While the benchmark is running, the thread executing it can be preempted (or when running in a virtual machine, the whole virtual machine can be preempted by the host). When the function takes less than a millisecond to execute, one can assume preemption is rare enough to be filtered out by using a percentile function.
noise
When running the benchmark, noise from unrelated processes (or sibling hosts when benchmarking in a virtual machine) needs to be avoided as it may change from one run to another. Therefore, it is not a good idea to benchmark in a public cloud. On the other hand, adding controlled noise to the benchmark may lead to less artificial results: in our example, route lookup is only a small part of routing a packet, and measuring it alone in a tight loop artificially improves the results.
syncing parallel benchmarks
While it is possible (and safe) to run several benchmarks in parallel, it may be difficult to ensure they really run in parallel: some invocations may work in better conditions because other threads are not running yet, skewing the result. Ideally, each run should execute bogus iterations and start measurements only when all runs are present. This doesn’t seem to be a trivial addition.
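
As an illustration of the “too regular dataset” point above, here is a minimal sketch (not from the original module) of a linear congruential generator that walks the IPv4 address space in a reproducible but non-linear order:

static u32 lcg_state = 2017;

/* Classic Numerical Recipes LCG parameters; the 32-bit wrap-around is the
   intended modulus. Any full-period LCG would do. */
static u32 next_daddr(void)
{
    lcg_state = lcg_state * 1664525U + 1013904223U;
    return lcg_state;
}

/* In the benchmark loop, fl4.daddr = next_daddr(); would then replace the
   fixed destination address. */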

As a conclusion, the benchmark module presented here is quite primitive (notably compared to a framework like JMH for Java) but, with care, can deliver some conclusive results like in these posts: “IPv4 route lookup on Linux” and “IPv6 route lookup on Linux.”

Alternative

Use of a tracing tool is an alternative approach. For example, if we want to benchmark IPv4 route lookup times, we can use the following process:

while true; do
  ip route get $((RANDOM%100)).$((RANDOM%100)).$((RANDOM%100)).5
  sleep 0.1
done

Then, we instrument the __fib_lookup() function with eBPF (through BCC):

$ sudo funclatency-bpfcc __fib_lookup
Tracing 1 functions for "__fib_lookup"... Hit Ctrl-C to end.
^C
     nsecs               : count     distribution
         0 -> 1          : 0        |                    |
         2 -> 3          : 0        |                    |
         4 -> 7          : 0        |                    |
         8 -> 15         : 0        |                    |
        16 -> 31         : 0        |                    |
        32 -> 63         : 0        |                    |
        64 -> 127        : 0        |                    |
       128 -> 255        : 0        |                    |
       256 -> 511        : 3        |*                   |
       512 -> 1023       : 1        |                    |
      1024 -> 2047       : 2        |*                   |
      2048 -> 4095       : 13       |******              |
      4096 -> 8191       : 42       |********************|

Currently, the overhead is quite high, as a route lookup on an empty routing table takes less than 100 ns. Once Linux supports inter-event tracing, the overhead of this solution may be reduced enough to make it usable for such microbenchmarks.


  1. In this simple case, it may be more accurate to use:

    t1 = get_cycles();
    for (i = 0; i < count; i++) {
        err |= fib_lookup(&init_net, &fl4, &res, 0);
    }
    t2 = get_cycles();
    total = t2 - t1;
    

    However, this prevents us from computing more statistics. Moreover, when you need to provide a non-constant input to the fib_lookup() function, the first approach is likely to be more accurate. 

  2. In-kernel API backward compatibility is a non-goal of the Linux kernel. 

  3. You can get the current frequency with cpupower frequency-info. As the frequency may vary (even when using the performance governor), this may not be accurate but this still provides an easier representation (comparable results should use the same frequency). 

  4. Only integer arithmetic is available in the kernel. While it is possible to approximate a standard deviation using only integers, the median absolute deviation just reuses the percentile() function defined above. 

28 December, 2017 09:27AM by Vincent Bernat

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Freezing of tasks failed

It is interesting how a user-space task can hinder a Linux kernel software suspend operation.

[11735.155443] PM: suspend entry (deep)
[11735.155445] PM: Syncing filesystems ... done.
[11735.215091] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11735.215172] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11735.558676] rfkill: input handler enabled
[11735.608859] (NULL device *): firmware: direct-loading firmware rtlwifi/rtl8723befw_36.bin
[11735.609910] (NULL device *): firmware: direct-loading firmware rtl_bt/rtl8723b_fw.bin
[11735.611871] Freezing user space processes ... 
[11755.615603] Freezing of tasks failed after 20.003 seconds (1 tasks refusing to freeze, wq_busy=0):
[11755.615854] digikam         D    0 13262  13245 0x00000004
[11755.615859] Call Trace:
[11755.615873]  __schedule+0x28e/0x880
[11755.615878]  schedule+0x2c/0x80
[11755.615889]  request_wait_answer+0xa3/0x220 [fuse]
[11755.615895]  ? finish_wait+0x80/0x80
[11755.615902]  __fuse_request_send+0x86/0x90 [fuse]
[11755.615907]  fuse_request_send+0x27/0x30 [fuse]
[11755.615914]  fuse_send_readpages.isra.30+0xd1/0x120 [fuse]
[11755.615920]  fuse_readpages+0xfd/0x110 [fuse]
[11755.615928]  __do_page_cache_readahead+0x200/0x2d0
[11755.615936]  filemap_fault+0x37b/0x640
[11755.615940]  ? filemap_fault+0x37b/0x640
[11755.615944]  ? filemap_map_pages+0x179/0x320
[11755.615950]  __do_fault+0x1e/0xb0
[11755.615953]  __handle_mm_fault+0xc8a/0x1160
[11755.615958]  handle_mm_fault+0xb1/0x200
[11755.615964]  __do_page_fault+0x257/0x4d0
[11755.615968]  do_page_fault+0x2e/0xd0
[11755.615973]  page_fault+0x22/0x30
[11755.615976] RIP: 0033:0x7f32d3c7ff90
[11755.615978] RSP: 002b:00007ffd887c9d18 EFLAGS: 00010246
[11755.615981] RAX: 00007f32d3fc9c50 RBX: 000000000275e440 RCX: 0000000000000003
[11755.615982] RDX: 0000000000000002 RSI: 00007ffd887c9f10 RDI: 000000000275e440
[11755.615984] RBP: 00007ffd887c9f10 R08: 000000000275e820 R09: 00000000018d2f40
[11755.615986] R10: 0000000000000002 R11: 0000000000000000 R12: 000000000189cbc0
[11755.615987] R13: 0000000001839dc0 R14: 000000000275e440 R15: 0000000000000000
[11755.616014] OOM killer enabled.
[11755.616015] Restarting tasks ... done.
[11755.817640] PM: suspend exit
[11755.817698] PM: suspend entry (s2idle)
[11755.817700] PM: Syncing filesystems ... done.
[11755.983156] rfkill: input handler disabled
[11756.030209] rfkill: input handler enabled
[11756.073529] Freezing user space processes ... 
[11776.084309] Freezing of tasks failed after 20.010 seconds (2 tasks refusing to freeze, wq_busy=0):
[11776.084630] digikam         D    0 13262  13245 0x00000004
[11776.084636] Call Trace:
[11776.084653]  __schedule+0x28e/0x880
[11776.084659]  schedule+0x2c/0x80
[11776.084672]  request_wait_answer+0xa3/0x220 [fuse]
[11776.084680]  ? finish_wait+0x80/0x80
[11776.084688]  __fuse_request_send+0x86/0x90 [fuse]
[11776.084695]  fuse_request_send+0x27/0x30 [fuse]
[11776.084703]  fuse_send_readpages.isra.30+0xd1/0x120 [fuse]
[11776.084711]  fuse_readpages+0xfd/0x110 [fuse]
[11776.084721]  __do_page_cache_readahead+0x200/0x2d0
[11776.084730]  filemap_fault+0x37b/0x640
[11776.084735]  ? filemap_fault+0x37b/0x640
[11776.084743]  ? __update_load_avg_blocked_se.isra.33+0xa1/0xf0
[11776.084749]  ? filemap_map_pages+0x179/0x320
[11776.084755]  __do_fault+0x1e/0xb0
[11776.084759]  __handle_mm_fault+0xc8a/0x1160
[11776.084765]  handle_mm_fault+0xb1/0x200
[11776.084772]  __do_page_fault+0x257/0x4d0
[11776.084777]  do_page_fault+0x2e/0xd0
[11776.084783]  page_fault+0x22/0x30
[11776.084787] RIP: 0033:0x7f31ddf315e0
[11776.084789] RSP: 002b:00007ffd887ca068 EFLAGS: 00010202
[11776.084793] RAX: 00007f31de13c350 RBX: 00000000040be3f0 RCX: 000000000283da60
[11776.084795] RDX: 0000000000000001 RSI: 00000000040be3f0 RDI: 00000000040be3f0
[11776.084797] RBP: 00007f32d3fca1e0 R08: 0000000005679250 R09: 0000000000000020
[11776.084799] R10: 00000000058fc1b0 R11: 0000000004b9ac50 R12: 0000000000000000
[11776.084801] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000
[11776.084806] QXcbEventReader D    0 13268  13245 0x00000004
[11776.084810] Call Trace:
[11776.084817]  __schedule+0x28e/0x880
[11776.084823]  schedule+0x2c/0x80
[11776.084827]  rwsem_down_write_failed_killable+0x25a/0x490
[11776.084832]  call_rwsem_down_write_failed_killable+0x17/0x30
[11776.084836]  ? call_rwsem_down_write_failed_killable+0x17/0x30
[11776.084842]  down_write_killable+0x2d/0x50
[11776.084848]  do_mprotect_pkey+0xa9/0x2f0
[11776.084854]  SyS_mprotect+0x13/0x20
[11776.084859]  system_call_fast_compare_end+0xc/0x97
[11776.084861] RIP: 0033:0x7f32d1f7c057
[11776.084863] RSP: 002b:00007f32cbb8c8d8 EFLAGS: 00000206 ORIG_RAX: 000000000000000a
[11776.084867] RAX: ffffffffffffffda RBX: 00007f32c4000020 RCX: 00007f32d1f7c057
[11776.084869] RDX: 0000000000000003 RSI: 0000000000001000 RDI: 00007f32c4024000
[11776.084871] RBP: 00000000000000c5 R08: 00007f32c4000000 R09: 0000000000024000
[11776.084872] R10: 00007f32c4024000 R11: 0000000000000206 R12: 00000000000000a0
[11776.084874] R13: 00007f32c4022f60 R14: 0000000000001000 R15: 00000000000000e0
[11776.084906] OOM killer enabled.
[11776.084907] Restarting tasks ... done.
[11776.289655] PM: suspend exit
[11776.459624] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready
[11776.469521] rfkill: input handler disabled
[11776.978733] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready
[11777.038879] IPv6: ADDRCONF(NETDEV_UP): wlp1s0: link is not ready
[11778.022062] wlp1s0: authenticate with 50:8f:4c:82:4d:dd
[11778.033155] wlp1s0: send auth to 50:8f:4c:82:4d:dd (try 1/3)
[11778.038522] wlp1s0: authenticated
[11778.041511] wlp1s0: associate with 50:8f:4c:82:4d:dd (try 1/3)
[11778.059860] wlp1s0: RX AssocResp from 50:8f:4c:82:4d:dd (capab=0x431 status=0 aid=5)
[11778.060253] wlp1s0: associated
[11778.060308] IPv6: ADDRCONF(NETDEV_CHANGE): wlp1s0: link becomes ready
[11778.987669] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.117608] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.160930] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.784045] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.913668] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch
[11779.961517] [drm:wait_panel_status [i915]] *ERROR* PPS state mismatch

28 December, 2017 06:33AM by Ritesh Raj Sarraf

Russ Allbery

Tasker 0.4

As mentioned in my previous post, I'm orphaning this package and won't be working on it further. I doubt anyone cares about my little experiment in an extremely simple web task tracker with almost no features. But Julien ÉLIE sent me a few patches for it many years ago when he was experimenting with it, and it seemed like a shame to let those die in my inbox.

So this is a final release incorporating his fixes (including a few that had been languishing in Git). It has a variety of bug fixes to things like URL encoding, error checking, and non-ASCII group name support. I'm not using this any more, and must admit that I didn't even test this (and it's Python 2, and the coding style is poor, etc.), so this is just an "in case it's useful to someone" release.

You can get the latest version from the Tasker distribution page.

28 December, 2017 04:47AM

December 27, 2017

Carl Chenet

Testing Ansible Playbooks With Vagrant

I use Ansible to automate the deployments of my websites (LinuxJobs.fr, Journal du hacker) and my applications (Feed2toot, Feed2tweet). I’ll describe in this blog post my setup in order to test my Ansible Playbooks locally on my laptop.

Why testing the Ansible Playbooks

I need a simple and fast way to test the deployment of my Ansible Playbooks locally on my laptop, especially at the beginning of writing a new Playbook, because deploying directly on the production server is both reeeeally slow… and risky for my services in production.

Instead of deploying on a remote server, I'll deploy my Playbooks on a VirtualBox VM using Vagrant. This lets me quickly get the result of a new modification, and iterate and fix as fast as possible.

Disclaimer: I am not a professional programmer. Better solutions might exist; I'm only describing one way of testing Ansible Playbooks that I find both easy and efficient for my own use cases.

My process

  1. Begin writing the new Ansible Playbook
  2. Launch a fresh virtual machine (VM) and deploy the playbook on this VM using Vagrant
  3. Fix the issues, either in the playbook or in the application deployed by Ansible itself
  4. Relaunch the deployment on the VM
  5. If there are more errors, go back to step 3. Otherwise destroy the VM, recreate it, and deploy again to test one last time with a fresh install
  6. If no error remains, tag the version of your Ansible Playbook and you’re ready to deploy in production

What you need

First, you need VirtualBox. If you use the Debian distribution, this link describes how to install it, either from the Debian repositories or from upstream.

Second, you need Vagrant. Why Vagrant? Because it's a kind of middleware between your development environment and your virtual machine, allowing programmatically reproducible operations and easily linking your deployments to the virtual machine. Install it with the following command:

# apt install vagrant

Setting up Vagrant

Everything about Vagrant lies in the file Vagrantfile. Here is mine:

Vagrant.require_version ">= 2.0.0"

Vagrant.configure(1) do |config|

 config.vm.box = "debian/stretch64"
 config.vm.provision "shell", inline: "apt install --yes git python3-pip"
 config.vm.provision "ansible" do |ansible|
   ansible.verbose = "v"
   ansible.playbook = "site.yml"
   ansible.vault_password_file = "vault_password_file"
 end
end



  1. The 1st line defines what versions of Vagrant should execute your Vagrantfile.
  2. In the first block of the file, you can define the following operations for as many virtual machines as you wish (here just one).
  3. The 3rd line defines the official Vagrant image we’ll use for the virtual machine.
  4. The 4th line is really important: it installs the applications missing from the base VM image. Here we install git and python3-pip with apt.
  5. The next line indicates the start of the Ansible configuration.
  6. On the 6th line, we want a verbose output of Ansible.
  7. On the 7th line, we define the entry point of your Ansible Playbook.
  8. On the 8th line, if you use Ansible Vault to encrypt some files, just define here the file with your Ansible Vault passphrase.

When Vagrant launches Ansible, it’s going to launch something like:

$  ansible-playbook --inventory-file=/home/me/ansible/test-ansible-playbook/.vagrant/provisioners/ansible/inventory -v --vault-password-file=vault_password_file site.yml
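
The site.yml passed here is the playbook entry point named in the Vagrantfile. A minimal sketch of what such a file could contain (the host group and role name are hypothetical, not from this post):

# site.yml
- hosts: all
  become: true
  roles:
    - myapp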

Executing Vagrant

After writing your Vagrantfile, you need to launch your VM. It’s as simple as using the following command:

$ vagrant up

That’s a slow operation, because the VM will be launched, the additional apps you defined in the Vagrantfile will be installed, and finally your Playbook will be deployed on it. You should use it sparingly.

Ok, now we’re really ready to iterate fast. Between your different modifications, in order to test your deployments fast and on a regular basis, just use the following command:

$ vagrant provision

Once your Ansible Playbook is finally ready, usually after lots of iterations (at least that’s my case), you should test it on a fresh install, because your different iterations may have modified your virtual machine and could trigger unexpected results.

In order to test it from a fresh install, use the following command:

$ vagrant destroy && vagrant up

That’s again a slow operation. You should use it when you’re pretty sure your Ansible Playbook is almost finished. After testing your deployment on a fresh VM, you’re now ready to deploy in production. Or at least better prepared :p

Possible improvements? Let me know

I find the setup described in this blog post quite useful for my use cases. I can iterate quite fast especially when I begin writing a new playbook, not only on the playbook but sometimes on my own latest apps, not yet ready to be deployed in production. Deploying on a remote server would be both slow and dangerous for my services in production.

I could use a continuous integration (CI) server, but that’s not the topic of this blog post. As said before, the goal is to iterate as fast as possible at the beginning of writing a new Ansible Playbook.


Committing, pushing to your Git repository and waiting for the execution of your CI tests is overkill at the beginning of an Ansible Playbook, when it’s full of errors waiting to be debugged one by one. I think CI is more useful later in the life of an Ansible Playbook, especially when different people work on it and you have a set of code quality rules to enforce. That’s only my opinion and it’s open to discussion; once more, I’m not a professional programmer.

If you have better solutions to test Ansible Playbooks, or ways to improve the one described here, let me know by writing a comment or by contacting me through my social network accounts below; I’ll be delighted to hear your improvements.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.fr, a job board for Free and Open Source Jobs in France.

Follow Me On Social Networks

 

27 December, 2017 11:00PM by Carl Chenet

hackergotchi for Steve Kemp

Steve Kemp

Translating my website to Finnish

I've now been living in Finland for two years, and I'm pondering a small project to translate my main website into Finnish.

Obviously if my content is solely Finnish it will become of little interest to the world - if my vanity lets me even pretend it is useful at the moment!

The traditional way to do this, with Apache, is to render pages in multiple languages and let the client(s) request their preferred version with Accept-Language:. Though it seems that many clients are terrible at this, and the whole approach is a mess. Pretending it works, though, we render pages such as:

index.html
index.en.html
index.fi.html

Then "magic happens", such that the right content is served. I can then do extra-things, like add links to "English" or "Finnish" in the header/footers to let users choose.

Unfortunately I have an immediate problem! I host a bunch of websites on a single machine and I don't want to allow a single site compromise to affect other sites. To do that I run each website under its own Unix user. For example I have the website "steve.fi" running as the "s-fi" user, and my blog runs as "s-blog", or "s-blogfi":

root@www ~ # ps -ef | egrep '(s-blog|s-fi)'
s-blogfi /usr/sbin/lighttpd -f /srv/blog.steve.fi/lighttpd.conf -D
s-blog   /usr/sbin/lighttpd -f /srv/blog.steve.org.uk/lighttpd.conf -D
s-fi     /usr/sbin/lighttpd -f /srv/steve.fi/lighttpd.conf -D

There you can see the Unix user, and the per-user instance of lighttpd which hosts the website. Each instance binds to a high-port on localhost, and I have a reverse proxy listening on the public IP address to route incoming connections to the appropriate back-end instance.

I used to use thttpd but switched to lighttpd to allow CGI scripts to be used - some of my sites are slightly/mostly dynamic.

Unfortunately lighttpd doesn't support multiviews without some Lua hacks which will require rewriting - as the supplied example only handles Accept rather than the language-header I want.

It seems my simplest solution is to switch from having lighttpd on the back-end to running apache2 instead, but I've not yet decided which way to jump.

Food for thought, anyway.

hyvää joulua!

27 December, 2017 10:00PM

Russ Allbery

Review: The Tiger's Daughter

Review: The Tiger's Daughter, by K. Arsenault Rivera

Series: Their Bright Ascendancy #1
Publisher: Tor
Copyright: October 2017
ISBN: 0-7653-9254-2
Format: Kindle
Pages: 493

Shizuka is the heir to the Hokkaran Empire, daughter of the empire's most celebrated poet (her father) and its greatest soldier (her mother). Shefali is Qorin, one of the horse people, daughter of the ruler of the clans in all but name. Their mothers slayed a Demon General together and were the closest of friends. When they were introduced at the age of three, Shizuka tried to kill Shefali. Then they started sending letters to each other. By the time they met again at seven, they were inseparable.

This was the second epic fantasy novel inspired by China (well, more Japan in this case) and Mongolia that I read within a couple of days. The other one, Elizabeth Bear's Range of Ghosts, was tightly controlled, careful, and structured, mixing character growth with foreboding glimpses of the antagonist. The Tiger's Daughter is a sprawling, rambling story with a ridiculous frame, full of larger-than-life personalities, expressions of devotion, dramatic stands, impulsive choices, angry denouncements, and unshakable loyalty. It has all the feelings about its characters, and it's much more interested in those feelings than in the structure of the story.

It's a glorious mess and I loved it unreservedly.

There is no way this book should have worked as well as it did, particularly on me. The story frame is an extended "as you know, Bob" retelling of events to a character who was there for 90% of them. Later in the book, there is an unhealing wound story line and some disturbing body horror, two of my least favorite fictional tropes. There were a few parts of this book I found difficult to read. And yet, I loved it anyway. There is something utterly delightful about Shizuka and Shefali's relationship: the rock-solid certainty of it underneath all the drama, the sense of both of them against the entire world if necessary, and the beautiful balancing of Shizuka's aggressive, dramatic arrogance and Shefali's quieter, cautious determination. The unique friendship between their mothers adds more depth, both as a role model and as a contrast. Under all of that sprawl, this book is doing so much work with unapologetic female power and female relationships.

One key to the success of The Tiger's Daughter is that it's unashamed of its feelings about its main characters. This is a book about two very different women and their brilliant, blazing relationship. That is what this book is about, not fighting off a great evil, saving the world, or tracing a coming-of-age story, and it is completely unapologetic about it. The two protagonists do not postpone relationship work to fix some larger problem. They don't sacrifice their relationship for the realm. Each of them is the most important thing in the world to the other, they act accordingly, and they dare the world to make something of it. It's not an uncomplicated relationship: there are moments of depression and despair, misunderstandings, and repeated cases of Shizuka promising things she can't deliver. But there's a solidity, a sense that this book is not going to rip this relationship apart because it would be more dramatic or would be a growth experience.

It's a type of love at first sight, it's perhaps not the most realistic relationship (although what does that mean in a world of demons and gods and strange powers?), but Rivera commits to it and doesn't back down, which gives the story a glorious strength.

I don't think this book would have existed in traditionally-published epic fantasy twenty years ago. You might see characters with Shizuka's skill with swords or Shefali's inability to miss an arrow shot, but Shizuka wouldn't also be the finest calligrapher in the land (and her mother would be the poet and her father the soldier, even if she were still female). And, more centrally, the characters would be focused towards a goal: fighting off a great evil, defending a kingdom, overthrowing a bad ruler. Relationships and story structure would have been bent towards a coming-of-age story, probably focused on a man, that was all about power and responsibility and training. Even urban fantasy, which is more willing to add romance, tended towards a similar arc.

This has changed, and I think that's wonderful. I don't have the critical background to pinpoint where it changed first (I have a personal theory that it's related to the growth of the fan-fiction community, but it's just a theory), but it's given us more books like this where the goal of the characters is to be happy together and glory in their power and skill. They're not apologetic about it, they don't have elder mentor figures in whose shadows they live and whose instructions they have to follow, and they make their own lessons from mistakes instead of being handed analysis by others. And they own every last decision, for good or for ill. There's no overriding fantasy, no guiding prophecy, no unexpected manifestation of powers outside of their control. Just decisions and consequences, where emotions and logic both play a part. This book doesn't have the structure of a romance, but it allows for romance motivations alongside epic fantasy motivations and it's a better story for it.

The Tiger's Daughter has some messy first novel problems, is occasionally overly dramatic, has several tropes I personally find uncomfortable to read, and is full of "most powerful in the land" fantasy wish fulfillment. But I adore these people and would happily read about them for days at a time. The ending is absurdly artificial and yet still had me grinning in delight. What's wrong with wish fulfillment, anyway? Isn't fulfilling wishes a good thing?

I've already pre-ordered the sequel.

This book reaches a definite conclusion, but leaves a lot on the table for a series. Followed by The Phoenix Empress.

Rating: 8 out of 10

27 December, 2017 04:46AM

Reducing obligations

At this year's DebConf, Enrico Zini gave a talk on consensually doing things together that has stuck with me ever since. I recommend watching it if you haven't. The core idea that I took away from it is that volunteer projects should be both voluntary and enjoyable to stay healthy, and an important component of this is for those involved to stop doing things they're not enjoying. (And for others to not pressure them or expect them to be heroes.)

I frequently have to tell myself that one cannot continuously add new obligations without occasionally setting aside existing ones, so this is something I needed to hear. I also have a hard time not responding to email or software contributions, even when I don't have the time or emotional energy to reply. I've therefore accumulated a lot of old email that I "should" respond to (that word is always a warning sign), or patches or ideas for software I maintain that I haven't implemented.

This is the time of the year when I try to step back, look over my life and my current goals and priorities, and decide if there's anything I want to change. My goal for this year is to put aside things that I'm not doing consensually, in Enrico's term, and refocus my time and effort on things that I'm truly enjoying. Or, in some cases, picking up things again that I had been enjoying but hadn't given enough energy to.

So, I'm not going away, from Debian or from free software or from replying to email, by any stretch! But I am giving myself permission to not feel obligated to do a bunch of things I was doing (or not doing but thinking I "should" do). I'm also going to remove myself from the Uploaders control field of more packages that I haven't worked on in some time, just for clarity and fewer things on my packages overview page.

I already orphaned the following packages upstream, but was still maintaining the corresponding Debian packages. I don't use any of this software at the moment, though, so that doesn't make much sense. I'm therefore going to orphan these Debian packages or put them for adoption. (In some cases, there are other possibly obvious maintainers, so I'll ask them first.)

  • krb5-sync
  • lbcd
  • libafs-pag-perl (will give this to the Perl team)
  • libpam-afs-session
  • webauth

I had not yet orphaned the following software for which I'm upstream, but I probably should have, and will fairly soon:

Finally, I have oodles of older mail messages from various people that I wish I'd had the energy and thoughtfulness to reply to at the time. At this point, many of them are years old, and everyone except me has probably forgotten about them. But I still had them saved to respond to, and they were sitting around radiating obligation. This week, I'm giving myself permission to go through and delete them unanswered. I'm sorry to all the people I didn't reply to! Sadly, energy is short, and even with conversations that I start, sometimes life happens.

Please feel free to try again if there's something you still wanted to talk to me about! I do manage to reply to most things.

27 December, 2017 04:18AM

December 26, 2017

Thorsten Alteholz

Debian-Med bug squashing

The Debian Med Advent Calendar was again really successful this year. As announced on the mailinglist, this year the second highest number of bugs has been closed during that bug squashing:

year number of bugs closed
2011 63
2012 28
2013 73
2014 5
2015 150
2016 95
2017 105

Well done everybody who participated!

26 December, 2017 11:16AM by alteholz

Tianon Gravi

Dockerizing Compiled Software

I recently went through a stint of closing a huge number of issues in the docker-library/php repository, and one of the oldest (and longest) discussions was related to installing dependencies for compiling extensions, and I wrote a semi-long comment explaining how I do this in a general way for any software I wish to Dockerize.

I’m going to copy most of that comment here and perhaps expand a little bit more in order to have a better/cleaner place to link to!

The first step I take is to write the naïve version of the Dockerfile: download the source, run ./configure && make etc, clean up. I then try building my naïve creation, and in doing so hope for an error message. (yes, really!)
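
For concreteness, that naïve first pass might look something like this (a hypothetical sketch with a placeholder project and URL; the dependency packages discovered below get appended to the apt-get line over time):

# hypothetical example: "xyz" stands in for whatever is being Dockerized
FROM debian:stretch-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential curl ca-certificates \
    && curl -fsSL -o xyz.tar.gz "https://example.org/xyz-1.0.tar.gz" \
    && tar -xzf xyz.tar.gz \
    && cd xyz-1.0 \
    && ./configure \
    && make \
    && make install \
    && cd .. \
    && rm -rf xyz-1.0 xyz.tar.gz /var/lib/apt/lists/*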

The error message will usually take the form of something like error: could not find "xyz.h" or error: libxyz development headers not found.

If I’m building in Debian, I’ll hit https://packages.debian.org/file:xyz.h (replacing “xyz.h” with the name of the header file from the error message), or even just Google something like “xyz.h debian”, to figure out the name of the package I require.

If I’m building in Alpine, I’ll use https://pkgs.alpinelinux.org/contents to perform a similar search.

The same works to some extent for “libxyz development headers”, but in my experience Google works better for those since different distributions and projects will call these development packages by different names, so sometimes it’s a little harder to figure out exactly which one is the “right” one to install.

Once I’ve got a package name, I add that package name to my Dockerfile, rinse, and repeat. Eventually, this usually leads to a successful build. Occasionally I find that some library either isn’t in Debian or Alpine, or isn’t new enough, and I’ve also got to build it from source, but those instances are rare in my own experience – YMMV.

I’ll also often check the source for the Debian (via https://sources.debian.org) or Alpine (via https://git.alpinelinux.org/cgit/aports/tree) package of the software I’m looking to compile, especially paying attention to Build-Depends (ala php7.0=7.0.26-1’s debian/control file) and/or makedepends (ala php7’s APKBUILD file) for package name clues.

Personally, I find this sort of detective work interesting and rewarding, but I realize I’m probably a bit of a unique creature. Another good technique I use occasionally is to determine whether anyone else has already Dockerized the thing I’m trying to, so I can simply learn directly from their Dockerfile which packages I’ll need to install.

For the specific case of PHP extensions, there’s almost always someone who’s already figured out what’s necessary for this or that module, and all I have to do is some light detective work to find them.

Anyways, that’s my method! Hope it’s helpful, and happy hunting!

26 December, 2017 07:00AM by Tianon Gravi ([email protected])

December 25, 2017

hackergotchi for Christoph Berg

Christoph Berg

Salsa batch import

Now that Salsa is in beta, it's time to import projects (= GitLab speak for "repository"). This is probably best done automated. Head to Access Tokens and generate a token with "api" scope, which you can then use with curl:

$ cat salsa-import
#!/bin/sh

set -eux

PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_NAMESPACE="2" # 2 is "debian"
SALSA_TOKEN="yourcryptictokenhere"

curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
  --data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"

This will create the GitLab project in the chosen namespace, and import the repository from Alioth.

To get the namespace id, use something like:

curl https://salsa.debian.org/api/v4/groups | jq . | less
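
For example, to list only the group ids and paths from that output (the API returns a JSON array of group objects):

curl -s https://salsa.debian.org/api/v4/groups | jq '.[] | {id, path}'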

Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:

for f in *.git; do sh salsa-import $f; done

25 December, 2017 03:43PM

hackergotchi for Clint Adams

Clint Adams

Fewer than 450 to go

«Chäs us Rohmilch us de Region»

„kein Schweizerdeutsch“

«Englisch?»

„oder Hochdeutsch“

«Chäs us em Chloster vo Einsiedeln»

„kein Schwyzerdütsch“

«Englisch?»

„oder Hochdeutsch“

«Chääs?»

„kein Schwyzertüütsch“

«Englisch?»

„oder Hochdeutsch“

«ʕ •ᴥ•ʔ /ᐠ。ꞈ。ᐟ ▼・ᴥ・▼»

“Hold on, talk to the Portuguese guy.”

«Schwitzertitsch?»

「Chäs」

«ᕕ( ᐛ )ᕗ ヽ༼ຈل͜ຈ༽ノ (◕‿◕✿)»

「゚・:。(ꈍᴗꈍ)ε`)~。*:・゚ 」

«/╲/( ͡° ͡° ͜ʖ ͡° ͡°)/╱»

“What happened?”

「Oh, she just wanted to know where to get Thai food.」

Posted on 2017-12-25
Tags: mintings

25 December, 2017 03:13PM

hackergotchi for Alexander Wirt

Alexander Wirt

salsa.debian.org (git.debian.org replacement) going into beta

Since summer we have worked on our git.debian.org replacement based on GitLab. I am really happy to say that we are launching the beta of our service today. Please keep in mind that it is a beta, we don’t expect any database resets, but under unexpected circumstances it might still happen.

The new service is available at https://salsa.debian.org. Every active Debian Developer already has an account. Please request a password reset via https://salsa.debian.org/users/sign_in – your login is either your Debian login or Debian e-mail address.

Guest users

External users are invited to create an account on salsa. To avoid clashes with future Debian Developers, we are enforcing a ‘-guest’ suffix for any guest username. Therefore we developed a self-service portal which allows non-Debian Developers to sign up, available at https://signup.salsa.debian.org. Please keep in mind that your username will have ‘-guest’ appended.

Project creation

Every user can create projects in their own namespace (similar to GitHub).

Teams

For larger projects you can also create a group to host your projects. To avoid clashes with usernames (that share the same namespace as groups) we are requiring groups to have a ‘-team’ suffix to their name. Groups can be created using the same self-service portal https://signup.salsa.debian.org. For larger, already-established teams it is also possible to ask us to create the group with a name not conforming to the normal team namespace. Examples are teams like debian-qa. Please create an issue in the support project.

Collab-maint

If you want to allow other Debian Developers to work on your packages or software, you can create projects within the Debian group. Every Debian Developer has write access to projects created in this group. If you create a project within the Debian group, you are implicitly welcoming all DDs to contribute directly to the project.

Guest users can only be added to individual projects within the Debian group, but not to the entire Debian group. This is different from the policy for the collab-maint group on Alioth.

GitLab runners

We won’t provide any shared Gitlab runners for now. If you want to sponsor resources for such runners please contact us.

Gitlab pages

We will support Gitlab pages in the future, but more work is needed first. We will post an update when they are ready.

Migration of repositories

We don’t plan to do any automatic migration of alioth repositories. If you use a repository and think it is important (!), migrate it on your own. We will provide a read-only export of all repositories that weren’t exported after disabling alioth.

Timeline

We want to run this beta at least for four weeks. If everything goes well we intend to leave beta around the end of January.

Documentation

Documentation of the service will happen in the Debian Wiki. Please feel free to enhance the documentation. See also the upstream GitLab docs.

Getting help

If you have problems with the service you can reach us:

Don’t expect us to be responsive during the holidays, so be patient :).

Request for help

If you want to take part in salsa administration please get in touch with us. We want to have at least two more administrators for the Gitlab instance.

25 December, 2017 11:00AM

December 24, 2017

Louis-Philippe Véronneau

Holiday Beer Recipe - Le Courant Noir

It's holiday season once again, and while I'm waiting for the desserts I made for my family's Christmas party to finish cooking (I highly recommend Bon Appétit's Brûléed Bourbon-Maple Pumpkin Pie), I opened one of the beers I brewed recently.

And oh boy, what a success.

I've been brewing beer with 2 other friends for a few years now, and while we've brewed some excellent stuff in the past, I feel Le Courant Noir1 - a blackcurrant witbier-inspired ale - is my most resounding achievement.

This was my first time brewing with fresh fruits, and I'm very happy with the results. The beer has a very pleasant, sharp nose of blackcurrants and esters. To the taste, the blackcurrant comes through, but is counterbalanced by the malt and pretty high alcohol content (~8% ABV). The result is a tart, ever so slightly acidic fruity beer. I love it.

A glass of Courant Noir

So yeah, I thought I'd share the recipe in case you want to try replicating it. Try to get fresh blackcurrant, as what you are looking for is the tart taste of the blackcurrant. Using syrup, you're bound to get some jelly-like aftertaste.

Recipe

The target boil volume is 25L and the target batch size 20L. I'm mashing with a pretty low efficiency (70%), so if you use a proper mash tun, you might want to use a little less grain.

Mash at 67°C and ferment at 19°C. Add the blackcurrants whole once the primary fermentation is over.

Malt:

  • 2.8 kg x 2 row Pale Malt
  • 2.8 kg x White Wheat Malt
  • 1.0 kg x Munich Malt

Hops:

  • 35 g x Saaz (4.4% alpha acid) - 60 min Boil
  • 25 g x Saaz (4.4% alpha acid) - 30 min Boil
  • 15 g x Saaz (4.4% alpha acid) - Dry Hop

Yeast:

  • White Labs Belgian Witbier Ale Yeast - WLP400

Other:

  • 25 g x Coriander Seeds (crushed) - 10 min Boil
  • 1.7 kg x Whole Blackcurrant

Pie

Here's a bonus picture of the pie I referenced earlier.

Pumpkin pie in the oven in a cast iron pan


1 - Amongst other things, "courant noir" is the French word-for-word translation for blackcurrant. It's also a very bad translation pun Ⓐ ⚑.

24 December, 2017 05:00AM by Louis-Philippe Véronneau

December 23, 2017

hackergotchi for Vasudev Kamath

Vasudev Kamath

My personal Email setup - Notmuch, mbsync, postfix and dovecot

I've been using a personal email setup for quite a long time but have never documented it anywhere. Recently, when I changed my laptop (a post about that is pending), I got lost trying to recreate my local mail setup. So this post is self-documentation, so that I don't have to struggle to get it right again.

Server Side

I run my own mail server, using Postfix as the SMTP server and Dovecot for IMAP. I'm not going into detail about setting those up, as my setup was mostly done using scripts created by Jonas for the Redpill infrastructure. What is Redpill? In Jonas's own words:

<jonas> Redpill is a concept - a way to setup Debian hosts to collaborate across organisations
<jonas> I develop the concept, and use it for the first ever Redpill network-of-networks redpill.dk, involving my own network (jones.dk), my main client's network (homebase.dk), a network in Germany including Skolelinux Germany (free-owl.de), and Vasudev's network (copyninja.info)

Along with that I have Dovecot sieve filtering to do a high-level classification of mails into various folders depending on where they originate from. All the rules live in the ~/dovecot.sieve file of every account that has a mail address.

Again, I'm not going into detail of how to set these things up, as that is not the goal of this post.

On my Laptop

On my laptop my setup has the following four parts:

  1. Mail syncing: done using the mbsync command
  2. Classification: done using notmuch
  3. Reading: done using notmuch-emacs
  4. Mail sending: done using postfix, running as an SMTP client with my own server as relayhost

Mail Syncing

Mail syncing is done using the mbsync tool. I was previously a user of offlineimap and recently switched to mbsync, as I find it lighter and simpler to configure than offlineimap. The mbsync command is provided by the isync package.

The configuration file is ~/.mbsyncrc. Below is mine, with some private things redacted.

IMAPAccount  copyninja
Host imap.copyninja.info
User vasudev
PassCmd      "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg"
SSLType IMAPS
SSLVersion TLSv1.2
CertificateFile /etc/ssl/certs/ca-certificates.crt


IMAPAccount gmail-kamathvasudev
Host imap.gmail.com
User [email protected]
PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg"
SSLType IMAPS
SSLVersion TLSv1.2
CertificateFile /etc/ssl/certs/ca-certificates.crt

IMAPStore copyninja-remote
Account copyninja

IMAPStore gmail-kamathvasudev-remote
Account gmail-kamathvasudev

MaildirStore copyninja-local
Path ~/Mail/vasudev-copyninja.info/
Inbox ~/Mail/vasudev-copyninja.info/INBOX

MaildirStore gmail-kamathvasudev-local
Path ~/Mail/Gmail-1/
Inbox ~/Mail/Gmail-1/INBOX

Channel copyninja
Master :copyninja-remote:
Slave :copyninja-local:
Patterns *
Create Both
SyncState *
Sync All

Channel gmail-kamathvasudev
Master :gmail-kamathvasudev-remote:
Slave :gmail-kamathvasudev-local:
# Exclude everything under the internal [Gmail] folder, except the interesting folders
Patterns * ![Gmail]*
Create Both
SyncState *
Sync All

Some explanation of the interesting parts of the above configuration. One is PassCmd, which allows you to provide a shell command to obtain the password for the account. This avoids putting the password into the configuration file in plain text. I'm using symmetric encryption with gpg and storing the encrypted password somewhere on my disk, which is of course only safeguarded by Unix permissions.

I actually wanted to encrypt the file with my public key, but unlocking it when the script runs in the background or via systemd looked difficult (nearly impossible, in fact). If you have a better suggestion, I'm all ears :-).
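For reference, here is a minimal sketch of how such a symmetrically encrypted password file could be created. The file names match the PassCmd shown above; adjust them to your own setup, and note that gpg will prompt for the passphrase, which is what then goes into the passphrase file:

# write the IMAP password to a temporary file, encrypt it symmetrically,
# and remove the plain-text copy afterwards
echo -n 'my-imap-password' > /tmp/mailpass
gpg --symmetric --cipher-algo AES256 --output ~/path/to/mailpass.gpg /tmp/mailpass
rm /tmp/mailpass
# the passphrase file used by PassCmd should be readable only by you
chmod 600 ~/path/to/passphrase.txt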

The next interesting instruction is Patterns. This allows you to selectively sync mail from your mail server. It was really helpful for excluding all the crappy [Gmail]/ folders.

Mail Classification

Once the mail is available locally on your device, we need a way to read it easily in a mail reader. My original setup served the synced Maildir via a local Dovecot instance and read it in Gnus. That setup was a bit of overkill, with all the server software involved, but since Gnus does not cope well with the Maildir format it was the best way to do it. It also has a disadvantage: quickly searching for a mail when you have a huge pile to go through. This is where notmuch comes into the picture.

notmuch allows me to index gigabytes of mail archives and find what I need very easily. I've created a small script which combines running mbsync and notmuch. I tag mails based on the Maildirs, which are created on the server side by the Dovecot sieve rules. Below is the full shell script which takes care of syncing, classification and deleting spam.

#!/bin/sh

MBSYNC=$(pgrep mbsync)
NOTMUCH=$(pgrep notmuch)

if [ -n "$MBSYNC" -o -n "$NOTMUCH" ]; then
   echo "Already running one instance of mail-sync. Exiting..."
         exit 0
fi

echo "Deleting messages tagged as *deleted*"
notmuch search --format=text0 --output=files tag:deleted |xargs -0 --no-run-if-empty rm -v

echo "Moving spam to Spam folder"
notmuch search --format=text0 --output=files tag:Spam and \
  to:[email protected] | \
    xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur
notmuch search --format=text0 --output=files tag:Spam and \
  to:[email protected] | \
     xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur


MDIR="vasudev-copyninja.info vasudev-debian Gmail-1"
mbsync -Va
notmuch new

for mdir in $MDIR; do
    echo "Processing $mdir"
    for fdir in $(ls -d /home/vasudev/Mail/$mdir/*); do
      if [ $(basename $fdir) != "INBOX" ]; then
          echo "Tagging for $(basename $fdir)"
          notmuch tag +$(basename $fdir) -inbox -- folder:$mdir/$(basename $fdir)
      fi
    done
done

So before running mbsync I search for all mails tagged as deleted and delete them from the system. Next I look for mails tagged as Spam on both of my accounts and move them to the Spam folder. Yes, you got it right: these are mails that escaped the spam filter, landed in my inbox, and were personally marked as Spam by me.

After running mbsync I tag mails based on their folder (using the folder: search term). This allows me to easily get the contents of, say, a mailing list without remembering the list address, as shown below.
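For illustration, listing a list's mail then becomes a simple tag query (the tag name here is hypothetical and depends on the Maildir names on your server):

# show all mail carrying the tag of a mailing-list folder
notmuch search tag:debian-devel
# combine with other terms, e.g. only unread mail on that list
notmuch search tag:debian-devel and tag:unread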

Reading Mails

Now that we have synced and classified the mail, it's time to set up the reading part. I use the notmuch-emacs interface to read mails. I use the Spacemacs flavour of Emacs, so I took some time to write a private layer which brings together all my keybindings and classification in one place and does not clutter my entire .spacemacs file. You can find the code for my private layer in the notmuch-emacs-layer repository.

Sending Mails

Well, being able to read mail is not sufficient; we also need to be able to reply. And this was the slightly tricky part where I recently got lost and which made me write this post, so that I don't forget it again (and of course don't have to refer to some outdated posts on the web).

My setup for sending mail uses Postfix as an SMTP client, with my own SMTP server as its relayhost. The problem with relaying is that it is not meant for hosts with dynamic IPs. There are a couple of ways to allow hosts with dynamic IPs to use a relay server: one is to put the IP address the mail will originate from into mynetworks, the other is to use SASL authentication.

My preferred way is SASL authentication. For this I first had to create a separate account, one for each machine which is going to relay mail through my main server. The idea is to not use my primary account for SASL authentication. (Originally I was using my primary account, but Jonas gave me the idea of one account per road runner, i.e. per roaming machine.)

adduser <hostname>_relay

Here, replace <hostname> with the name of your laptop/desktop or whatever you are using. Now we need to adjust the Postfix on that machine to relay through our server, so add the following lines to its configuration:

# SASL authentication
smtp_sasl_auth_enable = yes
smtp_tls_security_level = encrypt
smtp_sasl_tls_security_options = noanonymous
relayhost = [smtp.copyninja.info]:submission
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

Here relayhost is the server which your Postfix instance will use to relay mail onwards to the internet. The :submission part tells Postfix to connect to the (secure) submission port 587. smtp_sasl_tls_security_options is set to disallow anonymous connections. This is a must, so that the relay server trusts your mobile host and agrees to forward mail for you.

/etc/postfix/sasl_passwd is the file where you store the credentials of the account used for SASL authentication with the server. Put the following content into it:

[smtp.example.com]:submission    user:password

Replace smtp.example.com with the SMTP server name you put into the relayhost setting, and replace user and password with the <hostname>_relay user you created and its password.

To secure the sasl_passwd file and create the hashed map Postfix actually reads, use the following commands:

chown root:root /etc/postfix/sasl_passwd
chmod 0600 /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd

The last command creates /etc/postfix/sasl_passwd.db, the hashed version of /etc/postfix/sasl_passwd, with the same owner and permissions. Now reload Postfix and check whether mail makes it out of your system using the mail command.
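For example, something along these lines should do, assuming the mail command from bsd-mailx or mailutils is installed (the recipient address is of course just a placeholder):

sudo systemctl reload postfix
echo "relay test" | mail -s "relay test" someone@example.com
# watch the mail log to see the relay in action
sudo tail -f /var/log/mail.log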

Bonus Part

Since I already have the script above bringing together mail syncing and classification, I went ahead and created a systemd timer to periodically sync mail in the background, in my case every 10 minutes. Below is the mailsync.timer file.

[Unit]
Description=Check Mail Every 10 minutes
RefuseManualStart=no
RefuseManualStop=no

[Timer]
Persistent=false
OnBootSec=5min
OnUnitActiveSec=10min
Unit=mailsync.service

[Install]
WantedBy=default.target

Below is mailsync.service, which mailsync.timer needs in order to execute our script.

[Unit]
Description=Check Mail
RefuseManualStart=no
RefuseManualStop=yes

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mail-sync
StandardOutput=syslog
StandardError=syslog

Put these files under /etc/systemd/user and run the commands below to enable them:

systemctl enable --user mailsync.timer
systemctl enable --user mailsync.service
systemctl start --user mailsync.timer
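To check that the timer actually fires, the usual systemd tooling can be used, for example:

systemctl --user list-timers mailsync.timer
journalctl --user -u mailsync.service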

So that's how I sync and send mail from my system. I came to know about afew from Jonas Smedegaard, who also proofread this post. As a next step I will try to improve my notmuch configuration using afew, and of course a post will follow after that :-).

23 December, 2017 11:13AM by copyninja

December 22, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#14: Finding Binary .deb Files for CRAN Packages

Welcome to the fourteenth post in the rationally rambling R rants series, or R4 for short. The last two posts were concerned with faster installation. First, we showed how ccache can speed up (re-)installation. This was followed by a second post on faster installation via binaries.

This last post immediately sparked some follow-up. Replying to my tweet about it, David Smith wondered how to combine binary and source installation (tl;dr: it is hard as you need to combine two package managers). Just this week, Max Ogden wondered how to install CRAN packages as binaries on Linux, and Daniel Nuest poked me on GitHub as part of his excellent containerit project as installation of binaries would of course also make Docker container builds much faster. (tl;dr: Oh yes, see below!)

So can one? Sure. We have a tool. But first the basics.

The Basics

Packages for a particular distribution are indexed by a packages file for that distribution. This is not unlike CRAN using top-level PACKAGES* files. So in principle you could just fetch those packages files, parse and index them, and then search them. In practice that is a lot of work as Debian and Ubuntu now have several tens of thousands of packages.

So it is better to use the distro tool. In my use case on .deb-based distros, this is apt-cache. Here is a quick example for the (Ubuntu 17.04) server on which I type this:

$ sudo apt-get update -qq            ## suppress stdout display
$ apt-cache search r-cran- | wc -l
419
$

So a very vanilla Ubuntu installation has "merely" 400+ binary CRAN packages. Nothing to write home about (yet) -- but read on.

cran2deb4ubuntu, or c2d4u for short

A decade ago, I was involved in two projects to turn all of CRAN into .deb binaries. We had a first ad-hoc predecessor project, and then a (much better) 'version 2' thanks to the excellent Google Summer of Code work by Charles Blundell (mentored by me). I ran with that for a while and carried at the peak about 2500 binaries or so. And then my controlling db died, just as I visited CRAN to show it off. Very sad. Don Armstrong ran with the code and rebuilt it on better foundations and had for quite some time all of CRAN and BioC built (peaking at maybe 7k packages). Then his RAID died. The surviving effort is the one by Michael Rutter, who always leaned on the Launchpad PPA system to build his packages. And those still exist and provide a core of over 10k packages (but spread across different Ubuntu flavours, see below).

Using cran2deb4ubuntu

In order to access c2d4u you need an Ubuntu system. For example my Travis runner script does

# Add marutter's c2d4u repository, (and rrutter for CRAN builds too)
sudo add-apt-repository -y "ppa:marutter/rrutter"
sudo add-apt-repository -y "ppa:marutter/c2d4u"

After that one can query apt-cache as above, but take advantage of a much larger pool with over 3500 packages (see below). The add-apt-repository command does the Right Thing (TM) in terms of both getting the archive key, and adding the apt source entry to the config directory.

How about from R? Sure, via RcppAPT

Now, all this command-line business is nice. But can we do all this programmatically from R? Sort of.

The RcppAPT package interfaces with the libapt library and provides access to a few of its functions. I used this feature when I argued (unsuccessfully, as it turned out) for a particular issue concerning Debian and R upgrades. But that is water under the bridge now, and the main point is that "yes we can".

In Docker: r-apt

Within the Rocker Project we built on top of RcppAPT by providing a particular class of containers for different Ubuntu releases which all contain i) RcppAPT and ii) the required apt source entry for Michael's repos.

So now we can do this

$ docker run --rm -ti rocker/r-apt:xenial /bin/bash -c 'apt-get update -qq; apt-cache search r-cran- | wc -l'
3525
$

This fires up the corresponding Docker container for the xenial (ie 16.04 LTS) release, updates the apt indices and then searches for r-cran-* packages. And it seems we have a little over 3500 packages. Not bad at all (especially once you realize that this skews strongly towards the more popular packages).

Example: An rstan container

A little while ago a seemingly very frustrated user came to Carl and myself and claimed that our Rocker Project sucketh because building rstan was all but impossible. I don't have the time, space or inclination to go into details, but he was just plain wrong. You do need to know a little about C++, package building, and more to do this from scratch. Plus, there was a long-standing issue with rstan and newer Boost (which also included several workarounds).

Be that as it may, it serves as a nice example here. So the first question: is rstan packaged?

$ docker run --rm -ti rocker/r-apt:xenial /bin/bash -c 'apt-get update -qq; apt-cache show r-cran-rstan'
Package: r-cran-rstan
Source: rstan
Priority: optional
Section: gnu-r
Installed-Size: 5110
Maintainer: cran2deb4ubuntu <[email protected]>
Architecture: amd64
Version: 2.16.2-1cran1ppa0
Depends: pandoc, r-base-core, r-cran-ggplot2, r-cran-stanheaders, r-cran-inline, r-cran-gridextra, r-cran-rcpp,\
   r-cran-rcppeigen, r-cran-bh, libc6 (>= 2.14), libgcc1 (>= 1:4.0), libstdc++6 (>= 5.2)
Filename: pool/main/r/rstan/r-cran-rstan_2.16.2-1cran1ppa0_amd64.deb
Size: 1481562
MD5sum: 60fe7cfc3e8813a822e477df24b37ccf
SHA1: 75bbab1a4193a5731ed105842725768587b4ec22
SHA256: 08816ea0e62b93511a43850c315880628419f2b817a83f92d8a28f5beb871fe2
Description: GNU R package "R Interface to Stan"
Description-md5: c9fc74a96bfde57f97f9d7c16a218fe5

$

It would seem so. With that, the following very minimal Dockerfile is all we need:

## Emacs, make this -*- mode: sh; -*-

## Start from xenial
FROM rocker/r-apt:xenial

## This handle reaches Carl and Dirk
MAINTAINER "Carl Boettiger and Dirk Eddelbuettel" [email protected]

## Update and install rstan
RUN apt-get update && apt-get install -y --no-install-recommends r-cran-rstan

## Make R the default
CMD ["R"]

In essence, it executes one command: install rstan, from binary, taking care of all dependencies. And lo and behold, it works as advertised:

$ docker run --rm -ti rocker/rstan:local Rscript -e 'library(rstan)'
Loading required package: ggplot2
Loading required package: StanHeaders
rstan (Version 2.16.2, packaged: 2017-07-03 09:24:58 UTC, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
$
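For completeness, the rocker/rstan:local image used above would first be built from that Dockerfile in the usual way, for example (the tag is simply what was chosen for this local build):

$ docker build -t rocker/rstan:local .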

So there: installing from binary works, takes care of dependencies, is easy and as an added bonus even faster. What's not to like?

(And yes, a few of us are working on a system to have more packages available as binaries, but it may take another moment...)

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 December, 2017 10:34PM

hackergotchi for Ben Hutchings

Ben Hutchings

BPF security issues in Debian

Since Debian 9 "stretch", we've shipped a Linux kernel supporting the "enhanced BPF" feature which allows unprivileged user space to upload code into the kernel. This code is written in a restricted language, but one that's much richer than the older "classic" BPF. The kernel verifies that the code is safe (doesn't loop, only accesses memory it is supposed to, etc.) before running it. However, this means that bugs in the verifier could allow unsafe programs to compromise the kernel's security.

Unfortunately, Jann Horn and others recently found many such bugs in Linux 4.14, and some of them affect older versions too. As a mitigation, consider setting the sysctl kernel.unprivileged_bpf_disabled=1. Updated packages will be available shortly.

Update: There is a public exploit that uses several of these bugs to get root privileges. It doesn't work as-is on stretch with the Linux 4.9 kernel, but is easy to adapt. I recommend applying the above mitigation as soon as possible to all systems running Linux 4.4 or later.
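For reference, here is a sketch of how that mitigation can be applied immediately and made persistent across reboots; the file name under /etc/sysctl.d/ is just a suggestion (run as root):

# apply immediately
sysctl kernel.unprivileged_bpf_disabled=1
# make it persistent across reboots
echo 'kernel.unprivileged_bpf_disabled=1' > /etc/sysctl.d/90-disable-unprivileged-bpf.conf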

22 December, 2017 09:20PM

hackergotchi for Shirish Agarwal

Shirish Agarwal

24×7 shopping in Maharashtra, Learning and Economics

Dear Friends,

My broadband connection (ADSL) from BSNL was down for a month and a bit more, hence I couldn't post any blogs. On account of road work there had been digging, and numerous incidents of thick copper cables being stolen; they can be resold or even melted down to extract the copper. Prices of optical fibre for communication have dropped tremendously; the only expensive and tricky part is splicing and terminating the strands. There is a lobby which has the clout and incentives to continue with this outdated and outmoded technology, which is why it continues, although this takes the discussion away from the main topic.

I could have cheated and made a blog post in bits and pieces, something I hope to do this weekend, but there has been some encouraging news and views which prompted me to write this post –

Mumbai 24/7: Shop, dine and play all night long in the city from today
and
Hotels And Restaurants In Maharashtra To Remain Open 24×7

One of my motives, apart from being part of DebConf itself, which is a valuable incentive to learn new things, is to see Taiwan's 24×7 shopping; it is the night-market bit that the Taiwan team has shared, something I looked up and got a bit hooked on when I saw what it's all about.

Of course, if things go my way I would probably have to do a bit more research than what I have shared above.

The real meat (figure of speech) of today’s announcements was a discussion on CNBC Awaaz

I posted the YouTube link of the discussion above – it is in Hindi. While the crux of the discussion was about Mumbai (I live in Pune, a neighbouring city about 250-300 km away), the implications are for all places which have restaurants and small kirana stores, which have been facing a lot of competition from e-tailers. One of the things being envisaged are places to eat and shop at unearthly hours at discounted rates, which will draw people interested in such products and services. A lot of retail services which depend on such services are also reckoned to grow, bringing more stability and multiplier effects to the Indian economy. Maharashtra (one of the 29 states/provinces in India) has been a big contributor to the Indian economy over the decades but hasn't had its share of investment vis-à-vis what it gives to the national exchequer in terms of various fees and taxes. There are figures and beliefs which support the argument; I haven't shared them as the blog post would balloon up without adding anything, but if needed I can still share them in the comments.

It was also suggested that it would increase tourism, but that got mixed reviews – it might not, unless and until liquor timings and licenses are loosened up a bit.

I would contend, though, that there should be a substantial increase in, and flexibility for, domestic tourism and businesses, as people would be able to make plans amicable to both parties (a giver and a receiver).

To talk specifically about Mumbai, Marine Drive, the Grandstand, as well as some other places in Colaba and elsewhere have long been open all night. But with this shift of policy, the civic infrastructure, which is already in deficit, would come under more pressure, while law and order would need to be beefed up and adequately trained, both of which are under strain as well. It was also shared that this policy would end the lower-rank corruption by police officials who used to ask for protection money if shops were even a little late in closing up.

There is also a possibility of traffic congestion at night, but that may translate into a bit less congestion in the daytime. Again, all of this is imagination and conjecture at this point in time. People like me, who can't stand Mumbai's humidity in the daytime, would find a bit more excuse to be there at night if more budget restaurants were open late.

Also, it is not a blanket thing for everybody; there are restrictions on shops in residential areas, which might be expected to be relaxed a bit once things happen in the open. One well-known name which cropped up was the ubiquitous 7-Eleven stores. There would certainly be a lot of interest if such convenience stores opened up all across the city/state.

I am excited to see if this happens.

Although, in a sense, this happened in India years ago – it was just 'illegal' then and is 'legal' now. When I was in college, around 1993–1994 (as I had also shared at DebConf in 2016), the net/web had just started in India and I was lucky to be able to see/view the net using a service called NIIT Computerdrome.

Just to be explicit, NIIT is and was a premium offering for students who wanted to learn about programming and various MS-Windows technologies, as there were already signs that IT (Information Technology) would be a disruptive force. Nowadays they have also moved into management and administration courses, as the local IT industry has yet to grow up and lots of H1B visas are under the scanner.

It was a fancy name for what is now known as a cyber-cafe, but we used to get net access at discounted rates. This place was about 4-5 km from my home, so I had to be really careful in planning, as I had to buy coupons which had an expiry date and everything.

A couple of years later, I came to know of a service much closer to home, in the basement of a place called Sagar Arcade. Those of us who were addicted to web access – whether for porn, net technologies, net gaming, IRC or video chatting – all used to throng there. At that time, NIIT had an 8 Mbps leased line, which was a big deal and still is.

While wholesale bulk bandwidth rates have hit rock bottom, last-mile connectivity still seems to be an issue. Because of Reliance Jio's aggressive pitches some of the retail bandwidth rates have softened, but there are still miles to go before I could say we have adequate bandwidth. Dropped calls (on mobile and landline) are still an issue, while bandwidth tapering off every now and then seems to be endemic behaviour in both public and private sector ISPs (Internet Service Providers), most of which are Tier-3 ISPs. The only Tier-1 Indian ISP I know of is Tata's; see this FAQ as well –

World’s largest wholly owned submarine fibre network – more than 500,000 km of subsea fibre, and more than 210,000 km of terrestrial fibre
Only Tier-1 provider that is in the top five in five continents – by internet routes
Over 24% of the world’s internet routes are on Tata Communications’ network
400+ PoPs reach more than 240 countries and territories
44 data centres and co-location centres with over one million sq. ft. of space
7600 petabytes of internet traffic travels over the Tata Communications’ internet backbone each month
15+ terabits/s of international bandwidth lit capacity

– From Tata Communications FAQ .

but I came to know that they are merging their end-user business with Airtel (another Tier-3 ISP), while their undersea fibre optic cable business (see above) will remain with them – but this again is taking us outside the current topic.

Back to topic on hand –

I am guessing that there was practically no work being done after hours, so NIIT might have in turn leased some of the capacity to the cyber-cafe.

The cyber-cafe owner had two rates: normal rates, comparable to any other cyber-cafe, and night rates ('happy hours', from 2300 to 0500 hours) which were half or one-fourth of that. In order to indulge our net curiosity/net addiction, a few of my friends and I used to go there. A few days or a couple of weeks later, a Chinese takeaway and then a juice/tea/coffee shop came up to serve the cyber-cafe customers.

This whole setup was illegal, as according to the laws of the time no commercial establishment was allowed to remain open 24×7 (the only exceptions being railway stations, police stations, hospitals, some specific petrol pumps and medicine shops). But even in the case of medicine shops and petrol pumps there were very few who had got permission (looking back, there may have been a combination of business/political patronage to it which was not apparent to me as a teenager). I also came to know much, much later that what we were doing was illegal, as in using a commercial establishment after hours, even though it was in connivance with the owner. See the Bombay Shop Act, 1948.

Comically, the Bombay Shop Act, which has now been superseded by the Maharashtra Shops and Establishments Act 2017, was never in the syllabus for Commerce students, even when we were graduating with Business Administration as one of the optional subjects. The Act and surrounding topics should have been in the books, with creative discussions and consultations with students taken up. This was in 1994, a full 46 years after the Act came into being.

But as has been shared on this blog before, this is a dream which, it seems, shall not be realized, at least in the immediate future.

While reading today's newspaper I came across this editorial, which also opens a window onto how the elitist institutions have shrunk from their collective responsibility. While it only talks about the social sciences, there is also another article, for students of the UPSC Mains, which was shared by a student friend of mine. It took me back to the term 'Dismal Science', which I came across and whose implications I understood years ago.

While it is too early to state or predict whether it will change things in Pune and Maharashtra as a whole, I am hopeful, as it would generate both direct and indirect employment. After years of jobless inflationary growth it would be a welcome departure, especially as youngsters without adequate job skills are joining the unemployed in their millions.


Filed under: Miscellenous Tagged: # Mumbai 24x7, #Business in India, #Copper Cables, #Economic Theory 19th century, #Maharashtra Shops and Establishments Act 2017, #Optical Fiber Prices in India, #planet-debian, #Social Sciences, #Tier 1 ISP, #Tier 3 ISP's, Broadband

22 December, 2017 03:18PM by shirishag75

hackergotchi for Olivier Berger

Olivier Berger

Safely testing my students’ PHP graded labs with docker containers

During the course of Web architecture and applications, our students had to deliver a Silex / Symfony Web app project which I’m grading.

I had initially hacked a Docker container to be able to test that the course’s lab examples and code bases provided would be compatible with PHP 5 even though the nominal environment provided in the lab rooms was PHP 7. As I’m running a recent Debian distro with PHP 7 as the default PHP installation, being able to run PHP 5 in a container is quite handy for me. Yes, PHP 5 is dead, but some students might still have remaining installs of old Ubuntus where PHP5 was the norm. As the course was based on Symfony and Silex and these would run as well on PHP 5 or 7 (provided we configured the right stuff in the composer.json), this was supposed to be perfect.

I've used such a container a lot for preparing the labs and it served me well. Most of the time I've used it to start the PHP command line interpreter from the current dir and run the embedded Web server with "php -S", which is the standard way to run programs in dev/test environments with Silex or Symfony (Symfony requires something more like "php -S localhost:8000 -t web/").
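For illustration, running the embedded web server from the current directory through such a container could look roughly like this; the image name matches the local-php5-sqlite-debian repository mentioned at the end of this post, and the exact invocation may differ from what I actually use:

docker run --rm -it -v "$PWD":/app -w /app -p 8000:8000 \
    local-php5-sqlite-debian php -S 0.0.0.0:8000 -t web/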

I later discovered an additional benefit of using such a container, when the time comes to grade the work that our students have submitted and I need to test their code. Of course, it ensures that I may run it even if they used PHP 5 while I rely on PHP 7 on my machine. But it also ensures that I'm only at risk of trashing stuff in the current directory if sh*t happens. Of course, no student would dare deliver malicious PHP code trying to mess with my files… but better safe than sorry. If the contents of the container get trashed, I'm rather on the safe side.

Of course one may give a grade only by reading the students' code and not testing it, but that would be bad taste. And yes, there are probably ways to escape the container safety net in PHP… but I should maybe not tempt the smartest of my students into continuing down this path 😉

If you feel like testing the container, I’ve uploaded the necessary bits to a public repo : https://gitlab.com/olberger/local-php5-sqlite-debian.

22 December, 2017 11:34AM by Olivier Berger

hackergotchi for Gustavo Noronha Silva

Gustavo Noronha Silva

CEF on Wayland

TL;DR: we have patches for CEF to enable its usage on Wayland and X11 through the Mus/Ozone infrastructure that is to become Chromium’s streamlined future. And also for Content Shell!

At Collabora we recently assisted a customer who wanted to upgrade their system from X11 to Wayland. The problem: they use CEF as a runtime for web applications and CEF was not Wayland-ready. They also wanted to have something which was as future-proof and as upstreamable as possible, so the Chromium team’s plans were quite relevant.

Chromium is at the same time very modular and quite monolithic. It supports several platforms and has slightly different code paths in each, while at the same time acting as a desktop shell for Chromium OS. To make it even more complex, the Chromium team is constantly rewriting bits or doing major refactorings.

That means you’ll often find several different and incompatible ways of doing something in the code base. You will usually not find clear and stable interfaces, which is where tools like CEF come in, to provide some stability to users of the framework. CEF neutralizes some of the instability, providing a more stable API.

So we started by looking at 1) where is Chromium headed and 2) what kind of integration CEF needed with Chromium’s guts to work with Wayland? We quickly found that the Chromium team is trying to streamline some of the infrastructure so that it can be better shared among the several use cases, reducing duplication and complexity.

That’s where the mus+ash (pronounced “mustache”) project comes in. It wants to make a better split of the window management and shell functionalities of Chrome OS from the browser while at the same time replacing obsolete IPC systems with Mojo. That should allow a lot more code sharing with the “Linux Desktop” version. It also meant that we needed to get CEF to talk Mus.

Chromium already has Wayland support that was built by Intel a while ago for the Ozone display platform abstraction layer. More recently, the ozone-wayland-dev branch was started by our friends at Igalia to integrate that work with mus+ash, implementing the necessary Mus and Mojo interfaces, window decorations, menus and so on. That looked like the right base to use for our CEF changes.

It took quite a bit of work and several Collaborans participated in the effort, but we eventually managed to convince CEF to properly start the necessary processes and set them up for running with Mus and Ozone. Then we moved on to making the use cases our customer cared about stable and to porting their internal runtime code.

We contributed touch support for the Wayland Ozone backend, which we are in the process of upstreaming, reported a few bugs on the Mus/Ozone integration, and did some debugging for others, which we still need to figure out better fixes for.

For instance, the way Wayland fd polling works does not integrate nicely with the Chromium run loop, since there needs to be some locking involved. If you don’t lock/unlock the display for polling, you may end up in a situation in which you’re told there is something to read and before you actually do the read the GL stack may do it in another thread, causing your blocking read to hang forever (or until there is something to read, like a mouse move). As a work-around, we avoided the Chromium run loop entirely for Wayland polling.

More recently, we have started working on an internal project for adding Mus/Ozone support to Content Shell, which is a test shell simpler than Chromium the browser. We think it will be useful as a test bed for future work that uses Mus/Ozone and the content API but not the browser UI, since it lives inside the Chromium code base. We are looking forward to upstreaming it soon!

PS: if you want to build it and try it out, here are some instructions:

# Check out Google build tools and put them on the path
$ git clone https://chromium.googlesource.com/a/chromium/tools/depot_tools.git
$ export PATH=$PATH:`pwd`/depot_tools

# Check out chromium; note the 'src' after the git command, it is important
$ mkdir chromium; cd chromium
$ git clone -b cef-wayland https://gitlab.collabora.com/web/chromium.git src
$ gclient sync  --jobs 16 --with_branch_heads

# To use CEF, download it and look at or use the script we put in the repository
$ cd src # cef goes inside the chromium source tree
$ git clone -b cef-wayland https://gitlab.collabora.com/web/cef.git
$ sh ./cef/build.sh # NOTE: you may need to edit this script to adapt to your directory structure
$ out/Release_GN_x64/cefsimple --mus --use-views

# To build Content Shell you do not need to download CEF, just switch to the branch and build
$ cd src
$ git checkout -b content_shell_mus_support origin/content_shell_mus_support
$ gn args out/Default --args="use_ozone=true enable_mus=true use_xkbcommon=true"
$ ninja -C out/Default content_shell
$ ./out/Default/content_shell --mus --ozone-platform=wayland

22 December, 2017 11:25AM by kov

hackergotchi for Michal &#268;iha&#345;

Michal Čihař

New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting request queue has grown too long, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

There are also some notable additions to existing projects:

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

22 December, 2017 11:00AM

December 21, 2017

Vincent Fourmond

Run QSoas completely non-interactively

QSoas can run scripts, and, since version 2.0, it can be run completely without user interaction from the command-line (though an interface may be briefly displayed). This possibility relies on the following command-line options:

  • --run, which runs the command given on the command-line;
  • --exit-after-running, which closes automatically QSoas after all the commands specified by --run were run;
  • --stdout (since version 2.1), which redirects QSoas's terminal directly to the shell output.
If you create a script.cmds file containing the following commands:
generate-buffer -10 10 sin(x)
save sin.dat
and run the following command from your favorite command-line interpreter:
~ QSoas --stdout --run '@ script.cmds' --exit-after-running
This will create a sin.dat file containing a sinusoid. However, if you run it twice, an Overwrite file 'sin.dat' ? dialog box will pop up. You can prevent that by adding the /overwrite=true option to save. As a general rule, you should avoid all commands that may ask questions in scripts; a /overwrite=true option is also available for save-buffers, for instance.

I use this possibility massively because I don't like to store processed files, I prefer to store the original data files and run a script to generate the processed data when I want to plot or to further process them. It can also be used to generate fitted data from saved parameters files. I use this to run automatic tests on Linux, Windows and Mac for every single build, in order to quickly spot platform-specific regressions.

To help you make use of this possibility, here is a shell function (Linux/Mac users only, add to your $HOME/.bashrc file or equivalent, and restart a terminal) to run directly on QSoas command files:

qs-run () {
        QSoas --stdout --run "@ $1" --exit-after-running
}
To run the script.cmds script above, just run
~ qs-run script.cmds

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1

21 December, 2017 01:51PM by Vincent Fourmond ([email protected])

hackergotchi for Sandro Knauß

Sandro Knauß

Kontact on Debian

When coding on Kontact you normally don't have to care much about dependencies between the different KDE Pim packages, because great tools are already available. kdesrc-build finds a solution to build all KDE Pim packages in the correct order. The KDE Pim docker image gives you an environment with all dependencies preinstalled, so you can start hacking on KDE Pim directly.

While hacking on master is nice, most users are not running master on their computers in daily life. To reach the users, distributions need to compile and ship KDE Pim. I am active within Debian and would like to make the newest version of KDE Pim available to Debian users. Because Qt deprecated Qt WebKit with Qt 5.5, KDE Pim had to switch from Qt WebKit to Qt WebEngine. Unfortunately Qt WebEngine wasn't available in Debian, so I had to package Qt WebEngine for Debian before packaging KDE Pim. Qt WebEngine itself is a beast to package. It was only possible to get Qt WebEngine into the last stable release, named "Stretch", in time with the help of other Debian Qt/KDE maintainers, especially Scarlett Clark, Dmitry Shachnev and Simon Quigley, and we could only upload it some hours before the deep freeze. So if you have asked yourself why Debian doesn't ship 16.08 in its last stable release, this is the answer: the missing dependency for KDE Pim named Qt WebEngine.
There is a second consequence of the switch: Kontact will only be available for those architectures that are supported by Qt WebEngine. Of 19 supported architectures for 16.04, we can only support five architectures in future.

Now that Debian has woken up again from its slumber, we first had to update Qt and KDE Frameworks. After a first attempt at packaging KDE Pim 17.08.0, which was released to experimental, we are now finally reaching the point where we can package and deliver KDE Pim 17.08.3 to Debian unstable. Because Pino Toscano and I had time, we started packaging it and stumbled across the issue of having to package 58 source packages, all depending on each other. Keep in mind that packaging work is never a one-man or two-man show; most of the Debian Qt/KDE maintainers are involved somehow, either by putting their name under an upload or by being available via IRC or mail, answering questions, making jokes or whatever. Jonathan Riddell visualized the dependencies for KDE Pim 16.08 with graphviz. But KDE Pim is a fast-moving target, and I wanted to make my own graphs and make them more useful for packaging.

Full dependency graph for KDE Pim 17.08

The dependencies you see on this graph are created out of the Build dependencies within Debian for KDE Pim 17.08. I stripped out every dependency that isn't part of KDE Pim. In contrast to Jonathan, I made the arrows from dependency to package. So the starting point of the arrow is the dependency and it is pointing to the packages that can be built from it. The green colour shows you packages that have no dependency inside KDE Pim. The blue indicates packages with nothing depending on them. But to be honest, neither Jonathan's nor my graph tells me any more than they do you. They are simply too convoluted. The only thing these graphs make apparent is that packaging KDE Pim is a very complex task :D

But fortunately we can simplify the graphs. For packaging, I'm not interested in "every" dependency, but only in "new" ones. That means: if a <package> depends on a, b and c, and b depends on a, then I know that I need to package b and c before <package>, and a before b. I would call a an implicit dependency of <package>. Here it is again in a dot-style syntax:

a -> <package>
b -> <package>
c -> <package>
a -> b

can be simplified to:

b -> <package>
c -> <package>
a -> b

With this quite simple rule to strip all implicit dependencies out of the graph we end up with a more useful one:

Simplified dependency graph for KDE Pim 17.08

(You can find the dot file and the code to create such a graph at pkg-kde.alioth.debian.org)
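As an aside, this operation is exactly the transitive reduction of the dependency graph, and graphviz happens to ship a tool for it. A minimal sketch, assuming the full dependency graph is in a file named kdepim-full.dot (my own scripts linked above do a bit more than this):

# strip the implicit (transitively implied) edges and render the result
tred kdepim-full.dot > kdepim-simplified.dot
dot -Tsvg kdepim-simplified.dot -o kdepim-simplified.svg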

At least this is a lot easier to consume and create a package ordering from. But still it looks scary. So I came up with the idea to define tiers, influenced by the tier model in KDE Frameworks. I defined one tier as the maximum set of packages that are independent from each other and only depend on lower tiers:

Build tiers for KDE Pim 17.08 (The dot file and the code to create such a graph you can find at pkg-kde.alioth.debian.org)

Additionally, I only show the dependencies from the previous tier to the current one; so a dependency from tier 0 -> tier 1 is shown, but not one from tier 0 -> tier 2. That's why it looks like nothing depends on kdav or ktnef. But the ellipse shape tells you that something in a higher tier does depend on them; the light blue diamond-shaped ones, in contrast, indicate that nothing depends on them anymore. So here you can see the "hot path" of dependencies. It shows that the bottleneck is libkdepim -> pimcommon. Interestingly, this is also, more or less, the border of the former split between kdepimlibs and kdepim in KDE SC 4 times.
I think this is a useful visualization of the dependencies and may be a starting point to define a goal, what the dependencies should look like.

You may also ask yourself why an application needs so many more tiers than the whole of KDE Frameworks. Well, the third tier of KDE Frameworks is more of a collection of leftovers that don't reach tier 1 or tier 2. The definition of tier 3 is: "Tier 3 Frameworks can depend only on other Tier 3 Frameworks, Tier 2 Frameworks, Tier 1 Frameworks, Qt official frameworks, or other system libraries." The relevant part is that a tier 3 framework can depend on other tier 3 frameworks. If you use my tier definition instead, then you end up with more than ten tiers for KDE Frameworks, too.

After building all of these nice graphs for Debian, I wanted to see if I could create such graphs for KDE Pim directly. As KDE mostly uses kde-build-metadata.git for documenting dependencies, I updated my scripts to create graphs from that data directly:

Simplified dependency graph for for KDE Pim 17.12 Build tiers for KDE Pim 17.12

(the code to build the graphs yourselves is available here: kde-dev-scripts.git/pim-build-deps-graphs.py)

In detail these graphs look different, and not just because of the version difference (17.08 vs. master). I think we need to update the dependency data. This may also explain why kdesrc-build sometimes doesn't manage to compile all of KDE Pim in the first run.

21 December, 2017 12:39PM by Sandro Knauß

Bastian Blank

Google Cloud backed Debian mirror

Some time ago, someone at Google told us that they had problems providing a stable mirror of Debian for use by their cloud platform. I wanted to give it a try and see what the platform can give us. At that time I was already responsible for the Debian mirror network inside Microsoft Azure.

So I started to generalize a setup for Debian mirrors in cloud environments. I applied the setup to both Google Cloud Engine and Amazon EC2. The setup on Google Cloud works pretty well. I scrapped the EC2 setup for now, as it can provide neither the throughput nor the inter-region connectivity at a level that can compete with Google.

So I'd like to proudly present a test setup of a Google Cloud backed Debian mirror. It provides access to the main and security archives. I would be glad to see a bit more traffic on it, as I'd like to assess whether there are problems, both with synchronization and reachability.

They can be used by adding one of the following to your sources.list:

deb http://debian.gce-test.mirrors.debian.org/debian stretch main contrib non-free
deb http://debian.gce-test.mirrors.debian.org/debian buster main contrib non-free
deb http://debian.gce-test.mirrors.debian.org/debian sid main contrib non-free
deb http://debian.gce-test.mirrors.debian.org/debian experimental main contrib non-free

If you do and see problems, please report them back to me at [email protected]. Also please note that Google stores load balancer logs for seven days, including the client IP.

21 December, 2017 08:45AM by Bastian Blank

hackergotchi for Martin Pitt

Martin Pitt

Migration from PhantomJS to Chrome DevTools Protocol

Being a web interface, Cockpit has a comprehensive integration test suite which exercises all of its functionality on a real web browser that is driven by the tests. Until recently we used PhantomJS for this, but there was an ever-increasing pressure to replace it.

Why replace PhantomJS?

Phantom's engine is becoming really outdated: it cannot understand even simple ES6 constructs like Set, arrow functions, or promises, which have been in real browsers for many years; this currently blocks hauling in some new code from the welder project. It also doesn't understand reasonably modern CSS, which is particularly important for making mobile-friendly pages, so we had to put workarounds for crashes and other misbehaviour into our code. Also, development was officially declared abandoned last April.

So about two months ago I started some research for possible replacements. Fortunately, Cockpit’s tests are not directly written in JavaScript using the PhantomJS API, but they use an abstract Browser Python class with methods like open(url), wait_visible(selector), and click(selector). So I “only” needed to reimplement that Browser class, and didn’t have to rewrite the entire test suite.

Candidates

The contenders in the ring which are currently popular and most likely supported for a fair while, with their pros and cons:

  1. Electron. This is the rendering engine of Chromium plus the JS engine from nodejs. It is widely adopted and used, and relatively compact (much smaller than Chromium itself).

    • pro: It has a built in REPL to use it interactively (node_modules/.bin/electron -i) and this API is relatively simple and straightforward to use, if your test is built around an external process. This is the case for our Python tests.
    • pro: If your tests are in JS, there is Nightmare as API for electron. This is a really nice one, and super-easy to get started; npm install nightmare, write your first test in 5 lines of JS, done.
    • pro: Has nice features such as verbose debug logging to watch every change, signal, and action that’s going on. You can also enable the graphical window where you see your test actions fly by, you can click around, and use the builtin inspector/debugger/console.
    • It lags behind the latest Chromium a bit. E. g. latest chromium-browser in Fedora 27 is v62, latest Electron is based on 58. (But this might be a good or bad thing depending on your project - sometimes you actually don’t want to require the very latest browser)
    • con: Not currently packaged in Fedora or Debian, so you need to install it through npm (~ 130 MB uncompressed). I. e. almost twice as big as PhantomJS, although the difference in the compressed download is much smaller.
    • con: It does not represent a “real-life” browser as it uses a custom JS engine. While this should not make much of a difference in theory, there’s always little quirks and bugs to be aware of.

  2. Use a real browser (Chromium, Firefox, Edge) with Selenium

    • pro: Gives very realistic results
    • pro: Long-established standard, so most likely will continue to stay around for a while
    • con: Much harder to set up than the other two
    • con: API is low-level, so you need to have some helper API to write tests in a sensible manner.

  3. Use Chromium itself with the DevTools Protocol, possibly in the stripped down headless variant. You have to use a library on top of that: chrome-remote-interface seems to be the standard one, but it’s tiny and straightforward.

    • pro: This is becoming an established standard which other browsers start to support as well (e. g. Edge)
    • pro By nature, gives very realistic test results, and you can choose which Chrome version to test against.
    • pro: Chromium is packaged in all distros, so this doesn’t require a big npm download for running the tests.
    • con: Relatively hard to set up compared to electron or phantom: you manually need to control the actual chromium process plus your own chrome-remote-interface controller process, and allocate port numbers in a race-free manner (to run tests in parallel).
    • con: Relatively low-level protocol (roughly comparable to Selenium), so this is not directly appropriate for writing tests - you need to create your own high-level library on top of this. (But in Cockpit we already have that)

  4. puppeteer is a high-level JS library on top of the Chromium DevTools Protocol.

    • pro: Comfortable and abstract API, comparable to Nightmare.
    • pro: It does the job of launching and controlling Chromium, so similarly simple to set up as Nightmare or Phantom.
    • con: Does not work with the already installed/packaged Chromium, it bundles its own.

After evaluating all these, my conclusion is that for a new project I can recommend puppeteer. If you can live with pulling in the browser through NPM for every test run (CI services like Semaphore cache your node_modules directory, so it might not be a big issue) and are fine with writing your tests in JavaScript, then puppeteer provides the easiest setup and a comfortable and abstract API.

For our existing Cockpit project however, I eventually went with option 3, i. e. Chrome DevTools protocol directly. puppeteer’s own abstraction does not actually help our tests as we already have the Browser class abstraction, and for our CI system and convenience of local test running it actually does make a difference whether you can use the already installed/packaged Chrome or have to download an entire copy. I also suspect that my troubles with SSL certificates (see below) would be much harder or even impossible to solve/workaround with puppeteer.

Interacting with Chromium

The API documentation is excellent, and one can tinker around in the REPL interpreter in a simple and straightforward way and watch the results in an interactive Chromium that runs with a temporary $HOME (to avoid interfering with your real config):

$ rm -rf /tmp/h; HOME=/tmp/h chromium-browser --remote-debugging-port=9222 about:blank &
$ mkdir /tmp/test; cd /tmp/test
$ npm install chrome-remote-interface
$ node_modules/.bin/chrome-remote-interface inspect

In the chrome-remote-interface shell one can directly use the CDP commands, for example: Open Google’s search page, focus the search input line, type a query, and check the current URL afterwards:

>>> Page.navigate({url: "https://www.google.de"})
{ frameId: '4521.1' }
>>> Runtime.evaluate({expression: "document.querySelector('input[name=\"q\"]').focus()"})

>>> // type in the search term and Enter key by key
>>> "cockpit\r".split('').map(c => Input.dispatchKeyEvent({type: "char", text: c}))

>>> Runtime.evaluate({expression: "window.location.toString()"})
{ result:
   { type: 'string',
     value: 'https://www.google.de/search?source=hp&ei=T5...&q=cockpit&oq=cockpit&gs_l=[...]' } }

The porting process

After getting an initial idea and feeling how the DevTools protocol works, the actual porting process went in a pretty typical Pareto way. After two days I had around 150 out of our ~ 180 tests working, and porting most of the API from PhantomJS to CDP invocations was straightforward. A lot of the remaining test failures were due to “ordinary” flakes and bugs in the tests themselves, and a series of four PRs fixed them.

There were three major issues on which I spent the “other 90%” of the time on this though - perhaps this blog post and my upstream bug reports help other people to avoid the same traps:

  • Frame handling: Cockpit is built around the concept of iframes, with each frame representing an “application” on your “server Linux session”. To make an assertion or run a query in an iframe, you need to “drill through” into the desired iframe from the root page DOM. I started with a naïve JavaScript-only solution:

    if (current_frame)
      frame_doc = document.querySelector(`iframe[name="${current_frame}"]`).contentDocument.documentElement;
    else
      frame_doc = document;
    

    and then do queries on frame_doc. This actually works well for all but one of our tests, which checks embedding a Cockpit page into a custom HTML page. There this approach (rightfully) fails due to the browser's same-origin policy.

    So I went ahead and implemented a solution using the DevTools "mirror" DOM and API. It took me three different attempts to get that right, and in that regard neither the API documentation nor a Google search was particularly instructive. This is an area where the protocol really could be improved. I posted my solution and a few suggestions to devtools-protocol issue #72.

  • SSL client certs: Our OpenShift tests kept failing when the OAuth page came up, but only when using Headless mode. I initially thought this was due to the OAuth server having an invalid SSL certificate, as the initial error message suggests something like that. But all approaches with --ignore-certificate-errors or a more elaborate usage of the Security API or even actually installing the OAuth server’s certificate didn’t work - quite frustrating.

    It finally helped to enable a third kind of log (besides console messages and --enable-logging --v=1), which revealed what it was complaining about: OAuth was sending a request for presenting a client-side SSL certificate, and this just causes Chromium Headless to throw its hands into the air. As there is no workaround with Chromium Headless, I had to bite the bullet and install the full graphical Chromium (plus half a metric ton of X/mesa dependencies) and Xvfb into our test containers, plus write the logic to bring these up and down in an orderly and parallel fashion (a rough sketch of that bring-up follows after this list).

  • Silently broken pushState API: One of our tests was reproducibly failing on the infrastructure, and only sometimes locally; the screenshot showed that it clearly was on the wrong page, although the previous navigation requests caused no error. Single-stepping through them also worked. Peter and I spent about three days debugging this and figuring out why adding a simple sleep(3) at a random place in the test made it succeed.

    It turned out that a few months ago the window.history.pushState() method changed behaviour: when it is called too often (more than 50 times in 10 seconds), further calls are silently ignored, without returning or logging an error. This was by far the most frustrating and biggest time sink, but after finally discovering it, we had a good justification why a static sleep() is actually warranted in this case. (Related upstream bug reports: #794923 and #769592)

After figuring all that out, the final patch turned out to be reasonably small and readable. Most of the commits are minor test adjustments which weren’t possible to implement exactly as before in the API. Of course this got preceded with half a dozen preparatory commits, to adjust dependencies in containers, fix test races, and the like.

Now that this has landed, we could clean up a bunch of PhantomJS-related hacks, it is now possible to write tests for the mobile navigation, and we can also test ES6 code (such as welder-web). Debugging tests is much more fun now, as you can run them in an interactive graphical browser, see widgets and pages flying around, and interactively mess with or inspect them.

21 December, 2017 08:26AM

December 20, 2017

Russell Coker

Designing Shared Cars

Almost 10 years ago I blogged about car sharing companies in Melbourne [1]. Since that time the use of such services appears to have slowly grown (judging by the slow growth in the reserved parking spots for such cars). This isn’t the sudden growth that public transport advocates and the operators of those companies hoped for, but it is still positive. I have just watched the documentary The Human Scale [2] (which I highly recommend) about the way that cities are designed for cars rather than for people.

I think that it is necessary to make cities more suited to the needs of people and that car share and car hire companies are an important part of converting from a car based city to a human based city. As this sort of change happens the share cars will be an increasing portion of the new car sales and car companies will have to design cars to better suit shared use.

Personalising Cars

Luxury car brands like Mercedes support storing the preferred seat position for each driver. Once the basic step of maintaining separate driver profiles is done, it’s an easy second step to have them accessed over the Internet and to also store settings like preferred radio stations, Bluetooth connection profiles, etc. For a car share company it wouldn’t be particularly difficult to extrapolate settings based on previous use, e.g. knowing that I’m tall and using the default settings for a tall person every time I get in a shared car that I haven’t driven before. Having Bluetooth connections follow the user would mean having one slave address per customer instead of the current practice of one per car; the addressing is 48-bit, so this shouldn’t be a problem.

Most people accumulate many items in their car, some they don’t need, but many are needed. Some of the things in my car are change for parking meters, sunscreen, tools, and tissues. Car share companies have deals with councils for reserved parking spaces so it wouldn’t be difficult for them to have a deal for paying for parking and billing the driver thus removing the need for change (and the risk of a car window being smashed by some desperate person who wants to steal a few dollars). Sunscreen is a common enough item in Australia that a car share company might just provide it as a perk of using a shared car.

Most people have items like tools, a water bottle, and spare clothes that can’t be shared which tend to end up distributed in various storage locations. The solution to this might be to have a fixed size storage area, maybe based on some common storage item like a milk crate. Then everyone who is a frequent user of shared cars could buy a container designed to fit that space which is divided in a similar manner to a Bento box to contain whatever they need to carry.

There is a lot of research into having computers observing the operation of a car and warning the driver or even automatically applying the brakes to avoid a crash. For shared cars this is more important as drivers won’t necessarily have a feel for the car and can’t be expected to drive as well.

Car Sizes

Generally cars are designed to have 2 people (sports car, Smart car, van/ute/light-truck), 4/5 people (most cars), or 6-8 people (people movers). These configurations are based on what most people are able to use all the time. Most car travel involves only one adult. Most journeys appear to have no passengers or only children being driven around by a single adult.

Cars are designed for what people can drive all the time rather than what would best suit their needs most of the time. Almost no-one is going to buy a personal car that can only take one person even though most people who drive will be on their own for most journeys. Most people will occasionally need to take passengers and that occasional need will outweigh the additional costs in buying and fueling a car with the extra passenger space.

I expect that when car share companies get a larger market they will have several vehicles in the same location to allow users to choose which to drive. If such a choice is available then I think that many people would sometimes choose a vehicle with no space for passengers but extra space for cargo and/or being smaller and easier to park.

For the common case of one adult driving small children the front passenger seat can’t be used due to the risk of airbags killing small kids. A car with storage space instead of a front passenger seat would be more useful in that situation.

Some of these possible design choices can also be after-market modifications. I know someone who removed the rear row of seats from a people-mover to store the equipment for his work. That gave a vehicle with plenty of space for his equipment while also having a row of seats for his kids. If he was using shared vehicles he might have chosen to use either a vehicle well suited to cargo (a small van or ute) or a regular car for transporting his kids. It could be that there’s an untapped demand for ~4 people in a car along with cargo so a car share company could remove the back row of seats from people movers to cater to that.

20 December, 2017 12:31PM by etbe

Renata D'Avila

My project with Outreachy

Let's get to the project I actually applied to:

To build a calendar for FOSS events

We have a page on the Debian wiki where we centralize the information needed to make that a reality; you can find it here: SocialEventAndConferenceCalendars

So, in fact, the first thing I did on my internship was:

  • Search for more sources for FOSS events that hadn't been mentioned in that page yet

  • Update said page with these sources

  • Add some attributes for events that I believe could be useful for people wanting to attend them, such as:

    • Is the registration (and not just the CFP) still open?
    • Does the event have a code of conduct?
    • What about accessibility?

I understand that some of this information might not be readily available for most of the events, but maybe the mere act of mentioning it in our aggregation system will be enough to get an organizer to think about it, if they aim to have their event mentioned "by us"?

Both my mentor, Daniel, and I have been looking around to find projects that have worked on a goal similar to this one, to study them and see what can be learned from what has been done already and what can be reused from it. They are mentioned on the wiki page as well. If you know any others, feel free to add there or to let us know!

Among the proposed deliverables for this project:

  • making a plugin for other community web sites to maintain calendars within their existing web site (plugin for Discourse forums, MoinMoin, Drupal, MediaWiki, WordPress, etc) and export it as iCalendar data
  • developing tools for parsing iCalendar feeds and storing the data into a large central database
  • developing tools for searching the database to help somebody find relevant events or see a list of deadlines for bursary applications

My dear mentor Daniel Pocock suggested that I consider working on a plugin for MoinMoinWiki, because Debian and FSFE use MoinMoin for their wikis. I have to admit that I thought that was an awesome idea as soon as I read it, but I was a bit afraid that it would be a very steep learning curve to learn how MoinMoin worked and how I could contribute to it. I'm glad Daniel calmed my fears and reminded me that the mentors are on my side and glad to help!

So, what else have I been doing?

So far? I would say studying! Studying documentation for MoinMoin, studying code that has already been written by others, studying how to plan and to implement this project.

And what have I learned so far?

What is MoinMoin Wiki?

MoinMoin logo, sort of a white "M" inside a circle with light blue background. The corners of the M are rounded and seem connected like nodes

MoinMoin is a wiki written in... Python (YAY! \o/). Let's say that I have... interacted with development on a wiki-like system back when I created my first (and now defunct) blog post-Facebook.

Ikiwiki logo, the first 'iki' is black and mirrors the second one, with a red 'W' in the middle

Ikiwiki was written in Perl, a language I know close to nothing about, which limited a lot how I could interact with it. I am glad that I will be able to work with a language that I am way more familiar with. (And, in Prof. Masanori's words: "Python is a cool language.")

I also learned that MoinMoin's storage mechanism is based on flat files and folders, rather than a database (I swear that, despite my defense of flat file systems, this is a coincidence. I mean, if you believe in coincidences). I also found out that the development uses Mercurial for version control. I look forward to exploring it, because so far I have only used git.

The past few days I set up a local MoinMoin instance. Even though there is a HowTo guide to get MoinMoinWiki working on Debian, I had a little trouble setting it up with it, mostly because the guide is sort of confusing about permissions, I think? I mean, it says to create a new user with no login, but then it gives commands that can only be executed by root or sudo. That doesn't seem very wise. So I went on and found a docker image for MoinMoin wiki and was able to work on MoinMoin with it. This image is based on Debian Jessie, so maybe that is something that I might work to improve in the future.

Only after I got everything working with docker did I find this page with instructions for Linux, which was what I should've tried in the first place, because I didn't really need a fully configured server with nginx and uwsgi, only a local instance to play with. It happens.

I studied the development guide for MoinMoin and I have also worked to understand the development process (and what Macros, Plugins and such are in this context), so I could figure out where and how to develop!

Macros

A macro is entered as wiki markup and it processes a few parameters to generate an output, which is displayed in the content area.
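To get a feel for the plumbing, this is roughly what a minimal macro module looks like. This is only a sketch based on my reading of the MoinMoin 1.9 development docs; the macro name and file path here are made up by me:

# data/plugin/macro/HelloEvents.py - hypothetical example macro.
# A page would invoke it with <<HelloEvents(count=3)>>.

def macro_HelloEvents(macro, count=1):
    """Render a short message into the page's content area."""
    # Going through the formatter keeps the output consistent with the
    # wiki theme and properly escaped.
    return macro.formatter.text(u"Looking forward to %d upcoming event(s)!" % int(count))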

Searching for Macros, I found out there is a Calendar Macro. And I have discovered that, besides the Calendar Macro, there is also an EventCalendar macro that was developed years ago. I expect to use the next few days to study the EventCalendar code more thoroughly, but my first impression is that this is code that can be reused and improved for the FOSS calendar.

Parsers

A parser is entered as wiki markup and it processes a few parameters and a multiline block of text data to generate an output, which is displayed in the content area.

Actions

An action is mostly called using the menu (or a macro) and generates a complete HTML page on its own.

So maybe I will have to work a bit on this afterwards, to interact with the macro and customize the information to be displayed? I am not sure, I will have to look more into this.
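The skeleton of an action, from what I could gather from the development docs, looks something like this. Again just a sketch with names I made up; how it would actually talk to the calendar macro is exactly what I still need to figure out:

# data/plugin/action/EventInfo.py - hypothetical example action.
# Reached via ...?action=EventInfo on a wiki page.
from MoinMoin.Page import Page

def execute(pagename, request):
    """Build a complete response; this one simply re-renders the page
    with a short informational message attached."""
    request.theme.add_msg(u"EventInfo called on page %s" % pagename, "info")
    Page(request, pagename).send_page()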

I guess that is all I have to report for now. See you in two weeks (or less)!

20 December, 2017 04:20AM by Renata

My contribution to Github-icalendar

Hello!

Now that you already know a bit about me, let me start talking about my internship with Outreachy.

One of the steps to apply to the internship is to pick the project you would like to work on. I chose the one with Debian to build a calendar database of social events and conferences.

It is also part of the application process to make some contribution to the project. At first, it wasn't clear to me what contribution that would be (I hadn't found that URL yet), so I went to the #debian-outreach IRC channel and... well, asked, of course. That is when I found the page with a description of the task. I was supposed to learn about the iCalendar format (I didn't even know what it was, back then!) and work on an issue on the github-icalendar project: to use repository labels in one of the suggested ways.

My contribution for github-icalendar

Github-icalendar works by accessing the open issues in all repositories that the user has access to and transforming them into an iCalendar feed of VTODO items.
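Just to illustrate the idea (this is not github-icalendar's actual code, and the field choices are my own), turning one issue into a VTODO with the Python icalendar library looks roughly like this:

# Sketch: build an iCalendar feed of VTODO items from GitHub issues,
# using the icalendar library. Field mapping is illustrative only.
from icalendar import Calendar, Todo

def issue_to_vtodo(issue):
    todo = Todo()
    todo.add('summary', issue['title'])
    todo.add('description', issue.get('body') or '')
    todo.add('url', issue['html_url'])
    return todo

def build_feed(issues):
    cal = Calendar()
    cal.add('prodid', '-//example issue feed//EN')
    cal.add('version', '2.0')
    for issue in issues:
        cal.add_component(issue_to_vtodo(issue))
    return cal.to_ical()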

I chose to solve the labels issue using them to filter the list of issues that should appear in a feed. I imagined two use cases for it:

  1. A user wants to get issues from all their repositories that contain a given label (getting all 'bug' issues, for instance)

  2. A user wants to get issues containing a given label from only a specific repository.

Therefore, the label system should support both of these uses.
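As a rough sketch of the second use case (hypothetical code written just to show the shape of the GitHub Issues API call, with authentication details left out):

# Sketch: fetch open issues carrying a given label from one repository,
# via the GitHub Issues API and the requests library.
import requests

def labelled_issues(owner, repo, label, token=None):
    headers = {'Authorization': 'token %s' % token} if token else {}
    url = 'https://api.github.com/repos/%s/%s/issues' % (owner, repo)
    response = requests.get(url,
                            params={'labels': label, 'state': 'open'},
                            headers=headers)
    response.raise_for_status()
    return response.json()

# e.g. labelled_issues('someuser', 'somerepo', 'bug')

The first use case would then just repeat this over every repository the user has access to.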

Working on this contribution taught me not only about the iCalendar format, but it also gave me hands-on experience interacting with the Github Issues API.

Back in October, I was able to attend Python Brasil, the national conference about Python, during which I stayed in an accommodation with other PyLadies and allies. I used this opportunity to share what I had developed so far and to get some feedback. That's how I learned about pudb and how to use it to debug my code (and find out where I was getting the Github Issues API wrong). Because I found it so useful, in my pull request I proposed adding it to the project, to help with future development. I also started adding some tests and wrote some specifications as suggestions to anyone who keeps working on it.

I would like to take this opportunity to thank the friends who pointed me in the right direction during the application process and made this internship a reality for me, in particular Elias Dorneles.

20 December, 2017 02:01AM by Renata

December 19, 2017

Reproducible builds folks

Reproducible Builds: Weekly report #138

Here's what happened in the Reproducible Builds effort between Sunday December 10 and Saturday December 16 2017:

Upcoming events

The Reproducible Builds project are organising an assembly at 34C3 (the "Galactic Congress") in Leipzig, Germany. We will informally meet every day at 13:37 UTC and would be delighted if you joined us there.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

43 package reviews have been added, 48 have been updated and 51 have been removed in this week, adding to our knowledge about identified issues.

4 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (55)
  • Andreas Beckmann (2)
  • Laurent Bigonville (1)
  • Michael Biebl (1)
  • Pierre Saramito (2)

diffoscope development

reprotest development

Versions 0.7.5, 0.7.6 and 0.7.7 were uploaded to unstable by Ximin Luo.

They included contributions already covered by posts of the previous weeks as well as new changes:

buildinfo.debian.net development

reproducible-website development

jenkins.debian.net development

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

19 December, 2017 02:35PM

hackergotchi for Colin Watson

Colin Watson

An odd test failure

Weird test failures are great at teaching you things that you didn’t realise you might need to know.

As previously mentioned, I’ve been working on converting Launchpad from Buildout to virtualenv and pip, and I finally landed that change on our development branch today. The final landing was mostly quite smooth, except for one test failure on our buildbot that I hadn’t seen before:

ERROR: lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked
worker ID: unknown worker (bug in our subunit output?)
----------------------------------------------------------------------
Traceback (most recent call last):
_StringException: log: {{{
36.384  creating repository in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/.bzr/.
36.388  creating branch <bzrlib.branch.BzrBranchFormat7 object at 0xeb85b36c> in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/
}}}

Traceback (most recent call last):
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/lib/lp/codehosting/codeimport/tests/test_worker.py", line 1108, in test_stacked
    stacked_on.fetch(Branch.open(source_details.url))
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/branch.py", line 186, in open
    possible_transports=possible_transports, _unsupported=_unsupported)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 689, in open
    _unsupported=_unsupported)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 718, in open_from_transport
    find_format, transport, redirected)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/transport/__init__.py", line 1719, in do_catching_redirections
    return action(transport)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 706, in find_format
    probers=probers)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 1155, in find_format
    raise errors.NotBranchError(path=transport.base)
NotBranchError: Not a branch: "/tmp/tmpdwqrc6/trunk/".

When I investigated this locally, I found that I could reproduce it if I ran just that test on its own, but not if I ran it together with the other tests in the same class. That’s certainly my favourite way round for test isolation failures to present themselves (it’s more usual to find state from one test leaking out and causing another one to fail, which can make for a very time-consuming exercise of trying to find the critical combination), but it’s still pretty odd.

I stepped through the Branch.open call in each case in the hope of some enlightenment. The interesting difference was that the custom probers installed by the bzr-svn plugin weren’t installed when I ran that one test on its own, so it was trying to open a branch as a Bazaar branch rather than using the foreign-branch logic for Subversion, and this presumably depended on some configuration that only some tests put in place. I was on the verge of just explicitly setting up that plugin in the test suite’s setUp method, but I was still curious about exactly what was breaking this.

Launchpad installs several Bazaar plugins, and lib/lp/codehosting/__init__.py is responsible for putting most of these in place: anything in Launchpad itself that uses Bazaar is generally supposed to do something like import lp.codehosting to set everything up. I therefore put a breakpoint at the top of lp.codehosting and stepped through it to see whether anything was going wrong in the initial setup. Sure enough, I found that bzrlib.plugins.svn was failing to import due to an exception raised by bzrlib.i18n.load_plugin_translations, which was being swallowed silently but meant that its custom probers weren’t being installed. Here’s what that function looks like:

def load_plugin_translations(domain):
    """Load the translations for a specific plugin.

    :param domain: Gettext domain name (usually 'bzr-PLUGINNAME')
    """
    locale_base = os.path.dirname(
        unicode(__file__, sys.getfilesystemencoding()))
    translation = install_translations(domain=domain,
        locale_base=locale_base)
    add_fallback(translation)
    return translation

In this case, sys.getfilesystemencoding was returning None, which isn’t a valid encoding argument to unicode. But why would that be? It gave me a sensible result when I ran it from a Python shell in this environment. A bit of head-scratching later and it occurred to me to look at a backtrace:

(Pdb) bt
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(703)<module>()
-> main()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(694)main()
-> execsitecustomize()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(548)execsitecustomize()
-> import sitecustomize
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/sitecustomize.py(7)<module>()
-> lp_sitecustomize.main()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(193)main()
-> dont_wrap_bzr_branch_classes()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(139)dont_wrap_bzr_branch_classes()
-> import lp.codehosting
> /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp/codehosting/__init__.py(54)<module>()
-> load_plugins([_get_bzr_plugins_path()])

I wonder if there’s something interesting about being imported from a sitecustomize hook? Sure enough, when I went to look at Python for where sys.getfilesystemencoding is set up, I found this in Py_InitializeEx:

    if (!Py_NoSiteFlag)
        initsite(); /* Module site */
    ...
#if defined(Py_USING_UNICODE) && defined(HAVE_LANGINFO_H) && defined(CODESET)
    /* On Unix, set the file system encoding according to the
       user's preference, if the CODESET names a well-known
       Python codec, and Py_FileSystemDefaultEncoding isn't
       initialized by other means. Also set the encoding of
       stdin and stdout if these are terminals, unless overridden.  */

    if (!overridden || !Py_FileSystemDefaultEncoding) {
        ...
    }

I moved this out of sitecustomize, and it’s working better now. But did you know that a sitecustomize hook can’t safely use anything that depends on sys.getfilesystemencoding? I certainly didn’t, until it bit me.
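For illustration only (the actual fix was simply to move the import out of sitecustomize): a helper that wants to survive being called that early would have to guard the encoding lookup along these lines. This is just a sketch, not bzrlib's code, and it keeps the Python 2 idiom of the snippet above:

# Sketch: tolerate sys.getfilesystemencoding() returning None when called
# before interpreter initialisation has finished (e.g. from sitecustomize).
import os
import sys

def plugin_locale_base(module_file):
    # Fall back to the default encoding (usually 'ascii' on Python 2) if
    # the filesystem encoding has not been set up yet.
    encoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
    return os.path.dirname(unicode(module_file, encoding))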

19 December, 2017 01:52PM by Colin Watson

hackergotchi for Jonathan Dowland

Jonathan Dowland

Containers lecture

I've repeated last year's docker lecture a couple of times recently, now revised and retitled "Introduction to Containers". The material is mostly the same; the demo steps are exactly the same and I haven't produced any updated hand-outs this time (sorry). Revised slides: shorter version (terms: CC-BY-SA)

Whilst trying to introduce containers, the approach I've taken is to work up through the history of web site/server/app hosting, from physical hosting via Virtual Machines. This gives you the context for their popularity, but I find VMs are not the best way to explain container technology. I prefer to go the other way and look at a process on a multi-user system, the problems due to lack of isolation, and steadily build up the isolation available with tools like chroot, etc.

The other area I've tried to expand on is the orchestration layer on top of containers, and above, including technologies such as Kubernetes and Openshift. If I deliver this again I'd like to expand this material much more. On that note, a colleague recently forwarded a link to a Google research paper originally published in acmqueue in January 2016, Borg, Omega, and Kubernetes which is a great read on the history of containers in Google and what led up to the open sourcing of Kubernetes, their third iteration at designing a container orchestrator.

19 December, 2017 10:07AM

hackergotchi for Norbert Preining

Norbert Preining

Japan-styled Christmas Cards

A friend of mine, Kimberlee Aliasgar of Trinidad and Tobago, has created very nice "Japan styled" Christmas cards over at Xmascardsjapan (the Japanese version is here). They pick up some typical themes from Japan and turn them into lovely designed cards that are a present in themselves, no need for additional presents 😉

Here is another example with “Merry Christmas” written in Katakana, giving a nice touch.

In case you are interested, head over to the English version or Japanese version of their web shop.

I hope you enjoy the cards!

19 December, 2017 08:12AM by Norbert Preining

hackergotchi for Colin Watson

Colin Watson

Kitten Block equivalent for Firefox 57

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

19 December, 2017 12:00AM by Colin Watson

December 17, 2017

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

littler 0.3.3

max-heap image

The fourth release of littler as a CRAN package is now available, following in the now more than ten-year history as a package started by Jeff in 2006, and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. In my very biased eyes it is better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. Last but not least, it is also less silly than Rscript and always loads the methods package, avoiding those bizarro bugs between code running in R itself and a scripting front-end.

littler prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (whoever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

A few examples as highlighted at the Github repo:

This release brings a few new example scripts, extends a few existing ones and also includes two fixes thanks to Carl. Again, no internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.3 (2017-12-17)

  • Changes in examples

    • The script installGithub.r now correctly uses the upgrade argument (Carl Boettiger in #49).

    • New script pnrrs.r to call the package-native registration helper function added in R 3.4.0

    • The script install2.r now has more robust error handling (Carl Boettiger in #50).

    • New script cow.r to use R Hub's check_on_windows

    • Scripts cow.r and c4c.r use #!/usr/bin/env r

    • New option --fast (or -f) for scripts build.r and rcc.r for faster package build and check

    • The build.r script now defaults to using the current directory if no argument is provided.

    • The RStudio getters now use the rvest package to parse the webpage with available versions.

  • Changes in package

    • Travis CI now uses https to fetch script, and sets the group

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 December, 2017 04:37PM

hackergotchi for Lars Wirzenius

Lars Wirzenius

The proof is in the pudding

I wrote these when I woke up one night and had trouble getting back to sleep, and spent a while in a very philosophical mood thinking about life, success, and productivity as a programmer.

Imagine you're developing a piece of software.

  • You don't know it works, unless you've used it.

  • You don't know it's good, unless people tell you it is.

  • You don't know you can do it, unless you've already done it.

  • You don't know it can handle a given load, unless you've already tried it.

  • The real bottlenecks are always a surprise, the first time you measure.

  • It's not ready for production until it's been used in production.

  • Your automated tests always miss something, but with only manual tests, you always miss more.

17 December, 2017 09:25AM

Petter Reinholdtsen

Cura, the nice 3D print slicer, is now in Debian Unstable

After several months of working and waiting, I am happy to report that the nice and user friendly 3D printer slicer software Cura just entered Debian Unstable. It consists of six packages: cura, cura-engine, libarcus, fdm-materials, libsavitar and uranium. The last two, uranium and cura, entered Unstable yesterday. This should make it easier for Debian users to print on at least the Ultimaker class of 3D printers. My nearest 3D printer is an Ultimaker 2+, so it will make life easier for at least me. :)

The work to make this happen was done by Gregor Riepl, and I was happy to assist him in sponsoring the packages. With the introduction of Cura, Debian now has three 3D printer slicers at your service: Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D printer, give it a go. :)

The 3D printer software is maintained by the 3D printer Debian team, flocking together on the 3dprinter-general mailing list and the #debian-3dprinting IRC channel.

The next step for Cura in Debian is to update the cura package to version 3.0.3 and then update the entire set of packages to version 3.1.0, which showed up in the last few days.

17 December, 2017 06:00AM

December 16, 2017

hackergotchi for Steve Kemp

Steve Kemp

IoT radio: Still in-progress ..

So back in September I was talking about building an IoT radio, and after that I switched to talking about tracking aircraft via software-defined radio. Perhaps it's time for a followup.

So my initial attempt at an IoT radio was designed around the RDA5807M module. Frustratingly the damn thing was too small to solder easily! Once I did get it working though, I found that either the specs lied to me, or I'd misunderstood them: it wouldn't drive headphones, and performance was poor. (Though amusingly the first time I got it working I managed to tune to Helsinki's rock station, and the first thing I heard was Rammstein's Amerika.)

I made another attempt with an Si4703-based "evaluation board". This was a board which had most of the stuff wired in, so all you had to do was connect an MCU to it, and do the necessary software dancing. There was a headphone-socket for output, and no need to fiddle with the chip itself, it was all pretty neat.

Unfortunately the evaluation board was perfect for basic use, but not at all suitable for real use. The board did successfully output audio to a pair of headphones, but unfortunately it required the use of headphones, as the cable would be treated as an antenna. As soon as I fed the output of the headphone-jack to an op-amp to drive some speakers I was beset with the kind of noise that makes old people reminisce about how music was better back in their day.

So I'm now up to round 3. I have a TEA5767-based project in the works, which should hopefully resolve my problems:

  • There are explicit output and aerial connections.
  • I know I'll need an amplifier.
  • The hardware is easy to control via arduino/esp8266 MCUs.
    • Numerous well-documented projects exist using this chip.

The only downside I can see is that I have to use the op-amp for volume control too - the TEA5767-chip allows you to mute/unmute via software but doesn't allow you to set the volume. Probably for the best.

In unrelated news I've got some e-paper which is ESP8266/arduino controlled. I have no killer-app for it, but it's pretty great. I should write that up sometime.

16 December, 2017 10:00PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

drat 0.1.4

drat user

A new version of drat just arrived on CRAN as another no-human-can-delay-this automatic upgrade directly from the CRAN prechecks (though I did need a manual reminder from Uwe to remove a now stale drat repo URL -- bad @hrbrmstr -- from the README in a first attempt).

This release is mostly the work of Neal Fultz who kindly sent me two squeaky-clean pull requests addressing two open issue tickets. As drat is reasonably small and simple, that was enough to motivate a quick release. I also ensured that PACKAGES.rds will always be committed along (if we're in commit mode), which is a follow-up to an initial change in 0.1.3 in September.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users, because repositories with marked releases are the better way to distribute code.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.4 (2017-12-16)

  • Changes in drat functionality

    • Binaries for macOS are now split by R version into two different directories (Neal Fultz in #67 addressing #64).

    • The target branch can now be set via a global option (Neal Fultz in #68 addressing #61).

    • In commit mode, add file PACKAGES.rds unconditionally.

  • Changes in drat documentation

    • Updated 'README.md' removing another stale example URL

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 December, 2017 05:26PM

hackergotchi for Daniel Lange

Daniel Lange

IMAPFilter 2.6.11-1 backport for Debian Jessie AMD64 available

One of the perks you get as a Debian Developer is a @debian.org email address. And because Debian is old and the Internet used to be a friendly place this email address is plastered all over the Internet. So you get email spam, a lot of spam.

I'm using a combination of server and client side filtering to keep spam at bay. Unfortunately the IMAPFilter version in Debian Jessie doesn't even support "dry run" (-n), which is not so cool when developing complex filter rules. So I backported the latest (sid) version and agreed with Sylvestre Ledru, one of its maintainers, to share it here and see whether making an official backport is worth it. It's a straight recompile, so no magic and no source code or packaging changes required.

Get it while it's hot:

imapfilter_2.6.11-1~bpo8+1_amd64.deb (IMAPFilter Jessie backport)
SHA1: bedb9c39e576a58acaf41395e667c84a1b400776

Clever LUA snippets for ~/.imapfilter/config.lua appreciated.

16 December, 2017 02:59PM by Daniel Lange ([email protected])

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.13

A small maintenance release, version 0.6.13, of the digest package arrived on CRAN and in Debian yesterday.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.

This release accommodates a request by Luke and Tomas to make the version argument of serialize() an argument to digest() too, which was easy enough to do. The value 2L is the current default (and for now the only permitted value). The ALTREP changes in R 3.5 will bring us a new, and more powerful, format with value 3L. The value can be set in each call, or globally via options(). Other than that, we just clarified one aspect of raw vector usage in the manual page.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

16 December, 2017 12:43PM

December 15, 2017

hackergotchi for Michael Prokop

Michael Prokop

Usage of Ansible for Continuous Configuration Management

It all started with a tweet of mine:

Screenshot of https://twitter.com/mikagrml/status/941304704004448257

I received quite some feedback since then and I’d like to iterate on this.

I’m a puppet user since ~2008 and since ~2015 also ansible is part of my sysadmin toolbox. Recently certain ansible setups I’m involved in grew faster than I’d like to see, both in terms of managed hosts/services as well as the size of the ansible playbooks. I like ansible for ad hoc tasks, like `ansible -i ansible_hosts all -m shell -a 'lsb_release -rs'` to get an overview what distribution release systems are running, requiring only a working SSH connection and python on the client systems. ansible-cmdb provides a nice and simple to use ad hoc host overview without much effort and overhead. I even have puppetdb_to_ansible scripts to query a puppetdb via its API and generate host lists for usage with ansible on-the-fly. Ansible certainly has its use case for e.g. bootstrapping systems, orchestration and handling deployments.

Ansible has an easier learning curve than e.g. puppet and this might seem to be the underlying reason for its usage for tasks it’s not really good at. To be more precise: IMO ansible is a bad choice for continuous configuration management. Some observations, though YMMV:

  • ansible’s vaults are no real replacement for something like puppet’s hiera (though Jerakia might mitigate at least the pain regarding data lookups)
  • ansible runs are slow, and get slower with every single task you add
  • having a push model with ansible instead of pull (like puppet’s agent mode) implies you don’t get/force regular runs all the time, and your ansible playbooks might just not work anymore once you (have to) touch them again
  • the lack of a DSL results in e.g. each single package management having its own module (apt, dnf, yum,….), having too many ways how to do something, resulting more often than not in something I’d tend to call spaghetti code
  • the lack of community modules comparable to Puppet’s Forge
  • the lack of a central DB (like puppetdb) means you can’t do something like with puppet’s exported resources, which is useful e.g. for central ssh hostkey handling, monitoring checks,…
  • the lack of a resources DAG in ansible might look like a welcome simplification in the beginning, but its absence is becoming a problem when complexity and requirements grow (example: delete all unmanaged files from a directory)
  • it’s not easy at all to have ansible run automated and remotely on a couple of hundred hosts without stumbling over anything — Rudolph Bott
  • as complexity grows, the limitations of Ansible’s (lack of a) language become more maddening — Felix Frank

Let me be clear: I’m in no way saying that puppet doesn’t have its problems (side-rant: it took way too long until Debian/stretch was properly supported by puppets’ AIO packages). I had and still have all my ups and downs with it, though in 2017 and especially since puppet v5 it works fine enough for all my use cases at a diverse set of customers. Whenever I can choose between puppet and ansible for continuous configuration management (without having any host specific restrictions like unsupported architectures, memory limitations,… that puppet wouldn’t properly support) I prefer puppet. Ansible can and does exist as a nice addition next to puppet for me, even if MCollective/Choria is available. Ansible has its use cases, just not for continuous configuration management for me.

The hardest part is to leave some tool behind once you reached the end of its scale. Once you feel like a tool takes more effort than it is worth you should take a step back and re-evaluate your choices. And quoting Felix Frank:

OTOH, if you bend either tool towards a common goal, you’re not playing to its respective strengths.

Thanks: Michael Renner and Christian Hofstaedtler for initial proof reading and feedback

15 December, 2017 10:29PM by mika

Reproducible builds folks

Reproducible Builds: Weekly report #137

Here's what happened in the Reproducible Builds effort between Sunday December 3 and Saturday December 9 2017:

Documentation update

There was more discussion on different logos being proposed for the project.

Reproducible work in other projects

Cyril Brulebois wrote about Tails' work on reproducibility

Gabriel Scherer submitted a pull request to the OCaml compiler to honour the BUILD_PATH_PREFIX_MAP environment variable.

Packages reviewed and fixed

Patches filed upstream:

  • Bernhard M. Wiedemann:
  • Eli Schwartz:
  • Foxboron
    • gopass: use SOURCE_DATE_EPOCH in Makefile
  • Jelle
    • PHP: use SOURCE_DATE_EPOCH for Build Date
  • Chris Lamb:
    • pylint - file ordering, nondeterministic data structure
    • tlsh - clarify error message (via diffoscope development)
  • Alexander "lynxis" Couzens:

Patches filed in Debian:

Patches filed in OpenSUSE:

  • Bernhard M. Wiedemann:
    • build-compare (merged) - handle .egg as .zip
    • neovim (merged) - hostname, username
    • perl (merged) - date, hostname, username
    • sendmail - date, hostname, username

Patches filed in OpenWRT:

  • Alexander "lynxis" Couzens:

Reviews of unreproducible packages

17 package reviews have been added, 31 have been updated and 43 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (13)
  • Andreas Beckmann (2)
  • Emilio Pozuelo Monfort (3)

reprotest development

  • Santiago Torres:
    • Use uname -m instead of arch.

trydiffoscope development

Version 66 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts of the previous weeks as well as new ones from:

  • Chris Lamb:
    • Parse dpkg-parsechangelog instead of hard-coding version
    • Bump Standards-Version to 4.1.2
    • flake8 formatting

reproducible-website development

tests.reproducible-builds.org

reproducible Arch Linux:

reproducible F-Droid:

Misc.

This week's edition was written by Ximin Luo, Alexander Couzens, Holger Levsen, Chris Lamb, Bernhard M. Wiedemann and Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

15 December, 2017 06:49PM

hackergotchi for Raphaël Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, November 2017

A Debian LTS logo

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 144 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
  • Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
  • Brian May did 10 hours.
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
  • Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
  • Hugo Lefeuvre did 13h.
  • Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.
  • Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
  • Ola Lundqvist did 7h.
  • Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
  • Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for November).
  • Thorsten Alteholz did 13 hours.

About external support partners

You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.

In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.

Evolution of the situation

The number of sponsored hours did not change, staying at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.

The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file 35 (we’re a bit behind in CVE triaging apparently).

Thanks to our sponsors

New sponsors are in bold.


15 December, 2017 02:15PM by Raphaël Hertzog

hackergotchi for Michal Čihař

Michal Čihař

Weblate 2.18

Weblate 2.18 has been released today. The biggest improvement is probably reviewer based workflow, but there are some other enhancements as well.

Full list of changes:

  • Extended contributor stats.
  • Improved configuration of special chars virtual keyboard.
  • Added support for DTD file format.
  • Changed keyboard shortcuts to less likely collide with browser/system ones.
  • Improved support for approved flag in Xliff files.
  • Added support for not wrapping long strings in Gettext po files.
  • Added button to copy permalink for current translation.
  • Dropped support for Django 1.10 and added support for Django 2.0.
  • Removed locking of translations while translating.
  • Added support for adding new units to monolingual translations.
  • Added support for translation workflows with dedicated reviewers.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. You can login there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

15 December, 2017 01:30PM

Dimitri John Ledkov

What does FCC Net Neutrality repeal mean to you?

Sorry, the web page you have requested is not available through your internet connection.

We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.

If you are a home broadband customer, for more information on why certain web pages are blocked, please click here.
If you are a business customer, or are trying to view this page through your company's internet connection, please click here.

15 December, 2017 09:09AM by Dimitri John Ledkov ([email protected])

hackergotchi for Urvika Gola

Urvika Gola

KubeCon + CloudNativeCon, Austin

KubeCon + CloudNativeCon North America took place in Austin, Texas from 6th to 8th December. But before that, I stumbled upon this great opportunity from the Linux Foundation which made it possible for me to attend and expand my knowledge about cloud computing, containers and all things cloud native!


I would like to thank the diversity committee members – @michellenoorali, @Kris__Nova, @jessfraz, @evonbuelow and everyone (+Wendy West!!) behind this – for going the extra mile to make it possible for me and others, and for driving such a great initiative for diversity inclusion. It gave me an opportunity to learn from experts and experience the power of Kubernetes.

After travelling 23+ hours in flight, I was able to attend the pre-conference sessions on 5th December. The day concluded with the amazing EmpowerHer evening event, where I met an amazing bunch of people! We had some great discussions and food. Thanks!

With diversity scholarship recipients at the EmpowerHer event (credits – Radhika Nair)

On 6th December, I was super excited to attend Day 1 of the conference. When I reached the venue, the Austin Convention Center, there was a huge hall with *4100* people talking about all things cloud native!

It started with an informational keynote by Dan Kohn, the Executive Director of the Cloud Native Computing Foundation. He pointed out how CNCF has grown over the year, from having 4 projects in 2016 to 14 projects in 2017, and from 1400 attendees in March 2017 to 4100 attendees in December 2017. It was really thrilling to learn about the growth and power of Kubernetes, which really inspired me to contribute towards this project.

Dan Kohn's keynote talk at KubeCon + CloudNativeCon

It was hard to choose which session to attend because there was just so much going on!! I mostly attended sessions which were at beginner & intermediate level, and missed out on the ones which required technical expertise I don't possess, yet! Curious to know more about what other tech companies are working on, I made sure I visited all the sponsor booths and learned what technology they are building. Apart from that, they had cool goodies and stickers; the place where people are labelled as sticker-person or non-sticker-person! 😀

There was a diversity luncheon on 7th December, where I had really interesting conversations with people about their challenges and stories related to technology. I made some great friends at the table - thank you for voting my story as the best story of getting into open source, and thank you Samsung for sponsoring this event.

KubeCon + CloudNativeCon was a very informative and huge event put on by the Cloud Native Computing Foundation. It was interesting to learn how cloud native technologies have expanded along with the growth of the community! Thank you to the Linux Foundation for this experience! 🙂

Keeping Cloud Native Weird!
Open bar all-attendee party! (Where I experienced my first snowfall)

 

Goodbye Austin!

15 December, 2017 07:50AM by urvikagola

hackergotchi for Sean Whitton

Sean Whitton

A second X server on vt8, running a different Debian suite, using systemd-nspawn

Two tensions

  1. Sometimes the contents of the Debian archive isn’t yet sufficient for working in a software ecosystem in which I’d like to work, and I want to use that ecosystem’s package manager which downloads the world into $HOME – e.g. stack, pip, lein and friends.

    But I can’t use such a package manager when $HOME contains my PGP subkeys and other valuable files, and my X session includes Firefox with lots of saved passwords, etc.

  2. I want to run Debian stable on my laptop for purposes of my day job – if I can’t open Emacs on a Monday morning, it’s going to be a tough week.

    But I also want to do Debian development on my laptop, and most of that’s a pain without either Debian testing or Debian unstable.

The solution

Have Propellor provision and boot a systemd-nspawn(1) container running Debian unstable, and start a window manager in that container with $DISPLAY pointing at an X server in vt8. Wooo!

In more detail:

  1. Laptop runs Debian stable. Main account is spwhitton.
  2. Achieve isolation from /home/spwhitton by creating a new user account, spw, that can’t read /home/spwhitton. Also, in X startup scripts for spwhitton, run xhost -local:.
  3. debootstrap a Debian unstable chroot into /var/lib/container/develacc.
  4. Install useful desktop things like task-british-desktop into /var/lib/container/develacc.
  5. Boot /var/lib/container/develacc as a systemd-nspawn container called develacc.
  6. dm-tool switch-to-greeter to start a new X server on vt8. Login as spw.
  7. Propellor installs a script enter-develacc which uses nsenter(1) to run commands in the develacc container. Create a further script enter-develacc-i3 which does

     /usr/local/bin/enter-develacc sh -c "cd ~spw; DISPLAY=$1 su spw -c i3"
    
  8. Finally, /home/spw/.xsession starts i3 in the chroot pointed at vt8’s X server:

     sudo /usr/local/bin/enter-develacc-i3 $DISPLAY
    
  9. Phew. May now pip install foo. And Ctrl-Alt-F7 to go back to my secure session. That session can read and write /home/spw, so I can dgit push etc.

The Propellor configuration

develaccProvisioned :: Property (HasInfo + DebianLike)
develaccProvisioned = propertyList "develacc provisioned" $ props
    & User.accountFor (User "spw")
    & Dotfiles.installedFor (User "spw")
    & User.hasDesktopGroups (User "spw")
    & withMyAcc "Sean has 'spw' group"
        (\u -> tightenTargets $ User.hasGroup u (Group "spw"))
    & withMyHome "Sean's homedir chmodded"
        (\h -> tightenTargets $ File.mode h 0O0750)
    & "/home/spw" `File.mode` 0O0770

    & "/etc/sudoers.d/spw" `File.hasContent`
        ["spw ALL=(root) NOPASSWD: /usr/local/bin/enter-develacc-i3"]
    & "/usr/local/bin/enter-develacc-i3" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "echo \"$1\" | grep -q -E \"^:[0-9.]+$\" || exit 1"
        , ""
        , "/usr/local/bin/enter-develacc sh -c \\"
        , "\t\"cd ~spw; DISPLAY=$1 su spw -c i3\""
        ]
    & "/usr/local/bin/enter-develacc-i3" `File.mode` 0O0755

    -- we have to start xss-lock outside of the container in order that it
    -- can interface with host logind
    & "/home/spw/.xsession" `File.hasContent`
        [ "if [ -e \"$HOME/local/wallpaper.png\" ]; then"
        , "    xss-lock -- i3lock -i $HOME/local/wallpaper.png &"
        , "else"
        , "    xss-lock -- i3lock -c 3f3f3f -n &"
        , "fi"
        , "sudo /usr/local/bin/enter-develacc-i3 $DISPLAY"
        ]

    & Systemd.nspawned develAccChroot
    & "/etc/network/if-up.d/develacc-resolvconf" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "cp -fL /etc/resolv.conf \\"
        ,"\t/var/lib/container/develacc/etc/resolv.conf"
        ]
    & "/etc/network/if-up.d/develacc-resolvconf" `File.mode` 0O0755
  where
    develAccChroot = Systemd.debContainer "develacc" $ props
        -- Prevent propellor passing --bind=/etc/resolv.conf which
        -- - won't work when system first boots as WLAN won't be up yet,
        --   so /etc/resolv.conf is a dangling symlink
        -- - doesn't keep /etc/resolv.conf up-to-date as I move between
        --   wireless networks
        ! Systemd.resolvConfed

        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.suiteAvailablePinned Experimental 1
        -- use host apt cacher (we assume I have that on any system with
        -- develaccProvisioned)
        & Apt.proxy "http://localhost:3142"

        & Apt.installed [ "i3"
                , "task-xfce-desktop"
                , "task-british-desktop"
                , "xss-lock"
                , "emacs"
                , "caffeine"
                , "redshift-gtk"
                , "gnome-settings-daemon"
                ]

        & Systemd.bind "/home/spw"
        -- note that this won't create /home/spw because that is
        -- bind-mounted, which is what we want
        & User.accountFor (User "spw")
        -- ensure that spw inside the container can read/write ~spw
        & scriptProperty
            [ "usermod -u $(stat --printf=\"%u\" /home/spw) spw"
            , "groupmod -g $(stat --printf=\"%g\" /home/spw) spw"
            ] `assume` NoChange
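
The Systemd.nspawned develAccChroot property corresponds to steps 3–5 of the list above. For reference, booting the chroot by hand would look roughly like this – a sketch only, not how Propellor actually invokes systemd-nspawn:

    systemd-nspawn --boot --machine=develacc \
        --directory=/var/lib/container/develacc \
        --bind=/home/spw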

Comments

I first tried using a traditional chroot. I bound lots of /dev into the chroot and then tried to start lightdm on vt8. This way, the whole X server would be within the chroot; this is in a sense more straightforward, and there is no overhead of booting a container. But lightdm refused to start.

It might have been possible to work around this, but after reading a number of reasons why chroots are less good under systemd as compared with sysvinit, I thought I’d try systemd-nspawn, which I’ve used before and rather like in general. I couldn’t get lightdm to start inside that, either, because systemd-nspawn makes it difficult to mount enough of /dev for X servers to be started. At that point I realised that I could start only the window manager inside the container, with the X server started from the host’s lightdm, and went from there.

The security isn’t that good. You shouldn’t be running anything actually untrusted, just stuff that’s semi-trusted.

  • chmod 750 /home/spwhitton, xhost -local:, and the argument validation in enter-develacc-i3 are pretty much the extent of the security here. The containerisation is to get Debian sid on a Debian stable machine, not for isolation.

  • lightdm still runs X servers as root, even though it’s been possible to run them as non-root in Debian for a few years now (there’s a wishlist bug against lightdm).

I now have a total of six installations of Debian on my laptop’s hard drive … four traditional chroots, one systemd-nspawn container and of course the host OS. But this is easy to manage with propellor!

Bugs

Screen locking is weird because logind sessions aren’t shared into the container. I have to run xss-lock in /home/spw/.xsession before entering the container, and the window manager running in the container cannot have a keybinding to lock the screen (as it does in my secure session). To lock the spw X server, I have to shut my laptop lid, or run loginctl lock-sessions from my secure session, which requires entering the root password.
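
A possible workaround – sketched here only, not something actually in place – would be a sudoers entry letting the secure session run loginctl lock-sessions without the root password (the path to loginctl may differ):

    # hypothetical /etc/sudoers.d/spwhitton-lock
    spwhitton ALL=(root) NOPASSWD: /bin/loginctl lock-sessions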

15 December, 2017 12:18AM