Planet GNU

Aggregation of development blogs from the GNU Project

March 13, 2018

FSF Blogs

Two new entries for the GNU Licenses FAQ

We recently made some new additions to our resource Frequently Asked Questions about the GNU Licenses (FAQ). The FAQ is one of our most robust articles, covering common questions for using and understanding GNU licenses. We are always looking to improve our materials, so this week we've made some fresh updates.

The first is an update to our entry on using works under the GNU General Public License (GPL) on a Web site. This entry explains that people are free to use modified versions of GPL'ed works internally without releasing source code, and that using GPL'ed code to run your site is just a special case of that. The problem was that the entry went on to explain how things are different under the GNU Affero General Public License (AGPL). That transition wasn't as clear as we would have liked, and people were often writing to ask whether the comments on the AGPL also applied to the GPL. So we've updated that entry and moved the information on the AGPL into its own entry. The updated text and new entry were both created by long-time licensing team volunteer Yoni Rabkin.

We also added a new entry on containers. The entry just reaffirms that the analysis for whether two things form a single work is unchanged by the fact that containers are involved.

We always want to keep improving our licensing materials, to make it easier for users to understand their rights under free licenses. We hope these new additions will bring greater clarity, but if there is something you are still not sure about, you can always ask us directly at [email protected]. Your question could even be the inspiration for another new entry in the FAQ someday!

Resources like the FAQ are made possible by your support. If you'd like to help, here's what you can do:

13 March, 2018 02:53PM

March 08, 2018

FSF Events

Richard Stallman - "Por una sociedad digital libre" (San Luis Potosí, Mexico)

There are many threats to freedom in the digital society, such as mass surveillance, censorship, digital handcuffs, proprietary software that controls users, and the war against sharing. The use of web services presents yet more threats to users' freedom. Finally, we have no firm right to do anything on the Internet; all of our online activities are precarious, and we can continue them only as long as companies are willing to cooperate.

This speech by Richard Stallman will be nontechnical and open to the public; all are welcome to attend.

There is no registration, but please stop by the Facultad de Ingeniería at the USLP for your free entrance ticket, up to two weeks before the event, to be sure of a seat.

Location: División de Difusión Cultural, Centro Cultural Universitario Bicentenario, Sierra Leona 550, Lomas Segunda Sección, 78210 San Luis, S.L.P., San Luis Potosí, México

Please fill out this form so that we can contact you about future events in the San Luis Potosí region.

08 March, 2018 11:45PM

FSF Blogs

Want to help the FSF? Apply to be an Outreachy intern

The Free Software Foundation (FSF) is excited to share that we'll be participating in Outreachy, a paid internship program to help those who are underrepresented in tech start contributing to free software projects.

In past years, the FSF has helped GNU projects participate in Outreachy by providing financial sponsorship for interns. This is the first year that we'll be working with an intern to help them to explore even more ways people can contribute to the fight for user freedom.

Outreachy is important to us for a number of reasons:

  • We've made growing the community a priority. The High Priority Projects list calls on the free software community to "encourage contribution by people underrepresented in the community." This is one of the ways we can help contribute to that goal.

  • Representation matters to us. Better representation in free software is something personally important to the staff of the FSF -- nearly half of us (and the entire campaigns team) are women, our deputy director is Asian American, and we've all had the experience of finding our place in the free software community.

  • Strong communities make strong movements. Most importantly, we know that a variety of experiences and viewpoints -- which come from having a broad range of activists, contributors, enthusiasts, and users -- creates a strong, robust community that will have the power needed to help user freedom triumph in a world where the status quo is proprietary.

We have three possible projects for interns. We're looking to work with someone to update the Email Self-Defense Guide (ESD). There is a wide range of ways to get involved, including design, illustration, and writing. On the more technical side, there are projects on Trisquel GNU/Linux and the Free Software Directory, building skills in MediaWiki, responsive theming, licensing and privacy, and documentation.

Interested in being an Outreachy participant with the FSF? Please read our community page!

08 March, 2018 09:12PM

Friday Free Software Directory IRC meetup: March 9th starting at 12:00 p.m. EST/17:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

Last week we were working on updating entries with cryptocurrency donation information, and we'll have to come back to that theme again soon. But this week we're back to a classic: adding new entries to the Directory. A new project leader has taken on the Directory import project, and we hope to discuss that work and get it rolling, but we also want to keep adding all the free software we love that isn't on the Directory already.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

08 March, 2018 03:49PM

March 01, 2018

GNU Spotlight with Mike Gerwitz: 25 new GNU releases!

For announcements of most new GNU releases, subscribe to the info-gnu mailing list: https://lists.gnu.org/mailman/listinfo/info-gnu.

To download: nearly all GNU software is available from https://ftp.gnu.org/gnu/, or preferably one of its mirrors from https://www.gnu.org/prep/ftp.html. You can use the URL https://ftpmirror.gnu.org/ to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

This month, we welcome Nathon Nichols as maintainer of GNU LibreJS, and Roel Jansen and Ricardo Wurmus as maintainers of the new GNU GWL.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see https://www.gnu.org/server/takeaction.html#unmaint if you'd like to help. The general page on how to help GNU is at https://www.gnu.org/help/help.html.

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see https://www.gnu.org/help/evaluation.html.

As always, please feel free to write to us at [email protected] with any GNUish questions or suggestions for future installments.

01 March, 2018 04:55PM

February 28, 2018

Friday Free Software Directory IRC cryptocurrency special meetup: March 2nd starting at 12:00 p.m. EST/17:00 UTC

Help improve the Free Software Directory by adding new entries and updating existing ones. Every Friday we meet on IRC in the #fsf channel on irc.freenode.org.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all carefully checked by FSF staff and trained volunteers.

When a user comes to the Directory, they know that everything in it is free software, has only free dependencies, and runs on a free OS. With over 16,000 entries, it is a massive repository of information about free software.

While the Directory has been and continues to be a great resource to the world for many years now, it has the potential to be a resource of even greater value. But it needs your help! And since it's a MediaWiki instance, it's easy for anyone to edit and contribute to the Directory.

Last month, the FSF was extremely fortunate to receive a massive $1 million bitcoin donation from the Pineapple Fund. This incredible generosity will power so much good work for the free software community. Cryptocurrency enthusiasts have supported our work for years, and now we want to help make it easy for them to support individual projects as well. We recently added a property to the Directory that lets users indicate that a project accepts cryptocurrency donations. We hope that this new feature will make it easy for donors to find projects that can accept such donations. But in order to do that, we need to start tagging our favorite crypto-accepting projects in the Directory. This week's meeting will focus on starting that project and hopefully designating a team leader to keep it running strong.

If you are eager to help, and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly Directory Meeting pages that everyone is welcome to contribute to before, during, and after each meeting.

28 February, 2018 09:16PM

FSF News

Free Software Foundation releases FY2016 Annual Report

BOSTON, Massachusetts, USA -- Wednesday, February 28, 2018 -- The Free Software Foundation (FSF) today published its Fiscal Year (FY) 2016 Annual Report.

The report is available in low-resolution (11.5 MB PDF) and high-resolution (207.2 MB PDF).

The Annual Report reviews the Foundation's activities, accomplishments, and financial picture from October 1, 2015 to September 30, 2016. It is the result of a full external financial audit, along with a focused study of program results. It examines the impact of the FSF's programs, and FY2016's major events, including LibrePlanet, the creation of ethical criteria for code-hosting repositories, and the expansion of the Respects Your Freedom computer hardware product certification program.

"More people and businesses are using free software than ever before," said FSF executive director John Sullivan in his introduction to the FY2016 report. "That's big news, but our most important measure of success is the support for the ideals. In that area, we have momentum on our side."

As with all of the Foundation's activities, the Annual Report was made using free software, including Inkscape, GIMP, and PDFsam, along with freely licensed fonts and images.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://my.fsf.org/donate. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942 x 17
[email protected]

28 February, 2018 07:20PM

February 27, 2018

FSF Events

Richard Stallman - "El software libre y tu libertad" (Zacatecas, Mexico)

Richard Stallman will speak about the goals and philosophy of the Free Software movement, and the status and history of the GNU operating system, which together with the kernel Linux is now used by tens of millions of users around the world.

This speech by Richard Stallman will be part of the Congreso Internacional de Software Libre FLOSS Versión 3.0. It will be nontechnical and open to the public; all are welcome to attend.

The exact location of the speech is yet to be determined.

Location: to be determined

Please fill out this form so that we can contact you about future events in the Zacatecas region.

27 February, 2018 06:05PM

February 22, 2018

parallel @ Savannah

GNU Parallel 20180222 ('Henrik') released

GNU Parallel 20180222 ('Henrik') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Haiku of the month:

Alias and vars
export them more easily
with env_parallel
-- Ole Tange

New in this release:

  • --embed makes it possible to embed GNU parallel in a shell script. This is useful if you need to distribute your script to someone who does not want to install GNU parallel.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

22 February, 2018 09:56PM by Ole Tange

February 19, 2018

GUIX Project news

Join GNU Guix through Outreachy or GSoC

We are happy to announce that for the first time this year, GNU Guix offers a three-month internship through Outreachy, the inclusion program for groups traditionally underrepresented in free software and tech. We currently propose two subjects to work on:

  1. improving the user experience for the guix package command-line tool;
  2. enhancing Guile tools for the Guix package manager.

Eligible persons should apply by March 22nd.

Guix also participates in the Google Summer of Code (GSoC), under the aegis of the GNU Project. We have collected project ideas for Guix, GuixSD, and the GNU Shepherd, covering a range of topics. The list is far from exhaustive, so feel free to bring your own!

If you are an eligible student, make sure to apply by March 27th.

If you’d like to contribute to computing freedom, Scheme, functional programming, or operating system development, now is a good time to join us. Let’s get in touch on the mailing lists and on the #guix channel on the Freenode IRC network!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

19 February, 2018 04:00PM by Ludovic Courtès

February 17, 2018

libffcall @ Savannah

GNU libffcall 2.1 is released

libffcall version 2.1 is released.

New in this release:

  • Added support for Linux/arm with PIE-enabled gcc, Solaris 11.3 on x86_64, OpenBSD 6.1, HardenedBSD.
  • Fixed a bug regarding passing of pointers on Linux/x86_64 with x32 ABI.
  • Fixed a crash in trampoline on Linux/mips64el.

17 February, 2018 12:58PM by Bruno Haible

February 13, 2018

FSF Events

Richard Stallman - "Computing, freedom and privacy" (Capstone Presentations, Burlington, VT)

Richard Stallman will be delivering the keynote speech at this year's edition of Champlain College's Capstone Presentations (2018-04-28), a conference during which seniors at the college present the results of their capstone projects.

The way digital technology is developing, it threatens our freedom, within our computers and in the internet. What are the threats? What must we change?

This speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Location: The Champlain Room (CCM 302), Champlain College, 375 Maple Street, Burlington, VT 05402

Please fill out our contact form, so that we can contact you about future events in and around Burlington.

13 February, 2018 12:31PM

February 09, 2018

Richard Stallman - "Privacy by design" (Rennes, France)

Richard Stallman will be speaking at the Hackathon CampOSV (2018-03-13–15), part of inOut 2018. His speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Registration, which can be done anonymously, is required; it will help us ensure we can accommodate all the people who wish to attend.

Location: PSA de Rennes-La Janais, Janais, La Calvenais à St Jacques de La Lande (bus line 57, Abbé Grimault; train Ter Bretagne, St Jacques de La Lande), Rennes, France

Please fill out our contact form, so that we can contact you about future events in and around Rennes.

09 February, 2018 03:40PM

Richard Stallman - Free Software Awards (LibrePlanet, Cambridge, MA)

Richard Stallman will be delivering the Free Software Foundation's Free Software Awards at LibrePlanet 2018 (2018-03-24–25).

Register for the event here.

Location: ground floor, Stata Center (aka, building 32), 32 Vassar St., Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Please fill out our contact form, so that we can contact you about future events in and around the Boston area.

09 February, 2018 03:37PM

February 07, 2018

Andy Wingo

design notes on inline caches in guile

Ahoy, programming-language tinkerfolk! Today's rambling missive chews the gnarly bones of "inline caches", in general but also with particular respect to the Guile implementation of Scheme. First, a little intro.

inline what?

Inline caches are a language implementation technique used to accelerate polymorphic dispatch. Let's dive into that.

By implementation technique, I mean that the technique applies to the language compiler and runtime, rather than to the semantics of the language itself. The effects on the language do exist though in an indirect way, in the sense that inline caches can make some operations faster and therefore more common. Eventually inline caches can affect what users expect out of a language and what kinds of programs they write.

But I'm getting ahead of myself. Polymorphic dispatch literally means "choosing based on multiple forms". Let's say your language has immutable strings -- like Java, Python, or Javascript. Let's say your language also has operator overloading, and that it uses + to concatenate strings. Well at that point you have a problem -- while you can specify a terse semantics of some core set of operations on strings (win!), you can't choose one representation of strings that will work well for all cases (lose!). If the user has a workload where they regularly build up strings by concatenating them, you will want to store strings as trees of substrings. On the other hand if they want to access codepoints by index, then you want an array. But if the codepoints are all below 256, maybe you should represent them as bytes to save space, and as 4-byte codepoints otherwise? Or maybe even as UTF-8 with a codepoint index side table.

The right representation (form) of a string depends on the myriad ways that the string might be used. The string-append operation is polymorphic, in the sense that the precise code for the operator depends on the representation of the operands -- despite the fact that the meaning of string-append is monomorphic!

Anyway, that's the problem. Before inline caches came along, there were two solutions: callouts and open-coding. Both were bad in similar ways. A callout is where the compiler generates a call to a generic runtime routine. The runtime routine will be able to handle all the myriad forms and combinations of forms of the operands. This works fine but can be a bit slow, as all callouts for a given operator (e.g. string-append) dispatch to a single routine for the whole program, so they don't get to optimize for any particular call site.

One tempting thing for compiler writers to do is to effectively inline the string-append operation into each of its call sites. This is "open-coding" (in the terminology of the early Lisp implementations like MACLISP). The advantage here is that maybe the compiler knows something about one or more of the operands, so it can eliminate some cases, effectively performing some compile-time specialization. But this is a limited technique; one could argue that the whole point of polymorphism is to allow for generic operations on generic data, so you rarely have compile-time invariants that can allow you to specialize. Open-coding of polymorphic operations instead leads to code bloat, as the string-append operation is just so many copies of the same thing.

Inline caches emerged to solve this problem. They trace their lineage back to Smalltalk 80, gained in complexity and power with Self and finally reached mass consciousness through Javascript. These languages all share the characteristic of being dynamically typed and object-oriented. When a user evaluates a statement like x = y.z, the language implementation needs to figure out where y.z is actually located. This location depends on the representation of y, which is rarely known at compile-time.

However for any given reference y.z in the source code, there is a finite set of concrete representations of y that will actually flow to that call site at run-time. Inline caches allow the language implementation to specialize the y.z access for its particular call site. For example, at some point in the evaluation of a program, y may be seen to have representation R1 or R2. For R1, the z property may be stored at offset 3 within the object's storage, and for R2 it might be at offset 4. The inline cache is a bit of specialized code that compares the type of the object being accessed against R1, in that case returning the value at offset 3; otherwise against R2, returning the value at offset 4; and otherwise falling back to a generic routine. If this isn't clear to you, Vyacheslav Egorov wrote a fine article describing and implementing the object representation optimizations enabled by inline caches.

Inline caches also serve as input data to later stages of an adaptive compiler, allowing the compiler to selectively inline (open-code) only those cases that are appropriate to values actually seen at any given call site.

but how?

The classic formulation of inline caches from Self and early V8 actually patched the code being executed. An inline cache might be allocated at address 0xcabba9e5 and the code emitted for its call-site would be jmp 0xcabba9e5. If the inline cache ended up bottoming out to the generic routine, a new inline cache would be generated that added an implementation appropriate to the newly seen "form" of the operands and the call-site. Let's say that new IC (inline cache) would have the address 0x900db334. Early versions of V8 would actually patch the machine code at the call-site to be jmp 0x900db334 instead of jmp 0xcabba9e5.

Patching machine code has a number of disadvantages, though. It is inherently target-specific: you will need different strategies to patch x86-64 and armv7 machine code. It's also expensive: you have to flush the instruction cache after the patch, which slows you down. That is, of course, if you are allowed to patch executable code at all; on many systems that's impossible. Writable machine code is a potential vulnerability if the system may be vulnerable to remote code execution.

Perhaps worst of all, though, patching machine code is not thread-safe. In the case of early Javascript, this perhaps wasn't so important; but as JS implementations gained parallel garbage collectors and JS-level parallelism via "service workers", this becomes less acceptable.

For all of these reasons, the modern take on inline caches is to implement them as a memory location that can be atomically modified. The call site is just jmp *loc, as if it were a virtual method call. Modern CPUs have "branch target buffers" that predict the target of these indirect branches with very high accuracy so that the indirect jump does not become a pipeline stall. (What does this mean in the face of the Spectre v2 vulnerabilities? Sadly, God only knows at this point. Saddest panda.)

cry, the beloved country

I am interested in ICs in the context of the Guile implementation of Scheme, but first I will make a digression. Scheme is a very monomorphic language. Yet, this monomorphism is entirely cultural. It is in no way essential. Lack of ICs in implementations has actually fed back and encouraged this monomorphism.

Let us take as an example the case of property access. If you have a pair in Scheme and you want its first field, you do (car x). But if you have a vector, you do (vector-ref x 0).

What's the reason for this nonuniformity? You could have a generic ref procedure, which when invoked as (ref x 0) would return the field in x associated with 0. Or (ref x 'foo) to return the foo property of x. It would be more orthogonal in some ways, and it's completely valid Scheme.

We don't write Scheme programs this way, though. From what I can tell, it's for two reasons: one good, and one bad.

The good reason is that saying vector-ref means more to the reader. You know more about the complexity of the operation and what side effects it might have. When you call ref, who knows? Using concrete primitives allows for better program analysis and understanding.

The bad reason is that Scheme implementations, Guile included, tend to compile (car x) to much better code than (ref x 0). Scheme implementations in practice aren't well-equipped for polymorphic data access. In fact it is standard Scheme practice to abuse the "macro" facility to manually inline code so that certain performance-sensitive operations get inlined into a closed graph of monomorphic operators with no callouts. To the extent that this is true, Scheme programmers, Scheme programs, and the Scheme language as a whole are all victims of their implementations. JavaScript does not have this problem -- performance tweaks and tuning are always a thing, to some extent, but JavaScript implementations' ability to burn away polymorphism and abstraction gives JS programs an entirely different character from Scheme programs.

it gets worse

On the most basic level, Scheme is the call-by-value lambda calculus. It's well-studied, well-understood, and eminently flexible. However the way that the syntax maps to the semantics hides a constrictive monomorphism: that the "callee" of a call refer to a lambda expression.

Concretely, in an expression like (a b), in which a is not a macro, a must evaluate to the result of a lambda expression. Perhaps by reference (e.g. (define a (lambda (x) x))), perhaps directly; but a lambda nonetheless. But what if a is actually a vector? At that point the Scheme language standard would declare that to be an error.

The semantics of Clojure, though, would allow for ((vector 'a 'b 'c) 1) to evaluate to b. Why not in Scheme? There are the same good and bad reasons as with ref. Usually, the concerns of the language implementation dominate, regardless of those of the users who generally want to write terse code. Of course in some cases the implementation concerns should dominate, but not always. Here, Scheme could be more flexible if it wanted to.
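For illustration, the Clojure behavior is easy to model in any language with general callable objects. This Python sketch (a toy, not how Clojure or any Scheme implements it) makes a vector applicable to an index:

```python
# A sketch of Clojure-style callable collections: a vector that can be
# applied to an index, so ((vector 'a 'b 'c) 1) evaluates to 'b.
class Vector:
    def __init__(self, *items):
        self.items = list(items)

    def __call__(self, i):
        return self.items[i]

v = Vector('a', 'b', 'c')
assert v(1) == 'b'
```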

what have you done for me lately

Although inline caches are not a miracle cure for performance overheads of polymorphic dispatch, they are a tool in the box. But what, precisely, can they do, both in general and for Scheme?

To my mind, they have five uses. If you can think of more, please let me know in the comments.

Firstly, they have the classic named property access optimizations as in JavaScript. These apply less to Scheme, as we don't have generic property access. Perhaps this is a deficiency of Scheme, but it's not exactly low-hanging fruit. Perhaps this would be more interesting if Guile had more generic protocols such as Racket's iteration.

Next, there are the arithmetic operators: addition, multiplication, and so on. Scheme's arithmetic is indeed polymorphic; the addition operator + can add any number of complex numbers, with a distinction between exact and inexact values. On a representation level, Guile has fixnums (small exact integers, no heap allocation), bignums (arbitrary-precision heap-allocated exact integers), fractions (exact ratios between integers), flonums (heap-allocated double-precision floating point numbers), and compnums (inexact complex numbers, internally a pair of doubles). Also in Guile, arithmetic operators are "primitive generics", meaning that they can be extended to operate on new types at runtime via GOOPS.

The usual situation though is that any particular instance of an addition operator only sees fixnums. In that case, it makes sense to only emit code for fixnums, instead of the product of all possible numeric representations. This is a clear application where inline caches can be interesting to Guile.
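As a rough illustration of that idea (a toy model, not Guile's actual machinery), an inline cache for addition can start unspecialized, commit to the fixnum fast path after the first call it sees, and fall back to the generic numeric tower when that assumption breaks:

```python
# Toy inline cache for `+`: starts "unseen", specializes on the first
# operand types it observes, falls back to the generic path otherwise.
class AddIC:
    def __init__(self):
        self.handler = self.unseen

    def unseen(self, a, b):
        if isinstance(a, int) and isinstance(b, int):
            self.handler = self.fixnum_add   # specialize to the fast path
        else:
            self.handler = self.generic_add
        return self.handler(a, b)

    def fixnum_add(self, a, b):
        if isinstance(a, int) and isinstance(b, int):
            return a + b                     # fast, monomorphic
        return self.generic_add(a, b)        # assumption broken: fall back

    def generic_add(self, a, b):
        return a + b                         # stands in for the full tower

    def __call__(self, a, b):
        return self.handler(a, b)

ic = AddIC()
assert ic(1, 2) == 3              # first call specializes to fixnums
assert ic.handler == ic.fixnum_add
assert ic(1.5, 2.5) == 4.0        # still correct via the fallback
```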

Third, there is a very specific case related to dynamic linking. Did you know that most programs compiled for GNU/Linux and related systems have inline caches in them? It's a bit weird but the "Procedure Linkage Table" (PLT) segment in ELF binaries on Linux systems is set up in a way that when e.g. libfoo.so is loaded, the dynamic linker usually doesn't eagerly resolve all of the external routines that libfoo.so uses. The first time that libfoo.so calls frobulate, it ends up calling a procedure that looks up the location of the frobulate procedure, then patches the binary code in the PLT so that the next time frobulate is called, it dispatches directly. To dynamic language people it's the weirdest thing in the world that the C/C++/everything-static universe has at its cold, cold heart a hash table and a dynamic dispatch system that it doesn't expose to any kind of user for instrumenting or introspection -- any user that's not a malware author, of course.
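The PLT mechanism can be modeled in a few lines. This toy (with a made-up frobulate routine, and a dict standing in for the dynamic linker's symbol lookup) patches the dispatch table on first use, just like lazy binding:

```python
# Toy model of PLT-style lazy binding: each imported routine starts as a
# resolver stub; the first call looks up the real function and patches
# the table slot so later calls dispatch directly.
RESOLVED = {"frobulate": lambda x: x * 2}   # stands in for the dynamic linker

class PLT:
    def __init__(self, names):
        self.slots = {n: self._make_stub(n) for n in names}

    def _make_stub(self, name):
        def stub(*args):
            real = RESOLVED[name]    # expensive lookup, done only once
            self.slots[name] = real  # patch the slot in place
            return real(*args)
        return stub

    def call(self, name, *args):
        return self.slots[name](*args)

plt = PLT(["frobulate"])
assert plt.call("frobulate", 21) == 42           # first call resolves and patches
assert plt.slots["frobulate"] is RESOLVED["frobulate"]
```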

But I digress! Guile can use ICs to lazily resolve runtime routines used by compiled Scheme code. But perhaps this isn't optimal, as the set of primitive runtime calls that Guile will embed in its output is finite, and so resolving these routines eagerly would probably be sufficient. Guile could use ICs for inter-module references as well, and these should indeed be resolved lazily; but I don't know, perhaps the current strategy of using a call-site cache for inter-module references is sufficient.

Fourthly (are you counting?), there is a general case of the former: when you see a call (a b) and you don't know what a is. If you put an inline cache in the call, instead of having to emit checks that a is a heap object and a procedure and then emit an indirect call to the procedure's code, you might be able to emit simply a check that a is the same as x, the only callee you ever saw at that site, and in that case you can emit a direct branch to the function's code instead of an indirect branch.
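A tiny toy of that check (my own sketch, not how any VM lays it out in machine code): compare the callee against the single one seen before at this site, and take a "direct" path on a hit.

```python
# Toy call-site cache: remember the one callee seen at this site. On a
# hit we could branch directly to its code; on a miss, do a generic
# indirect call and re-cache.
class CallSiteIC:
    def __init__(self):
        self.seen = None
        self.direct_hits = 0

    def call(self, f, *args):
        if f is self.seen:
            self.direct_hits += 1   # stands in for the patched direct branch
            return f(*args)
        self.seen = f               # (re)cache the callee
        return f(*args)             # generic indirect call

site = CallSiteIC()
double = lambda x: x * 2
site.call(double, 1)
assert site.call(double, 3) == 6
assert site.direct_hits == 1
```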

Here I think the argument is less strong. Modern CPUs are already very good at indirect jumps and well-predicted branches. The value of a devirtualization pass in compilers is that it makes the side effects of a virtual method call concrete, allowing for more optimizations; avoiding indirect branches is good but not necessary. On the other hand, Guile does have polymorphic callees (generic functions), and call ICs could help there. Ideally though we would need to extend the language to allow generic functions to feed back to their inline cache handlers.

Finally, ICs could allow for cheap tracepoints and breakpoints. If at every breakable location you included a jmp *loc, and the initial value of *loc was the next instruction, then you could patch individual locations with code to run there. The patched code would be responsible for saving and restoring machine state around the instrumentation.
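Here is a toy model of that scheme (in Python rather than machine code, so a mutable slot stands in for the patchable jmp *loc):

```python
# Toy of the jmp *loc idea: every breakable location dispatches through a
# mutable slot whose initial value just falls through to the next
# "instruction"; patching the slot installs instrumentation.
def fallthrough():
    pass

breakpoints = {"loc1": fallthrough}   # *loc starts as "next instruction"

def run_loc1():
    breakpoints["loc1"]()             # the jmp *loc
    return "work at loc1"

hits = []
breakpoints["loc1"] = lambda: hits.append("loc1")   # patch in a tracepoint
run_loc1()
assert hits == ["loc1"]
```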

Honestly I struggle a lot with the idea of debugging native code. GDB does the least-overhead, most-generic thing, which is patching code directly; but it runs from a separate process, and in Guile we need in-process portable debugging. The debugging use case is a clear area where you want adaptive optimization, so that you can emit debugging ceremony from the hottest code, knowing that you can fall back on some earlier tier. Perhaps Guile should bite the bullet and go this way too.

implementation plan

In Guile, monomorphic as it is in most things, probably only arithmetic is worth the trouble of inline caches, at least in the short term.

Another question is how much to specialize the inline caches to their call site. On the extreme side, each call site could have a custom calling convention: if the first operand is in register A and the second is in register B and they are expected to be fixnums, and the result goes in register C, and the continuation is the code at L, well then you generate an inline cache that specializes to all of that. No need to shuffle operands or results, no need to save the continuation (return location) on the stack.

The opposite would be to call ICs as if they were normal procedures: shuffle arguments into fixed operand registers, push a stack frame, and when the IC returns, shuffle the result into place.

Honestly I am leaning mostly towards the simple solution. I am concerned about code and heap bloat if I specialize to every last detail of a call site. Also maximum speed comes with an adaptive optimizer, and in that case simple lower tiers are best.

sanity check

To compare these impressions, I took a look at V8's current source code to see where they use ICs in practice. When I worked on V8, the compiler was entirely different -- there were two tiers, and both of them generated native code. Inline caches were everywhere, and they were gnarly; every architecture had its own implementation. Now in V8 there are two tiers, not the same as the old ones, and the lowest one is a bytecode interpreter.

As an adaptive optimizer, V8 doesn't need breakpoint ICs. It can always deoptimize back to the interpreter. In actual practice, to debug at a source location, V8 will patch the bytecode to insert a "DebugBreak" instruction, which has its own support in the interpreter. V8 also supports optimized compilation of this operation. So, no ICs needed here.

Likewise for generic type feedback, V8 records types as data rather than in the classic formulation of inline caches as in Self. I think WebKit's JavaScriptCore uses a similar strategy.

V8 does use inline caches for property access (loads and stores). Besides that, there is an inline cache used in calls, but it only records callee counts and is not used for direct call optimization.

Surprisingly, V8 doesn't even seem to use inline caches for arithmetic (any more?). Fair enough, I guess, given that JavaScript's numbers aren't very polymorphic, and even with a system with fixnums and heap floats like V8, floating-point numbers are rare in cold code.

The dynamic linking and relocation points don't apply to V8 either, as it doesn't receive binary code from the internet; it always starts from source.

twilight of the inline cache

There was a time when inline caches were recommended to solve all your VM problems, but it would seem now that their heyday is past.

ICs are still a win if you have named property access on objects whose shape you don't know at compile-time. But improvements in CPU branch target buffers mean that it's no longer imperative to use ICs to avoid indirect branches (modulo Spectre v2), and creating direct branches via code-patching has gotten more expensive and tricky on today's targets with concurrency and deep cache hierarchies.

Besides that, the type feedback component of inline caches seems to be taken over by explicit data-driven call-site caches, rather than executable inline caches, and the highest-throughput tiers of an adaptive optimizer burn away inline caches anyway. The pressure on an inline cache infrastructure now is towards simplicity and ease of type and call-count profiling, leaving the speed component to those higher tiers.

In Guile the bounded polymorphism on arithmetic combined with the need for ahead-of-time compilation means that ICs are probably a code size and execution time win, but it will take some engineering to prevent the calling convention overhead from dominating cost.

Time to experiment, then -- I'll let y'all know how it goes. Thoughts and feedback welcome from the compilerati. Until then, happy hacking :)

07 February, 2018 03:14PM by Andy Wingo

February 05, 2018

remotecontrol @ Savannah

Andy Wingo

notes from the fosdem 2018 networking devroom

Greetings, internet!

I am on my way back from FOSDEM and thought I would share with yall some impressions from talks in the Networking devroom. I didn't get to go to all that many talks -- FOSDEM's hallway track is the hottest of them all -- but I did hit a select few. Thanks to Dave Neary at Red Hat for organizing the room.

Ray Kinsella -- Intel -- The path to data-plane micro-services

The day started with a drum-beating talk that was very light on technical information.

Essentially Ray was arguing for an evolution of network function virtualization -- that instead of running VNFs on bare metal as was done in the days of yore, that people started to run them in virtual machines, and now they run them in containers -- what's next? Ray is saying that "cloud-native VNFs" are the next step.

Cloud-native VNFs would move from "greedy" VNFs that take charge of the cores that are available to them, to some kind of resource sharing. "Maybe users value flexibility over performance", says Ray. It's the Care Bears approach to networking: (resource) sharing is caring.

In practice he proposed two ways that VNFs can map to cores and cards.

One was in-process sharing, which, if I understood him properly, means running network functions as nodes within a VPP process. Basically in this case VPP or DPDK is the scheduler and multiplexes two or more network functions in one process.

The other was letting Linux schedule separate processes. In networking, we don't usually do it this way: we run network functions on dedicated cores on which nothing else runs. Ray was suggesting that perhaps network functions could be more like "normal" Linux services. Ray doesn't know if Linux scheduling will work in practice. Also it might mean allowing DPDK to work with 4K pages instead of the 2M hugepages it currently requires. This obviously has the potential for more latency hazards and would need some tighter engineering, and ultimately would have fewer guarantees than the "greedy" approach.

Interesting side things I noticed:

  • All the diagrams show Kubernetes managing CPU node allocation and interface assignment. I guess in marketing diagrams, Kubernetes has completely replaced OpenStack.

  • One slide showed guest VNFs differentiated between "virtual network functions" and "socket-based applications", the latter ones being the legacy services that use kernel APIs. It's a useful terminology difference.

  • The talk identifies user-space networking with DPDK (only!).

Finally, I note that Conway's law is obviously reflected in the performance overheads: because there are organizational isolations between dev teams, vendors, and users, there are big technical barriers between them too. The least-overhead forms of resource sharing are also those with the highest technical consistency and integration (nodes in a single VPP instance).

Magnus Karlsson -- Intel -- AF_XDP

This was a talk about getting good throughput from the NIC to userspace, but by using some kernel facilities. The idea is to get the kernel to set up the NIC and virtualize the transmit and receive ring buffers, but to let the NIC's DMA'd packets go directly to userspace.

The performance goal is 40Gbps for thousand-byte packets, or 25 Gbps for traffic with only the smallest packets (64 bytes). The fast path does "zero copy" on the packets if the hardware has the capability to steer the subset of traffic associated with the AF_XDP socket to that particular process.

The AF_XDP project builds on XDP, a newish thing where a little kind of bytecode can run on the kernel or possibly on the NIC. One of the bytecode commands (REDIRECT) causes packets to be forwarded to user-space instead of handled by the kernel's otherwise heavyweight networking stack. AF_XDP is the bridge between XDP on the kernel side and an interface to user-space using sockets (as opposed to e.g. AF_INET). The performance goal was to be within 10% or so of DPDK's raw user-space-only performance.

The benefits of AF_XDP over the current situation would be that you have just one device driver, in the kernel, rather than having to have one driver in the kernel (which you have to have anyway) and one in user-space (for speed). Also, with the kernel involved, there is a possibility for better isolation between different processes or containers, when compared with raw PCI access from user-space.

AF_XDP is what was previously known as AF_PACKET v4, and its numbers are looking somewhat OK. Though it's not upstream yet, it might be interesting to get a Snabb driver here.

I would note that kernel-userspace cooperation is a bit of a theme these days. There are other points of potential cooperation or common domain sharing, storage being an obvious one. However I heard more than once this weekend the kind of "I don't know, that area of the kernel has a different culture" sort of concern as that highlighted by Daniel Vetter in his recent LCA talk.

François-Frédéric Ozog -- Linaro -- Userland Network I/O

This talk is hard to summarize. Like the previous one, it's again about getting packets to userspace with some support from the kernel, but the speaker went really deep and I'm not quite sure what in the talk is new and what is known.

François-Frédéric is working on a new set of abstractions for relating the kernel and user-space. He works on OpenDataPlane (ODP), which is kinda like DPDK in some ways. ARM seems to be a big target for his work; that x86-64 is also a target goes without saying.

His problem statement was, how should we enable fast userland network I/O, without duplicating drivers?

François-Frédéric was a bit negative on AF_XDP because (he says) it is so focused on packets that it neglects other kinds of devices with similar needs, such as crypto accelerators. Apparently the challenge here is accelerating a single large IPsec tunnel -- because the cryptographic operations are serialized, you need good single-core performance, and making use of hardware accelerators seems necessary right now for even a single 10Gbps stream. (If you had many tunnels, you could parallelize, but that's not the case here.)

He was also a bit skeptical about standardizing on the "packet array I/O model" which AF_XDP and most NICS use. What he means here is that most current NICs move packets to and from main memory with the help of a "descriptor array" ring buffer that holds pointers to packets. A transmit array stores packets ready to transmit; a receive array stores maximum-sized packet buffers ready to be filled by the NIC. The packet data itself is somewhere else in memory; the descriptor only points to it. When a new packet is received, the NIC fills the corresponding packet buffer and then updates the "descriptor array" to point to the newly available packet. This requires at least two memory writes from the NIC to memory: at least one to write the packet data (one per 64 bytes of packet data), and one to update the DMA descriptor with the packet length and possible other metadata.

Although these writes go directly to cache, there's a limit to the number of DMA operations that can happen per second, and with 100Gbps cards, we can't afford to make one such transaction per packet.

François-Frédéric promoted an alternative I/O model for high-throughput use cases: the "tape I/O model", where packets are just written back-to-back in a uniform array of memory. Every so often a block of memory containing some number of packets is made available to user-space. This has the advantage of packing in more packets per memory block, as there's no wasted space between packets. This increases cache density and decreases DMA transaction count for transferring packet data, as we can use each 64-byte DMA write to its fullest. Additionally there's no side table of descriptors to update, saving a DMA write there.
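To see why the tape model saves DMA transactions, here is a back-of-the-envelope model (my own toy accounting, assuming 64-byte DMA writes and one descriptor write per packet, which is a simplification of real hardware):

```python
# Toy accounting of DMA writes per batch of received packets, contrasting
# the descriptor array model and the tape model described above.
def descriptor_model(packets):
    # One 64-byte DMA write per cache line of packet data, plus one
    # descriptor update per packet.
    data_writes = sum((len(p) + 63) // 64 for p in packets)
    descriptor_writes = len(packets)
    return data_writes + descriptor_writes

def tape_model(packets):
    # Packets packed back-to-back: no gaps, no descriptor side table.
    total = sum(len(p) for p in packets)
    return (total + 63) // 64

small = [b"x" * 64] * 1000
assert descriptor_model(small) == 2000   # one data line + one descriptor each
assert tape_model(small) == 1000         # data lines only
```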

Apparently the only cards currently capable of 100 Gbps traffic, the Chelsio and Netcope cards, use the "tape I/O model".

Incidentally, the DMA transfer limit isn't the only constraint. Something I hadn't fully appreciated before was memory write bandwidth. Before, I had thought that because the NIC would transfer in packet data directly to cache, that this wouldn't necessarily cause any write traffic to RAM. Apparently that's not the case. Later over drinks (thanks to Red Hat's networking group for organizing), François-Frédéric asserted that the DMA transfers would eventually use up DDR4 bandwidth as well.

A NIC-to-RAM DMA transaction will write one cache line (usually 64 bytes) to the socket's last-level cache. This write will evict whatever was there before. As far as I can tell, there are three cases of interest here. The best case is where the evicted cache line is from a previous DMA transfer to the same address. In that case it's modified in the cache and not yet flushed to main memory, and we can just update the cache instead of flushing to RAM. (Do I misunderstand the way caches work here? Do let me know.)

However if the evicted cache line is from some other address, we might have to flush to RAM if the cache line is dirty. That causes memory write traffic. But if the cache line is clean, that means it was probably loaded as part of a memory read operation; in that case we're evicting part of the network function's working set, which will later cause memory read traffic as the data gets loaded in again, and write traffic to flush out the DMA'd packet data cache line.

François-Frédéric simplified the whole thing to equate packet bandwidth with memory write bandwidth, that yes, the packet goes directly to cache but it is also written to RAM. I can't convince myself that that's the case for all packets, but I need to look more into this.

Of course the cache pressure and the memory traffic is worse if the packet data is less compact in memory; and worse still if there is any need to copy data. Ultimately, processing small packets at 100Gbps is still a huge challenge for user-space networking, and it's no wonder that there are only a couple devices on the market that can do it reliably, not that I've seen either of them operate first-hand :)

Talking with Snabb's Luke Gorrie later on, he thought that it could be that we can still stretch the packet array I/O model for a while, given that PCIe gen4 is coming soon, which will increase the DMA transaction rate. So that's a possibility to keep in mind.

At the same time, apparently there are some "coherent interconnects" coming too which will allow the NIC's memory to be mapped into the "normal" address space available to the CPU. In this model, instead of having the NIC transfer packets to the CPU, the NIC's memory will be directly addressable from the CPU, as if it were part of RAM. The latency to pull data in from the NIC to cache is expected to be slightly longer than a RAM access; for comparison, RAM access takes about 70 nanoseconds.

For a user-space networking workload, coherent interconnects don't change much. You still need to get the packet data into cache. True, you do avoid the writeback to main memory, as the packet is already in addressable memory before it's in cache. But, if it's possible to keep the packet on the NIC -- like maybe you are able to add some kind of inline classifier on the NIC that could directly shunt a packet towards an on-board IPSec accelerator -- in that case you could avoid a lot of memory transfer. That appears to be the driving factor for coherent interconnects.

At some point in François-Frédéric's talk, my brain just died. I didn't quite understand all the complexities that he was taking into account. Later, after he kindly took the time to dispel some more of my ignorance, I understood more of it, though not yet all :) The concrete "deliverable" of the talk was a model for kernel modules and user-space drivers that uses the paradigms he was promoting. It's a work in progress from Linaro's networking group, with some support from NIC vendors and CPU manufacturers.

Luke Gorrie and Asumu Takikawa -- SnabbCo and Igalia -- How to write your own NIC driver, and why

This talk had the most magnificent beginning: a sort of "repent now ye sinners" sermon from Luke Gorrie, a seasoned veteran of software networking. Luke started by describing the path of righteousness leading to "driver heaven", a world in which all vendors have publicly accessible datasheets which parsimoniously describe what you need to get packets flowing. In this blessed land it's easy to write drivers, and for that reason there are many of them. Developers choose a driver based on their needs, or they write one themselves if their needs are quite specific.

But there is another path, says Luke, that of "driver hell": a world of wickedness and proprietary datasheets, where even when you buy the hardware, you can't program it unless you're buying a hundred thousand units, and even then you are smitten with the cursed non-disclosure agreements. In this inferno, only a vendor is practically empowered to write drivers, but their poor driver developers are only incentivized to get the driver out the door deployed on all nine architectural circles of driver hell. So they include some kind of circle-of-hell abstraction layer, resulting in a hundred thousand lines of code like a tangled frozen beard. We all saw the abyss and repented.

Luke described the process that led to Mellanox releasing the specification for its ConnectX line of cards, something that was warmly appreciated by the entire audience, users and driver developers included. Wonderful stuff.

My Igalia colleague Asumu Takikawa took the last half of the presentation, showing some code for the driver for the Intel i210, i350, and 82599 cards. For more on that, I recommend his recent blog post on user-space driver development. It was truly a ray of sunshine in dark, dark Brussels.

Ole Trøan -- Cisco -- Fast dataplanes with VPP

This talk was a delightful introduction to VPP, but without all of the marketing; the sort of talk that makes FOSDEM worthwhile. Usually at more commercial, vendory events, you can't really get close to the technical people unless you have a vendor relationship: they are surrounded by a phalanx of salesfolk. But in FOSDEM it is clear that we are all comrades out on the open source networking front.

The speaker expressed great personal pleasure at having been able to work on open source software; his relief was palpable. A nice moment.

He also had some kind words about Snabb, too, saying at one point that "of course you can do it on snabb as well -- Snabb and VPP are quite similar in their approach to life". He trolled the horrible complexity diagrams of many "NFV" stacks whose components reflect the org charts that produce them more than the needs of the network functions in question (service chaining anyone?).

He did get to drop some numbers as well, which I found interesting. One is that recently they have been working on carrier-grade NAT, aiming for 6 terabits per second. Those are pretty big boxes and I hope they are getting paid appropriately for that :) For context he said that for a 4-unit server, these days you can build one that does a little less than a terabit per second. I assume that's with ten dual-port 40Gbps cards, and I would guess to power that you'd need around 40 cores or so, split between two sockets.

Finally, he finished with a long example on lightweight 4-over-6. Incidentally this is the same network function my group at Igalia has been building in Snabb over the last couple years, so it was interesting to see the comparison. I enjoyed his commentary that although all of these technologies (carrier-grade NAT, MAP, lightweight 4-over-6) have the ostensible goal of keeping IPv4 running, in reality "we're day by day making IPv4 work worse", mainly by breaking the assumption that just because you get traffic from port P on IP M, you can send traffic to M from another port or another protocol and have it reach the target.

All of these technologies also have problems with IPv4 fragmentation. Getting it right is possible but expensive. Instead, Ole mentions that he and a cross-vendor cabal of dataplane people have a "dark RFC" in the works to deprecate IPv4 fragmentation entirely :)

OK that's it. If I get around to writing up the couple of interesting Java talks I went to (I know right?) I'll let yall know. Happy hacking!

05 February, 2018 05:22PM by Andy Wingo

February 02, 2018

freeipmi @ Savannah

FreeIPMI 1.6.1 Released

https://ftp.gnu.org/gnu/freeipmi/freeipmi-1.6.1.tar.gz

FreeIPMI 1.6.1 - 02/02/18
-------------------------
o Add IPv6 hostname support to FreeIPMI, all of FreeIPMI can now
take IPv6 addresses as inputs to "host" parameters, options, or
inputs.
o Support significant portions of IPMI IPv6 configuration in
libfreeipmi.
o Add --no-session option in ipmi-raw.
o Add SDR cache options to ipmi-config.
o Support legacy -f short option for --flush-cache and -Q short
option for --quiet-cache. Backwards compatible with tools that
supported them before.
o In ipmi-oem, support Gigabyte get-bmc-services and set-bmc-
services.
o Various performance improvements:
- Remove excessive calls to secure_memset to clear memory.
- Remove excessive memsets and clears of data.
- Remove unnecessary "double input checks".
- Remove expensive input checks in libfreeipmi fiid library.
Fallout from this may include FIID_ERR_FIELD_NOT_FOUND errors
in different fiid functions.
- Remove unnecessary input checks in libfreeipmi fiid library.
- Add recent 'lookups' of fields in fiid library to internal
cache.
o Various minor fixes/improvements
- Update libfreeipmi core API to use poll() instead of
select(), to avoid issues with applications with a high
number of threads.

02 February, 2018 11:47PM by Albert Chu

January 30, 2018

FSF News

Free Software Foundation receives $1 million donation from Pineapple Fund

BOSTON, Massachusetts, USA -- Tuesday, January 30, 2018 -- The Free Software Foundation (FSF) announced it has received a record-breaking charitable contribution of 91.45 Bitcoin from the Pineapple Fund, valued at $1 million at the time of the donation. This gift is a testament to the importance of free software, computer user freedom, and digital rights when technology is interwoven with daily life.

"Free software is more than open source; it is a movement that encourages community collaboration and protects users' freedom," wrote Pine, the Pineapple Fund's founder. "The Free Software Foundation does amazing work, and I'm certain the funds will be put to good use."

"The FSF is honored to receive this generous donation from the Pineapple Fund in service of the free software movement," said John Sullivan, FSF executive director. "We will use it to further empower free software activists and developers around the world. Now is a critical time for computer user freedom, and this gift will make a tremendous difference in our ability, as a movement, to meet the challenges."

The anonymous Pineapple Fund, created to give away $86 million worth of Bitcoin to charities and social causes, "is about making bold and smart bets that hopefully impact everyone in our world."

The FSF believes free software does impact everyone, and this gift from the Pineapple Fund will be used to:

  • Increase innovation and the number of new projects in high priority areas of free software development, including the GNU Project;

  • Expand the FSF's licensing, compliance, and hardware device certification programs;

  • Bring the free software movement to new audiences;

  • Contribute to the long-term stability of the organization.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

John Sullivan
Executive Director
Free Software Foundation
+1 (617) 542 5942
[email protected]

30 January, 2018 04:25PM

January 29, 2018

GUIX Project news

Meet Guix at FOSDEM

GNU Guix will be present at FOSDEM in the coming days with a couple of talks:

We are also organizing a one-day Guix workshop where contributors and enthusiasts will meet, thanks to the efforts of Manolis Ragkousis and Pjotr Prins. The workshop takes place on Friday Feb. 2nd at the Institute of Cultural Affairs (ICAB) in Brussels. The morning will be dedicated to talks—among other things, we are happy to welcome Eelco Dolstra, the founder of Nix, without which Guix would not exist today. The afternoon will be a more informal discussion and hacking session.

Attendance to the workshop is free and open to everyone, though you are invited to register. Check out the workshop’s wiki page for the program, registration, and practical info. Hope to see you in Brussels!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

29 January, 2018 03:00PM by Ludovic Courtès

January 28, 2018

dico @ Savannah

Version 2.5

Version 2.5 of GNU dico is available for download. Main new feature in this release: support for four-column index files in dict.org format.

Previous versions of dico supported only three-column index files, which is the most common format. However, some dictionaries have four-column index files. When trying to load such dictionaries with prior versions of GNU dico, you would get the error message "X.index:Y: malformed entry". The present version fixes this problem.
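For a concrete picture of the difference, here is a hedged Python sketch (illustrative only, not GNU dico's actual parser) of reading dict.org-format index lines, which are tab-separated and use base-64 digits for the offset and length columns:

```python
# Illustrative sketch of parsing dict.org-format index lines; not GNU dico's
# actual implementation. Lines are tab-separated:
#   headword \t offset \t length [\t extra-column]
# where offset and length are numbers written in base-64 digits.
B64 = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
       "abcdefghijklmnopqrstuvwxyz0123456789+/")
DIGIT = {c: i for i, c in enumerate(B64)}

def b64_number(s):
    """Decode a base-64 number, most significant digit first."""
    n = 0
    for c in s:
        n = n * 64 + DIGIT[c]
    return n

def parse_index_line(line):
    """Accept both three- and four-column index entries."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) not in (3, 4):
        raise ValueError("malformed entry: %r" % line)
    word = fields[0]
    offset, length = b64_number(fields[1]), b64_number(fields[2])
    extra = fields[3] if len(fields) == 4 else None
    return (word, offset, length, extra)

print(parse_index_line("apple\tBA\tM"))  # → ('apple', 64, 12, None)
```

A three-column entry parses the same way; the optional fourth column is simply carried along.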

28 January, 2018 02:49PM by Sergey Poznyakoff

January 27, 2018

GNUnet News

gnURL 7.58.0

I'm no longer publishing release announcements on gnunet.org. Read the full gnURL 7.58.0 release announcement on our developer mailing list and on info-gnu once my email has passed moderation.

27 January, 2018 03:48PM by ng0

January 26, 2018

Lonely Cactus

The Ridiculous Gopher Project: BBSs and ZModem

In the previous entry, I talked about the ridiculous Gopher project, in which I might try to make a presence for myself in Gopher Space.

So my first thought was that I would have a blog and a web gallery over gopher.

The blog entries are a very simple prospect, since they need to be plain text. I don't really like the block paragraph style, but I did sketch out a conversion from Markdown to troff to text that does some nice formatting.
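As an illustration of the formatting idea (not the actual Markdown-to-troff pipeline described above), a minimal Python sketch might reflow paragraphs to a fixed width with indented first lines instead of block style:

```python
import textwrap

def gopher_format(text, width=67):
    """Illustrative plain-text formatter for gopher: reflow each
    paragraph to a fixed width, indenting the first line rather than
    using block-paragraph style. A real pipeline might go through
    troff instead; this is just a sketch of the output style."""
    out = []
    for para in text.split("\n\n"):
        para = " ".join(para.split())  # collapse internal whitespace
        out.append(textwrap.fill(para, width=width,
                                 initial_indent="   "))
    return "\n\n".join(out)

print(gopher_format("One paragraph.\n\nAnother, slightly longer paragraph."))
```

Gopher clients traditionally show text at around 67-70 columns, hence the default width.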

The directory of the blog entries is a bit more complicated. I had an idea for a CGI that handles date-based directory structures and indices, with a parallel keyword-based directory structure and index.

But anyway, I got stuck on my first step, and fell down a rabbit hole, as per usual.

So I thought to myself: what if I wanted to have comments for my gopher blog? How would that work? What technology would I use? Well, in the original Gopher spec, there is provision for a Telnet session. I thought I could make a tiny Telnet-based BBS with just enough functionality to let one leave a comment or read comments.

So I went on the internet to find a tiny BBS to examine, and found just about the simplest BBS one could imagine. It is called Puppy BBS.
I found it here: http://cd.textfiles.com/simtel/simtel20/MSDOS/FIDO/.index.html

So there's this California-based guy named Tom Jennings who does a lot of stuff at the intersection of tech and art. Once upon a time he was a driving force behind FidoNet, which was a pre-internet community of dial-up BBSs. He's done many cool things since FidoNet.

Check out his cool art at http://www.sensitiveresearch.com/

I guess Tom wrote PuppyBBS as a reaction to how complicated BBSs had become back in the late 1980s.

So I thought, hey, does this thing still build and run? Well, not exactly. First off, it uses a MS-DOS C library that handles serial comms, which, of course, doesn't work on Microsoft Windows 10 or on Linux. And even if that library did still exist, I couldn't try it even if I wanted to. I mean, if I wanted to try it I would need two landlines and two dial-up modems so I could call myself. I do have a dial-up modem in a box in the garage, but, I'm not going to get another landline for this nonsense.

Anyway, I e-mailed Tom and asked if I could hack it up and post it on GitHub, and he said okay. And so that's what this is: PuppyBBS.

Puppy BBS has four functions:
  • write messages
  • read messages
  • upload files
  • download files
From there, I started writing a Telnet-based BBS, which I called PupperBBS. And that went pretty well. It took very little time to get the message reading and writing running. I was on a roll, so I decided I would quickly tackle the other two functions PuppyBBS had: uploading and downloading files. And that was where it all got complicated.

PuppyBBS used XModem for file transfer, because it was the '80s and that was what people did. But I thought ZModem, which was faster and more reliable, would be the way to go. So I figured I'd just link a ZModem library into the BBS and be ready to go.

But I couldn't find a ZModem library that was ready to go. All ZModem code seems to be derived from lrzsz, so I downloaded the lrzsz code and made it into a library. To do that, I had to understand the code, so I tried to read it. That code is so very 1980s. It is terrible, so I had to fix it.

(Let the record show that by "terrible" I mean terrible from a reader's point of view.  It was written with so much global state and no indication of which procedures modify that state.  There is no isolation, no separation of concerns.  As a practical matter, it works great.)

And that led to a full week of untangling it all, which is what became the libzmodem library. Now, my libzmodem isn't really much more readable than the original code, but at least it makes more sense to me.

So I linked libzmodem into PupperBBS to add ZModem send and receive functionality. Now to test it. I set up PupperBBS, telnetted into the system, got to the BBS, and tried to upload and download some files. It became apparent that for ZModem to work, the telnet program itself has to have some partnership with rz and sz, launching one or the other as appropriate.

Since this had to have worked in the past, some internet searches led me to zssh on SourceForge. zssh includes a telnet program with built-in ZModem send and receive functionality. Unfortunately, it wasn't packaged on Fedora and didn't compile out of the box, so I started trying to understand it and fix it.

So, anyway to summarize:
  1. Let's do a Gopher blog!
  2. How do you do comments?
  3. Telnet works on Gopher!
  4. Let's make a BBS!
  5. BBSs do ZModem.
  6. Let's make a ZModem library.
  7. Let's make a Telnet client that does ZModem.
And this is why I never finish anything.

26 January, 2018 05:38AM by Mike ([email protected])

January 25, 2018

GUIX Project news

aarch64 build machines donated

Good news! We got a present for our build farm in the form of two SoftIron OverDrive 1000 aarch64 machines donated by ARM Holdings. One of them is already running behind our new build farm, which distributes binaries from https://berlin.guixsd.org, and the other one should be operational soon.

The OverDrive has 4 cores and 8 GiB of RAM. It comes in a fancy VCR-style case, which looks even more fancy with the obligatory stickers:

An OverDrive 1000 with its fancy Guix stickers.

A few months ago we reported on the status of the aarch64 port, which was already looking good. The latest releases include a pre-built binary tarball of Guix for aarch64.

Until now though, the project’s official build farms were not building aarch64 binaries. Consequently, Guix on aarch64 would build everything from source. We are glad that this is about to be fixed. We will need to expand our build capacity for this architecture and for ARMv7 as well, and you too can help!

Thanks to ARM Holdings and in particular to Richard Henwood for contributing to our build infrastructure!

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64, and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

25 January, 2018 09:00PM by Ludovic Courtès

Christopher Allan Webber

On standards divisions and collaboration (or: Why can't the decentralized social web people just get along?)

A couple of days ago I wrote about ActivityPub becoming a W3C Recommendation. This was one output of the Social Working Group, and that blogpost was mostly about my direct work on ActivityPub. But the Social Working Group did more than ActivityPub: on the same day it also published WebSub, a useful piece of technology in its own right which plays a significant role in ActivityPub's history (though it is not used by ActivityPub itself), along with several documents which are not compatible with ActivityPub at all and appear to play the same role. To outsiders this may appear confusing, but there are reasons, which I will go into in this post.

On that note, friend and Social Working Group co-participant Amy Guy just wrote a reasonable and (to my own feelings) highly relatable, frustrated blogpost (go ahead and read it before you finish this blogpost) about the kinds of comments you see with members of different decentralized social web communities sniping at each other. Yes, reading the comments is always a precarious idea, particularly on tech news sites. But what's especially frustrating is seeing comments that we either:

These comments seem to be being made by people who were not part of the standards process, so as someone who spent three years of their life on it, let me give the perspective of someone who was actually there.

So yes, first of all, it's true that in the end we pushed out two "stacks" that were mostly incompatible. These would more or less be the "restful + linked data" stack, which is ActivityPub and Linked Data Notifications using ActivityStreams as its core (but extensible) vocabulary (which are directly interoperable, and use the same "inbox" property for delivery), and the "Indieweb stack", which is Micropub and Webmention. (And there's also WebSub, which is not really either specifically part of one or the other of those "stacks" but which can be used with either, and is of such historical significance to federation that we wanted it to be standardized.) Amy Guy did a good job of mapping the landscape in her Social Web Protocols document.

Gosh, two stacks! It does kind of look confusing, if you weren't in the group, to see how this could have happened. Going through meeting logs is boring (though the meeting logs are up there if you feel like it) so here's what happened, as I remember it.

First of all, we didn't just start out with two stacks, we started out with three. At the beginning we had the linked data folks, the RESTful "just speak plain JSON" development type folks, and the Indieweb folks. Nobody really saw eye to eye at first, but eventually we managed to reach some convergence (though not as much as I would have liked). In fact we managed to merge two approaches entirely: ActivityPub is a RESTful API that can be read and interpreted as just JSON, but thanks to JSON-LD you have the power of linked data for extensions or maybe because you really like doing fancy RDF the-web-is-a-graph things. And ActivityPub uses the very same inbox of Linked Data Notifications, and is directly interoperable. Things did not start out as directly interoperable, but Sarven Capadisli and Amy Guy (who was not yet a co-author of ActivityPub) were willing to sit down and discuss and work out the details, and eventually we got there.
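To make the "plain JSON that is also linked data" point concrete, here is a minimal, hand-written ActivityStreams 2.0 object (the URLs are hypothetical examples, not from any real server); plain-JSON consumers just read the keys, while JSON-LD consumers can use the @context for extensions:

```python
import json

# A minimal ActivityStreams 2.0 "Create" activity, written by hand for
# illustration. The @context line is what lets linked-data tooling
# interpret the same document as RDF; plain-JSON code can ignore it.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alyssa",  # hypothetical URL
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!"
    }
}

# The same document round-trips as ordinary JSON.
doc = json.dumps(activity)
parsed = json.loads(doc)
print(parsed["object"]["type"])  # → Note
```

Extensions work by adding terms to the @context, so a plain-JSON reader and a linked-data reader can consume the very same bytes.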

Merging the RESTful + Linked Data stuff with the Indieweb stuff was a bit more of a challenge, but for a while it looked like even that might completely happen. For those that don't know, Linked Data type people and Indieweb type people have, for whatever reason, historically been at each others' throats despite (or perhaps because of) the enormous similarity between the kind of work that they're doing (the main disagreements being "should we treat everything like a graph" and "are namespaces a good idea" and also, let's be honest, just historical grudges). But Amy Guy long made the case in the group that actually the divisions between the groups were very shallow and that with just a few tweaks we could actually bridge the gap (this was the real origin of the Social Web Protocols document, which though it eventually became a document of the different things we produced, was originally an analysis of how they weren't so different at all). At the face to face summit in Paris (which I did not attend, but ActivityPub co-editor Jessica Tallon did) there was apparently an energetic meeting over a meal where I'm told that Jessica Tallon and Aaron Parecki (editor of Micropub and Webmention) hit some kind of epiphany and realized yes, by god, we can actually merge these approaches together. Attending remotely, I wasn't there for the meal, but when everyone returned it was apparent that something had changed: the conversation had shifted towards reconciling differences. Between the Paris face to face meeting and the next one, energy was high and discussions active on how to bring things together. Aaron even began to consider that maybe Micropub (and/or? I forget if it was just one) Webmention could support ActivityStreams, since ActivityStreams already had an extension mechanism worked out. At the next face to face meeting, things started out optimistic as well... and then suddenly, within the span of minutes, the whole idea of merging the specs fell apart. 
In fact it happened so quickly that I'm not even entirely sure what did it, but I think it was over two things: one, Micropub handled an update of fields where you could add or remove a specific element from a list (without giving the entire changed list as a replacement value) and it wasn't obvious how it could be done with ActivityPub, and two, something like "well we already have a whole vocabulary in Microformats anyway, we might as well stick with it." (I could have the details wrong here a bit... again, it happened very fast, and I remember in the next break trying to figure out whether or not things did just fall apart or not.)

With the dream of Linked Data and Indieweb reconciliation given up on, we decided that at least we could move forward in parallel without clobbering, and in fact while actively supporting, each other. I think, at this point, this was actually the best decision possible, and in a sense it was even very fruitful. At this point, not trying to reconcile and compromise on a single spec, the authors and editors of the differing specifications still spent much time collaborating as the specifications moved forward. Aaron and other Indieweb folks provided plenty of useful feedback for ActivityPub and the ActivityPub folks provided plenty of useful feedback for the Indieweb folks, and I'd say all our specifications were improved greatly by this "friendly treaty" of sorts. If we could not unify, we could at least cooperate, and we did.

I'd even say that we came to a good amount of mutual understanding and respect between these groups within the Social Web Working Group. People approached these decentralization challenges with different building blocks, assumptions, principles, and goals... hence at some point they've encountered approaches that didn't quite jive with their "world view" on how to do it right (TM). And that's okay! Even there, we have plenty of space for cooperation and can learn from each other.

This is also true with the continuation of the Social Web Working Group, which is the SocialCG, where the two co-chairs are myself and Aaron Parecki, who are both editors of specifications of the conflicting "stacks". Within the Social Web Community Group we have a philosophy that our scope is to work on collaboration on social web protocols. If you use a different protocol than another person, you probably can still collaborate a lot, because there's a lot of overlap between the problem domains between social web protocols. Outside the SocialWG and SocialCG it still seems to be a different story, and sadly linked data people and Indieweb people seem to still show up on each others' threads to go after each other. I consider that a disappointment... I wish the external world would reflect the kind of sense of mutual understanding we got in the SocialWG and SocialCG.

Speaking of best attempts at bringing unity, my main goal in participating in the SocialWG, and my entire purpose in showing up in the first place, was always to bring unity. The first task I performed over the course of the first few months at the Social Working Group was to try to bring all of the existing distributed social networks to participate in the SocialWG calls. Even at that time, I was worried about the situation with a "fractured federation"... MediaGoblin was about to implement its own federation code, and I was unhappy that we had a bunch of libre distributed social network projects but none of them could talk to each other, and no matter what we chose we would just end up contributing to the problem. I was called out as naive (which I suppose, in retrospect, was accurate) for a belief that if we could just get everyone around the table we could reconcile our differences, agree on a standard that everyone could share in, and maybe we'd start singing Kumbaya or something. And yes, I was naive, but I did reach out to everyone I could think of (if I missed you somehow, I'm sorry): Diaspora, GNU Social, Pump.io (well, they were already there), Hubzilla, Friendica, Owncloud (later Nextcloud)... etc etc (Mastodon and some others didn't even exist at this point, though we would connect later)... I figured this was our one chance to finally get everyone on board and collaborate. We did have Diaspora and Owncloud participants for a time (and Nextcloud has even begun implementing ActivityPub), and plenty of groups said they'd like to participate, but the main barrier was that the standards process took a lot of time (true story), which not everyone was able to allocate. But we did our best to incorporate and respond to feedback whenever we got it. We did detailed analysis on what the major social networks were providing and what we needed to cover as a result. What I'm trying to say is: ActivityPub was my best attempt to bring unity to this space.
It grew out of direct experiences from developing previous standards between OStatus, the Pump API, and over a decade of developing social network protocols and software, including by people who pioneered much of the work in that territory. We tried through long and open comment periods to reconcile the needs of various groups and potential users. Maybe we didn't always succeed... but we did try, and always gave it our best. Maybe ActivityPub will succeed in that role or maybe it won't... I'm hopeful, but time is the true test.

Speaking of attempting to bring unity to the different decentralized social network projects, probably the main thing that disappoints me is the amount of strife we have between these different projects. For example, there are various threads pitting Mastodon vs GNU Social. In fact, Mastodon's lead developer and GNU Social's lead developer get along just fine... it's various members of the communities of each that tend to (sounds familiar?) be hostile.

Here's something interesting: decentralized social web initiatives haven't yet faced an all-out attack from what would presumably be their natural enemies in the centralized social web: Facebook, Twitter, et al. I mean, there have been some aggressions, in the sense that bridging projects letting users mirror their timelines have been shut down as terms-of-service violations, and some comparatively minor things, but I don't know of (as of yet) an outright attack. But maybe they don't have to attack: participants in the decentralized social web are so good at fighting each other that apparently we do that work for them.

But it doesn't have to be that way. You might be able to come to consensus on a good way forward. And if you can't come to consensus, you can at least have friendly and cooperative communication.

And if somehow you can't do any of that, you can at least not openly attack each other. We've got enough hard work making the federated social web work without fighting ourselves. Thanks.

Update: A previous version of this article said "I even saw someone tried to write a federation history and characterize it as war", but it's been pointed out that I'm being unfair here, since the very article I'm pointing to itself refutes the idea of this being war. Fair point, and I've removed that bit.

25 January, 2018 08:35PM by Christopher Lemmer Webber

January 24, 2018

ActivityPub is a W3C Recommendation

Having spent the majority of the last three years of my life on it, I'm happy to announce that ActivityPub is now a W3C Recommendation. Whew! At last! Hooray! Finally! I've written some more words on this over on the FSF's blog, so maybe read that.

As for things I didn't put there, that fit more on a personal blog? I guess this is where I speak about my personal experience of, and feelings about, the process: a mix of elation (for making it), relief (also for making it, because it wasn't always clear that we would), and burnout (I had no idea this process was going to suck up so much of my life).

I didn't expect this to take over my life so thoroughly. I did say this bit on the FSF blogpost, but when Jessica Tallon and I got involved in the Social Working Group we figured we were just showing up for an hour a week to make sure things were on track. I did think the goal of the Social Working Group was the right one: we had a lot of libre social networks but they were largely fractured and failed at interoperability... surely we could do better if we got everyone in a room together! (Getting everyone in the room wasn't easy and didn't always happen, though I sure as heck tried, particularly early on.) But I figured the other people in the room would be the experts, the responsible ones, and we'd just be tagging along to make sure our needs were met. Well, the next thing you know we're co-editors of ActivityPub, and that time grew from an hour a week to filling most of my week, with sometimes urgent, grueling deadlines (granted, I made most of them a lot more complicated than they needed to be by doing example implementations in obscure languages, etc etc).

I'm feeling great about things now, but that wasn't always the case through this. I've come to learn how hard standards work is, and I've been doing other specification work recently too (more on that in a coming blogpost), but I'll say that for whatever reason (and I can think of quite a few, but it's not worth going into here), ActivityPub has been far harder than anything else I've worked on in the standards space. (Maybe that's just because it's the first standard I've gotten to completion though.)

In fact, in early-to-middle 2017 I was in quite a bit of despair, because it seemed clear that ActivityPub was going to not make it in time as an official recommended standard. The Social Working Group's charter was going to run out at mid-2017, and it had already been extended once... apparently getting a second extension was nearly unheard of. I resigned myself to the idea that ActivityPub would be published as a note, but that there was no way that we would be able to make it to getting the shiny foil stamp of being an actual recommended standard. Instead, I shifted my effort to making sure that my ActivityPub implementation work would support enough of ActivityStreams (which is what ActivityPub uses as its vocabulary) to make sure that at least that would make it as a standard with all the components we required, since we at least needed to be able to refer to that vocabulary.

But Mastodon saved ActivityPub. I'll admit that at first I was skeptical about all the hype I was hearing about Mastodon... but Amy Guy (co-author of ActivityPub, and whose PhD thesis, "Presentation of Self on a Decentralised Web", is worth a read at the memorable domain of dr.amy.gy) convinced me that I really ought to check out what was going on in Mastodon land. And I found I really did like what was happening there... and connected to a community that felt like what I had missed from the heyday of StatusNet/identi.ca, while having a bit of its own flavor of culture, one that I really felt at home in. It turned out this was good timing... Mastodon was having trouble meeting the privacy needs of its users on OStatus, and it turns out private addressing was exactly one of the reasons that ActivityPub was developed. (I'm not claiming credit for this, I'm just talking from my perspective... the Mastodon ActivityPub implementation issue can give you a better sense of where credit is due, and here I didn't really do much.) This interest came at just the right time... it began to drum up interest from many other participants too... and it pretty much directly led to another extension for the Social Working Group, giving us until the end of 2017 to wrap up the work on standardizing ActivityPub. Whew!

But Mastodon is not alone. Today there are a growing number of implementers of ActivityPub. I'd encourage you, if you haven't, to watch this video of PeerTube and Mastodon federating over ActivityPub. Pretty cool stuff! ActivityPub has been a massive group effort, and I'm relieved to see that all that hard work has paid off, for all of us.

Meanwhile, there's a lot to do still ahead. MediaGoblin, ironically, has fallen behind on its own federation support in the interest of advancing federation standards (we have some federation code, but it's for the old pre-ActivityPub Pump API, and it's bitrotted quite a bit) and I need to figure out what the next steps are and discuss with the community (expect more on that in the next few months, and sure to be discussed at my talk at Libreplanet 2018). And ActivityPub may be "done" in the sense that "it made it through the standards process", but some of the most interesting work is still ahead. The Social Web Community Group, of which I am co-chair, meets bi-weekly to talk and collaborate on the interesting problems that implementers of libre networks are encountering. (It's open to everyone, maybe you should join?)

On that note, in a recent Social Web Community Group meeting, Evan Prodromou was showing off some of his latest ActivityPub projects (tags.pub and places.pub). I'm paraphrasing here, but he said something interesting, which has stuck with me: "We did all that standardizing work, and that's great, but now we get to the fun part... now we get to build things."

I agree. I look forward to what the next few years of fun ActivityPub development bring. Onwards!

24 January, 2018 05:00AM by Christopher Lemmer Webber

January 22, 2018

parallel @ Savannah

GNU Parallel 20180122 ('Mayon') released [stable]

GNU Parallel 20180122 ('Mayon') [stable] has been released. It is
available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a
stable release.

Quote of the month:

GNU Parallel is making me pretty happy this morning
-- satanpenguin

New in this release:

  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one
or more computers. A job can be a single command or a small script
that has to be run for each of the lines in the input. The typical
input is a list of files, a list of hosts, a list of users, a list of
URLs, or a list of tables. A job can also be a command that reads from
a pipe. GNU Parallel can then split the input and pipe it into
commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to
use as GNU Parallel is written to have the same options as xargs. If
you write loops in shell, you will find GNU Parallel may be able to
replace most of the loops and make them run faster by running several
jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as
you would get had you run the commands sequentially. This makes it
possible to use output from GNU Parallel as input for other programs.
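That guarantee can be sketched in Python (an illustration of the ordering property only, not GNU Parallel itself): jobs run concurrently, but results come back in input order, as if the commands had run sequentially:

```python
from concurrent.futures import ThreadPoolExecutor

def run_jobs_in_order(inputs, job, max_workers=4):
    """Run `job` over `inputs` concurrently, but return outputs in the
    same order as the inputs -- the property GNU Parallel preserves so
    that its output can feed other programs. Illustration only."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Executor.map yields results in submission order,
        # regardless of which job finishes first.
        return list(pool.map(job, inputs))

print(run_jobs_in_order([3, 1, 2], lambda n: n * n))  # → [9, 1, 4]
```

Even if the job for input 2 finishes before the job for input 3, its result is still emitted last.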

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O -
pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline
will love you for it.

When using programs that use GNU Parallel to process data for
publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login:
The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Request or build a package for your favourite distribution (if not already there)

  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing
databases through all the different databases' command line clients.
So far the focus has been on giving a common way to specify login
information (protocol, username, password, hostname, and port number),
size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you
will get that database's interactive shell.
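As a hedged illustration of the idea (not GNU sql's own parser, whose DBURL syntax is richer than shown here), the login pieces can be pulled out of a URL-shaped string like this:

```python
from urllib.parse import urlsplit

def parse_dburl(dburl):
    """Split a DBURL like mysql://user:pass@host:3306/mydb into its
    login pieces. Sketch of the concept only; GNU sql's real DBURL
    syntax supports more than this."""
    u = urlsplit(dburl)
    return {
        "protocol": u.scheme,
        "username": u.username,
        "password": u.password,
        "hostname": u.hostname,
        "port": u.port,
        # An empty database part would mean "give me the interactive shell".
        "database": u.path.lstrip("/") or None,
    }

print(parse_dburl("mysql://alice:secret@db.example:3306/sales"))
```

The hostname, credentials, and database name here are made-up examples.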

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different
Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or
other system activity) is above a certain limit. When the limit is
reached the program will be suspended for some time. If the limit is a
soft limit the program will be allowed to run for short amounts of
time before being suspended again. If the limit is a hard limit the
program will only be allowed to run when the system is below the
limit.
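The soft/hard distinction can be sketched as a small decision function (illustrative only; the names and slice lengths are made up, not niceload's actual options or algorithm):

```python
def next_action(load, limit, soft=True, run_slice=1.0, suspend_slice=1.0):
    """Decide what a niceload-style throttler should do next.
    Below the limit: run freely. Above it, a soft limit still grants
    short run slices between suspensions, while a hard limit suspends
    until the load drops below the limit again."""
    if load < limit:
        return ("run", None)
    if soft:
        # Soft limit: alternate short runs with suspensions.
        return ("run-briefly", (run_slice, suspend_slice))
    # Hard limit: only suspend while over the limit.
    return ("suspend", suspend_slice)

print(next_action(0.5, 1.0))              # → ('run', None)
print(next_action(2.0, 1.0, soft=False))  # → ('suspend', 1.0)
```

A real implementation would sample the load (e.g. from the system's load average) in a loop and apply the chosen action each iteration.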

22 January, 2018 04:28PM by Ole Tange

January 21, 2018

libtasn1 @ Savannah

libtasn1 moved to gitlab

The new primary development site is at:
https://gitlab.com/gnutls/libtasn1

21 January, 2018 09:47AM by Nikos Mavrogiannopoulos

January 17, 2018

Andy Wingo

instruction explosion in guile

Greetings, fellow Schemers and compiler nerds: I bring fresh nargery!

instruction explosion

A couple years ago I made a list of compiler tasks for Guile. Most of these are still open, but I've been chipping away at the one labeled "instruction explosion":

Now we get more to the compiler side of things. Currently in Guile's VM there are instructions like vector-ref. This is a little silly: there are also instructions to branch on the type of an object (br-if-tc7 in this case), to get the vector's length, and to do a branching integer comparison. Really we should replace vector-ref with a combination of these test-and-branches, with real control flow in the function, and then the actual ref should use some more primitive unchecked memory reference instruction. Optimization could end up hoisting everything but the primitive unchecked memory reference, while preserving safety, which would be a win. But probably in most cases optimization wouldn't manage to do this, which would be a lose overall because you have more instruction dispatch.

Well, this transformation is something we need for native compilation anyway. I would accept a patch to do this kind of transformation on the master branch, after version 2.2.0 has forked. In theory this would remove most all high level instructions from the VM, making the bytecode closer to a virtual CPU, and likewise making it easier for the compiler to emit native code as it's working at a lower level.

Now that I'm getting close to finished I wanted to share some thoughts. Previous progress reports on the mailing list.

a simple loop

As an example, consider this loop that sums the 32-bit floats in a bytevector. I've annotated the code with lines and columns so that you can correspond different pieces to the assembly.

   0       8   12     19
 +-v-------v---v------v-
 |
1| (use-modules (rnrs bytevectors))
2| (define (f32v-sum bv)
3|   (let lp ((n 0) (sum 0.0))
4|     (if (< n (bytevector-length bv))
5|         (lp (+ n 4)
6|             (+ sum (bytevector-ieee-single-native-ref bv n)))
7|          sum)))

The assembly for the loop before instruction explosion went like this:

L1:
  17    (handle-interrupts)     at (unknown file):5:12
  18    (uadd/immediate 0 1 4)
  19    (bv-f32-ref 1 3 1)      at (unknown file):6:19
  20    (fadd 2 2 1)            at (unknown file):6:12
  21    (s64<? 0 4)             at (unknown file):4:8
  22    (jnl 8)                ;; -> L4
  23    (mov 1 0)               at (unknown file):5:8
  24    (j -7)                 ;; -> L1

So, already Guile's compiler has hoisted the (bytevector-length bv) and unboxed the loop index n and accumulator sum. This work aims to simplify further by exploding bv-f32-ref.

exploding the loop

In practice, instruction explosion happens in CPS conversion, as we are converting the Scheme-like Tree-IL language down to the CPS soup language. When we see a Tree-IL primcall (a call to a known primitive), instead of lowering it to a corresponding CPS primcall, we inline a whole blob of code.

In the concrete case of bv-f32-ref, we'd inline it with something like the following:

(unless (and (heap-object? bv)
             (eq? (heap-type-tag bv) %bytevector-tag))
  (error "not a bytevector" bv))
(define len (word-ref bv 1))
(define ptr (word-ref bv 2))
(unless (and (<= 4 len)
             (<= idx (- len 4)))
  (error "out of range" idx))
(f32-ref ptr idx)

As you can see, there are four branches hidden in the bv-f32-ref: two to check that the object is a bytevector, and two to check that the index is within range. In this explanation we assume that the offset idx is already unboxed, but actually unboxing the index ends up being part of this work as well.
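The inlined check sequence can be modeled in Python (a sketch only: `isinstance` stands in for the heap-tag test, and `struct.unpack_from` stands in for the unchecked `f32-ref` primitive):

```python
import struct

def bv_f32_ref(bv, idx):
    # Model of the exploded bv-f32-ref from the pseudocode above.
    # Branches 1-2: "is it a bytevector?" (here: a bytes-like object).
    if not isinstance(bv, (bytes, bytearray)):
        raise TypeError("not a bytevector")
    length = len(bv)
    # Branches 3-4: are there 4 readable bytes at idx?
    if not (4 <= length and 0 <= idx <= length - 4):
        raise IndexError("out of range")
    # The residual primitive: an unchecked native-endian f32 load.
    return struct.unpack_from("f", bv, idx)[0]

data = struct.pack("ff", 1.5, 2.5)
assert bv_f32_ref(data, 0) == 1.5
assert bv_f32_ref(data, 4) == 2.5
```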

One of the goals of instruction explosion was that by breaking the operation into a number of smaller, more orthogonal parts, native code generation would be easier, because the compiler would only have to know about those small bits. However without an optimizing compiler, it would be better to reify a call out to a specialized bv-f32-ref runtime routine instead of inlining all of this code -- probably whatever language you write your runtime routine in (C, Rust, whatever) will do a better job optimizing than your compiler will.

But with an optimizing compiler, there is the possibility of removing everything but the f32-ref. Guile doesn't quite get there, but almost; here's the post-explosion optimized assembly of the inner loop of f32v-sum:

L1:
  27    (handle-interrupts)
  28    (tag-fixnum 1 2)
  29    (s64<? 2 4)             at (unknown file):4:8
  30    (jnl 15)               ;; -> L5
  31    (uadd/immediate 0 2 4)  at (unknown file):5:12
  32    (u64<? 2 7)             at (unknown file):6:19
  33    (jnl 5)                ;; -> L2
  34    (f32-ref 2 5 2)
  35    (fadd 3 3 2)            at (unknown file):6:12
  36    (mov 2 0)               at (unknown file):5:8
  37    (j -10)                ;; -> L1

good things

The first thing to note is that unlike the "before" code, there's no instruction in this loop that can throw an exception. Neat.

Next, note that there's no type check on the bytevector; the peeled iteration preceding the loop already proved that the bytevector is a bytevector.

And indeed there's no reference to the bytevector at all in the loop! The value being dereferenced in (f32-ref 2 5 2) is a raw pointer. (Read this instruction as, "sp[2] = *(float*)((char*)sp[5] + (ptrdiff_t)sp[2])".) The compiler does something interesting; the f32-ref CPS primcall actually takes three arguments: the garbage-collected object protecting the pointer, the pointer itself, and the offset. The object itself doesn't appear in the residual code, but including it in the f32-ref primcall's inputs keeps it alive as long as the f32-ref itself is alive.

bad things

Then there are the limitations. Firstly, instruction 28 tags the u64 loop index as a fixnum, but never uses the result. Why is this here? Sadly it's because the value is used in the bailout at L2. Recall this pseudocode:

(unless (and (<= 4 len)
             (<= idx (- len 4)))
  (error "out of range" idx))

Here the error ends up lowering to a throw CPS term that the compiler recognizes as a bailout and renders out-of-line; cool. But it uses idx as an argument, as a tagged SCM value. The compiler untags the loop index, but has to keep a tagged version around for the error cases.

The right fix is probably some kind of allocation sinking pass that sinks the tag-fixnum to the bailouts. Oh well.

Additionally, there are two tests in the loop. Are both necessary? Turns out, yes :( Imagine you have a bytevector of length 1025. The loop continues until the last ref at offset 1024, which is within bounds of the bytevector, but only one byte is available at that point, so we need to throw an exception. The compiler did as good a job as we could expect it to do.
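The length-1025 case can be checked directly: the loop-bound test and the 4-byte range test disagree exactly at the tail, so neither subsumes the other (a quick sanity check, not Guile code):

```python
length, idx = 1025, 1024
# The loop's bound test (n < bytevector-length) still passes...
assert idx < length
# ...but the range check (idx <= len - 4) fails: only one byte
# remains at offset 1024, not the four that an f32 load needs.
assert not (idx <= length - 4)
```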

is it worth it? where to now?

On the one hand, instruction explosion is a step sideways. The code is more optimal, but it's more instructions. Because Guile currently has a bytecode VM, that means more total interpreter overhead. Testing on a 40-megabyte bytevector of 32-bit floats, the exploded f32v-sum completes in 115 milliseconds compared to around 97 for the earlier version.

On the other hand, it is very easy to imagine how to compile these instructions to native code, either ahead-of-time or via a simple template JIT. You practically just have to look up the instructions in the corresponding ISA reference, is all. The result should perform quite well.

I will probably take a whack at a simple template JIT first that does no register allocation, then ahead-of-time compilation with register allocation. Getting the AOT-compiled artifacts to dynamically link with runtime routines is a sufficient pain in my mind that I will put it off a bit until later. I also need to figure out a good strategy for truly polymorphic operations like general integer addition; probably involving inline caches.

So that's where we're at :) Thanks for reading, and happy hacking in Guile in 2018!

17 January, 2018 10:30AM by Andy Wingo

January 16, 2018

libsigsegv @ Savannah

libsigsegv 2.12 is released

libsigsegv version 2.12 is released.

New in this release:

  • Added support for catching stack overflow on Hurd/i386.
  • Added support for catching stack overflow on Haiku.
  • Corrected distinction between stack overflow and other fault on AIX.
  • Reliability improvements on Linux, FreeBSD, NetBSD.
  • NOTE: Support for Cygwin and native Windows is currently not up-to-date.

Download: https://ftp.gnu.org/gnu/libsigsegv/libsigsegv-2.12.tar.gz

16 January, 2018 08:47PM by Bruno Haible

FSF News

Announcing LibrePlanet 2018 keynote speakers

The keynote speakers for the tenth annual LibrePlanet conference will be anthropologist and author Gabriella Coleman, free software policy expert and community advocate Deb Nicholson, Electronic Frontier Foundation (EFF) senior staff technologist Seth Schoen, and FSF founder and president Richard Stallman. Register for this year's conference here!

LibrePlanet is an annual conference for people who care about their digital freedoms, bringing together software developers, policy experts, activists, and computer users to learn skills, share accomplishments, and tackle challenges facing the free software movement. The theme of this year's conference is Freedom. Embedded. In a society reliant on embedded systems -- in cars, digital watches, traffic lights, and even within our bodies -- how do we defend computer user freedom, protect ourselves against corporate and government surveillance, and move toward a freer world? LibrePlanet 2018 will explore these topics in sessions for all ages and experience levels.

Gabriella (Biella) Coleman is best known in the free software community for her book Coding Freedom: The Ethics and Aesthetics of Hacking. Trained as an anthropologist, Coleman holds the Wolfe Chair in Scientific and Technological Literacy at McGill University. Her scholarship explores the intersection of the cultures of hacking and politics, with a focus on the sociopolitical implications of the free software movement and the digital protest ensemble Anonymous, the latter in her book Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous.

Deb Nicholson is a free software policy expert and a passionate community advocate, notably contributing to GNU MediaGoblin and OpenHatch. She is the Community Outreach Director for the Open Invention Network, the world's largest patent non-aggression community, which serves the kernel Linux, GNU, Android, and other key free software projects. A perennial speaker at LibrePlanet, this is Nicholson's first keynote at the conference.

"They are all too modest to say it, but these speakers will blow your mind," said FSF executive director John Sullivan. "Don't miss this opportunity to hear about how technology controls our core freedoms, how people are working together in communities to build software that truly empowers, and how you can both benefit from and contribute to these efforts."

Seth David Schoen has worked at the EFF for over a decade, creating the Staff Technologist position and helping other technologists understand the civil liberties implications of their work, helping EFF staff better understand technology related to EFF's legal work, and helping the public understand what the products they use really do. Schoen last spoke at LibrePlanet in 2015, when he introduced Let's Encrypt, the automated, free software-based certificate authority.

FSF president Richard Stallman will present the Free Software Awards, and discuss pressing threats and important opportunities for software freedom. Dr. Richard Stallman launched the free software movement in 1983 and started the development of the GNU operating system (see www.gnu.org) in 1984. GNU is free software: everyone has the freedom to copy it and redistribute it, with or without changes. The GNU/Linux system, basically the GNU operating system with Linux added, is used on tens of millions of computers today. Stallman has received the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for Social/Economic Betterment, as well as several doctorates honoris causa, and has been inducted into the Internet Hall of Fame.

About LibrePlanet

LibrePlanet is the annual conference of the Free Software Foundation. Begun as a modest gathering of FSF members, the conference is now a large, vibrant gathering of free software enthusiasts, welcoming anyone interested in software freedom and digital rights. Registration is now open, and admission is gratis for FSF members and students.

For the fifth year in a row, LibrePlanet will be held at the Massachusetts Institute of Technology in Cambridge, Massachusetts, on March 24th and 25th, 2018. Co-presented by the Free Software Foundation and MIT's Student Information Processing Board (SIPB), the rest of the LibrePlanet program will be announced soon. The opening keynote at LibrePlanet 2017 was given by Kade Crockford, Director of the Technology for Liberty Program at the ACLU of Massachusetts, and the closing keynote was given by Sumana Harihareswara, founder of Changeset Consulting.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software — particularly the GNU operating system and its GNU/Linux variants — and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contact

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942
[email protected]

16 January, 2018 07:05PM

January 11, 2018

Andy Wingo

spectre and the end of langsec

I remember in 2008 seeing Gerald Sussman, creator of the Scheme language, resignedly describing a sea change in the MIT computer science curriculum. In response to a question from the audience, he said:

The work of engineers used to be about taking small parts that they understood entirely and using simple techniques to compose them into larger things that do what they want.

But programming now isn't so much like that. Nowadays you muck around with incomprehensible or nonexistent man pages for software you don't know who wrote. You have to do basic science on your libraries to see how they work, trying out different inputs and seeing how the code reacts. This is a fundamentally different job.

Like many I was profoundly saddened by this analysis. I want to believe in constructive correctness, in math and in proofs. And so with the rise of functional programming, I thought that this historical slide from reason towards observation was just that, historical, and that the "safe" languages had a compelling value that would be evident eventually: that "another world is possible".

In particular I found solace in "langsec", an approach to assessing and ensuring system security in terms of constructively correct programs. One obvious application is parsing of untrusted input, and indeed the langsec.org website appears to emphasize this domain as one in which a programming languages approach can be fruitful. It is, after all, a truth universally acknowledged, that a program with good use of data types, will be free from many common bugs. So far so good, and so far so successful.

The basis of language security is starting from a programming language with a well-defined, easy-to-understand semantics. From there you can prove (formally or informally) interesting security properties about particular programs. For example, if a program has a secret k, but some untrusted subcomponent C of it should not have access to k, one can prove whether k can leak to C. This approach is taken, for example, by Google's Caja compiler to isolate components from each other, even when they run in the context of the same web page.

But the Spectre and Meltdown attacks have seriously set back this endeavor. One manifestation of the Spectre vulnerability is that code running in a process can now read the entirety of its address space, bypassing invariants of the language in which it is written, even if it is written in a "safe" language. This is currently being used by JavaScript programs to exfiltrate passwords from a browser's password manager, or bitcoin wallets.

Mathematically, in terms of the semantics of e.g. JavaScript, these attacks should not be possible. But practically, they work. Spectre shows us that the building blocks provided to us by Intel, ARM, and all the rest are no longer "small parts understood entirely"; that instead now we have to do "basic science" on our CPUs and memory hierarchies to know what they do.

What's worse, we need to do basic science to come up with adequate mitigations to the Spectre vulnerabilities (side-channel exfiltration of results of speculative execution). Retpolines, poisons and masks, et cetera: none of these are proven to work. They are simply observed to be effective on current hardware. Indeed mitigations are anathema to the correctness-by-construction: if you can prove that a problem doesn't exist, what is there to mitigate?

Spectre is not the first crack in the edifice of practical program correctness. In particular, timing side channels are rarely captured in language semantics. But I think it's fair to say that Spectre is the most devastating vulnerability in the langsec approach to security that has ever been uncovered.
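A classic illustration: two comparisons with identical semantics but different timing behavior. The early-exit version leaks, through time alone, how many leading bytes match, and no language semantics capture that leak (an illustrative sketch; `hmac.compare_digest` is Python's constant-time comparison):

```python
import hmac

def early_exit_equals(a: bytes, b: bytes) -> bool:
    # Semantically just equality -- but it returns as soon as a byte
    # differs, so its running time reveals the length of the common
    # prefix between a and b.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

secret = b"hunter2"
# Both functions agree on the answer; only the timing differs.
assert early_exit_equals(secret, b"hunter2")
assert hmac.compare_digest(secret, b"hunter2")
assert not early_exit_equals(secret, b"hunter1")
```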

Where do we go from here? I see but two options. One is to attempt to make the behavior of the machines targeted by secure language implementations behave rigorously as architecturally specified, and in no other way. This is the approach taken by all of the deployed mitigations (retpolines, poisoned pointers, masked accesses): modify the compiler and runtime to prevent the CPU from speculating through vulnerable indirect branches (prevent speculative execution), or from using fetched values in further speculative fetches (prevent this particular side channel). I think we are missing a model and a proof that these mitigations restore target architectural semantics, though.

However if we did have a model of what a CPU does, we have another opportunity, which is to incorporate that model in a semantics of the target language of a compiler (e.g. micro-x86 versus x86). It could be that this model produces a co-evolution of the target architectures as well, whereby Intel decides to disclose and expose more of its microarchitecture to user code. Caching and other microarchitectural side-effects would then become explicit rather than transparent.

Rich Hickey has this thing where he talks about "simple versus easy". Both of them sound good but for him, only "simple" is good whereas "easy" is bad. It's the sort of subjective distinction that can lead to an endless string of Worse Is Better Is Worse Bourbaki papers, according to the perspective of the author. Anyway, transparent caching in the CPU has been marvelously easy for most application developers and fantastically beneficial from a performance perspective. People needing constant-time operations have complained, of course, but that kind of person always complains. Could it be, though, that actually there is some other, better-is-better kind of simplicity that should replace the all-pervasive, now-treacherous transparent caching?

I don't know. All I will say is that an ad-hoc approach to determining which branches and loads are safe and which are not is not a plan that inspires confidence. Godspeed to the langsec faithful in these dark times.

11 January, 2018 01:44PM by Andy Wingo

January 07, 2018

gzip @ Savannah

gzip-1.9 released [stable]

07 January, 2018 10:50PM by Jim Meyering

nano @ Savannah

GNU nano 2.9.2 was released

The most important change in this version is that now you can use <Tab> to indent a marked region and <Shift+Tab> to unindent it. Furthermore, with the option 'set trimblanks' in your nanorc, nano will now snip those pesky trailing spaces when automatic hard-wrapping occurs (when using the --fill option, for example). Apart from those things, there are several small fixes and improvements. Recommended upgrade.
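The trimblanks behavior is enabled from your nanorc; a minimal example (the `fill` value here is just an illustration, and hard-wrapping is what triggers the trimming):

```
## ~/.nanorc -- example snippet
set trimblanks   ## snip trailing whitespace when hard-wrapping occurs
set fill 72      ## hard-wrap at column 72 (same as --fill=72)
```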

07 January, 2018 11:09AM by Benno Schulenberg

January 04, 2018

Lonely Cactus

The ridiculous gopher project

My primary New Year's Resolution for 2018 is to start no new projects, and to only finish old ones.  In looking over my repos  -- more aptly titled the graveyard of 1,000 Saturdays -- I have excavated a couple of projects from the earth.

I'm starting, for now, with what is one of the most ridiculous of all possible projects: a gopher-protocol blog.  Do you remember gopher?  It was a protocol and a network ecosystem that existed just before HTTP took over the world.  It presented the world as directories that contained files, and users could poke around and look at those files to their heart's content.

There is a reason that I'm nostalgic for those days, and it lies primarily in how all of the world of HTTP and the world of iPhone and Android applications are really data-mining spy operations.  The gopher protocol is too primitive to allow the wholesale data mining that the modern web has become.  It has no client-side scripting and no cookies.  And because the world of gopher is so strange and hard to reach, there is a bit of a pioneer mindset among aficionados.


So yeah, gopher.  A big directory of files.  Take a look at this table of the file types that Gopher handles natively.


Itemtype  Content
0         Text file
1         Directory
5         PC binary
6         UNIX uuencoded file
8         Telnet session
9         Binary file
g         GIF image
s         Sound
I         Image (other than GIF)

Pretty old school, eh?  Just feel the power of the 1990s.
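Those single characters are the first byte of each line in a gopher menu. Per RFC 1436, a menu entry is the item type glued to a display string, followed by tab-separated selector, host, and port. A sketch (the helper and hostname are hypothetical):

```python
def menu_line(itemtype, display, selector, host, port=70):
    # RFC 1436 menu entry:
    # TYPE+DISPLAY <TAB> SELECTOR <TAB> HOST <TAB> PORT <CR><LF>
    return f"{itemtype}{display}\t{selector}\t{host}\t{port}\r\n"

line = menu_line("0", "About this blog", "/about.txt", "gopher.example.org")
assert line == "0About this blog\t/about.txt\tgopher.example.org\t70\r\n"
```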

There are a lot of people running blogs in Gopher.  Really they are just directories of plain text files, ordered by date.  It is very pure but slightly boring.  So I looked at that list and asked myself if I could create a modern (lol) Gopher blog engine.

You can, of course, write servers that push out dynamically generated content, but the clients only receive these static files.  Do you remember back when Perl5 was the way one would write CGI scripts that created "dynamic" HTML?  You can do the same thing here: make CGI scripts that create text files or GIF images.

In my conception, a modern gopher weblog engine would have text files of blog entries, a gallery of GIFs, and a commenting system.  Lacking any other method available in gopher, the commenting system would be a Telnet session.

So I have picked up a couple of old ideas: a weblog software with a Gopher interface, a web gallery with a Gopher interface, and a tiny Telnet BBS where people can leave comments.  I've (re)started with the BBS, because it is the most ridiculous.

04 January, 2018 11:14PM by Mike ([email protected])

January 02, 2018

Writing as little as possible

My New Year's Resolution for 2018 is to start no new projects.  For 2018, I will only finish my many, many uncompleted projects.

I've started up with one of my most pointless coding projects: a telnet BBS.  Writing a BBS in the late 1980s and early 1990s was something of a rite of passage.  Much like writing your own blog software was in the late 1990s and 2000s.

But going back to the idea of finishing things, I've given myself some additional constraints.
  • Write as little code as possible.
  • Use common libraries and components sensibly and liberally.
  • Bend my concept to the strengths and constraints created by the libraries and components, instead of wrangling them into matching my vision.
This ends up being very hard to do.  To be specific, it is very difficult to quash my ego and perfectionism; that perfectionism is why my repo has two dozen projects, of which only three are functional.

One of the forces that pushes me to write my own code, instead of using other people's code, is that reading and understanding other people's code and documentation is hard and it doesn't feel like an accomplishment.  To properly use another library, one really does need to put in the work of reading the docs and understanding their logic, which is deeply unsatisfying.

Will 2018 be the year I recover from Incompletion Syndrome?  Time will tell.

02 January, 2018 05:40AM by Mike ([email protected])

January 01, 2018

health @ Savannah

Native GNU Health GTK client !

Dear community

I am happy to announce the native GNU Health GTK client for series 3.2 !

The GNU Health GTK Client

The GTK client lets you connect to the GNU Health server from the desktop.

Starting from GNU Health version 3.2, you can directly download the gnuhealth-client package from GNU.org or PyPI.

Installation

The GNU Health client is pip-installable:

For a system-wide installation (you need to be root):

# pip install gnuhealth-client

Alternatively, you can do a local installation:

$ pip install --user gnuhealth-client

For the latest information about the GNU Health client on PyPI, visit: https://pypi.python.org/pypi/gnuhealth-client

Alternatively, you can also install it from source:

$ wget https://ftp.gnu.org/gnu/health/gnuhealth-client-latest.tar.gz

Technology

The GNU Health GTK client derives from the Tryton GTK client, with features specific to GNU Health and the healthcare sector.

GNU Health client series 3.2.x uses GTK+ 2 and Python 2. This is a transition series for the upcoming 3.4, which will use GTK+ 3 and Python 3.

The default profile

The GNU Health client comes with a pre-defined profile, which points to the GNU Health community demo server:

Server : health.gnusolidario.org
Port : 8000
User : admin
Passwd : gnusolidario

GNU Health Plugins

You can download GNU Health plugins for specific functionality.

For example:

  • The GNU Health Crypto plugin, to digitally sign documents using GnuPG
  • The GNU Health Camera plugin, to use cameras and store the images directly in the system (person registration, histological samples, etc.)

More information about the GNU Health plugins is at:

https://en.wikibooks.org/wiki/GNU_Health/Plugins

The GNU Health client configuration file

The default configuration file resides in

$HOME/.config/gnuhealth/gnuhealth-client.conf

Using a custom greeter / banner

You can customize the login greeter banner to fit your institution.

In the [client] section, include the banner parameter with the absolute path of the PNG file.

Something like:

[client]
banner = /home/yourlogin/myhospitalbanner.png

The default resolution of the banner is 500 x 128 pixels. Adjust yours to approximately this size.

Development

The development of the GNU Health client will be done on GNU Savannah, using the Mercurial repository.

Tasks, bugs and development discussion will be handled on the [email protected] mailing list.

General questions can be asked on the [email protected] mailing list.

Homepage

http://health.gnu.org

Documentation

The GNU Health GTK client documentation will be in the corresponding chapter of the GNU Health Wikibook:

https://en.wikibooks.org/wiki/GNU_Health

01 January, 2018 10:23PM by Luis Falcon

gdbm @ Savannah

Version 1.14

Version 1.14 is available for download. This is a bug-fix release. A list of important changes follows:

Make sure created databases are byte-for-byte reproducible

This fixes two longstanding bugs: (1) when allocating database file header blocks, the unused memory is filled with zeroes; (2) when expanding an mmapped memory area, the added extent is filled with zeroes.

Fix build with --enable-gdbm-export

Make gdbm_error global variable thread safe

Fix possible segmentation violation in gdbm_setopt

Fix handling of group headers in --help output

01 January, 2018 09:59PM by Sergey Poznyakoff

December 27, 2017

coreutils @ Savannah

coreutils-8.29 released [stable]

27 December, 2017 06:44PM by Pádraig Brady

December 23, 2017

gnuastro @ Savannah

Gnuastro 0.5 released

As a small holiday gift, I am happy to announce the fifth release of Gnuastro (version 0.5).

GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of various command-line programs and library functions for the manipulation and analysis of astronomical data. All the programs share the same basic command-line user interface for the comfort of both the users and developers. For the full list of Gnuastro's library and programs along with a comprehensive general tutorial, please see the links below, respectively:

https://www.gnu.org/s/gnuastro/manual/html_node/Gnuastro-library.html
https://www.gnu.org/s/gnuastro/manual/html_node/Gnuastro-programs-list.html
https://www.gnu.org/s/gnuastro/manual/html_node/General-program-usage-tutorial.html

Many new features have been added since the fourth release and almost all bugs that were found have been fixed. For the full list of new features, please see the NEWS file below [1]. Some of the highlights are as follows: there is a new "Match" program to match catalogs (in 1D or 2D). NoiseChisel uses signal contiguity to grow the true detections (before it was a blind dilation). This is much more successful in tracing the outer low surface brightness regions. Filtering operators are added to Arithmetic. CosmicCalculator's functions are now available in the library for use in your own programs, and now it can also print single requested calculations (instead of a full list of all calculations). Gnuastro's top webpage is now also available in French.

You will also find a new section in the "Tutorial" chapter of the book ("General program usage tutorial", link above). It contains an extended and pedagogic tutorial to help you get started in using Gnuastro's infrastructure effectively. With the aim of detecting galaxies in an image and estimating their colors, it takes you through most of the programs. Just be patient and follow through the steps to master Gnuastro's powerful features. This tutorial was made as part of the "Exploring the ultra-low surface brightness universe" workshop in the International Space Science Institute (ISSI in Bern, Switzerland). I am very grateful to the hosts and participants for the very fruitful week.

If any of Gnuastro's programs are useful in your work, please run the relevant programs with the `--cite' option (it can be different for different programs). Citations are vital for the continued work on Gnuastro, so please don't forget to support us by doing so.

Boud Roukema and Vladimir Markelov contributed to the code of this release. Lucas MacQuarrie, Thérèse Godefroy and the GNU French Translation Team also kindly initiated and are managing the French translation of the top Gnuastro webpage. I am finally very grateful to (in alphabetic order) Leindert Boogaard, Nicolas Bouché, Benjamin Clement, Madusha Gunawardhana, Takashi Ichikawa, Raúl Infante Sainz, Aurélien Jarno, Floriane Leclercq, Alan Lefor, Bob Proulx, Alejandro Serrano Borlaff, Lee Spitler, Ole Streicher, Alfred Szmidt, Ignacio Trujillo and David Valls-Gabaud who provided many great comments, suggestions and bug reports to this release.

Below, you can get the compressed sources and GPG detached signatures for this release. See [2] for uncompressing Lzip tarballs.

http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.5.tar.gz (4.6 MB)
http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.5.tar.lz (3.1 MB)

Use a mirror for higher download bandwidth:
https://ftpmirror.gnu.org/gnuastro/gnuastro-0.5.tar.gz (4.6 MB)
https://ftpmirror.gnu.org/gnuastro/gnuastro-0.5.tar.lz (3.1 MB)

The GPG detached signatures are also available below. See [3] for how to verify the integrity of this tarball with the signature.

http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.5.tar.gz.sig (833 B)
http://ftp.gnu.org/gnu/gnuastro/gnuastro-0.5.tar.lz.sig (833 B)

Here are the MD5 and SHA1 checksums:

This tarball was bootstrapped (initially built) with the tools
below. Note that these are not installation dependencies.

  • Texinfo 6.5
  • Autoconf 2.69
  • Automake 1.15.1
  • Help2man 1.47.5
  • Gnulib v0.1-1729-gf583f328b
  • Autoconf archives v2017.09.28-14-g2445b89

For installation dependencies, please see:
https://www.gnu.org/software/gnuastro/manual/html_node/Dependencies.html

I wish you happy holidays,
Cheers,
Mohammad

--
Postdoctoral research fellow,
Centre de Recherche Astrophysique de Lyon (CRAL),
Observatoire de Lyon. 9, Avenue Charles André,
Saint Genis Laval (69230), France.

[1] NEWS file for Gnuastro 0.5

New features
  • Manual/Book: An extended tutorial is added showing some general applications of almost all the programs. This may be a good place to get a feeling of how Gnuastro is intended to be used and some of the programs.
  • New Program and library: Match is a new program that will match two given inputs (currently catalogs). Its output is the re-arranged inputs with the same number of rows/records such that all the rows match. The main work is also done with the new low-level `gal_match_catalog' library function which can also be used in more generic contexts.
  • All programs: a value of `0' to the `--numthreads' option will use the number of threads available to the system at run time.
  • Arithmetic: The new operators `filter-median' and `filter-mean' can be used to filter (smooth) the input. The size of the filter can be set as the other operands to these operators.
  • BuildProgram: The new `--la' option allows the identification of a different Libtool `.la' file for Libtool linking information.
  • BuildProgram: The new `--deletecompiled' option will delete the compiled program after running it.
  • CosmicCalculator: all the various cosmological calculations can now be requested individually in one line with a specific option added for each calculation (for example `--age' or `--luminositydist' for the age of the universe at a given redshift or the luminosity distance). Therefore the old `--onlyvolume' and `--onlyabsmagconv' options are now removed. To effectively use these new features, please review the "Invoking CosmicCalculator" section of the book.
  • Fits: when an extension/HDU is identified on the command-line with the `--hdu' option and no operation is requested, the full list of header keywords in that HDU will be printed (as if only `--printallkeys' was called).
  • MakeCatalog: WCS column names are no longer tied to a particular physical interpretation. Previously the first WCS axis was always assumed to be RA and the second Dec, so even for a spectrum (with X and wavelength as the two WCS dimensions) you had to ask for `--ra' and `--dec'. The new `--w1' and `--w2' options are generic and assume nothing but the axis order in the FITS header. MakeCatalog now also uses the CTYPE and CUNIT keywords to set the names and units of its output columns. The `--ra' and `--dec' options are now just internal aliases for `--w1' or `--w2', determined from the input's CTYPE keyword. Also, the new `--geow1', `--geow2', `--clumpsw1', `--clumpsw2', `--clumpsgeow1' and `--clumpsgeow2' options replace the old `--geora', `--geodec', `--clumpsra', `--clumpsdec', `--clumpsgeora' and `--clumpsgeodec' options. No aliases are currently defined for this latter group.
  • MakeCatalog: the new `--uprange' option allows you to specify a range for the random values around each object. This is useful when the noise properties of the dataset vary gradually and sampling from the whole dataset might produce biased results.
  • NoiseChisel: with the new `--convolved' and `--convolvedhdu' options, NoiseChisel will no longer convolve the input and will use the given dataset instead. In many cases, as inputs get larger, convolution is the most time-consuming step of NoiseChisel. With this option you can greatly speed up your tests (for example, to find the best parameters by varying them for a given analysis). See the book for more information and examples.
  • NoiseChisel: with the new `--widekernel' option it is now possible to use a wider kernel to identify which tiles contain signal. The rest of the steps (identifying the quantile threshold on the selected tiles, etc.) are done on the dataset convolved with `--kernel' as before. Since it is time consuming, this is an optional feature.
  • NoiseChisel: with the new `--qthreshtilequant' option, it is now possible to discard high-valued (outlier) tiles before estimating qthresh over the whole image. This can be useful in detecting very large diffuse/flat regions that would otherwise be detected as background (and effectively removed).
  • NoiseChisel: the finally selected true detections are now grown based on signal contiguity, not by blind dilation. The growth process is the same as the growing of clumps to define objects; only for true detections, the growth occurs in the noise. You can configure this growth with the `--detgrowquant' and `--detgrowmaxholesize' options. With this new feature it is now possible to detect signal out to much lower surface brightness limits, and the detections no longer look boxy.
  • Cosmology library: A new set of cosmology functions are now included in the library (declared in `gnuastro/cosmology.h'). These functions are also used in the CosmicCalculator program.
  • `gal_table_read' can now return the number of columns matched with each input column (for example, with regular expressions); a new argument has been added to enable this feature.
  • `gal_fits_key_img_blank': returns the value that must be used in the BLANK keyword for the given type as defined by the FITS standard.
  • `gal_txt_write' and `gal_fits_tab_write' now accept an extension name as argument to allow a name for the FITS extension they write.
  • `gal_box_bound_ellipse_extent' will return the maximum extent of an ellipse along each axis from the ellipse center in floating point.
Removed features
  • Installation: The `--enable-bin-op-*' configuration options that were introduced in Gnuastro 0.3 have been removed. By managing the arithmetic functions in a better manner (a separate source file for each operator), compilation for all types (when done in parallel) takes about the same time as it took with the default (only four) types until now.
  • MakeCatalog: `--zeropoint' option doesn't have a short option name any more. Previously it was `-z' which was confusing because `-x' and `-y' were used to refer to image coordinate positions.
  • NoiseChisel: The `--dilate' and `--dilatengb' options have been removed. Growing of true detections is no longer done through dilation but through the `--detgrowquant' and `--detgrowmaxholesize' options (see above).
Changed features
  • CosmicCalculator: The redshift is no longer mandatory. When no redshift is given, it will only print the input parameters (cosmology) and abort.
  • MakeCatalog: when the output is a FITS file, the two object and clumps catalogs will be stored as multiple extensions of a single file. Until now, two separate FITS files would be created. Plain text outputs are the same as before (two files will be created).
  • `gal_binary_fill_holes' now accepts a `connectivity' and `maxsize' argument to specify the connectivity of the holes and the maximum size of acceptable holes to fill.
  • `gal_fits_img_read' and `gal_fits_img_read_to_type' now also read the WCS structure of the extension/HDU in a FITS file and have two extra arguments: `hstartwcs' and `hendwcs'. With these options it is possible to limit the range of header keywords to read the WCS, similar to how they are used in `gal_wcs_read'.
  • `gal_txt_write', `gal_table_write_log' and `gal_fits_tab_write' don't have the `dontdelete' argument any more. The action they take if the file already exists depends on the format: for FITS, a new extension will be added; for plain text, they will abort with an error.
  • `gal_tile_block_write_const_value' and `gal_tile_full_values_write' now accept a new `withblank' option to set all pixels that are blank in the tile's block to be blank in the check image.
  • `gal_wcs_pixel_area_arcsec2' will return NaN (instead of aborting) when input is unreasonable (not two dimensions or not in units of degrees).
  • `gal_wcs_world_to_img' and `gal_wcs_img_to_world': Until now, these two WCS conversion functions would explicitly assume RA and Dec and work on raw input arrays (so, for example, it was also necessary to give the number of elements). They now accept `gal_data_t' as input for the coordinates, so their API has been greatly simplified and their functionality increased.
Bug fixes
  • ConvertType crash when changing values (bug #52010).
  • Arithmetic not accounting for integer blank pixels in binary operators (bug #52014).
  • NoiseChisel segfault when memory mapping to a file (bug #52043).
  • CFITSIO 3.42 and libcurl crash at Gnuastro configure time (bug #52152).
  • MakeCatalog crash in upper-limit with full size label (bug #52281).
  • NoiseChisel leaving unlabeled regions after clump growth (bug #52327).
  • Libtool checks only in non-current directory (bug #52427).

[2] Using Lzip

Lzip has a much better compression ratio and much better archival features than the common `.gz' or `.xz'. Therefore, Gnuastro's stable releases are made in `.gz' (for historical reasons) and `.lz'; the alpha/test releases are only in `.lz'. If you don't have Lzip, you can download and install it from its webpage.

If you have GNU Tar, then the single command below should uncompress and un-pack the tarball:
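That single command is the usual `tar -xf gnuastro-0.5.tar.lz`. The sketch below is self-contained: it first fabricates a stand-in archive under the same name (the real file is the tarball you downloaded), then unpacks it:

```shell
# Stand-in archive so this sketch runs anywhere; the real file is the
# downloaded gnuastro-0.5.tar.lz.
mkdir -p gnuastro-0.5 && echo "demo" > gnuastro-0.5/README
tar -cf gnuastro-0.5.tar.lz gnuastro-0.5 && rm -r gnuastro-0.5

# With GNU Tar, a single command uncompresses and unpacks the tarball:
tar -xf gnuastro-0.5.tar.lz
```

GNU Tar detects the compression format from the file contents (lzip must be installed for `.lz'), so the same command also works for the `.gz' tarball.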

If the command above doesn't work, you have to un-compress and un-pack it with two separate commands:
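The two commands are `lzip -d gnuastro-0.5.tar.lz` followed by `tar -xf gnuastro-0.5.tar`. Since lzip may not be installed everywhere, the runnable sketch below uses gzip as a stand-in decompressor to show the same two-step pattern:

```shell
# Build a stand-in compressed tarball (gzip in place of lzip so the
# sketch runs anywhere; the real download is gnuastro-0.5.tar.lz).
mkdir -p gnuastro-0.5 && echo "demo" > gnuastro-0.5/README
tar -cf gnuastro-0.5.tar gnuastro-0.5 && rm -r gnuastro-0.5
gzip gnuastro-0.5.tar

# Step 1: un-compress (with Lzip this would be: lzip -d gnuastro-0.5.tar.lz).
gzip -d gnuastro-0.5.tar.gz
# Step 2: un-pack the resulting tar file.
tar -xf gnuastro-0.5.tar
```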

[3] Verifying tarballs

Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:
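The command is `gpg --verify gnuastro-0.5.tar.gz.sig`. The sketch below reproduces the whole flow with a throwaway key and a stand-in tarball, since the real tarball is signed with the release manager's key, which this demo does not have:

```shell
# Throwaway keyring and demo key (stand-ins for the release manager's key).
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo Signer <demo@example.org>" default default never
# Stand-in tarball and its detached signature.
echo "stand-in tarball" > gnuastro-0.5.tar.gz
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --output gnuastro-0.5.tar.gz.sig gnuastro-0.5.tar.gz

# The reader's actual step: verify the tarball against the .sig file.
gpg --verify gnuastro-0.5.tar.gz.sig
```

On success gpg reports a "Good signature" line; in real use, also check that the reported key belongs to the release manager.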

If that command fails because you don't have the required public key, then run this command to import it:

and rerun the 'gpg --verify' command.

23 December, 2017 04:26PM by Mohammad Akhlaghi

parallel @ Savannah

GNU Parallel 20171222 ('Jerusalem') released

GNU Parallel 20171222 ('Jerusalem') has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

Quote of the month:

You know what?
GNU Parallel is cool.
Concurrency, but in the Unix-philosophy style,
without the Enterprise wankeriness.
-- NickM bokkiedog@twitter

New in this release:

  • env_parset for ash, dash, ksh, sh, zsh
  • Automatically create hostgroups if argument ends in @sshlogin
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today, you will find GNU Parallel very easy to use, as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
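As a small illustration of that xargs-compatible style, the pipeline below runs one job per input line with xargs; with GNU Parallel installed, substituting `parallel` for `xargs` runs the jobs concurrently while the collected output still reads as if run sequentially:

```shell
# One `echo` job per input line. With GNU Parallel installed, replacing
# `xargs` with `parallel` runs the jobs in parallel, and the output is
# still grouped as if the jobs had run one after another.
printf '%s\n' file1 file2 file3 | xargs -n1 echo processing
# prints:
#   processing file1
#   processing file2
#   processing file3
```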

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), to query sizes (database and table size), and to run queries.

The database is addressed using a DBURL. If the commands are left out, you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

23 December, 2017 12:04AM by Ole Tange

December 22, 2017

GUIX Project news

Porting GuixSD to ARMv7

GuixSD porting to ARMv7 is a difficult topic. There are plenty of different machines, with specific hardware configurations and vendor-tuned bootloaders, and ACPI support is still experimental. For those reasons it is currently impossible to provide a GuixSD image that runs on most ARMv7 machines, as is possible for x86_64 targets.

The GuixSD port to ARMv7 has to be done machine by machine, and the first supported one is the BeagleBone Black. It was chosen mainly because it runs with mainline U-Boot and the Linux-libre kernel.

As Guix already supported armv7, only three things were missing:

  1. A rework of the GuixSD bootloader layer to support not just GRUB but also U-Boot and Extlinux. This has been integrated in the 0.14 release.
  2. Some developments and fixes on Guix scripts to support image generation, system reconfiguration and installation on ARMv7 in the same way as it is already possible on i686 and x86_64 machines.
  3. The definition of an installation image for the BeagleBone Black.

Points 2 and 3 were addressed recently so we are now ready to show you how to run GuixSD on your BeagleBone Black board!

Installing GuixSD on a BeagleBone Black

Let's try to install GuixSD on the 4GB eMMC (built-in flash memory) of a BeagleBone Black.

Future Guix releases will provide pre-built installer images for the BeagleBone Black. For now, as support just landed on "master", we need to build this image by ourselves.

This can be done this way:

guix system disk-image --system=armhf-linux -e "(@ (gnu system install) beaglebone-black-installation-os)"

Note that it is not yet possible to cross-compile a disk image. So you will have to either run this command on an armhf-linux system where you have previously installed Guix manually, or offload the build to such a system.

You will eventually get something like:

installing bootloader...
[ 7710.782381] reboot: Restarting system
/gnu/store/v33ccp7232gj5wdahdgpjcw4nvh14d7s-disk-image

Congrats! Let's flash this image onto a microSD card with the command:

dd if=/gnu/store/v33ccp7232gj5wdahdgpjcw4nvh14d7s-disk-image of=/dev/mmcblkX bs=4M

where mmcblkX is the name of your microSD card on your GNU/Linux machine.

You can now insert the microSD card into your BeagleBone Black, plug in a UART cable, and power on your device while pressing the "S2" button to force booting from the microSD card instead of the eMMC.

GuixSD installer on BeagleBone Black

Let's follow the Guix documentation here to install GuixSD on eMMC.

First of all, let's plug in an ethernet cable and set up SSH access in order to be able to get rid of the UART cable.

ifconfig eth0 up
dhclient eth0
herd start ssh-daemon

Let's partition the eMMC (/dev/mmcblk1) as a 4GB ext4 partition, mount it, and launch the cow-store service, still following the documentation.

cfdisk
mkfs.ext4 -L my-root /dev/mmcblk1p1
mount LABEL=my-root /mnt
herd start cow-store /mnt

We have reached the most important part of this whole process. It is now time to write the configuration file of our new system. The best thing to do here is to start from the template beaglebone-black.scm:

mkdir /mnt/etc
cp /etc/configuration/beaglebone-black.scm /mnt/etc/config.scm
zile /mnt/etc/config.scm

Once you are done preparing the configuration file, the new system must be initialized with this command:

guix system init /mnt/etc/config.scm /mnt

When this is over, you can turn off the board and remove the microSD card. When you power it on again, it will boot a bleeding-edge GuixSD---isn't that nice?

Preparing a dedicated system configuration

Installing GuixSD on the eMMC is great, but you can also use Guix to prepare a portable microSD card image for your favorite server configuration. Say you want to run an mpd server on a BeagleBone Black directly from a microSD card, with a minimum of configuration steps.

The system configuration could look like this:

(use-modules (gnu) (gnu bootloader extlinux))
(use-service-modules audio networking ssh)
(use-package-modules screen ssh)

(operating-system
  (host-name "my-mpd-server")
  (timezone "Europe/Berlin")
  (locale "en_US.utf8")
  (bootloader (bootloader-configuration
               (bootloader u-boot-beaglebone-black-bootloader)
               (target "/dev/sda")))
  (initrd (lambda (fs . rest)
            (apply base-initrd fs
                   ;; This module is required to mount the sd card.
                   #:extra-modules (list "omap_hsmmc")
                   rest)))
  (file-systems (cons (file-system
                        (device "my-root")
                        (title 'label)
                        (mount-point "/")
                        (type "ext4"))
                      %base-file-systems))
  (users (cons (user-account
                (name "mpd")
                (group "users")
                (home-directory "/home/mpd"))
               %base-user-accounts))
  (services (cons* (dhcp-client-service)
                   (service mpd-service-type)
                   (agetty-service
                    (agetty-configuration
                     (extra-options '("-L"))
                     (baud-rate "115200")
                     (term "vt100")
                     (tty "ttyO0")))
                   %base-services)))

After writing this configuration to a file called mpd.conf, it's possible to forge a disk image from it, with the following command:

guix system disk-image --system=armhf-linux mpd.conf

Like in the previous section, the resulting image should be copied to a microSD card. Then, booting from it on the BeagleBone Black, you should get:

...
Service mpd has been started.
This is the GNU system.  Welcome.
my-mpd-server login:

With only two commands you can build a system image from a configuration file, flash it and run it on a BeagleBone Black!

Next steps

  • Porting GuixSD to other ARMv7 machines.

While most of the work for supporting ARMv7 machines is done, there's still work left to create specific installers for other machines. This mostly consists of specifying the right bootloader and initrd options, and testing the whole thing.

One of the next supported systems might be the EOMA68-A20 as we should get a pre-production unit soon. Feel free to add support for your favorite machine!

This topic will be discussed in a future post.

  • Allow system cross-compilation.

This will be an interesting feature to allow producing a disk image from a desktop machine on x86_64 for instance. More development work is needed, but we'll keep you informed.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on i686, x86_64 and armv7 machines. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el and aarch64.

22 December, 2017 01:00PM by Mathieu Othacehe

December 21, 2017

FSF News

FSF adds PureOS to list of endorsed GNU/Linux distributions

PureOS logo

The FSF's list showcases GNU/Linux operating system distributions whose developers have made a commitment to follow its Guidelines for Free System Distributions. Each one includes and endorses exclusively free "as in freedom" software.

After extensive evaluation and many iterations, the FSF concluded that PureOS, a modern and user-friendly Debian-derived distribution, meets these criteria.

"The FSF's high standards for distributions help users know which ones will honor their desire to be fully in control of their computers and devices. These standards also help drive the development work needed to make the free world's tools more practical and powerful than the proprietary dystopia exemplified by Windows, iOS, and Chrome. PureOS is living -- and growing -- proof that you can meet ethical standards while also achieving excellence in user experience," said John Sullivan, FSF's executive director.

"PureOS is a GNU operating system that embodies privacy, security, and convenience strictly with free software throughout. Working with the Free Software Foundation in this multi-year endorsement effort solidifies our longstanding belief that free software is the nucleus for all things ethical for users. Using PureOS ensures you are using an ethical operating system, committed to providing the best in privacy, security, and freedom," said Todd Weaver, Founder & CEO of Purism.

PureOS screenshot

FSF's licensing and compliance manager, Donald Robertson, added, "An operating system like PureOS is a giant collection of software, much of which in the course of use encourages installation of even more software like plugins and extensions. Issues are inevitable, but the team behind PureOS worked incredibly hard to fix everything we identified. They didn't just fix the issues for their own distribution -- they sent fixes upstream, and are developing new extension 'store' mechanisms that won't recommend nonfree software to users. Our endorsement means we are confident not just in the current state of affairs, but also in the team's commitment to quickly address any problems that do arise."

PureOS is developed through a combination of volunteer contributions and work funded by the company Purism. The FSF's announcement today is about the PureOS distribution, which can be installed by users on many kinds of computers and devices. It is not a certification of any particular hardware shipping with PureOS. Any such endorsements will be announced separately as part of the FSF's Respects Your Freedom device certification program.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About the GNU Operating System and Linux

Richard Stallman announced in September 1983 the plan to develop a free software Unix-like operating system called GNU. GNU is the only operating system developed specifically for the sake of users' freedom. See https://www.gnu.org/gnu/the-gnu-project.html.

In 1992, the essential components of GNU were complete, except for one, the kernel. When in 1992 the kernel Linux was re-released under the GNU GPL, making it free software, the combination of GNU and Linux formed a complete free operating system, which made it possible for the first time to run a PC without nonfree software. This combination is the GNU/Linux system. For more explanation, see https://www.gnu.org/gnu/gnu-linux-faq.html.

Media Contacts

Donald Robertson, III
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942
[email protected]

Image and logo by the PureOS team licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

21 December, 2017 04:03PM

December 18, 2017

guile-cv @ Savannah

Guile-CV version 0.1.8

Guile-CV version 0.1.8 is released! (2017.12.18)

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log

18 December, 2017 12:48AM by David Pirotte

December 17, 2017

tar @ Savannah

Version 1.30

Version 1.30 of GNU tar is available for download. See the NEWS file, for a list of important changes in this release.

17 December, 2017 12:24PM by Sergey Poznyakoff

December 07, 2017

GUIX Project news

GNU Guix and GuixSD 0.14.0 released

We are pleased to announce the new release of GNU Guix and GuixSD, version 0.14.0!

The release comes with GuixSD ISO-9660 installation images, a virtual machine image of GuixSD, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries.

It’s been 6 months since the previous release, during which 88 people contributed code and packages. The highlights include:

See the release announcement for details.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on an i686 or x86_64 machine. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el, armv7, and aarch64.

07 December, 2017 01:00PM by Ludovic Courtès

December 06, 2017

GNUnet News

gnURL 7.57.0 released

Today gnURL has been released in version 7.57.0, following the release of cURL 7.57.0.

The download is available in our directory on the GNU FTP and its mirrors (/gnu/gnunet/). 7.57.0 is the last version that will be available at https://gnunet.org/gnurl; future releases will be on the FTP.
If you are a distro maintainer for gnURL, make sure to read the whole post with details below.

06 December, 2017 04:56PM by ng0

December 04, 2017

health @ Savannah

GNU Health patchset 3.2.9 released

Dear community

GNU Health 3.2.9 patchset has been released !

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health Control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and patchsets maximize uptime for production systems and keep your system updated, without the need to do a whole installation.

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-3.2.9.tar.gz)

Updating your system with the GNU Health Control Center

Starting with the GNU Health 3.x series, you can automatically update the GNU Health and Tryton kernel and modules using the GNU Health control center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

Patchset 3.2.9 mainly fixes issues with real-time computation of fields in the evaluation, lab, and APACHE II score systems.

Minor view reordering of the WHR and BMI fields has also been applied.

Refer to the List of issues related to this patchset for a comprehensive list of fixed bugs.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.2.8, then just follow the general instructions.
You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

In most cases, GNU Health Control center (gnuhealth-control) takes care of applying the patches for you.

Follow the general instructions at

After applying the patches, make a full update of your GNU Health database as explained in the documentation.

  • Restart the GNU Health Tryton server

List of issues and tasks related to this patchset

  • bug #52580: Removing the patient field before saving the record generates an error
  • bug #52579: some on_change numeric method operations generate traceback
  • bug #52578: WHR should be on the same line as hip and waist fields

For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

For detailed information you can read about Patches and Patchsets

04 December, 2017 09:52PM by Luis Falcon

December 03, 2017

GNUnet News

The GNUnet System

Grothoff C. The GNUnet System. Informatique [Internet]. 2017 ;HDR:181. Available from: https://grothoff.org/christian/habil.pdf

03 December, 2017 02:52PM by Christian Grothoff

November 29, 2017

Values of Internet Technologies (VIT) - Announcement and Call for Donations

The Internet Society Switzerland Chapter (ISOC-CH) and the Swiss p≡p Foundation are proud to announce the first of six workshops around Values of Internet Technologies (VIT). This first workshop will focus on Decentralization. Privacy, trust and security are at stake ‒ in a purely technical sense as well as economically and socially. Decentralization is a significant enabler of agility, justice and resilience in our societies and their digital infrastructure.

29 November, 2017 12:36PM by nk

November 27, 2017

gnURL 7.56.1-2 released

Today gnURL has been released in version 7.56.1-2.

This is a first rough solution to make gnURL build without a long list of configure switches.
This release fixes https://gnunet.org/bugs/view.php?id=5143
The download is available at the usual place, https://gnunet.org/gnurl

27 November, 2017 07:42PM by ng0

November 22, 2017

parallel @ Savannah

GNU Parallel 20171122 ('Mugabe') released [stable]

GNU Parallel 20171122 ('Mugabe') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

Poem of the month:

An ode to GNU parallel
An ode to GNU parallel
An ode to GNU parallel
An ode to GNU parallel
An ode to GNU parallel
An ode to GNU parallel
-- Adam Stuckert PoisonEcology@twitter

New in this release:

  • Using GNU Parallel to speed up Google Cloud Infrastructure management https://medium.com/@pczarkowski/using-gnu-parallel-to-speed-up-google-cloud-infrastructure-management-53e5c555ec05
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
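This ordered-output guarantee can be illustrated with a small Python sketch. This is a conceptual analogy, not GNU Parallel itself: jobs finish in arbitrary order, but results are collected in input order, just as GNU Parallel serializes the output of concurrently running commands.

```python
from concurrent.futures import ThreadPoolExecutor

def run_jobs_in_order(jobs, workers=4):
    """Run jobs concurrently but return their results in input order,
    mirroring GNU Parallel's ordered-output guarantee."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() yields results in submission order even though the
        # underlying jobs may complete in any order.
        return list(pool.map(lambda job: job(), jobs))

# Five toy "jobs"; results come back in input order regardless of timing.
results = run_jobs_in_order([lambda i=i: i * i for i in range(5)])
print(results)  # [0, 1, 4, 9, 16]
```

Because the output order is deterministic, it can safely be piped into other programs, which is exactly the property the paragraph above describes.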

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
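A DBURL follows ordinary URL syntax (protocol://user:password@host:port/database), so it can carry all the login information in one string. As an illustration of how such a URL decomposes, here is a Python sketch using the standard library; this is not GNU sql's own parser, just a demonstration of the addressing scheme:

```python
from urllib.parse import urlparse

def parse_dburl(dburl):
    """Split a DBURL of the form protocol://user:password@host:port/database
    into its login components (illustrative only, not GNU sql's parser)."""
    u = urlparse(dburl)
    return {
        "protocol": u.scheme,
        "username": u.username,
        "password": u.password,
        "hostname": u.hostname,
        "port": u.port,
        "database": u.path.lstrip("/"),
    }

info = parse_dburl("mysql://alice:secret@db.example.com:3306/sales")
print(info["protocol"], info["database"])  # mysql sales
```

The hostname, credentials and database name here are made-up examples; the point is that one DBURL string replaces the scattered flags each database's native client would otherwise require.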

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
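The soft/hard distinction above can be sketched as a simple decision function. This is a conceptual model of the policy, not niceload's actual implementation; the `grace_used` flag is a hypothetical stand-in for niceload's internal bookkeeping of the short run period a soft limit permits:

```python
def may_run(load, limit, hard, grace_used=False):
    """Decide whether a throttled program may run right now.

    Hard limit: the program runs only while load is below the limit.
    Soft limit: above the limit, a short grace burst is still allowed
    before the program is suspended again (tracked by grace_used).
    """
    if load < limit:
        return True          # below the limit: always allowed to run
    if hard:
        return False         # hard limit: never run while over the limit
    return not grace_used    # soft limit: one short burst, then suspend

assert may_run(load=2.0, limit=5.0, hard=True)                    # under limit
assert not may_run(load=7.0, limit=5.0, hard=True)                # hard, over
assert may_run(load=7.0, limit=5.0, hard=False)                   # soft burst
assert not may_run(load=7.0, limit=5.0, hard=False, grace_used=True)
```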

22 November, 2017 10:17PM by Ole Tange

November 17, 2017

ignuit @ Savannah

ignuit 2.24.3 released

iGNUit is a flashcard trainer for the GNOME desktop.

With this release, the application icon has been revamped (thanks Tirifto!), and an option to exclude native markup when exporting to CSV or TSV has been added. Build files were also updated.

17 November, 2017 05:47AM by Timothy Musson

November 16, 2017

hyperbole @ Savannah

GNU Hyperbole 7, a.k.a. the Git Ready for Action Release, is now available

This is the main public release of GNU Hyperbole for 2017 and it is
bursting with new features and further quality improvements. New
capabilities, including Git and Github object links, are summarized
here:

https://git.savannah.gnu.org/cgit/hyperbole.git/plain/HY-NEWS

A short explanation of Hyperbole is included below. For more
detail or how to obtain and install it, see:

https://www.gnu.org/s/hyperbole

For a list of use cases, see:

https://www.gnu.org/s/hyperbole/HY-WHY.html

For what users think about Hyperbole, see:

https://www.gnu.org/s/hyperbole/hyperbole.html#user-quotes

16 November, 2017 04:46AM by Robert Weiner

November 15, 2017

health @ Savannah

GNU Health patchsets 3.2.7 & 3.2.8 released

Dear community

GNU Health 3.2.7 and 3.2.8 patchsets have been released!

Priority: High

Table of Contents

  • About GNU Health Patchsets
  • Updating your system with the GNU Health Control Center
  • Summary of this patchset
  • Installation notes
  • List of issues related to this patchset

About GNU Health Patchsets

We provide "patchsets" to stable releases. Patchsets allow applying bug fixes and updates on production systems. Always try to keep your production system up-to-date with the latest patches.

Patches and Patchsets maximize uptime for production systems, and keep your system updated, without the need to do a whole installation.

For more information about GNU Health patches and patchsets you can visit https://en.wikibooks.org/wiki/GNU_Health/Patches_and_Patchsets

NOTE: Patchsets are applied on previously installed systems only. For new, fresh installations, download and install the whole tarball (i.e., gnuhealth-3.2.8.tar.gz)

Updating your system with the GNU Health Control Center

Starting with the GNU Health 3.x series, you can automatically update the GNU Health and Tryton kernel and modules using the GNU Health Control Center program.

Please refer to the administration manual section ( https://en.wikibooks.org/wiki/GNU_Health/Control_Center )

The GNU Health control center works on standard installations (those done following the installation manual on wikibooks). Don't use it if you use an alternative method or if your distribution does not follow the GNU Health packaging guidelines.

Summary of this patchset

Yes folks, two for the price of one :-)

- Patch 3.2.7 fixes a problem related to signing a death certificate
- Patch 3.2.8 fixes a dependency issue related to the calendar_webdav3 package on installations that use the Python packages (setup / pip). The standard installation method is not affected.

We also updated the descriptions and URLs of the trytond_calendar_webdav3 and trytond_webdav3 packages, which are no longer supported by Tryton and are now maintained as GNU Health developments.

Refer to the List of issues related to this patchset for a comprehensive list of fixed bugs.

Installation Notes

You must apply previous patchsets before installing this patchset. If your patchset level is 3.2.6, then just follow the general instructions.
You can find the patchsets at GNU Health main download site at GNU.org (https://ftp.gnu.org/gnu/health/)

Follow the general instructions at

  • Restart the GNU Health Tryton server

List of issues and tasks related to this patchset

  • bug #52366: Error when signing the death certificate
  • task #14626: Renaming Package names prefix trytond_ from Pypi

(https://savannah.gnu.org/task/index.php?14626)

For detailed information about each issue, you can visit https://savannah.gnu.org/bugs/?group=health
For detailed information about each task, you can visit https://savannah.gnu.org/task/?group=health

15 November, 2017 06:40PM by Luis Falcon

November 08, 2017

easejs @ Savannah

GNU ease.js 0.2.9 released

This release succeeds v0.2.8, which was released 15 July, 2016. There are no backwards-incompatible changes, but certain default behaviors have changed (see changes below). Support continues for ECMAScript 3+.

Changes between 0.2.8 and 0.2.9
-------------------------------

  • Class constructors are now virtual by default. The manual has been updated with information about this change.
  • Method overrides are now implicitly virtual. This is consistent with other object-oriented languages and solves the problem with breaking stackable traits if the author forgets to supply `virtual' to an overridden (intended-to-be-stackable) method. The manual has been updated.
  • A new method `Class.assertInstanceOf' and its alias `Class.assertIsA' have been added to eliminate the boilerplate of enforcing polymorphism. They are like `Class.isInstanceOf' and `Class.isA' respectively, but fail by throwing a TypeError. The manual has been updated to include these two methods, along with some rewording of the containing section.
  • `Class.extend(Base)', where `Base' is a class, will now assume that you forgot the class definition and throw an error rather than trying to use `Base' as the definition object.
  • [bugfix] Using `#constructor' (alias of `#__construct') in Error subtypes will no longer complain about an attempt to redefine `#__construct'.
  • `Constructors' section of manual has been reworded and references to poor practices (static classes, Singletons) have been removed.
  • Manual (and website) examples modernized to use ECMAScript 6 syntax. Users must still write ES3 syntax if they want to use ease.js in ES3 environments, of course.
  • INSTALL file added to repository (removed from .gitignore). This was previously (and unintentionally) only available in the distribution archives.
  • Copyright years updated on combined and minified distributions.

This release contains a number of bugfixes for traits, a feature that is stable
but still under development:

  • [bugfix] Methods of trait class supertypes now apply with the correct context. (Feature added in 0.2.7)
  • [bugfix] Traits extending classes may now be named using the `Trait('Name').extend(C, dfn)' notation. (Feature added in 0.2.7)
  • [bugfix] Can now mix in traits with class supertypes that define constructors. (Feature added in 0.2.7)
  • [bugfix] `this.__inst' in traits now correctly references the object being mixed into; previously, this was `undefined'.

I apologize for the (extreme) delay in this release: the process was stalled for many months while waiting for certain legal documents after my employer was purchased by another company.

Release notes for past releases are available at:
https://www.gnu.org/software/easejs/release-notes.html

More information, including an online manual, can be found on GNU's website:
https://gnu.org/software/easejs

08 November, 2017 04:18AM by Mike Gerwitz

November 02, 2017

mailutils @ Savannah

Version 3.4

Version 3.4 is available for download. This is a bug-fix release.

02 November, 2017 11:57AM by Sergey Poznyakoff

October 31, 2017

FSF News

Federal employees can now support the FSF through the Combined Federal Campaign

The Combined Federal Campaign (CFC) is the world's largest annual workplace giving campaign, allowing US federal civilian, postal, and military employees to pledge donations to nonprofit charities such as the Free Software Foundation (FSF). Last year, federal employees voluntarily participating in the CFC contributed more than $167 million to charitable causes.

The FSF's work relies on thousands of individual supporters and members across the United States and around the world, who contribute, on average, less than $200 each. "We know there are many free software supporters working in the US federal government," said FSF executive director John Sullivan. "We're glad they will have this new way to contribute to the free software movement."

Pledges to support the FSF through the Combined Federal Campaign can be made by designating the Free Software Foundation as the beneficiary charity. The FSF's CFC identification code is 63210. Donors can pledge until the end of the campaign period on January 12, 2018.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at https://fsf.org and https://gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

Media Contacts

Georgia Young
Program Manager
Free Software Foundation
+1 (617) 542 5942
[email protected]

31 October, 2017 07:40PM