Fedora People

Mutation Testing vs. Coverage

Posted by Alexander Todorov on December 27, 2016 09:48 AM

At GTAC 2015 Laura Inozemtseva gave a lightning talk titled Coverage is Not Strongly Correlated with Test Suite Effectiveness, which is the single event that got me hooked on mutation testing. This year, at GTAC 2016, Rahul Gopinath made a counter argument with his lightning talk Code Coverage is a Strong Predictor of Test Suite Effectiveness. So which one is better? I urge you to watch both talks and take notes before reading about my practical experiment and other opinions on the topic!

DISCLAIMER: I'm a heavy contributor to Cosmic-Ray, the mutation testing tool for Python so my view is biased!

Both Laura and Rahul (and you will too) agree that a test suite's effectiveness depends on the strength of its oracles, in other words the assertions you make in your tests. This is what makes a test suite good and determines its ability to detect bugs when they are present. I've decided to use pelican-ab as a practical example. pelican-ab is a plugin for Pelican, the static site generator for Python. It allows you to generate A/B experiments by writing out the content into different directories and adjusting URL paths based on the experiment name.

Can 100% code coverage detect bugs?

Absolutely NOT! In version 0.2.1, commit ef1e211, pelican-ab had the following bug:

Given: Pelican's DELETE_OUTPUT_DIRECTORY is set to True (which it is by default)
When: we generate several experiments using the commands:
    AB_EXPERIMENT="control" make regenerate
    AB_EXPERIMENT="123" make regenerate
    AB_EXPERIMENT="xy" make regenerate
    make publish
Actual result: only the "xy" experiment (the last one) is published online.
And: all of the other content is deleted.

Expected result: content from all experiments is available under the output directory.

This is because before each invocation Pelican deletes the output directory and re-creates the entire content structure. The bug was not caught despite 100% line + branch coverage. See Build #10 for more info.

Can 100% mutation coverage detect bugs?

So I branched off from commit ef1e211 into the mutation_testing_vs_coverage_experiment branch (requires Pelican==3.6.3).

After the initial execution of Cosmic Ray I have 2 mutants left:

$ cosmic-ray run --baseline=10 --test-runner=unittest example.json pelican_ab -- tests/
$ cosmic-ray report example.json 
job ID 29:Outcome.SURVIVED:pelican_ab
command: cosmic-ray worker pelican_ab mutate_comparison_operator 3 unittest -- tests/
--- mutation diff ---
--- a/home/senko/pelican-ab/pelican_ab/__init__.py
+++ b/home/senko/pelican-ab/pelican_ab/__init__.py
@@ -14,7 +14,7 @@
     def __init__(self, output_path, settings=None):
         super(self.__class__, self).__init__(output_path, settings)
         experiment = os.environ.get(jinja_ab._ENV, jinja_ab._ENV_DEFAULT)
-        if (experiment != jinja_ab._ENV_DEFAULT):
+        if (experiment > jinja_ab._ENV_DEFAULT):
             self.output_path = os.path.join(self.output_path, experiment)
             Content.url = property((lambda s: ((experiment + '/') + _orig_content_url.fget(s))))
             URLWrapper.url = property((lambda s: ((experiment + '/') + _orig_urlwrapper_url.fget(s))))

job ID 33:Outcome.SURVIVED:pelican_ab
command: cosmic-ray worker pelican_ab mutate_comparison_operator 7 unittest -- tests/
--- mutation diff ---
--- a/home/senko/pelican-ab/pelican_ab/__init__.py
+++ b/home/senko/pelican-ab/pelican_ab/__init__.py
@@ -14,7 +14,7 @@
     def __init__(self, output_path, settings=None):
         super(self.__class__, self).__init__(output_path, settings)
         experiment = os.environ.get(jinja_ab._ENV, jinja_ab._ENV_DEFAULT)
-        if (experiment != jinja_ab._ENV_DEFAULT):
+        if (experiment not in jinja_ab._ENV_DEFAULT):
             self.output_path = os.path.join(self.output_path, experiment)
             Content.url = property((lambda s: ((experiment + '/') + _orig_content_url.fget(s))))
             URLWrapper.url = property((lambda s: ((experiment + '/') + _orig_urlwrapper_url.fget(s))))

total jobs: 33
complete: 33 (100.00%)
survival rate: 6.06%

The last one, job 33, is an equivalent mutation. The first one, job 29, is killed by the test added in commit b8bff85. For all practical purposes we now have 100% code coverage and 100% mutation coverage. The bug described above still exists though.
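To see why the `>` mutant from job 29 needs a carefully chosen oracle, here is a self-contained sketch (a simplified stand-in for pelican_ab's real code, not the commit's actual test; the `compare` parameter exists only so we can plug in the mutated operator). An experiment name that sorts lexicographically before "control" is exactly the input on which `!=` and the mutated `>` disagree:

```python
import os

# _ENV_DEFAULT mirrors jinja_ab._ENV_DEFAULT; the real value lives in jinja_ab.
_ENV_DEFAULT = "control"

def output_path(base, experiment, compare=str.__ne__):
    """Simplified model of pelican_ab's output path adjustment."""
    if compare(experiment, _ENV_DEFAULT):
        return os.path.join(base, experiment)
    return base

# "123" sorts before "control", so '!=' is True while the mutated '>' is False:
assert output_path("output", "123") == os.path.join("output", "123")
assert output_path("output", "123", str.__gt__) == "output"  # mutant detected
```

A test asserting on the generated path for an experiment like "123" therefore kills the mutant, whereas one using only a name such as "xy" (which sorts after "control") would let it survive.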

How can we detect the bug?

The bug isn't detected by any test because we don't have tests designed to perform and validate the exact same steps that a real person would execute when using pelican-ab. Such a test is added in commit ca85bd0, and you can see that it causes Build #22 to fail.

Experiment with setting DELETE_OUTPUT_DIRECTORY=False in tests/pelicanconf.py and the test will PASS!

Is pelican-ab bug free?

Of course not. Even with 100% code and mutation coverage, and after manually constructing a test which mimics user behavior, there is at least one more bug present: a pylint bad-super-call error, fixed in commit 193e3db. For more information about the error see this blog post.

Other bugs found

During my humble experience with mutation testing so far, I've added quite a few new tests and discovered two bugs which had gone unnoticed for years. The first one is a constructor parameter not being passed to the parent constructor; see PR#96, pykickstart/commands/authconfig.py:

     def __init__(self, writePriority=0, *args, **kwargs):
-        KickstartCommand.__init__(self, *args, **kwargs)
+        KickstartCommand.__init__(self, writePriority, *args, **kwargs)
         self.authconfig = kwargs.get("authconfig", "")

The second bug is a parameter being passed to the parent class constructor when the parent class doesn't care about that parameter. For example, PR#96, pykickstart/commands/driverdisk.py:

-    def __init__(self, writePriority=0, *args, **kwargs):
-        BaseData.__init__(self, writePriority, *args, **kwargs)
+    def __init__(self, *args, **kwargs):
+        BaseData.__init__(self, *args, **kwargs)

Also note that pykickstart has nearly 100% test coverage as a whole, and the affected files were 100% covered as well.
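The first bug class is easy to reproduce in isolation. The sketch below (class names simplified, not pykickstart's real code) shows why line coverage alone cannot distinguish the buggy constructor from the fixed one:

```python
# Simplified model of the PR#96 bug: the child class accepts
# writePriority but silently drops it instead of forwarding it.
class KickstartCommand:
    def __init__(self, writePriority=0, *args, **kwargs):
        self.writePriority = writePriority

class BuggyCommand(KickstartCommand):
    def __init__(self, writePriority=0, *args, **kwargs):
        KickstartCommand.__init__(self, *args, **kwargs)  # bug: not forwarded

class FixedCommand(KickstartCommand):
    def __init__(self, writePriority=0, *args, **kwargs):
        KickstartCommand.__init__(self, writePriority, *args, **kwargs)

# Both constructors execute fully, so both are 100% line-covered;
# only an assertion on the resulting priority tells them apart:
assert BuggyCommand(writePriority=10).writePriority == 0
assert FixedCommand(writePriority=10).writePriority == 10
```

A mutation that drops or reorders the forwarded argument survives unless some test asserts on the attribute's final value, which is exactly the kind of oracle line coverage never forces you to write.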

The bugs above don't seem like a big deal and when considered out of context are relatively minor. However pykickstart's biggest client is anaconda, the Fedora and Red Hat Enterprise Linux installation program. Anaconda uses pykickstart to parse and generate text files (called kickstart files) which contain information for driving the installation in a fully automated manner. This is used by everyone who installs Linux on a large scale and is pretty important functionality!

writePriority controls the order in which individual commands are written to the file at the end of the installation. In rare cases commands may depend on being written in a particular order. Now imagine the bugs above producing a disordered kickstart file which a system administrator thinks should work, but doesn't. It may be that this administrator is trying to provision hundreds of Linux systems to bootstrap a new data center, or performing disaster recovery. You get the scale of the problem now, don't you?

To be honest I've seen bugs of this nature but not in the last several years.

This is all to say a minor change like this may have an unexpectedly big impact somewhere down the line.

Conclusion

With respect to the above findings and my bias I'll say the following:

  • Neither 100% code coverage nor 100% mutation coverage is a silver bullet against bugs;
  • 100% mutation coverage appears to be better than 100% code coverage in practice;
  • Mutation testing clearly points out pieces of code which need refactoring, which in turn minimizes the number of possible mutations;
  • Mutation testing causes you to write more asserts and construct more detailed tests which is always a good thing when testing software;
  • You can't replace humans designing test cases just yet, but you can give them tools that allow them to write more and better tests;
  • You should not rely on a single tool (or two of them) because tools are only able to find bugs they were designed for!

Bonus: What others think

As a bonus to this article let me share a transcript from the mutation-testing.slack.com community:

atodorov 2:28 PM
Hello everyone, I'd like to kick-off a discussion / interested in what you think about
Rahul Gopinath's talk at GTAC this year. What he argues is that test coverage is still
the best metric for how good a test suite is and that mutation coverage doesn't add much
additional value. His talk is basically the opposite of what @lminozem presented last year
at GTAC. Obviously the community here and especially tools authors will have an opinion on
these two presentations.

tjchambers 12:37 AM
@atodorov I have had the "pleasure" of working on a couple projects lately that illustrate
why LOC test coverage is a misnomer. I am a **strong** proponent of mutation testing so will
declare my bias.

The projects I have worked on have had a mix of test coverage - one about 50% and
another > 90%.

In both cases however there was a significant difference IMO relative to mutation coverage
(which I have more faith in as representative of true tested code).

Critical factors I see when I look at the difference:

- Line length: in both projects the line lengths FAR exceeded visible line lengths that are
"acceptable". Many LONGER lines had inline conditionals at the end, or had ternary operators
and therefore were in fact only 50% or not at all covered, but were "traversed"

- Code Conviction (my term): Most of the code in these projects (Rails applications) had
significant Hash references all of which were declared in "traditional" format hhh[:symbol].
So it was nearly impossible for the code in execution to confirm the expectation of the
existence of a hash entry as would be the case with stronger code such as "hhh.fetch(:symbol)"

- Instance variables abound: As with most of Rails code the number of instance variables
in a controller are extreme. This pattern of reference leaked into all other code as well,
making it nearly impossible with the complex code flow to ascertain proper reference
patterns that ensured the use of the instance variables, so there were numerous cases
of instance variable typos that went unnoticed for years. (edited)

- .save and .update: yes again a Rails issue, but use of these "weak" operations showed again
that although they were traversed, in many cases those method references could be removed
during mutation and the tests would still pass - a clear indication that save or update was
silently failing.

I could go on and on, but the mere traversal of a line of code in Ruby is far from an indication
of anything more than it may be "typed in correctly".

@atodorov Hope that helps.

LOC test coverage is a place to begin - NOT a place to end.

atodorov 1:01 AM
@tjchambers: thanks for your answer. It's too late for me here to read it carefully but
I'll do it tomorrow and ping you back

dkubb 1:13 AM
As a practice mutation testing is less widely used. The tooling is still maturing. Depending on your
language and environment you might have widely different experiences with mutation testing

I have not watched the video, but it is conceivable that someone could try out mutation testing tools
for their language and conclude it doesn’t add very much

mbj 1:14 AM
Yeah, I recall talking with @lminozem here and we identified that the tools she used likely
show high rates of false positives / false coverage (as the tools likely do not protect against
certain types of integration errors)

dkubb 1:15 AM
IME, having done TDD for about 15+ years or so, and mutation testing for about 6 years, I think
when it is done well it can be far superior to using line coverage as a measurement of test quality

mbj 1:16 AM
Any talk pro/against mutation testing must, as the tool basis is not very homogeneous, show a non consistent result.

dkubb 1:16 AM
Like @tjchambers says though, if you have really poor line coverage you’re not going to
get as much of a benefit from mutation testing, since it’s going to be telling you what
you already know — that your project is poorly tested and lots of code is never exercised

mbj 1:19 AM
Thats a good and likely the core point. I consider that mutation testing only makes sense
when aiming for 100% (and this is to my experience not impractical).

tjchambers 1:20 AM
I don't discount the fact that tool quality in any endeavor can bring pro/con judgements
based on particular outcomes

dkubb 1:20 AM
What is really interesting for people is to get to 100% line coverage, and then try mutation
testing. You think you’ve done a good job, but I guarantee mutation testing will find dozens
if not hundreds of untested cases .. even in something with 100% line coverage

To properly evaluate mutation testing, I think this process is required, because you can’t
truly understand how little line coverage gives you in comparison

tjchambers 1:22 AM
But I don't need a tool to tell me that a 250 character line of conditional code that by
itself would be an oversized method AND counts more because there are fewer lines in the
overall percentage contributes to a very foggy sense of coverage.

dkubb 1:22 AM
It would not be unusual for something with 100% line coverage to undergo mutation testing
and actually find out that the tests only kill 60-70% of possible mutations

tjchambers 1:22 AM
@dkubb or less

dkubb 1:23 AM
usually much less :stuck_out_tongue:

it can be really humbling

mbj 1:23 AM
In this discussion you miss that many test suites (unless you have noop detection):
Will show false coverage.

tjchambers 1:23 AM
When I started with mutant on my own project which I developed I had 95% LOC coverage

mbj 1:23 AM
Test suites need to be fixed to comply to mutation testing invariants.

tjchambers 1:23 AM
I had 34% mutation coverage

And that was ignoring the 5% that wasn't covered at all

mbj 1:24 AM
Also if the tool you compare MT with line coverage on: Is not very strong,
the improvement may not be visible.

dkubb 1:24 AM
another nice benefit is that you will become much better at enumerating all
the things you need to do when writing tests

tjchambers 1:24 AM
@dkubb or better yet - when writing code.

The way I look at it - the fewer the alive mutations the better the test,
the fewer the mutations the better the code.

dkubb 1:29 AM
yeah, you can infer a kind of cyclomatic complexity by looking at how many mutations there are

tjchambers 1:31 AM
Even without tests (not recommended) you can judge a lot from the mutations themselves.

I still am an advocate for mutations/LOC metric

As you can see, members of the community are strong supporters of mutation testing, all of them having much more experience than I do.

I'd like to hear more practical examples if you are able to share them since I'm collecting conference material on this topic. Thanks for reading and happy testing!

rkt image build command reference

Posted by Kushal Das on December 27, 2016 06:15 AM

In my last post, I wrote about my usage of rkt. I have also posted the basic configuration to create your own container images. Today we will learn more about the various build commands used in .acb files. We use these commands with the acbuild tool.

begin

begin starts a new build. The build information is stored inside the .acbuild directory in the current directory. By default, the build starts with an empty rootfs, but we can pass options to change that behavior: we can start from a local filesystem, a local aci image, or even a remote aci image. To create the Fedora 25 aci image, I extracted the rootfs into a local directory and used that with the begin command. Examples:

begin /mnt/fedora
begin ./fedora-25-linux-amd64.aci

dep

The dep command is used to add a separate aci as a dependency of the current aci. In the rootfs, the current aci sits on top of any dependency images. The order of the dependencies is important, so keep an eye on it while working on a new aci image. For example, to build an image on top of the Fedora aci image we use the following line:

dep add kushal.fedorapeople.org/rkt/fedora:25

run

We can execute any command inside the container we are building using the run command. For example to install a package using dnf we will use the following line:

run -- dnf install htop -y

The actual command (which will run inside the container) comes after --; anything before that is considered part of the run command itself.

environment

We can also add or remove environment variables in the container image. We use the environment command for this:

environment add HOME /mnt
environment add DATAPATH /opt/data

copy

The copy command is used to copy a file or a directory from the local filesystem into the aci image. For example, here we are copying the dnf.conf file to the /etc/dnf/ directory inside the container image:

copy ./dnf.conf /etc/dnf/dnf.conf

mount

We use the mount command to mark a location in the aci image which should be mounted while running the container. Remember one thing about mount points (this is true for ports too): they work based on the name you give them. Here we create a mount point called apphome, and in the next command we specify the host path to mount there:

mount add apphome /opt/app/data
rkt run --volume apphome,kind=host,source=/home/kushal/znc,readOnly=false my-image.aci

port

Similar to the mount command, we can use the port command to mark any port of the container which can be mapped to the host system. We need to specify a name, the protocol (either udp or tcp), and finally the port number. We use the provided name to map it to a port on the host:

port add http tcp 80
port add https tcp 443

set-user

The set-user command specifies the user which will be used in the container environment.

set-user kushal

Remember to create the user before you try to use it.

set-group

Similar to the set-user command, set-group specifies the group which will be used to run the application inside the container.

set-working-directory

set-working-directory is used to set the working directory for the application inside the container.

set-working-directory /opt/data

set-exec

Using set-exec we specify the command to run as the application. In the example below we run the znc command as the application in the container:

set-exec -- /usr/bin/znc --foreground

write

The final command for today is write. Using this command we create the final image from the current build environment. There is an --overwrite flag, which lets us overwrite an existing image file:

write --overwrite znc-latest-linux-amd64.aci
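For reference, the commands above can be combined into a single build script. The sketch below is a hypothetical htop.acb (the file name, image names, and paths are examples, not from an actual build):

```
begin ./fedora-25-linux-amd64.aci
run -- dnf install htop -y
copy ./dnf.conf /etc/dnf/dnf.conf
set-exec -- /usr/bin/htop
write --overwrite htop-latest-linux-amd64.aci
```

Each line is one acbuild command, executed in order against the build state kept in the .acbuild directory.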

I hope this post helps you understand the build commands so that you can use them to build your own rkt images. In the future, if I need a command reference, I can read this blog post.

On recipes, one more time

Posted by Matthias Clasen on December 26, 2016 06:36 PM

I’m still not quite done with this project. And since it is vacation time, I had some time to spend on it, leading to a release with some improvements that I’d like to present briefly.

One thing I noticed missing right away when I started to transcribe one of my mother's recipes was a segmented ingredients list. What I mean by that is the typical cake recipe that says “For the dough…”, “For the frosting…”, and so on.

So I had to add support for this before I could continue with the recipe. The result looks like this:

Another weak point that became apparent was editing the ingredients on the edit page. Initially, the ingredients list was just a plain text field. The previous release changed this to a list view, but the editing support consisted of just a popover with plain entries to add a new row.

This turned out to be hard to get right, and I had to go back to the designers (thanks, Jakub and Elvin) to get some ideas. I am reasonably happy with the end result. The popover now provides suggestions for both ingredients and units, while still allowing you to enter free-form text. And the same popover is now also available for editing existing ingredients:

Just in time for the Christmas release, I was reminded that we now have a nice and simple solution for spell-checking in GTK+ applications, with Sébastien Wilmet’s gspell library. So I quickly added spell-checking to the text fields in Recipes:

Lastly, not really a new feature or due to my efforts, but Recipes looks really good with the dark theme as well.

Looking back at the goals that are listed on the design page for this application, we are almost there:

  • Find delicious food recipes to cook from all over the world
  • Assist people with dietary restrictions
  • Allow defining ingredient constraints
  • Print recipes so I can pin them on my fridge
  • Share recipes with my friends using e-mail

The one thing that is not covered yet is sharing recipes by email. For that, we need work on the Flatpak side to create a sharing portal that lets applications send email.

And for the first goal we really need your support – if you have been thinking about writing up one of your favorite recipes, the holiday season is the perfect opportunity to cook it again, take some pictures of the result and contribute your recipe!

 

Work sprints with a Pomodoro timer

Posted by Fedora Magazine on December 26, 2016 03:04 PM

Time management is important for everyone. When we get our tasks done efficiently, we leave more time for other things we’re passionate about. There are numerous tools on your Fedora system to help you manage your time effectively. One of them is a Pomodoro timer.

The Pomodoro technique was invented by Francesco Cirillo. He named it after the tomato-shaped timer he used in his university years to manage his time. There’s more to the method than just a timer, but basically it means setting up sprint time.

During a sprint, you focus only on the task and goal at hand, and avoid distractions. Each sprint has a specific goal, and the end of the sprint signals a break to relax and set up for the next sprint. Sprints often come in a series, and a longer break follows the end of the series.

By breaking your work into sprints like this, you can focus intently on a specific goal. As you complete sprints, you build up accomplishment and morale. If your sprints are organized around a larger project, you’ll often see big progress in a short time.

Installing the Pomodoro timer

Fedora Workstation’s GNOME Shell has a Pomodoro timer extension available. To install it, search for Pomodoro in the Software tool, or run this command:

sudo dnf install gnome-shell-extension-pomodoro

To see the timer, hit Alt+F2, type r and hit Enter to restart the Shell. You can also log out and log back in, although you’ll need to save your work first. The timer will appear at the top right of your Shell:

[Screenshot: locating the Pomodoro control at the top right of the Shell]

You can use the Preferences panel to have more control over your sprints. There are some interesting options for sound (like a softly ticking clock) you might find energizing — or annoying! Use this control panel to adjust the intervals and interface to suit your preferences.

One of the custom options I like is the ability to start a sprint with a key combination. By default, Ctrl+Alt+P starts the timer, but you can adjust this as desired. Any time I hit a stride while writing and think a sprint is in order, I can use the keyboard to easily start and commit to one.

Other timer apps

But what if you’re not using GNOME? There are options for you, too.

KDE

If you’re using KDE, you can use the Timer app, but you might prefer adding the widget to your screen. Right-click the widget to set the timer to a preset limit, or you can use a mouse wheel to customize. Then start the timer to remind you when the sprint is finished. A notification appears when the timer is done.

XFCE

If you happen to be using XFCE, you might like the xfce4-timer-plugin app. This app functions both as a countdown timer and a scheduled timer. You can set up a custom countdown timer for different sized sprints, and recall them as desired in the alarm list. You can also provide a custom command to run at the conclusion of the sprint.

Cinnamon

Although not available in Fedora directly, there is a fully featured Pomodoro Timer applet for Cinnamon. One source where you can find the timer is via Cinnamon Spices. This applet has a variety of settings, similar to the GNOME timer, specifically built for the Pomodoro method.

 

Closing thoughts

Using the Pomodoro method won’t single-handedly make you more efficient. It can be an important part, though, of an overall approach to managing your time and effort. Do you want more tools you can use to track and improve your work habits? Check out this previous article in the Magazine for some thoughts on the subject.


Cover Image based on https://www.flickr.com/photos/gazeronly/7944002016/ — CC-BY

The art of cutting edge, Doom 2 vs the modern Security Industry

Posted by Josh Bressers on December 25, 2016 06:05 PM
During the holiday, I started playing Doom 2. I bet I haven't touched this game in more than ten years; I can't even remember the last time I played it. My home directory was full of garbage, and while cleaning it up I came across doom2.wad. I've been carrying this file around in my home directory for nearly twenty years now. It's always there, like an old friend you know you can call at any time, day or night. I decided it was time to install one of the Doom engines and give it a go. I picked prboom; it's something I used a long time ago and doesn't have any fancy features like mouselook or jumping. Part of the appeal is keeping the experience close to the original. Plus, if you could jump, a lot of these levels would be substantially easier; the game depends on not having those features.

This game is a work of art. You don’t see games redefining the industry like this anymore. The original Doom is good, but Doom 2 is like adding color to a black and white picture, it adds a certain quality to it. The game has a story, it’s pretty bad but that's not why we play it. The appeal is the mix of puzzles, action, monsters, and just plain cleverness. I love those areas where you have two crazy huge monsters fighting, you wonder which will win, then start running like crazy when you realize the winner is now coming after you. The games today are good, but it’s not exactly the same. The graphics are great, the stories are great, the gameplay is great, but it’s not something new and exciting. Doom was new and exciting. It created a whole new genre of gaming, it became the bar every game that comes after it reaches for. There are plenty of old games that when played today are terrible, even with the glasses of nostalgia on. Doom has terrible graphics, but that doesn’t matter, the game is still fantastic.

This all got me thinking about how industries mature. Crazy new things stop happening, the existing players find a rhythm that works for them and they settle into it. When was the last time we saw a game that redefined the gaming industry? There aren’t many of these events. This brings us to the security industry. We’re at a point where everyone is waiting for an industry defining event. We know it has to happen but nobody knows what it will be.

I bet this is similar to gaming back in the days of Doom. The 486 just came out, it had a ton of horsepower compared to anything that had come before it. Anyone paying attention knew there were going to be awesome advancements. We gave smart people awesome new tools. They delivered.

Back to security now. We have tons of awesome new tools. Cloud, DevOps, Artificial Intelligence, Open Source, microservices, containers. The list is huge and we’re ready for the next big thing. We all know the way we do security today doesn’t really work, a lot of our ideas and practices are based on the best 2004 had to offer. What should we be doing in 2017 and beyond? Are there some big ideas we’re not paying attention to but should be?

Do you have thoughts on the next big thing? Or maybe which Doom 2 level is the best (Industrial Zone). Let me know.

Episode 22 - IoT Wild West

Posted by Open Source Security Podcast on December 25, 2016 01:36 PM
Josh and Kurt talk about planned obsolescence and IoT devices. Should manufacturers brick devices? We also have a crazy discussion about the ethics of hacking back.

Download Episode

Show Notes


Rsync .ssh keys and permissions

Posted by kriptonium on December 24, 2016 12:56 PM
I've had the same ssh keys for years. I just rsync them to a new system when I get one. I always seem to end up with mucked up permissions moving them around and never seem to remember how the permissions were set. Here's my reminder.
~/.ssh 700
~/.ssh/authorized_keys 600
~/.ssh/config 644
~/.ssh/id_dsa 600
~/.ssh/id_dsa.pub 644
~/.ssh/known_hosts 644
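Rather than remembering the table, the fix can be scripted. A small sketch, assuming the same file names as above (id_dsa; adjust for id_rsa and friends), which skips any file that doesn't exist:

```shell
# Restore standard ~/.ssh permissions after an rsync
mkdir -p ~/.ssh
chmod 700 ~/.ssh
for f in ~/.ssh/authorized_keys ~/.ssh/id_dsa; do
    if [ -e "$f" ]; then chmod 600 "$f"; fi
done
for f in ~/.ssh/config ~/.ssh/id_dsa.pub ~/.ssh/known_hosts; do
    if [ -e "$f" ]; then chmod 644 "$f"; fi
done
```

sshd refuses keys with overly permissive modes, so running this right after the rsync saves a debugging round-trip.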

Kannolo, a pure-KDE Fedora Remix

Posted by Kevin Kofler on December 24, 2016 03:03 AM

It has been a long time since my last blog post, but that does not mean I stopped doing Fedora-related development. Today, I would like to announce a new project of mine that I had been silently working on for a couple of years already. In several cultures, it is customary to give gifts today (in the evening) or tomorrow, so you can take this as a gift for the holidays.

Kannolo is an installable graphical Fedora Remix without GTK+, based on the KDE Plasma Desktop workspace and the Calamares installer. About the name: A “torta fedora” is a Sicilian cake. A “cannolo” is a similar Sicilian sweet with a different shape. The ‘K’ stands for “KDE”.

There is currently a version based on the upstream release 25, and a version based on the upstream release 24, both with all updates up to 2016-12-23. (This is now the third ISO release of Kannolo. The first release was on 2016-12-15.) Only x86_64 images are available at this time.

A distinguishing feature of Kannolo is that it is a hybrid live and netinstall image, powered by the Calamares netinstall module, which offers a selection of recommended (selected by default) and featured optional packages (including developer-oriented packages) to install from the Internet during the installation process.

You can find more information and screenshots at https://sourceforge.net/projects/kannolo/. Further information can be found in the release notes.

You can find these (and eventually future) releases in the Files section. Future release announcements will be posted in the News section.

Enjoy!


Les fichiers po c’est pas si mal !

Posted by Jean-Baptiste Holcroft on December 23, 2016 11:00 PM

Selon ma petite expérience de traducteur, les fichiers po, c’est souvent le moins mauvais format, et pourquoi donc ?

L’ordinateur vs les applications en ligne

Les applications qui s’installent sur le poste de travail utilisent déjà quasiment toutes les fichiers po via gettext et les fichiers ts (monde de Qt). Et presque personne ne vient critiquer fortement ce format. C’est souvent le choix naturel, sauf pour les amis de Mozilla qui ont inventé mieux que tout le monde avec un fichier L20N. Malheureusement je n’ai pas eu l’occasion de comprendre les tenants et aboutissants ni ce que celui-ci apportait.

Là où ça semble plus compliqué, c’est pour les applications web, peut-être du fait que l’arborescence du disque est probablement moins contraignante, ou peut-être est-ce lié à une évolution plus rapide des techniques ? Dans tous les cas, la façon d’internationaliser une application en ligne est vraiment différente, on trouve un peu de tout, et ma petite expérience ne m’a pas permis de trouver une tendance évidente autre qu’une faible présence de gettext.

Mobile applications are simpler: Android has its own format, and I assume the situation is much the same for iOS and Windows Phone.

The formats I have seen in online applications

Very often these are monolingual files, that is, files using a key-value mechanism. You put the key in the interface, and a JavaScript library looks up the corresponding value in the user's language, falling back to English.

Among others, you can see files in:

  • JSON, which JavaScript quite likes; it is fashionable,
  • YAML, often seen in configuration files, lighter to read,
  • XML… no comment ;)
  • etc.
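For instance, a monolingual key-value file in JSON might look like this (keys and strings invented for illustration):

```json
{
  "home.title": "Welcome",
  "home.latest_posts": "Latest posts"
}
```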

Sometimes they are multilingual files, that is, files that include both the source string and the target string in each translation file. You simply mark the sentence as translatable in the interface, and it is replaced, at site-generation time or on the fly, by its translation.

Apart from gettext (po) files, and occasionally an XML file, I know of no other format used this way.

To get a better idea of what these files look like, you can browse the Weblate documentation for some examples, or that of the translate-toolkit, on which that tool relies.

What difference does it make for the translator?

For the translator, a po file:

  • contains both the English source sentence and its translation:
    • Other formats force you to compare two files side by side, which is not very practical.
  • indicates where the original sentence is located:
    • Without this information you have to go searching, assuming you even know how.
  • leaves the original content unchanged:
    • Reading the context is easier.
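Even a minimal po entry shows all three properties at once: the source-location comment, the source string, and the translation (strings invented for illustration):

```po
#: content/pages/about.md:12
msgid "Welcome to my blog"
msgstr "Bienvenue sur mon blog"
```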

Finally, there are various tools for editing po files on your own machine (I know Poedit and Lokalize) and for running quality checks on them (Dennis, Pology, Translate Toolkit, etc.).

Developer friends, pick the most standard file format right from the start; it makes everyone's life easier. No, po files are not perfect, but they have undeniable qualities.

Apparently the Mozillians have created something even better, called L20N, but I have not yet understood what it brings, never having seen it used…

Creating Fedora 25 LXQt Remix

Posted by Christian Dersch on December 23, 2016 10:57 PM
After some discussions and initial thoughts within the LXQt SIG, I decided to put a first Fedora 25 LXQt remix together. Now I'd like to share the idea to get some input, especially on the selection of applications 😊

LXQt in Fedora – the current state

We have had packages and a package group in Fedora for quite a while now; LXQt was submitted as a change in Fedora 22.
Up-to-date packages (version 0.11.x) are available for all supported Fedora releases and EPEL 7, but right now there is no live spin for this nice desktop. Fedora already provides a nice selection of spins for Cinnamon, LXDE, MATE, Plasma and Xfce. As LXQt is a growing project and some other distributions already provide (e.g. Manjaro) or are developing (e.g. Lubuntu) LXQt versions, Fedora should not wait any longer, especially as most of the packaging is done. The Fedora LXQt SIG has also received some requests for a spin from users. Having gained experience composing spins when I created Fedora Astronomy, I decided to change this.

Fedora LXQt – Spin or Remix?

A spin is an official compose of Fedora which has gone through the process of community discussion, trademark approval etc. and is integrated into Fedora's release engineering. A Fedora Remix, on the other hand, is a non-official project which is based on Fedora but not approved by the Fedora project itself. In the case of the LXQt Remix, which is 100% based on Fedora, we provide it this way first because we want to get some community input, and it would not have been possible to provide an official LXQt spin for Fedora 25. For Fedora 26 we plan to submit the LXQt spin as a change, so if everything works out we'll have an official Fedora LXQt Spin then 😊

The remix – current state

We are starting from scratch; the first step was to create a working live image. That is done now: the remix contains a very basic LXQt desktop with a very limited set of packages (almost nothing except a browser (QupZilla), a terminal (qterminal) and the Qt port of the PCManFM file manager). For the official spin this will of course change; we already have a list of Qt-based applications to enhance it, including the typical set of applications such as instant messaging, a media player etc. The spin also needs a proper theme; the default looks like it's from the 90ies 😛 So you'll get a very first impression, and hopefully there will be huge development until the Fedora 26 final release. But IMHO it is better to have some kind of working basis than to discuss things in theory (probably forever).

This is also an important point where you can help us! Do you have preferred (Qt based) applications you'd like to see in an official spin? E.g. which media player do you prefer?

Finally: How to test?

  • ISO Images
  • Pagure project
  • Please report issues using Pagure

Fedora and KDE/spin's treatment - Discussion

Posted by Shawn Starr on December 23, 2016 09:25 PM

KDE Project:

I think it's important that the Fedora KDE / Spins Community speak out about how Fedora treats KDE and other spins. Given Fedora is about to have FESCo election, now is the perfect time to get community feedback on what candidates think.

Those who know me know that I enjoy and support Fedora/Red Hat, and have for a while. However, they also know I strongly dislike how Fedora treats KDE as a second-class citizen. Why do I say that? It is well known that the history of Fedora/Red Hat has been GNOME-centric from the very beginning.

This blog post isn't about GNOME vs KDE Plasma and please do not make it about that, there is room for both desktop environments to collaborate and share ideas.

Fedora is about freedom, and part of that freedom is Fedora users being able to select the flavor of Fedora they want to use. We can do better at embracing more people into the Fedora family by improving the visibility of the other Fedora projects within it.

I get that Fedora wants 'products' for Workstation, Server and Cloud, and that's fine, but don't hide the other Fedora projects who want to help Fedora! The goal is to bring more people into Fedora; I don't care whether it's for KDE, MATE or Cinnamon, but to GROW the Fedora community.

See discussions from:
https://pagure.io/design/issue/411
https://pagure.io/design/issue/412

Right now I do not see this; I see an unhealthy split that alienates other projects. So my question to FESCo candidates and to Fedora KDE and Spins users is this:

"What should Fedora do to encourage, engage and enhance Fedora's projects and foster a healthy environment of developing the best Linux distribution for everyone to enjoy for productivity, gaming and more?"

Using rkt on my Fedora servers

Posted by Kushal Das on December 23, 2016 04:51 PM

Many of you already know that I moved all my web applications into containers on Fedora Atomic image based hosts. In the last few weeks, I moved a few of them from Docker to rkt on Fedora 25. I have previously written about trying out rkt on Fedora. Now I am going to talk about how we can build our own rkt-based container images, and then use them in real life.

Installation of rkt

First I am going to install all the required dependencies. I added htop, tmux and vim to the list because I love to use them :)

$ sudo dnf install systemd-container firewalld vim htop tmux gpg wget
$ sudo systemctl enable firewalld
$ sudo systemctl start firewalld
$ sudo firewall-cmd --add-source=172.16.28.0/24 --zone=trusted
$ sudo setenforce Permissive

As you can see from the above commands, rkt still does not work well with SELinux on Fedora. We hope this problem will be solved soon.

Then install the rkt package as described in the upstream documentation.

$ sudo rkt run --interactive --dns=8.8.8.8 --insecure-options=image kushal.fedorapeople.org/rkt/fedora:25

The above-mentioned command downloads the Fedora 25 image I built and then executes it. This is the base image for all of my other work images. You may not have to provide the DNS value, but I prefer to do so. The --interactive flag gives you an interactive prompt; if you forget to provide it on the command line, your container will just exit. I was confused by that for some time, trying to find out what was going on.

Building our znc container image

Now the next step is to build our own container images for particular applications. In this example I am first going to build one for znc. To build the images we will need the acbuild tool. You can follow the instructions here to install it on the system.

I am assuming that you have your znc configuration handy. If you are installing for the first time, you can generate your configuration with the following command.

$ znc --makeconf

Now below is the znc.acb file for my znc container. We can use the acbuild-script tool to build the image from this file.

#!/usr/bin/env acbuild-script

# Start the build with an empty ACI
begin

# Name the ACI
set-name kushal.fedorapeople.org/rkt/znc
dep add kushal.fedorapeople.org/rkt/fedora:25

run -- dnf update -y
run -- dnf install htop vim znc -y
run -- dnf clean all

mount add znchome /home/fedora/.znc
port add znc tcp 6667

run --  groupadd -r fedora -g 1000 
run -- useradd -u 1000 -d /home/fedora -r -g fedora fedora

set-user fedora

set-working-directory /home/fedora/
set-exec -- /usr/bin/znc --foreground 

# Write the result
write --overwrite znc-latest-linux-amd64.aci

If you look closely at both the mount and the port commands, you will see that I have assigned a name to the mount point, and also to the port (along with the protocol). Remember that in the rkt world, all mount points and ports are addressed by these assigned names. So in one image the name http can be assigned to the standard port 80, while in another image the author can choose to use port 8080 with the same name. When running the image, we decide how to map the names to the host side, or vice versa. Execute the following command to build our first image.

$ sudo acbuild-script znc.acb

If everything goes well, you will find an image named znc-latest-linux-amd64.aci in the current directory.
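Those assigned names end up in the image's appc manifest. Inside the generated ACI, the relevant entries look roughly like this (structure per the appc spec; the values mirror the znc.acb above):

```json
{
  "app": {
    "mountPoints": [
      { "name": "znchome", "path": "/home/fedora/.znc", "readOnly": false }
    ],
    "ports": [
      { "name": "znc", "protocol": "tcp", "port": 6667 }
    ]
  }
}
```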

Running the container

$ sudo rkt --insecure-options=image --debug run --dns=8.8.8.8  --set-env=HOME=/home/fedora --volume znchome,kind=host,source=/home/kushal/znc,readOnly=false  --port znc:8010 znc-latest-linux-amd64.aci

Now let us dissect the above command. I am using the --insecure-options=image option as I am not verifying the image, and the --debug flag prints some extra output on stdout, which helps in finding problems with a new image you are building. As mentioned before, I passed a DNS entry to the container using --dns=8.8.8.8. Next, I am overriding the $HOME environment value; I still have to dig more to find out why it was pointing to /root/, but for now remember that --set-env can set or override any environment variable inside the container.

Next, we mount the /home/kushal/znc directory (which has all the znc configuration) onto the mount point named znchome, specifying that it is not a read-only mount. In the same way we map host port 8010 to the port named znc inside the container. As the very last argument, I pass the image itself.

The following is an example where I copy a binary (the ircbot application written in Go) into the image.

#!/usr/bin/env acbuild-script

# Start the build with an empty ACI
begin

# Name the ACI
set-name kushal.fedorapeople.org/rkt/ircbot
dep add kushal.fedorapeople.org/rkt/fedora:25

copy ./ircbot /usr/bin/ircbot

mount add mnt /mnt

set-working-directory /mnt
set-exec -- /usr/bin/ircbot

# Write the result
write --overwrite ircbot-latest-linux-amd64.aci

In future posts, I will explain how you can run the containers as systemd services. To start with, you can use a tmux session to keep them running. If you have any doubts, remember to go through the rkt documents; I found them very informative. You can also ask questions in the #rkt channel on Freenode.net.

Now it is an exercise for the reader to find out the steps to create an SELinux module from the audit log and then use it on the system. The last step should be putting SELinux back into Enforcing mode.

$ sudo setenforce Enforcing

Merry Christmas and a Happy New Year!

Posted by Fedora-Blog.de on December 23, 2016 04:14 PM
<figure class="wp-caption aligncenter" id="attachment_5857" style="width: 700px"><figcaption class="wp-caption-text">(c) 2009 D3struct0</figcaption></figure>

And it came to pass that the nerds, as every year, returned home to the place of their birth, to fix their families' IT 😎 🎄 🎅

With this little Christmas verse we would like to thank all our readers for staying loyal to us again this year, and we wish you all merry and peaceful holidays and a good start into 2017.

We will take a short break over the holidays and the days between Christmas and New Year, and then get back to work in the new year with fresh vigor.

radv and doom - kinda

Posted by Dave Airlie on December 23, 2016 07:26 AM
Yesterday Valve gave me a copy of DOOM for Christmas (not really for Christmas). I got the Wine bits in place from Fedora, then spent today trying to get DOOM to render on radv.



Thanks to ParkerR on #radeon for taking the picture from his machine, I'm too lazy.

So it runs, kinda: it hangs the GPU a fair bit and misrenders some colors in some scenes, but you can see most of it. I'm not sure if I'll get back to this before next year (I'll try), but I'm pretty happy to have gotten it this far in a day, though I'm sure the next few things will be much more difficult to debug.

The branch is here:
https://github.com/airlied/mesa/commits/radv-wip-doom-wine

Linux at Uni

Posted by Aly Machaca on December 23, 2016 05:50 AM


Linux at Uni took place on November 26, 2016, from 8:00 am to 5:00 pm at the Universidad Nacional de Ingeniería.

The event kicked off with a video of "Revolution of the Operating System", while Fedora 25 installations ran in parallel.

The event consisted of talks on contributing to Fedora and GNOME; some talks were in person and most were videoconferences. We enjoyed the talks, which were interesting, always with the mindset of attracting more contributors to the Fedora Project.


flyer.jpg

There was a Fedora cake for the release party, and other snacks. :D

These were some of the photos; thanks to everyone who took part in this event. See you in another post. Bye bye…



Fedora 25 Release Party in Lima Este

Posted by Bernardo C. Hermitaño Atencio on December 22, 2016 11:37 PM

We met with a group of students who were finishing their Computing and Informatics degree after three years of study, and who are part of a national scholarship program. They expressed interest in having a keepsake of Fedora, the OS that helped us learn so much over these last three years, so we proposed holding a Fedora 25 Release Party, and in coordination with Diva García, one of the students, we ordered commemorative polo shirts with the Fedora logo.
It is the end of the year and schedules are very full: students have to hand in assignments and sit exams, while teachers have to prepare and run evaluations and fill in reports, among other things. This led us to schedule the event on a weekday rather than on a Saturday, as is usually the custom.
We invited all the students in the program to take part freely, and decided to run it barcamp-style, which produced the following list of talks:
– What's new in Fedora 25: Jose Mogollon
– Fedora on ARM technologies: Bernardo Hermitaño A.
– The MVC design pattern: Jose Trujillo and Jason Leon
– DAO and MVC with PHP and MariaDB: Miguel Jimenez
– Generating parameter-based reports: Luis Centeno

It is worth mentioning that José Mogollón is a graduate of the institute, is now part of the community, and has started actively collaborating with it.
After the talks, those interested in Fedora 25 were given the images in ISO format, live USB sticks were prepared, and we helped the guests who wanted to get started with Fedora.
On the other hand, the long-awaited polo shirts (t-shirts) did not arrive on the expected day, so we had to wait until the next day to hold them in our hands.
On the last day of classes we decided to hand out the commemorative shirts and also talk about the students' future, some of them in the process of joining the community and eager to keep learning and contributing. We ended the year with some wistfulness that we could have done more, while also remembering that there is much to do in public education; despite the limitations, we keep moving forward…


 

 


Generic Cluster Management + Virtualization Flavor

Posted by Fabian Deutsch on December 22, 2016 09:23 PM

oVirt is managing a cluster of machines, which form the infrastructure to run virtual machines on top.

Yes - That’s true. We can even formulate this - without any form of exaggeration and you can probably even find a proof for this - mathematically:

  Generic Cluster Knowledge
+ Virtualization Specific Cluster Knowledge
--------------------------------------------------------
  Absolutely Complete Virtualization Management Solution

You might disagree with this view, that’s fine - it is just one of many views on this topic. But for the sake of discussion, let’s take this view.

Add maths

What I consider to be generic cluster knowledge is stuff like:

  • Host maintenance mode
  • Fencing
  • To some degree even scheduling
  • Upgrading a cluster
  • Deploying a cluster (i.e. the node lifecycle, like joining a cluster)

Beyond that, even broader topics are not specific to virtualization, e.g. storage: regardless of what is running on a cluster, you need to provide storage to it, or at least run it off some storage (don't pull out PXE now …). The same is true for networking: workloads on a cluster are usually not isolated, and thus need a way to communicate.

And then there are the workload-specific bits; in oVirt, it is all about virtualization:

  • Specific metrics for scheduling
  • Logic to create VMs (busses, devices, look at a domxml)
  • Different scheduling strategies
  • Hotplugging
  • Live Migration
  • Specifics on network on storage related to virtualization
  • Host device passthrough

… to name just a few. These (and many more) form the virtualization specific knowledge in oVirt.

So why is it so important to me to separate the logic contained in oVirt in this particular way? Well - oVirt is interesting to people who want to manage VMs (on a data center scale and reliability level). This is pretty specific. And it’s all tightly integrated inside of oVirt. Which is good on the one hand, because we can tune it at any level towards our specific use-case. The drawback is that we need to write every level in this stack mostly by ourselves.

With this separation at hand, we can see that this kind of generic cluster functionality might be found in other cluster managers as well (maybe not exactly, but to some degree). If such a cluster manager exists, we could check whether it makes sense to share functionality, and then - to tune it towards our use-case - just add our flavor.

Any flavor you like V5.0

“Yes, but …”

Yes, so true - but let’s continue for now.

A sharp look into the sea of technology reveals a few cluster managers. One of them is Kubernetes (which is also available on Fedora and CentOS).

It describes itself as:

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.

Yes - the container word was mentioned. Let's not go into this right now; instead, let's continue.

After looking a bit into Kubernetes it looks like there are areas - the generic bits - in which there is an overlap between Kubernetes and oVirt.

Yes - There are also gaps, granted, but that is acceptable, as we always start with gaps and find ways to master them. And sometimes you are told to mind the gap - but that’s something else.

Getting back to the topic - if we now consider VMs to be just yet another workload, and VM management to be actually just an application (in oVirt, a Java one - excluding the exceptions), then the gap might not be that large anymore.

… until you get to the exceptions - and the details. But that is something for next year.

A stable base for Flatpak: 0.8

Posted by Alexander Larsson on December 22, 2016 12:54 PM

Earlier this week I released Flatpak 0.8.0. The version change is meant to signal the start of a new long-term supported branch. The 0.8.x series will be strictly bugfixes, and all new features will happen in 0.9.x.

The release has a few changes, such as a streamlined command line interface and OCI support, but it also has several additions that make Flatpak more future-proof. For instance, we added versioning to all file formats, and a minimal-flatpak-version-required field for applications.

My goal is to get the 0.8 series into the Debian 9 release, and as many other distributions as possible, so that people who create flatpaks can consider the features it supports as a reliable baseline.

Sandboxing has always been one of the pillars of Flatpak, but even more important to me is cross-distro application distribution, even if not sandboxed. This is important because it gives upstream developers a way to directly interact with their users, without having an intermediate distributor. With 0.8 I think we have reached a level where the support for this is solid. So, if you ever thought about experimenting with Flatpak, now is the time!

I leave you with a small screencast showing the new streamlined way to install an application on the command line (on an otherwise empty system):

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="371" src="https://www.youtube.com/embed/uCtiTyp1Y-8?feature=oembed" width="660"></iframe>

For information on how to get Flatpak, see flatpak.org. Version 0.8.0 is already in the Ubuntu PPA and in Fedora. Other distributions will hopefully get it soon.

6 great monospaced fonts for code and terminal in Fedora

Posted by Fedora Magazine on December 22, 2016 12:15 PM

Because they spend most of their days looking at them, most sysadmins and developers are pretty choosy when it comes to picking a monospaced font for use in terminal emulators or text editors. Here are six great monospace fonts that can be easily installed from the official Fedora repositories to make your text editor or terminal emulator look and function just that little bit nicer.

Inconsolata

A favourite of many programmers, Inconsolata is a clear and highly readable humanist monospaced font designed by Raph Levien. It features a slashed zero to distinguish that glyph from the uppercase O, and also has easily distinguishable different glyphs for the lowercase L and the numeral 1.

Inconsolata_sample

To install Inconsolata, search for it in the Software application in Fedora Workstation, or install the levien-inconsolata-fonts package using DNF or yum on the command line.

Source Code Pro

Source Code Pro is a monospaced typeface released under the SIL Open Font License by Adobe. It features a dotted zero to distinguish that glyph from the uppercase O, and also has different glyphs for the lowercase L and the numeral 1.

sourcecodepro_sample

To install Source Code Pro, search for it in the Software application in Fedora Workstation, or install the adobe-source-code-pro-fonts package using DNF or yum on the command line.

Fira Mono

Fira Mono is the monospaced variant of the Firefox brand font Fira Sans. It has a little more weight than some of the other fonts in our list. It also features a dotted zero, and different glyphs for the lowercase L and the numeral 1.

firamono_sample

To install Fira Mono, search for it in the Software application in Fedora Workstation, or install the mozilla-fira-mono-fonts package using DNF or yum on the command line.

Droid Sans Mono

Droid Sans Mono is part of the Droid family of fonts commissioned by Google for earlier versions of Android. One downside of this font is the lack of a dotted or slashed zero, making the zero glyph hard to distinguish from the uppercase O. There are also versions of Droid Sans Mono available on a third-party website that add a dotted or slashed zero, but these aren't available in the Fedora repos, so you would need to download and install them manually.

To install Droid Sans Mono, search for it in the Software application in Fedora Workstation, or install the google-droid-sans-mono-fonts package using DNF or yum on the command line.

DejaVu Sans Mono

DejaVu Sans Mono is the default monospaced font in Fedora. Based on the Bitstream Vera family, it extends it with much wider Unicode coverage. To install DejaVu Sans Mono, search for it in the Software application in Fedora Workstation, or install the dejavu-sans-mono-fonts package using DNF or yum on the command line.

Hack

Hack bills itself as having “No frills. No gimmicks. Hack is hand groomed and optically balanced to be a workhorse face for code.” Hack builds on the monospaced versions in the Bitstream Vera and DejaVu font families, modifying and enhancing glyph coverage, shapes and spacing. Hack works best in the 8px to 12px range on regular DPI monitors, and as low as 6px on higher DPI monitors.

hack

Hack is not in the official Fedora repos yet, but packaging is being worked on. Luckily there is a Copr repo that packages up the fonts for you, or you can install the font files directly from the Hack GitHub repository.


This post was originally published in October 2015. It was updated in October 2016 to add Hack.

Setting up a personal OpenVPN service

Posted by Ismael Olea on December 21, 2016 11:00 PM

I have finally decided to set up my own VPN service. The main motivations are securing private browsing and making sure I use a trusted service, 100% audited… by me.

Requirements

  • an OpenVPN service
  • running on Docker
  • on a CentOS 7 server
  • reusing some existing configuration
  • but without reusing images published on Docker Hub, out of security zeal
  • able to connect from Linux machines and Android phones

The configuration I chose is one created by Kyle Manna: https://github.com/kylemanna/docker-openvpn/ Thanks, Kyle!

Server installation and configuration procedure

In this case we use CentOS 7, but since docker-compose is not available there, I had to backport it; it is available in a dedicated repository.

Preparation:

cd /etc/yum.repos.d ; wget https://copr.fedorainfracloud.org/coprs/olea/docker-compose/repo/epel-7/olea-docker-compose-epel-7.repo
yum install -y docker docker-compose
yum install -y docker-lvm-plugin.x86_64 docker-latest.x86_64
yum upgrade -y
groupadd docker
usermod -G docker -a USUARIO
echo "VG=sys" > /etc/sysconfig/docker-storage-setup
docker-storage-setup
systemctl enable docker
systemctl start docker

If docker managed to start, it is probably ready to go.

Obviously you also need to configure the DNS record for VPN.MISERVIDOR.COM on the corresponding server.

Getting down to business:

mkdir servicio-VPN.MISERVIDOR.COM
cd servicio-VPN.MISERVIDOR.COM
git clone https://github.com/kylemanna/docker-openvpn
cat <<EOF > docker-compose.yml
version: '2'
services:
    openvpn:
        build:
            context: docker-openvpn/
        cap_add:
            - NET_ADMIN
        image: Mi-ID/openvpn
        ports:
            - "1194:1194/udp"
        restart: always
        volumes:
            - ./openvpn/conf:/etc/openvpn
EOF

And continuing with the referenced instructions:

  • we build the docker image locally from scratch in one go:
docker-compose run --rm openvpn ovpn_genconfig -u udp://VPN.MISERVIDOR.COM
  • we initialize our own local CA (you will be asked for the private key passphrase):
docker-compose run --rm openvpn ovpn_initpki
  • finally, we launch the container:
docker-compose up -d openvpn

User provisioning procedure

  • Create the user:
docker-compose run --rm openvpn easyrsa build-client-full USUARIO nopass
  • generate the local OpenVPN configuration for that same user:
docker-compose run --rm openvpn ovpn_getclient USUARIO > USUARIO.ovpn
  • Copy this file to your own machine, as it is what will grant you VPN access.

A problem importing OpenVPN configurations into NetworkManager

I have personally run into the problem several times that the NetworkManager configuration GUI cannot import the cryptographic certificates when setting up a VPN connection from an ovpn file. After investigating it several times, I concluded that it is due to a documented bug which, in my case, is not fixed in NetworkManager-openvpn-gnome-1.0.8-2.fc23 but is fixed in NetworkManager-openvpn-gnome-1.2.4-2.fc24.

If you still hit this problem, there are two alternatives: either upgrade to a recent version of NM, or connect manually from the CLI:

sudo /usr/sbin/openvpn --config USUARIO.ovpn

Life of Kernel Bisecter

Posted by "CAI Qian" on December 21, 2016 03:20 PM
Bisecting is extremely useful for fixing regressions in big projects like the upstream kernel. The goal is to get the regression fixed rather than just reporting it and forgetting about it. Upstream regression reports are easily ignored because of kernel developers' limited bandwidth, the complexity of the code analysis needed to find the root cause, developers' limited access to the hardware, etc. However, since it is a regression, it is usually possible to track down the exact commit that introduced it, which makes it far easier for developers to figure out the root cause and come up with a fix. Also, the authors who introduced a regression usually respond quickly (within one working day), because they want to maintain a good reputation within the community: introducing regressions without fixing them quickly makes it harder for them to get future patches accepted by Linus and the subsystem maintainers, and Linus and friends are not afraid of applying public peer pressure when that happens. In the worst case, the solution is to send a revert patch; it will usually be accepted, because Linus and friends absolutely hate regressions, even trivial ones.

However, bisecting a kernel regression is usually not an easy task. In a big project like the upstream kernel, a lot can happen between the commit that introduced a regression and the moment people actually hit it, so plenty of things can go wrong during the "git bisect". It is therefore important to test the upstream kernel as often as possible, to make the bisecter's life easier. Below are some hard lessons learned from my years of kernel git bisecting.

Always test tagged commits first if possible. Tagged commits like v4.7 and v4.7-rc2 are usually more stable and have fewer compilation errors or boot issues that you would otherwise need to deal with before bisecting further. For example, if v4.7 is bad and v4.2 is good, don't start git bisect yet, as the next commit to test would be some random commit in between. Instead, test a tagged commit in the middle, like v4.5. Once you know, say, that v4.7 is bad and v4.6 is good, manually bisect the tagged v4.7-rc* commits until you know exactly which -rc release introduced the regression, and only then start the git bisect.
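
This tag-first narrowing followed by an automated bisect can be sketched with a throwaway repository. Everything below is illustrative: the "regression" is simulated by a marker file named `broken`, whereas with a real kernel each test step would mean building, booting, and running your reproducer.

```shell
#!/bin/sh
# Toy demonstration of the bisect workflow, using a disposable git repo
# instead of the kernel tree.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect

for i in 1 2 3 4 5 6 7 8; do
    if [ "$i" -ge 6 ]; then touch broken; fi  # commit 6 introduces the "regression"
    echo "$i" > file
    git add -A
    git commit -qm "commit $i"
    git tag "v$i"                             # pretend these are release tags
done

# Suppose manual testing of tags already showed v4 good and v8 bad:
git bisect start v8 v4 >/dev/null
# Let git walk the range; exit 0 means good, non-zero means bad.
git bisect run sh -c '! test -f broken' >/dev/null
culprit=$(git rev-parse refs/bisect/bad)      # the first bad commit
subject=$(git log -1 --format=%s "$culprit")
git bisect reset >/dev/null
echo "$subject"                               # prints "commit 6"
```

With a real kernel you would either mark commits good or bad by hand, or wrap your build-boot-reproduce cycle in a script and hand it to `git bisect run` as above.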

If something unexpected happens while testing one of the commits, such as compilation errors, boot issues, or earlier events masking the regression, you usually need to deal with it first by figuring out which patch fixes it (which may mean starting another, reversed mini-bisect to find those patches) and manually carrying that patch in the kernel while bisecting. If you are lucky, you might be able to get past the problem with git bisect skip, by tweaking the kernel config, by searching the git log, mailing lists, and bugzilla, or by finding some other workaround.

If you are chasing a lockdep regression, make sure you have applied patches for any other lockdep issues that trigger before your reproducer runs. Otherwise you may get a false positive, because lockdep is designed to report only the first issue it hits. Always check dmesg before running the reproducer to see whether lock debugging has been disabled; for example, enabling the KASAN config will likely disable lock debugging because the kernel is tainted.

You will need lots of CPUs to compile many kernels efficiently using "make -j"; otherwise it could take you weeks to find the culprit. You are also better off with big partitions for /boot/ and /lib/ so you can store many kernels, saving the time you would otherwise spend deleting old ones to make room. Usually you can skip "make clean" to speed up compilation, but if you suspect stale objects from a previous build are leaking into the new kernel, recompile after a "make clean". Also, use tools like ccache or distcc if possible.
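
As a concrete sketch, a per-step build on Fedora might look like the following; this assumes a kernel source tree, an existing .config, and an installed ccache (the wrapper directory path is Fedora's convention), so it is an environment sketch rather than something runnable on its own:

```shell
# Run from the kernel source tree at each bisect step.
export PATH="/usr/lib64/ccache:$PATH"   # Fedora's ccache compiler wrappers
make olddefconfig                       # carry the existing .config forward
make -j"$(nproc)"                       # build in parallel on every CPU
```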

The more you know about kernel internals, the more efficient the bisect can be. Bisecting is essentially a black-box testing process, but you can use your knowledge of kernel internals to reduce the number of commits that need testing. Once the remaining range is small enough that you can manage to read all of the commit logs, do so while waiting for the kernel to compile, then test the commits you suspect first. If your guesses are wrong, make sure to mark those commits as good or bad as usual, so you can resume the black-box bisect from there.

Usually, rc1 is the release most likely to introduce regressions, since most of the big merges happen then, so give rc1 a slightly higher priority during testing if necessary. LWN has good summaries of what is included in each -rc release.

If the bisect points to a merge commit like:
...
commit 711bef65e91d2a06730bf8c64bb00ecab48815a1
Merge: acdfffb 0f5aa88
...
you will then need to run "git bisect good acdfffb" and "git bisect bad 0f5aa88" to find the exact non-merge commit that introduced the regression.

Once you have found the culprit commit, test again by reverting it against the latest git head to confirm the finding. The older the commit, the less likely the revert will be straightforward. If there are revert conflicts, try to resolve as many as possible; if that proves too difficult, revert the commit while it is the head instead, to confirm the finding while avoiding side effects like kernel config differences. Include that information in the final email (below) you send to the community.
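
The revert-to-confirm step can be sketched in a throwaway repository; with the real kernel, `$culprit` would be the commit your bisect found and HEAD the latest tree:

```shell
#!/bin/sh
# Toy demonstration: revert a known-bad commit on top of later work.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email revert@example.com
git config user.name revert

echo ok > file;         git add file;  git commit -qm "base"
echo regression > file; git add file;  git commit -qm "culprit"
culprit=$(git rev-parse HEAD)
echo more > other;      git add other; git commit -qm "later work"

# Revert the culprit on top of the latest head (older commits may conflict):
git revert --no-edit "$culprit" >/dev/null
content=$(cat file)
echo "$content"                               # prints "ok" again
```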

Beware of kernel config differences that could themselves cause the regression you are trying to bisect. If the bisect leads to commits that make no sense, double-check the kernel config differences between the closest good and bad commits to see whether they could explain the problem.
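
A quick way to check this is to diff the two saved configs directly; the file names and options below are made up for illustration, and the kernel tree also ships scripts/diffconfig for a friendlier report:

```shell
#!/bin/sh
# Work in a scratch directory with two hypothetical saved configs
# from the closest good and bad builds.
cd "$(mktemp -d)"
cat > config.good <<'EOF'
CONFIG_PROVE_LOCKING=y
CONFIG_KASAN=n
EOF
cat > config.bad <<'EOF'
CONFIG_PROVE_LOCKING=y
CONFIG_KASAN=y
EOF
# Any output means the two builds were not configured identically.
changed=$(diff config.good config.bad || true)
echo "$changed"
```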

Once you are ready to send your bisect result by email, always add the original author and the people who provided the "Signed-off-by" tags to the "To" list, and put a single relevant mailing list on the "CC" list (e.g., linux-xfs@ for XFS issues), so the report is archived somewhere for future reference. Optionally, CC the people who provided "Reported-by" tags as well, since they may be interested in testing patches once available. Prepend something like "[Bisected]" to the email subject to draw more attention to your hard work. If there is no response from anyone after one working day, file the issue in the upstream bugzilla with the regression flag set, so it will likely be picked up by automated regression reports in the future and not get lost.

Episode 21 - CVE 10K Extravaganza

Posted by Open Source Security Podcast on December 21, 2016 02:14 PM
Josh and Kurt talk about CVE 10K. CVE IDs have finally crossed the line: we now need 5 digits to display them. This has never happened before.

Download Episode

Show Notes


IRC and me

Posted by Kushal Das on December 21, 2016 06:42 AM

During my college days, when I started using Linux for the first time, one of my favorite pastimes was clicking on every application icon on my computer. By clicking around at random, I tried to learn what the applications were for. Once I managed to open a tool called xchat in this fashion, but after seeing the big list of servers I was lost. I never managed to find a guide I could follow. I learned it was for IRC, but nothing more.

Using the first time

I learned a lot of things at foss.in. The 2005 edition was the first conference I ever attended. A fresh graduate from college, I was awestruck by meeting so many people whose names I had read on the Internet. It was a super big thing. During the conference, I met Till Adam, a KDE developer who taught me various important things throughout the conference: things we count as first steps these days, like how to create a patch, how to apply a patch, and how to compile a big application (KDE in this case). After the conference was over, he invited me to have dinner with him and some friends. When I reached his hotel room, he told me to wait while he checked whether there were any messages for him and who else was coming down. I don't remember which application he was using. While I was waiting, Till asked if I stayed in the same IRC channel. "I don't know what IRC is" was my reply. Till took his time to explain the basics of IRC to me in the hotel room. Before we came down for dinner, I already had a full overview and was eager to start using it. There are many funny stories and great memories from that dinner, but maybe in a later post.

Initial days

Pradeepto founded KDE India during foss.in, and we got an IRC channel where we could stay and talk to all the new friends we made during the conference. I also found ##linux-india, where I managed to meet many other ilug folks I had previously only talked to on a mailing list. It was a fun time. After coming back from the office, joining IRC was the first thing I did every day: staying up very late, talking to people, and making new friends. Meanwhile, I also started contributing to Fedora and began staying in all the Fedora channels too. That was the first time I saw how meetings could be arranged on IRC. There were so many new things to learn from so many people. I got hooked on the Freenode server.

Effects in my life

IRC became an integral part of my life. I stay in touch with most of my friends over IRC, even with many friends who live nearby :) The Durgapur Linux Users Group became synonymous with the #dgplug channel on the Freenode server. I landed a few of my previous jobs because of IRC. If I have a question about a new technology, or about a new place I am going to visit, I know I can go to IRC and ask my friends. I still have many friends I have never met in real life. I managed to meet more people after I started going to more conferences: I met tatica only after 8 years of IRC chat, during Flock 2013, and there were many other Fedora developers I met for the first time at that single conference. The same thing happened at PyCon 2013, where I suddenly ran into Yhg1s, aka Thomas Wouters. I don't know how many times he had helped me over IRC before, and suddenly there he was at the conference. In my current job, most of the meetings happen over IRC. PSF board meetings also use IRC as a recording medium, and we vote on agenda items in IRC. Sometimes Anwesha complains that I could stay happy in a big city or in a remote village, as I only need my IRC connection to work :)

Beginner’s guide to IRC

Posted by Fedora Magazine on December 21, 2016 04:00 AM

IRC, short for Internet Relay Chat, is a great way for individuals and teams to communicate and work together. Although there are new apps like Slack that mimic it, IRC itself has been around for decades. It’s a time-tested system with a wealth of features. However, it’s also simple to get started using it with tools in Fedora.

IRC servers on the internet accept and relay messages to connected users, each of whom is running an IRC client. The clients all use the IRC protocol, a set of agreed upon rules for communication. There are many separate IRC networks on the internet. Each network has one or more servers around the world that work together to relay messages.

Each network also has many channels, sometimes called rooms, where users can gather. A channel usually has a specific topic, and a name that starts with a “#”, such as #hyundai-cars. When you enter or join that channel, it’s because you want to discuss that topic. You can also start your own channel.

You can also privately message other users in most cases. It’s also possible to configure your user account on a network, or your client, not to get such messages. IRC has many options available, but this article will only cover a few simple ones.

IRC clients

There are several useful IRC client apps available on Fedora. The one we’ll use here is Hexchat. To install it, open the Software application, type hexchat into the search bar, and select the Install button for the app. Or at the terminal, use this command:

su -c 'dnf install hexchat'

Once the app is installed, select it from the application menu for your desktop environment, or run hexchat at the terminal. Other clients include:

  • Polari. Polari is designed to work well with GNOME. It has a simple, beautiful interface to help you get online quickly and focus on your conversations.
  • Smuxi. Smuxi is a slightly more complicated client, but it includes a proxy component. Rather than using a separate proxy like ZNC — which we covered in the Magazine earlier — Smuxi includes this feature. (You can use a regular proxy with any client, of course.)
  • Konversation. Konversation is designed to work well with KDE. Its interface is similar to Hexchat and it has many useful options.
  • Irssi. Unlike the other clients mentioned here, irssi is a command-line application. It is highly configurable, and requires more knowledge to use.
  • WeeChat. This is another highly configurable command-line IRC client, but also supports some other protocols.

Connecting to IRC

Here is the startup screen for the Hexchat app:

Hexchat startup screen

On the network, you are permitted to use an IRC nickname, or nick, to identify yourself. Common names may be in use since some IRC networks have thousands of users. The nick paul in this example screen is probably already in use. The second or third choices will be used in the event of a collision. It’s a better idea to pick something more unusual. For example, pwf16 is probably not in use.

Then select the appropriate network for IRC. The network you choose depends on whom you’re trying to talk with. Each different group or project on the internet will use a specific network. For example, many free software projects use the Freenode network.

Your group or project of interest should provide you either the name or connection details for the right IRC network. Once you’ve selected the network in the list, select Connect to get online.

If you don’t see an entry for the network, select Add and provide connection details. The connection information should be provided on that IRC network’s web page. Often the address will be written in the form server.example.org:6667. The network or server address is server.example.org, and 6667 represents the port. Port 6667 is used by most IRC servers. You can often leave it out in clients, since 6667 may be assumed if not provided.

In many cases, the network will display information in your client as it connects. At least the first few times you use a network, check the notices to see if you are expected to do anything else. For example, some networks require you to register your nick with a real email address. Follow instructions as needed. (This article cannot cover all networks, so if you don’t understand instructions, contact that IRC network’s support staff.)

Good manners in IRC

Before joining any IRC channels, you should understand basic manners online. Joining a channel is like attending a party at someone’s home. Good manners on IRC are as important as good manners when visiting someone in real life. If you show up and behave rudely, you’ll probably be asked to leave, or even be banned. Be polite, and assume the other people in the channel also have good intentions — especially if you don’t understand something they say.

It’s also important to know how people have conversations in IRC. Often a channel has more than one conversation going at a time. Again, like a party, sometimes conversations overlap, so it’s important to remember a comment may not be directed to you.

To avoid confusion, people in a conversation often use each other’s IRC nicks to indicate to whom they’re talking. The nick is followed by a colon or a comma, and then the comment. Most IRC clients also understand this rule, and notify you if someone uses your name this way.

<pwf16> rlerch: Hi, it's good to see you again!
<rlerch> pwf16: You too. 
<pwf16> rlerch: What did you think about that banner image I made?
<rlerch> pwf16: Well, it's got some issues, but let me see if I can tweak it.

Another reason this rule is helpful is you cannot assume others are looking at IRC at the same time as you. They may be away from their computer, or working on something else. When you address comments, the notification will be waiting for the other person to return. That person can then reply to you.

Some communities or channels have their own rules and guidelines. Some channels have operators, or ops, who monitor the channel to make sure things are going well. They may take action if someone is being rude or abusing the channel. It’s important to understand and respect channel and community rules when you use IRC. Take the time to read them before joining any conversations. Doing so will avoid problems or misunderstandings, just like house rules when you visit someone.

Joining a channel

Once joined to a network, Hexchat displays this screen:

Hexchat connection complete screen

If you know the name of the channel you want to join, type it in the provided box and select OK.

Now you’ll see the main Hexchat window. It shows you a list of networks and channels you have joined, a conversation window, and a list of nicks in the current channel. There’s also a small line at the bottom of the window next to your nick, where you can type a comment or an IRC command.

Hexchat main window

Note the conversation starts with a topic for the channel. In this case, the #hexchat channel topic includes several pieces of information, including the home page for the project and a link to the documentation. Sometimes you may see rules for the channel in the topic as well.

Before you type anything, remember: anything you type will be sent to the channel. The only exception is a line that starts with a slash “/” character. The slash tells your client you are typing an IRC command and not a comment for the channel.

Helpful IRC commands

There are many IRC commands available. This article will only cover a few.

You don’t have to type a command in a graphical client like Hexchat. Many commands can be run through the menu in the app window, or by right-clicking an object such as a network, channel, or nick.

  • /HELP displays a list of all the commands available. To read more about most commands, type /HELP followed by the command. For example: /HELP PING
  • /MSG followed by a nick and a message sends that message privately to that person. Right-click a nick, choose Open Dialog Window, and send a message, or type: /MSG pwf16 Hey, can we talk about Friday plans?
  • /NICK followed by a nick will change your IRC nick. Be careful doing this when joined to channels. If you do it too often, it may be considered abuse. Click your nick at the bottom of the window and enter a new one, or type: /NICK pwf16-test
  • /AWAY followed by a message indicates you aren’t seated at a console where you can see IRC, although your client is still signed on. To indicate you’re back, type /AWAY without any message. You should only use this if you’ll be away for a while. A good rule of thumb is an hour or more. Frequent use may be considered abuse. Select Server > Marked Away in the menu, or type: /AWAY Back at 9pm EST
  • /BACK indicates you are no longer away, and may be used interchangeably with /AWAY in some clients. Deselect Server > Marked Away in the menu, or type one of these commands.
  • /JOIN followed by a channel name joins another channel. Select Server > Join a Channel… in the menu, or type: /JOIN #hyundai-cars
  • /PART disconnects your client from the current channel. You can optionally include a channel name to leave a channel other than the current one, as well as a message your client will send upon leaving. Right-click the channel in the list and select Close, or type: /PART #hyundai-cars Thank you and goodnight

Other IRC resources

You can find some useful general information on etiquette and using IRC here:

Happy chatting with IRC!


This post was originally published in January 2016


Fedora 23 End of Life

Posted by Charles-Antoine Couret on December 20, 2016 10:34 PM

As of December 20, 2016, Fedora 23 has been declared end of life.

What does this mean?

One month after the release of Fedora version n (here Fedora 25), version n-2 (so Fedora 23) is declared end of life. This month gives users time to upgrade, which means that on average a release is officially supported for 13 months.

Indeed, the end of life of a release means it will receive no more updates and no more bug fixes. For security reasons, with vulnerabilities left unpatched, users of Fedora 23 and earlier are strongly advised to upgrade to Fedora 25 or 24.

What should you do?

If you are affected, you need to upgrade your systems. You can download more recent CD images via Torrent or HTTP.

You can also upgrade without reinstalling, via DNF. To do so, type the following commands as root in your terminal:

# dnf install dnf-plugin-system-upgrade
# dnf system-upgrade download --releasever=24
# dnf system-upgrade reboot

Note that you can also jump directly to Fedora 25 this way by changing the version number on the relevant line. However, this procedure is riskier because it is less well tested.

GNOME Software should also have notified you through a pop-up that Fedora 24 or 25 is available. Feel free to launch the upgrade that way.

Zero Downtime Upgrades With Openshift Ansible

Posted by Devan Goodwin on December 20, 2016 07:27 PM

A large portion of my time on the OpenShift team has been spent working on cluster lifecycle improvements, particularly in the realm of upgrades. Throughout this work we’ve been targeting the ability to upgrade clusters without requiring application downtime. I recently took some time to demonstrate that we can hit that target, please check out the results on the OpenShift Blog:

Zero Downtime Upgrades With OpenShift Ansible

Chatty kernel logs

Posted by Laura Abbott on December 20, 2016 07:00 PM

Most people don't care about the kernel until it breaks or they think it is broken. When this happens, usually the first place people look is the kernel logs, using dmesg or journalctl -k. This dumps the output of the in-kernel ring buffer. The messages in the kernel ring buffer mostly come from the kernel itself calling printk. The ring buffer can't hold an infinite amount of data, and even if it could, more information isn't necessarily better. In general, the kernel community tries to limit kernel prints to error messages or limited probe information. Under normal operation the kernel should be neither seen nor heard. The kernel doesn't always match those guidelines though, and not every kernel message is an urgent problem to be fixed.

The kernel starts dumping information almost immediately after the bootloader passes it control. Very early information is designed to give an idea what kind of kernel is running and what kind of system it is running on. This may include dumping out CPU features and what areas of RAM were found by the kernel. As the kernel continues booting and initializing, the printed messages get more selective. Drivers may print out only hardware information or nothing at all. The latter is preferred in most cases.

I've had many arguments on both sides about whether a driver should be printing something. There's usually a lot of "but my driver reaaaallly needs this". The preferred solution to this problem is usually to adjust the log level the messages are printed at. The kernel provides several log levels to filter out appropriate messages. Most drivers will make use of KERN_ERR, KERN_WARNING, and KERN_INFO. These have the meanings you would expect: true errors, gentle warnings, and useful information. KERN_DEBUG should be used to provide more verbose debugging/tracing on an as-needed basis. The kernel option CONFIG_DYNAMIC_DEBUG can be used to enable and disable individual pr_debug messages at runtime. This option is enabled on Fedora kernels.
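
For example, with CONFIG_DYNAMIC_DEBUG enabled, individual pr_debug sites can be toggled at runtime through debugfs. The module name below is a placeholder, and this sketch needs root and a mounted debugfs, so it is a configuration fragment rather than a standalone script:

```shell
# Enable all pr_debug messages in a hypothetical module, then turn them off:
echo 'module mydriver +p' > /sys/kernel/debug/dynamic_debug/control
echo 'module mydriver -p' > /sys/kernel/debug/dynamic_debug/control
```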

Even with the different levels of kernel messages available, it may not always be clear how important a message actually is. It's very common to get Fedora kernel bug reports of "dmesg had message X". If nothing else is going wrong on the system, the bug may either get closed as NOTABUG or placed at low priority. A common ongoing complaint is firmware. Drivers may look for multiple firmware versions, starting with the newest. If a particular firmware isn't available, the firmware layer itself will spit out an error even though a matching version may eventually be found. Sometimes the kernel driver may not match the hardware exactly, and the driver will choose to be 'helpful' and indicate there may be problems. While it is true that hardware which is not 100% compliant may cause issues, messages like this are unhelpful without an indication that this isn't likely to be fixed in the kernel. Even with statements like "the kernel is fine", it can be confusing and difficult to explain this to users.

The kernel logs are a vital piece of information for reporting problems, but not every message in the logs indicates a problem that needs a kernel fix. It's still important to report bugs so we know what bothers people, and it may be possible to work with upstream driver owners on better error messages, making the kernel logs more useful for everyone.

Fabrice Bellard’s RISCVEMU supports Fedora/RISC-V

Posted by Richard W.M. Jones on December 20, 2016 06:40 PM

You can now boot Fedora 25 for RISC-V in Fabrice Bellard’s RISCVEMU RISC-V emulator. Here’s how in four simple steps:

  1. Download riscvemu-XXX.tar.gz and diskimage-linux-riscv64-XXX.tar.gz from Fabrice’s site.
  2. Download the latest stage 4 disk image for Fedora/RISC-V from here.
  3. Compile riscvemu. You should just need to do make.
  4. Run everything like this:
    ./riscvemu -b 64 ../diskimage-linux-riscv64-XXX/bbl.bin stage4-disk.img
    

If you’re going to do serious work inside the disk image then you’ll probably want to customize it with extra packages. See these instructions.


DNF 2.0.0 and DNF-PLUGINS-CORE 1.0.0 has been released

Posted by DNF on December 20, 2016 08:30 AM

DNF-2.0 has been released! This next major version of DNF brings many user experience improvements, such as better dependency-problem reporting, weak dependencies shown in the transaction summary, more intuitive help output, and others. The repoquery plugin has been moved into DNF itself. The whole DNF stack release fixes over 60 bugs. DNF-2.0 focuses on improving yum compatibility, i.e., treating yum configuration options the same (`include`, `includepkgs`, and `exclude`). Unfortunately this release is not fully compatible with DNF-1; see the list of DNF-1 and DNF-2 incompatible changes. Authors of DNF plugins will need to check their plugins' compatibility with the new DNF argument parser. For a complete list of changes, see the DNF and plugins release notes.

How to get funding for your new users group?

Posted by Kushal Das on December 20, 2016 04:33 AM

Over the years I have met many students who want to start a new users group in their college; sometimes it is a Linux users' group, sometimes it is related to some other particular technology. They all have one common question: how do we find funding for the group? In this post I am going to talk about what we did for the Durgapur Linux Users Group, and how we manage things currently.

The domain name and the whole back story

Back in 2004, when I started the group, I started with a page on Geocities. For the young kids: Geocities was a service from Yahoo! where you could create static HTML pages, and it was one of the easiest ways to have a website. After a few weeks, I found another service that provided .cc domains for free, so we moved to dgplug.cc for a few months. Meanwhile, we kept searching for a way to get our own domain name. We found one Indian provider where we could get a domain and some space for around Rs.750 annually; that was the cheapest we managed to find. It may not sound like much money for a year, but in 2004 none of us had that much. We could not even raise that much money among all the regular participants.

I called my uncle Dr. Abhijit Majumder (Assistant Professor, IIT Bombay), then a Ph.D. student at IIT Kanpur, and asked if he could help me with Rs.1500 (funding for the first two years). He had generally supported all my crazy ideas before, and he agreed to help this time too. But then the question was how to get the money to me: online money transfer was not an option for us. Luckily, during the same week my college senior Dr. Maunendra Sankar De Sarkar (then an M.Tech student at the same institute as Abhijit) was coming down to Durgapur. He agreed to hand me the money in cash and collect it from Abhijit after going back to Kanpur.

Even after we found a way to fund ourselves for the first time, we hit a new problem: to pay the domain registrar we had to use a credit card or an HDFC cheque. I started asking all of my Computer Science department professors if anyone could help. Finally, with the help of Prof. Baijant, we managed to deposit the cheque at the bank.

For the first two years, that was the only thing we needed money for. The meetings had no other expenditure; we never provided any food or drinks, and if anyone wanted to eat anything, they did so after the meetings, paying for themselves. Not having any recurring costs was a huge benefit. I would suggest the same to everyone: try to minimize costs as much as possible. Though in a big city it is much easier to get sponsorship from different companies or organizations for meetups, remember that it is someone's money; spend it like it is your own. Find out what the bare minimum requirements are, and only spend money on those.

How do we currently fund dgplug?

Before we answer this question, we should identify the expenses. Because all of our meetings happen over IRC, we don't have any recurring meeting costs. The only things we spend money on are:

  • The domain name
  • The web hosting (we use Dreamhost for static hosting)
  • Various bots (running in containers)

After I started working, I began paying for the first two expenses myself. The bots run beside my other containers on my personal servers, so there is no extra cost there. Among dgplug and friends we now have many volunteers who are working professionals, and we know we can cover any extra cost that comes up among ourselves. For example, this year during PyCon India we gave a scholarship to Trishna so that she could attend the conference; we had a quick call, and the funding was arranged in a few minutes. We also had group dinners at a few previous PyCons and provided lodging for women student participants, which was likewise handled among the group members.

But none of this happened from the start. It took time to grow the community, and to have friends who take dgplug as seriously as I do. We still try to cut costs wherever possible. If you are starting a new group and try to spend as little as possible, many members of existing communities will notice. The group will then be in a much better position to ask for funding when it is really necessary. People will help you out, but first the volunteers have to earn the community's trust, and that comes from work done at the ground level. Be true to yourselves, and keep doing good work. Good things will happen.

Initial analysis session

Posted by Suzanne Hillman (Outreachy) on December 20, 2016 12:30 AM

Last Thursday, Mo Duffy, Matthew Miller, and I did some initial analysis of the data I collected in my interviews.

I originally thought that we’d be doing some affinity mapping analysis, but that turned out to be less relevant with the data I brought. Instead, we discussed the patterns that I’d noticed in my interviews, most notably the difficulty of finding Fedora contributors and Fedora-related events in specific locations.

Patterns

These patterns broke down into a number of pieces, whether it was about finding someone who might be interested in helping organize an event or a sponsorship, finding people who might want to attend an event, or identifying Fedora contributors who are still actively involved in the community.

Finding people, finding events

It can be quite difficult to find Fedora contributors who are interested in helping organize or potentially attend an event, or determining if the people who you have found are still active contributors. This is especially difficult if you are hoping to recruit contributors who are not ambassadors.

The problem with finding events was that it’s hard to tell if you’re failing to find anything because nothing exists, or because you haven’t yet found the right places to look. Somewhat tangential to Hubs, it can also be difficult to find out about events that aren’t specific to Fedora, but at which sponsorship, or having a presentation or workshop, might be good.

Too much information!

Less common, but still a definite problem, was the difficulty in finding information that others may already have found. There is no common place to keep information about venues, hotels, or swag vendors for future event planners to refer to. There is also no way to access information that previous event planners have learned through experience, such as general budget expectations for events such as FLOCK, or how much swag one might need to bring to an event.

Somewhat similar, in that it involves a great deal of information, is the problem of having too much information to easily keep track of. Depending on its complexity, an event minimally involves a wiki page plus email, IRC, or social media communication. Some of the more complicated events may also include documents, spreadsheets, and PDF files. It can be quite a lot of effort to keep track of all these disparate sources of information, even if any particular individual doesn’t find it taxing.

Workflows

In addition to the general patterns I found among the interviews, Mo also suggested that I identify and describe the general workflows of the interviewees. There are two major categories into which the workflows fell: that of ambassadors and that of event planners.

Overall, ambassadors are typically doing two things: getting Fedora sponsorship into existing events, and acting as resources for the population they serve. The first requires a great deal of organization and the ability to find events for which Fedora sponsorship would be worthwhile; the second typically involves a great deal of information to store and sort through.

Event planning always involves organizing a lot of information, including timing, costs, and locations. It often also includes budgeting, travel, publicity, and topic selection.

What are the problems?

Now that the initial group analysis is complete, I am collating a list of problems that need solving so we can identify which are relevant to this project, and what the priorities are. That will be in my next post! For more information on the research I have done, see https://pagure.io/fedora-hubs/issue/279

The aforementioned pagure issue also includes the documents that I created for our analysis: summaries of the interviews, patterns and workflows, and a spreadsheet of the major points from each interview.

Until next time!

Wallabag is on Weblate!

Posted by Jean-Baptiste Holcroft on December 19, 2016 11:00 PM

Wallabag now uses the Weblate translation platform!

Previously, Wallabag expected its translators to submit pull requests. My view is that I don't like getting comments on my pull requests; my job is not to check a file's integrity and fix commas, but to provide quality, contextualized translations.

How it started

I was bothered by a few translations on my Wallabag installation (running thanks to the YunoHost system) that were incomplete or not quite to my taste.

Visiting the project's repository, I realized that everything lived in YAML files…

Naively, I tried editing the file on GitHub and opening a pull request (I don't really know how to use Git for that, and I find it unreasonable to have to learn it just to translate). I found the experience laborious; see the details of my pull request.

In the end, perhaps out of despair at my ineptitude, it was Jérémy Benoist, one of the project's maintainers, who finished the pull request and merged it into the rest of the code…

A technical review mechanism on pull requests seems inappropriate for a translator, who should be able to focus on the language.

The request

As every free software user should, I opened an issue to express my wish to use Weblate. Jérémy, perhaps sensitized by our exchanges on the pull request, immediately understood the benefit and was in favor.

So you can now translate Wallabag on this Weblate platform.

We can see that there have been roughly 2,000 changes since the project was created, and that English, French, German, and Japanese are now fully supported. Many other languages are available, but below 80% completion I feel a translation is a bit too incomplete to count as supported.

There you have it: these things take time, but in free software, sometimes simply stating translators' needs is enough to get satisfaction! The tooling part is done; what remains is the organizational side, coordinating translators effectively before a release, but that is for the project to take on.

xf86-input-synaptics is not a Synaptics, Inc. driver

Posted by Peter Hutterer on December 19, 2016 10:47 PM

This is a common source of confusion: the legacy X.Org driver for touchpads is called xf86-input-synaptics but it is not a driver written by Synaptics, Inc. (the company).

The repository goes back to 2002 and for the first couple of years Peter Osterlund was the sole contributor. Back then it was called "synaptics" and really was a "synaptics device" driver, i.e. it handled PS/2 protocol requests to initialise Synaptics, Inc. touchpads. Evdev support was added in 2003, punting the initialisation work to the kernel instead. This was the groundwork for a generic touchpad driver. In 2008 the driver was renamed to xf86-input-synaptics and relicensed from GPL to MIT to take it under the X.Org umbrella. I've been involved with it since 2008 and the official maintainer since 2011.

For many years now, the driver has been a generic touchpad driver that handles any device that the Linux kernel can handle. In fact, most bugs attributed to the synaptics driver not finding the touchpad are caused by the kernel not initialising the touchpad correctly. The synaptics driver reads the same evdev events that are also handled by libinput and the xf86-input-evdev driver, any differences in behaviour are driver-specific and not related to the hardware. The driver handles devices from Synaptics, Inc., ALPS, Elantech, Cypress, Apple and even some Wacom touch tablets. We don't care about what touchpad it is as long as the evdev events are sane.
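For illustration, an evdev event as these drivers read it from /dev/input/event* is a small fixed-size record. Here is a minimal sketch of decoding one in Python (my own illustration, assuming the 24-byte layout of struct input_event on 64-bit Linux; the EV_ABS/ABS_X constants come from the kernel's input-event-codes header):

```python
import struct

# struct input_event on 64-bit Linux: a struct timeval (two 8-byte longs),
# then type and code (unsigned shorts) and value (signed int).
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_ABS, ABS_X = 0x03, 0x00  # an absolute-axis event on the X axis

def decode_event(raw: bytes) -> dict:
    """Decode one raw input_event record into a small dict."""
    sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, raw)
    return {"time": sec + usec / 1e6, "type": etype, "code": code, "value": value}
```

A consumer such as a driver or libinput just reads a stream of such records; which vendor's touchpad produced them makes no difference as long as the events are sane.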

Synaptics, Inc.'s developers are active in kernel development to help get new touchpads up and running. Once the kernel handles them, the xorg drivers and libinput will handle them too. I can't remember any significant contribution by Synaptics, Inc. to the X.org synaptics driver, so they are simply neither to credit nor to blame for the current state of the driver. The top 10 contributors since August 2008 when the first renamed version of xf86-input-synaptics was released are:


8 Simon Thum
10 Hans de Goede
10 Magnus Kessler
13 Alexandr Shadchin
15 Christoph Brill
18 Daniel Stone
18 Henrik Rydberg
39 Gaetan Nadon
50 Chase Douglas
396 Peter Hutterer
There's a long tail of other contributors but the top ten illustrate that it wasn't Synaptics, Inc. that wrote the driver. Any complaints about Synaptics, Inc. not maintaining/writing/fixing the driver are missing the point, because this driver was never a Synaptics, Inc. driver. That's not a criticism of Synaptics, Inc. btw, that's just how things are. We should have renamed the driver to just xf86-input-touchpad back in 2008 but that ship has sailed now. And synaptics is about to be superseded by libinput anyway, so it's simply not worth the effort now.

The other reason I included the commit count in the above: I'm also the main author of libinput. So "the synaptics developers" and "the libinput developers" are effectively the same person, i.e. me. Keep that in mind when you read random comments on the interwebs, it makes it easier to identify people just talking out of their behind.

libinput touchpad pointer acceleration analysis

Posted by Peter Hutterer on December 19, 2016 09:36 PM

A long-standing criticism of libinput is its touchpad acceleration code, oscillating somewhere between "terrible", "this is bad and you should feel bad" and "I can't complain because I keep missing the bloody send button". I finally found the time and some more laptops to sit down and figure out what's going on.

I recorded touch sequences of the following movements:

  • super-slow: a very slow movement as you would do when pixel-precision is required. I recorded this by effectively slowly rolling my finger. This is an unusual but sometimes required interaction.
  • slow: a slow movement as you would do when you need to hit a target several pixels across from a short distance away, e.g. the Firefox tab close button
  • medium: a medium-speed movement though probably closer to the slow side. This would be similar to the movement when you move 5cm across the screen.
  • medium-fast: a medium-to-fast speed movement. This would be similar to the movement when you move 5cm across the screen onto a large target, e.g. when moving between icons in the file manager.
  • fast: a fast movement. This would be similar to the movement when you move between windows some distance apart.
  • flick: a flick movement. This would be similar to the movement when you move to a corner of the screen.
Note that all these are by definition subjective and somewhat dependent on the hardware. Either way, I tried to get something of a reasonable subset.

Next, I ran this through a libinput 1.5.3 augmented with printfs in the pointer acceleration code and a script to post-process that output. Unfortunately, libinput's pointer acceleration internally uses units equivalent to a 1000dpi mouse and that's not something easy to understand. Either way, the numbers themselves don't matter too much for analysis right now and I've now switched everything to mm/s anyway.

A note ahead: the analysis relies on libinput recording an evemu replay. That relies on uinput and event timestamps are subject to a little bit of drift across recordings. Some differences in the before/after of the same recording can likely be blamed on that.

The graph I'll present for each recording is relatively simple: it shows the velocity and the matching factor. The x axis is simply the events in sequence, the y axes are the factor and the velocity (note: two different scales in one graph). The coloured regions mark where some type of acceleration applies: green means "maximum factor applied", yellow means "decelerated", and purple ("adaptive") means per-velocity acceleration is applied. Anything that remains white is used as-is (aside from the constant deceleration).

Interesting numbers for the factor are 0.4 and 0.8. We have a constant deceleration of 0.4 on touchpads, so a factor of 0.4 means "no acceleration applied" and 0.8 is the maximum factor. The maximum factor is twice the normal factor, so the pointer moves twice as fast. Anything below 0.4 means we decelerate the pointer, i.e. the pointer moves slower than the finger.
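To make those numbers concrete, here is a toy sketch (my own illustration, not libinput's code) of how a factor scales finger motion relative to the 0.4 baseline:

```python
BASELINE = 0.4    # constant factor on touchpads: "no acceleration applied"
MAX_FACTOR = 0.8  # twice the baseline, so the pointer moves twice as fast

def pointer_delta(finger_delta_mm: float, factor: float) -> float:
    """Scale a finger movement by the factor, relative to the 0.4 baseline."""
    return finger_delta_mm * (factor / BASELINE)
```

At factor 0.4 the pointer tracks the finger 1:1, at 0.8 it moves twice as far, and anything below 0.4 decelerates the pointer.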

The super-slow movement shows that the factor is, aside from the beginning, always below 0.4, i.e. the sequence sees deceleration applied. The takeaway here is that acceleration appears to be doing the right thing: slow motion is decelerated, and while there may or may not be some tweaking to do, there is no smoking gun.


Super slow motion is decelerated.

The slow movement shows that the factor is almost always 0.4, aside from a few extremely slow events. This indicates that for the slow speed, the pointer movement maps exactly to the finger movement save for our constant deceleration. As above, there is no indicator that we're doing something seriously wrong.


Slow motion is largely used as-is with a few decelerations.

The medium movement gets interesting. If we look at the factor applied, it changes wildly with the velocity across the whole range between 0.4 and the maximum 0.8. There is a short spike at the beginning where it maxes out but the rest is accelerated on-demand, i.e. different finger speeds will produce different acceleration. This shows the crux of what a lot of users have been complaining about - what is a fairly slow motion still results in an accelerated pointer. And because the acceleration changes with the speed the pointer behaviour is unpredictable.


In medium-speed motion acceleration changes with the speed and even maxes out.

The medium-fast movement shows almost the whole movement maxing out on the maximum acceleration factor, i.e. the pointer moves at twice the speed to the finger. This is a problem because this is roughly the speed you'd use to hit a "mentally preselected" target, i.e. you know exactly where the pointer should end up and you're just intuitively moving it there. If the pointer moves twice as fast, you're going to overshoot and indeed that's what I've observed during the touchpad tap analysis userstudy.


Medium-fast motion easily maxes out on acceleration.

The fast movement shows basically the same thing, almost the whole sequence maxes out on the acceleration factor so the pointer will move twice as far as intuitively guessed.


Fast motion maxes out acceleration.

So does the flick movement, but in that case we want it to go as far as possible and note that the speeds between fast and flick are virtually identical here. I'm not sure if that's me just being equally fast or the touchpad not quite picking up on the short motion.


Flick motion also maxes out acceleration.

Either way, the takeaway is simple: we accelerate too soon and there's a fairly narrow window where we have adaptive acceleration, it's very easy to top out. The simplest fix to get most touchpad movements working well is to increase the current threshold on when acceleration applies. Beyond that it's a bit harder to quantify, but a good idea seems to be to stretch out the acceleration function so that the factor changes at a slower rate as the velocity increases. And up the acceleration factor so we don't top out and we keep going as the finger goes faster. This would be the intuitive expectation since it resembles physics (more or less).
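As a rough illustration of that proposal (my own toy numbers, not the values in the actual patches), the stretched-out profile could look like a simple piecewise function: stay at the baseline longer, then ramp up more gently without a hard cap being hit early:

```python
BASELINE = 0.4  # unaccelerated factor on touchpads

def accel_factor(speed_mm_s: float, threshold: float = 50.0,
                 slope: float = 0.002, cap: float = 1.0) -> float:
    """Stay at the baseline until `threshold`, then ramp slowly toward `cap`."""
    if speed_mm_s <= threshold:
        return BASELINE
    return min(cap, BASELINE + slope * (speed_mm_s - threshold))
```

A higher threshold keeps medium-speed movements unaccelerated; a gentler slope and a higher cap mean the factor keeps growing as the finger goes faster instead of topping out early.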

There's a set of patches on the list now that does exactly that. So let's see what the result of this is. Note ahead: I also switched everything to mm/s, which causes some numbers to shift slightly.

The super-slow motion is largely unchanged though the velocity scale changes quite a bit. Part of that is that the new code has a different unit which, on my T440s, isn't exactly 1000dpi. So the numbers shift and the result of that is that deceleration applies a bit more often than before.


Super-slow motion largely remains the same.

The slow motions are largely unchanged but more deceleration is now applied. Tbh, I'm not sure if that's an artefact of the evemu replay, the new accel code or the result of the not-quite-1000dpi of my touchpad.


Slow motion largely remains the same.

The medium motion is the first interesting one because that's where we had the first observable issues. In the new code, the motion is almost entirely unaccelerated, i.e. the pointer will move as the finger does. Success!


Medium-speed motion now matches the finger speed.

The same is true of the medium-fast motion. In the recording the first few events were past the new thresholds so some acceleration is applied, the rest of the motion matches finger motion.


Medium-fast motion now matches the finger speed except at the beginning where some acceleration was applied.

The fast and flick motion are largely identical in having the acceleration factor applied to almost the whole motion but the big change is that the factor now goes up to 2.3 for the fast motion and 2.5 for the flick motion, i.e. both movements would go a lot faster than before. In the graphics below you still see the blue area marked as "previously max acceleration factor" though it does not actually max out in either recording now.


Fast motion increases acceleration as speed increases.

Flick motion increases acceleration as speed increases.

In summary, what this means is that the new code accelerates later but when it does accelerate, it goes faster. I tested this on a T440s, a T450p and an Asus VivoBook with an Elantech touchpad (which is almost unusable with current libinput). They don't quite feel the same yet and I'm not happy with the actual acceleration, but for 90% of 'normal' movements the touchpad now behaves very well. So at least we go from "this is terrible" to "this needs tweaking". I'll go check if there's any champagne left.

Converging Monoid Addition for T-Digest

Posted by Erik Erlandson on December 19, 2016 08:29 PM

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky. "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied. "Why is the net wired randomly?", asked Minsky. "I do not want it to have any preconceptions of how to play", Sussman said. Minsky then shut his eyes. "Why do you close your eyes?" Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened.

Recently I've been doing some work with the t-digest sketching algorithm, from the paper by Ted Dunning and Omar Ertl. One of the appealing properties of t-digest sketches is that you can "add" them together in the monoid sense to produce a combined sketch from two separate sketches. This property is crucial for sketching data across data partitions in scale-out parallel computing platforms such as Apache Spark or Map-Reduce.

In the original Dunning/Ertl paper, they describe an algorithm for monoidal combination of t-digests based on randomized cluster recombination. The clusters of the two input sketches are collected together, then randomly shuffled, and inserted into a new t-digest in that randomized order. In Scala code, this algorithm might look like the following:

```scala
def combine(ltd: TDigest, rtd: TDigest): TDigest =
  // randomly shuffle input clusters and re-insert into a new t-digest
  shuffle(ltd.clusters.toVector ++ rtd.clusters.toVector)
    .foldLeft(TDigest.empty)((d, e) => d + e)
```

I implemented this algorithm and used it until I noticed that a sum over multiple sketches seemed to behave noticeably differently than either the individual inputs, or the nominal underlying distribution.

To get a closer look at what was going on, I generated some random samples from a Normal distribution ~N(0,1). I then generated t-digest sketches of each sample, took a cumulative monoid sum, and kept track of how closely each successive sum adhered to the original ~N(0,1) distribution. As a measure of the difference between a t-digest sketch and the original distribution, I computed the Kolmogorov-Smirnov D-statistic, which yields a distance between two cumulative distribution functions. (Code for my data collections can be viewed here) I ran multiple data collections and subsequent cumulative sums and used those multiple measurements to generate the following box-plot. The result was surprising and a bit disturbing:

plot1

As the plot shows, the t-digest sketch distributions are gradually diverging from the underlying "true" distribution ~N(0,1). This is a potentially significant problem for the stability of monoidal t-digest sums, and by extension any parallel sketching based on combining the partial sketches on data partitions in map-reduce-like environments.
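As an aside, the Kolmogorov-Smirnov D-statistic used for these measurements is straightforward to compute. Here is a minimal sketch (my own illustration, not the linked collection code) of the D-statistic between an empirical sample and a reference CDF:

```python
import math

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """CDF of a Normal distribution, e.g. ~N(0,1) with the defaults."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_d(sample, cdf) -> float:
    """Two-sided Kolmogorov-Smirnov D-statistic of a sample vs. a reference CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # compare the empirical CDF just before and just after each point
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d
```

In the actual measurement one side is a t-digest's estimated CDF rather than a raw sample, but the distance being computed is the same idea.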

Seeing this divergence motivated me to think about ways to avoid it. One property of t-digest insertion logic is that the results of inserting new data can differ depending on what clusters are already present. I wondered if the results might be more stable if the largest clusters were inserted first. The t-digest algorithm allows clusters closest to the distribution median to grow the largest. Combining input clusters from largest to smallest would be like building the combined distribution from the middle outwards, toward the distribution tails. In the case where one t-digest had larger weights, it would also somewhat approximate inserting the smaller sketch into the larger one. In Scala code, this alternative monoid addition looks like so:

```scala
def combine(ltd: TDigest, rtd: TDigest): TDigest =
  // insert clusters from largest to smallest
  (ltd.clusters.toVector ++ rtd.clusters.toVector)
    .sortWith((a, b) => a._2 > b._2)
    .foldLeft(TDigest.empty(delta))((d, e) => d + e)
```

As a second experiment, for each data sampling I compared the original monoid addition with the alternative method using largest-to-smallest cluster insertion. When I plotted the resulting progression of D-statistics side-by-side, the results were surprising:

plot2a

As the plot demonstrates, not only was large-to-small insertion more stable, its D-statistics appeared to be getting smaller instead of larger. To see if this trend was sustained over longer cumulative sums, I plotted the D-stats for cumulative sums over 100 samples:

plot2

The results were even more dramatic: these longer sums show that the standard randomized-insertion method continues to diverge, while with large-to-small insertion the cumulative t-digest sums continue to converge toward the underlying distribution!

To test whether this effect might be dependent on particular shapes of distribution, I ran similar experiments using a Uniform distribution (no "tails") and an Exponential distribution (one tail). I included the corresponding plots in the appendix. The convergence of this alternative monoid addition doesn't seem to be sensitive to shape of distribution.

I have upgraded my implementation of t-digest sketching to use this new definition of monoid addition. As you can see, changing one combination strategy for the other takes only a line or two of code. I hope this idea proves useful for other implementations in the community. Happy sketching!

Appendix: Plots with Alternate Distributions

plot3

plot4

Wheee, another addition.

Posted by Paul W. Frields on December 19, 2016 08:18 PM

The post Wheee, another addition. appeared first on The Grand Fallacy.

I’m thrilled to announce that Jeremy Cline has joined the Fedora Engineering team, effective today. Like our other recent immigrant, Randy Barlow, Jeremy was previously a member of Red Hat’s Pulp team. (This is mostly coincidental — the Pulp team’s a great place to work, and people there don’t just move to Fedora automatically.) Jeremy is passionate about open source and has a long history of contribution. We had many excellent applicants for our job opening, and weren’t even able to interview every qualified candidate before we had to make a decision. I’m very pleased with the choice, and I hope the Fedora community joins me in welcoming Jeremy!

Does "real" security matter?

Posted by Josh Bressers on December 19, 2016 06:45 PM
As the dumpster fire that is 2016 crawls to the finish line, we had another story about a massive Yahoo breach. 1 billion user accounts had data stolen. Just to give some context here, that has to be hundreds of gigabytes at an absolute minimum. That's a crazy amount of data.

And nobody really cares.

Sure, there is some noise about all this, but in a week or two nobody will even remember. There has been a similar story about every month all year long. Can you even remember any of them? The stock market doesn't: basically no company that has suffered a massive breach has seen a long-term problem with its stock. Sure, there will be a blip where everyone panics for a few days, then things go back to normal.

So this brings us to the title of this post.

Does anyone care about real security? What I mean here is I'm going to lump things into three buckets: no security, real security, and compliance security.

No Security
This one is pretty simple. You don't do anything. You just assume things will be OK, someday they aren't, then you clean up whatever mess you find. You could call this "reactive security" if you wanted. I'm feeling grumpy though.

Real Security
This is when you have a real security team, and you spend real money on features and technology. You have proper logging, threat models, attack surfaces, and hardened operating systems. Your applications go through a security development process and run in a sandbox. This stuff is expensive. And hard.

Compliance Security
This is where you do whatever you have to because some regulation from somewhere says you have to. Password lengths, enabling TLS 1.2, encrypted data, the list is long. Just look at PCI if you want an example. I have no problem with this, and I think it's the future. Here is a picture of how things look today.

I don't think anyone would disagree that if you're doing only the minimum compliance suggests, you will still have plenty of insecurity. The problem with real security is that you're probably not getting much ROI; it's likely a black hole you dump money into with minimal value back (remember the bit about long-term stock prices not mattering here).

However, when we look at the sorry state of nearly all infrastructure and especially the IoT universe, it's clear that No Security is winning this race. Expecting anyone to make great leaps in security isn't going to happen. Most won't follow unless they absolutely have to. This is why compliance is the future. We have to keep nudging compliance to the right on this graph, but we have to move it slowly.

It's all about the Benjamins
As I mentioned above, security problems don't seem to cause much negative financial impact. Compliance problems do. Right now there are very few instances where compliance is required, and even when it is, it's not always as strong as it could be. Good security will first have to show value (actual measurable value, not made-up statistics); once we see that value, it should be mandated by regulation. Not everything should be regulated, but we need clear rules as to what should require compliance, why, and especially how. I used to despise the idea of mandatory compliance around security, but at this point I think it's the only plausible solution. This problem isn't going to fix itself. If you want to make a prediction, ask yourself: is there a reason 2017 will be more secure than 2016?

Do you have thoughts on compliance? Let me know.

Comparing OpenStack Neutron ML2+OVS and OVN – Control Plane

Posted by Russell Bryant on December 19, 2016 05:43 PM

We have done a lot of performance testing of OVN over time, but one major thing missing has been an apples-to-apples comparison with the current OVS-based OpenStack Neutron backend (ML2+OVS).  I’ve been working with a group of people to compare the two OpenStack Neutron backends.  This is the first piece of those results: the control plane.  Later posts will discuss data plane performance.

Control Plane Differences

The ML2+OVS control plane is based on a pattern seen throughout OpenStack.  There is a series of agents written in Python.  The Neutron server communicates with these agents using an rpc mechanism built on top of AMQP (RabbitMQ in most deployments, including our tests).

OVN takes a distributed database-driven approach.  Configuration and state are managed through two databases: the OVN northbound and southbound databases.  These databases are currently based on OVSDB.  Instead of receiving updates via RPC, components watch relevant portions of the database for changes and apply them locally.  More detail about these components can be found in my post about the first release of OVN, or even more detail is in the ovn-architecture document.

OVN does not make use of any of the Neutron agents.  Instead, all required functionality is implemented by ovn-controller and OVS flows.  This includes things like security groups, DHCP, L3 routing, and NAT.

Hardware and Software

Our testing was done in a lab using 13 machines which were allocated to the following functions:

  • 1 OpenStack TripleO Undercloud for provisioning
  • 3 Controllers (OpenStack and OVN control plane services)
  • 9 Compute Nodes (Hypervisors)

The hardware had the following specs:

  • 2x E5-2620 v2 (12 total cores, 24 total threads)
  • 64GB RAM
  • 4 x 1TB SATA
  • 1 x Intel X520 Dual Port 10G

Software:

  • CentOS 7.2
  • OpenStack, OVS, and OVN from their master branches (early December, 2016)
  • Neutron configuration notes
    • (OVN) 6 API workers, 1 RPC worker (since rpc is not used and neutron requires at least 1) for neutron-server on each controller (x3)
    • (ML2+OVS) 6 API workers, 6 RPC workers for neutron-server on each controller (x3)
    • (ML2+OVS) DVR was enabled

Test Configuration

The tests were run using OpenStack Rally.  We used the Browbeat project to easily set up, configure, and run the tests, as well as store, analyze, and compare results.  The rally portion of the browbeat configuration was:

rerun: 3
...
rally:
  enabled: true
  sleep_before: 5
  sleep_after: 5
  venv: /home/stack/rally-venv/bin/activate
  plugins:
    - netcreate-boot: rally/rally-plugins/netcreate-boot
    - subnet-router-create: rally/rally-plugins/subnet-router-create
    - neutron-securitygroup-port: rally/rally-plugins/neutron-securitygroup-port
  benchmarks:
    - name: neutron
      enabled: true
      concurrency:
        - 8
        - 16
        - 32 
      times: 500
      scenarios:
        - name: create-list-network
          enabled: true
          file: rally/neutron/neutron-create-list-network-cc.yml
        - name: create-list-port
          enabled: true
          file: rally/neutron/neutron-create-list-port-cc.yml
        - name: create-list-router
          enabled: true
          file: rally/neutron/neutron-create-list-router-cc.yml
        - name: create-list-security-group
          enabled: true
          file: rally/neutron/neutron-create-list-security-group-cc.yml
        - name: create-list-subnet
          enabled: true
          file: rally/neutron/neutron-create-list-subnet-cc.yml
    - name: plugins
      enabled: true
      concurrency:
        - 8
        - 16
        - 32 
      times: 500
      scenarios:
        - name: netcreate-boot
          enabled: true
          image_name: cirros
          flavor_name: m1.xtiny
          file: rally/rally-plugins/netcreate-boot/netcreate_boot.yml
        - name: subnet-router-create
          enabled: true
          num_networks:  10
          file: rally/rally-plugins/subnet-router-create/subnet-router-create.yml
        - name: neutron-securitygroup-port
          enabled: true
          file: rally/rally-plugins/neutron-securitygroup-port/neutron-securitygroup-port.yml

This configuration defines several scenarios to run.  Each one is set to run 500 times, at three different concurrency levels.  Finally, “rerun: 3” at the beginning says we run the entire configuration 3 times.  This is a bit confusing, so let’s look at one example.

The “netcreate-boot” scenario is to create a network and boot a VM on that network.  The configuration results in the following execution:

  • Run 1
    • Create 500 VMs, each on their own network, 8 at a time, and then clean up
    • Create 500 VMs, each on their own network, 16 at a time, and then clean up
    • Create 500 VMs, each on their own network, 32 at a time, and then clean up
  • Run 2
    • Create 500 VMs, each on their own network, 8 at a time, and then clean up
    • Create 500 VMs, each on their own network, 16 at a time, and then clean up
    • Create 500 VMs, each on their own network, 32 at a time, and then clean up
  • Run 3
    • Create 500 VMs, each on their own network, 8 at a time, and then clean up
    • Create 500 VMs, each on their own network, 16 at a time, and then clean up
    • Create 500 VMs, each on their own network, 32 at a time, and then clean up

In total, we will have created 4500 VMs.
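That total can be sanity-checked with a little arithmetic (a trivial sketch; the numbers come straight from the configuration above):

```python
times = 500                      # "times: 500" per scenario
concurrency_levels = [8, 16, 32] # one full pass per concurrency level
reruns = 3                       # "rerun: 3" for the whole configuration

total_vms = times * len(concurrency_levels) * reruns  # 500 * 3 * 3
```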

Results

Browbeat includes the ability to store all Rally test results in Elasticsearch and then display them using Kibana.  A live dashboard of these results is on elk.browbeatproject.org.

The following tables show the average, 95th percentile, maximum, and minimum times for all APIs executed throughout the test scenarios.

API ML2+OVS Average OVN Average % improvement
nova.boot_server 80.672 23.45 70.93%
neutron.list_ports 6.296 6.478 -2.89%
neutron.list_subnets 5.129 3.826 25.40%
neutron.add_interface_router 4.156 3.509 15.57%
neutron.list_routers 4.292 3.089 28.03%
neutron.list_networks 2.596 2.628 -1.23%
neutron.list_security_groups 2.518 2.518 0.00%
neutron.remove_interface_router 3.679 2.353 36.04%
neutron.create_port 2.096 2.136 -1.91%
neutron.create_subnet 1.775 1.543 13.07%
neutron.delete_port 1.592 1.517 4.71%
neutron.create_security_group 1.287 1.372 -6.60%
neutron.create_network 1.352 1.285 4.96%
neutron.create_router 1.181 0.845 28.45%
neutron.delete_security_group 0.763 0.793 -3.93%

 

API ML2+OVS 95% OVN 95% % improvement
nova.boot_server 163.2 35.336 78.35%
neutron.list_ports 11.038 11.401 -3.29%
neutron.list_subnets 10.064 6.886 31.58%
neutron.add_interface_router 7.908 6.367 19.49%
neutron.list_routers 8.374 5.321 36.46%
neutron.list_networks 5.343 5.171 3.22%
neutron.list_security_groups 5.648 5.556 1.63%
neutron.remove_interface_router 6.917 4.078 41.04%
neutron.create_port 5.521 4.968 10.02%
neutron.create_subnet 4.041 3.091 23.51%
neutron.delete_port 2.865 2.598 9.32%
neutron.create_security_group 3.245 3.547 -9.31%
neutron.create_network 3.089 2.917 5.57%
neutron.create_router 2.893 1.92 33.63%
neutron.delete_security_group 1.776 1.72 3.15%

 

API ML2+OVS Maximum OVN Maximum % improvement
nova.boot_server 221.877 47.827 78.44%
neutron.list_ports 29.233 32.279 -10.42%
neutron.list_subnets 35.996 17.54 51.27%
neutron.add_interface_router 29.591 22.951 22.44%
neutron.list_routers 19.332 13.975 27.71%
neutron.list_networks 12.516 13.765 -9.98%
neutron.list_security_groups 14.577 13.092 10.19%
neutron.remove_interface_router 35.546 9.391 73.58%
neutron.create_port 53.663 40.059 25.35%
neutron.create_subnet 46.058 26.472 42.52%
neutron.delete_port 5.121 5.149 -0.55%
neutron.create_security_group 14.243 13.206 7.28%
neutron.create_network 32.804 32.566 0.73%
neutron.create_router 14.594 6.452 55.79%
neutron.delete_security_group 4.249 3.746 11.84%

 

API ML2+OVS Minimum OVN Minimum % improvement
nova.boot_server 18.665 3.761 79.85%
neutron.list_ports 0.195 0.22 -12.82%
neutron.list_subnets 0.252 0.187 25.79%
neutron.add_interface_router 1.698 1.556 8.36%
neutron.list_routers 0.185 0.147 20.54%
neutron.list_networks 0.21 0.174 17.14%
neutron.list_security_groups 0.132 0.184 -39.39%
neutron.remove_interface_router 1.557 1.057 32.11%
neutron.create_port 0.58 0.614 -5.86%
neutron.create_subnet 0.42 0.416 0.95%
neutron.delete_port 0.464 0.46 0.86%
neutron.create_security_group 0.081 0.094 -16.05%
neutron.create_network 0.113 0.179 -58.41%
neutron.create_router 0.077 0.053 31.17%
neutron.delete_security_group 0.092 0.104 -13.04%
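For reference, the "% improvement" columns above are the relative reduction of the OVN time versus the ML2+OVS time (negative values are regressions); a small Python sketch spot-checking two rows of the average-times table:

```python
def improvement(ml2_ovs: float, ovn: float) -> float:
    """Percent improvement of OVN over ML2+OVS; negative means a regression."""
    return (ml2_ovs - ovn) / ml2_ovs * 100

# Spot-check against the average-times table:
print(round(improvement(80.672, 23.45), 2))  # 70.93  (nova.boot_server)
print(round(improvement(6.296, 6.478), 2))   # -2.89  (neutron.list_ports)
```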

Analysis

The most drastic difference in results is for “nova.boot_server”.  This is also the one piece of these tests that actually measures the time it takes to provision the network, and not just loading Neutron with configuration.

When Nova boots a server, it blocks waiting for an event from Neutron indicating that a port is ready before it sets the server state to ACTIVE and powers on the VM.  Both ML2+OVS and OVN implement this mechanism.  Our test scenario measured the time it took for servers to become ACTIVE.

Further tests were done on ML2+OVS and we were able to confirm that disabling this synchronization between Nova and Neutron brought the results back to being on par with the OVN results.  This confirmed that the extra time was indeed spent waiting for Neutron to report that ports were ready.

To be clear, you should not disable this synchronization.  The only reason you can disable it is because not all Neutron backends support it (ML2+OVS and OVN both do).  It was put in place to avoid a race condition.  It ensures that the network is actually ready for use before booting a VM.  The issue is how long it’s taking Neutron to provision the network for use.  Further analysis is needed to break down where Neutron (ML2+OVS) is spending most of its time in the provisioning process.
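For reference, the Nova side of this synchronization is governed by a pair of configuration options; a sketch of the relevant nova.conf entries with their upstream defaults (shown only for orientation; as noted above, do not disable the handshake):

```
# nova.conf -- controls how Nova waits for Neutron's "network-vif-plugged"
# event before powering on a VM and marking it ACTIVE.
[DEFAULT]
vif_plugging_is_fatal = True   # fail the boot if the event never arrives
vif_plugging_timeout = 300     # seconds to wait for the event
```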


Episode 20 - The Death of PGP

Posted by Open Source Security Podcast on December 19, 2016 01:43 PM
Josh and Kurt talk about the death of PGP, and how it's not actually dead at all. It's still really hard to use though.

Download Episode

Show Notes


Fedora @ LISA 2016 Wrap-Up

Posted by Mike DePaulo on December 19, 2016 12:58 PM

LISA is a conference that stands for "Large Installation System Administration." Corey Sheldon told me how awesome it was last year, and that is one reason why I went this year. I was one of the Fedora ambassadors, along with:
  • Matthew Miller (it's the only event he serves as an ambassador at each year)
  • Corey Sheldon
  • Nick Bebout
  • Beth Eicher
  • Some Red Hatters involved in Fedora (like Steve Gallagher).
    • Note: There was a Red Hat booth right next to us.

Mike DePaulo showing off X2Go to Matthew Miller

Overall, the event went very well. Perhaps it felt that way due to the technical nature of the crowd. The reasons why it went well are:
  • Many people had heard of Fedora
  • Of those who had not heard of Fedora, almost everyone had heard of RHEL or CentOS, and was pleased to hear about Fedora's relationship to them.
    • In addition to the usual explanations about the relationship, I like to say "Red Hat sells its past, and gives away its future."
  • A Microsoft employee expressed his opinion that Fedora should be on the Azure cloud.
    • Disclaimer: This was his personal opinion.
  • A few people reported that they were using Fedora in production environments consisting of dozens or hundreds of machines.
  • At least 1 person expressed interest in using the Fedora 24 Workstation DVDs that we had, specifically to try out Fedora via the live mode.
  • At least 1 person was delighted to hear that he could use either GNOME boxes or virt-manager on his CentOS system to run Fedora easily.
    • After all, Fedora includes the drivers and guest agents to run on these KVM-based virtualization solutions.
  • Steve Gallagher gave a presentation on Fedora Modularity. Of particular note was his Voltron analogy!
Some downsides though:
  • We only had Fedora 24 DVDs rather than Fedora 25 DVDs. Of course, Fedora 25 was only released 2 weeks prior, so it was unreasonable to have hundreds of DVDs printed and delivered by then.
  • We had Cockpit on a big display. Although it generally worked, the SELinux feature did not work at all. Even when we went out of our way to generate an SELinux error that was logged, it did not show up in the Cockpit GUI.
I personally had a great time. I look forward to next year!

When Do Drive-By Documentation Contributors Start to Matter?

Posted by Brian "bex" Exelbierd on December 19, 2016 12:03 PM

We should be using a Wiki

A common conversation in documentation circles involves tooling. Within this genre, there is a special conversation around friction that creates a barrier to contribution. Inevitably, someone will argue their preferred tooling (OK, wiki … it is always wiki) is better because it has the lowest friction for drive-by contributors. These contributors, the argument goes, are critical and represent the bulk of the edits and contributions. They then proceed to cite the hit list of fantastic wiki projects, Wikipedia, Arch Linux, etc. I’ve had this conversation in the Fedora Docs Project and most recently in the ledger-cli project.

The counter arguments usually seem to boil down to only a few points:

  • Wikis suck, or wikis don’t support the workflow I think is important.
  • We don’t have enough (potential1) wiki gardeners to prevent spam, bad edits, etc.
  • Our scale is nowhere near those projects’, so we can’t expect the same results.
  • Our users should, in general, be familiar with more tools, so this isn’t a problem.

Neither side in this conversation has actually proven their point; they’ve just made arguments based on their beliefs. I was wondering if one could prove the need to lower contribution barriers to the “wiki” level.

If you make compromises too early, you are suffering from the “think I need vs I actually need” feature problem. This is similar to designing a new product for launch. It is not uncommon for people to get sidetracked on features, such as “How to change your payment method,” that aren’t needed at launch2. The “new common thinking” is launch a minimum viable product (MVP) and build the features as user demand warrants them.

It’s raining cold hard (arbitrary) facts up in here.3

I am making the following base line contextual decisions4:

  1. Your current documentation contributors are highly affiliated with the project and don’t want to have more tools to use than necessary, i.e. they want their docs as close to their source as they can get them.
  2. Your website/publishing/man pages/whatever are all built using a build script or CI/CD and you want to be able to test for docs in PRs, etc.
  3. You want someone to review documentation changes for sanity, spamminess, accuracy, etc.
  4. You’re willing to compromise on the first 2 points in exchange for better and more complete documentation.
  5. Wikis have a lower barrier to contribution than all other tooling options. This is accepted to make this case. This is a whole other debate in actuality.

This makes me want a number, a tipping point, a something, that indicates when I should start compromising on the first 2 points in order to get better documentation. In other words, when do drive-by contributors who are scared off by your current tooling become too important to keep ignoring?

Warning: Bad Data Analysis Follows

Warning: I am not a data analyst. I even got turned down for my audition to play one on TV. Therefore, I suspect there are holes you can drive several trucks through in what is written below. Use the links at the bottom to tell me about them and how to fix them. Even better, write your own post somewhere and I’ll link to it.

In the absence of any other readily available data, I went to one of the leading lights of the wiki world, Wikipedia. I found the following data about Wikipedians. Wikipedians are what Wikipedia calls contributors. Based on this data, I put together a table (ODS, PDF) of contributor counts. I’ve made the following contextual decisions5:

  • A drive-by contributor is modeled by Wikipedians who make less than 5 edits per month.
  • Core contributors, those who are both prolific and theoretically willing to put up with some level of friction, are modeled by the Wikipedians who make more than 5 edits per month.
  • The population make up of Wikipedia is comparable to your project (hah!).
  • You will get no drive-by contributor until you move to a wiki.

Question 1: Assuming all contributors have equal value to the project, when are we “losing” more than 50% of the potential contributors?

In January 2002 Wikipedia reported 344 total Wikipedians of which 152 made more than 5 edits. This is the first month in which more than 50%6 of Wikipedians were drive-by contributors.

Therefore, assuming the rest of this holds any truth, once your project hits 152 documentation contributors a month you should consider shifting the balance of tools vs low friction towards the low friction side of the equation.

Obviously a massive caveat here is that Wikipedia was so new in January 2002 that this number might be meaningless. So maybe a different question is in order.

Question 2: Assuming all contributions have equal value to the project, when are we “losing” more than 50% of the potential contributions?

For this question, let’s see when we are losing 50% of the contributions. Because of the lack of detailed data, I made the following contextual decisions7:

  • Drive-by Contributors made 3 edits per contributor. That is halfway between 1 and 5 edits per month.
  • Contributors with greater than 5 edits per month but less than 10 edits per month, made 8 edits per contributor. That is halfway between 6 and 10.
  • Contributors who made greater than 10 edits per month made 11 edits. Let’s balance against an upper maximum of infinity.

In February 2007 Wikipedia reported 192,338 total Wikipedians with 47,849 making more than 5 edits per month and 4,500 having made more than 10 edits per month. This is the second month, and the beginning of the solid trend, where more than 50%8 of the edits were made by drive-by contributors.

Therefore, assuming the rest of this holds any truth, once your project hits 47,849 documentation contributions a month you should consider shifting the balance of tools vs low friction towards the low friction side of the equation.
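Under the assumptions listed above (3, 8, and 11 edits per bucket), the February 2007 numbers work out as follows; a small Python sketch of the footnote-8 calculation:

```python
# Wikipedians in February 2007, from the post:
total = 192_338    # all Wikipedians
over_5 = 47_849    # made more than 5 edits in the month
over_10 = 4_500    # made more than 10 edits in the month

# Assumed edits per contributor: 3 (drive-by), 8 (6-10 edits), 11 (10+ edits).
drive_by_edits = (total - over_5) * 3
mid_edits = (over_5 - over_10) * 8
heavy_edits = over_10 * 11

share = drive_by_edits / (drive_by_edits + mid_edits + heavy_edits)
print(f"{share:.1%}")  # 52.2% -- just over the 50% threshold
```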

But maybe that is not how you think about things. Perhaps you think differently.

Question 3: Assuming we don’t want to risk losing too many potential contributors, what is a metric for figuring out this loss?

One way of thinking about this is to ask when you have turned away a drive-by contributor. What if we could look at how many page-views your project has to have before you get a drive-by contributor?

Wikipedia provides page-view traffic. In May 2015 Wikipedia changed their data reporting for page-views to eliminate bot traffic. I also didn’t find any data for months after August 2016. Therefore, I restricted my analysis to just the months between May 2015 and August 2016. I calculated the number of page-views per drive-by contributor9. I then averaged the value for those months.

Therefore, for every 7,529 page-views per month of your documentation, you are losing a drive-by contributor.
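Combining this with the 75,000 page-views figure used later in the post gives footnote 10's estimate; a one-line Python check:

```python
views_per_drive_by = 7_529   # average page-views per drive-by contributor, per the post

# How many drive-by contributors does 75,000 page-views a month represent?
print(round(75_000 / views_per_drive_by))  # 10
```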

Final Thoughts

I am not sure yet how to put these ideas into practice. I am not sure which question I identify with most. I’d also like to see some statistics from projects considering moving to wikis to see if these numbers make any sense at all. I sincerely doubt many projects have over 100 documentation contributors. Should this be considered over the set of all contributors and not just documentation contributors? How many projects get more than 75,00010 page-views a month for documentation alone?

Additionally, I’d also like to see some UX studies done of a wiki edit versus a web-based edit/PR in GitHub or Pagure. While not directly related to this analysis, this could resolve the question of whether this is even a question11.

Finally, I suspect there are methodological flaws, data flaws, contextual decision flaws and more in this simple analysis. I am also fairly certain, though, I didn’t find it, that someone has to have done a better analysis of these questions.

Please provide some feedback. Is this meaningful? Is there better data? Better methodology? Better conclusions? Should I try for a follow up using Arch Wiki’s Statistics?

Let me know. You can contact me using the links below.

  1. Fedora already has a wiki that is suffering from a lack of gardening. Ledger-cli has a wiki that is not being used for the documentation in question even though it could be. However, it doesn’t seem to have a high level of contribution (I have not looked closely; I am judging this based on the quantity of content).

  2. and probably not for a long time. Go read some Tyler Tringas!

  3. Sears Commercial Yes, I do have an interesting sense of humor, thanks for asking.

  4. How’s that for a fancy term for “assumptions I pulled out of the MIASS database.”

  5. You know what the problem with “contextual decisions” is .. it makes a condec out of you and me … or something like that.

  6. Percentage of edits by contributors making less than 5 edits per month: =1-(D181/B181)

  7. It’s my blog post so I get to make stuff up. If you wanna play, get your own blog post.

  8. Percentage of edits by contributors making less than 5 edits per month: =((B120-D120)*3)/((B120-D120)*3+(D120-E120)*8+E120*11)

  9. Page-views per drive-by contributor: =J5/(B5-D5)

  10. 75,000 page-views would mean the loss of about 10 drive-by contributors.

  11. We don’t really have this conversation about code. Perhaps we should. I know that I have avoided contribution to a project in the past because it used a language I didn’t feel like fighting with.

RPM packages from syslog-ng Git HEAD

Posted by Peter Czanik on December 19, 2016 12:01 PM

Last week, I described why and how to install the latest stable syslog-ng RPM packages. There are some situations when even the latest stable release is not good enough. The last stable release has been available since August, and quite a few bugs have been fixed since that date. If you have any issues with 3.8.1, there is a good chance that it is already fixed.

Development of syslog-ng happens on GitHub: https://github.com/balabit/syslog-ng/. The latest commit in Git is called the HEAD. From now on, I will try to create packages regularly from the latest Git sources.

Warning: while there are many precautions taken to ensure that there are no defects introduced to syslog-ng during development (including automatic testing and code reviews), you can use these packages only at your own risk. Do not be surprised, if it eats your machine for breakfast!

These packages are not intended for production use, rather to verify that the issues that you may have encountered in 3.8.1 or earlier releases have been fixed. They can change frequently and sometimes even include experimental, not yet merged patches (by the time of writing: OpenSSL 1.1 support).

Installing syslog-ng on RHEL 7 / CentOS 7 and Fedora

The “githead” repository is using the regular 3.8 repository as a dependency. So first, add all the repositories as described in my previous post. Next, download the .repo file belonging to your distribution version from https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-githead/.

To install syslog-ng on RHEL 7, enter the following commands:

cd /etc/yum.repos.d/
wget https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng-githead/repo/epel-7/czanik-syslog-ng-githead-epel-7.repo

From now on, you can install syslog-ng and its sub-packages from the “githead” repository with yum or dnf.

Installing syslog-ng on SLES and openSUSE

Installation steps are almost the same as described in my previous blog post. This repository contains all the necessary dependencies; you only have to change the URL compared to the previous post: https://build.opensuse.org/project/show/home:czanik:syslog-ng-githead.

For example, if you want to install syslog-ng on openSUSE Leap 42.2, enter the following command:

zypper ar http://download.opensuse.org/repositories/home:/czanik:/syslog-ng-githead/openSUSE_Leap_42.2/ syslog-ng-githead

From now on, you can install syslog-ng and its sub-packages from the “githead” repository with zypper.

Are you stuck?

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I am available as @PCzanik.

Another look at GNOME recipes

Posted by Matthias Clasen on December 19, 2016 11:53 AM

It has been a few weeks since I’ve first talked about this new app that I’ve started to work on, GNOME recipes.

Since then, a few things have changed. We have a new details page, which makes better use of the available space with a 2 column layout.

Among the improved details here is a more elaborate ingredients list. Also new is the image viewer, which lets you cycle through the available photos for the recipe without getting in the way too much.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1703-1" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2016/12/image-switcher.webm?_=1" type="video/webm">https://blogs.gnome.org/mclasen/files/2016/12/image-switcher.webm</video>

We also use a 2 column layout when editing a recipe now.

Most importantly, as you can see in these screenshots, we have received some contributed recipes. Thanks to everybody who has sent us one! If you haven’t yet, please do. You may win a prize, if we can work out the logistics :-)

If you want to give recipes a try, the sources are here: https://git.gnome.org/browse/recipes/ and here is a recent Flatpak.

Update: With the just-released flatpak 0.8.0, installing the Flatpak from the .flatpakref file I linked above is as simple as this:

$ flatpak install --from https://raw.githubusercontent.com/alexlarsson/test-releases/master/gnome-recipes.flatpakref
read flatpak info from GTK_USE_PORTAL: network: 1 portal: 0
This application depends on runtimes from:
 http://sdk.gnome.org/repo/
Configure this as new remote 'gnome-1' [y/n]: y
Installing: org.gnome.Recipes/x86_64/master
Updating: org.gnome.Platform/x86_64/3.22 from gnome
No updates.
Updating: org.gnome.Platform.Locale/x86_64/3.22 from gnome
No updates.
Installing: org.gnome.Recipes/x86_64/master from org.gnome.Recipes-origin

1 delta parts, 5 loose fetched; 20053 KiB transferred in 8 seconds 
Installing: org.gnome.Recipes.Locale/x86_64/master from org.gnome.Recipes-origin

5 metadata, 1 content objects fetched; 1 KiB transferred in 0 seconds

Email clients in Fedora

Posted by Fedora Magazine on December 19, 2016 10:44 AM

Email is used by the vast majority of Internet users. Although users increasingly access their mailboxes through web browsers, desktop client applications are still popular. Their biggest advantage is desktop integration. They can send notifications about incoming messages, work offline, call other helper apps, and more. What are the most popular desktop clients you can find in Fedora?

Thunderbird

One doesn’t even have to introduce Thunderbird. It’s the most famous open source email client. It was created by the split of the Mozilla suite into a browser (Firefox) and email client (Thunderbird). In 2012, Mozilla handed over the development to the community. At the end of 2014, Mozilla announced they would decouple development of Thunderbird from Firefox to focus more on browser development. A lot of users understood this announcement as abandoning Thunderbird completely, but that’s not the case.

thunderbird

Pros:

  • Multi-platform
  • A broad selection of extensions
  • Instant messaging integration
  • Very good IMAP support

Cons:

  • Other groupware tasks only covered by extensions
  • Doesn’t support Microsoft Exchange very well
  • Suboptimal integration into desktop environment

Who is it for? A general purpose app that works well in all desktop environments. You’ll like it especially if you need to work on several OSes.

Thorsten Leemhuis, Fedora packager and former FESCo chair, is using Thunderbird and here is why:

thorsten-leemhuis

Thunderbird does the job for me. It could be easier and more beautiful. It also has quite a few rough edges I needed to get used to; some I was able to work around by adjusting the configuration or by using extensions. But in the end Thunderbird suits my needs better than all the other clients I looked at in the recent past. Thunderbird for example handles multiple identities properly (with the help from the extension “Folder Account”), which is quite important to me. By using the extension “QuickFolders” I can quickly navigate between all my important IMAP folders. Enigmail, just like Thunderbird, could need some polish, but it handles PGP fine. In the end I’m not completely satisfied with Thunderbird, but it most of the time just works; that’s important to me, as I handle hundreds of mails every day.

Evolution

Evolution also doesn’t have to be introduced to Linux users. It’s less well known by users of other OSes because, unlike Thunderbird, it’s not multi-platform. It’s a groupware client, meaning besides email it can also handle contacts, notes, tasks, and calendars. Evolution has been developed for more than 15 years in the GNOME project. It’s the default email client in Fedora.

evolution-mail

Pros:

  • GNOME integration
  • Probably the best MS Exchange support on Linux
  • Covers other groupware tasks

Cons:

  • IMAP support not as good as Thunderbird’s

Who is it for? Evolution is an ideal solution for those who also want calendaring, task management, and other functions besides email. If you need to connect to an Exchange server, it’s probably the only reasonable option on Linux.

Peter Robinson, Fedora release engineer, is using Evolution and here is why:

peter-robinson

I use Evolution because it’s generally stable, integrates well with the GNOME Shell for notifications, has integrated calendar and contacts and integrates well with a number of service providers for mail, calendar and contacts such as Microsoft Exchange (old job), Google contacts/calendar, and corporate standards such as iCal. It’s not perfect but the maintainers are responsive when I report bugs

Geary

Geary is the youngest client in this overview. The Yorba Foundation started developing it in 2012, and it’s now maintained by the GNOME community. Geary has a modern interface focused on, and inspired by, popular email services, mainly Gmail. For instance, it adopted the conversation view of Gmail.

geary

Pros:

  • Simple interface and configuration
  • The most similar client to web services like Gmail
  • Good GNOME integration

Cons:

  • Missing advanced features (e.g. filters)

Who is it for? Do you use Gmail and would you like to try a desktop client? Geary is the closest desktop client to it.

Jakub Steiner, member of Red Hat’s desktop team and GNOME designer, is using Geary and here is why:

jakub-steiner

Geary does a good job focusing on the essential workflow, providing a mean to quickly sort through the inbox, and keep conversations grouped. While not perfect it does a reasonable job reusing the same patterns established in GNOME3. Difficult choices have been made. Somebody relying on POP/heavy client side filtering will be disappointed, but to me it’s the closest thing to calling a free software mail client elegant.

Kmail

Kmail is well-known mostly among KDE users, with a history almost as long as Evolution’s. It has lost some of its popularity recently, as seen in poll results. Users complain about lagginess and the high system requirements of the Akonadi backend. But Kmail also has advantages: it has a lot of advanced features, and it is easily extendable into a full-fledged groupware solution, Kontact.

kmail

Pros:

  • A lot of features and configuration options
  • Extendable to a full-fledged groupware solution, Kontact
  • KDE Plasma integration

Cons:

  • System resource requirements of the Akonadi backend
  • Installation of Kontact floods the list of apps with 13 launchers
  • More difficult account setup compared to Thunderbird and Evolution

Who is it for? Do you use KDE Plasma and want complete control over email and setup for everything? Kmail is the best option for you.

Sylvia Sanchez, a member of Fedora Marketing and Design teams, is using Kmail and here is why:

Sylvia Sanchez, a Fedora user and contributor

Well, first because it isn’t mandatory to setup Kmail in order to use Kontact. It is in Evolution. Second, because Kmail has a wizard that configures everything automatically fetching info from Mozilla. While it is a bit tricky on certain things it’s still handy. Third, Kmail integrates very well with other desktops; it’s not KDE exclusive. Fourth, because what I use more is the calendar/to-dos part. On that side I prefer by far Kontact because its Summary view. I can see everything at a glance and if there’s any new email I’ll see it there. Fifth, is less Outlook compatible oriented. Sixth, is more flexible and less intrusive.

Mutt

Mutt has been the most popular command-line email client among Linux users. But it’s not very friendly to novice users. The user base consists mostly of power users who spend a lot of time in the terminal. You can navigate through Mutt using only the keyboard. You compose messages in an external editor, which is a big plus for users accustomed to effective command-line text editors such as Vim and Emacs.

Pros:

  • Fast with low system resource requirements
  • Doesn’t require a graphics stack
  • Completely navigable by the keyboard
  • Message composing is left to your favorite editor

Cons:

  • Not as user friendly as graphical clients
  • Unintuitive settings through commands and configuration files

Who is it for? If you spend most of your computer time in the terminal, why use a graphical client? Mutt can do the same job with fewer resources, and its configuration is virtually unlimited.
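To give a flavor of that configuration style, here is a minimal, hypothetical ~/.muttrc for a single IMAP account (the server, address, and editor are placeholders, not recommendations):

```
# ~/.muttrc -- minimal illustrative IMAP setup; substitute your own details.
set imap_user = "user@example.com"
set folder    = "imaps://mail.example.com/"   # remote IMAP root
set spoolfile = "+INBOX"                      # read new mail from here
set record    = "+Sent"                       # save outgoing mail here
set editor    = "vim"                         # compose in an external editor
set sort      = threads                       # group messages by conversation
```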

Matthew Miller, the Fedora Project Leader is using Mutt and here’s why:

Matthew Miller http://mattdm.org

1) I come from a sysadmin background, where I lived in terminal windows. My FPL job doesn’t need that, but I kind of like to retain that connection. 2) I have extensive customizations, filters, scripts, and everything, which I’ve been using for… twenty years, since I stopped using elm. It’d be a pain to migrate all of those! 3) I actually use it directly on the server where I get my mail, and it can work with local mail folders directly, so no IMAP or anything like that to manage or worry about – and no synchronization problems. It’s the “cloud” advantage of access-from-anywhere, just like webmail – except a little more “texty”. 4) Since it’s a console tool, it integrates seamlessly with my preferred editor, joe.

Claws Mail

Claws Mail is an email client written in GTK+ that’s been a bit hidden in the shadow of Evolution. But it has a loyal community and user base it’s been serving for almost 15 years. It started as a fork of Sylpheed, which is also still alive, but Claws Mail has more active development and seemingly more users, too. Both are conservative desktop clients with lower system resource requirements, so they’re often used in Linux distributions for older computers.

claws-mail

Pros:

  • Low system resource requirements
  • Fairly good selection of extensions

Cons:

  • Too conservative user interface
  • Cannot view HTML messages without an extension

Who is it for? Do you still use email the same way as 15 years ago? You don’t understand how an email client can consume several hundred MB and still be slow? You will like Claws Mail.

Andrew Clayton, a Fedora user and kernel contributor, is using Claws Mail and here is why:

andrew-clayton

I like to think of it as a graphical Mutt. It’s nice and configurable, and has good IMAP support. It just looks like a good traditional mail client (it doesn’t try to simplify things and it doesn’t try to be flashy). I like the MH format (each mail message stored in its own file) it uses.

Alpine

Like Mutt, Alpine is a command-line client. It was created in 2007 as a replacement for Pine, whose development was stopped and whose license was changed to freeware.

Pros:

  • Fast with very low system resource requirements
  • Doesn’t require a graphics stack
  • Completely navigable by the keyboard
  • Easier interface and configuration than Mutt’s

Cons:

  • Although simpler than Mutt, still not as user friendly as graphical clients
  • Fewer features and configuration options compared to Mutt
alpine

©Office of UW Technology, University of Washington (Licensed under the Apache License, Version 2.0)

Who is it for? Do you also spend most of your computer time in the terminal, but Mutt is too complex for you? Try Alpine!

Others

This has been an overview of the most popular email clients among Fedora users. But it’s definitely not a complete list. Fedora offers other interesting alternatives. For instance, Trojitá, written in Qt, has very good and fast IMAP support, but limited features. You can also use the email client in SeaMonkey, which is a fork of the Mozilla Suite. You can also try Sylpheed. Emacs fans should try Mu4e, an email client based on Emacs that uses mu as a backend. N1 by Nylas brings an interesting approach: it moves most of the client logic to the server, and only runs a thin client locally. You won’t find N1 in Fedora repositories yet, but you can install it on Fedora.

What email client do you use and why?


This post was originally published in January 2016

Rawhide notes from the trail, the 2016-12-18 edition

Posted by Kevin Fenzi on December 18, 2016 08:05 PM

Hello from the Rawhide trail.

With the recent Flag day (on Dec 12th), we switched all rawhide builds to allow us to sign (and hopefully eventually test) all packages. Here’s how it works:

  • Your rawhide build used to just be tagged into the f26 (currently rawhide) tag. Now, it tags into the f26-pending tag instead.
  • The autosigner sees the build, signs it and moves it to the f26 tag for the next rawhide compose.
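For the curious, you can watch this flow yourself, assuming you have the koji client configured for Fedora. The tag names below match the ones above; the package NVR in the last command is just a placeholder:

```shell
# Builds sitting in the pending tag (tagged, not yet signed and moved)
koji list-tagged f26-pending --latest

# Builds the autosigner has already moved into the main rawhide tag
koji list-tagged f26 --latest

# See which tags a specific build currently carries (placeholder NVR)
koji list-tags --build=somepackage-1.0-1.fc26
```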

Unfortunately, there was a backlog of packages we needed to sign, many from the ppc{le} bringup, so that’s why there haven’t been any rawhide composes the last few days. The one today should be out later today however, and we should be back on track from there.

So currently we are just signing things at the $release-pending tag, but we would like to try to start doing some automated QA there at some point. Nothing that will hold up builds for too long, but something that will catch obviously broken builds before they land. Now that we have everything otherwise in place, we can start figuring out what we want to run there.

Also, coming soon to rawhide will be the first rebuild of Python packages for the upcoming Python 3.6. Hopefully that will all be smooth sailing, but a number of package updates will land for that soon.

Fedora 25 Release Party Beijing Report

Posted by Zamir SUN on December 18, 2016 02:39 AM
Last week we held the Fedora 25 Release Party Beijing. As I was a little busy, Tonghui volunteered to be the event owner, while I co-organised as coordinator and handled logistics. Since winter vacation for schools was approaching, we decided to make this happen in early December, as soon as Fedora 25 was released. Otherwise there would hardly be any [...]

Thunar: enabling the path display in the title bar

Posted by Fedora-Blog.de on December 17, 2016 10:14 PM
Please also note the remarks about the HowTos!

By default, the Xfce file manager Thunar only shows the name of the directory you are currently in. If you would rather see the full path, you can enable this hidden setting with

xfconf-query --channel thunar --property /misc-full-path-in-title --create --type bool --set true

If you want to disable this feature again later, you can do so with the following command:

xfconf-query --channel thunar --property /misc-full-path-in-title --set false
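If you are unsure whether the setting took effect, xfconf-query can also read a property back; with Thunar installed, this prints the current value (true or false):

```shell
# Read back the current value of the hidden Thunar setting
xfconf-query --channel thunar --property /misc-full-path-in-title
```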

All systems go

Posted by Fedora Infrastructure Status on December 17, 2016 08:15 PM
Service 'FedoraHosted.org Services' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on December 17, 2016 05:34 PM
Service 'FedoraHosted.org Services' now has status: major: osuosl network outage

Fedora@LISA2016: Event Report

Posted by Corey 'Linuxmodder' Sheldon on December 16, 2016 10:49 PM

LISA 2016 (the Large Installation System Administration, or “Sysadmin”, Conference) was held Dec 4-9, 2016, with the Expo on Dec 7-8, at the Sheraton Downtown Boston.

Attending Ambassadors and Fedora contributors included: Corey Sheldon (linuxmodder), Nick Bebout (nb), Mike DePaulo (mikedep333), Beth Lynn (bethlynn), Matthew Miller (mattdm), and Stephen Gallagher (sgallagh). Having a rather nice spread of the Fedora community among us made for a very productive display and for sidebar chats, both amongst ourselves and with the Red Hat / CentOS table folks we shared space with. Among us were several conference talk attendees, and we even held a GPG key signing party (as a BoF).

Day 1 — Wednesday — (Expo):

Things started off a bit sluggish until just after lunch, when there was the first break from talks on Wednesday. We had folks from all sectors of the industry coming to the booth, and most had already upgraded to 25. A few common questions revolved around what was in the pipeline for modularity, and around issues/gripes with systemd. Being a ‘Large Install’-centric conference, we saw plenty of folks also asking about using Dockerfiles and Cockpit, which mattdm happily had displayed on one of the two monitors we had been provided at the booth. Thanks to some pesky hardware or a bad burn, we even had the pleasure of helping one of our own (bethlynn) clean-install F25 at the booth. Among the talks that some of us attended were: Beginner Wireshark; SRE at a Startup: Lessons from LinkedIn; SRE: It’s People All the Way Down; and The Road to Mordor: Information Security Issues and Your Open Source Project. Also of interest to both booth staff and many attendees was LISA Build: think of it as a Cisco NET+ hands-on event, where all skill levels learned and taught things about building networks, configuring routers/load balancers, and setting up native IPv6. Day one ended with a small number of DVDs handed out (F24, as F25 media was not yet available), about 75% of our unixstickers supply gone, and about 50% of the combined USBs from the Red Hat / CentOS booths.

Day 2 — Thursday — (Expo):

Day 2 started at 10 a.m. as far as the expo was concerned, but several of the team took advantage of the late start to visit local restaurants for breakfast, braving the wind and cold all the while. Day 2 saw a lot of the same questions, plus some more complex ones about larger deployments, including ones with advanced SELinux and Docker images, which was quite understandable given the selection of talks that day. There were several BoF (Birds of a Feather) sessions on day 2, as is customary at LISA conferences; the noteworthy one from the Red Hat / Fedora / CentOS team was the GPG key signing party, which saw lower-than-expected numbers with only 13 attendees, though several were new to key signing or to the practice itself. As an uncommon occurrence would have it, 3 of the attendees (including Nick Bebout, the organizer) were CAcert validators, which would have allowed any interested folks to get over the required 100 points to become a certifier in their own right; sadly, this is an aspect of the Web of Trust (WoT) that is far too little publicized.

Several of the booth staff stayed for the Thursday night Google Ice Cream Social, which is always a great networking event that is very low key and laid back.  Nick Bebout (nb) even won (via raffle) a signed copy of the SRE book on website optimization.

All in all, while we still had media on the table at the conclusion, we shared plenty of the other swag and had PLENTY of awesome user interactions with seasoned users and new users alike.  We also had a blast talking and working out ideas amongst ourselves at the booth.

 


Filed under: Community, Conventions / conferences, Current Events, Fedora, Redhat, Security, Volunteer Tagged: Fedora, LISA2016, Open Source, Redhat, Sysadmin