July 15, 2016

Creating a Web App with Vue.js #PreLinuxDay
On Saturday, July 9, Zahir Gudiño gave the third workshop, on creating a web app with Vue.js, at the #PreLinuxDay event at the Universidad Interamericana de Panamá. Attendees were introduced to Vue.js, a JavaScript library for building interactive web interfaces. Zahir explained the current advantages of Vue.js, among them how easy it is to learn compared with other frameworks … Continue reading "Creando un WebApp con Vue.js #PreLinuxDay"

July 14, 2016

bc: Command line calculator

If you run a graphical desktop environment, you probably point and click your way to a calculator when you need one. The Fedora Workstation, for example, includes the Calculator tool. It features several different operating modes that allow you to do, for example, complex math or financial calculations. But did you know the command line also offers a similar calculator called bc?

The bc utility gives you everything you expect from a scientific, financial, or even simple calculator. What’s more, it can be scripted from the command line if needed. This allows you to use it in shell scripts, in case you need to do more complex math.

Because bc is used by some other system software, like CUPS printing services, it’s probably installed on your Fedora system already. You can check with this command:

dnf list installed bc

If you don’t see it for some reason, you can install the package with this command:

sudo dnf install bc

Doing simple math with bc

One way to use bc is to enter the calculator’s own shell. There you can run many calculations in a row. When you enter, the first thing that appears is a notice about the program:

$ bc
bc 1.06.95
Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc.
This is free software with ABSOLUTELY NO WARRANTY.
For details type `warranty'.

Now you can type in calculations or commands, one per line:

1+1

The calculator helpfully answers:

2

You can perform other commands here. You can use addition (+), subtraction (-), multiplication (*), division (/), parentheses, exponents (^), and so forth. Note that the calculator respects all expected conventions such as order of operations. Try these examples:

(4+7)*2
4+7*2

To exit, send the “end of input” signal with the key combination Ctrl+D.

Another way is to use the echo command to send calculations or commands. Here’s the calculator equivalent of “Hello, world,” using the shell’s pipe function (|) to send output from echo into bc:

echo '1+1' | bc

You can send more than one calculation using the shell pipe, with a semicolon to separate entries. The results are returned on separate lines.

echo '1+1; 2+2' | bc

Scale

The bc calculator uses the concept of scale, or the number of digits after a decimal point, in some calculations. The default scale is 0. Division operations always use the scale setting. So if you don’t set scale, you may get unexpected answers:

echo '3/2' | bc
echo 'scale=3; 3/2' | bc

Multiplication uses a more complex decision for scale:

echo '3*2' | bc
echo '3*2.0' | bc

Meanwhile, addition and subtraction are more as expected:

echo '7-4.15' | bc

Other base number systems

Another useful function is the ability to use number systems other than base-10 (decimal). For instance, you can easily do hexadecimal or binary math. Use the ibase and obase commands to set input and output base systems between base-2 and base-16. Remember that once you use ibase, any number you enter is expected to be in the new declared base.

To do hexadecimal to decimal conversions or math, you can use a command like this. Note the hexadecimal digits above 9 must be in uppercase (A-F):

echo 'ibase=16; A42F' | bc
echo 'ibase=16; 5F72+C39B' | bc

To get results in hexadecimal, set the obase as well:

echo 'obase=16; ibase=16; 5F72+C39B' | bc

Here’s a trick, though. If you’re doing these calculations in the shell, how do you switch back to input in base-10? The answer is to use ibase, but you must set it to the equivalent of decimal number 10 in the current input base. For instance, if ibase was set to hexadecimal, enter:

ibase=A

Once you do this, all input numbers are now decimal again, so you can enter obase=10 to reset the output base system.

Conclusion

This is only the beginning of what bc can do. It also allows you to define functions, variables, and loops for complex calculations and programs. You can save these programs as text files on your system to run whenever you need. You can find numerous resources on the web that offer examples and additional function libraries. Happy calculating!

Updated RPM Fusion’s mirrorlist servers

RPM Fusion’s mirrorlist servers, which return a list of (probably, hopefully) up-to-date mirrors (e.g., http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-rawhide&arch=x86_64), were still running on CentOS 5 and the old MirrorManager code base. The setup consisted of two systems (DNS load balancing) and was not the most stable. Connecting from a country that had recently been added to the GeoIP database led to 100% CPU usage in the httpd process, which led to a denial of service after a few requests. I added a cron entry to restart the httpd server every hour, which seemed to help a bit, but it was a rather clumsy workaround.

It was clear that the two systems needed to be updated to something newer. As the new MirrorManager2 code base can luckily handle the data format of the old MirrorManager code base, it was possible to update the RPM Fusion mirrorlist servers without updating the MirrorManager back-end (yet).

From now on, four CentOS 7 systems answer the requests for mirrors.rpmfusion.org. As the new RPM Fusion infrastructure is also Ansible-based, I added the Ansible files from Fedora to the RPM Fusion infrastructure repository. I had to remove some parts, but most of the Ansible content could be reused.

When yum or dnf now connect to http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-rawhide&arch=x86_64, the answer is created by one of four CentOS 7 systems running the latest MirrorManager2 code.

RPM Fusion also has the same mirrorlist access statistics as Fedora: http://mirrors.rpmfusion.org/statistics/.

I still need to update the back-end, which is a single system instead of six different systems as in the Fedora infrastructure.

Wolnei speaking about Fedora at FISL 17.
Sandboxing Steam: running it under a different account

To improve my system’s security, I’ve configured Steam to run as a different Linux account. This guide is inspired by this thread.

First, we need a new user account to run Steam as. I’ve created the user sandbox with group sandbox.

# useradd sandbox
# passwd sandbox

Changing password for user sandbox.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

# usermod -a -G sandbox juan

Next, I give my user ‘juan‘ permission in sudo to run commands as sandbox without a password.

# vi /etc/sudoers.d/sandbox
juan ALL=(sandbox) NOPASSWD: ALL
# chmod 440 /etc/sudoers.d/sandbox

Then, we write a wrapper script to run Steam as our sandbox user.

$ mkdir ~/bin
$ vi ~/bin/steam
#!/bin/bash
xhost SI:localuser:sandbox
sudo -i -u sandbox /usr/bin/steam "$@"
$ chmod +x ~/bin/steam

If you don’t have ~/bin in your PATH, add this to ~/.bash_profile:

PATH="$HOME/bin:$PATH"
export PATH

With this in place, we can already run Steam as sandbox with our wrapper, but a few things are missing, most importantly audio. For this, we are going to tell PulseAudio to create a Unix socket, and the sandbox’s PulseAudio will run as a client through that socket.

First, I create a private folder in /run to host the socket.

# vi /etc/tmpfiles.d/pulse-sandbox.conf
d /run/pulse-sandbox 0750 juan sandbox
# systemd-tmpfiles --create

Then, I configure pulseaudio to create the socket at startup:

$ cp /etc/pulse/default.pa /home/juan/.config/pulse/default.pa
$ vi /home/juan/.config/pulse/default.pa

Add this line:

load-module module-native-protocol-unix auth-group=sandbox auth-group-enable=yes socket=/run/pulse-sandbox/pulse-sandbox.socket

For the sandbox user, we need this configuration:

$ vi /home/sandbox/.config/pulse/client.conf
default-server = unix:/run/pulse-sandbox/pulse-sandbox.socket

One more thing to configure is the desktop entry. We are going to override the global desktop file by copying it to our user:

$ cp /usr/share/applications/steam.desktop /home/juan/.local/share/applications/steam.desktop

Then we edit the file and change every line starting with Exec= to call our wrapper:

Exec=/home/juan/bin/steam %U

Exec=/home/juan/bin/steam steam://store

…and so on.

Lastly, close your session so PulseAudio is able to pick up the changes, and you should be able to run Steam as the user sandbox by launching the icon on your desktop.

Hope it helps. If someone has any advice to improve this setup, please, tell me.

Cheers!


Fedora-Latam meeting, live from a pizzeria in Brazil.
How to package Rust applications to RPM using vendoring
The Rust programming language was born in the Internet age. There are many useful Rust libraries (called crates) available on the Internet, collected at https://crates.io/. When you use them, the Cargo build tool can automatically download any of these dependencies as necessary. That is easy for developers, but has some drawbacks. First of all, it […]
FAD Kuala Lumpur

Every year again, one could say, if budget.next did not force the Ambassadors to meet in summer, instead of at the end of the calendar year, to come together and work on the budget plan for the next year. So after Singapore in December, the APAC ambassadors came together again, this time in Kuala Lumpur, Malaysia. For me the trip to KL is just one hour longer than to Singapore, but there is a one-hour time difference, so I again arrived very late.

The first day was mostly discussions about how to cut this year's budget so that it fits the huge budget cut. With a budget of 11.5k US$ for the whole year, and the regional FAD paid out of it, the APAC budget is essentially gone after the FAD and the media production. The next bigger agreement was how to continue with FUDCon in APAC, and whether it is a good idea to switch FUDCon APAC to a biennial cycle.

It looks like we will switch to one, as we have to save some money while a lot of Asians still want to travel to Flock. But FUDCon is still needed in the region to grow our number of contributors there and to connect the region better. There are no bids for 2017 so far; maybe some will come up. But starting with 2018 we might have FUDCon only every second year.

After reviewing the current budget and some discussions about the problems of the region, day 1 of the FAD ended, and a train journey to Izhar's house followed, where his mother had prepared a dinner for us because it was Hari Raya Aidilfitri, so we could enjoy some local specialties. That was good for me: I love beef rendang anyway, and the Malaysian version is also good.

The second day was filled with discussions about the budget for next year. There is not much room after the cut, so most events don't get a budget anymore; just two main events get a larger part, and the rest goes into release parties, a few smaller events, and swag production.

Unfortunately, on the second day we only had until the late afternoon, as some attendees were already flying home then. I flew back the next morning to arrive right on time at the office, except that the flight was delayed and I came a little later than planned.

As a conclusion: we still have extreme problems in this region. Some Ambassadors think of their own needs first rather than as Fedorans of the region. There is still a huge gap between what is done and what is reported, not to speak of missing invoices and carelessness with reimbursements.

Fedora 24 Release Party: Bangalore, India
Fedora 24 Release Party, Bangalore, India: Contributors earning badges

Attendees earning badges as contributors!

Over the past few months, many of us in the Bangalore open source community have focused our efforts on writing test cases for Fedora, organizing a few sessions where one can learn about testing and how we can do things together. All this while, it has been fun: I’ve met new people, learned things, and realized that sharing even small pieces of knowledge and experience makes it easier for newcomers to feel welcome.

Organizing a release party

When Fedora 24 was released it was exciting, as we had been closely involved in release validation testing, so we wanted to put together a release party. All open source projects encourage their communities to celebrate software releases and similar milestones. Ours was a simple plan! We were having a really good time learning together, and we wanted more people to know that there is a better way to gain knowledge: by sharing and working together.

We put together a page on the Fedora wiki and asked locals (in Bangalore) to join in. We wanted to show how versatile Fedora is. This included talking a bit about the new features in Cloud, Workstation, Server, and the Spins. Also, we had a small session put together to help those interested take the first couple of steps in becoming a contributor.

Fedora 24 Release Party, Bangalore, India: Women in FOSS

There was a strong representation of women at the event.

Kicking off the Bangalore release party

Fedora 24 Release Party, Bangalore, India: Cutting the cake and ice breakers!

Cutting the cake and ice breakers!

Fedora 24 Release Party, Bangalore, India: Having lunch and discussions.

Having lunch and discussions.

On Sunday, we were happy to see that around 50 people wanted to spend a bit of their weekend sharing our joys and efforts. Many wanted to know how to become part of the Fedora Project. We had help from some Red Hat employees, and together we worked through the logistics of organizing a party.

We started off at 11:30am with celebrations and ice breakers. A release party needs a cake, so we started with cake. After that, we introduced the Fedora Project, talked about its values, and showed how various projects within Fedora offer opportunities to tinker with creative ideas.

The audience was a mix of those who were learning about Fedora and others who have participated in the project for some time. Along with the small demos for robotics and such, it was good to see that the traditional perception around “Fedora is a Linux Operating System” was addressed.

Afternoon discussions and lunch

Fedora 24 Release Party, Bangalore, India: Visiting the booths and discussing Fedora

Visiting the booths and discussing Fedora

The post-lunch (“pizza” – yay!) sessions included a bit of Linux history by Sachidananda. Others came over to share how working and collaborating helped them gain confidence and enabled them to take on new and often complex challenges. Suraj (Deshmukh) talked about his participation in upstream projects via the Durgapur LUG (DGPLUG). He emphasized using IRC communication to seek solutions to vexing problems. Richa shared her journey with the Wikimedia Foundation and how the Outreachy program helped improve her programming skills.

Fedora 24 Release Party, Bangalore, India: Arvind, Richa and Sachidananda

Arvind, Richa and Sachidananda speaking

We had planned some lab sessions: a hands-on for those (around twenty-five of them) interested in taking their first steps as Fedora contributors. Machines were set up with Fedora 24, and we tested updates as well as some bits of Fedora 25 (Rawhide). Explaining the nature of Rawhide, a continuously evolving release, was fun too. At the end of the day, we had around 70 updates tested and recorded. Some of the participants shared feedback, and I’ll link to it in a later post.

Looking at the enthusiastic response to testing (and breaking things), we will be putting together some follow-up meetings: mostly just getting together, having fun, and learning by doing things in testing. That would be a good way to understand the workings of various projects and how they interact with Fedora. Hopefully, a few contributors will also find it interesting to contribute to documentation. A lot of projects can on-board new contributors more easily if structured documentation and a clear sequence of steps are available.

Fedora 24 Release Party, Bangalore, India: Hands-on lab with Fedora

Hands-on lab with Fedora

Closing out in Bangalore

I had fun. Arvind tells me that it was a good thing for him too and others present at the party had similar things to say. So, job not done yet! The next step is to continue to have a calendar of small events where we do things and help make Fedora better.

Fedora 24 Release Party, Bangalore, India: Having fun with the community

Yeee! It was fun!



Image courtesy of Alfredo Hernandez – originally posted to The Noun Project as Analytics Report.

The post Fedora 24 Release Party: Bangalore, India appeared first on Fedora Community Blog.

xinput resolves device names and property names

xinput is a command-line tool to change X device properties. Specifically, it's a generic interface to change X input driver configuration at run-time, used primarily in the absence of a desktop environment or just for testing things. But there's a feature of xinput that many don't appear to know: it resolves device and property names correctly. So plenty of times you see advice to run a command like this:


xinput set-prop 15 281 1

This is bad advice: it's almost impossible to figure out what this is supposed to do, it depends on the device ID never changing (spoiler: it will), and on the property number never changing (spoiler: it will). Worst case, you may suddenly end up setting a different property on a different device and you won't even notice. Instead, just use the built-in name resolution features of xinput:

xinput set-prop "SynPS/2 Synaptics TouchPad" "libinput Tapping Enabled" 1

This command will work regardless of the device ID for the touchpad and regardless of the property number. Plus, it's self-documenting. This has been possible for many, many years, so please stop using the number-only approach.

Fedora mirror at home with improved hardware

It was always a dream to have a fully functional Fedora mirror on the local network that I could use. I tried many times before, mostly by copying RPMs from the office, carrying them around on a hard drive, etc., but I never managed to set up a working mirror that would just work (even though setting one up is not that difficult). My house currently has 3 different networks (from 3 different providers), and at any point in time one of them is down 😔

Hardware

If you remember my post on home storage, I was using Banana Pi(s). They are still very nice, and Fedora runs on them properly, but they are not very powerful; things like rsync were crawling on them. At this PyCon, I received a Minnowboard Turbot from John Hawley (thanks a lot once again). It took time to get it set up (as I don’t have a monitor with HDMI, I had to steal the TV from the front room), but they are finally up in my own production environment. Installation of Fedora was super easy: I just used the latest Fedora 24 from a live USB stick, and I was ready to go.

In the picture above you can see two of those running, you can also see a Banana Pi in the back.

Syncing up the mirror

Now, for my work I mostly need x86_64 and nothing else (I update my ARM boards, but not regularly). So, following the tips from smooge and puiterwijk in the #fedora-noc channel, and some tips from this wiki page, I started rsyncing the Fedora 24 GA release. This was around 55GB and took me some days to pull in. Meanwhile, Chandan helped me by syncing the updates repo. Right now I have a cron job which syncs the updates repo every night.
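
As a sketch, the nightly cron job can be a single crontab entry like the one below. The mirror host and local path are placeholders, not the ones I actually use; substitute an rsync mirror near you:

```shell
# Example crontab entry (edit with `crontab -e`): sync the F24 updates
# repo every night at 02:00. Hostname and paths are placeholders.
0 2 * * * rsync -avSH --delete rsync://mirror.example.com/fedora/linux/updates/24/x86_64/ /srv/mirror/fedora/updates/24/x86_64/
```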

Remember to add the following to your Apache virtual host configuration:

  AddType application/octet-stream .iso
  AddType application/octet-stream .rpm
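
Putting it together, a minimal virtual host for serving the mirror tree could look like this sketch; the ServerName, DocumentRoot, and Directory paths are assumptions, and only the two AddType lines come from the setup above:

```apache
<VirtualHost *:80>
    ServerName mirror.example.lan
    DocumentRoot /srv/mirror/fedora

    # Serve ISOs and RPMs as binary downloads
    AddType application/octet-stream .iso
    AddType application/octet-stream .rpm

    <Directory /srv/mirror/fedora>
        # Directory listings are handy for browsing the repo tree
        Options +Indexes +FollowSymLinks
        Require all granted
    </Directory>
</VirtualHost>
```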
Today, July 12, was the first day of the seventeenth edition of FISL, the International Free Software Forum. This event was my entrance door to Fedora in 2008, and in 2016 this is my seventh participation here in Porto Alegre, the capital of the state of Rio Grande do Sul.

Once more I encountered the familiar faces of friends and, gladly, this year, the faces of new local contributors. What's more, we now have women ambassadors here in Brazil.

We had a table with four chairs in the communities area, where we spent the day talking about Fedora, helping people install it, and handing out stickers.

At the end of the afternoon, at 17:40, I gave a quick talk about the Fedora 24 release change set, distributed across our three editions: Cloud, Server, and Workstation.  #fedora #FISL17 #linux #f24 #event

FISL 2016


July 13, 2016

Saying Goodbye to F23 updated Respins

I am officially announcing that the current F23-20160704 will be the last set for Fedora 23.

As of this week I have updated the builder to Fedora 24, so with the release of the next kernel we will start with the new F24 respins.


FESCo Elections: Interview with Stephen Gallagher (sgallagh)
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

Fedora Engineering Steering Council badge

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Tuesday, July 19 and closes promptly at 23:59:59 UTC on Monday, July 25th. Please read the responses from candidates and make your choices carefully. Feel free to ask questions to the candidates here (preferred) or elsewhere!

Interview with Stephen Gallagher (sgallagh)

  • Fedora Account: sgallagh
  • IRC: sgallagh (usually in #fedora-devel, #fedora-server, #sssd and #openshift-dev)
  • Fedora User Wiki Page

What is your background in engineering?

I’ve been a software developer working on applications and services for Linux-based systems since around the turn of the millennium. For the last eight years, I’ve been working for Red Hat in various software development roles. During that time, I’ve contributed to a number of open source projects; in particular: Fedora Server, the System Security Services Daemon, and OpenShift Origin.

Prior to working for Red Hat, I developed control software for Linux-powered enterprise WiFi setups, worked on the Apache web-agent for Netegrity/CA SiteMinder and wrote the Linux port of a static code analysis tool.

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

We are entering a brave new world of containerization technology that unsurprisingly looks an awful lot like the Wild West of the late-90s in terms of how software is developed and deployed. Having lived through the last major deployment shift (virtualization), I think I have a reasonably good handle on the issues that this new mechanism is facing and how to try to avoid some of the classic pitfalls inherent in new approaches (such as bundle-vs-system-libraries questions). In addition, my day-job at Red Hat is working on OpenShift, which is Red Hat’s container-management PaaS solution, so I’m in a good position to help direct any such issues that come up.

I think another major technical issue that we face today isn’t really new, but we’re starting to explore new potential solutions: the “Too Fast/Too Slow Problem”. Fedora’s traditional policy of attempting to ship the latest stable version of software available at the time of release tends to put it at odds with two different mindsets of people; there are people who want guarantees about API and ABI stability in the OS that the rate of change often makes impossible, while on the other side of things the six-month release cycle means that people who are trying to co-develop new projects atop very recent (sometimes bleeding-edge) technologies feels that waiting for the next release is too long. Fedora is now trying to approach things by breaking the distribution up into “modules” that are larger than individual packages, separately-updateable and tested as a coherent whole. The hope is that by being able to deliver individual pieces of Fedora at different rates from the distribution as a whole, we will be able to address both of these use-cases. I think there is a lot of work still to be done here, but I’ve been watching the progress and intend to keep myself involved in those efforts.

What are three personal qualities that you feel would benefit FESCo if you are elected?

  • Dedication: Fedora is in my blood. I spend much of my spare time evangelizing Fedora specifically, and the open source philosophy in general, wherever I go. I will always look after Fedora’s interests first.
  • Experience: I have been serving as a member of FESCo for years now; I know how it works (and when it doesn’t) and I have contacts with most of the major constituencies in the Fedora Project.
  • Mediation: I have been serving as a mediator and coordinator between projects for many years now and I am skilled at bringing people to workable compromises.

Why do you want to be a member of FESCo?

I’ve been serving on FESCo for years now and I feel that I’ve made a positive contribution to the Fedora Project in that time. Beyond that, I enjoy the opportunity it provides me to interact with a wider set of people in the Fedora Community and learn about projects I might otherwise be unaware of.

Currently, how do you contribute to Fedora? How does that contribution benefit the community?

As noted previously, I currently serve on FESCo. In a direct technical role, I am a developer on the OpenShift Origin project and still contribute to the System Security Services Daemon from time to time. I evangelize the Fedora Project wherever I go, hopefully helping to bring new people into the community. Also this past year I spent part of my time acting as a mentor to the Rensselaer Center for Open Source, which (through a partnership with Red Hat) was developing tools for universities atop Fedora.

The post FESCo Elections: Interview with Stephen Gallagher (sgallagh) appeared first on Fedora Community Blog.

You’re invited: FOSCo Brainstorm Meeting, 2016-07-18, 13:00 UTC

For some time now, Fedora has discussed the idea of the Fedora Outreach Steering Committee (FOSCo), a body to coordinate all our outreach efforts. Now it’s time to make it happen!

FOSCo brainstorming: you’re invited!

On behalf of FAmSCo and the Fedora Council, we would like to invite the Fedora community to an all-hands.

Roll call

So far, the following participants have confirmed attendance.

The fact that we already have a good team of volunteers should not stop you from attending. In fact, we would like to hear more voices from all stakeholders. The more, the better! To get an idea what FAmSCo has been working on so far, please have a look at the wiki page and current status.

None of this is set in stone yet, and we feel we need your input before we go any further. We are looking forward to your comments and to meeting you next Monday!


Invitation by Claire Jones from the Noun Project.

The post You’re invited: FOSCo Brainstorm Meeting, 2016-07-18, 13:00 UTC appeared first on Fedora Community Blog.

Using the Java Security Manager in Enterprise Application Platform 7

JBoss Enterprise Application Platform 7 allows the definition of Java security policies per application. The way it's implemented means that we are also able to define security policies per module, in addition to defining one per application. The ability to apply the Java Security Manager per application, or per module, makes it a versatile tool for mitigating serious security issues, and useful for applications with strict security requirements.

The main difference between EAP 6 and 7 is that EAP 7 implements the Java Enterprise Edition 7 specification. Part of that specification is the ability to add Java Security Manager permissions per application. How that works in practice is that the application server defines a minimum set of policies that must be enforced, as well as a maximum set of policies that an application is allowed to grant to itself.

Let’s say we have a web application which wants to read Java System Properties. For example:

System.getProperty("java.home");

If you run with the Security Manager enabled, this call throws an AccessControlException. To enable the Security Manager, start JBoss EAP 7 with the option -secmgr, or set SECMGR to true in the standalone or domain configuration files.

Now if you added the following permissions.xml file to the META-INF folder in the application archive, you could grant permissions for the Java System Property call:

Add to META-INF/permissions.xml of application:

<permissions ..>
        <permission>
                <class-name>java.util.PropertyPermission</class-name>
                <name>*</name>
                <actions>read,write</actions>
        </permission>
</permissions>

The WildFly Security Manager in EAP 7 also provides some extra methods for performing privileged actions, i.e. actions that won't trigger a security check. To use these methods, the application needs to declare a dependency on the WildFly Security Manager. Developers can use them instead of the built-in PrivilegedActions to improve the performance of security checks. There are a few of these optimized methods:

  • getPropertyPrivileged
  • getClassLoaderPrivileged
  • getCurrentContextClassLoaderPrivileged
  • getSystemEnvironmentPrivileged

For more information about custom features built into the WildFly Security Manager, see this presentation slide deck by David Lloyd.

Out of the box, EAP 7 ships with a minimum and maximum policy like so:

$EAP_HOME/standalone/configuration/standalone.xml:

<subsystem xmlns="urn:jboss:domain:security-manager:1.0">
    <deployment-permissions>
        <maximum-set>
            <permission class="java.security.AllPermission"/>
        </maximum-set>
    </deployment-permissions>
</subsystem>

That doesn't enforce any particular permissions on applications, and grants them AllPermission if they don’t define their own. If an administrator wanted to grant at least the permission to read system properties to all applications, they could add this policy:

$EAP_HOME/standalone/configuration/standalone.xml:

<subsystem xmlns="urn:jboss:domain:security-manager:1.0">
    <deployment-permissions>
        <minimum-set>
            <permission class="java.util.PropertyPermission" name="*" actions="read,write"/>
        </minimum-set>
        <maximum-set>
            <permission class="java.security.AllPermission"/>
        </maximum-set>
    </deployment-permissions>
</subsystem>

Alternatively, if they wanted to restrict all permissions for all applications except a FilePermission, they could use a maximum policy like so:

<subsystem xmlns="urn:jboss:domain:security-manager:1.0">
    <deployment-permissions>
        <maximum-set>
            <permission class="java.io.FilePermission" name="/tmp/abc" actions="read,write"/>
        </maximum-set>
    </deployment-permissions>
</subsystem>

Doing so would mean that the previously described web application, which required PropertyPermission, would fail to deploy, because it tries to grant itself a permission on properties that the administrator has not allowed. There is a chapter on using the Security Manager in the official documentation for EAP 7.

Enabling the security manager after an application has been developed can be troublesome, because a developer would then need to add the correct policies one at a time as the AccessControlExceptions were hit. However, the Wildfly Security Manager in EAP 7 will have a debug mode which, if enabled, doesn't enforce permission checks but logs violations of the policy. In this way, a developer can see all the permissions that need to be added after a single test run of the application. This feature hasn't been backported from upstream yet, although a request to backport it has been made. In the EAP 7 GA release you can get extra information about access violations by enabling DEBUG logging for org.wildfly.security.access.
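
For example, that logging category could be enabled through the management CLI; this is a sketch assuming the standard logging subsystem (adjust to your configuration):

```
/subsystem=logging/logger=org.wildfly.security.access:add(level=DEBUG)
```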

When you run with the Security Manager in EAP 7, each module is able to declare its own set of unique permissions. If you don't define permissions for a module, a default of AllPermission is granted. Being able to define Security Manager policies per module is powerful, because you can limit the security impact if a sensitive or vulnerable feature of the application server is compromised. It gives Red Hat the ability to provide a workaround for a known security vulnerability via a configuration change to a module, limiting its impact. For example, to restrict the permissions of the JGroups modules to only what they require, you could add the following permissions block to the JGroups module.xml:

$EAP_HOME/modules/system/layers/base/org/jgroups/main/module.xml:

<permissions>
    <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jgroups/main/jgroups-3.6.6.Final-redhat-1.jar" actions="read"/>
    <grant permission="java.util.PropertyPermission" name="jgroups.logging.log_factory_class" actions="read"/>
    <grant permission="java.io.FilePermission" name="${env.EAP_HOME}/modules/system/layers/base/org/jboss/as/clustering/jgroups/main/wildfly-clustering-jgroups-extension-10.0.0.CR6-redhat-1.jar" actions="read"/>
    ...
</permissions>

In the EAP 7 GA release, the use of ${env.EAP_HOME} shown above won't work yet. The feature has been implemented upstream and its backport can be tracked. It makes file paths portable between systems by adding support for system property and environment variable expansion in module.xml permission blocks, making the release of generic security permissions viable.

While the Security Manager could be used to provide multi-tenancy for the application server, Red Hat does not think it's suitable for that. Our Java multi-tenancy in OpenShift is achieved by running each tenant's application in a separate Java Virtual Machine, with the operating system providing sandboxing via SELinux. This was discussed within the JBoss community, with the view of Red Hat reflected in this post.

In conclusion, EAP 7 introduced the Wildfly Security Manager, which allows an application developer to define security policies per application, while also giving an application administrator the ability to define security policies per module, as well as minimum and maximum sets of security permissions for applications. Enabling the Security Manager will have an impact on performance. Red Hat recommends taking a holistic approach to application security rather than relying on the Security Manager alone.

Product

Red Hat JBoss Enterprise Application Platform

Tags

eap jboss security

Component

jbossas
New badge: FLISOL 2016 Organizer!
FLISOL 2016 Organizer: You helped organize the Fedora booth for FLISOL, 2016
New badge: FLISOL 2016 Attendee!
FLISOL 2016 Attendee: You visited the Fedora booth at FLISOL, 2016
New badge: FISL 2016!
FISL 2016: You attended FISL 2016!
Fedora 24 Release Party: SFVLUG Event Report

On July 2nd, 2016, the San Fernando Valley Linux User Group (SFVLUG) in Lake Balboa, California, celebrated the release of Fedora 24 at their regular meeting. Fedora Ambassador Perry Rivera (FAS: lajuggler) helped coordinate these efforts at their regular meetup at Denny’s. To help celebrate the launch of Fedora 24, Perry brought some install media and a Fedora cake. The release party helped introduce Fedora 24 to a new group of users by providing them with the software and help to get Fedora 24 for themselves. This report details the release party events with SFVLUG and the impact from the event.

At a Glance: What is SFVLUG?

Denny’s in Lake Balboa

Our Ambassador in the Field

This report is for the following ambassador:

What is SFVLUG?

SFVLUG (San Fernando Valley Linux Users Group) meets bi-weekly to discuss a variety of free and open source topics.

The membership typically assembles at a local Denny’s in Lake Balboa, CA, home of Lake Balboa and the Japanese Garden.

SFVLUG showcases Linux and open-source technology topics among the San Fernando Valley community. The event’s chairperson, Brian, is a courteous, friendly person that encouraged Fedora to step in and sponsor part of their regular meeting.

One week earlier: 25 June

Costco Cake Order Form

Prior to the big event, I first checked with the restaurant franchise point-of-contact to confirm whether bringing outside food (in this case, a cake) into the restaurant was acceptable. An employee double-checked with the manager, green-lit the idea, and mentioned that plates and cutlery would be complimentary.

I then submitted a pre-order for a half-sheet cake at my local Costco. They did such a bang-up job that I've attached an example of the order form, in case it helps others set up future events/parties.

I tried to contact the club's main organizers to ask for a projector and power hookups for presenting a slideshow and quiz. Over the following week, I did not hear back from the web staff, as that particular Meetup inbox is seldom monitored. I later found out from the SFVLUG organizer that the Meetup page is undergoing a transition and that direct e-mail to Brian (SFVLUG) is the recommended method of contact at this juncture.

Half week earlier: 29 June

I created the following media to prepare for the event:

  • 3 Bootable USB Fedora 24 sticks
  • 1 Bootable Fedora 24 64-bit DVD
  • 1 Bootable Fedora 24 32-bit DVD

I also adapted Ambassadors Nemanja Milošević's (nmilosev) and Giannis Konstantinidis's (giannisk) slides. Our slide deck is found here for reference:

SFVLUG Day 1: 2 July

About 2 hours before the event, I picked up the custom-ordered cake and admired Costco bakery’s handiwork.

Fedora 24 Costco Cake

Since it was a very warm day (about 85°F), I dropped the cake off at the Denny’s venue for refrigeration. I then set out to hunt for a mylar “Happy Birthday” balloon to generate questions and some interest for the Fedora sponsorship during the meeting.

Happy Birthday Balloon

I arrived about 15 minutes before the scheduled start time to secure a spot near the entrance, so that I could see as many people as possible as they arrived.

After introducing Fedora, I brought the balloon in and waited until another member could bring in a power hookup. I then set up the following items:

  • Laptop presentation
  • Raffle box
  • Fedora Ambassador business cards
  • Sticker swag
  • Pen swag
  • Sticky notes and pens (for guest notes)

As I did this, I met some interesting people who I hadn’t met before and recorded metrics on their distribution usage.

Throughout the course of the meeting, no projector surfaced, which was OK. Plan B was to set up a laptop to automatically present slides. That presentation later evolved to manually presenting slides to small groups that stopped by to check things out; this turned out fine as it is very similar to the table demonstrations at larger conventions (e.g. SCaLE). The personal interaction facilitated active discussion.

Fedora 24 Slide Demo

As more people arrived, I invited guests to pick up raffle drawing tickets. A few seemed worried they’d have to pay something for tickets, but when reassured, they felt better about receiving their ticket.

I also invited people to stop by to learn more about Fedora 24 release as well as Fedora’s mission, goals, and what types of volunteers are needed. About six people expressed interest and stopped by the kiosk. Others asked general questions during an informal Q&A. Around fifteen people showed up throughout the course of the evening.

The questions I asked were open-ended (not answerable with a direct "yes" or "no"), to engage people in discussion and establish rapport. Such questions included:

  • So tell me what brings you here today?
  • How do you use Fedora?
  • If not using Fedora, what do you use and why?
  • Do you have any suggestions or comments for us to pass back upstream to Fedora?

We served cake about 7:00pm. It was very tasty, and disappeared very quickly as a result.

We Added Fedora Swag For Pizzazz

A Delicious Cake Cutting Ensued…

Additionally, we held the raffle around 8:30pm. The attendees seemed excited that real prizes existed!

Suggestion / feedback items: 2 July

Furthermore, visitors left feedback, suggestions, and comments about Fedora. The following suggestions and feedback are derived from informally interviewing each of them.

  • Michael: Primarily uses Ubuntu. He is generally looking for an unobtrusive operating system. He originally started with Fedora, but moved over to Ubuntu when other friends moved over. He primarily uses LibreOffice.
  • Hektor: Primarily uses a Mac / Windows virtual machine setup, and Ubuntu occasionally. He seemed very interested in experimenting with our Fedora Scientific Spin. He was also interested in a distribution that could offer the following capabilities:
    • VPN into offices
    • Remote desktop into servers
  • Peter: Teaches Debian at local community college. Very Pro-Debian. His big question was why don’t people derive from Fedora like they do from Debian. I responded that I think it was because Debian has a longer release cycle.
  • Frank: Primarily uses Fedora 6 (!!). Also uses CentOS, database, Perl, Puppy Linux/DSL. Primary apps include: OpenOffice and LibreOffice. He uses his system because it’s easy to use, allows him to get online, and because it has LibreOffice. He’s a teacher, writer, and deals with linguistics. He works for a non-profit (MEND) that works with people from different countries. His biggest grievance is that he’s unhappy with the office apps (in general). He’ll take a document written on one system and bring it over to another system under a similar app, and the pagination changes dramatically. He wants an OS that is fairly easy to use. He has multiple systems at home. He’s looking for a good NAS OS and/or a pure NAS device.
  • Dave: Primarily used Fedora 23. On the day of the release party, he used the “fedup” command to upgrade to F24. Smooth upgrade…works great, so far! His gear: T420s, SSD drive, 8GB memory
  • Simon: Installed F24 on T60p hardware. He found the wording on the install screen confusing, e.g. the Install and Test lines were puzzling. The Advanced Options were also confusing. After install, graphical video was not working on his laptop. He was troubleshooting the issue for the rest of the meeting.

Photo Gallery

SFVLUG Group Photo

SFVLUG Group Photo 2

Questions to Answer

  1. What is Compiz support like in Fedora 24?
  2. How is Raspberry Pi development support as of the present time?
  3. Is there an ARM-based spin or remix?
    1. ARM v7?
    2. ARM v8?
    3. For a&b, 32 and 64 bit versions?
  4. Is there a Fedora 24 Docker container available?
  5. What is Wayland, as it pertains to Fedora 24?
  6. 2+ users expressed dissatisfaction with systemd. How feasible is it to get rid of it and revert to System V init?
  7. What is the development status of func?
    1. Is there a Debian package forthcoming? [unanswerable?]
  8. Can we undo usrmove?
    1. It was a big feature in F17.
    2. It is now called a misfeature (one attendee’s perspective).
  9. Are there T60p install fixes for the video issue (mentioned above)?

Lessons learned

  1. Bring a clipboard for next time…
  2. Bring extra Fedora Ambassador cards.
  3. Know Wayland (an X Protocol replacement), which is mentioned very briefly in the slide deck.

Conclusion and acknowledgments

Overall, I felt that our mission to reach out and encourage attendees to download or try Fedora was successful. I think we also helped their team tremendously by facilitating discussion, taking pictures, sharing cake, and offering prizes.

Brian from SFVLUG thanked us personally for attending and looks forward to our next release party, if those plans materialize.

I'd like to personally thank the following for making this possible:

  • Brian / SFVLUG for assisting us with meeting arrangements
  • Costco bakery staff for a tasty cake
  • Denny’s Lake Balboa staff for hosting our meeting
  • Brian Monroe (freenode: ParadoxGuitarist) for providing swag
  • The FAmNA Team for budgeting
  • Nemanja Milošević and Giannis Konstantinidis for the baseline slides

Image courtesy of Denys Nevozhai – originally posted to Unsplash as Untitled.

The post Fedora 24 Release Party: SFVLUG Event Report appeared first on Fedora Community Blog.

Private Repo on Pagure

One of my proposals for Pagure was to have private repositories: repositories that are visible only to you and the people you give permission to.

To be honest, I thought it would require a few tweaks and I would be good to go, but that wasn't the case, and the insights I gained working on this feature were amazing. I worked on this feature in primarily three stages, each of which was a challenge in its own right.

The three stages were:

  1. UI
  2. Database Query
  3. Tests

UI

The UI was supposed to have a checkbox labeled "Private"; when a user ticks it, an existing project becomes private, or a new project is private from the time it is created.

Achieving this was a joyride: with Flask I just needed to make changes in the form and the settings page UI, and voilà!

I introduced a Private column in the project table, and that was pretty much it. Nice and clean.
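
The change can be sketched roughly like this. This is not Pagure's actual code: the Project model below is a hypothetical, stripped-down stand-in, shown only to illustrate adding a private column and filtering on it with SQLAlchemy:

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine, or_
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Project(Base):
    """Hypothetical, stripped-down stand-in for Pagure's project table."""
    __tablename__ = 'projects'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    user = Column(String, nullable=False)                     # project owner
    private = Column(Boolean, default=False, nullable=False)  # the new column

# In-memory SQLite, like the DB later used for the tests
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([
    Project(name='public-repo', user='alice'),
    Project(name='secret-repo', user='alice', private=True),
])
session.commit()

def visible_projects(session, username=None):
    """Public projects, plus private ones owned by ``username``."""
    query = session.query(Project)
    if username is None:
        return query.filter(Project.private.is_(False)).all()
    return query.filter(
        or_(Project.private.is_(False), Project.user == username)
    ).all()

print([p.name for p in visible_projects(session)])
print([p.name for p in visible_projects(session, 'alice')])
```

Anonymous visitors then see only the public projects, while the owner also sees their private ones.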

DATABASE

This was the most challenging part for me. Since I had not worked much with databases, this was out of my comfort zone; I actually went back to my database basics to check that I was doing things right.

In Pagure we use SQLAlchemy as the ORM layer. ORM stands for object-relational mapper; it maps database tables to an object-oriented model of the data. SQLAlchemy is a really powerful tool.

While figuring out how to get all the admins who can view private projects, I struggled a lot, since I was working with a function that forms the core of Pagure: if things go wrong with this function, the whole project takes a hit.

So the challenge was to make minimal, independent changes that wouldn't compromise the existing functionality yet would still introduce the new one. I struggled to achieve it and failed a lot of times, working hard to get it right, constantly moving to the whiteboard to figure out a solution on paper, then switching back to my screen to code it up.

I was so desperate to get this working that I even pinged Armin on IRC to ask my questions about Flask and SQLAlchemy. All the while, the best support I got was from my mentor, Pingou.

Finally, after struggling a lot, I arrived at a very beautiful solution, and it was done!

Just when I thought I was done, there came the question of writing tests. Since I had altered a very major piece of functionality, I needed to test every aspect of it.

Testing

Testing was a herculean task, since I had not done a lot of testing before, and I had a lot to learn. For starters, the DB used for testing is an in-memory DB, not the one used by the app.

The session the app maintains has to be replicated in a way the tests can use, and I had to learn how to use pygit to actually initialize a repo with git init and work with it.

Towards the end of this PR, my development style evolved from writing code and then testing it, to writing the test first and then introducing code that passes it. It has been really amazing working on this feature, and I hope it will be integrated soon.

I think a little more work may still be required on this feature, but it feels really amazing to do this work.

The link to the branch on Pagure.

The link to the current Pull-Request.

Happy Hacking!


News on the international equivalence of Spanish higher education degrees

I'll admit up front that I haven't studied the regulation of university degrees in Spain in depth. The thing is, I received word that the international equivalences of Spanish degrees have finally been published. For largely historical reasons, the structure of higher education degrees in Spain has had little in common with the rest of the world's. We are all supposed to know by now which transformations were forced by the "Bologna Plan", which among other things aims for the mobility of students and workers and the automatic interoperability of higher education degrees among EU member countries (strictly speaking I believe it goes beyond the EU, but being exhaustive is not my goal today). Apparently that transformation of degrees was not enough to know what our degrees are equivalent to abroad. Worse still, neither were the equivalences of pre-Bologna higher education degrees, which often remain a mess even when changing universities without leaving the country. The news is that this information now appears to be settled.

Very schematically: the interoperability system for higher education degrees in Spain is established in the Spanish Qualifications Framework for Higher Education (MECES), which, if I understand it correctly, serves to establish equivalences with the European Qualifications Framework for Lifelong Learning (EQF), the Rosetta Stone of this whole system.

Well then: since June 1, 2016, the equivalences are officially published, as stated in the original: "correspondence between the Spanish Qualifications Framework for Higher Education and the European Qualifications Framework".

MECES / EQF equivalence table for Spanish higher education degrees

What are the consequences? Probably more than these, but I understand that at least from now on, for example, you can enroll in any university that implements the EQF without having to go through the exasperating process of getting your degrees validated.

I leave this entry as a personal reminder and in the hope that it may be useful. If anyone spots an error, please point it out in the comments.

References:

Docker on Bluemix with automated full-stack deploys and delivery pipelines

Introduction

This document explains, with working examples, how to use advanced Bluemix platform features such as:

  • Docker on Bluemix, integrated with Bluemix APIs and middleware
  • Full stack automated and unattended deployments with DevOps Services Pipeline, including Docker
  • Full stack automated and unattended deployments with cf command line interface, including Docker

For this, I’ll use the following source code structure:

github.com/avibrazil/bluemix-docker-kickstart

The source code currently brings to life a PHP application (WordPress, the popular blogging platform), integrated with some Bluemix services and Docker infrastructure, but it could be any Python, Java, Ruby, etc. app.

This is how full stack app deployments should be

Before we start: understand Bluemix's 3 pillars

I feel it is important to position what Bluemix really is and which of its parts we are going to use. Bluemix is composed of 3 different things:

  1. Bluemix is a hosting environment to run any type of web app or web service. This is the only function provided by the CloudFoundry Open Source project, which is an advanced PaaS that lets you provision and de-provision runtimes (Java, Python, Node etc), libraries and services to be used by your app. These operations can be triggered through the Bluemix.net portal or by the cf command from your laptop.
  2. Pre-installed libraries, APIs and middleware. IBM is constantly adding functions to the Bluemix marketplace, such as cognitive computing APIs in the Watson family, data processing middleware such as Spark and dashDB, or even IoT and Blockchain-related tools. These are high value components that can add a bit of magic to your app. Many of those are Open Source.
  3. DevOps Services. Accessible from hub.jazz.net, it provides:
    • Public and private collaborative Git repositories.
    • UI to build, manage and execute the app delivery pipeline, which does everything needed to transform your pure source code into a final running application.
    • The Track & Plan module, based on Rational Team Concert, to let your team mates and clients exchange activities and control project execution.

This tutorial will dive into #1 and some parts of #3, while using some services from #2.

The architecture of our app

Docker on Bluemix with services

When fully provisioned, the entire architecture will look like this: several Bluemix services (MySQL, Object Storage) packaged into a CloudFoundry app (the bridge app) that serves some Docker containers, which in turn do the real work. Credentials to access those services will be automatically provided to the containers as environment variables (VCAP_SERVICES).

Structure of Source Code

You may fork and add your app components to this structure.

bridge-app and manifest.yml
The CloudFoundry manifest.yml that defines app name, dependencies and other characteristics.
containers
Each directory contains a Dockerfile and other files to create Docker containers. I left some useful examples in the repo, but we’ll only use the phpinfo and wordpress directories in this tutorial.
.bluemix folder
When this code repository is imported into Bluemix, metadata here will be used to set up your development environment under DevOps Services.
admin folder
Random shell scripts.

Watch the deployment

The easiest way to deploy the app is through DevOps Services:

  1. Click to deploy

    Deploy to Bluemix

  2. Provide a unique name to your copy of the app, also select the target Bluemix space
    Deploy to Bluemix screen
  3. Go to DevOps Services ➡ find your project clone ➡ select Build & Deploy tab and watch
    Full Delivery Pipeline on Bluemix

Under the hood: understand the app deployment in 2 strategies

Conceptually, these are the things you need to do to fully deploy an app with Docker on Bluemix:

  1. Instantiate external services needed by your app, such as databases, APIs etc.
  2. Create a CloudFoundry app to bind those services so you can handle them all as one block.
  3. Create the Docker images your app needs and register them on your Bluemix private Docker Registry (equivalent to the public Docker Hub).
  4. Instantiate your images in executable Docker containers, connecting them to your backend services through the CloudFoundry app.

The idea is to encapsulate all these steps in code so deployments can be done entirely unattended. It's what I call brainless 1-click deployment. There are 2 ways to do that:

  • A regular shell script that extensively uses the cf command. This is the admin/deploy script in our code.
  • An in-code delivery pipeline that can be executed by Bluemix DevOps Services. This is the .bluemix/pipeline.yml file.

From here, we will detail each of these steps both as commands (on the script) and as stages of the pipeline.

  1. Instantiate external services needed by your app…

    I used the cf marketplace command to find the service names and plans available. ClearDB provides MySQL as a service. And just as an example, I’ll provision an additional Object Storage service. Note the similarities between both methods.

    Deployment Script
    cf create-service \
      cleardb \
      spark \
      bridge-app-database;
    
    cf create-service \
      Object-Storage \
      Free \
      bridge-app-object-store;
    Delivery Pipeline

    When you deploy your app to Bluemix, DevOps Services will read your manifest.yml and automatically provision whatever is under the declared-services block. In our case:

    declared-services:
      bridge-app-database:
        label: cleardb
        plan: spark
      bridge-app-object-store:
        label: Object-Storage
        plan: Free
    
  2. Create an empty CloudFoundry app to hold together these services

    The manifest.yml has all the details about our CF app: name, size, CF buildpack to use, and dependencies (such as the services instantiated in the previous stage). So a plain cf push will use it and do the job. Since this app is just a bridge between our containers and the services, we'll use minimum resources and the minimal noop-buildpack. After this stage you'll be able to see the app running on your Bluemix console.

    Deployment Script
    cf push;
    Delivery Pipeline
    Stage named “➊ Deploy CF bridge app” simply calls cf push;
  3. Create the Docker images

    The heavy lifting here is done by the Dockerfiles. We’ll use base CentOS images with official packages only in an attempt to use best practices. See phpinfo and wordpress Dockerfiles to understand how I improved a basic OS to become what I need.

    The cf ic command is basically a clone of the docker command, but pre-configured to use the Bluemix Docker infrastructure. There is simple documentation for installing the IBM Containers plugin for cf.

    Deployment Script
    cf ic build \
       -t phpinfo_image \
       containers/phpinfo/;
    
    cf ic build \
       -t wordpress_image \
       containers/wordpress/;
    
    
    Delivery Pipeline

    Stages handling this are “➋ Build phpinfo Container” and “➍ Build wordpress Container”.

    Open these stages and note how image names are set.

    After this stage, you can query your Bluemix private Docker Registry and see the images there. Like this:

    $ cf ic images
    REPOSITORY                                          TAG     IMAGE ID      CREATED     SIZE
    registry.ng.bluemix.net/avibrazil/phpinfo_image     latest  69d78b3ce0df  3 days ago  104.2 MB
    registry.ng.bluemix.net/avibrazil/wordpress_image   latest  a801735fae08  3 days ago  117.2 MB
    

    A Docker image is not yet a container. A Docker container is an image that is being executed.

  4. Run your containers integrated with your previously created bridge app

    To make our tutorial richer, we’ll run 2 sets of containers:

    1. The phpinfo one, just to see how Bluemix gives us an integrated environment
      Deployment Script
      cf ic run \
         -P \
         --env 'CCS_BIND_APP=bridge-app-name' \
         --name phpinfo_instance \
         registry.ng.bluemix.net/avibrazil/phpinfo_image;
      
      
      IP=`cf ic ip request | 
          grep "IP address" | 
          sed -e "s/.* \"\(.*\)\" .*/\1/"`;
      
      
      cf ic ip bind $IP phpinfo_instance;
      Delivery Pipeline

      Equivalent stage is “➌ Deploy phpinfo Container”.

      Open this stage and note how some environment variables are defined, especially BIND_TO.

      Bluemix DevOps Services default scripts use these environment variables to correctly deploy the containers.

      The CCS_BIND_APP on the script and BIND_TO on the pipeline are key here. Their mission is to make the bridge-app’s VCAP_SERVICES available to this container as environment variables.

      In CloudFoundry, VCAP_SERVICES is an environment variable containing a JSON document with all credentials needed to actually access the app’s provisioned APIs, middleware and services, such as host names, users and passwords. See an example below.

    2. A container group with 2 highly available, monitored and balanced identical wordpress containers
      Deployment Script
      cf ic group create \
         -P \
         --env 'CCS_BIND_APP=bridge-app-name' \
         --auto \
         --desired 2 \
         --name wordpress_group_instance \
         registry.ng.bluemix.net/avibrazil/wordpress_image
      
      
      cf ic route map \
         --hostname some-name-wordpress \
         --domain $DOMAIN \
         wordpress_group_instance

      The cf ic group create command creates a container group and starts its containers at once.

      The cf ic route map command configures Bluemix load balancer to capture traffic to http://some-name-wordpress.mybluemix.net and route it to the wordpress_group_instance container group.

      Delivery Pipeline

      Equivalent stage is “➎ Deploy wordpress Container Group”.

      Look at this stage's Environment Properties to see how I'm configuring the container group.

      I had to manually modify the standard deployment script, disabling deploycontainer and enabling deploygroup.
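
To make the VCAP_SERVICES mechanism concrete, here is a small sketch of how code inside a container could read the ClearDB credentials. The JSON below is a made-up, trimmed-down sample following the usual CloudFoundry shape; on Bluemix the real document is injected automatically:

```python
import json
import os

# Stand-in sample document; real Bluemix deployments already have VCAP_SERVICES set
os.environ.setdefault('VCAP_SERVICES', json.dumps({
    "cleardb": [{
        "name": "bridge-app-database",
        "credentials": {
            "hostname": "us-cdbr-example.cleardb.net",
            "name": "ad_abc123",
            "username": "b1234567890",
            "password": "secret"
        }
    }]
}))

def cleardb_credentials():
    """Return the credentials block of the first ClearDB service instance."""
    services = json.loads(os.environ['VCAP_SERVICES'])
    return services['cleardb'][0]['credentials']

creds = cleardb_credentials()
print('database %s on host %s' % (creds['name'], creds['hostname']))
```

This is essentially what the sed edits in the wordpress Dockerfile achieve for PHP, except that they read pre-flattened variables such as VCAP_SERVICES_CLEARDB_0_CREDENTIALS_HOSTNAME instead of parsing the JSON.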

See the results

At this point, WordPress (the app that we deployed) is up and running inside a Docker container, and already using the ClearDB MySQL database provided by Bluemix. Access the URL of your wordpress container group and you will see this:

WordPress on Docker with Bluemix

Bluemix dashboard also shows the components running:

Bluemix dashboard with apps and containers

But the most interesting evidence can be seen by accessing the phpinfo container URL or IP. Scroll to the environment variables section to see all the services' credentials available as environment variables from VCAP_SERVICES:

Bluemix VCAP_SERVICES as seen by a Docker container

I use these credentials to configure WordPress while building the Dockerfile, so it can find its database when executing:

.
.
.
RUN yum -y install epel-release;\
	yum -y install wordpress patch;\
	yum clean all;\
	sed -i '\
		         s/.localhost./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_HOSTNAME")/ ; \
		s/.database_name_here./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_NAME")/     ; \
		     s/.username_here./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_USERNAME")/ ; \
		     s/.password_here./getenv("VCAP_SERVICES_CLEARDB_0_CREDENTIALS_PASSWORD")/ ; \
	' /etc/wordpress/wp-config.php;\
	cd /etc/httpd/conf.d; patch < /tmp/wordpress.conf.patch;\
	rm /tmp/wordpress.conf.patch
.
.
.

So I’m using sed, the text-editor-as-a-command, to edit WordPress configuration file (/etc/wordpress/wp-config.php) and change some patterns there into appropriate getenv() calls to grab credentials provided by VCAP_SERVICES.

Dockerfile best practices

The containers folder in the source code has one folder per image; each is an example of a different Dockerfile. We use only the wordpress and phpinfo ones here, but I'd like to highlight some best practices.

A Dockerfile is a script where you define how a container image should be built. A container image is very similar to a VM image; the difference is mostly the file format they are stored in. VMs use QCOW, VMDK, etc., while Docker uses layered filesystem images. From the application's perspective, all the rest is almost the same.

  1. Being a build script, it starts from a base parent image, defined by the FROM command. We used a plain official CentOS image as a starting point. You must select your parent images very carefully, in the same way you select the Linux distribution for your company. Consider who maintains the base image; it should be well maintained.
  2. Avoid creating images manually, e.g. by running a base container, issuing commands by hand and then committing it. All logic to prepare the image should be scripted in your Dockerfile.
  3. In case complex file editing is required, capture edits in patches and use the patch command in your Dockerfile, as I did on wordpress Dockerfile.
    To create a patch:

    diff -Naur configfile.txt.org configfile.txt > configfile.patch

    Then see the wordpress Dockerfile to understand how to apply it.

  4. Whenever possible, use official distribution packages instead of downloading libraries (.zip or .tar.gz) from the Internet. In the wordpress Dockerfile I enabled the official EPEL repository so I can install WordPress with YUM. The same happens in the Django and NGINX Dockerfiles. Also note that I don’t have to worry about installing the PHP and MySQL client libraries: they get installed automatically when YUM installs the wordpress package, because PHP and MySQL are its dependencies.
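A tiny end-to-end sketch of the patch workflow from step 3, using made-up file names and contents:

```shell
# Work in a scratch directory with a pristine and an edited copy (both invented).
mkdir -p /tmp/patchdemo
cd /tmp/patchdemo
printf "Listen 80\n" > configfile.txt.org
printf "Listen 8080\n" > configfile.txt
# diff exits 1 when the files differ, so tolerate that while capturing the patch.
diff -Naur configfile.txt.org configfile.txt > configfile.patch || true
# Applying the patch to a fresh pristine copy reproduces the edit,
# which is exactly what the Dockerfile's "patch < ..." step relies on.
cp configfile.txt.org fresh.txt
patch fresh.txt < configfile.patch
cat fresh.txt
```

The patch file, unlike a hand-edited image, documents precisely what changed and can be reviewed and version-controlled alongside the Dockerfile.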

When Docker on Bluemix is useful

CloudFoundry (the execution environment behind Bluemix) has its own open source container technology called Warden, and CloudFoundry’s Dockerfile equivalent is called a buildpack. Just to illustrate, here is a WordPress buildpack for CloudFoundry and Bluemix.

Choosing Docker for some parts of your application means giving up some native integrations and facilities that Bluemix provides naturally and automatically. With Docker you’ll have to control and manage more things yourself. So go with Docker, instead of a buildpack, if:

  • You need portability: you have to be able to move your runtimes in and out of Bluemix/CloudFoundry.
  • A buildpack you need is less well maintained than the equivalent Linux distribution package, or you need a reliable, supported source of pre-packaged software that only a major Linux distribution can provide.
  • You are not ready to learn how to use and configure a complex buildpack, like the Python one, when you are already proficient with your favorite distribution’s Python packaging.
  • You need advanced Apache HTTPD features such as mod_rewrite, mod_autoindex or mod_dav.
  • You simply need more control over your runtimes.

The best balance is to use Bluemix services/APIs/middleware and native buildpacks/runtimes whenever possible, going with Docker in specific situations while still leveraging the integration that Docker on Bluemix provides.

July 12, 2016

Pulp 2.9.0 Generally Available

Pulp 2.9.0 is now available and can be downloaded from the 2.9 stable repositories:

https://repos.fedorapeople.org/repos/pulp/pulp/stable/2.9/

The Pulp “2” and “latest” repositories have also been updated to point to 2.9.0:

https://repos.fedorapeople.org/repos/pulp/pulp/stable/2/
https://repos.fedorapeople.org/repos/pulp/pulp/stable/latest/

This release includes many new features as well as bug fixes. Owing to the more rapid, time-based release cadence, 2.9 contains fewer new features than the 2.8 release did.

This release was slightly delayed by two blocking issues (2035 and 2037, seen below) which were fixed in Beta 2. The release timeline of 2.10 has not been impacted by these delays.

More information about our release schedule can be found on the Pulp wiki:
https://pulp.plan.io/projects/pulp/wiki/Release_Schedule

Known Issues

Possible Errata Sync Failure

Several issues were reported against Pulp 2.8 that were not included in the Pulp 2.9.0 release as a result of release timing. The list of issues fixed in 2.8.6 outlines these bugs, but there is one issue in particular that can potentially break RPM repository syncing after upgrading: https://pulp.plan.io/issues/2048

This issue is related to resyncing errata from some repositories, and in a pulp-admin sync operation looks like this:

Task Failed
Could not parse errata `updated` field: expected format '%Y-%m-%d %H:%M:%S'.
 Fail to update the existing erratum SOME_ERRATUM_ID.

As a workaround, you can skip errata when syncing the feed repository by updating the repo:

pulp-admin rpm repo update --repo-id <repo-id> --skip=erratum

This will be fixed in Pulp 2.9.1. If you require errata to be synced from a feed repository, consider delaying an upgrade to Pulp 2.9 until 2.9.1 is released.

RPM Publish Directory

A story related to specifying different package publish directories inside an RPM repo, e.g. packages go in a configured packages dir, was included in previous betas and the release candidate for 2.9.0. Testing revealed that the feature didn’t perform as expected, so that feature has been pulled from this release. It is likely to return in a future release.

Upgrading

The Pulp 2 stable repository (pulp-2-stable) is included in the pulp repo files:

After enabling the pulp-2-stable repository, you’ll want to follow the standard upgrade path, with migrations, to get to 2.9:

$ sudo systemctl stop httpd pulp_workers pulp_resource_manager pulp_celerybeat
$ sudo yum upgrade
$ sudo -u apache pulp-manage-db
$ sudo systemctl start httpd pulp_workers pulp_resource_manager pulp_celerybeat

Supported Fedora Versions

Fedora 22 is no longer a supported Fedora release, and we will not be supporting it for Pulp 2.9 onward. Supported Fedora 24 builds will start as soon as possible.

We’ll be updating our docs soon with more details about our Fedora support policy than what’s currently there. In the meantime, Pulp users on Fedora 22 are encouraged to upgrade to Fedora 23 to continue receiving supported releases.

New Features

New features for Pulp 2.9 include optimizations in publishing, repoview support for yum repos, langpacks support for yum repos, and more:

Pulp

  • 1724 Publish should be a no-op if no units and no settings have changed since the last successful publish

RPM Support

  • 1716 As a user, I can have better memory performance on Publish by using SAX instead of etree for comps and updateinfo XML production
  • 1543 As a user, I would like incremental export to be the same format as full export
  • 1367 comps langpacks support
  • 1158 As a user, I can force full/fresh publish of rpms and not do an incremental publish
  • 1003 As a user, I can use pulp-admin to upload of package_environment
  • 189 Repoview-like functionality for browsing repositories via the web interface

View this list in redmine.

More information about these new features can be found in the release notes:
http://docs.pulpproject.org/en/2.9/user-guide/release-notes/2.9.x.html
http://docs.pulpproject.org/en/2.9/plugins/pulp_rpm/user-guide/release-notes/2.9.x.html

Issues Addressed

All bug fixes from Pulp 2.8.5 and earlier are included in Pulp 2.9.

Here are the bug fixes specific to 2.9:

Pulp

  • 2015 rpm repo publish fails with “Incorrect length of data produced” error
  • 1928 Publish should be operational if override config values were specified
  • 1844 pulp-admin –config and/or api_prefix option appears to be ignored

RPM Support

  • 2037 Migration failing after 2.9 upgrade
  • 2035 Errata are published incrementally
  • 2001 Make sure sqlite files are generated if repoview is enabled
  • 1969 as user, I can export a repo with a specified checksum type
  • 1938 force_full is not supported by export_distributor
  • 1876 Add example on how to generate PULP_MANIFEST file
  • 1787 None in comps.xml
  • 1619 as user, I can export repo groups with different checksum than sha256
  • 1618 –checksum-type is broken

View this list in redmine.

Libocon 2016: travel info

LibreOffice Conference 2016 is less than two months away and people are starting to look for information on how to get to and around Brno. We’ve prepared a page with extensive information on how to get to Brno.

libocon16-logo

Plane

If you go by plane, the best option is flying directly to Brno. You can get very cheap tickets from London-Stansted (Ryanair), London-Luton, Eindhoven (Wizz Air), and moderately cheap tickets from summer destinations (Spain, Greece, Italy,…) with SmartWings. There is also a daily Lufthansa flight to Munich which can connect you with dozens of destinations around the world (it’s particularly good for flights to the US; you can reach the East Coast in 10 hours). But the route caters mainly to business travelers, the plane is small and prices tend to be high.

The second best option is flying to Vienna, which has flights to dozens of destinations around the world, and prices are pretty good. RegioJet operates direct buses between the airport and Brno, so you can get to Brno conveniently in 2 hours. Another option is the airport in Prague, but it has fewer connected destinations and it takes longer to get from there to Brno.

Train

If you happen to live in Germany, Austria, Poland, Slovakia, or Hungary, the train may be the best option for you. Brno has direct trains to all of those countries.

Car

You can also come by car, but be prepared for delays caused by extensive road reconstruction. The D1 highway (between Prague and Brno) is undergoing long-term reconstruction, with five sections under work this season, and several more motorways are partly or completely closed for the same reason. Most of the road work happens during the summer holidays, but some of it will continue into September, and it’s hard to predict which parts. Traffic jams occur, so give yourself enough time if you drive to Brno.

Bus

If you’d like to go by bus, we can recommend RegioJet. They connect Brno with many destinations in Europe, and their buses are very comfortable, with free hot drinks, cheap snacks, wifi and entertainment systems, all for reasonable prices. Other bus companies include Flixbus and Eurolines.

And More

The LibreOffice community is truly global, and attendees from farther away may not know much about the Czech Republic. Maybe you wonder what the currency is, what the weather is like in September, etc. We’ve prepared a page with basic information about the country. For more, we link to the CzechTourism website, where you can find much more information relevant to travelers to the Czech Republic.

Soon, we will also extend the information on how to get around Brno (the public transport system, how and where to buy tickets, how to get to the conference hotel and venue,…). Stay tuned 😉


Testing the 8-bit computer Puldin

Puldin creators

Last weekend I visited TuxCon in Plovdiv and was very happy to meet and talk to some of the creators of the Puldin computer! In the picture above are (left to right) Dimitar Georgiev - wrote the text editor, Ivo Nenov - BIOS, DOS and core OS developer, Nedyalko Todorov - director of the vendor company, and Orlin Shopov - BIOS, DOS, compiler and core OS developer.

Puldin is a 100% pure Bulgarian development: while the “Pravetz” brand machines were copies of the Apple ][ (Pravetz 8A, 8C, 8M), Oric (Pravetz 8D) and IBM PC (Pravetz 16), the Puldin computers were built from scratch, both hardware and software, and were produced in Plovdiv in the late 80s and early 90s. 50,000 units were made; at least 35,000 of them were exported to Russia and paid for. A typical configuration in a Russian classroom consisted of several Puldin computers and a single Pravetz 16. According to Russian sources, these computers were last used in 2003, serving as Linux terminals and being maintained without any support from the vendor (because it had ceased to exist).

Puldin 601

One of the main objectives of Puldin was full compatibility with the IBM PC. At the time IBM was releasing extensive documentation about how its software and hardware worked, which Puldin's creators used as their software specs. Although the IBM PC used a faster CPU, the Puldin 601 had comparable performance thanks to aggressive software and compiler optimizations.

Testing-wise, the team would compare Puldin's functionality with that of the IBM PC. Full compatibility at the file storage layer was a hard requirement: floppy disks written on a Puldin had to be readable on an IBM PC and vice versa. The same went for programs compiled on Puldin - they had to execute on the IBM PC.

Everything, of course, was tested manually, and on top of that all the software had to be burned to ROM before you could do anything with it. As you can imagine, the testing process was quite slow and painful by today's standards. I asked the guys whether they had ever found a bug in the IBM PC that wasn't present in their code, but they couldn't remember one.

What was interesting for me on the hardware side was that you could plug the computer directly into a cheap TV set, and that it was one of the first computers that could run on 12V DC, powered directly from a car battery.

Pravetz 8

There was also a fully functional Pravetz 8 with an additional VGA port for connecting it to an LCD monitor, as well as an SD card reader wired to function as a floppy disk drive (the small black dot behind the joystick).

For those who missed it (and understand Bulgarian), I have a video recording on YouTube. For more info about the history and the hardware, please check out the Olimex post on Puldin (in English). For more info on Puldin and Pravetz, please visit pyldin.info (in Russian) and pravetz8.com (in Bulgarian), using Google Translate if need be.

Hosting your own Fedora Test Day

Many important packages and software are developed for Fedora every day. One of the most important parts of software development is quality assurance, or testing. For important software collections in Fedora, there are sometimes concentrated testing efforts that pull in large groups of people who might not otherwise help test. Organizing a Fedora Test Day is a great way to expose your project and bring more testers to try a new update before it goes live.

Most of the time, you will be able to test software updates without help. But for larger software or packages crucial to Fedora, having more eyes and hands to poke around is useful and helpful. This post walks you through the process of organizing your own Fedora Test Day and the work that goes into it.

Steps to organize a Test Day

An example ticket for a Fedora Test Day in Fedora Quality Assurance

  1. Decide the change or target feature you want to test for.
    1. This might involve talking with other developers and packagers to make sure any compatibility issues with other software are included in test cases.
  2. Create a ticket for organizing your Fedora Test Day. Tickets can be created on the Fedora QA Trac after authenticating with a FAS ID. When filing the ticket, make sure to include the following details.
    1. Test Day name with the planned date
    2. Assigning / copying relevant people to the ticket
    3. Mailing the [email protected] mailing list is helpful for managing and tracking the progress of the ticket for community members
  3. Do some research and check if any similar test cases were already run. If they were, you can re-use a previous wiki page of test cases. If not, you will need to write new Test Day and test case pages.
  4. After setting up the wiki, begin working on the Test Day metadata page. The metadata page is what populates the Test Day application, and the application page is where testers post their results against the test cases they execute.

These are the necessary steps to get your Fedora Test Day rolling. The results for your test cases appear in the Test Day application, which is where all the testers post their results.

Setting up a Test Day

An example of a Fedora Test Day wiki page

Before putting your Test Day together, do some homework on the change set for the Fedora release you are targeting. Are there any changes that might conflict with your software updates? Are there other factors that could be a point of concern? Make sure you have a general sense of what’s new and how it may affect your software. For example, take a look at the Fedora 24 change set.

Your next step is to share your plans for a Test Day with the rest of the Fedora Quality Assurance team. To do this, file a ticket in the Fedora QA Trac. Make sure you include any relevant information, such as the changes you are targeting and anything useful for explaining your Test Day.

If you were unable to find any previous test cases on the wiki that you could reuse, you will need to write a new Test Day page. Fortunately, there is a template you can use to help simplify this process. If you are starting from scratch, make sure you use the template to meet the required criteria for organizing your Test Day. In addition to the Test Day page, you will also need to set up a page for test cases. This page will contain the specific instructions for what testing is needed and how to do it. It also details which test cases should yield what results. You can find examples of past test case pages on the wiki.

Screenshot of the final Fedora Test Day application page

Next up, the meta page is required for automated result reporting; the Test Day application needs it to work as expected. To help you write your own, you can find a link to a past page for <test day>. Once completed, submit your Test Day information to the Test Day application in the Fedora Infrastructure.

Running the Test Day

Congratulations! By this point, your Test Day will be live. Once approved, its Test Day page is publicly available; it will look like the screenshot below. You should also consider announcing your Test Day on both the [email protected] and [email protected] mailing lists to help bring some exposure to your event. You can also share the details in other communities, such as the Google+ group.

The post Hosting your own Fedora Test Day appeared first on Fedora Community Blog.

Event report: Fedora 24 release party Pune

Last Saturday we had the Fedora 24 release party in Pune. This was actually done along with our regular Fedora meetup, and in the same location. We had a few new faces this time. But most of the regular attendees attended the meetup.

Chandan Kumar started the day with a presentation about new features in Fedora 24, and we tried a few of those out on our laptops. During the discussions, Parag and Siddhesh pointed out how important self-learning is for us. Siddhesh also talked about a few project ideas around glibc. At one of the previous meetups, many of the attendees had sent PRs to the Fedora Autocloud test cases; we talked about those for a few minutes, and as of yesterday I am merging them into master.

As a group, we decided to work on a modern version of the glibc documentation. There is no git repo yet, but I will provide links when we have something to show. Our goal as a group is to do more upstream contribution. One thing I noticed was that most of the attendees were dgplug summer training participants.

Release Party Fedora 24 - Porto Alegre

The Fedora 24 Release Party in Porto Alegre took place on July 5th at the Informatics Institute of the Federal University of Rio Grande do Sul (UFRGS).


The activities started with an overview presentation of the Fedora Project and the new features included in F24, by itamarjp. Following up, we had the presentation "Start Contributing to FOSS on Fedora", by me.

 

After that we spent some good time discussing FISL (which happens this week in Porto Alegre), FOSS and society. The attendees were able to grab Fedora stickers and pins at our table. Then we ate our snacks in the coffee-break room, with more conversations. Finally, the event was over. It was nice to discuss these matters with other people involved in Free Software and Fedora.

I can't wait for FISL! We'll be in the communities area, at the Fedora booth. Come see us!

Kisses, Twi.

Open Data and OASA

OASA map

As a regular user of Athens public transport, I often visit the OASA website to check the available information, especially when I want to use a line I am not familiar with, such as line 227. The link is from the Internet Archive's Wayback Machine, because this feature has since disappeared from the OASA site and has been replaced by a pointer to Google Transit.

Google Transit is a sub-project of Google Maps. The service may be offered free of charge, but it remains a commercial service of a for-profit company, with specific terms of use both for the service and for the data. Like all Google services, it operates as an advertising distribution platform.

Years ago I consciously stopped using Google Maps and switched to OpenStreetMap (and applications based on it), for many reasons - mostly the same reasons I prefer to read an article on Wikipedia rather than in Britannica. I therefore find it unacceptable that a (still) public organization pushes me to use a commercial service in order to access data I have already paid to produce. Today I sent the following email to OASA:

Good evening,

Over the past few months, the information (stops, schedules, maps) for all bus and trolley lines has disappeared from your site (oasa.gr). Instead, the relevant page points to a commercial service (Google Transit).

As a citizen, I would like to know:

  1. How can I find this information through your site, without having to use commercial services (Google Maps, Here Maps, etc.)?

  2. Your terms-of-use page states that commercial use of all the data (maps, diagrams, lines, schedules, etc.) is not permitted. Which data are you referring to, if the data are not available through your site in the first place?

  3. The data are offered freely through geodata.gov.gr under a "Creative Commons: Attribution" license, which permits commercial use. Which of the two actually applies?

  4. If commercial use of the data is indeed not permitted, where is your agreement with Google published, and what is the financial benefit for the organization?

I don't know whether I will receive a substantive answer, or any answer at all, but the situation remains infuriating. Especially considering that OASA had such a service in operation since 2011 and chose to shut it down, while at the same time promoting a mobile application that largely implements the services missing from its site, ignoring citizens who do not own a smartphone.

My view is simple. Data and software produced and implemented with public money must also be public property. This means that citizens should not be forced to use commercial services to access public-service data, nor should they have to go through specific companies' app stores to download a public service's application to their phone. For the same reasons, these applications should be offered as Free Software and their code should be open, since public money was spent on them.

In this case, OASA trampled on any notion of a "public" good through its preferential treatment of one company (Google), offering it free data and advertising while simultaneously prohibiting commercial exploitation of the data by its competitors.


Comments and reactions on Diaspora, Twitter, Facebook

July 11, 2016

This is quite a nice tool – magic-wormhole

I was catching up on the various talks at PyCon 2016 held in the wonderful city of Portland, Oregon last month.

There is lots of good content available from PyCon 2016 on YouTube. What particularly struck me was what one could call a mundane tool for file transfer.

This tool, called magic-wormhole, allows any two systems, anywhere, to send files to each other (via an intermediary), fully encrypted and secured.

This beats doing an scp from system to system, especially if the receiving system is behind a NAT and/or a firewall.

I manage lots of systems, both for myself and as part of my work at Red Hat. Over the years I’ve settled on a good workflow for when I need to send files around, but all of it involved techniques like using http, or scp, and even miredo.

But to me, magic-wormhole is easy enough to set up, and uses WebRTC and encryption, so I think it deserves a much higher profile and wider use.

On the Fedora 24 systems I have, I had to ensure that the following were all set up and installed (assuming you already have gcc installed):

a) dnf install libffi-devel python-devel redhat-rpm-config

b) pip install --upgrade pip

c) pip install magic-wormhole

That’s it.
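For anyone curious what a transfer looks like, here is a sketch of a typical session (the file name is invented, and the code phrase shown is only an example; wormhole generates a fresh one-time code for every transfer):

```shell
# On the sending machine (prints a one-time code phrase for the receiver):
wormhole send ./photo.jpg

# On the receiving machine, anywhere on the Internet,
# typing in the code the sender read out:
wormhole receive 7-crossover-clockwork
```

The short human-pronounceable code is all the two sides need to exchange; the tool handles the encryption and the rendezvous through the relay.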

Now I would want to run a server to provide the intermediary function myself, instead of depending on the goodwill of Brian Warner.

 


Entry level AI
I was listening to the podcast Security Weekly and the topic of using AI for security work came up. This got me thinking about how most people make their way into security and what something like AI might mean for the industry.

In virtually every industry you start out doing some sort of horrible job nobody else wants to do, but you have to start there because it's where you learn the skills you need for more exciting and interesting work. Nobody wants to go over yesterday's security event log, but somebody does it.

Now consider this in the context of AI. AI can and will parse the event logs faster and better than a human ever could. We're terrible at repetitive boring tasks; computers are awesome at them. It might take the intern two hours to parse the log files; it will take the log parser two seconds. And the computer won't start thinking about donuts halfway through. Of course there are plenty of arguments about how today's AI has problems, which is true. It's still probably better than humans, though.

But here is what really got me thinking. As more and more of this work moves to the domain of AI and machines, what happens to the entry-level work? I'm all for replacing humans with robots; without getting into the conversation about what all the humans will do when the robots take over, I'm more interested in entry-level work and where the new talent comes from.

For the foreseeable future, we will need people to do the high-skilled security work, and by definition most of the high-skilled people are a bit on the aged side. Most of us worked our way up from doing something that can be automated away (thank goodness). But where will we get our new batch of geezers from? If there are no entry-level offerings, how can security people make the jump to the next level? I'm sure right now there are a bunch of people standing up screaming "TRAINING", but let's face it, that only gets you part of the way there; you still need to get your hands dirty before you're actually useful. You're not going to trust a brain surgeon who has never been in an operating room but has all the best training.

I don't have any answers or even any suggestions here; it just got me thinking. It's possible automation will follow behind the geezers, which would be a suitable solution. It's possible we'll need to create some token entry-level positions just to raise skill levels.

What do you think? @joshbressers
Let's Encrypt torpedoes cost and maintenance issues for Free RTC

Many people have now heard of the EFF-backed free certificate authority Let's Encrypt. Not only is it free of charge, it has also introduced a fully automated mechanism for certificate renewals, eliminating a tedious chore that has been imposed on busy sysadmins everywhere for many years.

These two benefits - elimination of cost and elimination of annual maintenance effort - imply that server operators can now deploy certificates for far more services than they would have previously.

The TLS chapter of the RTC Quick Start Guide has been updated with details about Let's Encrypt so anybody installing SIP or XMPP can use Let's Encrypt from the outset.

For example, somebody hosting basic Drupal or Wordpress sites for family, friends and small community organizations can now offer them all full HTTPS encryption, WebRTC, SIP and XMPP without having to explain annual renewal fees or worry about losing time in their evenings and weekends renewing certificates manually.

Even people who were willing to pay for a single certificate for their main web site may have balked at the expense and ongoing effort of having certificates for their SMTP mail server, IMAP server, VPN gateway, SIP proxy, XMPP server, WebSocket and TURN servers too. Now they can all have certificates.

Early efforts at SIP were doomed without encryption

In the early days, SIP messages would be transported across the public Internet in UDP datagrams without any encryption. SIP itself wasn't originally designed for NAT, and a variety of home routers were created with "NAT helper" algorithms that would detect and modify SIP packets to try to work through NAT. Sadly, in many cases these attempts to help actually clashed with each other and led to further instability. Meanwhile, many rogue ISPs could easily detect and punish VoIP users by blocking their calls or even cutting their DSL line. Operating SIP over TLS, usually on the HTTPS port (TCP port 443), has been an effective way to quash all of these issues.

While the example of SIP is one of the most extreme, it helps demonstrate the benefits of making encryption universal to ensure stability and cut out the "man-in-the-middle", regardless of whether he is trying to help or hinder the end user.

Is one certificate enough?

Modern SIP, XMPP and WebRTC require additional services: TURN servers and WebSocket servers. If they are all operated on port 443 then it is necessary to use different hostnames for each of them (e.g. turn.example.org and ws.example.org). Each different hostname requires a certificate. Let's Encrypt can provide those additional certificates too, without additional cost or effort.

The future with Let's Encrypt

The initial version of the Let's Encrypt client, certbot, fully automates the workflow for people using popular web servers such as Apache and nginx. The manual or certonly modes can be used for other services but hopefully certbot will evolve to integrate with many other popular applications too.

Currently, Let's Encrypt's certbot tool issues certificates to servers running on TCP port 443 or 80. These are considered privileged ports, whereas any port above 1023, including the default ports used by applications such as SIP (5061), XMPP (5222, 5269) and TURN (5349), is not privileged. As long as certbot maintains this policy, it is generally necessary either to run a web server for the domain associated with each certificate or to run the services themselves on port 443. There are other mechanisms for domain validation, and various other clients support different subsets of them. Running the services themselves on port 443 turns out to be a good idea anyway, as it ensures that RTC services can be reached through HTTP proxy servers that fail to let the HTTP CONNECT method access any other ports.
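As a sketch of how this can look in practice, something like the following obtains a certificate for extra RTC hostnames using certbot's certonly mode with webroot validation (the hostnames and webroot path below are examples, and the command must run on a host reachable on port 80/443 for those domains):

```shell
# Request one certificate covering the TURN and WebSocket hostnames;
# the validation files are served out of the web server's document root.
certbot certonly --webroot -w /var/www/html \
    -d turn.example.org -d ws.example.org
```

The resulting key and chain can then be pointed to from the TURN and WebSocket server configurations, the same way a web server would use them.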

Many configuration tasks are already scripted during the installation of packages on a GNU/Linux distribution (such as Debian or Fedora) or when setting up services using cloud images (for example, in Docker or OpenStack). Due to the heavily standardized nature of Let's Encrypt and the widespread availability of the tools, many of these package installation scripts can be easily adapted to find or create Let's Encrypt certificates on the target system, ensuring every service is running with TLS protection from the minute it goes live.

If you have questions about Let's Encrypt for RTC or want to share your experiences, please come and discuss it on the Free-RTC mailing list.

Repairing Packet Tracer 6.*
Good morning, everyone,

You have probably landed here because Cisco's network simulation software, Packet Tracer, will not start. At most, all you see printed in the terminal is "Starting Cisco Packet Tracer 6.*", and it goes no further.

Don't worry: here is a script that gets Packet Tracer 6.* starting on the major distros, Debian Jessie (stable), openSUSE Leap 42.1 and Fedora 23 (both 32- and 64-bit), as well as Gentoo Linux (multilib) and CentOS 7.

Thanks to this script, you should have far fewer problems with this mess. It also generates a launcher (.desktop) tailored to work correctly in the various desktop environments such as GNOME, KDE, Xfce and so on.

I will soon adapt it for 32- and 64-bit Arch Linux and add support for a Packet Tracer installation directory chosen by the user (it currently uses /opt/pt as the default). I have skipped Ubuntu because its Packet Tracer installer already automates everything for both 32- and 64-bit; the rest of us have to adapt things to make them work as they should.

Enjoy it, and if you find any errors, don't forget to let me know.

GitLab repository.
Direct link to the script.

Cześć, Poland! Back to Europe

Earlier this month, I received some of the most exciting news I have had all year. After much finger-crossing and (hopefully) hard work, I am traveling to Kraków, Poland, for the Fedora Project’s annual Flock conference. Flock is described by the organizers as follows:

Flock, now in its fourth year, is a conference for Fedora contributors to come together, discuss new ideas, work to make those ideas a reality, and continue to promote the core values of the Fedora community: Freedom, Friends, Features, and First.

This year, I am attending as a contributor to the project, giving a talk, and leading a workshop!

Poland: New experience

Last year, I attended Flock 2015 without much of an idea of what to expect. Flock 2015 was less than ten minutes away from my then-future university, the Rochester Institute of Technology. I had been a Fedora user since 2013, but I had never figured out how to start contributing to the project. To take advantage of the opportunity, I made plans to move into school early so I could see what Flock was all about.

Fast forward a full year, and a lot has changed. Now, I spend many hours a week working on the Fedora Project in many places. I help lead the CommOps and Marketing teams. I organize and attend events on the US East Coast as an Ambassador. I’m a Google Summer of Code 2016 student for Fedora. When I walked into the conference center last year as a shy student, I never imagined that many of the people I met would become familiar faces in such a short time.

This year, Flock 2016 in Poland will be a different experience, and I am looking forward to seeing what it will bring. Do you have plans to attend? If so, allow me to share some details for sessions you will want to add to your schedule!

Evaluating our impact in education

At 17:30 UTC+2 on August 2nd, 2016, I will be leading a talk titled “University Outreach – New task or new mindset?”

In early 2015, the Fedora Council proposed a new objective: the University Involvement Initiative. The purpose? Try to increase exposure of Fedora in university settings to gain new users, but also to hopefully gain new contributors. In order to carry this out, is it a new task, or does it need a new mindset? In this talk, we begin looking at the current mindset and marketing thoughts around attracting university students to Fedora. What is working? What isn’t?

As an example, we will look at the presenters’ personal experiences of getting involved with Fedora as students. We will focus on how changing the way we approach outreach might be the best way to begin making an impact on students with Fedora.

If you are someone interested in reaching new audiences of students with Fedora, make sure you work this talk into your agenda.

CommOps Workshop

More recently, I was also chosen to lead the CommOps workshop on August 4th, 2016 at 13:30 UTC+2.

This year, for the first time, CommOps will be hosting its own workshop to tackle existing tasks and project items, offer a place for the community to add their own ideas and wishes for what they would like to see, and plan for the future growth of our sub-project. Flock offers a unique venue for this, as it brings together people from different areas of Fedora in the same rooms. This is a great place for us to take advantage of the combined people power to accomplish tasks that would otherwise be hard.

The workshop is also designed to keep remote contributors in mind where possible, over IRC and possibly other means.

To help organize thoughts and ideas in a public and open way, the workshop is being planned in the open on the wiki. We’re working with other CommOps contributors on shaping how the workshop will run. We hope to have you join us and see what we’re up to in CommOps land!

Thank you Red Hat

Finally, I want to offer my sincere gratitude to Red Hat and the Flock sponsors for covering my travel costs to Flock 2016. As a student, I could not afford this trip on my own. Thanks to the great folks behind Flock, I will be attending and hope to contribute my worth with the above talk and workshop, as well as throughout the entire conference.

Thank you for granting me this opportunity, and I look forward to seeing many other Fedora contributors next month in Poland!


Image courtesy Alexey Topolyanskiy – originally posted to Unsplash as Untitled.

The post Cześć, Poland! Back to Europe appeared first on Justin W. Flory's Blog.

libinput and graphics tablet mode support

In an earlier post, I explained how we added graphics tablet pad support to libinput. Read that article first, otherwise this article here will be quite confusing.

A lot of tablet pads have mode-switching capabilities. Specifically, they have a set of LEDs, and pressing one of the buttons cycles through them. Software is expected to map the ring, strip or buttons to different functionality depending on the mode. A common configuration for a ring or strip is to send scroll events in mode 1 but zoom in/out in mode 2. On the Intuos Pro series tablets, the mode switch button is the one in the center of the ring. On the Cintiq 21UX2 there are two sets of buttons, one on the left and one on the right, each with its own mode toggle button. The Cintiq 24HD is even more special: it has three separate buttons on each side to switch to a mode directly (rather than just cycling through the modes).

The upcoming libinput 1.4 will have mode switching support, though modes themselves have no real effect within libinput; they are merely extra information to be used by the caller. The important terms here are "mode" and "mode group". A mode is a logical set of button, strip and ring functions, as interpreted by the compositor or the client; how they are used is up to them as well. The Wacom control panels for OS X and Windows allow mode assignment only to the strip and rings, while the buttons remain in the same mode at all times. We assign a mode to each button so a caller may provide differing functionality on each button. But that's optional: having an OS X/Windows-style configuration is easy, just ignore the button modes.

A mode group is a physical set of buttons, strips and rings that belong together. On most tablets there is only one mode group but tablets like the Cintiq 21UX2 and the 24HD have two independently controlled mode groups - one left and one right. That's all there is to mode groups, modes are a function of mode groups and can thus be independently handled. Each button, ring or strip belongs to exactly one mode group. And finally, libinput provides information about which button will toggle modes or whether a specific event has toggled the mode. Documentation and a starting point for which functions to look at is available in the libinput documentation.

Mode switching on Wacom tablets is actually software-controlled. The tablet relies on some daemon running to intercept button events and write to the right sysfs files to toggle the LEDs. In the past this was handled by e.g. a callout by gnome-settings-daemon. The first libinput draft implementation took over that functionality so we only have one process to handle the events. But there are a few issues with that approach. First, we need write access to the sysfs file that exposes the LED. Second, running multiple libinput instances would result in conflicts during LED access. Third, the sysfs interface is decidedly nonstandard and quite quirky to handle. And fourth, the most recent device, the Express Key Remote has hardware-controlled LEDs.

So instead we opted for a two-part solution: first, the non-standard sysfs interface will be deprecated in favour of a proper kernel LED interface (/sys/class/leds/...) with the same contents as other LEDs. And second, the kernel will take over mode switching using LED triggers that are set up to cover the most common case: hitting a mode toggle button changes the mode. Benjamin Tissoires is currently working on those patches. Until then, libinput's backend implementation will just pretend that each tablet only has one mode group with a single mode. This allows us to get the rest of the user stack in place and then, once the kernel patches are in a released kernel, switch over to the right backend.

July 10, 2016

How GNOME Software uses libflatpak

It seems people are interested in adding support for flatpaks to other software centers, and I thought it might be useful to explain how I did this in gnome-software. I’m lucky enough to have a plugin architecture that lets all the flatpak code be self-contained in one file, but that’s certainly not a requirement.

Flatpak generates AppStream metadata when you build desktop applications. This means it’s possible to use appstream-glib and a few tricks to just load all the enabled remotes into an existing system store. This makes searching the new applications using the (optionally stemmed) token cache trivial. Once per day gnome-software checks the age of the AppStream cache, and if required downloads a new copy using flatpak_installation_update_appstream_sync(). As if by magic, appstream-glib notices the file modification/creation and updates the internal AsStore with the new applications.
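The once-per-day staleness check is easy to sketch. The following is a standalone Python illustration with a placeholder path and function name, not gnome-software's actual code (which is C):

```python
import os
import time

CACHE_MAX_AGE = 24 * 60 * 60  # one day, in seconds

def appstream_cache_is_stale(path, now=None):
    """True if the AppStream cache file is missing or more than a day old."""
    now = time.time() if now is None else now
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        return True  # no cache yet: treat as stale so it gets downloaded
    return (now - mtime) > CACHE_MAX_AGE
```

When the check comes back stale, gnome-software downloads a fresh copy with flatpak_installation_update_appstream_sync() and appstream-glib picks up the new file automatically.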

When listing the installed applications, a simple call to flatpak_installation_list_installed_refs() returns us the list we need, on which we can easily set other flatpak-specific data like the runtime. This is matched against the AppStream data, which gives us a localized and beautiful application to display in the listview.

At this point we also call flatpak_installation_list_installed_refs_for_update() and then do flatpak_installation_update() with the NO_DEPLOY flag set. This just downloads the data we need, and can be cancelled without anything bad happening. When populating the updates panel I can just call flatpak_installation_list_installed_refs() again to find installed applications that have downloaded updates ready to apply without network access.

For the sources list I’m calling flatpak_installation_list_remotes(), then ignoring any set as disabled or noenumerate. Most remotes have a name and title, and this makes the UI feature complete. When collecting information to show in the UI, like the size, we have the metadata already, but we also add the size of the runtime if it’s not already installed. This is the same idea as flatpak_installation_install(), where we also install any required runtime when installing the main application. There is a slight impedance mismatch between the flatpak many-installed-versions and the AppStream only-one-version model, but it seems to work well enough in the current code. Flatpak splits the deployment into a runtime containing common libraries that can be shared between apps (for instance, GNOME 3.20 or KDE5) and the application itself, so the software center always needs to install the runtime for the application to launch successfully. This is something that is not enforced by the CLI tool. Rather than installing everything for each app, we can also install other so-called extensions. These are typically non-essential, like the various translations and any debug information, but are not strictly limited to those things. libflatpak automatically keeps the extensions up to date when updating, so gnome-software doesn’t have to do anything special at all.

Updating single applications is trivial with flatpak_installation_update() and launching applications is just as easy with flatpak_installation_launch(), although we only support launching the newest installed version of an application at the moment. Reading local bundles works well with flatpak_bundle_ref_new(), although we do have to load the gzipped AppStream metadata and the icon ourselves. Reading a .flatpakrepo file is slightly more work, but the data is in keyfile format and trivial to parse with GKeyFile.
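Since the keyfile format is INI-like, the parsing step is simple. Here is a hypothetical Python equivalent of the GKeyFile approach; the example values are invented, and the key names follow common .flatpakrepo files:

```python
import configparser

FLATPAKREPO_EXAMPLE = """\
[Flatpak Repo]
Title=Example Apps
Url=https://example.org/repo/
GPGKey=bWlsbGVubml1bS1mYWxjb24=
"""

def parse_flatpakrepo(text):
    """Parse a .flatpakrepo keyfile; return its [Flatpak Repo] group as a dict."""
    parser = configparser.ConfigParser()
    parser.optionxform = str  # keyfiles are case-sensitive; don't lowercase keys
    parser.read_string(text)
    return dict(parser["Flatpak Repo"])

repo = parse_flatpakrepo(FLATPAKREPO_EXAMPLE)
```

The Title and Url fields are what a software center would surface in its sources UI, and GPGKey is the base64-encoded key used to verify the remote.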

Overall I’ve found libflatpak surprisingly easy to work with, requiring none of the kludges demanded by the various package-based systems I dealt with while developing PackageKit. Full marks to Alex et al.

How to set up Android Studio on Fedora systems

Android Studio Development on Windows, Mac OS X and Linux

Android Studio is the Official IDE for Android Development on modern personal computers. Android Studio provides the fastest tools for building apps on every type of Android device. World-class code editing, debugging, performance tooling, a flexible build system, and an instant build/deploy system all allow you to focus on building unique and high quality apps. Based on JetBrains IntelliJ IDEA software, Android Studio is designed specifically for Android development. It’s available for Windows, Mac OS X and Linux, and replaced Eclipse as Google’s primary IDE for native Android application development.

Android Studio IDE

More information about Android Studio is available on Wikipedia.

How to configure Android Studio on Fedora systems

Basically, you can download Android Studio from the Official WebSite and start it on your Linux system. For a smooth experience on Fedora, I suggest a few preparatory steps.

  • If you run a 64-bit system, run this in a terminal window:

    sudo dnf install zlib-devel.i686 ncurses-devel.i686 ant

  • Install libs for mksdcard SDK Tool

    sudo dnf install compat-libstdc++-296.i686 compat-libstdc++-33.i686 compat-libstdc++-33.x86_64 glibc.i686 glibc-devel.i686 libstdc++.i686 libX11-devel.i686 libXrender.i686 libXrandr.i686

  • Install Java development tools

    sudo dnf install java-1.8.0-openjdk-devel.x86_64

  • Download the Android Studio package from Official WebSite and, just in case, add the SDK from http://developer.android.com/sdk/index.html

  • Extract the package (the Downloads folder is fine) and move the Android Studio folder:

    mv /home/yourUserName/Downloads/android-studio /opt

  • Make a symbolic link for quick launching

    • Using command line

      sudo ln -s /opt/android-studio/bin/studio.sh /usr/local/bin/asb

    • Using Android Studio

      • Run Android Studio with /opt/android-studio/bin/studio.sh
      • Menu > Tools > Create Command-line Launcher…
        • Name: asb
        • Path: /usr/local/bin
      • Click OK
    • Using Alacarte (sudo dnf install alacarte)

      • New item under Programming
      • Browse to /opt/android-studio/bin/studio.sh
      • Browse icon to /opt/android-studio/bin/studio.png
  • Check Java environment

    echo $JAVA_HOME

    If it isn’t /usr/lib/jvm/default-java, type:

    export JAVA_HOME=/usr/lib/jvm/default-java

  • Run Android Studio and enjoy it

    • Typing asb in terminal
    • Using Android Studio icon in launcher

Some extras with Android Studio and Fedora

For the latest news, configurations, and extras on setting up an Android Studio development environment on Fedora systems, you can check my GitHub. If you need a faster, more powerful emulator for Android, see my post on how to install the Genymotion emulator on Fedora. That’s all for now.

Bye, bye!

Liveness

The term Liveness here refers to the need to ensure that the data used to make an authorization check is valid at the time of the check.

The mistake I made with PKI tokens was in not realizing how important liveness was. It was based on the age-old error of confusing authentication with authorization. Since a Keystone token is used for both, I was misled into thinking that authentication was of primary importance, but in reality the most important thing a token tells you is the information essential to making an authorization decision.

Who you are does not change often.  What you can do changes much more often.  What OpenStack needs in the token protocol is a confirmation that the user is authorized to make this action right now.  PKI tokens, without revocation checks, lost that liveness check.  The revocation check undermined the primary value of PKI.

That is the frustration most people have with certificate revocation lists (CRLs). Since certificates are so long-lived, there is very little “freshness” to the data. A CRL is a way to say “not invalidated yet,” but since a certificate may carry more data than just “who you are,” certificates can often become invalid. Thus, any active system built on X.509 for authorization (not just authentication) is going to have many, many revocations. Keystone tokens fit the same profile. The return to server-validated tokens (UUID or Fernet) restores that freshness check.

However, bearer tokens have a different way of going stale. If I get a token and use it immediately, the server can be fairly confident the token came from me. If I wait, that probability drops. The more I use the same token, and the longer I use it, the greater the probability that someone other than me will get access to it. And that means the probability that it will be misused has also increased.

I’ve long said that what I want is a token that lasts roughly five minutes. That means it is issued, used, and discarded, with a little wiggle room for latency and clock skew across the network. The problem is that a token is often used for a long-running task. If a task takes 3 hours but a token is good for only five minutes, there is no way to perform the task with just that token.

One possible approach to restoring this freshness check is to always have some fresh token on a call, just not necessarily the one the user originally requested. This is the idea behind the Trust API. A Trust is kind of like a long-term token, but one that is only valid when paired with a short-term token for the trustee. But creating a trust every time a user wants to create a new virtual machine is too onerous, too much overhead. What we want, instead, is a rule that says:

When Nova calls Glance on behalf of a user, Nova passes a freshly issued token for itself along with the original user’s token. The original user’s token will be validated based on when it was issued. Authorization requires the combination of a fresh token for the Nova service user and a not-so-fresh-but-with-the-right-roles token for the end user.

This could be done with no changes to the existing token format. Set the token expiration to 12 hours.  The only change would be inside python-keystonemiddleware.  It would have a pair of rules:

  1. If a single token is passed in, it must have been issued within five minutes.  Otherwise, the operation returns a 401.
  2. If a service token is passed in with the user’s token, the service token must have been issued within five minutes. The user’s token is validated normally.
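The pair of rules can be sketched as standalone pseudo-middleware. This is simplified, hypothetical code: real keystonemiddleware tracks much more state, and the token fields here are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

FRESHNESS_WINDOW = timedelta(minutes=5)

def check_tokens(user_token, service_token=None, now=None):
    """Return True if the request passes the freshness rules, False (-> 401).

    Tokens are dicts with an 'issued_at' datetime; expiry (e.g. 12 hours)
    and signature validation are assumed to happen separately, as today.
    """
    now = now or datetime.now(timezone.utc)

    def fresh(token):
        return now - token["issued_at"] <= FRESHNESS_WINDOW

    if service_token is None:
        # Rule 1: a lone user token must itself be freshly issued.
        return fresh(user_token)
    # Rule 2: the service token must be fresh; the user's token is
    # validated normally (signature, expiry, roles) elsewhere.
    return fresh(service_token)
```

Note how an hours-old user token is still usable, but only when accompanied by a service token minted moments ago.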

An additional scope limiting mechanism would further reduce the possibility of abuse.  For example,

  • Glance could limit the service-token scoped operations from Nova to fetching an image and saving a snapshot.
  • Nova might only allow service-scoped tokens from a service like Trove within a 15 minute window.
  • A user might have to ask for an explicit “redelegation” role on a token before handing it off to some untrusted service run off site.

With Horizon, we already have a mechanism that says it has to fetch an unscoped token first, and then use that to fetch a scoped token. Horizon can be smart enough to fetch a scoped token before each bunch of calls to a remote server, cache it for only a minute, and use the unscoped token only in communication with Keystone. The unscoped token, being validated by Keystone, is sufficient for maintaining “liveness” of the rest of the data for a particular workflow.

It’s funny how little change this would require to OpenStack, and how big an impact it would make on security. It is also funny how long it took for this concept to coalesce.

Design prototypes

Last week, my mentor suggested that it would be great if I could code the front end for the designs I had created. I thought it would be a great way to see how my designs would be developed and implemented.

This week, I concentrated on the visual rework as well as the front-end code for issues 98 and 63.

For issue 98, I constructed the accordion in JavaScript and implemented the drop-downs with chosen.js. I had to be extra careful about using JS libraries found online, as I had to make sure they had appropriate licenses; the chosen.js used in this project is under the MIT license. Even though I ran into multiple problems in the style sheet, I managed to implement the first few elements of the visual design. The rework for the visual design and the code implementing it can be seen here.

prototype link

Further, I still have to work on integrating chosen.js correctly, since I want the dropdown to have text input functionality as well. I will be working on this in the coming week.

Another issue I worked on this week involved the release cycle. I implemented this using a responsive rectangle and overlaid all the images on it. So far the output looks like this; some portions are not responsive yet.

prototype link - tested only on firefox so far

My goal for next week is to make the slider stop when the release/freeze date has been reached. I think this could be implemented using the detailed timeline document I referenced from the Fedora release cycle wiki; the slider would act like a timer, implemented in JS. Further, some portions of this release cycle are not yet fully responsive, and I will be addressing that as well.

July 09, 2016

Installing Bumblebee on Fedora 24 with Steam support

Recent notebooks with NVIDIA graphics cards and an i3, i5 or i7 processor use Optimus technology to extend battery life. Unfortunately, the software supporting this technology was only developed for proprietary systems.

The Bumblebee project is a set of tools focused on providing Optimus support on Linux until the kernel drivers handle this scenario.

This article shows how to install and use Bumblebee on 64-bit Fedora 24 Workstation, including with Steam.

To find out whether you need Bumblebee, run the command below:
$ lspci | grep -i vga

If it returns more than one line, and one of them contains the word NVIDIA, you almost certainly meet the requirements for using Bumblebee.
There are two ways to use Bumblebee: with the free nouveau drivers, or together with NVIDIA's proprietary drivers.
In this article, we cover the second option.

First, add the Bumblebee repository with the following command:
# dnf -y --nogpgcheck install http://install.linux.ncsu.edu/pub/yum/itecs/public/bumblebee/fedora24/noarch/bumblebee-release-1.2-1.noarch.rpm

Next, install the package providing the Bumblebee repository that contains NVIDIA's proprietary drivers:
# dnf -y --nogpgcheck install http://install.linux.ncsu.edu/pub/yum/itecs/public/bumblebee-nonfree/fedora24/noarch/bumblebee-nonfree-release-1.2-1.noarch.rpm

Then install the Bumblebee multilib packages (among other things) and the proprietary NVIDIA drivers. The multilib version is ideal if you want to run 32-bit software or games on the secondary graphics card.

Here is the command:
# dnf install bumblebee-nvidia bbswitch-dkms VirtualGL.x86_64 VirtualGL.i686 primus.x86_64 primus.i686 kernel-devel

After installation, here is the syntax for using the NVIDIA card:

We will use primusrun (it performs better than optirun). Setting “vblank_mode=0” improves performance by disabling vertical sync. Example:
# vblank_mode=0 primusrun xpto

Here xpto is the name of the game or application you want rendered by your NVIDIA graphics card.

Run a test to make sure everything is working correctly:
# vblank_mode=0 primusrun glxgears

Steam: If you use Steam as your gaming platform on Fedora 24, you should not run Steam itself via primusrun, only the games. How? Run Steam normally and, inside Steam, select the game, open its properties, and modify its launcher so that primusrun is invoked every time the game's executable is.

To make a game use the NVIDIA GPU, follow these steps:

  1. Select the game you want to run on the NVIDIA card from the Library page of the Steam client, right-click it, and select Properties.
  2. Click the SET LAUNCH OPTIONS… button and type:
    vblank_mode=0 primusrun %command%
  3. Save the changes.

Sources:

Acrelinux - Fedora - Instalando o Bumblebee [NVIDIA Optimus] - https://www.youtube.com/watch?v=sYn46YZmDrE
Fedora Project Wiki - https://fedoraproject.org/wiki/Bumblebee
Steam - https://support.steampowered.com/kb_article.php?ref=6316-GJKC-7437&l=portuguese

Tokens without revocation

PKI tokens in Keystone suffered from many things, most of all the trials caused by the various forms of revocation. I never wanted revocation in the first place. What could we have done differently? It just (I mean moments ago) came to me.

A PKI token is a signed document that says “at this point in time, these things are true,” where “these things” have to do with users’ roles in projects. Revocation means “these things are no longer true.” But long-running tasks need long-running authentication. PKI tokens seem built for that.

What we should distinguish is the difference between kicking off a new job and continuing authorization for an old one. When a user requests something from Nova, the only identity that comes into play is the user’s own identity. Nova needs to confirm this, but, in a PKI token world, there is no need to go and ask Keystone.

In a complex operation like launching a VM, Nova needs to ask Glance to do something. Today, Nova passes on the token it received, and all is well. This makes tokens into true bearer tokens, and they are passed around far too much for my comfort.

Let’s say that, to start, when Nova calls Glance, Nova’s own identity should be confirmed. Tokens are really poor for this; a much better way would be to use X.509. While Glance would need to do a mapping transform, the identity of Nova would not be transferable. Put another way, Nova would not be handing off a bearer token to Glance. Bearer tokens from powerful systems like Nova are a really scary thing.

If we had this combination of user-confirmed data and service identity, we would have a really powerful delegation system. Why could this not be done today, with UUID/Fernet tokens? If we only ever had to deal with a maximum of two hops (Nova to Glance, Nova to Neutron), we could.

Enter Trove, Heat, Sahara, and any other process that does work on behalf of a user. Let’s make it really fun and say that we have the following chain of operations:

Deep-delegation-chain

If any one link in this chain is untrusted, we cannot pass tokens along.
What if, however, each step had a rule that said “I can accept tokens for users from endpoint E” and passed a PKI token along? The user submits a PKI token to Heat. Heat passes this, plus its own identity, on to Sahara, which trusts Heat. And so on down the line.

OK…revocations.  We say here that a PKI token is never revoked.  We make it valid for the length of long running operations…say a day.

But we add an additional rule:  A user can only use a PKI token within 5 minutes of issue.

Service to Service calls can use PKI tokens to say “here is when it was authorized, and it was good then.”

A user holds on to a PKI token for 10 minutes, tries to call Nova, and the token is rejected as “too old.”

This same structure would work with Fernet tokens, assuming a couple things:

  1. We get rid of revocations checks for tokens validated with service tokens.
  2. If a user loses a role, we are OK with having a long term operation depending on that role failing.

I think this general structure would make OpenStack a hell of a lot more scalably secure than it is today.

Huge thanks to Jamie Lennox for proposing a mechanism along these lines.

"I received a free or discounted product in return for an honest review"
My experiences with Amazon reviewing have been somewhat unusual. A review of a smart switch I wrote received enough attention that the vendor pulled the product from Amazon. At the time of writing, I'm ranked as around the 2750th best reviewer on Amazon despite having a total of 18 reviews. But the world of Amazon reviews is even stranger than that, and the past couple of weeks have given me some insight into it.

Amazon's success is fairly phenomenal. It's estimated that there are over 50 million people in the US paying $100 a year to get free shipping on Amazon purchases, and combined with Amazon's surprisingly customer-friendly service there are a lot of people with a very strong preference for choosing Amazon rather than any other retailer. If you're not on Amazon, you're hurting your sales.

And if you're an established brand, this works pretty well. Some people will search for your product directly and buy it, leaving reviews. Well-reviewed products appear higher up in search results, so people searching for an item type rather than a brand will still see your product appear early in the search results, in turn driving sales. Some proportion of those customers will leave reviews, which helps keep your product high up in the results. As long as your products aren't utterly dreadful, you'll probably maintain that position.

But if you're a brand nobody's ever heard of, things are more difficult. People are unlikely to search for your product directly, so you're relying on turning up in the results for more generic terms. But if you're selling a more generic kind of item (say, a Bluetooth smart bulb) then there's probably a number of other brands nobody's ever heard of selling almost identical objects. If there's no reason for anybody to choose your product then you're probably not going to get any reviews and you're not going to move up the search rankings. Even if your product is better than the competition, a small number of sales means a tiny number of reviews. By the time that number's large enough to matter, you're probably onto a new product cycle.

In summary: if nobody's ever heard of you, you need reviews but you're probably not getting any.

The old way of doing this was to send review samples to journalists, but nobody's going to run a comprehensive review of 3000 different USB cables and even if they did almost nobody would read it before making a decision on Amazon. You need Amazon reviews, but you're not getting any. The obvious solution is to send review samples to people who will leave Amazon reviews. This is where things start getting more dubious.

Amazon run a program called Vine which is intended to solve this problem. Send samples to Amazon and they'll distribute them to a subset of trusted reviewers. These reviewers write a review as normal, and Amazon tag the review with a "Vine Voice" badge which indicates to readers that the reviewer received the product for free. But participation in Vine is apparently expensive, and so there's a proliferation of sites like Snagshout or AMZ Review Trader that use a different model. There's no requirement that you be an existing trusted reviewer and the product probably isn't free. You sign up, choose a product, receive a discount code and buy it from Amazon. You then have a couple of weeks to leave a review, and if you fail to do so you'll lose access to the service. This is completely acceptable under Amazon's rules, which state "If you receive a free or discounted product in exchange for your review, you must clearly and conspicuously disclose that fact". So far, so reasonable.

In reality it's worse than that, with several opportunities to game the system. AMZ Review Trader makes it clear to sellers that they can choose reviewers based on past reviews, giving customers an incentive to leave good reviews in order to keep receiving discounted products. Some customers take full advantage of this, leaving a giant number of 5 star reviews for products they clearly haven't tested and then (presumably) reselling them. What's surprising is that this kind of cynicism works both ways. Some sellers provide two listings for the same product, the second being significantly more expensive than the first. They then offer an attractive discount for the more expensive listing in return for a review, taking it down to approximately the same price as the original item. Once the reviews are in, they can remove the first listing and drop the price of the second to the original price point.

The end result is a bunch of reviews that are nominally honest but are tied to perverse incentives. In effect, the overall star rating tells you almost nothing - you still need to actually read the reviews to gain any insight into whether the customer actually used the product. And when you do write an honest review that the seller doesn't like, they may engage in heavy-handed tactics in an attempt to make the review go away.

It's hard to avoid the conclusion that Amazon's review model is broken, but it's not obvious how to fix it. When search ranking is tied to reviews, companies have a strong incentive to do whatever it takes to obtain positive reviews. What we're left with for now is having to laboriously click through a number of products to see whether their rankings come from thoughtful and detailed reviews or are just a mass of 5 star one liners.

Approximating a PDF of Distances With a Gamma Distribution

In a previous post I discussed some unintuitive aspects of the distribution of distances as spatial dimension changes. To help explain this to myself I derived a formula for this distribution, assuming a unit multivariate Gaussian. For distance (aka radius) r, and spatial dimension d, the PDF of distances is:

Figure 1

Recall that the form of this PDF is the generalized gamma distribution, with scale parameter a = sqrt(2), shape parameter p = 2, and free shape parameter d representing the dimensionality.
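Written out explicitly, the generalized gamma density with these parameter values reduces to the following (a reconstruction from the stated parameters, since the rendered equation itself is not in the text):

```latex
% Generalized gamma: f(r) = \frac{(p/a^{d})\, r^{d-1}\, e^{-(r/a)^{p}}}{\Gamma(d/p)}
% With a = \sqrt{2} and p = 2 this is the chi distribution in d dimensions:
f(r \mid d) = \frac{2^{\,1 - d/2}\; r^{\,d-1}\; e^{-r^{2}/2}}{\Gamma(d/2)}
```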

I was interested in fitting parameters to such a distribution, using some distance data from a clustering algorithm. SciPy comes with a predefined method for fitting generalized gamma parameters, however I wished to implement something similar using Apache Commons Math, which does not have native support for fitting a generalized gamma PDF. I even went so far as to start working out some of the math needed to augment the Commons Math Automatic Differentiation libraries with Gamma function differentiation needed to numerically fit my parameters.

Meanwhile, I have been fitting a non generalized gamma distribution to the distance data, as a sort of rough cut, using a fast non-iterative approximation to the parameter optimization. Consistent with my habit of asking the obvious question last, I tried plotting this gamma approximation against distance data, to see how well it compared against the PDF that I derived.

Surprisingly (at least to me), my approximation using the gamma distribution is a very effective fit for spatial dimensionalities >= 2:

Figure 2

As the plot shows, the gamma approximation deviates substantially only in the 1-dimensional case. In fact, the fit appears to get better as dimensionality increases. To address the 1D case, I can easily test the fit of a half-Gaussian as a possible model.
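The post does not name its "fast non-iterative approximation"; a common closed-form choice fits a gamma distribution from log-moments. A minimal sketch under that assumption (the function name and sampling setup here are mine, not from the original code):

```python
# A sketch of a fast, non-iterative gamma fit from log-moments -- the post does
# not name its method, so this closed-form approximation is an assumption.
import math
import random

def fit_gamma(samples):
    """Approximate (shape k, scale theta) for a gamma distribution.

    Uses the closed-form shape estimate k ~ (3 - s + sqrt((s - 3)^2 + 24 s)) / (12 s),
    where s = log(mean) - mean(log), then theta = mean / k.
    """
    n = len(samples)
    mean = sum(samples) / n
    s = math.log(mean) - sum(math.log(x) for x in samples) / n
    k = (3.0 - s + math.sqrt((s - 3.0) ** 2 + 24.0 * s)) / (12.0 * s)
    return k, mean / k

# Distances of unit-Gaussian points in d = 3 dimensions -- a chi distribution,
# i.e. the derived PDF above with d = 3.
random.seed(42)
d = 3
dists = [math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d)))
         for _ in range(10000)]

k, theta = fit_gamma(dists)
print("shape k = %.3f, scale theta = %.3f" % (k, theta))
```

Note that by construction the fitted mean k*theta equals the sample mean; what the plots compare is how well the full gamma shape tracks the chi density away from the mean.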

Recording the screen (RecordMyDesktop) + sound on Fedora
This one is just so I don't forget it later :-)
To have Fedora record the screen and the sound (audio from the PC/notebook), you need to install two pieces of software:
dnf install gtk-recordmydesktop
dnf install pavucontrol
To record, open both at the same time. And voilà!
Bypassing Version Discovery in Keystoneauth1

I’ve been a happy Dreamhost customer for many years. So I was thrilled when I heard that they had upgraded Dreamcompute to Mitaka. Like the good Keystoner that I am, I went to test it out. Of course, I tried to use the V3 API. And it failed.

What?  Dreamhost wouldn’t let me down, would they?

No.  V3 works fine, it is discovery that is misconfigured.

If you do not tell the openstack client (and thus keystoneauth1) which plugin to use, it defaults to the non-version-specific Password plugin, which does version discovery. What this means is that it goes to the auth URL you give it and tries to figure out the right version to use. And it so happens that there is a nasty, poorly documented bit of Keystone that makes the Dreamhost /v3 page look like this:

$ curl $OS_AUTH_URL
{"version": {"status": "stable", "updated": "2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", "type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": [{"href": "https://keystone-admin.dream.io:35357/v3/", "rel": "self"}]}}

See that last link?

Now, like a good service provider, Dreamhost keeps its Keystone administration inside, behind their firewall.

nslookup keystone-admin.dream.io
Server: 75.75.75.75
Address: 75.75.75.75#53

Non-authoritative answer:
Name: keystone-admin.dream.io
Address: 10.64.140.19

[ayoung@ayoung541 dreamhost]$ curl keystone-admin.dream.io

Crickets… it hangs. Same with a request to port 35357. And the Password auth plugin is going to use the URL from the /v3 page, which is

https://keystone-admin.dream.io:35357/v3.

To get around this, Dreamhost will shortly change their Keystone config file. If they have the baseline config shipped with Keystone, they have, in the section:

[DEFAULT]

admin_endpoint = <None>

That value is what discovery uses to build the URL above. Yeah, it is dumb. Instead, they will set it to:

https://keystone.dream.io/

And discovery will work.

But I am impatient, and I want to test it now. The workaround is to bypass discovery and specify the V3 version of the Keystoneauth1 Password plugin. The version-specific plugin uses the AUTH_URL as provided to figure out where to get tokens. With the line:

export OS_AUTH_TYPE=v3password
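The v3 plugin also needs the rest of the usual OS_* variables. A hedged sketch of a complete environment (the credential values below are placeholders, not Dreamhost's actual values):

```shell
# Sketch of a full v3password environment; credential values are placeholders.
# OS_AUTH_URL must point directly at the /v3 endpoint, since the
# version-specific plugin skips discovery entirely.
export OS_AUTH_TYPE=v3password
export OS_AUTH_URL=https://keystone.dream.io/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_PROJECT_DOMAIN_NAME=Default
```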

And now…

$ openstack server show ipa.younglogic.net   
+--------------------------------------+---------------------------------------------------------+
| Field                                | Value                                                   |
+--------------------------------------+---------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                  |
| OS-EXT-AZ:availability_zone          | iad-1                                                   |
| OS-EXT-STS:power_state               | 1                                                       |
| OS-EXT-STS:task_state                | None                                                    |
| OS-EXT-STS:vm_state                  | active                                                  |
| OS-SRV-USG:launched_at               | 2016-06-17T03:28:48.000000                              |
| OS-SRV-USG:terminated_at             | None                                                    |
| accessIPv4                           |                                                         |
| accessIPv6                           |                                                         |
| addresses                            | private-network=2607:f298:6050:499d:f816:3eff:fe6a:afdb, 
                                                         10.10.10.75, 173.236.248.45             |
| config_drive                         |                                                         |
| created                              | 2016-06-17T03:27:09Z                                    |
| flavor                               | warpspeed (400)                                         |
| hostId                               | 4a7c64b912cfeda73c2c56ac52e8ffd124aac29ec54e1e4902d54bd4|
| id                                   | f0f46fd3-fa59-4a5b-835d-a638f6276566                    |
| image                                | CentOS-7 (c1e8c5b5-bea6-45e9-8202-b8e769b661a4)         |
| key_name                             | ayoung-pubkey                                           |
| name                                 | ipa.younglogic.net                                      |
| os-extended-volumes:volumes_attached | []                                                      |
| progress                             | 0                                                       |
| project_id                           | 9c7e4956ea124220a87094a0a665ec82                        |
| properties                           |                                                         |
| security_groups                      | [{u'name': u'ayoung-all-open'}]                         |
| status                               | ACTIVE                                                  |
| updated                              | 2016-06-17T03:28:24Z                                    |
| user_id                              | b6fd4d08f2c54d5da1bb0309f96245bc                        |
+--------------------------------------+---------------------------------------------------------+

And how cool is that: they are using IPv6 for their private network.

If you want to generate your own V3 config file from the file they ship, use this.

LAMP server on Fedora 24
Something I love about Fedora is that setting up a LAMP server is a fairly simple task. Below I describe how to do it. We install the web server (Apache httpd) the easiest way (all of these commands as root): dnf groupinstall "Web Server" **if it shows a version error (workstation, nonproduct), use: dnf groupinstall "Web Server" --skip-broken. Then the server

July 08, 2016

Installing FreeIPA in as few lines as possible

I had this in another post, but I think it is worth its own.

sudo hostnamectl set-hostname --static undercloud.ayoung-dell-t1700.test
export address=`ip -4 addr  show eth0 primary | awk '/inet/ {sub ("/24" ,"" , $2) ; print $2}'`
echo $address `hostname` | sudo tee -a /etc/hosts
sudo yum -y install ipa-server-dns
export P=FreIPA4All
sudo ipa-server-install -U -r `hostname -d|tr "[a-z]" "[A-Z]"` -p $P -a $P --setup-dns `awk '/^name/ {print "--forwarder",$2}' /etc/resolv.conf`

Just make sure you have enough entropy.
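The kernel exposes how much entropy is currently available; ipa-server-install's certificate and key generation can stall if this runs low. A quick check:

```shell
# Available entropy in bits; values in the low hundreds or less may make
# key generation during the IPA install very slow.
cat /proc/sys/kernel/random/entropy_avail
```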

Chromium will soon be an official part of Fedora

Good things come to those who wait: the review request for Chromium finally received the "fedora-review+" flag today, marking a successfully completed review, which means essentially nothing now stands in the way of the Chromium packages entering the Fedora repositories.

In the future it should therefore no longer be necessary to install Chromium from third-party sources such as COPR or external repositories, where you essentially always have to trust the integrity of the provider.

The published news is compiled to the best of our knowledge and belief. No guarantee is given for completeness and/or accuracy.
Merging FreeIPA and Tripleo Undercloud Apache installs

My Experiment yesterday left me with a broken IPA install. I aim to fix that.

To get to the start state:

From my laptop, kick off a Tripleo Quickstart, stopping prior to undercloud deployment:

./quickstart.sh --teardown all -t  untagged,provision,environment,undercloud-scripts  ayoung-dell-t1700.test

SSH in to the machine …

ssh -F /home/ayoung/.quickstart/ssh.config.ansible undercloud

and set up FreeIPA:

$ cat install-ipa.sh

#!/usr/bin/bash

sudo hostnamectl set-hostname --static undercloud.ayoung-dell-t1700.test
export address=`ip -4 addr  show eth0 primary | awk '/inet/ {sub ("/24" ,"" , $2) ; print $2}'`
echo $address `hostname` | sudo tee -a /etc/hosts
sudo yum -y install ipa-server-dns
export P=FreIPA4All
sudo ipa-server-install -U -r `hostname -d|tr "[a-z]" "[A-Z]"` -p $P -a $P --setup-dns `awk '/^name/ {print "--forwarder",$2}' /etc/resolv.conf`

Backup the HTTPD config directory:

 sudo cp -a /etc/httpd/ /root

Now continue the undercloud install:

./undercloud-install.sh 

Once that is done, the undercloud passes a sanity check. Doing a diff between the two directories shows a lot of differences.

sudo diff -r /root/httpd  /etc/httpd/

All of the files in /etc/httpd/conf.d that were placed by the IPA install are gone, as are the following module files in /root/httpd/conf.modules.d:

Only in /root/httpd/conf.modules.d: 00-base.conf
Only in /root/httpd/conf.modules.d: 00-dav.conf
Only in /root/httpd/conf.modules.d: 00-lua.conf
Only in /root/httpd/conf.modules.d: 00-mpm.conf
Only in /root/httpd/conf.modules.d: 00-proxy.conf
Only in /root/httpd/conf.modules.d: 00-systemd.conf
Only in /root/httpd/conf.modules.d: 01-cgi.conf
Only in /root/httpd/conf.modules.d: 10-auth_gssapi.conf
Only in /root/httpd/conf.modules.d: 10-nss.conf
Only in /root/httpd/conf.modules.d: 10-wsgi.conf

To start, I am going to back up the existing HTTPD directory:

 sudo cp -a /etc/httpd/ /home/stack/

The rest of this is easier to do as root, as I want some globbing. First, I’ll copy over the module config files:

 sudo su
 cp /root/httpd/conf.modules.d/* /etc/httpd/conf.modules.d/
 systemctl restart httpd.service

Test Keystone

 . ./stackrc 
 openstack token issue

Got a token… good to go. OK, let’s try the conf.d files.

sudo cp /root/httpd/conf.d/* /etc/httpd/conf.d/
sudo systemctl restart httpd.service

Then as a non admin user

$ kinit admin
Password for [email protected]: 
[stack@undercloud ~]$ ipa user-find
--------------
1 user matched
--------------
  User login: admin
  Last name: Administrator
  Home directory: /home/admin
  Login shell: /bin/bash
  UID: 776400000
  GID: 776400000
  Account disabled: False
  Password: True
  Kerberos keys available: True
----------------------------
Number of entries returned 1
----------------------------

This is a fragile deployment, as updating either FreeIPA or the Undercloud has the potential to break one or the other…or both. But it is a start.