Fedora People

Outreachy with Fedora, Fall 2016

Posted by Fedora Community Blog on October 07, 2016 08:15 AM

What is Outreachy?

GNOME Outreachy is a global program that offers stipends to people from groups historically underrepresented in tech to contribute to several participating FOSS projects. Inspired by Google Summer of Code, Outreachy offers participants hands-on internships for contributing to open source projects.

In 2016, the Outreachy internship dates are from December 6, 2016 to March 6, 2017. Participants work remotely from home while getting guidance from an assigned mentor and collaborating within their project’s community.

Why open source and Fedora?

Free and Open Source Software (FOSS) is software that gives the user the freedom to use, share, study, and improve it. FOSS contributors believe that this is the best way to develop software because it benefits society, creates a fun collaborative community around a project, and allows anyone to make creative changes that reach many people.

Fedora is participating in Outreachy 2016 with the goal of welcoming underrepresented minorities to contribute to the project. Fedora mentors Outreachy interns and helps them get hands-on experience with developing for an open source project.

Schedule

The schedule for Outreachy 2016 will be as follows:

  • October 17:  Application deadline
  • November 8: Selection decisions made
  • December 6 – March 6: Working period

Projects

The following project selections are available under Fedora for Outreachy Fall 2016. For more detailed information on Fedora’s positions, please refer to the Fedora wiki page.

Cockpit UX: Firewall

Cockpit is an interactive server admin interface that helps work with storage, SELinux, networking, containers and lots of other things. It is a flagship feature of Fedora Server edition (although it’s available for other distributions as well). One important part it doesn’t cover: firewalls! This internship will focus on the design of what an interface for firewall management should look like. This will involve developing user stories for firewall management (different people interact with firewalls for various reasons), creating mock-ups for how an interface would look and feel, as well as speccing out the designs and working with developers to have them implemented. This is an iterative process; an example is the SELinux Troubleshooting module of Cockpit.

Cockpit Dev: System journal

Cockpit is an interactive server admin interface that helps work with storage, SELinux, networking, containers and lots of other things. It is a flagship feature of Fedora Server edition (although it’s available for other distributions as well). One essential part of Cockpit is long overdue for an overhaul: the system journal (logs). Since its start, Cockpit has steadily refined its UX patterns. The current journal code works with jQuery and is, frankly, a bit fiddly. Convenient features like filtering the view don’t exist (or are very limited). A new look has been designed and the task scoped. For a rewrite in React we have standard components (e.g. for list views) that can be used; others may need to be imported or created. This internship will focus on the development aspect of implementing the new journal look and adapting its integration tests in close cooperation with our designer and the other developers.

Mentors

  • Dominik Perpeet

How do I join?

The application deadline for Outreachy 2016 is October 17, and the internship dates are December 6 to March 6. The stipend for the program is $5,500 (USD). Unlike Google Summer of Code, participants do not need to be students, and non-coding projects are available. In addition to coding, projects include tasks such as graphic design, user experience design, documentation, bug triage and community engagement.

outreachy.org

To apply for either program, you need to connect with a participating organization early, select a project you want to work on, make a few relevant contributions with the help of a mentor, and create a project plan.

Please consider applying for Outreachy, urge someone else to apply, or help spread the word by forwarding this message to any interested university and community groups.

Interested in joining us? The application for Outreachy has the following steps:

  • Introduction
  • Choose a Project
  • Make a Small Contribution
  • Submit an Application
  • Continue working through the schedule

Detailed information is available on this page.


GNOME Outreachy 2016 flyer and Fedora

The post Outreachy with Fedora, Fall 2016 appeared first on Fedora Community Blog.

Deploy containers with Atomic Host, Ansible, and Cockpit

Posted by Fedora Magazine on October 07, 2016 08:00 AM

In the course of my job at Red Hat, I work with Docker containers on Fedora Atomic host every day. The Atomic Host from Project Atomic is a lightweight container OS that can run Linux containers in Docker format. It’s been modified for efficiency, making it optimal to use as a Docker run-time system for cloud environments.

Fortunately I’ve found a great way to manage containers running on the host: Cockpit. Cockpit is a remote manager for GNU/Linux servers with a nice Web UI. It lets me manage servers and containers running on the host. You can read more about Cockpit in this overview article previously published here. However, I also wanted to automate running containers on the host, which I’ve done using Ansible.

Note that we cannot use the dnf command on the Atomic Host. The host is designed not as a general-purpose OS, but to be better suited to containers and similar purposes. But it’s still very easy to set up applications and services on the Atomic Host. This post shows you how to automate and simplify this process.

Setting up the components

First, we need to run the Cockpit container on the Atomic Host. Clone the sources from https://github.com/trishnaguha/fedora-cloud-ansible onto your machine.

$ git clone https://github.com/trishnaguha/fedora-cloud-ansible.git

Now change your directory to cockpit and edit its inventory file as shown below:

$ cd fedora-cloud-ansible
$ cd cockpit
$ vim inventory

Make the following changes:

  1. Replace IP_ADDRESS_OF_HOST with the IP address of your Atomic host.
  2. Replace PRIVATE_KEY_FILE in the line ansible_ssh_private_key_file='PRIVATE_KEY_FILE' with your SSH private key file.

Now save and exit the inventory file.
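As an illustration only (the group name, IP address, and key path below are placeholder assumptions, not values taken from the repository), the edited inventory might look something like:

```
[atomic]
192.168.1.4 ansible_ssh_private_key_file='~/.ssh/id_rsa'
```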

Next, edit the ansible configuration file:

$ vim ansible.cfg

Replace User in the line remote_user=User with your remote user on your Atomic host. Then save and exit the file.

Putting it all together

Now it’s time to run the playbook. This command starts running the Cockpit container on the Atomic host:

$ ansible-playbook cockpit.yml

Cockpit is now running on the Atomic host. Use your web browser to visit the public IP of your instance on port 9090. This is the default port of Cockpit. For instance, if the IP address of the instance is 192.168.1.4, browse to 192.168.1.4:9090. You’ll now see the web interface of Cockpit on the web browser:

Cockpit login screen

Managing your containers

Log in with the credentials of your Atomic host, or as root. Then visit the Containers section of the Cockpit manager to see the containers running on your Atomic host. In the example below, you’ll see I also set up others like httpd and redis:

Cockpit panel for managing containers

Notice the interface lets you start and stop containers directly in the Cockpit manager using the Run and Stop buttons. You can also manage your Atomic host using the Cockpit manager. Go to Tools -> Terminals. There you can use the terminal of the Atomic host:

Cockpit terminal panel

If you plan to deploy your containerized application on Atomic host, you can simply write a playbook for it. Then you can deploy using the ansible-playbook command and manage the containers using Cockpit.
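As a rough sketch of what such a playbook could look like (this is not a playbook from the repository; the host group, container name, image, and docker_container module options are illustrative assumptions):

```yaml
---
- hosts: atomic
  tasks:
    # Start an httpd container on the Atomic Host (illustrative example)
    - name: Run an httpd container
      docker_container:
        name: web
        image: docker.io/library/httpd
        ports:
          - "80:80"
        state: started
```

Once running, the container shows up in Cockpit’s Containers panel alongside the others.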

Running ansible-playbook to deploy multiple containers

Feel free to fork or add playbooks for containers in the repository https://github.com/trishnaguha/fedora-cloud-ansible.

How to use Google Calendar with Thunderbird

Posted by Luca Ciavatta on October 07, 2016 08:00 AM

Mozilla’s Thunderbird can also handle your calendars

The GNOME environment has a lot of built-in widgets, and one of the most useful is GNOME Calendar. It shows you everything about your calendars, but a more powerful way to handle them is to delegate the job to Thunderbird.

Mozilla’s Thunderbird is an incredibly flexible email client and, with the addition of several add-ons, it can do a lot more. I use Thunderbird for email and newsgroups, and I also use it for calendar appointments through an add-on called Lightning Calendar. Lightning is a wonderful standard add-on, but on its own it cannot sync with phones and the web.

Mozilla Thunderbird - Free download here

Lightning Calendar organizes your schedule and life’s important events in a calendar that’s fully integrated with your Thunderbird or SeaMonkey email. Manage multiple calendars, create your daily to-do list, invite friends to events, and subscribe to public calendars. So, Lightning Calendar is amazing, but it falls short at handling online calendars, especially Google Calendar.

Lightning Calendar - Free download here

Google Calendar on Linux

Google Calendar on Thunderbird and Gnome

The best solution is to find a way to sync calendars across Thunderbird, Lightning and, in my case, Google. I say Google, but the same is true for every other web calendar service. The solution is simple and ready at hand: it’s called Provider for Google Calendar, and you simply install it through the add-on system.

Provider for Google Calendar - Free download here

Once Thunderbird has restarted, go to the calendar view and create a New Calendar by right-clicking the calendars tab on the left side. At the prompt, select On the Network and click Next. In the box, select Google Calendar and type your Gmail address. Click Next to finish the setup. You may be prompted to log in with your Google account; enter your password and, I hope, use 2-step authentication.

That’s it. Your Google Calendar will automatically synchronise with Thunderbird and with Gnome Calendar. Finally, you can create and edit all the events in the Thunderbird calendar view.

That’s all. Enjoy it!

PHPUnit 5.6

Posted by Remi Collet on October 07, 2016 06:09 AM

RPMs of PHPUnit version 5.6 are available in the remi repository for Fedora ≥ 22 and for Enterprise Linux (CentOS, RHEL...).

Documentation:

This new major version requires PHP ≥ 5.6 (PHPUnit is available in remi, as PHP 5.4 and 5.5 have reached their EOL).

Installation, Fedora:

dnf --enablerepo=remi install phpunit

Installation, Enterprise Linux:

yum --enablerepo=remi,remi-php56 install phpunit

Notice: this tool is an essential component of PHP QA in Fedora. This version is also available in the official Fedora rawhide repository (and so is used by Koschei). I plan an update in Fedora 24 and 25 soon.

#RedhatDID: Retrospective and a look ahead to future events

Posted by Corey ' Linuxmodder' Sheldon on October 07, 2016 03:15 AM

Oct 6, 2016: the day several Red Hat trainers and industry folks met to talk about best practices and give feedback on the vision, mission, and pace of progression of Red Hat Enterprise Linux (RHEL) and its upstream and downstream projects and products. Among the most popular sessions was the one by Robin Price and Martin Priesler on OpenSCAP, a standing-room-only session that drew nearly a third of all attendees. Rita Carroll and others set up an interest list for those who would like to attend another OpenSCAP workshop (mainly centered on a hands-on event, though other venues seemed open for debate). If you are interested, whether or not you were in attendance like me, please email Rita at [email protected] with a simple subject line referencing "OpenSCAP Workshop (Tysons Area)".

All slide decks will be up on the RedHatDID site used for registration within the coming week or two (some presenters were not Red Hat, after all).

The above link has all the info about the 4 tracks presented and their topics. If you would like more info, or a company visit on any topic shown (or perhaps something more topical to your organization), feel free to contact Rita or another event coordinator to schedule one.

The next event will be on Nov 2, 2016 at the Ritz-Carlton, Pentagon City, VA, and is FREE for government folks when registering; for the rest of us industry folks it is still only $195 for an 8-hour symposium with some of the most authoritative folks in the industry.


Filed under: Community, Conventions / conferences, Developers Unite, Fedora, PSAs, Redhat Tagged: #opensource, #redhatDID, Container Security, Defense In Defense, FreeIPA, OpenSCAP, Redhat, Security, Selinux, Sysadmin

Securing the Cyrus SASL Sample Server and Client with Kerberos

Posted by Adam Young on October 07, 2016 02:34 AM

Since running the Cyrus SASL sample server and client was not too bad, I figured I would see what happened when I tried to secure it using Kerberos.

Mechanisms

I’m going to run this on a system that has been enrolled as a FreeIPA client, so I start with a known good Kerberos setup.

To see the list of mechanisms available, run

sasl2-shared-mechlist 

I have the following available.

Available mechanisms: GSS-SPNEGO,GSSAPI,DIGEST-MD5,CRAM-MD5,ANONYMOUS
Library supports: ANONYMOUS,CRAM-MD5,EXTERNAL,DIGEST-MD5,GSSAPI,GSS-SPNEGO

For Kerberos, I want to use GSSAPI.

Let's do this the hard way, by trial and error. First, run the server, telling it to use the GSSAPI mechanism:

/usr/bin/sasl2-sample-server -p 1789 -h localhost -s hello  -m GSSAPI

Then run the client in another terminal:

sasl2-sample-client -s hello -p 1789  -m GSSAPI localhost

Which includes the following in the output:

starting SASL negotiation: generic failure
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (No Kerberos credentials available)
closing connection

Kerberos

I need a Kerberos TGT in order to get a service ticket. Use kinit:

$ kinit admin
Password for [email protected]: 

This time the error message is:

starting SASL negotiation: generic failure
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server rcmd/[email protected] not found in Kerberos database)

I notice two things, here:

  1. The service needs to be in the Kerberos server's directory.
  2. The service name should match the hostname.

 

If I rerun the command using the FQDN of the server, I can see the service name as expected:

 

$ sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
receiving capability list... ...
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Server hello/[email protected] not found in Kerberos database)
closing connection

 

So I tried to create the service on the IPA server:

ipa service-add
Principal: hello/[email protected]
ipa: ERROR: Host does not have corresponding DNS A/AAAA record
[stack@overcloud ~]$ ipa service-find

A strange error that I don't understand, as the host does have an A record.

Work around it with --force:

ipa service-add  --force  hello/[email protected]

Success:

------------------------------------------------------------------------------
Added service "hello/[email protected]"
------------------------------------------------------------------------------
  Principal: hello/[email protected]
  Managed by: undercloud.ayoung-dell-t1700.test

OK, let's try running this again.

 sasl2-sample-client -s hello -p 1789 -m GSSAPI 
...

SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (KDC has no support for encryption type)

Keytabs

OK, I’m going to guess that this is because my remote service can’t deal with the Kerberos service tickets it is getting. Since the service tickets are for the principal: hello/[email protected] it needs to be able to decrypt requests using a key meant for this principal.

Fetch a keytab for that principal, and put it in a place where the GSSAPI libraries can access it automatically. This place is:

/var/kerberos/krb5/user/{uid}

Where {uid} is the numeric UID of a user. In this case, the user's name is stack, and I can find the numeric UID value using getent.

KRB5_KTNAME=/var/kerberos/krb5/user/1000/client.keytab

ipa-getkeytab -p hello/[email protected] -k client.keytab  -s identity.ayoung-dell-t1700.test
Keytab successfully retrieved and stored in: client.keytab
$  getent passwd stack
stack:x:1000:1000::/home/stack:/bin/bash
$ sudo mkdir /var/kerberos/krb5/user/1000
$ sudo chown stack:stack /var/kerberos/krb5/user/1000
$ mv client.keytab /var/kerberos/krb5/user/1000
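To make the {uid} path rule above concrete, here is a small shell sketch that builds the per-user keytab location for the current user (it uses id -u rather than a hard-coded name; purely illustrative):

```shell
#!/bin/sh
# The GSSAPI libraries look for a per-user client keytab at:
#   /var/kerberos/krb5/user/{uid}/client.keytab
# where {uid} is the user's numeric UID.
uid=$(id -u)   # same numeric value that getent passwd shows in the third field
keytab="/var/kerberos/krb5/user/${uid}/client.keytab"
echo "$keytab"
```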

Restart the server process, try again, and the log is interesting. Here is the full client side trace.

$ sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
receiving capability list... recv: {6}
GSSAPI
GSSAPI
please enter an authorization id: admin
using mechanism GSSAPI
send: {6}
GSSAPI
send: {1}
Y
send: {655}
`[82][2][8B][6][9]*[86]H[86][F7][12][1][2][2][1][0]n[82][2]z0[82][2]v[A0][3][2][1][5][A1][3][2][1][E][A2][7][3][5][0] [0][0][0][A3][82][1][82]a[82][1]~0[82][1]z[A0][3][2][1][5][A1][18][1B][16]AYOUNG-DELL-T1700.TEST[A2]503[A0][3][2][1][3][A1],0*[1B][5]hello[1B]!undercloud.ayoung-dell-t1700.test[A3][82][1] 0[82][1][1C][A0][3][2][1][12][A1][3][2][1][1][A2][82][1][E][4][82][1][A]T[DD][F8]B[F4][B4]5[D]`[A3]![EE][19]-NN[8E][F5][B7]{O,#[91][A4]}[86]k[D5][EE]vL[E4]&[6][3][A][1C][91][A5][A7][88]j[D1][A3][82][EC][A][D6][CB][F3]9[C][13]#[94][86]d+[B8]V[B7]C^[C6][A8][16][D1]r[E4][0][B9][2][2]&2[E5]Y~[C1]\([BA]x}[17][BC][D][FC][D5][CA][CA]h[E4][A1][81].[15][17]?[CA][A][8B]}[1C]l[F0][D9][E8][96]3<+[84][E7]q.[8E][D5][6][1C]p[E6][6]v[B0][84]5[9][B7]w[D6]3[B8][E3][5]T[BF][92][AA][D5][B3][[83]X[C0]:[BA]V[E5]{>[A5]T[F6]j[CB]p[BF]][EF][E1][91][ED][C][F3]Y[4]x[8E][C2]H[E7][14]#9[EE]5[B3]=[FA][80][DD][93][EF]3[0]q~22[6]I<[EB][F9]V[D1][9D][A8][A6]:[CE]u[AE]-l[D3]"[D7][FE]iB[84][E0]]B[E][C8]U[E][FD][D2]=[F2][97][88][D3][DA]j[B4][FA][16][D1]^CE2?[9F][89]^A[E9][AF][1A]5[99][CE][7][AF]M[1A][A][CB]^[E1][BA]f[7]-n<[F8]8![A4][81][DA]0[81][D7][A0][3][2][1][12][A2][81][CF][4][81][CC][91][F0][A]D[91][F6][FA][F4][B9][13][DF]d|[F4]Y[DF][9E]M[A2]f[11][15]x[C5]-|Qt[F4]nL>@[F4][18][FF],[F6][B5]F6[EC]+[C3]V[F1][81][97][E2][1D]i[4]wD&[9A]V[CE][A1][16][D7]4[E0]C[B]O[D1]v[DD][E9][84]lW[DA]%[F6]v[93]<m"SAfiF[8E][[95]"[CC][D2]4[FA]_[FB]i[E7][D4]M[AE][5][82][FF][D7][0][8C]6[8D][B0]3[F8][E3][B4]P[9C][9E][A2]`[7]U[F7][1D]zub[E0]([A9]P>[AE]f[1A][B1][80][A0]}s[EA][D1]Zk[FF]n_S[9E]rK[E5]n [85]#[DB][FF][B3][E2][19];[F5][E2][8A]>2[E5][A4][81][E8]z[9D][E3][BC][C8][87][F]:[81]7[C9]ix[1E]5[15])[8D][9D][C7][DB][13][98][97][C7]C[6]q[D2][C1][ED][B3]:[E0]
waiting for server reply...
authentication failed
closing connection

On the server side, it looks similar, but ends like this:

starting SASL negotiation: generic failureclosing connection

It is not a GSSAPI error this time. To dig deeper, I’m going to look at the source code on the server side.

Debugging

I’ll shortcut a few steps. Install both gdb and the debugInfo for the sample code:

sudo yum install gdb
sudo debuginfo-install cyrus-sasl-devel-2.1.26-20.el7_2.x86_64

Note that the version might change for the debuginfo.

The source code is included with the debuginfo rpm:

$ rpmquery  --list cyrus-sasl-debuginfo-2.1.26-20.el7_2.x86_64 | grep server.c
/usr/src/debug/cyrus-sasl-2.1.26/lib/server.c
/usr/src/debug/cyrus-sasl-2.1.26/sample/server.c

Looking at the server code at line 267 I see:

if (r != SASL_OK && r != SASL_CONTINUE) {
    saslerr(r, "starting SASL negotiation");
    fputc('N', out); /* send NO to client */
    fflush(out);
    return -1;
}

Let’s put a breakpoint at line 255 above it and see what is happening. Here is the session for setting up the breakpoint:

$  gdb /usr/bin/sasl2-sample-server
...
(gdb) break 255
Breakpoint 1 at 0x2557: file server.c, line 255.
(gdb) run  -h undercloud.ayoung-dell-t1700.test -p 1789 -m GSSAPI

Running the client code gets as far as the prompt please enter an authorization id:, where I entered admiyo.

This is suspect. We’ll come back to it in a moment.

Back on the server, now, we see the breakpoint has been hit.

Breakpoint 1, mysasl_negotiate (in=0x55555575c150, out=0x55555575c390, conn=0x55555575a6e0)
    at server.c:255
255	    if(buf[0] == 'Y') {
Missing separate debuginfos, use: debuginfo-install keyutils-libs-1.5.8-3.el7.x86_64 libdb-5.3.21-19.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 nss-softokn-freebl-3.16.2.3-14.2.el7_2.x86_64 openssl-libs-1.0.1e-51.el7_2.7.x86_64 pcre-8.32-15.el7_2.1.x86_64 xz-libs-5.1.2-12alpha.el7.x86_64 zlib-1.2.7-15.el7.x86_64

We might need some other RPMS if we want to step deeper through the code, but for now, let’s keep on here.

(gdb) print buf
$1 = "Y", '\000' ...
(gdb) n
257	        len = recv_string(in, buf, sizeof(buf));
(gdb) n
recv: {655}
`[82][2][8B][6][9]*[86]H[86][F7][12][1][2][2][1][0]n[82][2]z0[82][2]v[A0][3][2][1][5][A1][3][2][1][E][A2][7][3][5][0] [0][0][0][A3][82][1][82]a[82][1]~0[82][1]z[A0][3][2][1][5][A1][18][1B][16]AYOUNG-DELL-T1700.TEST[A2]503[A0][3][2][1][3][A1],0*[1B][5]hello[1B]!undercloud.ayoung-dell-t1700.test[A3][82][1] 0[82][1][1C][A0][3][2][1][12][A1][3][2][1][1][A2][82][1][E][4][82][1][A]T[DD][F8]B[F4][B4]5[D]`[A3]![EE][19]-NN[8E][F5][B7]{O,#[91][A4]}[86]k[D5][EE]vL[E4]&[6][3][A][1C][91][A5][A7][88]j[D1][A3][82][EC][A][D6][CB][F3]9[C][13]#[94][86]d+[B8]V[B7]C^[C6][A8][16][D1]r[E4][0][B9][2][2]&2[E5]Y~[C1]\([BA]x}[17][BC][D][FC][D5][CA][CA]h[E4][A1][81].[15][17]?[CA][A][8B]}[1C]l[F0][D9][E8][96]3<+[84][E7]q.[8E][D5][6][1C]p[E6][6]v[B0][84]5[9][B7]w[D6]3[B8][E3][5]T[BF][92][AA][D5][B3][[83]X[C0]:[BA]V[E5]{>[A5]T[F6]j[CB]p[BF]][EF][E1][91][ED][C][F3]Y[4]x[8E][C2]H[E7][14]#9[EE]5[B3]=[FA][80][DD][93][EF]3[0]q~22[6]I<[EB][F9]V[D1][9D][A8][A6]:[CE]u[AE]-l[D3]"[D7][FE]iB[84][E0]]B[E][C8]U[E][FD][D2]=[F2][97][88][D3][DA]j[B4][FA][16][D1]^CE2?[9F][89]^A[E9][AF][1A]5[99][CE][7][AF]M[1A][A][CB]^[E1][BA]f[7]-n<[F8]8![A4][81][DA]0[81][D7][A0][3][2][1][12][A2][81][CF][4][81][CC]hgdf j[CF][AE][7F]:![1C]D[F8]3^w[B7];"[3][D8]3"[8]i[9]J[D3]R[F]A[E7]![BE]0<[8][D3]'j`[B7]J[16][A9][F3][E6]=[E5]J[FE].-[A1]t[[2]W[8D]7[F3][8][EC][92][BB][A3]o5h[C1]A[CC][A2][F1][99][AA][93]2{[BA]Mx0[9D][9][CC]![A]Y[12][D8][2][95][17]ml[B4][1A][94]y[1A][BC][D2]I[8F]7Vg2[8E]6[13]:Lx[E6][1][D3][3][7]r?[12][84]3[B1][B5][AA]E)[EA][87][A][9F]Nk[D1]I[FD]{[B8]9#-[D][8]2[CC]C1[A8]Lfl[B0][E8][82][13][F9]t[1A][F6]^[8D] O13[12]L[E7][C0]k[99][E1]J[1F][FE]#[14]u[B][B2][8F][DB][E6]73*[FA][ED][11][F7][9E][B0][DC][D9][19][AB][97][D7][8B][BB]
260	        r = sasl_server_start(conn, chosenmech, buf, len,
(gdb) print len
$2 = 1
(gdb) n
257	        len = recv_string(in, buf, sizeof(buf));
(gdb) 
260	        r = sasl_server_start(conn, chosenmech, buf, len,
(gdb) 
267	    if (r != SASL_OK && r != SASL_CONTINUE) {
Missing separate debuginfos, use: debuginfo-install gssproxy-0.4.1-8.el7_2.x86_64
(gdb) print r
$3 = -1

A -1 response code usually is an error. Looking in /usr/include/sasl/sasl.h:

#define SASL_FAIL -1 /* generic failure */

I wonder if we can figure out why. Let's see, first, if we can figure out what the client is sending in the authentication request. If it is a bad principal, then we have a pretty good reason to expect the server to reject it.

Let’s let the server continue running, and try debugging the client.

Client code can be found here

$ rpmquery  --list cyrus-sasl-debuginfo | grep client.c
/usr/src/debug/cyrus-sasl-2.1.26/lib/client.c
/usr/src/debug/cyrus-sasl-2.1.26/sample/client.c

At line 258 I see the call to sasl_client_start, which includes what appears to be the initialization of the data variable. Set a breakpoint there.

Running the code in the debugger like this:

$ gdb sasl2-sample-client
...
(gdb) break 258
Breakpoint 1 at 0x201b: file client.c, line 258.
(gdb) run -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
Starting program: /bin/sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
receiving capability list... recv: {6}
GSSAPI
GSSAPI

Breakpoint 1, mysasl_negotiate (in=0x55555575cab0, out=0x55555575ccf0, conn=0x55555575b520)
    at client.c:258
258	    r = sasl_client_start(conn, mech, NULL, &data, &len, &chosenmech);
(gdb) print data
$1 = 0x0
(gdb) print mech
$2 = 0x7fffffffe714 "GSSAPI"
(gdb) print conn
$3 = (sasl_conn_t *) 0x55555575b520
(gdb) print len
$4 = 6
(gdb) n
please enter an authorization id: 

So it is the SASL library itself requesting an authorization ID. Let me try putting in the full Principal associated with the service ticket.

 
please enter an authorization id: [email protected]
259	    if (r != SASL_OK && r != SASL_CONTINUE) {
Missing separate debuginfos, use: debuginfo-install gssproxy-0.4.1-8.el7_2.x86_64
(gdb) print r
$5 = 1
(gdb) 

And from sasl.h we know that is good.

#define SASL_CONTINUE 1 /* another step is needed in authentication */

Let’s let it continue.

authentication failed

Nope. Continuing through the debugger, I see another generic failure here:

1531	            } else {
1532	                /* Mech wants client-first, so let them have it */
1533	                result = sasl_server_step(conn,
1534	                                          clientin,
1535						  clientinlen,
1536	                                          serverout,
1537						  serveroutlen);
(gdb) n
1557	    if (  result != SASL_OK
(gdb) print result
$15 = -1

Still… why is the client-side SASL call kicking into an interactive prompt? There should be enough information via the GSSAPI SASL library interaction to authenticate. The man page for sasl_client_start even indicates that prompts might be returned.

Looking deeper at the client code, I do see that the prompt is from line 122. The function simple at line 107 must be set as a callback. Perhaps the client code is not smart enough to work with GSSAPI? At lines 190 and 192 I see that the simple code is provided as a callback for the responses SASL_CB_USER or SASL_CB_AUTHNAME. Setting a breakpoint and rerunning shows the id value to be 16385, or 0x4001.

#define SASL_CB_USER 0x4001 /* client user identity to login as */

 

Humility and Success

If you have followed through this far, you know I am in the weeds. I asked for help. Help, in this case, was Robbie Harwood, who showed me that the sample server/client worked OK if I ran the server as root and used the service host instead of hello. That gave me a successful comparison to work with. I ran using strace and noticed that the failing version was not trying to read the keytab file from /var/kerberos/krb5/user/1000/client.keytab. The successful one, running as root, read the keytab from /etc/krb5.keytab. The failing one was trying to read from there and getting a permissions failure. The final blow that took down the wall was realizing that the krb5.conf file defined different values for default_client_keytab_name and default_keytab_name, with the latter set to FILE:/etc/krb5.keytab. To work around this, I needed the environment variable KRB5_KTNAME set to the keytab. This was the winning entry:

KRB5_KTNAME=/var/kerberos/krb5/user/1000/client.keytab  sasl2-sample-server -h $HOSTNAME -p 9999 -s hello -m GSSAPI 

And then ran

sasl2-sample-client -s hello -p 9999 -m GSSAPI undercloud.ayoung-dell-t1700.test

Oh, one other thing Robbie told me was that the string I type when prompted with

please enter an authorization id:

Should be the Kerberos principal, minus the Realm, so for me it was

please enter an authorization id: ayoung

Libre Application Summit 2016

Posted by Adam Williamson on October 07, 2016 01:02 AM

I had a great time at the Libre Application Summit in sunny, hipster moustachioed Portland – many thanks to Sri for inviting me. Sorry this blog post is a bit late, but things have been really busy with the Fedora 25 Beta release (which we signed off on today).

For a first-year conference without a crazy marketing budget or anything, attendance was great: it was a good size for the venue, the number of sessions, and the social events; things felt busy and there was a lot of people-getting-to-know-each-other going on. Sri, Adelia and friends did a great job of finding a good venue, getting a solid wifi network, providing food and coffee, and setting up some fun social events.

They also did a great job getting some really interesting talks from both high-profile and regular-profile folks 🙂

Matthew Garrett gave one of his usual thought provoking talks about the relationship between security and privacy, and the possibility of using the concept of ‘safety’ to make sure we consider both appropriately when designing software, especially software which gathers ‘user data’ in any way.

Bradley Kuhn led a BoF on licensing which rapidly sprawled out to cover all kinds of topics; it might not have been the most directly productive session ever but we covered a lot of ground and had a good time, and it’s always great to pick Bradley’s brain on stuff.

Asheesh Laroia led a couple of sessions on the super-interesting Sandstorm project he works on, which is…well it’s almost like a Flatpak or a Snappy but for webapps, I thought it was a great idea to have him at the conference as Sandstorm makes an interesting contrast to the desktop application sandboxing stuff that’s hot right now.

Jim Hall and Ciarrai Cunneen gave a great talk on the GNOME usability testing that’s been going on recently under the banner of the Outreachy project; it was awesome to hear both how GNOME’s been going about usability testing and how Outreachy is achieving really useful results. The results clearly already highlighted some opportunities to improve and I was impressed with the way they were analyzing where the testing process could be improved and planning how to do an even better job with the next round.

There were a couple of sessions from Endless folks, including Matt Dalio, which were also a great window into exactly where Endless is going these days; I’ve always been kinda roughly aware of them without knowing exactly what they were doing, so this was good stuff. In a lot of ways they seem like they’re hoeing the same row as OLPC did in its heyday, but maybe with a little more focus and a bit more of a commercial mindset. It’s always good value for money to have someone stand up in front of a room full of people used to always-on, uncapped, 50+Mb/sec internet connections and ask them to think about how well their stuff works when you have a modem-speed connection that’s maybe accessible for a couple of hours at a time…

There were lots of other good sessions, a GNOME release party, and of course the hallway track was in full effect; it was great to see Bryan and Matthias and a bunch of other folks, and good to meet lots of new people too.

I did a slightly condensed and probably somewhat garbled version of the openQA presentation I gave with Richard Brown at LFNW this year, and talked to several folks about how openQA could possibly be useful in testing; Sri and Richard and I agreed that we could try and set up openQA testing of GNOME Continuous builds, if we can all find a bit of free time to work on it.

Overall it was a great event and I’m glad Sri convinced me to poke my head out of my apartment for a few days 🙂 It should be back bigger and better next year, so do consider coming along.

Design - Steps into tickets - the Track ticket.

Posted by mythcat on October 06, 2016 11:40 PM
The first step is to make sure that the subject fully concerns the Fedora Design team.
A second step is to check its validity according to legal terms.
In this category are the rules established by the international community and the Fedora teams.
The process seems simple but requires time and cooperation.
Let's see some examples:
Example 1: The Python logo can be used according to the official Python website:
The Python Logo Projects and companies that use Python are encouraged to incorporate the Python logo on their websites, brochures, packaging, and elsewhere to indicate suitability for use with Python or implementation in Python. Use of the "two snakes" logo element alone, without the accompanying wordmark is permitted on the same terms as the combined logo.
Example 2: The Fedora logo usage:
These are the official brand and logo usage guidelines for Fedora. Usage of Fedora and related logos must follow the guidelines as specified below. All uses must also comply with the official trademark guidelines.
Trac has a small but powerful built-in wiki rendering engine, and issues are written using wiki markup.
You can attach files to your ticket and use comments while solving it.
The Fedora Design Team currently uses Trac. The official web page describes it like this:
Trac is a web-based software project management and bug/issue tracking system emphasizing ease of use and low ceremony. It provides an integrated Wiki, an interface to version control systems, and a number of convenient ways to stay on top of events and changes within a project. Trac is distributed under the modified BSD License. The complete text of the license can be found online as well as in the COPYING file included in the distribution.
When you open a new ticket, fill in all the information and data needed to make it solvable.
It is also better to discuss it in Fedora Design team meetings: this brainstorms ideas and avoids inconsistencies with other tickets or with how the output will be used. When opening the new ticket you may use WikiFormatting.
The Create New Ticket page comes with several fields:
Properties - everything about this ticket:
Summary - acts as a title: a short summary of the ticket.
Description - describes the ticket; to make the work of whoever solves it easier, it should contain at least three components:
= phenomenon =
= reason =
= recommendation =
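For illustration, a filled-in description using those three headings might look like this in Trac wiki markup (the content here is invented for the example):

```
= phenomenon =
The Fedora 25 release party needs a printable poster, and none exists yet.

= reason =
Local ambassadors have events scheduled and no artwork to hand out.

= recommendation =
Produce an A4 poster based on the current Fedora 25 release artwork.
```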
Type - classifies the ticket into an area of interest; this shows where it fits and what the final deliverable will be.
Type:
- Design and Tools Education
- Digital Artwork;
- Print/Swag Design;
- Release Artwork;
- UX/Interaction Research and Design;
- Web Design;
- Team Maintenance;
Priority - determines how urgently the team needs to act; there are four levels:
release blocker
high
medium
low
Severity - represents the complexity of solving the ticket:
- Quick & Easy
- Moderately Involved
- Long-Term / Complex Issue
Keywords - often left unused, but it could solve many ticket-management problems; e.g. logo, banner, wallpaper, badge, foss, devel-team, infra-team
Cc - indicates those who should receive a copy
The next fields track ownership and dependencies: Blocked By, Blocking, and Owner.
The next step is the actual resolution: comments document the remediation work, and finally the result is uploaded and the ticket is closed.
Although it seems simple, in practice the final result emerges from comparison with similar work in the online environment, both from the viewer's side and from people working in similar areas of design.
My point: the best results are not obtained through operational management alone. They appear as the sum of all the factors involved in the platforms and solutions implemented for design team members, so knowing them is very useful.

util-linux v2.29 -- what's new?

Posted by Karel Zak on October 06, 2016 11:18 PM
Release v2.29 (now at rc1) comes without dramatic changes; the small exception is libsmartcols, where we have many improvements.

The good old cal(1) is more user-friendly now. It's possible to specify a month by name (e.g. "cal January 2017") and to use relative placeholders, for example:

        cal now
        cal '1 year ago'
        cal '+2 months'

fdisk(8) can now wipe newly created partitions -- the feature is controlled by the new command-line option --wipe-partitions[=auto|never|always].
 The default in interactive mode is to ask the user when a filesystem or RAID signature is detected. The goal is to make sure that new block devices are usable without any collisions and without an extra wipefs(8) step (because users are lazy and mkfs-like programs are often not smart enough to wipe the device).

findmnt --verify is probably the most attractive new feature for admins. The command scans /etc/fstab and tries to verify the configuration. The traditional way is to use "mount -a" for this purpose, but that's overkill. The new --verify does not call mount(2); it checks parsability, translation of LABEL/UUID/etc. to paths, mountpoint order, and support for the specified FS types. The option --verify together with --verbose provides many details.

For example my ext4 filesystems:

# findmnt --verify --verbose -t ext4
/
[ ] target exists
[ ] LABEL=ROOT translated to /dev/sda4
[ ] source /dev/sda4 exists
[ ] FS type is ext4
[W] recommended root FS passno is 1 (current is 2)
/boot
[ ] target exists
[ ] UUID=c5490147-2a6c-4c8a-aa1b-33492034f927 translated to /dev/sda2
[ ] source /dev/sda2 exists
[ ] FS type is ext4
/home
[ ] target exists
[ ] UUID=196972ad-3b13-4bba-ac54-4cb3f7b409a4 translated to /dev/sda3
[ ] source /dev/sda3 exists
[ ] FS type is ext4
/home/misc
[E] unreachable on boot required target: No such file or directory
[ ] UUID=e8ce5375-29d4-4e2f-a688-d3bae4b8d162 translated to /dev/sda5
[ ] source /dev/sda5 exists
[ ] FS type is ext4

0 parse errors, 1 error, 1 warning


When you create multiple loop block devices from one backing file, the Linux kernel does not care about possible collisions, and the same on-disk filesystem is maintained by multiple independent in-memory filesystem instances. The result is obvious -- data loss and filesystem damage.

Now mount(8) rejects requests to create another device and mount a filesystem for the same backing file. The command losetup --nooverlap reuses a loop device if one already exists for the same backing file. All of this functionality takes the offset and sizelimit options into account, of course, so it's fine to have multiple regions (partitions) in the same image file and mount all of them at the same time. The restriction is that the regions must not overlap. Thanks to Stanislav Brabec from SUSE!

Heiko Carstens from IBM (thanks!) has improved lscpu(1) for s390. Now it supports "drawer" topology level, static and dynamic MHz, machine type and a new option --physical. 

The most important libsmartcols change is probably better support for multi-line cells. The library now supports custom cell wrap functions -- this allows wrapping the text in cells after words, line breaks, etc. See the multi-line cells (WRAPNL column) output:

TREE           ID PARENT WRAPNL
aaaa            1      0 aaa
├─bbb           2      1 bbbbb
│ ├─ee          5      2 hello
│ │                      baby
│ └─ffff        6      2 aaa
│                        bbb
│                        ccc
│                        ddd
├─ccccc         3      1 cccc
│ │                      CCCC
│ └─gggggg      7      3 eee
│   ├─hhh       8      7 fffff
│   │ └─iiiiii  9      8 g
│   │                    hhhhh
│   └─jj       10      7 ppppppppp
└─dddddd        4      1 dddddddd
                         DDDD
                         DD

Another change is support for user-defined padding chars; we use this feature for LIBSMARTCOLS_DEBUG_PADDING=on|off, for example:

   $ LIBSMARTCOLS_DEBUG=all LIBSMARTCOLS_DEBUG_PADDING=on findmnt 2> /dev/null

Really important for me: we now have regression tests for all libsmartcols table and tree formatting code :-)

Igor Gnatenko from Red Hat (thanks!) continues to work on Python binding for libsmartcols, see https://github.com/ignatenkobrain/python-smartcols and see example below.

The idea is to use libsmartcols as an output formatter for Fedora/RHEL dnf (the package manager for RPM-based Linux distributions, the yum replacement). This is also the reason why libsmartcols has been massively extended and improved in the last releases.

That's all. Thanks also to Werner Fink, Sami Kerola, Ruediger Meier and many other contributors!

#!/bin/python3

import smartcols

tb = smartcols.Table()
name = tb.new_column("NAME")
name.tree = True
age = tb.new_column("AGE")
age.right = True

ggf = tb.new_line()
ggf[name] = "John"
ggf[age] = "70"

gfa = tb.new_line(ggf)
gfa[name] = "Donald"
gfa[age] = "50"

fa = tb.new_line(gfa)
fa[name] = "Benny"
fa[age] = "30"

ln = tb.new_line(fa)
ln[name] = "Arlen"
ln[age] = "5"

ln = tb.new_line(fa)
ln[name] = "Gerge"
ln[age] = "7"

fa = tb.new_line(gfa)
fa[name] = "Berry"
fa[age] = "32"

ln = tb.new_line(ggf)
ln[name] = "Alex"
ln[age] = "44"

print(tb)
  
NAME        AGE
John         70
├─Donald     50
│ ├─Benny    30
│ │ ├─Arlen   5
│ │ └─Gerge   7
│ └─Berry    32
└─Alex       44

rawhide 4.9.0 pre rc1 kernels

Posted by Kevin Fenzi on October 06, 2016 07:27 PM

Just a quick note that with the upstream kernel folks opening up the merge window for the 4.9 linux kernel, rawhide also has been getting these merge kernels. Of course there’s a ton of churn when the kernel merge window is open, vast amounts of things are merged, so it’s a wonder they usually work as well as they do.

In this case there seem to be some pretty severe issues with at least kernel-4.9.0-0.rc0.git1.1.fc26 and kernel-4.9.0-0.rc0.git2.1.fc26. In VMs they seem to boot and then start oopsing. On my laptop it does boot, but then waits about 2 minutes for USB devices, comes the rest of the way up, and then has no wireless or USB devices.

https://bugzilla.redhat.com/1382134 is tracking at least part of this issue and the kernel maintainers are digging into it. Of course while they are, more things are being merged (possibly even fixes to these issues), so hopefully it will all shake out in the next few days.

In the meantime, rawhide users are advised to just switch back to the 4.8.0 final kernel for now.

Fedora 24 and 25 Alpha: serious bug when updating

Posted by Charles-Antoine Couret on October 06, 2016 06:08 PM

This post follows a discussion that took place on the Fedora development mailing list.

It has been observed that Fedora 24 and Fedora 25 Alpha are affected by a bug when the systemd-udev package is updated. It shows up when three conditions are met:

  • The machine has two graphics units, which is the case for many modern laptops;
  • The update is run with the "dnf update" command in a graphical terminal (such as GNOME Terminal, Konsole, etc.);
  • The graphical environment runs under X.org / X11 rather than Wayland (Wayland is the default only for GNOME on Fedora 25).

Under these conditions, updating the package may crash the X server, which will interrupt the update mid-operation and can leave the package database and your system in an inconsistent state (with duplicate packages).

To avoid this problem, you can:

  • Use GNOME Software for updates, which performs a so-called offline update;
  • Use PackageKit to perform your updates in offline mode;
  • Update from the system's text-mode consoles (reachable via Ctrl+Alt+FX, with X from 3 to 6);
  • Use Wayland if you are on GNOME and Fedora 25 (otherwise it is not recommended).

An offline update applies the updates while your computer boots, much like Windows does. The goal is to ensure that after the update all components are actually running up-to-date code (which is not the case when you update a live system), and to perform the operation in a minimal, controlled environment, which limits problems. This has been the method recommended by the Fedora Project since Fedora 18, and GNOME Software takes advantage of it. We take this bug as an opportunity to remind you how useful it is to follow this procedure for updates.

To do this from a terminal with PackageKit, you can run the following commands with superuser rights:

# pkcon refresh force
# pkcon update --only-download
# pkcon offline-trigger
# systemctl reboot

The problem is being fixed in systemd and X11.

RHEL containers on non-RHEL hosts

Posted by Patrick Uiterwijk on October 06, 2016 03:40 PM

I now do most of my development work in a setup based on RPM-OSTree with my own trees, and doing most of my development work inside containers.

However, I do still work for Red Hat, so I would like to test stuff against Red Hat Enterprise Linux-based platforms. But as you might be aware, getting the required entitlements set up is considered "difficult", so I did what probably a lot of people do: I used CentOS containers, just because they don't require fiddling with the entitlement stuff.

The other day, however, I decided that enough is enough, that I need to eat our own dogfood, and that I should finally get it set up. Now, if I were running a RHEL host as my development machine, it would all figure itself out automatically, but obviously I don't do that, working for Fedora Infrastructure et al., so I set out to figure out how to get it working on a Fedora host.

Turns out: it's not actually as difficult as people make it out to be, so let me write down how I got it working.

Note: This requires the "Docker Super Secrets" patch. If you don't have this patch (it's in the Fedora/RHEL distribution of docker), you will need to manually copy/mount the directory. I'll explain that at the end of this post.

Note 2: You will only be able to follow this when you have actual RHEL entitlements. This will NOT help you get around that, only explain how to use your existing RHEL entitlements.

Setting it up

First off, open a terminal on the host and touch a file in /usr/share/rhel/secrets/ to make sure that this directory is writable (it's not on ostree-based deployments, which means you'll need to bind-mount it to a directory that is with something like "mount --bind /home/secret/ /usr/share/rhel/secrets/").

Now, inside this directory, create the following directory structure:

/usr/share/rhel/secrets
|- etc-pki-entitlement
\- rhsm
   \- ca

Now, you will need some files from an existing RHEL installation, but you can just pull down and start a RHEL container and take them from there, as they are public files. From any RHEL system, copy /etc/rhsm/rhsm.conf to your host's secrets/rhsm/rhsm.conf, and /etc/rhsm/ca/redhat-uep.pem into the host's secrets/rhsm/ca/redhat-uep.pem.

Those are all the public files, now you will need the RHEL entitlement certificates. For this, go to access.redhat.com, and click on the system (or create one) that you are running this on. Make sure it's the correct architecture and RHEL version.

Next, make sure there are actually subscriptions attached to this system: you will see these listed under "Attached Subscriptions", and they will have View and Download links under "Entitlement Certificate". Click on the Download link under the entitlement that you want to use.

This single PEM file that you just downloaded contains a certificate, entitlement data, a signature and a private key. Copy this file to both the host's /usr/share/rhel/secrets/etc-pki-entitlement/entitlement.pem AND /usr/share/rhel/secrets/etc-pki-entitlement/entitlement-key.pem. Yes, that means copying one file to two locations. Normally, Red Hat Subscription Manager splits the private key out of the certificate file before writing, and it expects to do the reverse when reading, but we have a single combined file. You could split it manually, but just copying it to two locations is easier in my opinion.
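To recap the layout as a shell sketch (the `: >` placeholders stand in for the real files you copy in, and SECRETS is parameterized so you can rehearse the layout in a scratch directory before touching /usr/share/rhel/secrets):

```shell
# Build the secrets directory structure; placeholder files mark where
# the real rhsm.conf, CA certificate, and entitlement PEM belong.
SECRETS="${SECRETS:-$PWD/secrets}"
mkdir -p "$SECRETS/etc-pki-entitlement" "$SECRETS/rhsm/ca"
: > "$SECRETS/rhsm/rhsm.conf"           # from /etc/rhsm/rhsm.conf on any RHEL system
: > "$SECRETS/rhsm/ca/redhat-uep.pem"   # from /etc/rhsm/ca/redhat-uep.pem
# The single downloaded PEM is copied to BOTH names subscription-manager expects:
: > "$SECRETS/etc-pki-entitlement/entitlement.pem"
: > "$SECRETS/etc-pki-entitlement/entitlement-key.pem"
find "$SECRETS" -type f | sort
```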

Now, that should be all. Yes, that's all there is to it. When you now start a new container ("docker run -it registry.access.redhat.com/rhel7/rhel:latest"), you should see these files in /run/secrets, and provided that your entitlement is of the correct version/arch, you can now run yum update!

Without super secrets patch

Now, I had promised to also tell how to use this if you do not have the super secrets patch. This can be done by creating /usr/share/rhel/secrets, performing the steps before, and adding a "-v /usr/share/rhel/secrets:/run/secrets" argument to the docker run call. Note that this likely won't work if you DO have the super secrets patch, because the mounting would get confused.

FOSS Wave: Bangalore at UVCE

Posted by Fedora Community Blog on October 06, 2016 08:15 AM
FOSS Wave - Bangalore, India: Standing in front of University Visvesvaraya College of Engineering

Standing in front of University Visvesvaraya College of Engineering preparing for the FOSS Wave event

It was another lazy Saturday with a rare sight of empty Bangalore roads. This FOSS Wave event in Bangalore had been in planning for almost a month. Finally, here we were on September 10th, 2016, in front of the almost century-old building of University Visvesvaraya College of Engineering.

Five speakers reached the venue by 9:30am. We were to talk in two different sessions starting from 10:30am until 4:00pm on the following topics.

  • FOSS, its philosophy, ethics and importance
  • Fedora and contributing to the Fedora Project
  • Eminent women in the history of tech and FOSS
  • Basics of git and GitHub

Special thanks

Before talking about the event, I would like to thank a few people whose presence made this event a huge success. I would like to thank…

  • Prathik and IEEE UVCE: For being wonderful hosts and making sure that the event went ahead smoothly. They put in a lot of hard work.
  • Sumantro Mukherjee: A mentor, teacher, friend, critic and leader who has always supported me.
  • Vipul Siddharth: A friend who is always there for me and has my back.
  • Kanika and Sarah: For being such awesome co-speakers.
  • All the attendees who came in huge numbers and made this event a grand success.

The event started at 10:45am in the seminar hall of UVCE. The five speakers who talked at the event were (in order) :

  1. Vipul Siddharth
  2. Sumantro Mukherjee
  3. Kanika Murarka
  4. Prakash Mishra (Me)
  5. Sarah Masud
FOSS Wave - Bangalore at UVCE: Speakers for the event

The speakers for the event on UVCE campus

Introducing FOSS in Bangalore

First, we had Vipul Siddharth, who gave an intriguing talk on FOSS: its philosophy, ethics, and importance. The talk covered various important aspects of FOSS such as…

  1. What is FOSS?
  2. Why you should contribute to FOSS?
  3. Areas of contribution
  4. How and where to get started with FOSS contributions
FOSS Wave - Bangalore at UVCE: Vipul Siddharth leads a talk on what FOSS is and how to get involved

Vipul Siddharth leads a talk on what FOSS is and how to get involved

Introducing the Fedora Project

Coming up next, we had an engaging talk by Sumantro Mukherjee who covered a wide range of topics about the Fedora Project and how to contribute. His talk included…

  1. How to start with Fedora contributions
  2. Where to contribute? – Testing, Documentation, Packaging, CommOps, etc.
  3. Creating a FAS account, joining a mailing list, and how mailing lists work.
  4. Using IRC to connect with the community
  5. Fedora Apps
FOSS Wave - Bangalore at UVCE: Sumantro Mukherjee presents the Fedora Project to the audience and how to contribute

Sumantro Mukherjee presents the Fedora Project to the audience and how to contribute

Power in numbers

Kanika Murarka took the stage next and delivered a stirring talk on Women in FOSS and technology. She shared her own journey so far with FOSS and what motivated her to take the FOSS way. She also showed how prominent women, time and time again, have gone against all stereotypes and changed the landscape of technology in history and the present alike. Some prominent women that she gave examples of were…

<figure class="wp-caption aligncenter" id="attachment_173">
FOSS Wave - Bangalore at UVCE: Kanika Murarka speaks about her experience in FOSS communities and prominent female figures in open source

Kanika Murarka speaks about her experience in FOSS communities and prominent female figures in open source

</figure>

We dispersed for lunch at 12:20pm and met back for the post-lunch session by 1:30pm.

World of version control

Sarah Masud and I spoke in the post-lunch session about version control systems, git, and GitHub. This also included a hands-on demo of git and how to use it to contribute to a repository on GitHub.  Sarah was a helpful co-speaker who took care of much of the demonstration. Though there were a few technical glitches, she conducted the talk with me and we complemented each other on stage. Our talk covered some basics of git and GitHub such as…

  • What is version control?
  • Need for a version control system
  • Methods of version control
  • What is git, who developed it, and why you should use it?
  • What is GitHub and why you should use it?
  • Setting up and configuring git
  • Stages of file tracking
  • Creating a new organization and repository on GitHub
  • Basic git commands: git status, git clone, git diff, git add, git commit, git push, etc.
  • Hosting a static website on GitHub
FOSS Wave - Bangalore at UVCE: Prakash Mishra speaking to the audience about version control and git

Prakash Mishra speaking to the audience about version control and git

FOSS Wave - Bangalore at UVCE: Sarah Masud explains GitHub to the audience

Sarah Masud explains GitHub to the audience

Later, we had Sumantro on stage again to speak to participants about “forking” a repository on GitHub, pull requests, and how open source contributions to GitHub repositories work, with wonderful examples.

Wrapping up in Bangalore

The response from the audience was wonderful and they listened to the sessions with rapt attention. They also raised a few awesome questions to clear their doubts. Looking at the response, the organizers also discussed with us the possibility of conducting more such events in the future. If plans go as expected, we might have another workshop in the coming months.

We wrapped up the session by 4:00pm. This was my first experience as a speaker for FOSS and I look forward to organizing and speaking at many more workshops in the future.

If you have any feedback or suggestions, please do mail me on prakashmishra1598 [at] gmail [dot] com.

More pictures

Here are a few more pictures from the workshop.

<figure class="wp-caption alignnone" id="attachment_247"> <figcaption class="wp-caption-text">
FOSS Wave - Bangalore at UVCE: A packed room for the start of the day!

A packed room for the start of the day!

</figcaption>
FOSS Wave - Bangalore at UVCE: A GitHub recap session led by Sumantro

A GitHub recap session led by Sumantro

FOSS Wave - Bangalore at UVCE: Prathik giving a token of gratitude to the team

Prathik giving a token of gratitude to the team

FOSS Wave - Bangalore at UVCE: The whole team: organizers and speakers together

The whole team: organizers and speakers together

</figure>


I18N Red Hatter Recruitment

Posted by Takao Fujiwara on October 06, 2016 03:47 AM

Are you interested in I18N (internationalization) engineering?
We're recruiting someone familiar with Linux open source and I18N development. It's also great if you're good at Japanese.
Click here for details.

10 basic linux security measures everyone should be doing

Posted by Kevin Fenzi on October 05, 2016 06:31 PM

Akin to locking your doors and closing your windows, there are some really basic things everyone should be doing with their Linux installs. (This is of course written from a Fedora viewpoint, but I think it pretty much applies to all computer OSes.)

  1. Choose nice long passphrases you can remember. Most any modern system will have a pretty long limit on passphrases, so pick something nice and long that you can remember. Don’t think of them as passwords, they are phrases with many words.
  2. When installing, encrypt your drive(s). The performance hit is not noticeable and if you ever throw away a broken drive or someone steals your computer they won’t have your data.
  3. Apply updates regularly. If you aren't someone who remembers to do so, set up something like dnf-automatic to just apply them every day for you in the middle of the night. Otherwise try and get into the habit of letting gnome-software do offline updates at some regular time.
  4. Along with (3), reboot when needed for new kernels or glibc or other things you use. Get used to rebooting on a regular schedule. Don’t be afraid of rebooting, get used to doing it.
  5. If you are in a place with untrusted people roaming around, do set up a screen locker and lock your computer when you are away from it.
  6. Make (and sometimes test) regular backups. You may not think of backups as a security measure, but they sure are. Think of the new fad of ‘ransomware’ where someone encrypts your data and sells you the key. If you have good backups you can just wipe that all out and restore from those. They are handy for lots of other reasons too.
  7. Don’t open weird attachments or links sent to you in email. If you didn’t ask for it, delete it.
  8. Don’t plugin weird devices you run across to your machine. (USB or otherwise). You can use a neat package called ‘usbguard’ to make sure no one else does while you are not around too.
  9. Use a passphrase manager or have some system to allow you to have long, not easily guessed passphrases at all the various applications you login to. There’s tons of these out there: Password managers: pass, keepassx, gpg encrypted file, etc. Schemes: Diceware, etc. Pick one that works for you.
  10. If you use a laptop/travel a lot, consider using a VPN for all your network needs. As long as you have an endpoint to connect to (your home server, your work, a vpn provider) you can send (almost) all your traffic over the vpn and thus avoid problems with people sniffing local traffic.
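For item 3, a minimal sketch of a dnf-automatic setup (option names as shipped in Fedora's /etc/dnf/automatic.conf; enable it afterwards with "systemctl enable --now dnf-automatic.timer"):

```
[commands]
# download and apply updates unattended, not just notify
upgrade_type = default
apply_updates = yes
```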

Some of these things require an initial investment of time (backups, vpn, passphrase manager, screen lock) and some require just making something a habit (long passphrases, apply updates regularly, reboot regularly, don’t open weird things in email or the physical world), but they are all worth it.

Will this make your computer “secure”? No. There’s no such thing. “secure” is not a binary state, it’s a process of assessing threats and deciding what you can or want to do about them. Doing the above things will protect you from some threats nicely (guessable passwords, untrusted people tampering with your computer, sniffing traffic, vulnerabilities that have already been fixed in software you use, etc), but will basically do nothing against others ( someone installing a keylogging device and recording everything you type, someone threatening you with harm to tell them some information, someone installing a spy cam and recording everything on your computer screen, someone using a non public vulnerability in software you use, someone social engineering access to your computer, etc).

Bodhi 2.2.4 released to fix two more issues

Posted by Bodhi on October 05, 2016 04:53 PM

This release fixes two issues:

  • #989, where Karma on non-autopush updates would reset the request to None.
  • #994, allowing Bodhi to be built on setuptools-28.

Running the Cyrus SASL Sample Server and Client

Posted by Adam Young on October 05, 2016 04:42 PM

When I start working on a new project, I usually start by writing a “Hello, World” program and going step by step from there. When trying to learn Cyrus SASL, I found I needed to something comparable, that showed both the client and server side of the connection. While the end state of using SASL should be communication that is both authenticated and encrypted, to start, I just wanted to see the protocol in action, using clear text and no authentication.

UPDATE: Note that the client and server code are provided by the cyrus-sasl-devel RPM on a Fedora system and comparable packages elsewhere.

I started by running the server:

/usr/bin/sasl2-sample-server  -h localhost -p 1789 -m ANONYMOUS

Why did I choose 1789? It is the port for the Hello server:

$ getent services hello
hello                 1789/tcp

The -m flag has the value ANONYMOUS, saying no authentication is required.

Starting up the server showed:

trying 2, 1, 6
trying 10, 1, 6
bind: Address already in use

This last line looks like a failure but, as we will see, it is not. I ignored it to start.

To test a connection to it, I ran the following in a second terminal window.

sasl2-sample-client -p 1789  -m ANONYMOUS localhost

Here is what that session looked like:

$ sasl2-sample-client -p 1789  -m ANONYMOUS localhost
receiving capability list... recv: {9}
ANONYMOUS
ANONYMOUS
please enter an authorization id: ADMIYO
using mechanism ANONYMOUS
send: {9}
ANONYMOUS
send: {1}
Y
send: {21}
[email protected]
waiting for server reply...
successful authentication
closing connection

Note that I was prompted for the authorization id and I entered the string 'ADMIYO'. I intentionally chose something that I would not expect to be a standard part of the output, so I can see the effect I am having. Here is the server side of the communication as logged.

accepted new connection
forcing use of mechanism ANONYMOUS
send: {9}
ANONYMOUS
waiting for client mechanism...
recv: {9}
ANONYMOUS
recv: {1}
Y
recv: {21}
[email protected]
negotiation complete
successful authentication 'anonymous'
closing connection

Let’s take a look on the (virtual) wire. Running tcpdump like this:

 sudo  tcpdump -i lo port 1789

For the first part of the interaction (prior to typing in the string ADMIYO) the output is:

12:02:42.201997 IP6 localhost.53196 > localhost.hello: Flags [S], seq 2530750333, win 43690, options [mss 65476,sackOK,TS val 1486702922 ecr 0,nop,wscale 7], length 0
12:02:42.202012 IP6 localhost.hello > localhost.53196: Flags [R.], seq 0, ack 2530750334, win 0, length 0
12:02:42.202053 IP localhost.50258 > localhost.hello: Flags [S], seq 2408359983, win 43690, options [mss 65495,sackOK,TS val 1486702922 ecr 0,nop,wscale 7], length 0
12:02:42.202067 IP localhost.hello > localhost.50258: Flags [S.], seq 11931919, ack 2408359984, win 43690, options [mss 65495,sackOK,TS val 1486702922 ecr 1486702922,nop,wscale 7], length 0

Once I type in ADMIYO and hit return in the client I see:

12:04:51.107447 IP localhost.50258 > localhost.hello: Flags [P.], seq 1:15, ack 15, win 342, options [nop,nop,TS val 1486831827 ecr 1486702922], length 14
12:04:51.107530 IP localhost.hello > localhost.50258: Flags [.], ack 15, win 342, options [nop,nop,TS val 1486831827 ecr 1486831827], length 0
12:04:51.107551 IP localhost.50258 > localhost.hello: Flags [P.], seq 15:21, ack 15, win 342, options [nop,nop,TS val 1486831827 ecr 1486831827], length 6
12:04:51.107563 IP localhost.hello > localhost.50258: Flags [.], ack 21, win 342, options [nop,nop,TS val 1486831827 ecr 1486831827], length 0

Let’s see if the server can correctly translate the port for the “hello” service.

Running

$ /usr/bin/sasl2-sample-server  -h localhost -s hello  -m ANONYMOUS

TCP dump shows the following output:

12:06:57.628798 IP6 localhost.53252 > localhost.hello: Flags [S], seq 2637706072, win 43690, options [mss 65476,sackOK,TS val 1486958349 ecr 0,nop,wscale 7], length 0
12:06:57.628815 IP6 localhost.hello > localhost.53252: Flags [R.], seq 0, ack 2637706073, win 0, length 0
12:06:57.628859 IP localhost.50314 > localhost.hello: Flags [S], seq 1432008138, win 43690, options [mss 65495,sackOK,TS val 1486958349 ecr 0,nop,wscale 7], length 0
12:06:57.628875 IP localhost.hello > localhost.50314: Flags [R.], seq 0, ack 1432008139, win 0, length 0
12:07:21.065692 IP6 localhost.53262 > localhost.hello: Flags [S], seq 1562244294, win 43690, options [mss 65476,sackOK,TS val 1486981785 ecr 0,nop,wscale 7], length 0
12:07:21.065712 IP6 localhost.hello > localhost.53262: Flags [R.], seq 0, ack 1562244295, win 0, length 0
12:07:21.065775 IP localhost.50324 > localhost.hello: Flags [S], seq 4166967599, win 43690, options [mss 65495,sackOK,TS val 1486981786 ecr 0,nop,wscale 7], length 0
12:07:21.065791 IP localhost.hello > localhost.50324: Flags [R.], seq 0, ack 4166967600, win 0, length 0

Note that I had to change how I called the client to:

$ sasl2-sample-client -s hello  -m ANONYMOUS localhost

Why is that? My suspicion is that the service name is part of the SASL handshake. Let’s see if we can find out. To start, let’s tell tcpdump to dump the contents of the packets in hex and ASCII:

sudo  tcpdump -XX -i lo port 1789

Running both the server and the client with the explicit port assigned I get the following dump:

12:12:08.992969 IP6 localhost.53316 > localhost.hello: Flags [S], seq 2611436863, win 43690, options [mss 65476,sackOK,TS val 1487269713 ecr 0,nop,wscale 7], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 86dd 6000  ..............`.
	0x0010:  8995 0028 0640 0000 0000 0000 0000 0000  ...(.@..........
	0x0020:  0000 0000 0001 0000 0000 0000 0000 0000  ................
	0x0030:  0000 0000 0001 d044 06fd 9ba7 5d3f 0000  .......D....]?..
	0x0040:  0000 a002 aaaa 0030 0000 0204 ffc4 0402  .......0........
	0x0050:  080a 58a5 ef51 0000 0000 0103 0307       ..X..Q........
12:12:08.992986 IP6 localhost.hello > localhost.53316: Flags [R.], seq 0, ack 2611436864, win 0, length 0
	0x0000:  0000 0000 0000 0000 0000 0000 86dd 6007  ..............`.
	0x0010:  bb57 0014 0640 0000 0000 0000 0000 0000  .W...@..........
	0x0020:  0000 0000 0001 0000 0000 0000 0000 0000  ................
	0x0030:  0000 0000 0001 06fd d044 0000 0000 9ba7  .........D......
	0x0040:  5d40 5014 0000 001c 0000                 ]@P.......
12:12:08.993035 IP localhost.50378 > localhost.hello: Flags [S], seq 613533991, win 43690, options [mss 65495,sackOK,TS val 1487269713 ecr 0,nop,wscale 7], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  003c 2676 4000 4006 1644 7f00 0001 7f00  .<&v@[email protected]......
	0x0020:  0001 c4ca 06fd 2491 c927 0000 0000 a002  ......$..'......
	0x0030:  aaaa fe30 0000 0204 ffd7 0402 080a 58a5  ...0..........X.
	0x0040:  ef51 0000 0000 0103 0307                 .Q........
12:12:08.993053 IP localhost.hello > localhost.50378: Flags [S.], seq 561556928, ack 613533992, win 43690, options [mss 65495,sackOK,TS val 1487269713 ecr 1487269713,nop,wscale 7], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  003c 0000 4000 4006 3cba 7f00 0001 7f00  .<..@.@.<.......
	0x0020:  0001 06fd c4ca 2178 adc0 2491 c928 a012  ......!x..$..(..
	0x0030:  aaaa fe30 0000 0204 ffd7 0402 080a 58a5  ...0..........X.
	0x0040:  ef51 58a5 ef51 0103 0307                 .QX..Q....
12:12:11.741135 IP localhost.50378 > localhost.hello: Flags [P.], seq 1:15, ack 15, win 342, options [nop,nop,TS val 1487272461 ecr 1487269713], length 14
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0042 2679 4000 4006 163b 7f00 0001 7f00  .B&y@.@..;......
	0x0020:  0001 c4ca 06fd 2491 c928 2178 adcf 8018  ......$..(!x....
	0x0030:  0156 fe36 0000 0101 080a 58a5 fa0d 58a5  .V.6......X...X.
	0x0040:  ef51 7b39 7d0d 0a41 4e4f 4e59 4d4f 5553  .Q{9}..ANONYMOUS
12:12:11.741183 IP localhost.hello > localhost.50378: Flags [.], ack 15, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 4291 4000 4006 fa30 7f00 0001 7f00  .4B.@[email protected]......
	0x0020:  0001 06fd c4ca 2178 adcf 2491 c936 8010  ......!x..$..6..
	0x0030:  0156 fe28 0000 0101 080a 58a5 fa0d 58a5  .V.(......X...X.
	0x0040:  fa0d                                     ..
12:12:11.741193 IP localhost.50378 > localhost.hello: Flags [P.], seq 15:48, ack 15, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 33
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0055 267a 4000 4006 1627 7f00 0001 7f00  .U&z@.@..'......
	0x0020:  0001 c4ca 06fd 2491 c936 2178 adcf 8018  ......$..6!x....
	0x0030:  0156 fe49 0000 0101 080a 58a5 fa0d 58a5  .V.I......X...X.
	0x0040:  fa0d 7b31 7d0d 0a59 7b32 317d 0d0a 4144  ..{1}..Y{21}..AD
	0x0050:  4d49 594f 4061 796f 756e 6735 3431 2e74  [email protected]
	0x0060:  6573 74                                  est
12:12:11.741198 IP localhost.hello > localhost.50378: Flags [.], ack 48, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 4292 4000 4006 fa2f 7f00 0001 7f00  .4B.@.@../......
	0x0020:  0001 06fd c4ca 2178 adcf 2491 c957 8010  ......!x..$..W..
	0x0030:  0156 fe28 0000 0101 080a 58a5 fa0d 58a5  .V.(......X...X.
	0x0040:  fa0d                                     ..
12:12:11.741248 IP localhost.hello > localhost.50378: Flags [P.], seq 15:16, ack 48, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 1
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0035 4293 4000 4006 fa2d 7f00 0001 7f00  .5B.@[email protected]......
	0x0020:  0001 06fd c4ca 2178 adcf 2491 c957 8018  ......!x..$..W..
	0x0030:  0156 fe29 0000 0101 080a 58a5 fa0d 58a5  .V.)......X...X.
	0x0040:  fa0d 4f                                  ..O
12:12:11.741260 IP localhost.50378 > localhost.hello: Flags [.], ack 16, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 267b 4000 4006 1647 7f00 0001 7f00  .4&{@[email protected]......
	0x0020:  0001 c4ca 06fd 2491 c957 2178 add0 8010  ......$..W!x....
	0x0030:  0156 fe28 0000 0101 080a 58a5 fa0d 58a5  .V.(......X...X.
	0x0040:  fa0d                                     ..
12:12:11.741263 IP localhost.hello > localhost.50378: Flags [F.], seq 16, ack 48, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 4294 4000 4006 fa2d 7f00 0001 7f00  .4B.@[email protected]......
	0x0020:  0001 06fd c4ca 2178 add0 2491 c957 8011  ......!x..$..W..
	0x0030:  0156 fe28 0000 0101 080a 58a5 fa0d 58a5  .V.(......X...X.
	0x0040:  fa0d                                     ..
12:12:11.741285 IP localhost.hello > localhost.50378: Flags [.], ack 49, win 342, options [nop,nop,TS val 1487272461 ecr 1487272461], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 4295 4000 4006 fa2c 7f00 0001 7f00  .4B.@.@..,......
	0x0020:  0001 06fd c4ca 2178 add1 2491 c958 8010  ......!x..$..X..
	0x0030:  0156 fe28 0000 0101 080a 58a5 fa0d 58a5  .V.(......X...X.
	0x0040:  fa0d                                     ..
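
The application payloads in the dump also make the sample protocol's framing visible: each message is a length in braces, then CRLF, then that many payload bytes ({9} before ANONYMOUS, {1} before Y, {21} before [email protected]). Here is a minimal sketch of a parser for that framing; this is my own illustration of what the capture shows, not code shipped with the Cyrus SASL samples:

```python
def parse_frames(data):
    """Yield payloads from a buffer of "{length}\r\npayload" frames,
    as seen in the tcpdump payloads above."""
    frames = []
    pos = 0
    while pos < len(data):
        # Each frame starts with "{<decimal length>}\r\n"
        assert data[pos:pos + 1] == b"{"
        end = data.index(b"}", pos)
        length = int(data[pos + 1:end])
        start = end + 3  # skip "}\r\n"
        frames.append(data[start:start + length])
        pos = start + length
    return frames

# The client's two writes from the capture, concatenated:
buf = b"{9}\r\nANONYMOUS{1}\r\nY{21}\r\[email protected]"
print(parse_frames(buf))  # [b'ANONYMOUS', b'Y', b'[email protected]']
```

This matches the packet lengths in the dump: 14 bytes for the ANONYMOUS frame and 33 for the Y plus authorization-id frames.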

But running with -s hello shows nothing. Is it running on a different port? Let’s use lsof to check. First, run the server with the -s hello flag set. Then run lsof to see what is going on:

$ ps -ef | grep sasl
ayoung    2513 25933  0 12:14 pts/1    00:00:00 /usr/bin/sasl2-sample-server -h localhost -s hello -m ANONYMOUS
$ sudo lsof -p 2513  | grep TCP
sasl2-sam 2513 ayoung    3u  IPv4 26451981      0t0      TCP *:italk (LISTEN)
$ getent services italk
italk                 12345/tcp

Let’s see if tcpdump can confirm. Run it like this:

$ sudo  tcpdump -XX -i lo port 12345

And after running both server and client with -s hello I see

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes

12:18:48.995740 IP6 localhost.38730 > localhost.italk: Flags [S], seq 2085322154, win 43690, options [mss 65476,sackOK,TS val 1487669716 ecr 0,nop,wscale 7], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 86dd 600a  ..............`.
	0x0010:  8706 0028 0640 0000 0000 0000 0000 0000  ...(.@..........
	0x0020:  0000 0000 0001 0000 0000 0000 0000 0000  ................
	0x0030:  0000 0000 0001 974a 3039 7c4b 7daa 0000  .......J09|K}...
	0x0040:  0000 a002 aaaa 0030 0000 0204 ffc4 0402  .......0........
	0x0050:  080a 58ac 09d4 0000 0000 0103 0307       ..X...........
12:18:48.995764 IP6 localhost.italk > localhost.38730: Flags [R.], seq 0, ack 2085322155, win 0, length 0
	0x0000:  0000 0000 0000 0000 0000 0000 86dd 600f  ..............`.
	0x0010:  e905 0014 0640 0000 0000 0000 0000 0000  .....@..........
	0x0020:  0000 0000 0001 0000 0000 0000 0000 0000  ................
	0x0030:  0000 0000 0001 3039 974a 0000 0000 7c4b  ......09.J....|K
	0x0040:  7dab 5014 0000 001c 0000                 }.P.......
12:18:48.995808 IP localhost.45714 > localhost.italk: Flags [S], seq 4246244983, win 43690, options [mss 65495,sackOK,TS val 1487669716 ecr 0,nop,wscale 7], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  003c 87a3 4000 4006 b516 7f00 0001 7f00  .<..@.@.........
	0x0020:  0001 b292 3039 fd18 8e77 0000 0000 a002  ....09...w......
	0x0030:  aaaa fe30 0000 0204 ffd7 0402 080a 58ac  ...0..........X.
	0x0040:  09d4 0000 0000 0103 0307                 ..........
12:18:48.995820 IP localhost.italk > localhost.45714: Flags [S.], seq 1101043017, ack 4246244984, win 43690, options [mss 65495,sackOK,TS val 1487669716 ecr 1487669716,nop,wscale 7], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  003c 0000 4000 4006 3cba 7f00 0001 7f00  .<..@.@.<.......
	0x0020:  0001 3039 b292 41a0 9549 fd18 8e78 a012  ..09..A..I...x..
	0x0030:  aaaa fe30 0000 0204 ffd7 0402 080a 58ac  ...0..........X.
	0x0040:  09d4 58ac 09d4 0103 0307                 ..X.......
12:18:52.072280 IP localhost.45714 > localhost.italk: Flags [P.], seq 1:15, ack 15, win 342, options [nop,nop,TS val 1487672792 ecr 1487669716], length 14
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0042 87a6 4000 4006 b50d 7f00 0001 7f00  .B..@.@.........
	0x0020:  0001 b292 3039 fd18 8e78 41a0 9558 8018  ....09...xA..X..
	0x0030:  0156 fe36 0000 0101 080a 58ac 15d8 58ac  .V.6......X...X.
	0x0040:  09d4 7b39 7d0d 0a41 4e4f 4e59 4d4f 5553  ..{9}..ANONYMOUS
12:18:52.072343 IP localhost.italk > localhost.45714: Flags [.], ack 15, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 9c9f 4000 4006 a022 7f00 0001 7f00  .4..@.@.."......
	0x0020:  0001 3039 b292 41a0 9558 fd18 8e86 8010  ..09..A..X......
	0x0030:  0156 fe28 0000 0101 080a 58ac 15d8 58ac  .V.(......X...X.
	0x0040:  15d8                                     ..
12:18:52.072358 IP localhost.45714 > localhost.italk: Flags [P.], seq 15:48, ack 15, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 33
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0055 87a7 4000 4006 b4f9 7f00 0001 7f00  .U..@.@.........
	0x0020:  0001 b292 3039 fd18 8e86 41a0 9558 8018  ....09....A..X..
	0x0030:  0156 fe49 0000 0101 080a 58ac 15d8 58ac  .V.I......X...X.
	0x0040:  15d8 7b31 7d0d 0a59 7b32 317d 0d0a 4144  ..{1}..Y{21}..AD
	0x0050:  4d49 594f 4061 796f 756e 6735 3431 2e74  [email protected]
	0x0060:  6573 74                                  est
12:18:52.072366 IP localhost.italk > localhost.45714: Flags [.], ack 48, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 9ca0 4000 4006 a021 7f00 0001 7f00  .4..@.@..!......
	0x0020:  0001 3039 b292 41a0 9558 fd18 8ea7 8010  ..09..A..X......
	0x0030:  0156 fe28 0000 0101 080a 58ac 15d8 58ac  .V.(......X...X.
	0x0040:  15d8                                     ..
12:18:52.072464 IP localhost.italk > localhost.45714: Flags [P.], seq 15:16, ack 48, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 1
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0035 9ca1 4000 4006 a01f 7f00 0001 7f00  .5..@.@.........
	0x0020:  0001 3039 b292 41a0 9558 fd18 8ea7 8018  ..09..A..X......
	0x0030:  0156 fe29 0000 0101 080a 58ac 15d8 58ac  .V.)......X...X.
	0x0040:  15d8 4f                                  ..O
12:18:52.072494 IP localhost.45714 > localhost.italk: Flags [.], ack 16, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 87a8 4000 4006 b519 7f00 0001 7f00  .4..@.@.........
	0x0020:  0001 b292 3039 fd18 8ea7 41a0 9559 8010  ....09....A..Y..
	0x0030:  0156 fe28 0000 0101 080a 58ac 15d8 58ac  .V.(......X...X.
	0x0040:  15d8                                     ..
12:18:52.072501 IP localhost.italk > localhost.45714: Flags [F.], seq 16, ack 48, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 9ca2 4000 4006 a01f 7f00 0001 7f00  .4..@.@.........
	0x0020:  0001 3039 b292 41a0 9559 fd18 8ea7 8011  ..09..A..Y......
	0x0030:  0156 fe28 0000 0101 080a 58ac 15d8 58ac  .V.(......X...X.
	0x0040:  15d8                                     ..
12:18:52.072529 IP localhost.italk > localhost.45714: Flags [.], ack 49, win 342, options [nop,nop,TS val 1487672792 ecr 1487672792], length 0
	0x0000:  0000 0000 0000 0000 0000 0000 0800 4500  ..............E.
	0x0010:  0034 9ca3 4000 4006 a01e 7f00 0001 7f00  .4..@.@.........
	0x0020:  0001 3039 b292 41a0 955a fd18 8ea8 8010  ..09..A..Z......
	0x0030:  0156 fe28 0000 0101 080a 58ac 15d8 58ac  .V.(......X...X.
	0x0040:  15d8                                     ..

As a final test, let’s see what happens when I tell the client to use that port explicitly. Running:

 sasl2-sample-client -p 12345 -m ANONYMOUS localhost

Generates the proper output:

receiving capability list... recv: {9}
ANONYMOUS
ANONYMOUS
please enter an authorization id: ADMIYO
using mechanism ANONYMOUS
send: {9}
ANONYMOUS
send: {1}
Y
send: {21}
[email protected]
waiting for server reply...
successful authentication
closing connection

Fedora Join Meeting 26 September 2016 - Summary

Posted by Ankur Sinha "FranciscoD" on October 05, 2016 09:41 AM

We had a Fedora Join SIG meeting this Monday. While the minutes are here, I figured I'd summarise them in my own words with the required context. As usual, I'm a week late, but well, better late than never!

Summary and important bits

We started off by reviewing the tickets that have already been filed. Each ticket is briefly summarised below.

During the discussion, we observed that there seemed to be some overlap between the function of the SIG and the work that CommOps is already doing. CommOps is currently working its way through the different teams' onboarding processes to improve them, for example. Nothing major, just updating wiki pages, streamlining things, and that sort of thing. The Join SIG has a general goal of "making it easier for newbies to contribute to Fedora", and that's where the overlap lies. As jflory pointed out, it doesn't make sense for two teams to do the same work, so a clear demarcation needs to be made of which tasks the SIG will take up. I'm bringing this up in a thread on the ML so that the current team can discuss it.

My personal view

When we started, sure, our goal was quite general - to "make things easier for newbies" - but as our goal evolved, we've moved to being more of a communication channel for newbies, a starting point. The idea, and you'll see this if you read the goal statement, is to sit on top of the different Fedora teams and help guide newbies to the teams that fit them best. Since most teams already have well-defined onboarding processes, and the hubs are going to make this even easier, we don't really need to work on any more tooling. It's more about the initial few weeks/months, where a novice would struggle to navigate our vast community and its infrastructure and resources. To this end, we:

  • want newbies to communicate with contributors over our communication channels (even before they've found a team to dedicate their time to)
  • want to help newbies form better relationships with current community members (in more than one team, so that they can help out and switch from one team to another more easily)
  • want current contributors to help newbies learn the open source way and the Free software philosophy (you learn this over time, and hanging out with folks that already practice it is a great way to pick things up quicker)
  • want to help match newbies to potential mentors in the community (akin to the ambassadors SIG, but more general)
  • can hold events such as "Fedora Join Days" like "Fedora Test Days" or university "Open days" where we dedicate a day regularly for newbies to come and speak to the community, ask questions, and get started right away
  • more..

None of this is written in stone, of course, and I'll post again once we've had a discussion over this and come to a consensus.

Reviewing past tickets

A quick summary of the more noticeable tickets on our tracker (now on Pagure!).

#11 - Membership tracker - no pending membership requests

We have an FAS group, and a Pagure group to give members access to our Pagure repositories. The two must be kept in sync manually, so we've created a membership tracker ticket. To join the SIG, one can either comment on this ticket, or apply via the FAS group. The admins monitor both :)

#10 - "Fedora and the open source philosophy" essay contest

This is an idea we had a while ago. I think most of the community knows rather well how Fedora's philosophy and stance towards Free software differ from those of other Linux distributions. The idea here is to hold a contest encouraging community members to write their interpretations of, and personal reflections on, our philosophy. This will get the community speaking about Free software more, to begin with, and it'd also broadcast our first foundation to the general public, especially our users. It is just another idea to continue spreading the word on Free software to a larger audience.

#9 - "Invite members from other SIGs to join the SIG"

Of course, we need more activity in the SIG's channels. We need more contributors to hang around to help newbies, and we need more newbies to help out. This, at the moment, is a WIP. There are some other wrinkles to iron out first. (Read on.)

#8 - "Mail people listed on the wiki page as members to see if they're still interested"

Another old ticket. We had had people add their names to a list on the wiki. We hadn't set up the FAS group back then, so they couldn't join it. I've sent them all an e-mail asking if I should add them to the SIG.

#7 - "Prospective contributor introduction template"

We think it's a good idea to have a template that folks can use to introduce themselves on various mailing lists. Sure, introductions are easy, but for someone who hasn't worked in free software or open source communities before, it may not be easy to decide what the introduction should contain. The template will serve as a guideline in such situations.

Other tickets

The other tickets were minor admin tasks that we skipped over. We'll go over them again in a future meeting.

FOSS Wave: Goa, India

Posted by Fedora Community Blog on October 05, 2016 08:15 AM

This post details how we executed planned activities for Internet of Things (IoT) in Goa, India. First, thanks to Espressotive (headed by Sudhir Shetty and CIBA) for doing all the prep work from registration to our accommodation. Over a span of three days, more than 400 students from three colleges and universities attended the event.

Introducing IoT in Goa

The primary agenda topic was the Internet of Things (IoT). To help get students up to speed, we started with basic webpage structure and how NodeJS can come in handy for writing web servers. Mrinal Jain, a Mozilla Rep from Indore, led a discussion about HTML5, CSS3, and JavaScript for writing webpages. He used web creation tools to explain the basic structure of webpages and how the web works in general. GitHub was a major focus, as we wanted participants to understand how they can collaboratively build and contribute to projects. We talked about how students can commit code and do basic git operations in a command-line interface or a GUI.

Mrinal Jain, a Mozilla Rep from Indore, led discussion about HTML5, CSS3, and JavaScript to write webpages in Goa, India

“Mrinal Jain, a Mozilla Rep from Indore, led a discussion about HTML5, CSS3, and JavaScript for writing webpages.”

Server applications and communicating

Soon after the basics were clear, we moved ahead. I began talking about how a server works and what NodeJS helps us accomplish. The session covered different types of pub-sub frameworks and protocols (e.g. MQTT), which ensure a standard way of interfacing and communicating with hardware (e.g. an Arduino Uno).

FOSS Wave - Goa, India: Talking about how a server works and what NodeJS helps us accomplish

“I began talking about how a server works and what NodeJS helps us accomplish.”

After covering the basic architecture of IoT and NodeJS, we jumped into writing small API functions. Finally, we made our way to controlling an LED attached to a particular pin number on an Arduino. For ease of understanding, we made sure that the code and the step-by-step process were clearly documented.

FOSS Wave - Goa, India: Introducing the Internet of Things (IoT)

Teaching to build

As nothing is ever complete without a hands-on application, we held a workshop about how they can start working towards building small IoT projects of their own.

FOSS Wave - Goa, India: Hands-on workshop teaching Internet of Things (IoT)

“…we held a workshop about how they can start working towards building small IoT projects of their own.”

We also gave the participants a free Fedora 24 installation DVD to install and start using Fedora as a cutting-edge platform to build their college projects.

FOSS Wave - Goa, India: Giving Fedora to students to develop with

Wave washes over Goa

These sessions mark the beginning of FOSS Wave: Goa, India. We saw a lot of enthusiasm at this event, and many people were interested in learning about FOSS and cutting-edge technologies. Contributors are already flowing in, and we expect more in the near future!

The post FOSS Wave: Goa, India appeared first on Fedora Community Blog.

I am speaking at

Posted by Sirko Kemter on October 05, 2016 07:15 AM


about the Linux Professional Institute – Linux Essentials Certification, which I am doing here in Cambodia, and a bit about Fedora too :)

It is Hacktoberfest

Posted by Till Maas on October 05, 2016 07:14 AM


Hacktoberfest recently started: your chance to get involved with free/libre and open-source software and get a cool free t-shirt. All you need to do is find one or more projects on GitHub and submit four pull requests in October. I recommend taking a look at the Fedora easyfix issues – they contain several issues in projects using GitHub.


Python Meetup Panamá 2016

Posted by Jose A Reyes H on October 05, 2016 02:38 AM
Los invitamos: 22 de Octubre. Universidad Interamericana de Panamá. 09:00 am – 02:00 pm. Facebook: Python Meetup.

systemd.conf 2016 Over Now

Posted by Lennart Poettering on October 04, 2016 10:00 PM

systemd.conf 2016 is Over Now!

A few days ago systemd.conf 2016 ended, our second conference of this kind. I personally enjoyed this conference a lot: the talks, the atmosphere, the audience, the organization, the location, they all were excellent!

I'd like to take the opportunity to thank everybody involved. In particular I'd like to thank Chris, Daniel, Sandra and Henrike for organizing the conference, your work was stellar!

I'd also like to thank our sponsors, without which the conference couldn't take place like this, of course. In particular I'd like to thank our gold sponsor, Red Hat, our organizing sponsor Kinvolk, as well as our silver sponsors CoreOS and Facebook. I'd also like to thank our bronze sponsors Collabora, OpenSUSE, Pantheon, Pengutronix, our supporting sponsor Codethink and last but not least our media sponsor Linux Magazin. Thank you all!

I'd also like to thank the Video Operation Center ("VOC") for their amazing work on live-streaming the conference and making all talks available on YouTube. It's amazing how efficient the VOC is, it's simply stunning! Thank you guys!

In case you missed this year's iteration of the conference, please have a look at our YouTube Channel. You'll find all of this year's talks there, as well the ones from last year. (For example, my welcome talk is available here). Enjoy!

We hope to see you again next year, for systemd.conf 2017 in Berlin!

X crash during Fedora update when system has hybrid graphics and systemd-udev is in update

Posted by Adam Williamson on October 04, 2016 09:36 PM

Hi folks! This is a PSA about a fairly significant bug we’ve recently been able to pin down in Fedora 24+.

Here’s the short version: especially if your system has hybrid graphics (that is, it has an Intel video adapter and also an AMD or NVIDIA one, and it’s supposed to switch to the most appropriate one for what you’re currently doing – NVIDIA calls this ‘Optimus’), DON’T UPDATE YOUR SYSTEM BY RUNNING DNF FROM THE DESKTOP. (Also if you have multiple graphics adapters that aren’t strictly ‘hybrid graphics’; the bug affects any case with multiple graphics adapters).

Here’s the slightly longer version. If your system has more than one graphics adapter, and you update the systemd-udev package while X is running, X may well crash. So if the update process was running inside the X session, it will also crash and will not complete. This will leave you in the unfortunate situation where RPM thinks you have two versions of several packages installed at the same time (and also a bunch of package scripts that should have run will not have run).

The bug is actually triggered by restarting systemd-udev-trigger.service; anything which does that will cause X to crash on an affected system. So far only systems with multiple adapters are reported to be affected; not absolutely all such systems are affected, but a good percentage appear to be. It occurs when the systemd-udev package is updated because the package %postun scriptlet – which is run on update when the old version of the package is removed – restarts that service.

The safest possible way to update a Fedora system is to use the ‘offline updates’ mechanism. If you use GNOME, this is how updates work if you just wait for the notifications to appear, the ones that tell you you can reboot to install updates now. What’s actually happening there is that the system has downloaded and cached the updates, and when you click ‘reboot’, it will boot to a special state where very few things are running – just enough to run the package update – run the package update, then reboot back to the normal system. This is the safest way to apply updates. If you don’t want to wait for notifications, you can run GNOME Software, click the Updates button, and click the little circular arrow to force a refresh of available updates.

If you don’t use GNOME, you can use the offline update system via pkcon, like this:

sudo pkcon refresh force && \
sudo pkcon update --only-download && \
sudo pkcon offline-trigger && \
sudo systemctl reboot

If you don’t want to use offline updates, the second safest approach is to run the update from a virtual terminal. That is, instead of opening a terminal window in your desktop, hit ctrl-alt-f3 and you’ll get a console login screen. Log in and run the update from this console. If your system is affected by the bug, and you leave your desktop running during the update, X will still crash, but the update process will complete successfully.

If your system only has a single graphics adapter, this bug should not affect you. However, it’s still not a good idea to run system updates from inside your desktop, as any other bug which happens to cause either the terminal app, or the desktop, or X to crash will also kill the update process. Using offline updates or at least installing updates from a VT is much safer.

The bug reports for this issue are:

  • #1341327 – for the X part of the problem
  • #1378974 – for the systemd part of the problem

Updates for Fedora 24 and Fedora 25 are currently being prepared. However, the nature of the bug actually means that installing the update will trigger the bug, for the last time. The updates will ensure that subsequent updates to systemd-udev will no longer cause the problem. We are aiming to get the fix into Fedora 25 Beta, so that systems installed from Fedora 25 Beta release images will not suffer from the bug at all, but existing Fedora 25 systems will encounter the bug when installing the update.

Short note: fedoraproject.org smtp sessions now using TLS

Posted by Kevin Fenzi on October 04, 2016 09:07 PM

Just a quick note for everyone who gets emails from fedoraproject.org or its various other domains. Just before we entered freeze for Fedora 25 Beta, we landed changes to make our smtp servers use TLS where possible, so emails to servers that support it should now be fully encrypted.

Please let us know if you see any issues related to this change.

Translating Between RDO/RHOS and upstream releases Redux

Posted by Adam Young on October 04, 2016 05:47 PM

I posted this once before, but we’ve moved on a bit since then. So, an update.

#!/usr/bin/python

upstream = ['Austin', 'Bexar', 'Cactus', 'Diablo', 'Essex (Tag 2012.1)', 'Folsom (Tag 2012.2)',
            'Grizzly (Tag 2013.1)', 'Havana (Tag 2013.2)', 'Icehouse (Tag 2014.1)', 'Juno (Tag 2014.2)',
            'Kilo (Tag 2015.1)', 'Liberty', 'Mitaka', 'Newton', 'Ocata', 'Pike', 'Queens', 'R', 'S']

for v in range(0, len(upstream) - 3):
    print "RHOS Version %s = upstream %s" % (v, upstream[v + 3])


RHOS Version 0 = upstream Diablo
RHOS Version 1 = upstream Essex (Tag 2012.1)
RHOS Version 2 = upstream Folsom (Tag 2012.2)
RHOS Version 3 = upstream Grizzly (Tag 2013.1)
RHOS Version 4 = upstream Havana (Tag 2013.2)
RHOS Version 5 = upstream Icehouse (Tag 2014.1)
RHOS Version 6 = upstream Juno (Tag 2014.2)
RHOS Version 7 = upstream Kilo (Tag 2015.1)
RHOS Version 8 = upstream Liberty
RHOS Version 9 = upstream Mitaka
RHOS Version 10 = upstream Newton
RHOS Version 11 = upstream Ocata
RHOS Version 12 = upstream Pike
RHOS Version 13 = upstream Queens
RHOS Version 14 = upstream R
RHOS Version 15 = upstream S

Tags in the Git repos are a little different.

  • For Essex through Kilo, the releases are tagged based on their code names
  • 2011 was weird.  We don’t talk about that.
  • From 2012 through 2015, the release tags are based on the date of the release: the year, then the release number within that year. So the first release in 2012 is 2012.1. Thus 2012.3 does not exist, which is why we don’t talk about 2011.3.
  • From Liberty/8 on, the upstream 8 matches the RDO and RHOS version 8. Subnumbers are for stable releases and may not match the downstream releases; once things go stable, it is a downstream decision when to sync. Thus, we have tags that start with 8, 9, and 10 mapping to Liberty, Mitaka, and Newton.
  • When Ocata is cut, we’ll go to 11, leading to lots of Spinal Tap references.
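
Put another way, the offset-of-three rule the script above prints can be wrapped in a small lookup function; this is just a sketch reusing the codename list from the script:

```python
# Translate an upstream OpenStack codename to its RHOS version number,
# using the offset-of-three rule from the script above.
UPSTREAM = ['Austin', 'Bexar', 'Cactus', 'Diablo', 'Essex', 'Folsom',
            'Grizzly', 'Havana', 'Icehouse', 'Juno', 'Kilo', 'Liberty',
            'Mitaka', 'Newton', 'Ocata', 'Pike', 'Queens', 'R', 'S']

def rhos_version(codename):
    """RHOS version for an upstream codename (Diablo and later)."""
    index = UPSTREAM.index(codename)
    if index < 3:
        raise ValueError("%s predates RHOS" % codename)
    return index - 3

print(rhos_version('Liberty'))  # 8: from Liberty on, the numbers line up
print(rhos_version('Newton'))   # 10
```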

docker-selinux changed to container-selinux

Posted by Dan Walsh on October 04, 2016 12:33 PM
Changing upstream packages

I have decided to change the docker SELinux policy package on github.com from docker-selinux to container-selinux

https://github.com/projectatomic/container-selinux

The main reason I did this was seeing the following on Twitter: Docker, Inc. is requesting that people not use the docker prefix for packages on GitHub.

https://twitter.com/MacYET/status/775535642793086976

Since the policy for container-selinux can be used for more container runtimes than just docker, this seems like a good idea.  I plan on using it for OCID, and would consider plugging it into the RKT CRI.

I have modified all of the types inside of the policy to container_*.  For instance docker_t is now container_runtime_t and docker_exec_t is container_runtime_exec_t.

I have taken advantage of the typealias capability of SELinux policy to allow the types to be preserved over an upgrade.

typealias container_runtime_t alias docker_t;
typealias container_runtime_exec_t alias docker_exec_t;


This means people can continue to use docker_t and docker_exec_t with tools but the kernel will automatically translate them to the primary name container_runtime_t and container_runtime_exec_t.

This policy is arriving today in rawhide in the container-selinux.rpm which obsoletes the docker-selinux.rpm.  Once we are confident about the upgrade path, we will be rolling out the new packaging to Fedora and eventually to RHEL and CentOS.

Changing the types associated with container processes.

Secondarily, I have begun to change the type names for running containers.  Way back when I wrote the first policy for containers, we were using libvirt_lxc for launching containers, and we already had types defined for VMs launched out of libvirt.  VMs were labeled svirt_t.  When I decided to extend the policy for containers, I extended svirt with lxc, giving svirt_lxc, but I also wanted to show that it had full network access, giving svirt_lxc_net_t.  I labeled the content inside of the container svirt_sandbox_file_t.

Bad names...

Once containers exploded on the scene with the arrival of docker, I knew I had made a mistake choosing the types associated with container processes.  Time to clean this up.  I have submitted pull requests into selinux-policy to change these types to container_t and container_image_t.

typealias container_t alias svirt_lxc_net_t;
typealias container_image_t alias svirt_sandbox_file_t;

The old types will still work due to typealias, but I think it would become a lot easier for people to understand the SELinux types with simpler names.  There is a lot of documentation and "google" knowledge out there about svirt_lxc_net_t and svirt_sandbox_file_t, which we can modify over time.
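Since none of the renamed types is a substring of another, updating references in local policy or documentation can even be done with naive substring replacement; a minimal sketch (the helper name is mine, not part of any SELinux tool):

```python
# Legacy SELinux type names and their new primary names, per this post.
# typealias keeps the old names working, so rewriting is optional.
TYPE_RENAMES = {
    "docker_t": "container_runtime_t",
    "docker_exec_t": "container_runtime_exec_t",
    "svirt_lxc_net_t": "container_t",
    "svirt_sandbox_file_t": "container_image_t",
}

def modernize_types(text):
    """Replace legacy type names with the new primary names.
    Naive substring replacement is safe here because no legacy
    name is contained inside another."""
    for old, new in TYPE_RENAMES.items():
        text = text.replace(old, new)
    return text
```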

Luckily I have a chance at a do-over.

AppData content ratings for games shipped in Fedora

Posted by Fedora Community Blog on October 04, 2016 08:15 AM

GNOME Software developer Richard Hughes recently e-mailed the Fedora developers mailing list requesting that Fedora package maintainers update their AppData files to include age ratings using OARS.

“The latest feature we want to support upstream is age classifications
for games. I’ve asked all the maintainers listed in the various
upstream AppData files (using the update contact email address) to
generate some OARS metadata and add it to the .appdata.xml file, but
of course some AppData files do not have any contact details and so
they got missed. I’m including this email here as I know some AppData
files are included in the various downstream spec files by Fedora
packagers. Generating metadata is really as simple as visiting
https://odrs.gnome.org/oars then answering about 20 questions with
multiple choice answers, then pasting the output inside the
<component> tag.

Using the <content_rating> tag means we can show games with an
appropriate age rating depending on the country of the end user. If
you have any comments about the questions on the OARS page please do
let me know. Before the pitchforks start being sharpened it’s an
anti-goal of the whole system to in any way filter the output of
search results dependent on age. The provided metadata is only used in
an informational way.”

If your package ships an AppData file, please consider updating it. If you have any queries about the addition or OARS, please discuss it on the Fedora developers mailing list.

The post AppData content ratings for games shipped in Fedora appeared first on Fedora Community Blog.

HackMIT meets Fedora

Posted by Justin W. Flory on October 04, 2016 07:45 AM

This post was originally published on the Fedora Community Blog.



HackMIT is the annual hackathon event organized by students at the Massachusetts Institute of Technology in Cambridge, Massachusetts. HackMIT 2016 took place on September 17th and 18th, 2016. This year, the Fedora Project partnered with Red Hat as sponsors for the hackathon. Fedora Ambassadors Charles Profitt and Justin W. Flory attended to represent the project and help mentor top students from around the country in a weekend of learning and competitive hacking. Fedora engaged with a new audience of students from various universities across America and even the globe.

Arriving at HackMIT

The Fedora team arrived in Massachusetts a day early on Friday to ensure prompt arrival at the event the following morning. Fedora was one of the first sponsors to arrive on MIT’s campus Saturday morning, and scouted one of the best positions on the floor. Fedora was given a choice of anywhere in the bleachers surrounding the floor. As a result, the team set up Fedora’s banners close to many of the tables where hackers would spend the weekend.

Fedora setup at HackMIT 2016

The Fedora setup at HackMIT 2016

On the morning of the first day, over a thousand students arrived on the MIT campus. Around 10:00am, the kickoff ceremony began in the main auditorium. The event staff introduced themselves and the structure of the event. After covering the basics, every sponsor was given a 30-second “elevator pitch” to explain their company or project and share anything important with the hackers. Justin represented Fedora and Red Hat on stage, explaining what Fedora is and how it wanted to help students. He introduced Fedora as a distribution targeted towards developers, briefly introduced the three editions of Fedora, and offered help for anyone wanting to open source their hack or seek support with open source tooling.

May the hacking begin!

After the sponsor introductions, hackers relocated to the main floor to start seeking teams and begin working on projects. While HackMIT was getting into full swing, many people visited the Fedora area before jumping into a project. Many of the students who talked with Charles and Justin were either surprised to see Fedora at an event like HackMIT or were curious to know what was going on in Fedora. For the most part, students were familiar with Linux, whether from hands-on experience or from guided instruction in classes and lectures. A smaller number of people were running Linux environments themselves or using them on servers or in other ways.

Overall, the people attending the hackathon were generally familiar with Linux, but not at an advanced level. This group was ideal for promoting Fedora as a developer environment. The ease of setting up a development workspace or installing dependencies for projects intrigued many students. HackMIT was an excellent opportunity to present Fedora to a new group of budding technology enthusiasts. HackMIT participants had an organic interest in Fedora and wanted to know how Fedora made development easier or what made it different from other distributions.

Personal engagement

MeTime team demos project at HackMIT to Fedora

The MeTime team demos their product to Charles before the last judgment

During the event, Charles walked around the various tables to talk with students while Justin manned the Fedora area. Charles introduced himself to the hackers and asked what they were working on and what their plans were. For many teams, he provided advice on how to get over hurdles with initial planning and project direction. He checked back in with these groups throughout the weekend to see how they progressed.

At the Fedora space, Justin fielded questions from students about Linux, what Fedora offers, and open source software. Some people were familiar with Fedora, and a small handful of students were running Fedora as their primary operating system. However, most students were only familiar with Linux and were curious to know more. As a student himself, Justin offered specific advice about contributing to open source software and how helpful it is for gaining real-world experience. Some students expressed interest in contributing but were unsure about where to start. Justin coached them through the key first steps of beginning an open source adventure: choosing a project to contribute to, matching something genuinely interesting with technical skills, and getting involved with the community.

Additionally, there were two students organizing other hackathons in the country with a specific focus on open source software development. The Ambassadors engaged with these students and joined in a dialogue about making open source a critical part of hackathons. More information about these events will become available in the future.

Evaluating impact

May Tomic works on her team's project, Conversationalist at HackMIT

May Tomic works on her team’s project, Conversationalist

To help gauge our impact, there was a limited edition HackMIT 2016 Attendee badge that attendees could claim during the event. The team leveraged Fedora Badges as a tool to help tell the story of our impact at the event. Through Badges, you can see a list of FAS accounts that claimed the badge from the event and their account activity in the long run. Bee Padalkar's FOSDEM event evaluation demonstrates how this data can be used. Ten people claimed the badge during the weekend. One of the benefits of using badges as a tool for measuring impact and engagement is that it lets us follow up on what badge claimers go on to do in the Fedora community.

However, there were more ways to measure engagement with the students and hackers than badges alone. Some of the most valuable insight into our impact came from follow-up on the second morning. Leading up to the final deadline, Charles went around to most of the tables he had visited on the first day. With one team member, he helped do some live testing in the last 30 minutes before the deadline, since her teammates were asleep after the previous night. Engagements like these left a positive impression of Fedora and, by extension, the community.

What was our engagement?

HackMIT 2016 Attendee Fedora badge

The HackMIT 2016 Attendee Fedora badge

The interactions and conversations Fedora held with students and other attendees were productive and motivating, not only for the students but also for the Ambassador team. People were genuinely interested in Fedora, and it was easy to shape their interest into an insightful discussion about what Fedora enables students to create and develop. A powerful message about open source software development was also delivered during the event. This stands in contrast to some other hackathons in the United States, which are sometimes set up more like unofficial career fairs. HackMIT clearly held a strong focus on community. Events with that kind of management and direction are where Fedora succeeds and has a more valuable impact.

Leaving the event, the Fedora team was confident that we had made a powerful impact on students. For many, Fedora was introduced not only as an operating system, but as a tool for accomplishing and doing. Fedora provides the tools and utilities students need to build their projects and drive them forward. Open source as a development practice was also introduced to many for the first time, or explained more deeply for those with a mild interest. These messages and the team’s other engagements were warmly received.

Looking ahead

The Fedora Ambassadors of North America would like to extend a special thanks to Red Hat and Tom Callaway for partnering to sponsor this event. Without Red Hat’s help, attending this event would not have been possible. Our engagement and impact after HackMIT excites the Ambassador team. We hope many students from the event turn to Fedora not only as an operating system, but as a tool for their expanding technological toolbox. Congratulations also go to the organizers of HackMIT for putting together a thoroughly planned and carefully executed event that placed a strong focus on community, which fits within one of Fedora’s four key foundations: Friends.

We hope to return to Cambridge again next year!


You can read Charles Profitt’s event report on his blog.

The post HackMIT meets Fedora appeared first on Justin W. Flory's Blog.

Episode 7 - More Powerful than root!

Posted by Open Source Security Podcast on October 03, 2016 09:05 PM
Kurt and Josh discuss the ORWL computer, crashing systemd with one line, NIST, and a security journal.

Download Episode


Show Notes


How to debug Fedora rawhide compose problems

Posted by Kevin Fenzi on October 03, 2016 07:29 PM

From time to time rawhide composes fail and are not announced or synced to mirrors.

In the past this would happen only if the very basic setup (a mock chroot with the ‘buildsys-build’ group installed in it) broke. Additionally, in the past rawhide composes where many deliverables failed to compose were still synced out and announced, leading to days when no images were available until the issue was fixed.

Now, with the latest version of pungi (the tool that composes Fedora releases, including rawhide), composes can fail if some deliverables (those marked in the configuration as not failable) didn’t complete. So, while rawhide can fail more easily, it also means it’s much easier to revert some change that broke images and get that fixed before it lands, and images should always be available.

So, how can you tell if a rawhide compose (or some part of it) failed and why? All the pungi logs are available, and since all the builds take place in koji, anyone can look at them as well. Rawhide composes are of course fedmsg enabled, so you want to look for the https://apps.fedoraproject.org/datagrepper/raw?topic=org.fedoraproject.prod.pungi.compose.status.change topic. Composes can finish with 3 states:

  1. FINISHED – This means the compose finished and everything in it completed successfully. I am not sure we have yet seen this status in real life. 😉
  2. FINISHED_INCOMPLETE – This means the compose finished and only failable things failed. This is the “normal” status we see day to day.
  3. DOOMED – This means the compose failed its initial very basic setup and/or some deliverable marked as not failable failed. When this happens, the compose isn’t synced out or advertised. This is the status where we need to find out what caused the problem, fix it, and either restart the compose or wait for the next day’s compose. In IRC or on fedmsg you may see this status as “failed in a horrible fire” as that’s what our fedmsg translates it to.

So if you have a compose and you want to see why some part of it failed, you can look at the fedmsg and it provides a location url, but they are always the same format. Let’s look at an example: https://kojipkgs.fedoraproject.org/compose/rawhide/Fedora-Rawhide-20161002.n.0/ This means it’s a rawhide compose (there’s a Branched directory for the Branched Fedora 25 right now), and the particular compose was Fedora-Rawhide-YYYYMMDD.n.0. Year, month, and day, then an ‘n’ to mean ‘nightly’ and a ‘0’ to mean it was the first compose of the day. If there are more composes that same day, they are in n.1, n.2, etc. For Alpha/Beta/Final releases there’s no ‘n’ there, as they are not nightly.
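That naming scheme is regular enough to parse mechanically; a minimal sketch (the function and field names are mine, not from pungi):

```python
import re

# Statuses after which a compose is synced out and announced;
# DOOMED composes are not.
SYNCED_STATUSES = {"FINISHED", "FINISHED_INCOMPLETE"}

def parse_compose_dir(name):
    """Split a compose directory name such as Fedora-Rawhide-20161002.n.0
    into release, YYYYMMDD date, nightly flag, and respin number."""
    m = re.match(
        r"^(?P<release>.+)-(?P<date>\d{8})(?P<nightly>\.n)?\.(?P<respin>\d+)$",
        name,
    )
    if m is None:
        raise ValueError("unrecognized compose name: %r" % name)
    return {
        "release": m.group("release"),
        "date": m.group("date"),
        "nightly": m.group("nightly") is not None,
        "respin": int(m.group("respin")),
    }
```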

The first place I go to look for issues is the global log file. This would be in logs/global/pungi.global.log and you will want to look for the image or deliverable you are interested in or search generally for tracebacks. Usually this will note a koji task id or build which you can then look up on koji. Since this is already getting long, I’ll post about tracking down koji build problems another day.

Linaro Connect Las Vegas 2016

Posted by Laura Abbott on October 03, 2016 06:00 PM

I spent last week at Linaro Connect in Las Vegas. Nominally I was there for some discussions about Ion. The week ended up being fairly full of the gamut of ARM topics.

IoT is still a top buzzword. Linaro announced the founding of the LITE (Linaro IoT and Embedded) group. The work that this group has done so far is mostly related to Cortex-M processors which don't run Linux. This is a change of pace from a consortium that has exclusively focused on Linux. The Linux Foundation has done the same thing, given their focus on the Zephyr Project. I see this shift for three reasons: 1) vendors want an end-to-end solution and reduced fragmentation and Linaro/Linux Foundation provide a good forum to do this because 2) both Linaro and the Linux Foundation are very good at courting companies and engaging in 'corporate hand holding' through open source projects especially 3) when bootstrapping relatively new projects. This is not intended to be a negative; sometimes companies need to throw money at outside entities to inform them what needs to be done (even when internal employees are shouting the same thing). Corporate influence in open source can certainly be critiqued but I'm optimistic about that not being a problem for Linaro1.

Red Hat also announced its involvement in the LITE group. Red Hat's interests aren't in the RTOS microcontroller space but the higher-level gateway. All those IoT devices have to communicate somewhere, and a centralized gateway makes it easier to manage those devices, especially for industrial use cases. Hearing the full-stack story of IoT was a good learning experience for me, as I mostly have my head in the kernel. Everyone seems to be learning everywhere and most of the work is brand new. The Zephyr project was talking about writing new IP stacks, which should give you some idea of where these projects are right now.

In non-IoT things, I sat in on the firmware mini conference. This was mostly an update about ACPI and UEFI related things for server platforms. arm64 ACPI and UEFI support has come a very long way. In the Fedora kernel, we carry very few arm64 related patches. Basic virtualization works and servers boot in a 'boring' manner. PCIE quirks are still an ongoing TODO item along with SMMU work. There was discussion about the next version of the UEFI spec, or as much discussion as could be had, given UEFI rules. Leif Lindholm gave an update about Tianocore, the open source UEFI implementation. There's been some change in community governance to hopefully make more forward progress, which is always good to hear.

I had a meeting with some of the other folks who have been working on kernel hardening for arm64. arm64 mostly has feature parity with x86 for hardening features that have been merged. There's ongoing work for software emulation of Privilege Access Never (PAN) on targets that don't have this in hardware. Newer features like vmapped stacks are works in progress but should have a short window for merging. We concluded that many of the features on the wiki involve becoming a gcc hacker. Nobody stepped up to do that quite yet so that's still an open project2. I spent some time hacking on a patch set to do checking for writable/executable pages to match with x86. I sent v1 out at the end of last week, so v2 will probably come after the merge window closes in a few weeks.

As mentioned, my nominal reason for heading to Linaro connect was for discussion about Ion. I was excited to report we had made some good progress with things like platform support. Then several devicetree people announced that they hadn't gotten around to giving feedback and they still don't like the idea of Ion in devicetree. So much for that milestone. There was some interesting discussion that came out of XDC last week where apparently the DRM layer is looking for something similar to constraint solving. This was interesting to hear as the constraint solving had become less important for Ion in recent years. The discussion in the Android miniconference was useful. People do care about Ion so I can't just delete it. I had been hoping for a small first step of moving Ion as a self-contained framework out of staging into drivers/android/ but that seems less plausible and less of a good idea. I had a meeting with others who are working on the secure memory allocation framework (SMAF). They need something very similar to Ion and given what the DRM people are looking at as well it may in fact be time for a centralized constraint allocator (cca instead of Ion as a name?). There's still a month before plumbers where there are supposed to be more discussions of Ion. I'll have more research and work to do before that.

Most of the videos should be up sometime in the near future if they aren't already. I believe the keynotes should be up. You need to watch Sarah Sharp's keynote which is a great summary of why corporations struggle with upstreaming. I may start linking to this the next time a "but why isn't my phone upstream" topic comes up. The keynote about IoT Zephyr was excellent. Jono Bacon gave a great talk about community management.

Overall, it felt like a productive week. I always enjoy meeting with the ARM community and this time was no exception.


  1. I've met enough of the people involved that I don't see anything that extreme happening. If anyone involved is reading this, please don't make me eat my words, I'm cynical enough already. 

  2. If you're looking for a challenging 'how do I get involved in the kernel project', the gcc plugins could be for you! 

All systems go

Posted by Fedora Infrastructure Status on October 03, 2016 05:50 PM
Service 'Ask Fedora' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on October 03, 2016 05:46 PM
Service 'Ask Fedora' now has status: major: ask down

The importance of paying attention in building community trust

Posted by Matthew Garrett on October 03, 2016 05:14 PM
Trust is important in any kind of interpersonal relationship. It's inevitable that there will be cases where something you do will irritate or upset others, even if only to a small degree. Handling small cases well helps build trust that you will do the right thing in more significant cases, whereas ignoring things that seem fairly insignificant (or saying that you'll do something about them and then failing to do so) suggests that you'll also fail when there's a major problem. Getting the small details right is a major part of creating the impression that you'll deal with significant challenges in a responsible and considerate way.

This isn't limited to individual relationships. Something that distinguishes good customer service from bad customer service is getting the details right. There are many industries where significant failures happen infrequently, but minor ones happen a lot. Would you prefer to give your business to a company that handles those small details well (even if they're not overly annoying) or one that just tells you to deal with them?

And the same is true of software communities. A strong and considerate response to minor bug reports makes it more likely that users will be patient with you when dealing with significant ones. Handling small patch contributions quickly makes it more likely that a submitter will be willing to do the work of making more significant contributions. These things are well understood, and most successful projects have actively worked to reduce barriers to entry and to be responsive to user requests in order to encourage participation and foster a feeling that they care.

But what's often ignored is that this applies to other aspects of communities as well. Failing to use inclusive language may not seem like a big thing in itself, but it leaves people with the feeling that you're less likely to do anything about more egregious exclusionary behaviour. Allowing a baseline level of sexist humour gives the impression that you won't act if there are blatant displays of misogyny. The more examples of these "insignificant" issues people see, the more likely they are to choose to spend their time somewhere else, somewhere they can have faith that major issues will be handled appropriately.

There's a more insidious aspect to this. Sometimes we can believe that we are handling minor issues appropriately, that we're acting in a way that handles people's concerns, while actually failing to do so. If someone raises a concern about an aspect of the community, it's important to discuss solutions with them. Putting effort into "solving" a problem without ensuring that the solution has the desired outcome is not only a waste of time, it alienates those affected even more - they're now not only left with the feeling that they can't trust you to respond appropriately, but that you will actively ignore their feelings in the process.

It's not always possible to satisfy everybody's concerns. Sometimes you'll be left in situations where you have conflicting requests. In that case the best thing you can do is to explain the conflict and why you've made the choice you have, and demonstrate that you took this issue seriously rather than ignoring it. Depending on the issue, you may still alienate some number of participants, but it'll be fewer than if you just pretend that it's not actually a problem.

One warning, though: while building trust in this way enhances people's willingness to join your community, it also builds expectations. If a significant issue does arise, and if you fail to handle it well, you'll burn a lot of that trust in the process. The fact that you've built that trust in the first place may be what saves your community from disintegrating completely, but people will feel even more betrayed if you don't actively work to rebuild it. And if there's a pattern of mishandling major problems, no amount of getting the details right will matter.

Communities that ignore these issues are, long term, likely to end up weaker than communities that pay attention to them. Making sure you get this right in the first place, and setting expectations that you will pay attention to your contributors, is a vital part of building a meaningful relationship between your community and its members.


What is the spc_t container type, and why didn't we just run as unconfined_t?

Posted by Dan Walsh on October 03, 2016 05:00 PM
What is spc_t?

SPC stands for Super Privileged Container: a container that contains software used to manage the host system it runs on.  Since these containers could do anything on the system, and we don't want SELinux blocking any access, we made spc_t an unconfined domain.

If you are on an SELinux system, and run docker with SELinux separation turned off, the containers will run with the spc_t type.

You can disable SELinux container separation in docker in multiple different ways.

  • You don't build docker from scratch with the BUILDTAG=selinux flag.

  • You run the docker daemon without --selinux-enabled flag

  • You run a container with the --security-opt label:disable flag

          docker run -ti --security-opt label:disable fedora sh

  • You share the PID namespace or IPC namespace with the host

         docker run -ti --pid=host --ipc=host fedora sh
         
Note: we have to disable SELinux separation with --ipc=host and --pid=host because SELinux would otherwise block access to processes or the IPC mechanisms on the host.

Why not use unconfined_t?

The question that comes up is: why not just run as unconfined_t?  A lot of people falsely assume that unconfined_t is the only unconfined domain.  But unconfined_t is a user domain.  We block most confined domains from communicating with the unconfined_t domain, since this is probably the domain that the administrator is running with.

What is different about spc_t?

First off, the type docker runs as (docker_t) can transition to spc_t; it is not allowed to transition to unconfined_t. It transitions to this domain when it executes programs located under /var/lib/docker.

# sesearch -T -s docker_t | grep spc_t
   type_transition container_t docker_share_t : process spc_t;
   type_transition container_t docker_var_lib_t : process spc_t;
   type_transition container_t svirt_sandbox_file_t : process spc_t;


Secondly and most importantly confined domains are allowed to connect to unix domain sockets running as spc_t.

This means I could run a service as a container process and have it create a socket under /run on the host system, and other confined domains on the host could communicate with the service.

For example if you wanted to create a container that runs sssd, and wanted to allow confined domains to be able to get passwd information from it, you could run it as spc_t and the confined login programs would be able to use it.

Conclusion:

Sometimes you want to create an unconfined domain that one or more confined domains are allowed to communicate with. In this situation it is usually better to create a new domain rather than reusing unconfined_t.

New badge: LinuxCon Europe 2016 Attendee !

Posted by Fedora Badges on October 03, 2016 03:46 PM
LinuxCon Europe 2016 Attendee: You visited the Fedora booth at LinuxCon Europe 2016!

Impossible is impossible!

Posted by Josh Bressers on October 03, 2016 02:00 PM
Sometimes when you plan for a security event, the expectation is that what you're doing will make some outcome (probably something bad) impossible. The goal of the security group is to keep the bad guys out, or keep the data in, or keep the servers patched, or find all the security bugs in the code. One way to look at this is that security is often in the business of preventing things from happening, such as making data exfiltration impossible. I'm here to tell you it's impossible to make something impossible.

As you think about that statement for a bit, let me explain what's happening here, and how we're going to tie this back to security, business needs, and some common sense. We've all heard of the 80/20 rule; one of its forms is that the last 20% of the features are 80% of the cost. It's a bit more nuanced than that if you really think about it. If your goal is impossible, it would be more accurate to say 1% of the features are 2000% of the cost. What's really being described here is a curve that looks like this:
You can't make it to 100%, no matter how much you spend. This of course means there's no point in trying to get there, but more importantly you have to realize you can't get to 100%. If you're smart you'll put your feature set somewhere around 80%; anything above that is probably a waste of money. If you're really clever there is some sort of best place to be investing resources, and that's where you really want to be. 80% is probably a solid first pass though, and it's an easy number to remember.
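To make the asymptote concrete, here's one toy cost model with that shape (the formula is purely illustrative, my own invention rather than anything from a real security budget):

```python
def cost(coverage, scale=1.0):
    """Toy model: cost of reaching a given fraction of 'perfect' security.
    Cost grows without bound as coverage approaches 1, so 100% is
    unreachable no matter how much you spend."""
    if not 0 <= coverage < 1:
        raise ValueError("coverage must be in [0, 1)")
    return scale * coverage / (1 - coverage)

# Roughly: cost(0.5) is 1 unit, cost(0.8) is 4 units, and cost(0.99) is
# about 99 units -- going from 80% to 99% costs about 25x more.
```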

The important thing to remember is that 100% is impossible. The curve never reaches 100%. Ever.

The thinking behind this came about while I was discussing DRM with someone. No matter what sort of DRM gets built, someone will break it. DRM is built by a person which means, by definition, a smarter person can break it. It can't be 100%, in some cases it's not even 80%. But when a lot of people or groups think about DRM, the goal is to make acquiring the movie or music or whatever 100% impossible. They even go so far as to play the cat and mouse game constantly. Every time a researcher manages to break the DRM, they fix it, the researcher breaks it, they fix it, continue this forever.

Here's the question about the above graph, though: where is the break-even point? Every project has a point of diminishing returns. A lot of security projects forget that if the cost of what you're doing is greater than the cost of the thing you're trying to protect, you're wasting resources. Never forget that there is such a thing as negative value; doing things that don't matter often creates negative value.

This is easiest to explain in the context of ransomware. If you're spending $2000 to protect yourself from a ransomware infection that would cost $300, that's a bad investment. As crime inc. continues to evolve, I imagine they will keep this in mind: if they keep their damage low, there won't be much incentive for security spending, which helps them grow their business. That's a topic for another day though.

The summary of all this is that perfect security doesn't exist. It might never exist (never say never though). You have to accept good enough security. And more often than not, good enough is close enough to perfect that it gets the job done.

Comment on Twitter

RISC-V bootstrapping: over 1,100 packages

Posted by Richard W.M. Jones on October 03, 2016 11:01 AM
$ ls -l SRPMS | wc -l
1172

The autobuilder is really getting through the package list, having attempted nearly 4,000 rebuilds so far.


HackMIT meets Fedora

Posted by Fedora Community Blog on October 03, 2016 10:17 AM

HackMIT is the annual hackathon event organized by students at the Massachusetts Institute of Technology in Cambridge, Massachusetts. HackMIT 2016 took place on September 17th and 18th, 2016. This year, the Fedora Project partnered with Red Hat as sponsors for the hackathon. Fedora Ambassadors Charles Profitt and Justin W. Flory attended to represent the project and help mentor top students from around the country in a weekend of learning and competitive hacking. Fedora engaged with a new audience of students from various universities across America and even the globe.

Arriving at HackMIT

The Fedora team arrived in Massachusetts a day early on Friday to ensure a prompt arrival at the event the following morning. Fedora was one of the first sponsors to arrive on MIT's campus Saturday morning and scouted out one of the best positions on the floor. Given a choice of anywhere in the bleachers surrounding the floor, the team set up Fedora's banners close to many of the tables where hackers would spend the weekend.

Fedora setup at HackMIT 2016

The Fedora setup at HackMIT 2016

On the morning of the first day, over a thousand students arrived on the MIT campus. Around 10:00am, the kickoff ceremony began in the main auditorium. The event staff introduced themselves and the structure of the event. After covering the basics, every sponsor was given a 30-second "elevator pitch" to explain their company or project and share anything important with the hackers. Justin represented Fedora and Red Hat on stage: he introduced Fedora as a distribution targeted towards developers, briefly presented the three editions of Fedora, and offered help to anyone wanting to open source their hack or seeking support with open source tooling.

May the hacking begin!

After the sponsor introductions, hackers relocated to the main floor to find teams and begin working on projects. While HackMIT was getting into full swing, many people visited the Fedora area before jumping into a project. Many of the students who talked with Charles and Justin were either surprised to see Fedora at an event like HackMIT or curious to know what was going on in the project. Most students were familiar with Linux, whether from hands-on experience or from guided instruction in classes. A smaller number were running Linux environments as their desktops or using them on servers or in other ways.

Overall, the people attending the hackathon were generally familiar with Linux, but not at an advanced level. This group was ideal for promoting Fedora as a developer environment. The ease of setting up a development workspace or installing dependencies for projects intrigued many students. HackMIT was an ideal opportunity to present Fedora to a new group of budding technology enthusiasts, and participants had an organic interest in Fedora: they wanted to know how it made development easier and what made it different from other distributions.

Personal engagement

MeTime team demos project at HackMIT to Fedora

The MeTime team demos their product to Charles before the last judgment

During the event, Charles walked around the various tables to talk with students while Justin manned the Fedora area. Charles introduced himself to the hackers and asked what they were working on and what their plans were. For many teams, he provided advice on how to get over hurdles with initial planning and project direction. He checked back in with these groups across the weekend to see how they progressed.

At the Fedora space, Justin fielded questions from students about Linux, what Fedora offers, and open source software. Some people were familiar with Fedora, and a small handful of students were running Fedora as their primary operating system. However, most students were only loosely familiar with Linux and were curious to know more. As a student himself, Justin offered specific advice about contributing to open source software and how helpful it is for gaining real-world experience. Some students expressed interest in contributing but were unsure where to start. Justin coached them through the key first steps of an open source adventure: choosing a project to contribute to, matching something genuinely interesting with their technical skills, and getting involved with the community.

Additionally, there were two students organizing other hackathons around the country with a specific focus on open source software development. The Ambassadors engaged with these students and joined a dialogue about making open source a critical part of hackathons. More information about these events will become available in the future.

Evaluating impact

May Tomic works on her team's project, Conversationalist at HackMIT

May Tomic works on her team’s project, Conversationalist

To help gauge our impact at the event, there was a limited-edition HackMIT 2016 Attendee badge that attendees could claim during the weekend. The team leveraged Fedora Badges as a tool to help tell the story of our impact. Through Badges, you can see a list of FAS accounts that claimed the badge and follow their account activity in the long run; Bee Padalkar's FOSDEM event evaluation demonstrates how this data can be used. Ten people claimed the badge during the weekend. One of the benefits of using badges to measure impact and engagement is that they let us follow up on what badge claimers go on to do in the Fedora community.

However, there were more ways to measure engagement with the students and hackers than badges alone. Some of the most valuable insight into our impact came from follow-ups on the second morning. Leading up to the final deadline, Charles went around to most of the tables he had visited on the first day. He helped one hacker do some live testing in the last 30 minutes before the deadline while her teammates were asleep from the previous night. Engagements like these left a positive impression of Fedora and, by extension, the community.

What was our engagement?

HackMIT 2016 Attendee Fedora badge

The HackMIT 2016 Attendee Fedora badge

The interactions and conversations Fedora held with students and other attendees were productive and motivating, not only for the students but also for the Ambassador team. People were genuinely interested in Fedora, and it was easy to shape their interest into an insightful discussion about what Fedora enables students to create and develop. A powerful message about open source software development was also delivered during the event. This stands in contrast to some other hackathons in the United States, which are sometimes set up more like unofficial career fairs. HackMIT clearly held a strong focus on community, and events with that kind of management and direction are where Fedora succeeds and has the most valuable impact.

Leaving the event, the Fedora team was confident that we had a powerful impact on the students. For many, Fedora was introduced not only as an operating system, but as a tool for getting things done: Fedora provides the tools and utilities students need to build their projects and drive them forward. Open source as a development practice was also introduced to many for the first time, or explained in more depth for those with a mild interest. These messages and the team's other engagements were warmly received.

Looking ahead

The Fedora Ambassadors of North America would like to extend special thanks to Red Hat and Tom Callaway for partnering to sponsor this event. Without Red Hat's help, attending would not have been possible. Our engagement and impact at HackMIT excites the Ambassador team, and we hope many students from the event turn to Fedora not only as an operating system, but as a tool in their expanding technological toolbox. Congratulations also go to the organizers of HackMIT for putting together a thoroughly planned and carefully executed event with a strong focus on community, which fits squarely within one of Fedora's four foundations, Friends.

We hope to return to Cambridge again next year!


You can read Charles Profitt’s event report on his blog.

The post HackMIT meets Fedora appeared first on Fedora Community Blog.

DNF 2.0.0 and DNF-PLUGINS-CORE 1.0.0 release candidate released in Fedora rawhide

Posted by DNF on October 03, 2016 08:20 AM

DNF-2.0 is available for testing! The next major release of DNF brings many user experience improvements, such as more understandable dependency-problem messages, weak dependencies shown in the transaction summary, more intuitive help output, and others. The repoquery plugin has been moved into DNF itself. The whole DNF stack release fixes over 60 bugs. DNF-2.0 focuses on getting rid of yum incompatibilities, i.e. treating the yum configuration options `include`, `includepkgs` and `exclude` the same way yum does. Unfortunately this release is not fully compatible with DNF-1. See the list of DNF-1 and DNF-2 incompatible changes and prepare for the upcoming official release; in particular, plugins will need to be ported to the new DNF argument parser. For a complete list of changes see the DNF and plugins release notes.
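As a reminder of what the options in question look like in practice, here is a minimal, hypothetical fragment (the package pattern is purely illustrative, not taken from the release notes):

```ini
# /etc/dnf/dnf.conf -- hypothetical example
[main]
# With DNF-2.0 this option is honored the same way yum honored it:
# matching packages are never considered in transactions.
exclude=kernel*
```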

What’s new in PostgreSQL 9.5

Posted by Fedora Magazine on October 03, 2016 08:00 AM

Fedora 24 ships with PostgreSQL 9.5, a major upgrade from version 9.4, which is included in Fedora 23. The new version provides several enhancements and new features, but also brings some compatibility changes, as is common between PostgreSQL major versions. Note that in the PostgreSQL versioning scheme, 9.4 and 9.5 are two different major versions; the first number is mostly marketing and increments when especially large features are introduced in a release.

New features and enhancements in 9.5

GROUPING SETS

PostgreSQL has traditionally been an OLTP database rather than an OLAP one, but this may change in the future; small steps like GROUPING SETS help on this path, since GROUPING SETS allow more complex aggregation (grouping) operations. CUBE and ROLLUP are then just specific variants of GROUPING SETS.
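As a sketch of what this buys you (the `sales` table and its columns are hypothetical), one query can produce several groupings in a single pass; columns not part of the current grouping come back as NULL, and the empty set `()` yields the grand total:

```sql
-- Aggregate the same data three ways at once:
-- per region, per product, and the overall total.
SELECT region, product, sum(amount) AS total
FROM sales
GROUP BY GROUPING SETS ((region), (product), ());
```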

ON CONFLICT

Users may also be very happy about the ON CONFLICT enhancement, which allows a statement to do something sane when it would otherwise generate a conflict. It is quite a general approach with two possible resolutions: we can either turn the INSERT statement into an UPDATE or ignore the statement entirely. This feature is often called UPSERT, and other DBMSs offer something very similar as the MERGE command; however, PostgreSQL's version is not a 100% implementation of the SQL standard's MERGE, so it is not called that. The native UPSERT should also be safer than hand-rolled alternatives, because upserts emulated with CTEs (common table expressions) can lead to race conditions if not written carefully. An example of inserting new tags into a database while ignoring duplicate records may look like this:

INSERT INTO tags (tag) VALUES ('PostgreSQL'),('Database') ON CONFLICT DO NOTHING;
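The other resolution turns the conflicting INSERT into an UPDATE instead. A sketch, assuming a hypothetical `uses` counter column and a unique constraint on `tag` (`EXCLUDED` refers to the row that was proposed for insertion):

```sql
-- If the tag already exists, bump its usage counter
-- instead of failing with a unique-constraint violation.
INSERT INTO tags (tag, uses) VALUES ('PostgreSQL', 1)
ON CONFLICT (tag)
DO UPDATE SET uses = tags.uses + EXCLUDED.uses;
```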

Row-level security control

Another feature that may substantially simplify SQL queries is row-level security control, which allows access checks on particular rows. For example, if every row includes information about who owns the record, the application would normally have to check that the currently logged-in user equals the owner column. With row-level security we may leave this to the DBMS; not only does that keep the application logic clean, we can also be a bit more confident that a potential attacker would not get around the check, because it is enforced one layer further down. An example of how to use row-level security control may look like this:

CREATE POLICY policy_article_user ON articles
FOR ALL TO PUBLIC
USING (user = current_user);

ALTER TABLE articles ENABLE ROW LEVEL SECURITY;

SELECT * FROM articles;
 id | user | title
----+------+--------------------
  1 | joe  | How I went to Rome
  4 | joe  | My Favorite Recipe
(2 rows)

With this, the currently logged-in user can only see the rows they created.

Other improvements

Have you ever tried connecting two database servers into one instance, so that the application does not need to manage connections to multiple servers separately? This was already possible using CREATE FOREIGN TABLE, but one needed to re-define every table with every single column, and of course change it all again once the structure of the foreign table changed.

From version 9.5 we can import a whole schema as easily as this. Not only do we get a simpler and less error-prone way to connect two remote databases, it can also be very handy for data migration:

IMPORT FOREIGN SCHEMA invoices
LIMIT TO (customers, customers_invoices)
FROM SERVER "invoice.example.com" INTO remote_invoices;
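For completeness, IMPORT FOREIGN SCHEMA assumes the foreign server and a user mapping already exist. A hypothetical setup using the postgres_fdw extension might look like this (all names, options, and credentials here are illustrative assumptions; note that a server name containing dots must be quoted as an identifier):

```sql
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- The server object; the quoted name is just an identifier,
-- the actual host is given in OPTIONS.
CREATE SERVER "invoice.example.com"
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'invoice.example.com', dbname 'invoices');

-- Credentials used when the current user talks to the remote side.
CREATE USER MAPPING FOR CURRENT_USER
  SERVER "invoice.example.com"
  OPTIONS (user 'app', password 'secret');

-- Local schema to hold the imported table definitions.
CREATE SCHEMA remote_invoices;
```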

What else do we find in 9.5? There are of course several performance enhancements, but that is almost a given for every release, right? What is not common for every release is a brand new index type: BRIN (Block Range Index). According to the documentation, it is designed for handling very large tables in which certain columns have some natural correlation with their physical location within the table. In such cases bitmap index scans may be used, and the performance of analytical queries in particular might be substantially better.
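Creating one is a one-liner; a sketch on a hypothetical append-only `events` table, where `created_at` naturally correlates with physical row order (exactly the case BRIN is designed for):

```sql
-- A BRIN index stores only per-block-range summaries,
-- so it is tiny compared to a B-tree over the same column.
CREATE INDEX events_created_brin
  ON events USING brin (created_at);
```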

Upgrading from PostgreSQL 9.4

Note that upgrading from PostgreSQL 9.4 is not automatic. If you already used PostgreSQL in a previous version, you need to perform the upgrade yourself; the upgrade procedure, as is common with PostgreSQL, requires admins to run the steps manually. Fedora helps here a lot by providing the postgresql-setup binary, which accepts either `--initdb` (for initializing the data directory) or `--upgrade`, and takes care of the whole procedure almost automatically.

Warning: Do not forget to back-up all your data before proceeding with the upgrade.

After system upgrade (F23 to F24 in this case), you will probably see something like this after trying to run PostgreSQL server:

$ sudo systemctl start postgresql.service
Job for postgresql.service failed because the control process exited with error code. See "systemctl status postgresql.service" and "journalctl -xe" for details.

That’s because server knows about version of the datadir and refuses to start to not break anything. So let’s proceed with upgrade — install the upgrade subpackage first:

$ sudo dnf install postgresql-upgrade

Then run upgrade itself:

$ sudo postgresql-setup --upgrade
 * Upgrading database.
 * Upgraded OK.
WARNING: The configuration files were replaced by default configuration.
WARNING: The previous configuration and data are stored in folder
WARNING: /var/lib/pgsql/data-old.
 * See /var/lib/pgsql/upgrade_postgresql.log for details.

At this point you should really look at the log file as suggested, but you should also be able to start the service now:

$ sudo systemctl start postgresql.service
$ sudo systemctl status postgresql.service

And that’s all. Easy, right? As always, any feedback is welcome.

When every Beta closes another Alpha opens…

Posted by Mary Shakshober on October 03, 2016 12:46 AM

As many of you may know, the deadlines for Beta packaging for Fedora 25 have recently come and gone. With this said, designs for the default wallpaper are underway, and I'm continuing to work through quirks in the design in order to capture the subtle, yet bold and memorable aesthetic present in Fedora wallpapers. Getting closer to the Alpha package deadline, I figured I'd post another progress picture of where I'm at so far. Be sure to check out https://fedorahosted.org/design-team/ticket/473 for more information on the background and thought process of the design as well!

f25-wallpaper_alpha_attempt1


The Deletion of gcj

Posted by Tom Tromey on October 02, 2016 10:36 PM

I originally posted this on G+ but I thought maybe I should expand it a little and archive it here.

The patch to delete gcj went in recently.

When I was put on the gcj project at Cygnus, I remember thinking that Java was just a fad and that this was just a temporary thing for me. I wasn’t that interested in it. Then I ended up working on it for 10 years.

In some ways it was the high point of my career.

Socially it was fantastic, especially once we merged with the Classpath community — I’ve always considered Mark Wielaard’s leadership in that community as the thing that made it so great.  I worked with and met many great people while working on gcj and Classpath, but I especially wanted to mention Andrew Haley, who is the single best debugger I’ve ever met, and who stayed in the Java world, now working on OpenJDK.

We also did some cool technical things in gcj. The binary compatibility ABI was great, and the split verifier was very fun to come up with.  Per Bothner’s early vision for gcj drove us for quite a while, long after he left Cygnus and stopped working on it.

On the downside, gcj was never quite up to spec with Java. I’ve met Java developers even as recently as last year who harbor a grudge against gcj.

I don’t apologize for that, though. We were trying something difficult: to make a free Java with a relatively small team.

When OpenJDK came out, the Sun folks at FOSDEM were very nice to say that gcj had influenced the opening of the JDK. Now, I never truly believed this — I’m doubtful that Sun ever felt any heat from our ragtag operation — but it was very gracious of them to say so.

Since the gcj days I’ve been searching for basically the same combination that kept me hacking on gcj all those years: cool technology, great social environment, and a worthwhile mission.

This turned out to be harder than I expected. I’m still searching. I never thought it was possible to go back, though, and with this deletion, this is clearer than ever.

There’s a joy in deleting code (though in this case I didn’t get to do the deletion… grrr); but mainly this weekend I’m feeling sad about the final close of this chapter of my life.

Rawhide notes from the trail, the 2016-10-02 edition

Posted by Kevin Fenzi on October 02, 2016 07:58 PM

It’s once again been a while since one of these posts, but again I’m going to try and do them more often again.

This last week had a few big changes in rawhide:

  • With the Fedora 26 change approval last week ( https://fedoraproject.org/wiki/Changes/DNF-2.0 ) DNF-2.0 has landed in rawhide. Things were a bit shaky at first as the compose tools were not also updated, but anaconda folks were ready with a patch and lorax got a patch from the DNF maintainers and everything landed. There’s still one big issue outstanding: dnf 2.0 sets strict group installs back to true, which causes all the live media to fail to compose. Some of our comps groups contain packages that only exist in some arches, not all of them, and when dnf does strict group installs it errors when it cannot find all the packages in the group. I’ve filed a bug for figuring this out: https://bugzilla.redhat.com/show_bug.cgi?id=1380945
  • Last year we set up a quick and dirty autosigning setup for rawhide, which was just a script in a loop run by releng folks when they remembered to do so. This meant it wasn't all that reliable. Now, thanks to Patrick Uiterwijk, we have a real autosigning setup implemented and are very, very close to an always-signed rawhide. The final bit of gating needs an update to fedpkg to go out first, but as of last week, almost everything should be signed, with only the rare issue with a build that lands right before the compose starts.
  • There has been some work over the last few weeks to produce an rpm-ostree for rawhide that is updated every few minutes. Once this is available it will be a great help for testing and running rawhide.
  • Xorg server 1.19 almost landed, but we need to fix tigervnc before it can (at least without breaking all the install paths for rawhide). That is being fixed in a side tag and should land as soon as tigervnc is fixed. https://bugzilla.redhat.com/show_bug.cgi?id=1380879 is tracking that work
  • A while back, changes were made to cause composes to fail when some deliverables don't compose. This has been very helpful in making sure changes that land don't break things. New package builds that cause the entire compose to fail can be untagged until the breakage is fixed.

That’s it for now from the rawhide trail…

F24-20161001 updated Lives released

Posted by Ben Williams on October 02, 2016 01:58 PM

A new kernel means a new set of updated lives.

I am happy to release the F24-20161001 updated live ISOs with kernel-4.7.5-200.

As always, these respins can be found at http://tinyurl.com/Live-respins2

These updated ISOs contain all the updates as of the date of creation. YMMV (a gold MATE install's updates as of 20161001 total 711M).

I would like to thank the community and the seeders for their dedication to this project.


How to reignite a flamewar in one tweet (and I still don’t get it)

Posted by Kevin Fenzi on October 02, 2016 01:03 AM

Some of you may have seen this post last week:

https://www.agwa.name/blog/post/how_to_crash_systemd_in_one_tweet

I saw it, I think, shortly after it was posted to Hacker News (aside: I read Hacker News via its RSS feed, which has just the raw links and no comments or ratings; much nicer IMHO).

My first thought on reading it was "Did they report this upstream?" and upon some digging, yes they did (https://github.com/systemd/systemd/issues/4234). It was filed at the same time they posted their blog post/tweet (no responsible disclosure here). My next thought was to see if it worked, so I tried it on several of my test machines, and it did not. It turns out you have to run it in a loop (as noted now in the blog and seen in the systemd issue). At that point I saw that there was a Fedora tracking bug, someone had requested a CVE, and there were patches proposed on the systemd issue (which has since been closed and the fix merged), so I moved on with life.

Next, someone posted the blog link to the fedora users list along with a rebuttal from a systemd maintainer (https://medium.com/@davidtstrauss/how-to-throw-a-tantrum-in-one-blog-post), and the discussion went downhill quickly from there. Finally, I saw a slashdot post today with the calm, journalistic title of Multiple Linux Distributions Affected By Crippling Bug In Systemd (which points to the blog post above that started all this).

So, everyone is flaming everyone else again about systemd, and I just don't get it. My personal relationship with systemd is well within the range of all the other Open Source projects I use every day. Mostly I am happy with it; sometimes it has bugs and I report them, sometimes the project makes decisions I don't like or agree with, sometimes the project doesn't communicate with downstream as well as I would like (see the fedora devel list post about systemd and TasksMax), sometimes I have to learn (again) how to do what I want to do, sometimes it's slow when I want it to be fast, and so on. Would a local denial of service attack in another project have (re)ignited such a flood of flames? There was, in fact, just the day before the post, a remote denial of service in bind (the most popular DNS server out there). Did you even hear about it?

I just don’t get the passionate hatred systemd has. I guess perhaps I never will, and I guess thats ok. I would kindly ask those who do passionately dislike systemd to just move on to somewhere without it and leave the rest of us alone, but I don’t expect thats likely to happen. In the mean time if you have some technical problem with systemd, I will be happy to help you isolate it and file a bug or teach you how to do whatever you are trying to do with it, but I’m going to try and stay out of your flamewars.

From NFS to LizardFS

Posted by Jonathan Dieter on September 30, 2016 08:59 PM

If you’ve been following me for a while, you’ll know that we started our data servers out using NFS on ext4 mirrored over DRBD, hit some load problems, switched to btrfs, hit load problems again, tried a hacky workaround, ran into problems, dropped DRBD for glusterfs, had a major disaster, switched back to NFS on ext4 mirrored over DRBD, hit more load problems, and finally dropped DRBD for ZFS.

As of March 2016, our network looked something like this:

Old server layout

Old server layout

Our NFS over ZFS system worked great for three years, especially after we added SSD cache and log devices to our ZFS pools, but we were starting to overload our ZFS servers and I realized that we didn’t really have any way of scaling up.

This pushed me to investigate distributed filesystems yet again. As I mentioned here, distributed filesystems have been a holy grail for me, but I never found one that would work for us. Our problem is that our home directories (including config directories) are stored on our data servers, and there might be over one hundred users logged in simultaneously. Linux desktops tend to do a lot of small reads and writes to the config directories, and any latency bottlenecks tend to cascade. This leads to an unresponsive network, which then leads to students acting out the Old Testament practice of stoning the computer. GlusterFS was too slow (and almost lost all our data), CephFS still seems too experimental (especially for the features I want), and there didn’t seem to be any other reasonable alternatives… until I looked at LizardFS.

LizardFS (a completely open source fork of MooseFS) is a distributed filesystem that has one fascinating twist: All the metadata is stored in RAM. It gets written out to the hard drive regularly, but all of the metadata must fit into the RAM. The main result is that metadata lookups are rocket-fast. Add to that the ability to direct different paths (say, perhaps, config directories) to different storage types (say, perhaps, SSDs), and you have a filesystem that is scalable and fast.
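To illustrate that path-to-storage mapping: chunkservers are tagged with labels, a custom goal maps copies to those labels, and the goal is applied to a directory tree. The file names, labels, and exact syntax below are my assumptions from memory; check the LizardFS documentation before copying anything:

```
# /etc/mfs/mfschunkserver.cfg on the SSD-backed chunkservers (assumed syntax)
LABEL = ssd

# /etc/mfs/mfsgoals.cfg on the master: goal id, name, and the labels of the
# chunkservers that should hold each copy (assumed syntax)
10 fast_config : ssd ssd

# then, from a client mount, pin the config directories to that goal
lizardfs setgoal -r fast_config /mnt/lizardfs/home
```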

LizardFS does have its drawbacks. You can run hot backups of your metadata servers, but only one will ever be the active master at any one time; if it goes down, you have to manually switch one of the replicas into master mode. LizardFS also has a fairly involved upgrade procedure: first the metadata replicas must be upgraded, then the master, and finally the clients. Lastly, there are some corner cases where replication is not as robust as I would like it to be, but they seem to be well understood and really only seem to affect very new blocks.

So, given the potential benefits and drawbacks, we decided to run some tests. The results were instant… and impressive. A single user’s login time on a server with no load… doubled. Instead of five seconds, it took ten for them to log in. Not good. But when a whole class logged in simultaneously, it took only 15 seconds for them to all log in, down from three to five minutes. We decided that a massive speed gain in the multiple user scenario was well worth the speed sacrifice in the single-user scenario.

Another bonus is that we've gone from two separate data servers with two completely different filesystems (only one of which ever had high load) to five data servers sharing the load while serving out one massive filesystem, giving us a system that now looks like this:

New server setup

New server layout

So, six months on, LizardFS has served us well, and will hopefully continue to serve us for the next (few? many?) years. The main downside is that Fedora doesn’t have LizardFS in its repositories, but I’m thinking about cleaning up my spec and putting in a review request.

Updated to add graphics of old and new server layouts, info about Fedora packaging status, LizardFS bug links, and remove some grammatical errors


Weechat-Tmux

Posted by farhaan on September 30, 2016 06:16 PM

Recently I went to PyCon India (will blog about that too!), where Sayan and Vivek introduced me to WeeChat, a terminal-based IRC client. From the moment I saw Sayan's WeeChat configuration, I was hooked.

That same night I started configuring my WeeChat. It's such a beautiful IRC client that I regretted not using it before; it just transforms your terminal into an IRC window.

On Fedora you just need to do:

sudo dnf install weechat

Some of the configuration and plugins you need are :

  1. buffer
  2. notify-send

That’s pretty much it but that doesn’t stop there you can make that client little more aesthetic.  You can set weechat by using their documentation.

The clean design kind of makes you feel happy, and adding a plugin is not at all a pain: in the WeeChat window you just say /script install buffer.pl and it installs in no time. There are various external plugins in case you want to use them, and writing a plugin is actually fun (I have not tried that yet).

screenshot-from-2016-09-30-23-02-13

I also used to use a bigger font, but now I find this size more soothing to the eyes. It is because of WeeChat that I got to explore this beautiful tool called tmux, because in a plain terminal WeeChat lags: keystrokes somehow arrive after 5-6 seconds, which makes for a bad user experience. I pinged people in the #weechat channel on IRC with my query; the community is amazing, and they helped me set it up and use it efficiently. They told me to use tmux or screen, and with tmux my sessions are persistent and lag-free.

To install tmux on fedora:

sudo dnf install tmux

tmux is a terminal multiplexer, which means it can extend one terminal screen into many screens. I got to learn a lot of tmux concepts like sessions, panes and windows; once you know these, tmux is really a fun ride. Of the blogs I went through for configuring and using tmux, the best I found was Ham Vocke's; the whole series is pretty amazing. My workflow is that for every project I am working on, I have a tmux session named after it, created with the command:

tmux new-session -s <session_name>
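The attach/detach workflow around named sessions boils down to a few commands (session names here are just examples):

```shell
tmux new-session -s weechat   # create a named session
# press Ctrl-b, then d, to detach and leave it running in the background
tmux ls                       # list running sessions
tmux attach -t weechat        # re-attach to the "weechat" session
tmux switch-client -t work    # jump to another session from inside tmux
```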

Switching between sessions is done by detaching and attaching, and I have one persistent session always running WeeChat. I thought I had explored everything in tmux, but then I came to know there is a powerline for tmux too, which makes it even more amazing. This is how a typical tmux session with powerline looks:

screenshot-from-2016-09-30-23-31-10

I am kind of loving the new setup and enjoying it. I am also constantly using tmux cheatsheet :P because it’s good to look up what else you can do and also I saw various screencast on youtube where  tmux+vim makes things amazing.

Do let me know how you like my setup, or how you use yours.

Till then, Happy Hacking!🙂