February 08, 2017

This year’s Plasma Sprint is kindly being hosted by von Affenfels, a software company in Stuttgart, Germany, focusing on mobile apps. Let me try to give you an idea of what we’re working on this week.

Bundled apps

Welcome, KDE hackers!
One problem we’re facing in KDE is that on Linux, our most important target platform, we depend on Linux distributors to ship our apps and their updates. This is problematic on the distro side, since the packaging work has to be duplicated by many different people, but it’s also a problem for application developers, since it may take weeks, months or forever until an update becomes available to users. This is a serious problem and puts us far, far behind, for example, the deployment cycles of webapps.

Bundled app technologies such as flatpak, appimage and snap solve this problem by allowing us to create one of these packages and deploy them across a wide range of distributions. That means that we could go as far as shipping apps ourselves and cutting out the distros as middle men. This has a bunch of advantages:

  • Releases and fixes can reach the user much quicker as we don’t have to wait for distros with their own cycles, policies and resources to pick up our updates
  • Users can easily get the latest version of the software they need, without being bound to what the distro ships
  • Packaging and testing effort is vastly reduced as it has to only be done once, and not for every distro out there
  • Distros with less manpower, which may not be able to package and offer a lot of software, can make many more applications available,…
  • …and at the same time concentrate their efforts on the core of their OS

From a Plasma point of view, we want to concentrate on a single technology, not three of them. My personal favorite is flatpak, as it is technologically the most advanced and doesn’t rely on a proprietary and centralized server component. Unless Canonical changes the way they control snaps, flatpak should be the technology KDE concentrates on. This hasn’t been formally decided, however, and the jury is still out. I think it’s important to realize that KDE isn’t served by adopting, for a process as important as software distribution, a technology that could be switched off by a single company. This would pose an unacceptable risk, and it would send the wrong signal to the rest of the Free software community.

How would this look to the user? I can imagine KDE shipping applications directly. We already build our code on pretty much every commit, and we are actually the ones who know how to build it properly. We’d integrate this seamlessly into Discover through the KDE store, and users should be able to install our applications very easily, perhaps similarly to openSUSE’s

Website work

Hackers hacking.

We started off the meeting by going over and categorizing topics and then dove straight into the first topic: Communication and Design. There’s a new website for Plasma (and the whole of KDE) coming, thanks to the tireless work of Ken Vermette. We went over most of his recent work to review and suggest fixes, but also to get a bit excited about this new public face of Plasma. The website is part of a bigger problem: In KDE, we’re doing lots of excellent work, but we fail to communicate it properly, regularly and in ways and media that reach our target audience. In fact, we haven’t even clearly defined the target audience. This is something we want to tackle in the near future as well, so stay tuned.

But also webbrowsers….

KDE Plasma in 2017

Kai Uwe demoed his work on better browser integration: native notifications instead of the out-of-place notifications created by the browser, controls for media player integration between Plasma and the browser (so your album artwork gets shown in the panel’s media controller), access to tabs, closing incognito tabs from Plasma (all at once or individually), and a few more cool features. Plasma already has most of this functionality, so the bigger part of this has to live in the browser. Kai has implemented the browser side of things as an extension for Chromium (that’s what he uses; Firefox support is also planned), and we’re discussing how we can bring this extension to the attention of users, possibly preinstalling it so you get the improvements in browser integration without having to spend a thought on it.

On and on…

We only just started our sprint, and there are many more things we’re working on and discussing. The above is my account of some things we discussed so far, but I’m planning to keep you posted.

on February 08, 2017 10:39 AM

February 07, 2017

Anyone using the internet in Europe and the US on 21st October last year experienced what economists call an externality.

It arrived in the form of a massive 1.2 Tbps DDoS attack on Dyn, a US-based internet infrastructure company. This, in turn, triggered outages at multiple sites – including PayPal, Twitter, Amazon and Netflix.

The attack was launched by a piece of malware called Mirai, which coordinated millions of compromised IP-connected devices, including DVRs and cameras. According to the security firm Flashpoint, the likely authors of the attack were talented amateurs: script kiddies.

Security breaches always impose a cost on innocent parties. Most consumers would describe this as a variant on Murphy’s Law. PayPal, Twitter, Amazon and Netflix probably view it as economic sabotage. Economists, by contrast, use the e-word to describe this kind of thing. Externalities are the hidden costs of doing business that tell us markets are working imperfectly.

Whatever you want to call it, the risks involved in IoT security are immense. If Netflix goes dark while you’re watching a box set, that’s one thing. If pacemakers crash and automobiles veer off course, that’s something very different. At the point where the digital world blurs into the physical, risks to human life become evident. For obvious reasons, the Dyn attack sparked a high-level debate about the state of IoT security.

So here’s a question: in IoT, who is responsible for closing down the space in which externalities like DDoS attacks can occur?

Clearly, the script kiddies have a lot to answer for (though it remains unlikely that they will pay a penalty). This leaves us with two targets:

  • Device vendors who understand the risks, but don’t mitigate them
  • Consumers who don’t understand the risks and don’t care about them

It’s easy enough for us, inside the industry, to criticise consumers.

But take a look at the scale of the attack surface generated by ignorance. It’s enormous. In a recent survey, which you can read about in more detail in this white paper, we asked consumers for their views on the security of connected devices. Here’s what they told us:

  • 57% said the job of securing devices is clearly the responsibility of vendors
  • 48% said they didn’t know that connected devices in the home could be used to conduct a cyberattack
  • 40% said they had never consciously performed updates on their devices
  • 37% admitted they were not “sufficiently aware” of the risks

It’s clear that we will have our work cut out to educate a sufficiently large number of individuals about the need to, at a minimum, rewrite default credentials and install firmware updates.

So let’s turn to the device vendors who understand IoT security risks, but don’t mitigate them.

Clearly, these vendors have the power to close down the space for externalities like IoT-mediated DDoS attacks. (For an overview of what’s wrong with cheap consumer IoT devices, take a look at this post by Brian Krebs, who was himself the victim of a similar IoT-mediated DDoS attack last September.)

Now it’s perfectly understandable to read an analysis like this and leap straight to the recommendation that regulation is the answer.

Among those urging us down this route is Bruce Schneier, the veteran security analyst and thinker. In a long essay last month, Schneier wrote: “Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.”

Schneier may well be correct. Regulation is the classic response to externalities and market failure. But once again, this will be an enormous undertaking. Governments don’t move fast. And they are already well behind the pace of IoT deployment.

So where does this leave us? Well, in addition to clueless consumers and slow-moving government, there’s a third option for mitigation: the possibility of better and smarter architectures – at network and device level.

Innovation may not be the only solution, but it will play a major role in securing the IoT. With that in mind, we suggest you take a look at Ubuntu Core – a tiny version of Ubuntu designed specifically for IoT.

While we wait for consumers to get educated, and for governments to do their thing, let’s build a better IoT, using a purpose-built OS that takes security seriously: Ubuntu Core.

Learn more about current approaches to IoT security and why they aren’t working in Taking charge of the IoT’s security vulnerabilities

Download the white paper

on February 07, 2017 04:16 PM

ansible deploying video boxes

Mark Van den Borre

 
This is what an ansible deploy of the https://fosdem.org video boxes looked like... More info to come.
on February 07, 2017 01:14 PM

In the tutorial How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04 we saw how to create a snap with snapcraft for the CLI utility called how2. That was a software based on nodejs.

In this post we will repeat the process for another CLI utility called howdoi by Benjamin Gleitzman, which does a similar task to how2 but is implemented in Python and has a few usability differences as well. howdoi does not yet have a package in the repositories of Ubuntu either.

Since we already covered the details in How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04, this post will be more focused, and shorter. 🙂

Planning

Reading through https://github.com/gleitz/howdoi we see that howdoi

  1. is software based on Python (therefore: plugin: python)
  2. requires networking (therefore: plugs: [network])
  3. and has no need to save files (therefore it does not need access to the filesystem)

Crafting with snapcraft

Let’s start with snapcraft.

$ mkdir howdoi
$ cd howdoi/
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started

Now we edit snap/snapcraft.yaml and here are our changes (in bold) from the initial generated file.

$ cat snap/snapcraft.yaml 
name: howdoi # you probably want to 'snapcraft register <name>'
version: '20170207' # just for humans, typically '1.2+git' or '1.3.2'
summary: instant coding answers via the command line # 79 char long summary
description: |
  Are you a hack programmer? Do you find yourself constantly Googling 
  for how to do basic programming tasks?
  Suppose you want to know how to format a date in bash. Why open your browser 
  and read through blogs (risking major distraction) when you can simply 
  stay in the console and ask howdoi.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  howdoi:
    command: howdoi
    plugs: [network]

parts:
  howdoi:
    plugin: python
    source: https://github.com/gleitz/howdoi.git

First, we selected the name howdoi because, again, it’s not a reserved name :-). Also, we registered it with snapcraft,

$ snapcraft register howdoi
Registering howdoi.
Congratulations! You're now the publisher for 'howdoi'.

Second, we did not notice a particular branch or tag for howdoi, therefore we put the date of the snap creation.

Third, the summary and the description are just pasted from the Readme.md of the howdoi repository.

Fourth, we select the grade stable and enforce the strict confinement.

The apps: howdoi: command: howdoi is the standard sequence to specify the command that will be exposed to the user. The user will be typing howdoi and the command howdoi inside the snap will be invoked.

The parts: howdoi: plugin: python source: … is the standard sequence to specify that the howdoi that was referenced just earlier, is software written in Python and the source comes from this github repository.

Let’s craft the snap.

$ snapcraft 
Preparing to pull howdoi 
...                                                                     
Pulling howdoi 
...
Preparing to build howdoi 
Building howdoi 
...
Successfully built howdoi
...
Installing collected packages: howdoi, cssselect, Pygments, requests, lxml, pyquery, requests-cache
Successfully installed Pygments-2.2.0 cssselect-1.0.1 howdoi-1.1.9 lxml-3.7.2 pyquery-1.2.17 requests-2.13.0 requests-cache-0.4.13
Staging howdoi 
Priming howdoi 
Snapping 'howdoi' |                                                                       
Snapped howdoi_20170207_amd64.snap
$ snap install howdoi_20170207_amd64.snap --dangerous
howdoi 20170207 installed
$ howdoi format date bash
DATE=`date +%Y-%m-%d`
$ _

Beautiful! It worked!
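The snippet howdoi returned is itself easy to verify; here is a quick shell check (the DATE variable name comes straight from the answer):

```shell
# Run the snippet howdoi suggested and confirm it yields an ISO-style date.
DATE=`date +%Y-%m-%d`
echo "$DATE"    # prints today's date, e.g. 2017-02-07
```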

Publish to the Ubuntu Store

Let’s publish the snap to the Ubuntu Store. We are going to push the file howdoi_20170207_amd64.snap and then check that it has passed the automatic checking. Once it has done so, we release to the stable channel.

$ snapcraft push howdoi_20170207_amd64.snap 
Pushing 'howdoi_20170207_amd64.snap' to the store.
Uploading howdoi_20170207_amd64.snap [=============================================================] 100%
Ready to release!|                                                                                       
Revision 1 of 'howdoi' created.

Just a reminder: We can release the snap to the stable channel simply by running snapcraft release howdoi 1 stable. The alternative to this command is to do all of the following through the Web.

We log in to https://myapps.developer.ubuntu.com/ to check whether the snap is ready to publish. In the following screenshots, you would click where the arrows are pointing. See the captions for explanations.

Here is the uploaded snap in our account page in the Ubuntu Store. The snap was uploaded using snapcraft, although it is also possible to upload it from the account page.

 

The package (the snap) is ready to publish, because it passed the automated tests and was not flagged for manual review.

By default, the package has not been released to a channel. We click on Release in order to select which channels to release it to.

For this specific package, we select the stable channel. It is not necessary to select the other channels, because by default a higher channel implies those below. Then, click on the Release button.

The package got released, and it is shown as released in stable, candidate, beta and edge (we selected stable, but the rest are implied because “stable” beats the rest). Note that the Package status has changed to “Published”, and we have the option to Unpublish or even Make private. Ignore the arrow, it was pasted by mistake.

on February 07, 2017 11:42 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #497 for the week January 30 – February 5, 2017, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.

on February 07, 2017 01:55 AM

Stackoverflow is an invaluable resource for questions related to programming and other subjects.

Normally, the workflow for searching http://stackoverflow.com/ is to search Google using a Web browser. Most probably, the result will be a question from Stack Overflow.

A more convenient way to query StackOverflow, is to use the how2 command-line utility.

Here is how it looks:

In this HowTo, we will see:

  1. How to set up snapcraft in order to make the snap
  2. How to write the initial snapcraft.yaml configuration
  3. How to build the snap with trial and error
  4. How to create the final snap
  5. How to make the snap available in the Ubuntu Store

Set up snapcraft

snapcraft is a utility that helps us create snaps. Let’s install snapcraft.

$ sudo apt update
...
Reading state information... Done
All packages are up to date.
$ sudo apt install snapcraft
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  snapcraft
...
Preparing to unpack .../snapcraft_2.26_all.deb ...
Unpacking snapcraft (2.26) ...
Setting up snapcraft (2.26) ...
$_

In Ubuntu 16.04, snapcraft was updated in early February and has a few differences from the previous version. Make sure you have snapcraft 2.26 or newer.

Let’s create a new directory for the development of the how2 snap and initialize it with snapcraft, so that it creates the necessary initial files.

$ mkdir how2
$ cd how2/
$ snapcraft init
Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
$ ls -l
total 4
drwxrwxr-x 2 myusername myusername 4096 Feb   6 14:09 snap
$ ls -l snap/
total 4
-rw-rw-r-- 1 myusername myusername 676 Feb   6 14:09 snapcraft.yaml
$ _

We are in this how2/ directory and from here we run snapcraft in order to create the snap. snapcraft will take the instructions from snap/snapcraft.yaml and do its best to create the snap.

These are the initial contents of snap/snapcraft.yaml:

name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap
  store.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil

I have formatted the first chunk of configuration lines of snapcraft.yaml in italics, because this chunk is what rarely changes while you develop the snap. The other chunk is where the actual actions take place. It is good to distinguish between these two chunks.

This snap/snapcraft.yaml configuration file is actually usable and can create an (empty) snap. Let’s create this empty snap, install it, uninstall it and then clean up to the initial pristine state.

$ snapcraft 
Preparing to pull my-part 
Pulling my-part 
Preparing to build my-part 
Building my-part 
Staging my-part 
Priming my-part 
Snapping 'my-snap-name' |                                                                 
Snapped my-snap-name_0.1_amd64.snap
$ snap install my-snap-name_0.1_amd64.snap 
error: cannot find signatures with metadata for snap "my-snap-name_0.1_amd64.snap"
$ snap install my-snap-name_0.1_amd64.snap --dangerous
error: cannot perform the following tasks:
- Mount snap "my-snap-name" (unset) (snap "my-snap-name" requires devmode or confinement override)
Exit 1
$ snap install my-snap-name_0.1_amd64.snap --dangerous --devmode
my-snap-name 0.1 installed
$ snap remove my-snap-name
my-snap-name removed
$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
$ ls
my-snap-name_0.1_amd64.snap  snap/
$ rm my-snap-name_0.1_amd64.snap 
rm: remove regular file 'my-snap-name_0.1_amd64.snap'? y
removed 'my-snap-name_0.1_amd64.snap'
$ _

While developing the snap, we will be going through this cycle of creating the snap, testing it and then removing it. There are ways to optimize this process a bit, which we will learn soon.

In order to install the snap from a .snap file, we had to use --dangerous because the snap has not been digitally signed. We also had to use --devmode because snapcraft.yaml specifies the developer mode, which is a relaxed (in terms of permissions) development mode.

Writing the snapcraft.yaml for how2

Here is the first chunk of snapcraft.yaml, the chunk that does not change while developing the snap.

name: how2 # you probably want to 'snapcraft register <name>'
version: '20170206' # just for humans, typically '1.2+git' or '1.3.2'
summary: how2, stackoverflow from the terminal
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.

grade: stable # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

We specify the name and the version of the snap. The name is not already registered, and it is not reserved, because

$ snapcraft register how2
Registering how2.
Congratulations! You're now the publisher for 'how2'.

We add a suitable summary and description that was copied conveniently from the development page of how2.

We set the grade to stable so that the snap can make it to the stable channel and be available to anyone.

We set the confinement to strict, which means that by default the snap will have no special access (no filesystem access, no network access, etc) unless we carefully allow what is really needed.

Here goes the other chunk.

apps:
  how2:
    command: how2

parts:
  how2:
    plugin: nodejs
    source: https://github.com/santinic/how2.git

How did we write this other chunk?

The apps: how2: command: how2 sequence is generic. That is, we specify an app that we name how2, and it is invoked as a command with the name how2. The command could also be bin/how2 or node how2. We will figure out later whether we need to change it, because snapcraft will show an error message if it is wrong.

The parts: how2: plugin: nodejs part is also generic. We know that how2 is built on nodejs, and we figured that out from the github page of how2. Then, we looked into the list of plugins for snapcraft and found the nodejs plugin page. At the end of the nodejs plugin page there is a link to examples of the use of nodejs in snapcraft.yaml. This link is actually a search on github with the search terms filename:snapcraft.yaml “plugin: nodejs” (in all files that are named snapcraft.yaml, search for “plugin: nodejs”). For this search to work, you need to be logged in to GitHub first. For the specific case of nodejs, we can try without additional parameters, as most examples do not use special parameters.

Work on the snapcraft.yaml with trial and error

We come up with the following snapcraft.yaml by piecing together the chunks from the previous section:

$ cat snap/snapcraft.yaml 
name: how2 # you probably want to 'snapcraft register <name>'
version: '20170206' # just for humans, typically '1.2+git' or '1.3.2'
summary: how2, stackoverflow from the terminal
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  how2:
    command: how2
    plugs:
      - network

parts:
  how2:
    plugin: nodejs
    source: https://github.com/santinic/how2.git

Let’s run snapcraft in order to build the snap.

$ snapcraft clean
Cleaning up priming area
Cleaning up staging area
Cleaning up parts directory
$ snapcraft 
Preparing to pull how2 
Pulling how2 
...
Downloading 'node-v4.4.4-linux-x64.tar.gz'[===============================] 100%
npm --cache-min=Infinity install
...
Preparing to build how2 
Building how2 
...
Staging how2 
Priming how2 
Snapping 'how2' |                                                                              
Snapped how2_20170206_amd64.snap
$ _

Wow, it successfully created the snap on the first try! Let’s install it and then test it.

$ sudo snap install how2_20170206_amd64.snap --dangerous
how2 20170206 installed
$ how2 read file while changing
/Cannot connect to Google.
Error: Error on response:Error: getaddrinfo EAI_AGAIN www.google.com:443 : undefined
$ _

It runs, and the only problem left is the confinement. We need to allow the snap to access the Internet, and only the Internet.

Add the ability to access the Internet

To be able to access the network, we need to relax the confinement of the snap and allow access to the network interface.

There is an identifier called plugs, which accepts an array of interface names from the list of available interfaces.

In snapcraft.yaml, you can specify such an array in either of the following formats:

plugs: [network]
or
plugs:
  - network

Here is the final version of snapcraft.yaml for how2:

name: how2 # you probably want to 'snapcraft register <name>'
version: '20170206' # just for humans, typically '1.2+git' or '1.3.2'
summary: how2, stackoverflow from the terminal
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.

grade: devel # must be 'stable' to release into candidate/stable channels
confinement: strict # use 'strict' once you have the right plugs and slots

apps:
  how2:
    command: how2
    plugs: [ network ]

parts:
  how2:
    plugin: nodejs
    source: https://github.com/santinic/how2.git

Let’s create the snap, install and run the test query.

$ snapcraft 
Skipping pull how2 (already ran)
Skipping build how2 (already ran)
Skipping stage how2 (already ran)
Skipping prime how2 (already ran)
Snapping 'how2' |                                                                              
Snapped how2_20170206_amd64.snap
$ sudo snap install how2_20170206_amd64.snap --dangerous
how2 20170206 installed
$ how2 read file while changing
terminal - Output file contents while they change

You can use tail command with -f  :


   tail -f /var/log/syslog 

It's good solution for real time  show.


Press SPACE for more choices, any other key to quit.

That’s it! It works fine!
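The tail suggestion in the answer is easy to try out locally as well. A minimal sketch (the /tmp/sample.log file name is made up for the example; tail -f blocks while it follows the file, so it is left commented out):

```shell
# Create a small sample log, then show its last two lines with tail.
printf 'line1\nline2\nline3\n' > /tmp/sample.log
tail -n 2 /tmp/sample.log    # prints line2 and line3 (the last two lines)
# tail -f /tmp/sample.log    # would keep printing lines as they are appended
```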

Make the snap available in the Ubuntu Store

The command snapcraft push will upload the .snap file to the Ubuntu Store. Then, we use the snapcraft release command to release the snap into the beta channel of the Ubuntu Store. Because we specified the grade as devel, we cannot release to the stable channel. When we release a snap to the beta channel, it is considered released to the edge channel as well (because beta is higher than edge).

$ snapcraft push how2_20170206_amd64.snap 
Pushing 'how2_20170206_amd64.snap' to the store.
Uploading how2_20170206_amd64.snap [====================================================================] 100%
Ready to release!|                                                                                            
Revision 1 of 'how2' created.
$ snapcraft release how2 1 stable
Revision 1 (strict) cannot target a stable channel (stable, grade: devel)
$ snapcraft release how2 1 beta
The 'beta' channel is now open.

Channel    Version    Revision
stable     -          -
candidate  -          -
beta       20170206   1
edge       ^          ^
$ _
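The channel table snapcraft printed follows a simple rule: a revision released to one channel is also available in every channel below it. Here is an illustrative shell sketch of that rule (just a model for clarity, not how the store actually implements it):

```shell
# Channels ordered from highest (stable) to lowest (edge). Releasing a
# revision to one channel also makes it available in every lower channel.
channels="stable candidate beta edge"
released_to="beta"    # we released revision 1 to beta
available=0
for c in $channels; do
  [ "$c" = "$released_to" ] && available=1
  if [ "$available" -eq 1 ]; then
    echo "$c: 20170206 (1)"
  else
    echo "$c: -"
  fi
done
```

Running this prints the same availability per channel that the snapcraft release table shows: nothing in stable and candidate, revision 1 in beta and edge.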

Everything looks fine now. Let’s remove the manually-installed snap and install it from the Ubuntu Store.

$ snap remove how2
how2 removed
$ snap info how2
name:      how2
summary:   "how2, stackoverflow from the terminal"
publisher: simosx
description: |
  how2 finds the simplest way to do something in a unix shell. 
  It is like the man command, but you can query it in natural language.
  
channels:              
  beta:   20170206 (1) 11MB -
  edge:   20170206 (1) 11MB -

$ snap install how2
error: cannot install "how2": snap not found
$ snap install how2 --channel=beta
how2 (beta) 20170206 from 'simosx' installed
$ how2 how to edit an XML file
How to change values in XML file

Using XMLStarlet (http://xmlstar.sourceforge.net/):
...omitted... 

on February 07, 2017 01:48 AM

February 06, 2017

Exactly two years ago, on February 6th, 2015, Canonical handed me, as an insider, the bq E4.5, a couple of months before it went on sale to the public.

Ubuntu Phone presentation in London


And yes, I used Ubuntu Phone exclusively for two years (except for a few days when I played with Firefox OS and Android).
 
E4.5


Past

I was very happy with my bq E4.5 when, surprise! Canonical handed us a Meizu MX4.


Those were the good times, with two companies fully committed to Ubuntu Touch, later releasing the bq E5, the Meizu PRO 5 and the bq M10 tablet. And a Canonical publishing OTA updates every month or so.

M10 tablet

Over these two years I read many articles about the first handsets. Almost all of them unfavorable. They forgot that these were phones for early adopters, and reviewed them against the best of Android. Fail! To be fair, those first versions of Ubuntu Phone were better than the first versions of Android and iOS.

On a personal level, uNav and uWriter were born :')) With an overwhelming success that surprised me.

Ubucon Paris 15.10

Present

Great stalwarts of Ubuntu, such as David Planella, Daniel Holbach and Martin Pitt, are leaving Ubuntu. On top of that, I read that Canonical is stopping phone development, with wording that does not invite optimism. But that 'stops' does not mean 'abandons'.

UBPorts has gained relevance in recent months, working on ports for the Fairphone 2 and the OnePlus One.


FairPhone 2

Future

The present does not make me feel especially optimistic. Not just because of Ubuntu Touch in particular, but because of the mobile market in general: an excellent Firefox OS that died, a SailfishOS that barely hangs on, a Tizen that only papa Samsung keeps alive, and a Windows Phone that stays in third place thanks to the money of the desktop number one.
And despite the lack of privacy, security and, above all, free software, nobody challenges Android.

Image from neurogadget



And how does Ubuntu face such a dark future? We can say that Canonical is going to bet everything on a single card: snap.

snap


I should clarify the current state of things here: on the PC we have Ubuntu with Unity 7, and on the phone Ubuntu with Unity 8. But it is all the same Ubuntu, the same base.

And that is the play: in the short term we should have Ubuntu with Unity 8 on both PC and phone, based on snap packages (which have no dependency problems and are very secure, since they isolate applications).

And that is where convergence comes into play: same Ubuntu, same applications, different devices.

Image from OMG Ubuntu!

But the cost of this play could be very high: leaving behind the entire current base of phones (the tablet survives), since they use 32-bit Android and the jump would mean using 64-bit, which does not seem feasible.
on February 06, 2017 08:44 PM

My WordPress blog got hacked two days ago and now twice today. This morning I purged MySQL and restored a good backup from three days ago, and changed all DB and WordPress passwords (both the old and new ones were long and autogenerated), but not even an hour after the redeploy the hack was back. (It can still be seen on Planet Debian and Planet Ubuntu.) Neither the Apache logs nor the Journal had anything obvious, nor were there any new files in the global or user www directories, so I’m a bit stumped as to how this happened. It was certainly not due to bruteforcing a password; that would both have shown up in the logs and have triggered ban2fail, so this looks like an actual vulnerability.

I upgraded to WordPress 4.7.1 a few days ago, and apparently 4.7.2 fixes a few vulnerabilities, although none of them sounds like it would match my situation. jessie-backports is still at 4.7.1, so I missed that update. But either way, all WordPress blogs hosted on my server are down for the time being.

I took this as motivation to finally migrate to something more robust. WordPress has tons of features that I never need, and also a lot of overhead (dynamic generation, MySQL, its own users/passwords, etc.). I had a look around, and it seems Hugo and Blogofile are nice contenders: no privileges, no database, static output files, Markdown input (so much nicer to type than HTML!), and maintaining the blog in git and previewing changes on my laptop are straightforward. I happened to try Hugo first, and I like it enough to give it an extended try. There are plenty of themes to choose from, and they are straightforward to customize, so I don’t need to spend a lot of time learning and crafting CSS.

I ran the WordPress to Hugo Exporter, and it produced remarkable results – fairly usable HTML → Markdown and metadata conversion, it keeps all the original URLs, and it’s painless to use. Nicely done!

So here it is, on to a much more secure server now! \o/

on February 06, 2017 08:04 PM

1) Smart cities: revenue generating fountains

What if instead of being a cost to cities, fountains become a revenue generator? Crazy idea? Just put the same coin accepting mechanism you find in vending machines and allow tourists to switch the fountain on for 30 seconds to get the ideal photo taken. But why not put an App Store on the fountain and allow others to come up with new ideas. What if you could make apps that can let water and lasers join up. The perfect place? The Bellagio fountain in Las Vegas. Why not allow for marriage proposals to appear projected inside the water flows? Let different tourists jointly bid for the fountain to go on or even betting apps for fountains.

2) Building automation – first class elevators

This one is for Dubai, New York and Taipei, as well as any luxury hotel. The super rich often live in the penthouse suite or rent the top floors of a hotel. What is the worst thing you can do to them in an elevator? Press the buttons for every floor! So why not introduce first-class elevator service, just like on planes. Their mobile is detected, and even before they reach the elevators there is one waiting that offers the current occupants a choice: “leave now” or “go to the top floors before you go to your floor”. For $1,000 to $10,000 per month you can also select your favourite music and other personal touches.

3) Building automation – anti-terrorist and other emergency solutions.

What if security cameras inside buildings can check for hazardous situations? Somebody covers their face and pulls a gun out of their pocket. The elevator should close the doors, automatic doors inside the building get locked and a swat team is called. If you fall down and stay for 30 seconds on the ground then a doctor is called. Same for fires and the fire department.

4) Building automation – elevators sales pitch

Taking the elevator at work during lunch time towards the ground floor? The digital signage inside it should offer you a GroupOn type of discount for a new restaurant two streets further. Just tap with your mobile and confirm the reservation.

5) Digital signage – app stores on toilets

You already have a company that puts games on male toilets. This can go a lot further with app stores. Male toilets have a special way of controlling the interface. Female toilets will need to make do with gesture control. This is the only place where touch screens will never be used.

6) ehealth – open source MRI scanners with app stores

If somebody open-sources the design and brings the price of MRI scanners down from millions to $100,000, then every small hospital can afford one or several. Just go there once a year, have your full body scanned, and let thousands of algorithms look for early signs of cancer. Pay $100, and with more than 1,000 customers the MRI scanner becomes revenue generating. Donate your data to science and it can be free. Finding cancer early makes it easier to cure.

7) Home automation – coolness as a service

Pay per day for coolness as a service, a.k.a. a fridge, and get paid $100 for every day the coolness is not available. This type of business model immediately introduces the need for predictive maintenance, in which your fridge gets fixed before it breaks. Add Alexa services to the fridge and you can order any type of groceries. Add cloud bidding markets and you always get the cheapest groceries. And even if you don’t feel like cooking, you can just ask the fridge to order a pizza or make a restaurant or movie reservation. Finally, you can buy apps that have nothing to do with the fridge, like a Pokemon Go app that warns the kids that a rare Pokemon is outside the house. Generate enough app and cloud revenue for the manufacturer or services company and you might end up not having to pay for coolness as a service at all, a.k.a. getting the fridge for free thanks to the app platform on top.

8) Smart vending – a telecom in a box

Add a mobile base station to a vending machine, sell sim cards and allow people to top-up their prepaid account. Now the vending machine is doing what a telecom operator does.

9) Smart telecom – run your own base station

Every company or consumer should be able to setup their own base station and run their own network. Via apps on the base station, spectrum can be delivered as a service and the network can be managed as a federation. Why not provide the owner of the base station with a free contract?

10) Smart robots – close sourcing personalised products

Have small batches of robots make products close to consumers, and make personalised products for each customer. This used to be extremely expensive. Not any more. Everything from soldering, sorting and laser engraving to 3D printing can be done by robot arms that cost under $2,000. Put 10 in a row, add a conveyor belt, and Mr. Ford would be proud.

The future of smart devices?

The future is here. These are not ideas of things that need years of research. We can make most of them real in under 12 months. We will be showing several demos at Mobile World Congress at the end of February…more info here!

on February 06, 2017 05:29 PM

FOSDEM 2017

Costales

Travelling to a few Ubucons has allowed me to meet exceptional people from the community. This time, I decided to attend FOSDEM in Brussels, one of the most important free software events in Europe.

FRIDAY 3 FEBRUARY - BEER EVENT

I was the first to arrive at Friday's beer event, and was soon joined by Marius, Ilonka, Diogo, Tiago, Laura, Rudy and Quest. The famous Delirium Cafe was absolutely packed, even though only FOSDEM attendees were allowed in.
Olive, Quest, Rudy, Tiago and me
There we chatted about Ubuntu and enjoyed good beer until 1 a.m., when we left and caught a bus towards Diogo's house (he put me up at his place; thanks, Diogo!). But oops... we were on the wrong bus, which took us 30 km south of the city. We had to come back by taxi in the freezing Belgian early morning. Still, Diogo, with his characteristic good humour, tried to cheer Tiago and me up by pointing out the views of a nearby building lit up in colours.

SATURDAY 4 FEBRUARY - TALKS (FROM MOZILLA)

This would be my only day of talks, as my return flight left early on Sunday.
There were hardly any talks about Linux or Ubuntu, so I spent the whole day in the Mozilla room.

Rina Jensen opened the day with a very interesting talk on what motivates the open source community.
Next came Pascal Chevrel, with whom I had worked a great deal in the past on localising Firefox into Asturian. I had never met him in person before, and it was great to finally put a face to the name :)
After Pascal, Alex Lakatos showed us the power of the Developer Tools that come preinstalled in Firefox, and Daniel Scasciafratte told us about the potential of WebExtensions.

Rina Jensen

The great Pascal

A special guest

The room was very full for most of the day

I stepped out of the sessions to have lunch with Tiago and Diogo. After lunch I ran into Jeroen, whom I hadn't seen since Ubucon Europe. We talked at such length that I missed 6 talks.
Jeroen and me

Back at the Mozilla track I saw demos such as Eugenio Petulla's on A-Frame for virtual reality,
and one on the potential of JavaScript for creating HTML5 games, by Istvan Szmozsanzky, including how easy it is to flash such a game onto an Arduboy mini console.
The last talks were Daniel Stenberg's, to a packed room, on what comes after HTTP/2; Robert Kaiser's on alternatives for logging into websites; Leo McArdle's on Discourse; Kristi Progri's on the role of women in free software in general and in Mozilla in particular; and Glori Dwomoh's on how to get more attention and empathy when talking about our community.
The day closed with a very enjoyable talk by Raegan MacDonald on current copyright issues.

Raegan MacDonald
After the talks, some of us Ubunteros got together, stretching the evening out with pizza and beer in the city centre.

Brussels city centre
Rudy and Tiago

Until next time!






on February 06, 2017 12:03 PM

We’ve had a busy weekend at FOSDEM in Brussels for the last two days, and now I’ve travelled into my fifth country of the trip, picking up a few hackers on the way for the KDE Plasma Sprint, which is happening all this week in Stuttgart. Do drop by if you’re in town.

KDE and Gnome looking good at the Friday beer event

Busy busy on the KDE stall

Food and drinks at the KDE Slimbook release party.

KDE neon goes smart

After a road trip into the forest of Baden-Württemberg we arrived at the KDE Plasma Sprint, sponsored by von Affenfels

Plasma Sprint also sponsored by openSUSE

Plasma Sprint also sponsored by Meat Water

Plasma Sprint also sponsored by Kai Uwe’s mum

on February 06, 2017 11:07 AM

February 04, 2017

It has been a quiet start to the year, as work has kept me very busy. Most of my spare time (when not sitting shattered on the sofa) was spent resurrecting my old website from backups. My son had plenty of visitors as well, which prompted me to restart work on my model railway in the basement. Last year I received a whole heap of track, and also a tunnel formation, from a friend at work. I managed to finish the supporting structure for the tunnel and connect one end of it to the existing track layout. The next step (which will be a bit harder) is to connect the other end of the tunnel into the existing layout. The basement is one of my favourite ways to keep my son and his friends occupied when they visit. The railway and music studio are very popular with the little guests.

Debian

  • Packaged latest Gramps 4.2.5 release for Debian so that it will be part of the Stretch release.
  • Packaged the latest abcmidi release so it too would be part of Stretch. The upstream author had changed his website, so it took a while to locate a tarball.
  • Tested my latest patches to convert Cree.py to Qt5, but found another Qt4 – Qt5 change to take into account (SIGNAL function). I ran out of time to fully investigate that one, before Creepy was booted out of testing again. I am seriously considering the removal of Cree.py from Debian, as the upstream maintainer does not seem very active any more, and I am a little tired of being upstream for a project that I don’t actually use myself. It was only because it was a reverse dependency of osm-gps-map that I originally got involved.
  • Started preparing a Gramps 4.2.5 backport for Jessie, but found that the tests I enabled in unstable were failing in the Jessie build. I need to investigate this further.

Ubuntu

  • Announced the Ubuntu Studio 16.04.2 point release date on the Ubuntu Studio mailing lists, asking for testers. The date subsequently got pushed back to February the 9th.
  • Upgraded my Ubuntu Studio machine from Wily to Xenial.

Other

  • Resurrected my old Drupal Gammon One Name Study website. I used Drupal VM to get the site going again, before transferring it to the new webhost. It was originally a Drupal 7 site, and I did not have the required versions of Ansible & Vagrant on my Ubuntu Xenial machine, so the process was quite involved. I will blog about that separately, as it may be a useful lesson for others. As part of that, I started on a backport of vagrant, but found a bug which I need to follow up on.
  • Also managed to extract my old WordPress blog posts from the same machine that had the failed Drupal instance, and import them into this blog. I also learnt some stuff in that process that I will blog about at some point.

Plan status from last month & update for next month

Debian

Before the 5th February 2017 Debian Stretch hard freeze I hope to:

For the Debian Stretch release:

Generally:

  • Finish the Gramps 4.2.5 backport for Jessie.
  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release.
  • Begin working again on all the new stuff I want packaged in Debian.

Ubuntu

  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Still to do (actually started today)
  • Reapply to become a Contributing Developer. – Still to do
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
  • Start testing & bug triaging Ubuntu Studio packages. – Still to do
  • Test Len’s work on ubuntustudio-controls – Still to do

Other

  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project. – Done
  • Give JMRI a good try out and look at what it would take to package it. – In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!).

on February 04, 2017 05:55 PM

January 2017 Update

Svetlana Belkin

Anyone following this blog may have noticed that I tend to post at least two (2) blog posts per month, but January 2017 was different.  My blog was down for half of December 2016 and most of January 2017.  But that didn’t stop me from creating posts, just in a different way: through my vBlogs and AudioBlogs.

As I said on my AudioBlog Episode 1, here are the updates and some that I forgot to add:

  • As of now, my Ubuntu volunteer work will be on hold.  This is partly due to the fact that I’m still dealing with burnout, and I’m out of ideas on how to grow the Community.
  • On behalf of the general admins of Linux Padawan, we have sadly closed the site and program down, as nothing was happening there.  Linux Padawan is just another dead project.
  • Over the month of January, I started to think about leadership within the Open * communities.  This started when I found out that the Mozilla Foundation is hosting a leadership mentoring program in March, which I applied to as a co-leader/project manager looking to be partnered up.  I might not make it in, but I may be able to find some project to be a part of.
    • I am also working on adding more to their leadership training series, which covers the open practices of being a leader, with GitHub used as a tool.
  • So far, I’m liking my Pebble Time, although Ubuntu (Touch) has issues reconnecting to the watch if the disconnection lasts longer than five (5) minutes.  Most of the time when this happens, a simple factory reset on the watch is needed; it will not delete anything that you have downloaded from the phone to the watch, just the data stored on the watch.  I also advise forgetting the connection before the factory reset.

And that’s all, thanks for reading!

on February 04, 2017 05:43 PM

February 03, 2017

To celebrate the release of Ubuntu Weekly News (UWN) Issue 500 coming up in a few weeks, we need some help from you!

Issue 500 will feature quotes from leaders in the community and Canonical, and from folks just like you.

We’re also trying to mix things up a bit by offering some prizes for community members who want to learn a bit more about the UWN team and its history, by way of a quiz!

Sound interesting? Read on!

Quotes

Every contributor to the Ubuntu Weekly Newsletter is a volunteer with a passion for Ubuntu, and it’s quite a small team.

In their free time, these volunteers work to collect news articles they find, write up summaries, edit and release the newsletter nearly every week.

Do you appreciate the work of the team? Would there be a void in your ability to keep up with Ubuntu if UWN didn’t exist? Do you eagerly await getting to read the newsletter each week?

Tell them!

The following form has been created to collect thanks:

Send your thanks to Ubuntu Weekly News contributors

Your quote may end up in the 500th issue of the newsletter, and all suitable responses will be shared with the public ubuntu-news-team mailing list so that contributors can see your gratitude.

Quiz

Use your knowledge about the newsletter and your sleuthing skills (you can find all the answers on the Ubuntu wiki and forums) to answer the questions on this quiz. This competition is open to anyone in the world, unless you’ve contributed to the news team in the past 12 months, in which case we ask that you let your fellow community members have a chance at learning more about the team through this quiz 🙂

Winners will be selected from a pool of quizzes with the most correct answers. Each winner will receive a set of Ubuntu News stickers, Ubuntu stickers and a thank you card from the current UWN editor. These prizes will be shipped from the US, but anyone in the world is eligible to receive them.

A valid email address is required so we can contact you if you’ve won; it will not be used for any other purpose. If you’re one of our winners, the name you provide in this form will be used in our winner announcements.

Are you ready? The quiz is now online here:

Ubuntu Weekly Newsletter Issue 500 Quiz

Responses are due by 20 February 2017.

Winners will be announced in the 500th issue of the newsletter, good luck!

on February 03, 2017 08:52 PM

Scratch is a block-based programming language created by the Lifelong Kindergarten Group (LLK) at the MIT Media Lab. Scratch gives kids the power to use programming to create their own interactive animations and computer games. Since 2007, the online community that allows Scratch programmers to share, remix, and socialize around their projects has drawn more than 16 million users who have shared nearly 20 million projects and more than 100 million comments. It is one of the most popular ways for kids to learn programming and among the larger online communities for kids in general.

Front page of the Scratch online community (https://scratch.mit.edu) during the period covered by the dataset.

Since 2010, I have published a series of papers using quantitative data collected from the database behind the Scratch online community. As the source of data for many of my first quantitative and data scientific papers, it’s not a major exaggeration to say that I have built my academic career on the dataset.

I was able to do this work because I happened to be doing my masters in a research group that shared a physical space (“The Cube”) with LLK and because I was friends with Andrés Monroy-Hernández, who started in my masters cohort at the Media Lab. A year or so after we met, Andrés conceived of the Scratch online community and created the first version for his masters thesis project. Because I was at MIT and because I knew the right people, I was able to get added to the IRB protocols and jump through the hoops necessary to get access to the database.

Over the years, Andrés and I have heard over and over, in conversation and in reviews of our papers, that we were privileged to have access to such a rich dataset. More than three years ago, Andrés and I began trying to figure out how we might broaden this access. Andrés had the idea of taking advantage of the launch of Scratch 2.0 in 2013 to focus on trying to release the first five years of Scratch 1.x online community data (March 2007 through March 2012) — most of the period that the codebase he had written ran the site.

After more work than I have put into any single research paper or project, Andrés and I have published a data descriptor in Nature’s new journal Scientific Data. This means that the data is now accessible to other researchers. The data includes five years of detailed longitudinal data organized in 32 tables, with information drawn from more than 1 million Scratch users, nearly 2 million Scratch projects, more than 10 million comments, more than 30 million visits to Scratch projects, and much more. The dataset includes metadata on user behavior as well as the full source code for every project. Alongside the data is the source code for all of the software that ran the website and that users used to create the projects, as well as the code used to produce the dataset we’ve released.

Releasing the dataset was a complicated process. First, we had to navigate important ethical concerns about the impact that a release of any data might have on Scratch’s users. Toward that end, we worked closely with the Scratch team and the ethics board at MIT to design a protocol for the release that balanced these risks with the benefits of a release. The most important feature of our approach in this regard is that the dataset we’re releasing is limited to public data only. Although the data is public, we understand that computational access to data differs in important ways from access via a browser or API. As a result, we’re requiring anybody interested in the data to tell us who they are and agree to a detailed usage agreement. The Scratch team will vet these applicants. Although we’re worried that this creates a barrier to access, we think this approach strikes a reasonable balance.

Beyond the social and ethical issues, creating the dataset was an enormous task. Andrés and I spent Sunday afternoons over much of the last three years going column by column through the MySQL database that ran Scratch. We looked through the source code and the version control system to figure out how the data was created. We spent an enormous amount of time trying to figure out which columns and rows were public. Most of our work went into creating detailed codebooks and documentation that we hope will make the process of using this data much easier for others (the data descriptor is just a brief overview of what’s available). Serializing some of the larger tables took days of computer time.

In this process, we had a huge amount of help from many others, including an enormous amount of time and support from Mitch Resnick, Natalie Rusk, Sayamindu Dasgupta, and Benjamin Berg at MIT, as well as from many others on the Scratch Team. We also had an enormous amount of feedback from a group of a couple dozen researchers who tested the release, as well as from others who helped us work through the technical, social, and ethical challenges. The National Science Foundation funded both my work on the project and the creation of Scratch itself.

Because access to data has been limited, there has been less research on Scratch than the importance of the system warrants. We hope our work will change this. We can imagine studies using the dataset by scholars in communication, computer science, education, sociology, network science, and beyond. We’re hoping that by opening up this dataset to others, scholars with different interests, different questions, and in different fields can benefit in the way that Andrés and I have. I suspect that there are other careers waiting to be made with this dataset and I’m excited by the prospect of watching those careers develop.

You can find out more about the dataset, and how to apply for access, by reading the data descriptor on Nature’s website.

on February 03, 2017 08:01 PM

Over the last month or so I’ve been working on producing snap packages for a variety of OpenStack components.  Snaps provide a new fully isolated, cross-distribution packaging paradigm which in the case of Python is much more aligned to how Python projects manage their dependencies.

Alongside work on Nova, Neutron, Glance and Keystone snaps (which I’ll blog about later), we’ve also published snaps for end-user tools such as the OpenStack clients, Tempest and Rally.

If you’re running Ubuntu 16.04, it’s really simple to install and use the openstackclients snap:

sudo snap install --edge --classic openstackclients

Right now, you’ll also need to enable snap command aliases for all of the clients the snap provides:

ls -1 /snap/bin/openstackclients.* | cut -f 2 -d . | xargs sudo snap alias openstackclients
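To see what that pipeline is doing: every wrapper the snap installs under /snap/bin is named openstackclients.&lt;command&gt;, and the cut step keeps only the part after the dot, which is then handed to snap alias. A minimal stand-alone illustration of the extraction step (using echo in place of the real /snap/bin listing):

```shell
# Splitting on "." gives the path as field 1 and the client name as field 2;
# with a real install, each extracted name would be fed to
# "sudo snap alias openstackclients <name>".
echo "/snap/bin/openstackclients.nova" | cut -f 2 -d .
```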

After doing this, you’ll have all of the client tools, aligned to the OpenStack Newton release, available on your install:

aodh
barbican
ceilometer
cinder
cloudkitty
designate
freezer
glance
heat
ironic
magnum
manila
mistral
monasca
murano
neutron
nova
openstack
sahara
senlin
swift
tacker
trove
vitrage
watcher

The snap is currently aligned to the Newton OpenStack release; the intent is to publish snaps aligned to each OpenStack release, using the series support that’s planned for snaps, so you’ll be able to pick clients appropriate for any supported OpenStack release or for the current development release.

You can check out the source for the snap on github; writing a snap package for a Python project is pretty simple, as it makes use of the standard pip tooling to describe dependencies and install Python modules. Kudos to the snapcraft team who have done a great job on the Python plugin.

Let us know what you think by reporting bugs or by dropping into #openstack-snaps on Freenode IRC!


on February 03, 2017 11:07 AM

February 02, 2017

Today Plasma 5.9.0 became available in KDE neon User Edition. With it comes the return of global menus along with other awesome sauce features.

To enable global menus open System Settings, go into the Application Style category, and in the Widget Style settings you will find a tab called Fine Tuning. On this tab you can find the new Menubar options. You can change to either a Title Bar Button, which will tuck the menu into a tiny button into the window decoration bar at the top, or the Application Menu widget, allowing the associated Plasma panel to supply the menu in a fixed location.

screenshot_20170202_174458 screenshot_20170202_174438

To apply the change, your applications need to be restarted, so ideally you’ll simply log out and back in again.

To add an Application Menu to Plasma, simply right click on the desktop and add the Panel called Application Menu Bar.

screenshot_20170202_174850 screenshot_20170202_174812

Enjoy your new Plasma 5.9 with global menu bars!

on February 02, 2017 05:02 PM

February 01, 2017

This weekend I'm going to FOSDEM, one of the largest gatherings of free software developers in the world. It is an extraordinary event, and it is preceded by the XSF / XMPP Summit.

For those who haven't been to FOSDEM before and haven't yet made travel plans, it is not too late. FOSDEM is a free event and no registration is required. Many Brussels hotels don't get a lot of bookings on weekends during the winter so there are plenty of last minute offers available, often cheaper than what is available on AirBNB. I was speaking to somebody in London on Sunday who commutes through St Pancras (the Eurostar terminal) every day and didn't realize it goes to Brussels and only takes 2 hours to get there. One year I booked a mini-van at the last minute and made the drive from the UK with a stop in Lille for dinner on the way back, for 5 people that was a lot cheaper than the train. In other years I've taken trains from Switzerland through Paris or Luxembourg.

Real-time Communication (RTC) dev-room on Saturday, 4 February

On Saturday, we have a series of 23 talks about RTC topics in the RTC dev-room, including SIP, XMPP, WebRTC, peer-to-peer (with Ring) and presentations from previous GSoC students and developers coming from far and wide.

The possibilities of RTC with free software will also be demonstrated and discussed at the RTC lounge in the K building, near the dev-room, over both Saturday and Sunday. Please come and say hello.

Please come and subscribe to the Free-RTC-Announce mailing list for important announcements on the RTC theme and join the Free-RTC discussion list if you have any questions about the activities at FOSDEM, dinners for RTC developers on Saturday night or RTC in general.

Software Defined Radio (SDR) and the Debian Hams project

At 11:30 on Saturday I'll be over at the SDR dev-room to meet other developers of SDR projects such as GNU Radio and give a brief talk about the Debian Hams project and the relationship between our diverse communities. Debian Hams (also on the Debian Ham wiki) provides a ready-to-run solution for ham radio and SDR is just one of its many capabilities.

If you've ever wondered about trying the RTL-SDR dongle or similar projects Debian Hams provides a great way to get started quickly.

I've previously given talks on this topic at the Vienna and Cambridge mini-DebConfs (video).

Ham Radio (also known as amateur radio) offers the possibility to gain exposure to every aspect of technology from the physical antennas and power systems through to software for a range of analog and digital communications purposes. Ham Radio and the huge community around it is a great fit with the principles and philosophy of free software development. In a world where hardware vendors are constantly exploring ways to limit their users with closed and proprietary architectures, such as DRM, a broad-based awareness of the entire technology stack empowers society to remain in control of the technology we are increasingly coming to depend on in our every day lives.

on February 01, 2017 09:07 AM

January’s reading list

Canonical Design Team

Here are the best links shared by the design team during the first month of 2017:

  1. A Guide to 2017 Conferences
  2. Information Literacy Is a Design Problem
  3. Pattern patter.
  4. CLARK FROM INVISION, FOREVER!
  5. The Imbalance of Culture Fit
  6. a big list of good news from 2016
  7. Calculate the ideal height of your ergonomic desk, chair and keyboard
  8. Commit Logs From Last Night
  9. Design Guidelines
  10. Reasons.London event
  11. Letterboxd 2016 Year in Review
  12. TED Talk: Are you a giver or a taker?
  13. Restoring Sanity to the Office
  14. A Framework for Building a Design Practice
  15. On Better Meetings
  16. Skeuomorphism on Conversational UIs
  17. Designing a product with mental issues in mind

Thank you to Andrea, Anthony, Clara, Grazina, Jamie, Karl, Richard and me for the links this month.

on February 01, 2017 08:53 AM

January 31, 2017

LXD logo

What’s Ubuntu Core?

Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

Most of the system is read-only. All installed applications come from snap packages, and all updates are done using transactions. This means that should anything go wrong at any point during a package or system update, the system can revert to the previous state and report the failure.

The current release of Ubuntu Core is called series 16 and was released in November 2016.

Note that on Ubuntu Core systems, only snap packages using confinement can be installed (no “classic” snaps), and that a good number of snaps will not fully work in this environment or will require some manual intervention (creating users and groups, …). Ubuntu Core is improved on a weekly basis as new releases of snapd and the “core” snap are put out.

Requirements

As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following:

  • An up to date Ubuntu system using the official Ubuntu kernel
  • An up to date version of LXD

Creating an Ubuntu Core container

The Ubuntu Core images are currently published on the community image server.
You can launch a new container with:

stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
Creating ubuntu-core
Starting ubuntu-core

The container will take a few seconds to start, first executing a first-stage loader that determines which read-only image to use and sets up the writable layers. You don’t want to interrupt the container at that stage, and “lxc exec” will likely just fail, as pretty much nothing is available at that point.

Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

stgraber@dakara:~$ lxc list
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
|     NAME    |  STATE  |          IPV4        |                      IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
| ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+

You can then interact with that container the same way you would any other:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap list
Name       Version     Rev  Developer  Notes
core       16.04.1     394  canonical  -
pc         16.04-0.8   9    canonical  -
pc-kernel  4.4.0-45-4  37   canonical  -
root@ubuntu-core:~#

Updating the container

If you’ve been tracking the development of Ubuntu Core, you’ll know that those versions above are pretty old. That’s because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).

If you want to immediately force an update, you can do it with:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap refresh
pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
core (stable) 16.04.1 from 'canonical' upgraded
root@ubuntu-core:~# snap version
snap 2.17
snapd 2.17
series 16
root@ubuntu-core:~#

And then reboot the system and check the snapd version again:

root@ubuntu-core:~# reboot
root@ubuntu-core:~# 

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap version
snap 2.21
snapd 2.21
series 16
root@ubuntu-core:~#

You can get a history of all snapd interactions with:

stgraber@dakara:~$ lxc exec ubuntu-core snap changes
ID  Status  Spawn                 Ready                 Summary
1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

Installing some snaps

Let’s start with the simplest snaps of all, the good old Hello World:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install hello-world
hello-world 6.3 from 'canonical' installed
root@ubuntu-core:~# hello-world
Hello World!

And then move on to something a bit more useful:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install nextcloud
nextcloud 11.0.1snap2 from 'nextcloud' installed

Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.

If you feel like testing the latest LXD straight from git, you can do so with:

stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install lxd --edge
lxd (edge) git-c6006fb from 'canonical' installed
root@ubuntu-core:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]: 

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]? 
Would you like LXD to be available over the network (yes/no) [default=no]? 
Would you like stale cached images to be updated automatically (yes/no) [default=yes]? 
Would you like to create a new network bridge (yes/no) [default=yes]? 
What should the new bridge be called [default=lxdbr0]? 
What IPv4 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
What IPv6 address should be used (CIDR subnet notation, “auto” or “none”) [default=auto]? 
LXD has been successfully configured.

And because container inception never gets old, let’s run Ubuntu Core 16 inside Ubuntu Core 16:

root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
Creating nested-core
Starting nested-core 
root@ubuntu-core:~# lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4        |                       IPV6                    |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

Conclusion

If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

And lastly, a word of warning. Those images are considered good enough for testing, but they aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on January 31, 2017 08:27 PM

Neon OEM Mod…arghhh

Harald Sitter

For years and years, Ubuntu’s installer, Ubiquity, has had an OEM mode. And for years and years I have known that it doesn’t really work with the Qt interface.

An understandable consequence of not actually having any real-life use cases of course, disappointing all the same. As part of the KDE Slimbook project I took a second and then a third look at the problems it was having and while it still is not perfect it is substantially better than before.

The thing to understand about the OEM mode is that it technically splits the installation in two. The OEM does a special installation which leads to a fully functional system that the OEM can modify, and then puts it into “shipping” mode once satisfied with the configuration. After this, the system will boot into a special Ubiquity that only offers the configuration part of the installation process (i.e. user creation, keyboard setup, etc.). Once the customer has completed this process, the system is all ready to go, with any additional software the OEM might have installed during preparation.

Therein lies the problem, in a way. The OEM configuration is design-wise kind of fiddly, considering how the Qt interface is set up and interacts with other pieces of software (most notably KWin). This is doubly true for KDE neon, where we use a slightly modified Ubiquity with the fullscreen mode removed. However, as you might have guessed, not using fullscreen leads to all sorts of weird behavior in the OEM setup, where practically speaking the user is meant to be locked out of the system, but technically they are in a minimal session with a window manager. So, in theory, one could close the window; when started, the window would be placed as though more windows were meant to appear; it would have a minimize button; and so on. All fairly terrible. However, it is also not too tricky to fix once one has identified all the problems. Arguably that is the biggest feat with any installer change: finding all possible scenarios where things can go wrong takes days.

So, to improve this the KDE Visual Design Group‘s Jens Reuterberg and I again descended into the hellish pit that is Qt 4 QWidget CSS theming on a code base that has seen way too many cooks over the years. The result I like much better than what we started out with, even if it isn’t perfect.

 

New Old

The sidebar has had visual complexity removed to bring it closer to a Breeze look and feel. Window decoration elements not wanted during the OEM setup are removed by setting up suitable KWin rules when preparing for first boot.

Additionally, we will hopefully soon have enough translations to push out a new slideshow featuring slightly more varied visuals than the current “Riding the Waves” picture we have for a slideshow.

New Old

For additional information on how to use the current OEM mode check out the documentation on the KDE UserBase.

Ubiquity code
Slideshow code (the translations setup is of most interest here)

 

on January 31, 2017 12:25 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 10 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • I reviewed multiple CVE affecting ntp and opted to mark them no-dsa (just like what has been done for jessie).
  • I pinged upstream authors of jbig2dec (here) and XML::Twig (by private email) where the upstream report had not gotten any upstream reply yet.
  • I asked on oss-security for more details about CVE-2016-9584 because it was not clear whether it had already been reported upstream. Turns out that it was. I then updated the security tracker accordingly.
  • Once I got a reply on jbig2dec, I started to backport the patch pointed out by upstream; it was hard work. When I was done, I had also received by private email the fuzzed file at the origin of the report… unfortunately that file did not trigger the same problem with the old jbig2dec version in wheezy. That said, valgrind still identified reads outside of allocated memory. At this point I had a closer look at the git history, only to discover that the last 3 years of work consisted mainly of security fixes for similar cases that were never reported as CVEs. I thus opened a discussion about how to handle this situation.
  • Matthias Geerdsen reported in #852610 a regression in libtiff4. I confirmed the problem and spent multiple hours coming up with a fix. The patch that introduced the regression was Debian-specific, as upstream had not fixed those issues yet. I released a fixed package in DLA-610-2.

Debian packaging

With the deep freeze approaching, I made some last-minute updates:

  • schroot 1.6.10-3 fixing some long-standing issues with the way bind mounts are shared (#761435) and other important fixes.
  • live-boot 1:20170112 to fix a failure when booting on a FAT filesystem and other small fixes.
  • live-config 5.20170112 merging useful patches from the BTS.
  • I finished the update of hashcat 3.30 with its new private library and fixed RC bug #851497 at the same time. The work was initiated by fellow members of the pkg-security team.

Misc work

Sponsorship. I sponsored a new asciidoc upload demoting a dependency into a recommends (#850301). I sponsored a new upstream version of dolibarr.

Discussions. I seconded quite a few changes prepared by Russ Allbery on debian-policy. I helped Scott Kitterman with #849584 about a misunderstanding of how the postfix service files are supposed to work. I discussed in #849913 about a regression in building of cross-compilers, and made a patch to avoid the problem. In the end, Guillem developed a better fix.

Bugs. I investigated #850236 where a django test failed during the first week after each leap year. I filed #853224 on desktop-base about multiple small problems in the maintainer scripts.

Thanks

See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

on January 31, 2017 10:33 AM

As children use digital media to learn and socialize, others are collecting and analyzing data about these activities. In school and at play, these children find that they are the subjects of data science. As believers in the power of data analysis, we think this approach falls short of data science’s potential to promote innovation, learning, and power.

Motivated by this fact, we have been working over the last three years as part of a team at the MIT Media Lab and the University of Washington to design and build a system that attempts to support an alternative vision: children as data scientists. The system we have built is described in a new paper—Scratch Community Blocks: Supporting Children as Data Scientists—that will be published in the proceedings of CHI 2017.

Our system is built on top of Scratch, a visual, block-based programming language designed for children and youth. Scratch is also an online community with over 15 million registered members who share their Scratch projects, remix each others’ work, have conversations, provide feedback, bookmark or “love” projects they like, follow other users, and more. Over the last decade, researchers—including us—have used the Scratch online community’s database to study the youth using Scratch. With Scratch Community Blocks, we attempt to put the power to programmatically analyze these data into the hands of the users themselves.

To do so, our new system adds a set of new programming primitives (blocks) to Scratch so that users can access public data from the Scratch website from inside Scratch. Blocks in the new system give users access to project and user metadata, information about social interaction, and data about what types of code are used in projects. The full palette of blocks to access different categories of data is shown below.

Project metadata
User metadata
Site-wide statistics

The new blocks allow users to programmatically access, filter, and analyze data about their own participation in the community. For example, with the simple script below, we can find whether we have followers in Scratch who report themselves to be from Spain, and what their usernames are.

Simple demonstration of Scratch Community Blocks

In designing the system, we had two primary motivations. First, we wanted to support avenues through which children can engage in curiosity-driven, creative explorations of public Scratch data. Second, we wanted to foster self-reflection with data. As children looked back upon their own participation and coding activity in Scratch through the projects they and their peers made, we wanted them to reflect on their own behavior and learning in ways that shaped their future behavior and promoted exploration.

After designing and building the system over 2014 and 2015, we invited a group of active Scratch users to beta test the system in early 2016. Over four months, 700 users created more than 1,600 projects. The diversity and depth of users’ creativity with the new blocks surprised us. Children created projects that gave the viewer of the project a personalized doughnut-chart visualization of their coding vocabulary on Scratch, rendered the viewer’s number of followers as scoops of ice cream on a cone, attempted to find whether “love-its” for projects are more common on Scratch than “favorites”, and told users how “talkative” they were by counting the cumulative string length of project titles and descriptions.

We found that children, rather than making canonical visualizations such as pie-charts or bar-graphs, frequently made information representations that spoke to their own identities and aesthetic sensibilities. A 13-year-old girl had made a virtual doll dress-up game where the player’s ability to buy virtual clothes and accessories for the doll was determined by the level of their activity in the Scratch community. When we asked about her motivation for making such a project, she said:

I was trying to think of something that somebody hadn’t done yet, and I didn’t see that. And also I really like to do art on Scratch and that was a good opportunity to use that and mix the two [art and data] together.

We also found at least some evidence that the system supported self-reflection with data. For example, after seeing a project that showed its viewers a visualization of their past coding vocabulary, a 15-year-old realized that he does not do much programming with the pen-related primitives in Scratch, and wrote in a comment, “epic! looks like we need to use more pen blocks. :D.”

Doughnut visualization
Ice-cream visualization
Data-driven doll dress up

Additionally, we noted that as children made and interacted with projects made with Scratch Community Blocks, they started to think critically about the implications of data collection and analysis. These conversations are the subject of another paper (also being published in CHI 2017).

In a 1971 article called “Teaching Children to be Mathematicians vs. Teaching About Mathematics”, Seymour Papert argued for the need for children doing mathematics vs. learning about it. He showed how Logo, the programming language he was developing at that time with his colleagues, could offer children a space to use and engage with mathematical ideas in creative and personally motivated ways. This, he argued, enabled children to go beyond knowing about mathematics to “doing” mathematics, as a mathematician would.

Scratch Community Blocks has not yet been launched for all Scratch users and has several important limitations we discuss in the paper. That said, we feel that the projects created by children in our beta test demonstrate the real potential for children to do data science, rather than just knowing about it, providing data for it, and having their behavior nudged and shaped by it.

This blog post and the paper it describes are collaborative work with Sayamindu Dasgupta. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson. Financial support came from the US National Science Foundation. The paper itself is open access so anyone can read the entire paper here. This blog post was also posted on Sayamindu Dasgupta’s blog, on the Community Data Science Collective blog, and in several other places.
on January 31, 2017 04:54 AM

January 30, 2017

Niobium

Stuart Langridge

[41 is] the smallest integer whose reciprocal has a 5-digit repetend. That is a consequence of the fact that 41 is a factor of 99999. — Wikipedia

I don’t understand a lot of things, these days. I don’t understand what a 5-digit repetend is, or what 41 being a factor of 99999 has to do with anything. I don’t understand how much all this has changed in the last thirteen years of posts. I don’t understand when building web stuff got hard. I don’t understand why I can’t find anyone who sells wall lights that look nice without charging a hundred notes for each one, which is a bit steep when you need six. I don’t understand why I can’t get thinner and still eat as many sandwiches as I want. I don’t understand an awful lot of why the world suddenly became a terrible, frightening, mean-spirited, mocking, vitriolic place. And most of what I do understand about that, I hate.

We all sorta thought that we were moving forward; there was less hatred of the Other, fewer knives out, not as much fear and spite as there used to be. And it turns out that it wasn’t gone; it was just suppressed, building up and up underneath the volcano cap until the bad guys realised that there’s nothing actually stopping them doing terrible things and there’s nothing anyone can do about it. So the Tories moved from daring to talk about shutting down the NHS to actually doing it and nobody said anything. Or, more accurately, a bunch of people said things and it didn’t make any difference. Trump starts restricting immigration and targeting Muslims directly and puts a Nazi adviser on the National Security Council and nobody said anything. Or, more accurately, a bunch of people said things and it didn’t make any difference. I don’t want to give in to hatred — it leads to the Dark Side — and so I don’t want to hate them for doing this. But I do hate that I have to fight to avoid it. I hate that I feel so helpless. I hate that the only way I know to fight back is to actually fight — to become them. I hate that they turn everyone into malign, terrible copies of themselves. I hate that they don’t understand. I hate that I don’t understand. I hate that I just hate all the time now.

I’m forty-one. Apparently, according to Wikipedia, the US Navy Fleet Ballistic Missile nuclear submarines from the George Washington, Ethan Allen, Lafayette, James Madison, and Benjamin Franklin classes were nicknamed “41 for Freedom“. 41 for freedom. Maybe that’s not a bad motto for me, being 41. Do more for freedom. My freedom, my family’s freedom, my friends’ freedom, my city’s freedom, people I’ve never met and never will’s freedom. None of us are free if one of us is chained, and if you don’t say it’s wrong then that says it right.

Two photos from today.

Niamh and a message board saying that she loves me lots

Anti-Trump protest in Victoria Square, 30th January 2017

One is of Niamh, and her present to me for my birthday: a light box like the ones you get outside cinemas and churches and fast food places and we can put messages for one another on it. I’m hugely pleased with it. The other is of today’s anti-Trump demo in Victoria Square, at which Reverend David Butterworth, of the Methodist Church, said: “Whatever we can do to make this a more peaceful city and a more inclusive city, and to stand up and be counted, we must and should do it together. The only way that Donald Trump will win is if the good people of Birmingham, and of other cities that we’re twinned with like Chicago, stay silent.” People standing up, and a demonstration of what they’re standing up for. Not a bad way to start making me being 41 for freedom, perhaps.

Happy birthday to me. And for those of you less lucky than me today: I hope we can help.

on January 30, 2017 11:20 PM

January 29, 2017

2017 is the new 1984

Dimitri John Ledkov

1984: Library Edition. Novel by George Orwell; cover picture from a Google Search result.
I am scared.
I am petrified.
I am confused.
I am sad.
I am furious.
I am angry.

28 days later I shall return from NYC.

I hope.
on January 29, 2017 10:23 PM

Hello world!

Julian Fernandes

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

on January 29, 2017 02:01 AM

January 28, 2017

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Alpha 2 has been released. With this pre-release, you can see what we are trying out in preparation for 17.04, which we will be releasing in April.

NOTE: This is Alpha 2 Release. Kubuntu Alpha Releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Alpha 2
* To upgrade from 16.10, run do-release-upgrade from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive

on January 28, 2017 08:58 PM

The Kubuntu Team announces the availability of Plasma 5.8.4 and KDE Frameworks 5.28 on Kubuntu 16.04 (Xenial) and 16.10 (Yakkety) through our Backports PPA.

Plasma 5.8.4 Announcement:
https://www.kde.org/announcements/plasma-5.8.4.php
How to get the update (in the commandline):

  1. sudo apt-add-repository ppa:kubuntu-ppa/backports
  2. sudo apt update
  3. sudo apt full-upgrade -y

If you have been testing this upgrade by using the backports-landing PPA, please remove it first before doing the upgrade to backports. Do this in the commandline:

sudo apt-add-repository --remove ppa:kubuntu-ppa/backports-landing

Please report any bugs you find on Launchpad (for packaging problems) and http://bugs.kde.org for bugs in KDE software.

on January 28, 2017 08:53 PM
The second alpha of the Zesty Zapus (to become 17.04) has now been released! This milestone features images for Lubuntu, Kubuntu, Ubuntu MATE, Ubuntu Kylin, Ubuntu GNOME, and Ubuntu Budgie. Pre-releases of the Zesty Zapus are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent […]
on January 28, 2017 07:15 PM

We're looking for Ubuntu 17.04 wallpapers right now!

Ubuntu is a testament to the power of sharing, and we use the default selection of desktop wallpapers in each release as a way to celebrate the larger Free Culture movement. Talented artists across the globe create media and release it under licenses that don't simply allow, but cheerfully encourage sharing and adaptation. This cycle's Free Culture Showcase for Ubuntu 17.04 is now underway!

We're halfway to the next LTS, and we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they use Ubuntu 17.04 every day. Whether on the desktop, phone, or tablet, your photo or illustration can be the first thing Ubuntu users see whenever they are greeted by the ubiquitous Ubuntu welcome screen or access their desktop.

Submissions will be handled via Flickr at the Ubuntu 17.04 Free Culture Showcase - Wallpapers group, and the submission window begins now and ends on March 5th.

More information about the Free Culture Showcase is available on the Ubuntu wiki at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase.

I'm looking forward to seeing the 10 photos and 2 illustrations that will ship on all graphical Ubuntu 17.04-based systems and devices on April 13th!

on January 28, 2017 08:08 AM

January 27, 2017

Icona Pop

Rhonda D'Vine

Last fall I went to a Silent Disco event. You get wireless headphones, a DJane and a DJ play music on different channels, and you enjoy the time with people around you who can't hear what you hear. It's a pretty funny experience, and it was one of the last warm sunny days. There I heard a song that just fit the mood of the moment, and it made me look up the band to listen more closely to them.

The band was Icona Pop. They have a mood-enlightening pop sound that cheers you up. Here are the songs I want to present to you today:

  • I Love It: The first song I heard from them, and I Love It!
  • Girlfriend: Sweet song, and probably part of the reason they are well received in the LGBTIQ community.
  • All Night: A song/video with a message.

Like always, enjoy!

/music | permanent link | Comments: 4 | Flattr this

on January 27, 2017 01:22 PM

January 26, 2017

Dear OpenStack Foundation

why do I need to be an OpenStack Foundation Member when I want to send you a bugfix via PR on GitHub?

I don't wanna work on OpenStack per se, I just want to use one of your little utils from your stack and it doesn't work as expected under a newer version of Python :)

It would be nice, if the barrier to contribute could be lowered.

on January 26, 2017 09:39 AM

I love C. And I loathe C++.

But there’s one thing I like about C++: The fact that I don’t have to write my own dynamic array libraries each time I try to start a project.

Of course, there are many libraries that exist for working with arrays in C. Glib, Eina, DynArray, etc. But I wanted something as easy to use as C++’s std::vector, with the performance and memory usage of std::vector.

By the way, I am not talking about algorithmic performance. I’m writing this assuming the algorithms are identical (i.e. I’m writing purely about implementation differences).

There are a few problems with the performance and memory usage of the aforementioned libraries, the major one being that the element size is stored as a structure member. That means an extra 4-8 bytes per array, and constantly having to read a variable (which means many missed optimization opportunities). While this may not sound too bad (and in the grand scheme of things, probably isn’t), it is undeniably less efficient than C++.

This isn’t the only problem: there are other missed optimization opportunities in the function-based (as opposed to macro-based) variants, for example calling functions for tiny operations, or calling memcpy for types that fit within registers.

All of this might seem like splitting hairs, and it probably is. But knowing that C++ can be faster, more memory efficient, and less bothersome to code in than C is not a thought I like very much. So I wanted to try to level the playing field.

It took a rather long stretch of sporadic work for me to create my very own “Perfect C Array Library” that, I thought, fulfilled my requirements.

First, let’s look at some example code using it:

array(int) myarray = array_init();
array_push(myarray, 5);
array_push(myarray, 6);

for (int i = 0; i < myarray.length; i++) {
    printf("%i\n", myarray.data[i]);
}

array_free(myarray);

Alright, it might be a tiny bit less pretty than C++. But hey, this is good enough for me.

In terms of performance and memory issues, I fixed the issues I wrote above. So in theory, it should be just as fast as C++, right?

Turns out I missed one issue: cache misses. In my mind, if everything was written as a macro, it would in theory be faster than functions. I was wrong. Large portions of inlined code can result in instruction cache misses, which quite negatively impact performance.

So, as far as I can see, it is impossible to write a set of array functions for C that will be as fast and easy to use as C++’s std::vector. But please correct me if I’m wrong!

With that being said, this implementation is the most efficient I’ve been able to write so far, so let me show you the idea behind it:

#define array(type)  \
  struct {           \
      type* data;    \
      size_t length; \
  }

#define array_init() \
  {                  \
      .data = NULL,  \
      .length = 0,   \
  }

#define array_free(array) \
  do {                    \
      free(array.data);   \
      array.data = NULL;  \
      array.length = 0;   \
  } while (0)

#define array_push(array, element)                \
  do {                                            \
      /* note: realloc failure is unchecked */    \
      array.data = realloc(array.data,            \
                           sizeof(*array.data) *  \
                             (array.length + 1)); \
      array.data[array.length] = element;         \
      array.length++;                             \
  } while (0)

The magic is in sizeof(*array.data). For some reason I never knew this was legal in C, but it does exactly what it says: since sizeof does not evaluate its operand (for non-VLA types), it simply yields the size of the element type, which eliminates the need to store that size in the struct.

The code above is vastly oversimplified to demonstrate the idea. It’s very incomplete, algorithmically slow, and unsafe. But the idea is there.

To summarize, I am not aware of any way to write a completely zero-compromise array library in C, but the code above shows the closest I’ve come to that.


P.S. There is one problem I am aware of with this method:

array(int) myarray;
array(int) myarray1 = myarray; /* 'error: invalid initializer' */

There are 2 ways to get around this:

memcpy(&myarray1, &myarray, sizeof(myarray));
/* or */
myarray1 = *((typeof(myarray1)*)&myarray); /* requires GNU C */

Both should, under a decent optimization level, compile to the same assembly.


on January 26, 2017 08:38 AM

Previous LTS point-releases came with a renamed Mesa backported from the latest release (mesa-lts-wily, for instance). Among other issues, this prevented providing newer Mesa backports to point-release users without creating a mess of different versions.

That’s why from 16.04.2 onwards Mesa will be backported unrenamed, and this time it is the last version of the 12.0.x series, which was also used in 16.10. It’s available now in xenial-proposed, and of course in yakkety-proposed too (16.10 released with 12.0.3). Get it while it’s hot!


on January 26, 2017 05:13 AM

We all sorta thought

Stuart Langridge

A thing I wrote today, about Trump and Brexit and “post-truth” and “alternative facts” and helplessness, because I’ve had this conversation separately three times today.

this is the thing. We all sorta thought (and by “we” I mean everyone from us here right back to, I dunno, Newton and Boyle) that if we provided inductive or deductive proof of a thing, that everyone else would say “oh yeah, I’m convinced now!” and that’d be it. But people who don’t want that to happen have learned that attacking the evidence doesn’t work — it took them a few hundred years to learn that, but they did — but dismissing the whole idea as illegitimate does work. And we don’t know how to argue against that. I say two and two are four; you disagree; I say “no look here’s the proof”; you say “your methods of proof are wrong and biased”; and then I’m all, er, I don’t know what to say now, you were meant to be convinced by the proof.

more importantly: a third party, looking at that conversation, goes away thinking “well, is 2+2 equal to 4? Don’t know; there seem to be two sides to that argument”, or worse, “man, I just don’t care what 2+2 is because every time I try to find out there’s just loads of shouting, so I’ll stop asking”.

and thus, modern politics. Gaslighting and obfuscation, designed to make people believe that facts are disputable and that engagement is confusing and annoying.

(Of course, part of the problem here is that our side have a habit of declaring things to be an actual fact when they’re really “what we want to believe”, and once one’s cried wolf that way a few times, one’s credibility is gone and it’s really hard to get back. It’s not all the other side’s fault.)

Normally I wouldn’t re-post such a thing, but of course this conversation happened on Slack, which means that six months from now I won’t be able to link to this because it’ll be over 10,000 messages ago and Slack will be holding it to ransom until we pay money, and five years from now I won’t be able to link to it because Slack will have gone bust or have been sold to someone and shut down.

on January 26, 2017 12:08 AM

January 25, 2017

As Xubuntu’s tenth anniversary year is now over, it’s time to announce the winners of the #lovexubuntu competition announced in June!

The two grand prize winners, receiving a t-shirt and a sticker set, are Keith I Myers with his Xubuntu cookie cutters and Daniel Eriksson with a story of a happy customer. The three other finalists, each receiving a set of Xubuntu stickers, are Dina, Sabrin Islam and Michael Morozov.

Congratulations to all winners!

Finally, before presenting the winning submissions, let us thank everybody who submitted a story or a picture – we really appreciate it! For those who want to see more, all of the submissions are listed on the Xubuntu wiki on the Love Xubuntu 2016 page.

The Grand Prize Winners

Keith I Myers

Xubuntu cookie cutters by Keith I Myers

After seeing a simple metal cookie cutter created by the Xubuntu Marketing lead, Keith was inspired to make a plastic 3D-printed version of the Xubuntu cookie cutter. He printed several of them and also shared the design on Thingiverse so others could also print it.

If you decide to print and use these, we’d love to see the resulting cookies!

Daniel Eriksson

We run a small business, mainly doing computer service and maintenance, app programming and other similar things. One of the things we do is customized Linux desktops, where we build a user interface based around a customer’s wishes; tweaking everything from themes, colors and fonts to panels, widgets and other content. When we started doing this we tried out and evaluated loads of distributions and desktop environments, eventually deciding that Xubuntu was the perfect choice. We wanted to maximize the amount of customization we could do while still having a system that was light on resources (since customers often have old computers).

It was a choice we have never regretted, as it has always fit our needs perfectly. We can get everything from design to workflow just as we want it, and it is stable as a rock while still often introducing new features for us to play with.

One of our best experiences was with a person who wanted an interface on a laptop that was just as simple and scaled down as that of an iPad, while still being able to do all things a computer ought to do. This was not an especially computer-savvy person, so it needed to be straightforward and simple. We managed to discard most classic desktop parameters and build a very unique interface, all within what was provided by stock Xubuntu (though we did some art ourselves). It turned out great; our customer was very happy with it, and other people have shown interest in having something similar on their computers. Needless to say, this was a success story for us which would not have been possible without Xubuntu.

So thanks for all your hard work! We keep on designing our users’ desktops and will continue to use the excellent Xubuntu for it. :)

Finalists

Dina

I live in Israel, and in Hebrew, the slang word “Zubi” is an insolent and extreme way to say “No way I’ll do it”.

Also, according to the Hebrew Wikipedia, Xubuntu is pronounced as “Zoo-boon-too” rather than “Ksoo-boon-too” (its name is written in Hebrew, which solves that ambiguity).

Therefore, when I told a friend that my old computer would not boot because of a hard disk problem, and all the technicians advised me to buy a new one, but I installed Xubuntu and it works, he noted that “Xubuntu” actually sounds like “I’m not doing that, I’m moving to Linux!”

Sabrin Islam

@Xubuntu A teacher once asked me, “how did you get Windows to look like that”, to which I replied it’s Xubuntu sir #LoveXubuntu

– @Ornim on Twitter
Original tweet

Michael Morozov

I #LoveXubuntu because it’s top-notch, minimalistic neat and helps me focus on real things.

– @m1xo_0n on Twitter
Original tweet

Beyond Year 10

As we look forward to 2017 and the 11th year of Xubuntu, keep an eye out for other ways you can help celebrate and promote Xubuntu. And as always, we could use more folks contributing directly to the development, testing and release of Xubuntu; see the Xubuntu Contributor Documentation to learn more.

on January 25, 2017 08:53 PM

January 24, 2017

Recently, I have had the pleasure of working with a fantastic company called Endless who are building a range of computers and a Linux-based operating system called Endless OS.

My work with them has primarily involved the community and product development of an initiative in which they are integrating functionality into the operating system that teaches you how to code. This provides a powerful environment where you can learn to code and easily hack on the platform’s own applications.

If this sounds interesting to you, I created a short video demo where I show off their Mission hardware as well as run through a demo of Endless Code in action. You can see it below:

I would love to hear what you think and how Endless Code can be improved in the comments below.

The post Endless Code and Mission Hardware Demo appeared first on Jono Bacon.

on January 24, 2017 12:35 PM

January 23, 2017

We’re proud to announce support for Kubernetes 1.5.2 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premises environments (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.2 is a patch release comprised mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.2 cluster up and running on an Ubuntu 16.04 system:

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production grade deployments and cluster lifecycle management it is recommended to read the full Canonical Distribution of Kubernetes documentation.

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your kubernetes model selected, you can deploy the bundle to upgrade your cluster if it is on the 1.5.x series of Kubernetes. At this time, releases before 1.5.x have not been tested. Depending on which bundle you previously deployed, run:

    juju deploy canonical-kubernetes

or

    juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes under a different label, you will need to upgrade the components manually. The following command list assumes you have made no tweaks, but it can be modified to work for your deployment.

juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and the resources to the Kubernetes 1.5.2 release of the Canonical Distribution of Kubernetes.

New features:

  • Full support for Kubernetes v1.5.2.

General Fixes

  • #151 #187 It wasn’t very transparent to users that they should be using conjure-up for local development; conjure-up is now the de facto default mechanism for deploying CDK.

  • #173 Resolved permissions on ~/.kube on kubernetes-worker units

  • #169 Tuned the verbosity of the AddonTacticManager class during charm layer build process

  • #162 Added NO_PROXY configuration to prevent routing all requests through configured proxy [by @axinojolais]

  • #160 Resolved an error by flannel sometimes encountered during cni-relation-changed [by @spikebike]

  • #172 Resolved sporadic timeout issues between worker and apiserver due to nginx connection buffering [by @axinojolais]

  • #101 Work-around for offline installs attempting to contact pypi to install docker-compose

  • #95 Tuned verbosity of copy operations in the debug script for debugging the debug script.

Etcd layer-specific changes

  • #72 #70 Resolved a certificate-relation error where etcdctl would attempt to contact the cluster master before services were ready [by @javacruft]

Unfiled/un-scheduled fixes:

  • #190 Removal of assembled bundles from the repository. See bundle author/contributors notice below

Additional Feature(s):

  • We’ve open sourced our release management process scripts we’re using in a juju deployed jenkins model. These scripts contain the logic we’ve been running by hand, and give users a clear view into how we build, package, test, and release the CDK. You can see these scripts in the juju-solutions/kubernetes-jenkins repository. This is early work, and will continue to be iterated on / documented as we push towards the Kubernetes 1.6 release.

Notice to bundle authors and contributors:

The fix for #190 is a larger change that has landed in the bundle-canonical-kubernetes repository. Instead of maintaining several copies of a single use-case bundle across several repositories, we are now assembling the CDK-based bundles from fragments (unofficial nomenclature).

This affords us the freedom to rapidly iterate on a CDK-based bundle and include partner technologies, such as different SDN vendors, storage backend components, and other integration points. It keeps our CDK bundle succinct while allowing more complex solutions to be assembled easily, reliably, and repeatably. This does change the contribution guidelines for end users.

Any changes to the core bundle should be placed in its respective fragment under the fragments directory. Once this has been placed/merged, the primary published bundles can be assembled by running ./bundle in the root of the repository. This process has been outlined in the repository README.md

We look forward to any feedback on how transparent this process is, and on whether it has useful applications outside of our own release management process. The ./bundle python script is still very much geared towards our own release process and how we assemble bundles targeted at the CDK. However, we’re open to generalizing it and encourage feedback and contributions to make it more useful to more people.

How to contact us:

We’re normally found in these Slack channels and attend these sig meetings regularly:

Operators are an important part of Kubernetes; we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels, feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

on January 23, 2017 08:30 AM

January 21, 2017

When you download a KDE neon ISO you get transparently redirected to one of the mirrors that KDE uses. Recently the Polish mirror was marked as unsafe in Google Safebrowsing which is an extremely popular service used by most web browsers and anti-virus software to check if a site is problematic. I expect there was a problem elsewhere on this mirror but it certainly wasn’t KDE neon. KDE sysadmins have tried to contact the mirror and Google.

You can verify any KDE neon installable image by checking its gpg signature against the KDE neon ISO Signing Key. The signature is the .sig file found alongside each .iso file.

gpg2 --recv-key '348C 8651 2066 33FD 983A 8FC4 DEAC EA00 075E 1D76'

wget http://files.kde.org/neon/images/neon-useredition/current/neon-useredition-current.iso.sig

gpg2 --verify neon-useredition-current.iso.sig
gpg: Signature made Thu 19 Jan 2017 11:18:13 GMT using RSA key ID 075E1D76
gpg: Good signature from "KDE neon ISO Signing Key <[email protected]>" [full]

Adding a sensible GUI for this is future work, and fairly tricky to do in a secure way, but hopefully it will come soon.

on January 21, 2017 12:18 AM

January 19, 2017

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide a thought-provoking insight into what could eventuate. What’s more, the two below bear a creepy resemblance to the Trump phenomenon and to many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump: born into a wealthy family, a series of disasters befall every honest person he comes into contact with, he comes to control a vast business empire acquired by inheritance and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice Damien Thorn and Donald Trump even share the same initials, DT?

on January 19, 2017 07:31 PM