August 18, 2016
August 17, 2016
Recently I have been working on the visual design for RCS (Rich Communication Services) group chat. While working on the “Group Info” screen, we found ourselves wondering what the best way to display online/offline status would be. Some of us thought text would be more explicit, but others thought it added more noise to the screen. We decided that we needed some real data in order to make the best decision.
Usually our user testing is done by a designated researcher, but they don’t always have time to test every aspect of a design, especially something as small as online/offline status. So I decided to make my first foray into user testing, and got some tips from designers on our cloud team who had more experience with it: Maria Vrachni, Carla Berkers and Luca Paulina.
I then set about finding my user testing group. I chose 5 people to start with because you can uncover up to 80% of usability issues from speaking to 5 people. I tried to recruit a range of people to test with:
- Billy: a software engineer, very tech savvy and a tech enthusiast.
- Magda: our former PM, very familiar with our product and designs.
- Stefanie: our Office Manager, who knows our products but is less familiar with our designs.
- Rodney: our IS Associate, who is tech savvy but not familiar with our design work.
- Ben: a copyeditor with no background in tech or design, and a light phone user.
The tool I decided to use was InVision. It has a lot of good features and I already had some experience creating lightweight prototypes with it. I made four minimal prototypes where the group info screen had a mixture of dots vs text to represent online status, and variations on placement. I then put these on my phone so my test subjects could interact with them, feel like they were looking at a full-fledged app, and have the same expectations.

During testing, I made sure not to ask my subjects any leading questions. I only asked them very broad questions like “Do you see everything you expect to on this page?” and “Is anything unclear?”. When testing, it’s important not to lead the test subjects so they can be as objective as possible. Keeping this in mind, it was interesting to see what the testers noticed and brought up on their own, and what patterns arose.
My findings were as follows:
Online status: Text or Green Dot
They unanimously preferred online status to be depicted with colour, and 4 out of 5 preferred the green dot over text because of its scannability.
Online status placement:
This one was close but having the green dot next to the avatar had the edge, again because of its scannability. One tester preferred the dot next to the arrow and another didn’t have a preference on placement.
Pending status:
What was also interesting was that three out of the four thought “pending” had the wrong placement. They felt it should have the same placement as the online and offline statuses.
Overall, it was very interesting to collect real data to support our work, and I’m looking forward to the next time, which will hopefully be bigger in scope.

The finished design
On Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will be doing a Reddit AMA about my work in community strategy, management, developer relations, open source, music, and elsewhere.
For those unfamiliar with Reddit AMAs, it is essentially a way in which people can ask questions that someone will respond to. You simply add your questions (serious or fun, both welcome!) and I will respond to as many as I can.
It has been a while since my last AMA, so I am looking forward to this one.
Feel free to ask any questions you like, and this could include questions that relate to:
- Community management, leadership, and best practice.
- Working at Canonical, GitHub, XPRIZE, and elsewhere.
- The open source industry, how it has changed, and what the future looks like.
- The projects I have been involved in such as Ubuntu, GNOME, KDE, and others.
- The driving forces behind people and groups, behavioral economics, etc.
- My other things such as my music, conferences, writing etc.
- Anything else – politics, movies, news, tech…ask away!
If you want to ask about something else though, go ahead! 
How to Join
Joining the AMA is simple. Just follow these steps:
- Be sure to have a Reddit account. If you don’t have one, head over here and sign up.
- On Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will share the link to my AMA on Twitter (I am not allowed to share it until we run the AMA). You can look for this tweet by clicking here.
- Click the link in my tweet to go to the AMA and then click the text box to add your question(s).
- Now just wait until I respond. Feel free to follow up, challenge my response, and otherwise have fun!
Simple as that. 
A Bit of Background
For those of you unfamiliar with my work, you can read more here, but here is a quick summary:
- I run a community strategy/management and developer relations consultancy practice.
- My clients include Deutsche Bank, HackerOne, data.world, Intel, Sony Mobile, Open Networking Foundation, and others.
- I previously served as director of community for GitHub, Canonical, and XPRIZE.
- I serve as an advisor to various organizations including Open Networking Foundation, Mycroft AI, Mod Duo, and Open Cloud Consortium.
- I wrote The Art of Community and have columns for Forbes and opensource.com. I have also written four other books and hundreds of articles.
- I have been involved with various open source projects including Ubuntu, GNOME, KDE, Jokosher, and others.
- I am an active podcaster, previously with LugRadio and Shot of Jaq, and now with Bad Voltage.
- I am really into music and have played in Seraphidian and Severed Fifth.
So, I hope you manage to make it over to the AMA, ask some fun and interesting questions, and we can have a good time. Thanks!
The post Join My Reddit AMA – 30th August 2016 at 9am Pacific appeared first on Jono Bacon.
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In July, 136.6 work hours were dispatched among 11 paid contributors. Their reports are available:
- Antoine Beaupré was allocated 4 hours again, but in the end he put his 8 pending hours back into the pool for the coming months.
- Balint Reczey did 18 hours (out of 7 hours allocated + 2 remaining, thus keeping 2 extra hours for August).
- Ben Hutchings did 15 hours (out of 14.7 hours allocated + 1 remaining, keeping 0.7 extra hour for August).
- Brian May did 14.7 hours.
- Chris Lamb did 14 hours (out of 14.7 hours, thus keeping 0.7 hours for next month).
- Emilio Pozuelo Monfort did 13 hours (out of 14.7 hours allocated, thus keeping 1.7 extra hours for August).
- Guido Günther did 8 hours.
- Markus Koschany did 14.7 hours.
- Ola Lundqvist did 14 hours (out of 14.7 hours assigned, thus keeping 0.7 extra hours for August).
- Santiago Ruano Rincón did 14 hours (out of 14.7h allocated + 11.25 remaining, the 11.95 extra hours will be put back in the global pool as Santiago is stepping down).
- Thorsten Alteholz did 14.7 hours.
Evolution of the situation
The number of sponsored hours jumped to 159 hours per month thanks to GitHub joining as our second platinum sponsor (funding 3 days of work per month)! Our funding goal is getting closer but it’s not there yet.
The security tracker currently lists 22 packages with a known CVE, and the dla-needed.txt file lists the same number. That’s a sharp decline compared to last month.
Thanks to our sponsors
New sponsors are in bold.
- Platinum sponsors:
- Gold sponsors:
- The Positive Internet (for 26 months)
- Blablacar (for 25 months)
- Linode LLC (for 15 months)
- Babiel GmbH (for 4 months)
- Plat’Home (for 4 months)
- Silver sponsors:
- Domeneshop AS (for 25 months)
- Université Lille 3 (for 25 months)
- Trollweb Solutions (for 23 months)
- Nantes Métropole (for 19 months)
- University of Luxembourg (for 17 months)
- Dalenys (for 16 months)
- Univention GmbH (for 11 months)
- Université Jean Monnet de St Etienne (for 11 months)
- Sonus Networks (for 5 months)
- Bronze sponsors:
- David Ayers – IntarS Austria (for 26 months)
- Evolix (for 26 months)
- Offensive Security (for 26 months)
- Seznam.cz, a.s. (for 26 months)
- Freeside Internet Service (for 25 months)
- MyTux (for 25 months)
- Linuxhotel GmbH (for 23 months)
- Intevation GmbH (for 22 months)
- Daevel SARL (for 21 months)
- Bitfolk LTD (for 20 months)
- Megaspace Internet Services GmbH (for 20 months)
- Greenbone Networks GmbH (for 19 months)
- NUMLOG (for 19 months)
- WinGo AG (for 18 months)
- Ecole Centrale de Nantes – LHEEA (for 15 months)
- Sig-I/O (for 12 months)
- Entr’ouvert (for 10 months)
- Adfinis SyGroup AG (for 7 months)
- Laboratoire LEGI – UMR 5519 / CNRS
- Quarantainenet BV
- GNI MEDIA
My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.
DebConf 16
I was in South Africa for the whole week of DebConf 16 and gave 3 talks/BoFs. You can find the slides and the videos linked from their corresponding pages:
- Kali Linux’s Experience of a Derivative Tracking Debian Testing
- 2 Years of Work of Paid Contributors in the Debian LTS Project
- Using Debian Money to Fund Debian Projects
I was a bit nervous about the third BoF (on using Debian money to fund Debian projects), but I discussed it with many people during the week, and it looks like the project has evolved quite a bit in the last 10 years: while it’s still a sensitive topic (and rightfully so, given the possible impacts), people are willing to discuss the issues and to experiment. You can have a look at the gobby notes that resulted from the live discussion.
I spent most of the time discussing with people and I did not do much technical work besides trying (and failing) to fix accessibility issues with tracker.debian.org (help from knowledgeable people is welcome, see #830213).
Debian Packaging
I uploaded a new version of zim to fix a reproducibility issue (and forwarded the patch upstream).
I uploaded Django 1.8.14 to jessie-backports and had to fix a failing test (pull request).
I uploaded python-django-jsonfield 1.0.1, a new upstream version integrating the patches I prepared in June.
I managed the (small) ftplib library transition. I prepared the new version in experimental, ensured that reverse build dependencies still build, and coordinated the transition with the release team. This was all triggered by a reproducible-build bug that made me look at the package… the last time I did, upstream had disappeared (the upstream URL was even gone), but it looks like he has become active again and pushed a new release.
I filed wishlist bug #832053 to request a new deblog command in devscripts. It should make it easier to display current and former build logs.
Kali related Debian work
I worked on many issues that were affecting Kali (and Debian Testing) users:
- I made an open-vm-tools NMU to get the package back into testing.
- I filed #830795 on nautilus and #831737 on pbnj to forward Kali bugs to Debian.
- I wrote a fontconfig patch to make it ignore .dpkg-tmp files. I also forwarded that patch upstream and filed a related bug in gnome-settings-daemon which is actually causing the problem by running fc-cache at the wrong times.
- I started a discussion to see how we could fix the synaptics touchpad problem in GNOME 3.20. In the end, we have a new version of xserver-xorg-input-all which only depends on xserver-xorg-input-libinput and not on xserver-xorg-input-synaptics (no longer supported by GNOME). This is after upstream refused to reintroduce synaptics support.
- I filed #831730 on desktop-base because KDE’s plasma-desktop is no longer using the Debian background by default. I had to seek upstream help to find out a possible solution (deployed in Kali only for now).
- I filed #832503 because the way dpkg and APT manage foo:any dependencies when foo is not marked “Multi-Arch: allowed” is counter-productive… I discovered this while trying to use a firefox-esr:any dependency. And I filed #832501 to get the desired “Multi-Arch: allowed” marker on firefox-esr.
Thanks
See you next month for a new summary of my activities.
August 16, 2016
We just cut the cord, and glory is ours. I thought I would share how we did it to provide food for thought for those of you sick of cable (and maybe so people can stop bickering on my DirecTV blog post from years back).
I will walk through the requirements we had, what we used to have, and what the new setup looks like.
Requirements
The requirements for us are fairly simple:
- We want access to a core set of channels:
- Comedy Central
- CNN
- Food Network
- HGTV
- Local Channels (e.g. CBS, NBC, ABC).
- Be able to favorite shows and replay them after they have aired.
- Have access to streaming channels/services:
- Amazon Prime
- Netflix
- Crackle
- Spotify
- Pandora
- Be able to play Blu-ray discs, DVDs, and other optical content. While we rarely do this, we want the option.
- Have a reliable Internet connection and uninterrupted service.
- Have all of this both in our living room and in our bedroom.
- Reduce our costs.
- Bonus: access some channels on mobile devices. Sometimes I would like to watch The Daily Show or the news on my tablet while on the elliptical.
Previous Setup
Our previous setup had most of these requirements in place.
For TV we were with DirecTV. We had all of the channels that we needed and we could record TV downstairs but also replay it upstairs in the bedroom.
We have a Roku that provides the streaming channels (Netflix, Amazon Prime, Crackle, Spotify, and Pandora).
We also have a cheap Blu-ray player which, while rarely used, does come in handy from time to time.
Everything goes into a Pioneer Elite amp. I tried to consolidate the remotes with a Logitech Harmony, but it broke immediately and I have heard from others that the quality is awful. As such, we used a cheaper all-in-one remote which could do everything except the Roku, as that is Bluetooth.
The New Setup
At the core of our new setup is a Playstation 4. I have actually had this for a while, but it has been sitting up in my office, barely used.
The Playstation 4 provides the bulk of what we need:
- Amazon Prime, Netflix, and Spotify. I haven’t found a Pandora app yet, but this is fine.
- Blu-ray playback.
- Obviously we have the additional benefit of now being able to play games downstairs. I am enjoying having a blast on Battlefield from time to time and I installed some simple games for Jack to play on.
For the TV we are using Playstation Vue. This is a streaming service that has the most comprehensive set of channels I have seen so far, and the bulk of what we wanted is in the lowest tier plan ($40/month). I had assessed some other services but key channels (e.g. Comedy Central) were missing.
Playstation Vue has some nice features:
- It is a lot cheaper. Our $80+/month cable bill has now gone down to $40/month with Vue.
- The overall experience (e.g. browsing the guide, selecting shows, viewing information) is far quicker, more modern, and smoother than the clunky old DirecTV box.
- When browsing the guide you can not just watch live TV but also play back shows that have already aired. For example, missed The Daily Show this week? No worries, you can just go back and watch it.
- Playstation Vue is also available on Android, iOS, Roku and other devices, which means I can watch TV and play back shows wherever I am.
In terms of the remote control, I bought the official Playstation 4 remote and it works pretty well. It is still a little clunky in some areas, as the apps on the Playstation sometimes refer to the usual Playstation buttons as opposed to the buttons on the remote. Overall though it works great and it also powers my other devices (e.g. TV and amp), although I couldn’t get volume pass-through working.
Networking wise, we have a router upstairs in the bedroom, which is where the feed comes in. I then take a cable from it and send it over our power lines with an Ethernet-over-Power adapter. Downstairs I have an additional, chained router, and I run Ethernet from it to the Playstation. This results in considerably more reliable performance than using wireless, which is a big improvement, as the Roku doesn’t have an Ethernet port.
In Conclusion
Overall, we love the new setup. The Playstation 4 is a great center-point for our entertainment system. It is awesome having a single remote, everything on one box and in one interface. I also love the higher-fidelity experience – the Roku is great but the interface looks a little dated and the apps are rather restricted.
Playstation Vue is absolutely awesome and I would highly recommend it for people looking to ditch cable. You don’t even need a Playstation 4 – you can use it on a Roku, for example.
I also love that we are future proofed. I am planning on getting Playstation VR, which will now work downstairs, and Sony are bringing more and more content and apps to the Playstation Store. For example, there are lots of movies, TV shows, and other content which may not be available elsewhere.
I would love to hear your stories though about your cord cutting. Which services and products did you move to? What do you think about a games console running your entertainment setup? What am I doing wrong? Let me know in the comments!
The post Cutting the Cord With Playstation Vue appeared first on Jono Bacon.
We’re happy to welcome a new development board to the Ubuntu family! The new Intel® Joule™ is a powerful board targeted at IoT and robotics makers and runs Ubuntu for a smooth development experience. It’s also affordable and compact enough to be used in deployment, so Ubuntu Core can be installed to keep any device it’s included in secure and up to date … wherever it is!
Check out this Robot Demo that was filmed pre-IDF – The Turtlebot runs ROS on Ubuntu using the Intel® Joule™ board and Realsense camera.
Ubuntu Core, also known as Snappy, is a stripped down version of Ubuntu, designed to run securely on autonomous machines, devices and other internet-connected digital things. From homes to drones, these devices are set to revolutionise many aspects of our lives, but they need an operating system that is different from that of traditional PCs. Learn more about Ubuntu Core here.
Get involved in contributing to Ubuntu Core here.
The Ubuntu Core operating system provides a production-ready platform for gateways.
London, UK – August 16, 2016 – Canonical today announced it has formed a strategic partnership with Advantech to work together to certify the company’s Internet of Things (IoT) gateways for Ubuntu Core.
The partnership ensures users of Advantech’s selected Intel x86-based IoT gateways are certified to have a fully functioning and supported Ubuntu image for their gateway. Users will also have access to an Ubuntu image and developer tools to ready their devices for production, as well as a number of services to fully manage their device’s security and software.
“We are extremely pleased to be forming this strategic partnership with Advantech, one of the world’s leaders in providing trusted innovative embedded and automation products and solutions,” said Jon Melamut, vice president of commercial devices operations for Canonical. “This partnership confirms Ubuntu Core as the operating system of choice for IoT developers and systems integrators who want to deploy products to market quickly. Ubuntu Core is Ubuntu for IoT and it provides, amongst other things, a production-ready operating system for IoT gateways.”
“Advantech aims to provide a full range of IoT gateways with pre-integrated software to fulfill the needs of diverse IoT applications. We are very pleased with our collaboration with Canonical and the expansion of our operating system offerings. This collaboration will enable us to satisfy even more customer requirements and deliver an integrated, pre-validated, and flexible open-computing gateway platform that allows fast solution development and deployment,” said Miller Chang, vice president of Advantech Embedded Computing Group.
Canonical is the company behind Ubuntu, the world’s most popular open-source platform, while Advantech is the leader in providing trusted innovative embedded and automation products and solutions.
For more information on Ubuntu Core, please visit here.
About Canonical
Canonical is the company behind Ubuntu, the leading Operating System for cloud and the Internet of Things. Most public cloud workloads are running Ubuntu, and most new smart gateways, self-driving cars and advanced humanoid robots are running Ubuntu as well. Canonical provides enterprise support and services for commercial users of Ubuntu.
Canonical leads the development of the snap universal Linux packaging system for secure, transactional device updates and app stores. Ubuntu Core is an all-snap OS, perfect for devices and appliances. Established in 2004, Canonical is a privately held company.
About Advantech
Founded in 1983, Advantech is a leader in providing trusted, innovative products, services, and solutions. Advantech offers comprehensive system integration, hardware, software, customer-centric design services, embedded systems, automation products, and global logistics support. We cooperate closely with our partners to help provide complete solutions for a wide array of applications across a diverse range of industries. Our mission is to enable an intelligent planet with Automation and Embedded Computing products and solutions that empower the development of smarter working and living. With Advantech, there is no limit to the applications and innovations our products make possible. Advantech is a premier member of the Intel® Internet of Things Solutions Alliance. From modular components to market-ready systems, Intel and the 350+ global member companies of the Alliance provide scalable, interoperable solutions that accelerate deployment of intelligent devices and end-to-end analytics. Close collaboration with Intel and each other enables Alliance members to innovate with the latest technologies, helping developers deliver first-in-market solutions. (Corporate Website: www.advantech.com).
August 15, 2016
Welcome to the Ubuntu Weekly Newsletter. This is issue #478 for the week August 8 – 14, 2016, and the full version is available here.
In this issue we cover:
- Ubuntu Stats
- LoCo Events
- Zygmunt Krynicki: Snap execution environment
- Canonical Design Team: Web team hack day
- Canonical Design Team: Competition winner – Timer App
- Dustin Kirkland: Howdy, Windows! A Six-part Series about Ubuntu-on-Windows for Linux.com
- Zygmunt Krynicki: Creating your first snappy interface
- Julian Andres Klode: Porting APT to CMake
- Aaron Honeycutt: Plasma features – The endless search Pt.1
- Simon Quigley: A look at Lubuntu’s LXQt Transition
- Canonical News
- In The Blogosphere
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 12.04, 14.04 and 16.04
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth K. Joseph
- Chris Guiver
- Athul Muralidhar
- Chris Sirrs
- Paul White
- Simon Quigley
- And many others
If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA 3.0).
We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.
I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.
On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default palette – though this will of course be completely customisable by the user.
On the functionality side we are proposing a number of improvements:
- Keyboard shortcuts
- Ability to completely customise touch/keyboard shortcuts
- Ability to split the screen horizontally/vertically (similar to Terminator)
- Ability to easily customise the palette colours, and window transparency (on desktop)
- Unlimited history/scrollback
- Adding a “find” action for searching the history
Tabs and split screen
On larger screens tabs will be visually persistent. In addition it’s desirable to be able to split a panel horizontally and vertically, and to move focus between panels using keyboard shortcuts or a mouse/touch.
On mobile, the tabs will be accessed through the bottom edge, as on the browser app.
Quick mobile access to shortcuts and commands
We are discussing the option of having modifier (Ctrl, Alt etc) keys working together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible to do in theory with our on-screen keyboard, it’s something that won’t land in the immediate near future. In the interim modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to make these shortcuts ordered by recency, and have the ability to add your own custom key shortcuts and commands.
We are also discussing with the on-screen keyboard devs about adding an app specific auto-correct dictionary – in this case terminal commands – that together with a swipe keyboard should make a much nicer mobile terminal user experience.
More themability
We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.
We need your help!
These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!
Also, as Terminal app is a fully community developed project, we are looking for one or two experienced Qt/QML developers with time to contribute to lead the implementation of these designs. Please reach out to [email protected] or [email protected] to discuss details!
EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.
Things move fast in the land of Neon light.
Today KDE Frameworks 5.25 was added to Neon User edition. KDE’s selection of Qt addon libraries gets released every month and this update comes with a bunch of fixes.
Finally, Kontact has been built in the Developer Editions; apologies to those who had a half-installed build for a while, you should now be able to install all of KDE PIM and get your e-mail/calendar/notes/feed reader and a load of other bits. Suggestions are now being taken for what I should add next to Neon builds.
And in free software you are nobody until somebody bases their project off yours. Yesterday Maui Linux released its new version based off KDE neon. Maui was previously the distro used for Hawaii Qt Desktop but now it’s Plasma all the way and comes from the Netrunner team with a bunch of customisations for those who don’t appreciate Neon’s minimalist default install.
Maui Linux based off Neon
Hello everyone! After such a long time I’ve allowed myself to write this article to share a recent experience with a training program presented by www.zuliatec.com called Procodi (www.procodi.com), a training program for children that gives them, from a very early age, tools to develop skills and abilities in the areas of software development, graphic design, digital electronics, robotics, digital media and digital music.
Scratch (as its own website says) is designed especially for ages 8 to 16, but it is used by people of all ages. Millions of people are creating projects in Scratch in a wide variety of settings, including homes, schools, museums, libraries and community centers.
It also says: the ability to code computer programs is an important part of literacy in today’s society. When people learn to program in Scratch, they learn important strategies for solving problems, designing projects, and communicating ideas.
Since this tool is very useful for children and adults alike, I’ll explain in a simple way how to get the program running offline on GNU/Linux.
If you use GNOME or a derivative you need to have a library called Gnome-Keyring installed, and if you use KDE you need Kde-Wallet.
In this example I explain how to get Scratch working on Linux Mint; the same steps should work on Debian-derived operating systems.
- First, download the files to install Adobe Air and Scratch for Linux from the official Scratch website, https://scratch.mit.edu/scratch2download/ (it is also available for Windows and Mac).
- Then install gnome-keyring: sudo aptitude install gnome-keyring
- Add two symbolic links to the /usr/lib/ directory as follows: sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0 && sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
- Then, from the console, change to the directory containing the Adobe Air installer and run: chmod +x AdobeAIRInstaller.bin && ./AdobeAIRInstaller.bin
- Follow all the steps the installer shows you, and be a little patient.
- Once Adobe Air is installed, find the file Scratch-448.air in your file manager and open it with the Adobe AIR Application Installer. Again, a little patience is needed, but when it finishes it will create a shortcut on your desktop from which you can launch the program whenever you like.
With the above done we can now use Scratch offline; but remember, if you visited the project’s official website you may have noticed that it can also be used online.
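For reference, here are the same steps condensed into a single shell session (a sketch assuming a Debian-derived system with the Adobe Air installer already downloaded into the current directory; the file names are the ones used above):
# install gnome-keyring and add the symlinks Adobe Air expects
sudo aptitude install gnome-keyring
sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0
sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
# make the Adobe Air installer executable and run it, following its prompts
chmod +x AdobeAIRInstaller.bin
./AdobeAIRInstaller.bin
# finally, open Scratch-448.air with the Adobe AIR Application Installer (graphically, as described above)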
Happy Hacking.
A while back, I found myself in need of some TLS certificates set up and issued for a testing environment.
I remembered there was some code for issuing TLS certs in Docker, so I yanked some of that code and made a sensible CLI API over it.
Thus was born minica!
Something as simple as minica [email protected] domain.tld will issue two TLS certs (one with a Client EKU, and one with a Server EKU), both issued from a single CA.
Next time you’re in need of a few TLS keys (without having to worry about stuff like revocation or anything), this might be the quickest way out!
August 13, 2016
This blog post is not an announcement of any kind or even an official plan. This may even be outdated, so check the links I provide for additional info.
As you may have seen, the Lubuntu team (which I am a part of) has started the migration process to LXQt. It's going to be a long process, but I thought I might write about some of the things that go into this process.
Step 1 - Getting a metapackage
This step is already done, and it was installable in a virtual machine last time I checked. The metapackage is lubuntu-qt-desktop, but there's a lot to be desired.
While we already have this package, there's a lot to be tweaked. I've been running LXQt with the Lubuntu artwork as my daily driver for a few months now, and there's a lot missing. So while you can install the package and play around with it, it needs to change quite a bit before it's usable.
Also in this image are our candidates (not final yet) for applications that will be included in Lubuntu. Here's a current list of what's on the image:
- Qupzilla - Internet browser
- Transmission (Qt) - BitTorrent Client
- Quassel IRC - IRC Client
- SMPlayer - Media Player
- SMTube - Native YouTube Video Player (launches with SMPlayer so it's a nice partner application)
- Calibre - "E-book" client
- 2048 (Qt) - Sliding block puzzle game
- JuffEd - Text editor
- nobleNote - Application to take notes
- Xarchiver - Archive manager
- ScreenGrab - Screenshots
- nomacs - Image viewer
- XScreenSaver - Screensaver
- gdebi (KDE UI) - Application/Package Manager
- pinentry (Qt) - Secure GnuPG dialog boxes
- usb-creator (KDE frontend) - Tool to take ISO files and write them to USB disks
- compton-conf - Configuration tool for X composite manager Compton
- ObConf (Qt) - Configuration editor for OpenBox
- Qlipper - Clipboard history applet
- QtPass - GUI Password Manager
- LibreOffice - Office Suite
- qpdfview - PDF Viewer
- Muon - Package Management
- Software Properties (KDE) - Easily manage software repositories
An up-to-date listing of the software in this metapackage is available here.
Step 2 - Getting an image
The next step is getting a working image for the team to test. The two outstanding merge proposals adding this have been merged, and we're now waiting for the images to be spun up and added to the ISO QA Tracker for testers.
Having this image will help us gauge how much system resources are used, and gives us the ability to run some benchmarks on the desktop. This will come after the image is ready and spins up correctly.
Step 3 - Testing
An essential part of any good operating system is the testing. We need to create some LXQt-specific test cases and make sure the ISO QA test cases are working before we can release a reliable image to our users.
As mentioned before, we need test cases. We created a blueprint last cycle tracking our progress with test cases, and the sooner that those are done, the sooner Lubuntu can make the switch knowing that all of our selected applications work fine.
Step 4 - Picking applications
This is the toughest step of all. We need to pick the applications that best suit our users' use cases (a lot of our users run on older hardware) and needs (LibreOffice, for example). Every application will most likely need a week or two for proper benchmarking and testing, but if you have a suggestion for an application that you would like to see in Lubuntu, share your feedback on the blueprints. This is the best way to let us know what you would like to see and give your feedback on the existing applications before we make a final decision.
Final thoughts
I've been using LXQt for a while now, and it has a lot of advantages not only in applications, but the desktop itself. Depending on how notable some things are, I might do a blog post in the future, otherwise, see for yourself. :)
Here is our blueprint that will be updated a lot in the next week or so that will tell you more about the transition. If you have any questions, shoot me an email at [email protected] or send an email to the Lubuntu-devel mailing list.
I'm really excited for this transition, and I hope you are too.
August 11, 2016
It’s Episode Twenty-four of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, and Martin Wimpress are here again.
We’re here – all of us!
In this week’s show:
- We interview elementary OS developers Daniel Foré, Cody Garver and Corentin Noël about the upcoming Loki release and a little bit about Snaps.
- We also discuss playing the Ukulele and playing with a new Entroware Athena laptop.
- We share a Command Line Lurve, himawaripy (via Joey at OMG Ubuntu), which takes photos of the world from a satellite.
- And we go over all your amazing feedback – thanks for sending it – please keep sending it!
- We discuss building MATE Desktop from source using reference packages for Debian and Slackware.
- This week's cover image is taken from Public Domain Pictures.
That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to [email protected] or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
- Join us on IRC in #ubuntu-podcast on Freenode
I recently discovered Martin O’Leary’s Feeling Old twitter bot [1], which has a big list of “things that have happened” and then constructs comparisons such as “Y2K was as close to the release of Return of the Jedi as to now”, to make you do that weird “wow, I am old!” double-take. As Martin says, it occasionally throws out a gem. But, looking through the list, I think that they’re probably all gems, but whether they hit home for you depends on how old you are.
Basically, the form of the sentence is: thing is closer to old thing than to today. Ideally, you want thing to be something that you think is recent, and old thing to be something that you think is ancient, and therefore you’ll be surprised that thing really isn’t actually recent and that’s because you’re a decrepit old codger.
My theory is this: old thing ought to be before you were born. By definition, anything that happens before you were born feels like a long time ago to you. And the gap between thing and old thing is the same as the gap between thing and now (because that’s what constructs the sentences). So thing has to happen in the first half of your life. Stuff that happens while you’re a young child also feels like a long time ago — you were a kid when it happened! — so we want something that happened once you started to feel like you in your head. Say, around 12 or so years of age. Thus, we take a big list of pop culture things, find an event which happened between the ages of 12 and half your current age, find a corresponding old event, display them to you, and have you be surprised and displeased. It’s a living.
Give it a try.
[1] Because I was reading his excellent work on how to create accurate-looking fantasy maps. ↩
I’ll start this off by mentioning that I’m on Plasma 5.7.2, so you might not see these features (yet!).
Since I started working with a global team I’ve hit the unforgiving thing called time zones, so my first feature will be the ‘Digital Clock’ widget. I’m sorry to report that those of you who love the ‘Fuzzy Clock’ widget are missing this feature. With this widget I can add time zones by simply right-clicking the widget, whether it’s in one of your panels already or on your desktop.
If the widget is in the panel then you can just right click it like so. (Look above)
But if you have the widget on your desktop you have to press and hold it with left click like so. (Look above)
It will turn a light blue and a pop up will well… pop up with some buttons. The bottom one is the one we want. (Look above)
Either from the panel widget or the desktop widget we will get this window. (Look above) From here we can search for time zones based on cities, e.g. London, the capital of England, or my state, which falls into the New York time zone.
CORRECTION 2016.04.23 - It was previously stated that 16.04 is a point release to 14.04. This was due to a silly copy&paste issue from our previous release statement for 14.04. The Mythbuntu 16.04 release is a flavor of Ubuntu 16.04. We're sorry for any confusion this has caused.
Mythbuntu 16.04 has been released. This is our third LTS release and will be supported until shortly after the 18.04 release.
The Mythbuntu team would like to thank our ISO testers for helping find critical bugs before release. You guys rock!
With this release, we are providing torrents only. It is very important to note that this release is only compatible with MythTV 0.28 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here.
You can get the Mythbuntu ISO from our downloads page.
Highlights
Underlying system
MythTV
We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 16.04 or xenial), or join us in #ubuntu-mythtv on Freenode. As previously, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/16.04/).
Upgrade Notes
If you have enabled the MySQL tweaks in the Mythbuntu Control Center, these will need to be disabled prior to upgrading. Once upgraded, they can be re-enabled.
Known issues
August 10, 2016
Ever since its creation back in the dark ages, APT shipped with its own build system consisting of autoconf and a bunch of makefiles. In 2009, I felt like replacing that with something more standard and, because nobody really liked autotools, decided to go with CMake. Well, that bazaar branch was never really merged back in 2009.
Fast forward 7 years to 2016. A few months ago, we noticed that our build system had trouble with correct dependencies in parallel building. So, in search of a way out, I picked up my CMake branch from 2009 last Thursday and spent the whole weekend working on it, and today I am happy to announce that I merged it into master:
123 files changed, 1674 insertions(+), 3205 deletions(-)
More than 1500 fewer lines of build system code. Quite impressive, eh? This also includes about 200 fewer lines of code in debian/, as that switched from prehistoric debhelper stuff to modern dh (compat level 9, almost ready for 10).
The annoying Tale of Targets vs Files
Talking about CMake: I don’t really love it. As you might know, CMake differentiates between targets and files. Targets can in some cases depend on files (generated by a command in the same directory), but overall files are not really targets. You also cannot have a target with the same name as a file you are generating in a custom command; you have to rename your target (make is OK with the generated stuff, but ninja complains about cycles because your custom target and your custom command have the same name).
Byproducts for the (time) win
One interesting thing about CMake and Ninja is byproducts. In our tree, we are building C++ files. We also have .pot templates depending on them, and .mo files depending on the templates (we have multiple domains, and merge the per-domain .pot with the all-domain .po file during the build to get a per-domain .mo). Now, if we just let them depend naively, changing a C++ file causes the .pot file to be regenerated, which in turn causes us to build .mo files for every freaking language in the package. Even if nothing changed.
Byproducts solve this problem. Instead of just building the .pot file, we also create a stamp file (AKA the witness) and write the .pot file (without a header) into a temporary name and only copy it to its final name if the content changed. The .pot file is declared as a byproduct of the command.
The command doing the .pot->.mo step still depends on the .pot file (the byproduct), but as that only changes now if strings change, the .mo files only get rebuilt if I change a translatable string. We still need to ensure that the .pot file is actually built before we try to use it – the solution here is to specify a custom target depending on the witness and then have the target containing the .mo build commands depend on that target.
Now if you use make, you might know this trick already. In make, the byproducts remain undeclared, though, while in CMake we can now actually express them, and they are used by the Ninja generator and the Ninja build tool if you choose that over make (try it out, it’s fast).
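If you want to try the Ninja path yourself, the usual out-of-source CMake invocation looks roughly like this (a sketch of standard CMake/Ninja usage, not APT-specific; any extra configure options the APT tree may need are omitted):
# configure an out-of-source build with the Ninja generator, then build
mkdir -p build && cd build
cmake -G Ninja ..
ninja    # only rebuilds what changed, including the .pot/.mo chain described above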
Further Work
Some command names are hardcoded; I should find_program() them. Also, cross-building the package does not yet work successfully, but it only requires a small number of patches in debhelper and/or cmake.
I also tried building the package on a Fedora docker image (with dpkg installed, it’s available in the Fedora sources). While I could eventually get the programs build and most of the integration test suite to pass, there are some minor issues to fix, mostly in the documentation building and GTest department: Fedora ships its docbook stylesheets in a different location, and ships GTest as a pre-compiled library, and not a source tree.
I have not yet tested building on exotic platforms like macOS, or even a BSD. Please do and report back. In Debian, CMake is not up-to-date enough on the non-Linux platforms to build APT due to test suite failures; I hope those can be fixed/disabled soon (it appears to be a timing issue AFAICT).
I hope that we eventually get some non-Debian backends for APT. I’d love that.
Filed under: Debian, Uncategorized
Today is a day I've been waiting a long time for. We now have enough knowledge to create our first real interface from scratch. To really understand this content you need to be familiar with parts [1], [2], [3] and [4].
We will go all the way, from branching snapd all the way to running a program that uses our new interface. We will focus on the ancillary tasks this time, the actual interface will be rather basic. Still, this knowledge will be invaluable next time where we will try to do something more complicated.
Adding the new "hello" interface
Let's get started. It all begins with snapd. If you didn't already, fork snapd and clone your fork locally. You may find this small guide that I wrote earlier useful. It goes through all those steps in detail. At the end of the exercise you should be able to build your fork of snapd (make sure it is really your fork, not the upstream version!)
Let's look around. Each time a new interface is added, the following files are modified:
- The file interfaces/builtin/foo{,_test}.go contains the actual interface
- The file interfaces/builtin/all{,_test}.go contains a tiny change that registers the new interface
// -*- Mode: Go; indent-tabs-mode: t -*-
/*
* Copyright (C) 2016 Canonical Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 3 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package builtin
import (
"fmt"
"github.com/snapcore/snapd/interfaces"
)
// HelloInterface is the hello interface for a tutorial.
type HelloInterface struct{}
// String returns the same value as Name().
func (iface *HelloInterface) Name() string {
return "hello"
}
// SanitizeSlot checks and possibly modifies a slot.
func (iface *HelloInterface) SanitizeSlot(slot *interfaces.Slot) error {
if iface.Name() != slot.Interface {
panic(fmt.Sprintf("slot is not of interface %q", iface))
}
// NOTE: currently we don't check anything on the slot side.
return nil
}
// SanitizePlug checks and possibly modifies a plug.
func (iface *HelloInterface) SanitizePlug(plug *interfaces.Plug) error {
if iface.Name() != plug.Interface {
panic(fmt.Sprintf("plug is not of interface %q", iface))
}
// NOTE: currently we don't check anything on the plug side.
return nil
}
// ConnectedSlotSnippet returns security snippet specific to a given connection between the hello slot and some plug.
func (iface *HelloInterface) ConnectedSlotSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
return nil, nil
case interfaces.SecurityDBus:
return nil, nil
case interfaces.SecurityUDev:
return nil, nil
case interfaces.SecurityMount:
return nil, nil
default:
return nil, interfaces.ErrUnknownSecurity
}
}
// PermanentSlotSnippet returns security snippet permanently granted to hello slots.
func (iface *HelloInterface) PermanentSlotSnippet(slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
return nil, nil
case interfaces.SecurityDBus:
return nil, nil
case interfaces.SecurityUDev:
return nil, nil
case interfaces.SecurityMount:
return nil, nil
default:
return nil, interfaces.ErrUnknownSecurity
}
}
// ConnectedPlugSnippet returns security snippet specific to a given connection between the hello plug and some slot.
func (iface *HelloInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
return nil, nil
case interfaces.SecurityDBus:
return nil, nil
case interfaces.SecurityUDev:
return nil, nil
case interfaces.SecurityMount:
return nil, nil
default:
return nil, interfaces.ErrUnknownSecurity
}
}
// PermanentPlugSnippet returns the configuration snippet required to use a hello interface.
func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
return nil, nil
case interfaces.SecurityDBus:
return nil, nil
case interfaces.SecurityUDev:
return nil, nil
case interfaces.SecurityMount:
return nil, nil
default:
return nil, interfaces.ErrUnknownSecurity
}
}
// AutoConnect returns true if plugs and slots should be implicitly
// auto-connected when an unambiguous connection candidate is available.
//
// This interface does not auto-connect.
func (iface *HelloInterface) AutoConnect() bool {
return false
}
TIP: Any time you are making code changes use go fmt to re-format all of the code in the current working directory to the go formatting standards. Static analysis checkers in the snappy tree enforce this so your code won't be able to land without first being formatted correctly.
diff --git a/interfaces/builtin/all_test.go b/interfaces/builtin/all_test.go
index 46ca587..86c8fad 100644
--- a/interfaces/builtin/all_test.go
+++ b/interfaces/builtin/all_test.go
@@ -62,4 +62,5 @@ func (s *AllSuite) TestInterfaces(c *C) {
c.Check(all, DeepContains, builtin.NewCupsControlInterface())
c.Check(all, DeepContains, builtin.NewOpticalDriveInterface())
c.Check(all, DeepContains, builtin.NewCameraInterface())
+ c.Check(all, Contains, &builtin.HelloInterface{})
}
diff --git a/interfaces/builtin/hello.go b/interfaces/builtin/hello.go
index d791fc5..616985e 100644
--- a/interfaces/builtin/hello.go
+++ b/interfaces/builtin/hello.go
@@ -130,3 +130,7 @@ func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securit
func (iface *HelloInterface) AutoConnect() bool {
return false
}
+
+func init() {
+ allInterfaces = append(allInterfaces, &HelloInterface{})
+}
diff --git a/snap/implicit.go b/snap/implicit.go
index 3df6810..098b312 100644
--- a/snap/implicit.go
+++ b/snap/implicit.go
@@ -60,6 +60,7 @@ var implicitClassicSlots = []string{
"pulseaudio",
"unity7",
"x11",
+ "hello",
}
// AddImplicitSlots adds implicitly defined slots to a given snap.
diff --git a/snap/implicit_test.go b/snap/implicit_test.go
index e9c4b07..364a6ef 100644
--- a/snap/implicit_test.go
+++ b/snap/implicit_test.go
@@ -56,7 +56,7 @@ func (s *InfoSnapYamlTestSuite) TestAddImplicitSlotsOnClassic(c *C) {
c.Assert(info.Slots["unity7"].Interface, Equals, "unity7")
c.Assert(info.Slots["unity7"].Name, Equals, "unity7")
c.Assert(info.Slots["unity7"].Snap, Equals, info)
- c.Assert(info.Slots, HasLen, 29)
+ c.Assert(info.Slots, HasLen, 30)
}
func (s *InfoSnapYamlTestSuite) TestImplicitSlotsAreRealInterfaces(c *C) {
- The first one adds the dummy hello interface
- The second one registers it with allInterfaces
- The third one adds it to the implicit slots on the core snap, on classic

Seeing our interface for the first time
./refresh-bits snapd setup run-snapd restore
- Build snapd from source (based on correctly set $GOPATH)
- Prepare for running locally built snapd
- Run locally built snapd
- Restore regular version of snapd

sudo snap interfaces | grep hello

TIP: If it didn't work for you and you didn't get the hello interface then the most likely cause of the issue is that you were editing your own fork but refresh-bits still built the vanilla upstream version that is checked out somewhere else.
Go to $GOPATH/src/github.com/snapcore/snapd and ensure that this is indeed the fork you were expecting. If not just remove this directory and move your fork (that you may have cloned elsewhere) here and try again.
Great. Now we are in business. Let's recap what we did so far:
- We added a whole new interface by dropping boilerplate code into interfaces/builtin/hello.go
- We registered the interface in the list of allInterfaces
- We made snapd inject an implicit (internally defined) slot on the core snap when running on classic
- We used refresh-bits to run our locally built version and confirmed it really works
Granting permissions through interfaces
The graceful-reboot snap
graceful-reboot.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/reboot.h>
#include <linux/reboot.h>
#include <errno.h>
int main() {
    sync();
    if (reboot(LINUX_REBOOT_CMD_RESTART) != 0) {
        switch (errno) {
        case EPERM:
            printf("Insufficient permissions to reboot the system\n");
            break;
        default:
            perror("reboot()");
            break;
        }
        return EXIT_FAILURE;
    }
    printf("Reboot requested\n");
    return EXIT_SUCCESS;
}
Makefile
TIP: Makefiles rely on differences between tabs and spaces. When copy pasting this sample you need to ensure that tabs are preserved in the clean and install rules
CFLAGS += -Wall
.PHONY: all
all: graceful-reboot
.PHONY: clean
clean:
	rm -f graceful-reboot
graceful-reboot: graceful-reboot.c
.PHONY: install
install: graceful-reboot
	install -d $(DESTDIR)/usr/bin
	install -m 0755 graceful-reboot $(DESTDIR)/usr/bin/graceful-reboot
snapcraft.yaml
name: graceful-reboot
version: 1
summary: Reboots the system gracefully
description: |
  This snap contains a graceful-reboot application that requests the system
  to reboot by talking to the init daemon. The application uses a custom
  "hello" interface that is developed as a part of a tutorial.
confinement: strict
apps:
  graceful-reboot:
    command: graceful-reboot
    plugs: [hello]
parts:
  main:
    plugin: make
    source: .
$ snapcraft
$ sudo snap install ./graceful-reboot_1_amd64.snap
$ graceful-reboot
Bad system call
sie 10 09:47:02 x200t audit[13864]: SECCOMP auid=1000 uid=1000 gid=1000 ses=2 pid=13864 comm="graceful-reboot" exe="/snap/graceful-reboot/x1/usr/bin/graceful-reboot" sig=31 arch=c000003e syscall=169 compat=0 ip=0x7f7ef30dcfd6 code=0x0
$ scmp_sys_resolver 169
reboot
Adjusting the hello^Hreboot interface
diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
- return nil, nil
+ return []byte(`reboot`), nil
case interfaces.SecurityDBus:

Let's connect them now.
sudo snap connect graceful-reboot:reboot ubuntu-core:reboot
$ sudo snap interfaces | grep reboot
:reboot graceful-reboot
/var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
$ grep reboot /var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
reboot
- Security profiles are derived from interfaces but are only changed when a new connection is made (and that connection affects a particular snap), when the snap is initially installed or every time it is updated.
- In practice we will either disconnect / reconnect the hello interface or reinstall the snap (whichever is more convenient)
- Snapd remembers connections that were made explicitly and will re-establish them across snap updates. If you rename an interface while working on it, snapd may print a message (to system log, not to the console) about being unable to reconnect the "hello" interface because that interface no longer exists in snapd. To make snapd forget all those connections simply remove and reinstall the affected snap.
- You can experiment by editing seccomp profiles directly. Just edit the file mentioned above and add additional system calls. Once you are happy with the result you can adjust snapd source code to match.
- You can also do that with AppArmor profiles but you have to reload the profile into the kernel each time using the command apparmor_parser -r /path/to/the/profile
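For example, assuming the interface and plug have already been renamed to reboot as in the diff above, a refresh cycle for the security profiles might look roughly like this:

# Cycle the connection so snapd regenerates the security profiles:
$ sudo snap disconnect graceful-reboot:reboot ubuntu-core:reboot
$ sudo snap connect graceful-reboot:reboot ubuntu-core:reboot

# ...or simply reinstall the snap:
$ sudo snap remove graceful-reboot
$ sudo snap install ./graceful-reboot_1_amd64.snap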
$ graceful-reboot
Insufficient permissions to reboot the system
sie 10 10:25:55 x200t kernel: audit: type=1400 audit(1470817555.936:47): apparmor="DENIED" operation="capable" profile="snap.graceful-reboot.graceful-reboot" pid=14867 comm="graceful-reboot" capability=22 capname="sys_boot"
diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
- return nil, nil
+ return []byte(`capability sys_boot,`), nil
case interfaces.SecuritySecComp:
return []byte(`reboot`), nil
case interfaces.SecurityDBus:
TIP: I'm using sudo with a full path because of a bug in sudo where /snap/bin is not kept on the path.
$ sudo /snap/bin/graceful-reboot
Final touches
// NewRebootInterface returns a new "reboot" interface.
func NewRebootInterface() interfaces.Interface {
    return &commonInterface{
        name:                  "reboot",
        connectedPlugAppArmor: `capability sys_boot,`,
        connectedPlugSecComp:  `reboot`,
        reservedForOS:         true,
    }
}
You can find the source code of this interface in my github repository. The code of the graceful reboot snap is here. Feel free to comment below, or on Google+ or ask me questions directly.
This is the 3rd, and final, post in my Hacker Summer Camp 2016 series. Part 1 covered my class at Black Hat, and Part 2 the 2016 BSidesLV Pros versus Joes CTF. Now it’s time to talk about the capstone of the week: DEF CON.
DEF CON is the world’s largest (but not oldest) Hacker conference. This year was the biggest yet, with Dark Tangent stating that they produced 22,000 lanyards – and ran out of lanyards. That’s a lot of attendees. It covered both the Paris and Bally’s conference areas, and that still didn’t feel like enough.
DEF CON is also what I measure my year by. You can have your New Year’s, I measure mine from August to August (though apparently next year it’ll be the end of July…). Probably the single biggest regret in my life is that I didn’t find a way to go to DEF CON before DEF CON 20. The people and experiences there are memorable and well worth it.
Crowds
I don’t talk about it a whole lot, but I actually have pretty bad social anxiety. I do a terrible job of talking with people I don’t know, introducing myself to people, etc. At most events, I’m what you would call a “wallflower”. That doesn’t combine very well with 22,000 people. That especially doesn’t combine well with the chokepoints in a number of places, especially the packet capture village. It was hard to get through the week, but I told myself I wasn’t going to let my social anxiety ruin my con, and I think I did a good job of that.
Capture the Packet
So, since DEF CON 21, I’ve always played in Capture the Packet. At DC22, I even managed a 2nd place overall finish (just one spot away from the coveted black Uber badge). This year, I went back to play and discovered major changes:
- Rounds are now two hours instead of 1.
- There is now a qualifying round, semifinals, and finals.
- They’re really pulling out the obscure protocols.
- Significantly lower submit attempt limits (like 1-2 in most questions).
On the other hand, they had serious gameplay issues that really make me regret spending so much of my con this year on Capture the Packet. These include:
- Every round started late. The finals were supposed to start at 10:00 on Sunday, they started at 12:30. That was two and a half hours of sitting there waiting.
- There were many questions where the answer in the database contained typos.
- There were many questions that contained typos which made it difficult or impossible to find the traffic (wrong IP, wrong MAC, etc.)
- There were many questions that were very poorly phrased. It was nearly impossible to parse some of the questions. One question asked for the “last three hex” of a value, but it wasn’t clear: last 3 bytes in hex format, last 3 hex characters, etc.
Combining the problems with questions and the lowered submission limits, it meant that several times we were locked out of questions just because it wasn’t clear what format the answer should be or how much data they wanted. The organizers clearly need to:
- Increase the limits. (I’m not asking for unlimited tries, but on text answers, give us at least 3-5.)
- Build some sort of fuzzy matching (case insensitive, automatically strip whitespace leading/trailing, etc.)
- Write questions more clearly.
I’m actually amazed that Aries Security is able to sell CTP as a commercial offering for training to government and companies. It’s a wonderful concept and they try hard, but I’m so disappointed in the outcome. I spent ~12 hours sitting in the CTP area, but only 6 of those were actually playing. The other 6 were waiting for games to start, and then the games were disheartening when they didn’t work correctly. I’ll probably play again next year, but I really hope they’ve put some polish on the game by then.
Parties
As usual, DEF CON had a variety of parties to choose from. Most importantly, I got my hit of Dual Core in at the Friday night EDM night, and spent a little bit of time at the Queercon pool party. (Though it was too hot and humid to spend much time by the pool unless you were in the pool, and I’m not someone anyone wants in the pool…)

Just keeping track of all of the parties has become a major task, but the DCP guys have you covered there. I’d love to see some more parties that are a little more “chill”: less loud music, more just hanging out and having a drink with friends. (Or maybe I was just at all the wrong ones this year.)
Next Year
I can’t wait for next year – DEF CON 25 promises to be big, and we’re moving over to Caesars (2 years is all we got out of Bally’s/Paris). I’m trying to come up with ideas of how I can make my own personal DEF CON 25 bigger and better, without ripping off ideas like the AND!XOR badge, but I want to do something cool. Suggestions to @Matir or find my email if you know me. :) Hopefully I’ll see all of you, my hacker friends, out in Las Vegas for another fun Hacker Summer Camp.
Continuing my Hacker Summer Camp Series, I’m going to talk about one of my Hacker Summer Camp traditions. That’s right, it’s the Pros versus Joes CTF at BSidesLV. I’ve written about my experiences and even a player’s guide before, but this was my first year as a Pro, captaining a blue team (The SYNdicate).
It’s important to me to start by congratulating all of the Joes – this is an intense two days, and your pushing through it is a feat in and of itself. In past years, we had players burn out early, but I’m proud to say that nearly all of the Joes (from every team) worked hard until the final scorched earth. Every one of the players on my team was outstanding and worked their ass off for this CTF, and it paid off, as The SYNdicate was declared the victors of the 2016 BSides LV Pros versus Joes.

What worked well
Our team put in incredible amounts of effort into preparation. We built hardening scripts, discussed strategy, and planned our “first hour”. Keep in mind that PvJ simulates you being brought in to harden a network under active attack, so the first hour is absolutely critical. If you are well and thoroughly pwned in that time, getting the red cell out is going to be hard. There’s a lot of ways to persist, and finding them all is time consuming (especially since neither I nor my lieutenant does much IR).
We really jelled as a team and worked very, very well together on the 2nd day. We hardened faster than I thought was possible and got our network very locked down. In that day, we only lost 1000 points via beacons (10 minutes on one Windows XP host). Our network was reportedly very secure, but I don’t know how thoroughly the other teams were checked versus the “low hanging fruit” approach.
What didn’t work well
The first day, we did not coordinate well. We had machines that hadn’t been touched for hardening even after 4 hours. I failed when setting up the firewall and blocked ICMP for a while, causing all of our services to score as down. I’ve said it before and I’ll say it again: coordination and organization are the most important aspects of working as a team in this environment.
The Controversy
There was an issue with scoring during the competition where tickets were being counted incorrectly. For example, my team had ticket points deducted even when we had 0 open tickets: the normal behavior being that only when you had a ticket open would you lose points. This resulted in massive ticket deductions showing up on the scoreboard, which Dichotomy was only able to correct after gameplay had ended. This was a very controversial issue because it resulted in the team that was leading on the scoreboard dropping to last place and pushed my team to the top. The final scoring (announced on Twitter) was in accordance with the written rules as opposed to the scoreboard, but it still was confusing for every team involved.
Conclusion
Overall, this was a good game, and I’m very proud of my lieutenant, my joes, and all of the other teams for playing so well. I’m also very appreciative of the hard work from Dichotomy, Gold Cell, and Grey Cell in doing all of the things necessary to make this game possible. This game is the closest thing to a live fire security exercise I’ve ever seen at a conference, and I think we all have something to learn from that environment.
August 09, 2016
Using a C++ library, particularly a 3rd party one, can be a complicated affair. Library binaries compiled on Windows/OSX/Linux cannot simply be copied over to another platform and used there. Linking works differently, compilers bundle different code into binaries on each platform, etc.
This is not an insurmountable problem. Libraries like Qt distribute dynamically compiled binaries for major platforms and other libraries have comparable solutions.
There is a category of libraries which considers the portable binaries issue to be a terminal one. Boost is a widespread source of many ‘header only’ libraries, which don’t require a user to link to a particular platform-compatible library binary. There are also many other examples of such ‘header only’ libraries.
Recently there was a blog post describing an example library which can be built as a shared library, or as a static library, or used directly as a ‘header only’ library which doesn’t require the user to link against anything to use the library. The claim is that it is useful for libraries to provide users the option of using a library as a ‘header only’ library and adding preprocessor magic to make that possible.
However, there is yet a fourth option, and that is for the consumer to compile the source files of the library themselves. This has the
advantage that the .cpp file is not #included into every compilation unit, but still avoids the platform-specific library binary.
I decided to write a CMake buildsystem which would achieve all of that for a library. I don’t have an opinion on whether it is a good idea in general for libraries to do things like this, but if people want to do it, it should be as easy as possible.
Additionally, of course, the CMake GenerateExportHeader module should be used, but I didn’t want to change the source from Vittorio so much.
The CMake code below compiles the library in several ways and installs it to a prefix which is suitable for packaging:
cmake_minimum_required(VERSION 3.3)
project(example_lib)

# define the library
set(library_srcs
  example_lib/library/module0/module0.cpp
  example_lib/library/module1/module1.cpp
)
add_library(library_static STATIC ${library_srcs})
add_library(library_shared SHARED ${library_srcs})

add_library(library_iface INTERFACE)
target_compile_definitions(library_iface
  INTERFACE LIBRARY_HEADER_ONLY
)

set(installed_srcs
  include/example_lib/library/module0/module0.cpp
  include/example_lib/library/module1/module1.cpp
)
add_library(library_srcs INTERFACE)
target_sources(library_srcs INTERFACE
  $<INSTALL_INTERFACE:${installed_srcs}>
)

# install and export the library
install(DIRECTORY
  example_lib/library
  DESTINATION include/example_lib
)
install(FILES
  example_lib/library.hpp
  example_lib/api.hpp
  DESTINATION include/example_lib
)
install(TARGETS
    library_static
    library_shared
    library_iface
    library_srcs
  EXPORT library_targets
  RUNTIME DESTINATION bin
  ARCHIVE DESTINATION lib
  LIBRARY DESTINATION lib
  INCLUDES DESTINATION include
)
install(EXPORT library_targets
  NAMESPACE example_lib::
  DESTINATION lib/cmake/example_lib
)
install(FILES example_lib-config.cmake
  DESTINATION lib/cmake/example_lib
)
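As a rough sketch of how this might be driven from the command line (assuming the CMakeLists.txt above sits next to the example_lib sources), building and staging the install tree for packaging could look like this:

$ mkdir build && cd build
$ cmake .. -DCMAKE_INSTALL_PREFIX=/usr
$ make
$ make DESTDIR=$PWD/stage install
$ find stage/usr -maxdepth 3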
This blog post is not a CMake introduction, so to see what all of those commands are about start with the cmake-buildsystem and cmake-packages documentation.
There are 4 add_library calls. The first two serve the purpose of building the library as a shared library and then as a static library.
The next two are INTERFACE libraries, a concept I introduced in CMake 3.0 when it looked like Boost might use CMake. The INTERFACE target can be used to specify header-only libraries because they specify usage requirements for consumers to use, such as include directories and compile definitions.
The library_iface library functions as described in the blog post from Vittorio, in that users of that library will be built with LIBRARY_HEADER_ONLY and will therefore #include the .cpp files.
The library_srcs library causes the consumer to compile the .cpp files separately.
A consumer of a library like this would then look like:
cmake_minimum_required(VERSION 3.3)
project(example_user)

find_package(example_lib REQUIRED)

add_executable(myexe
  src/src0.cpp
  src/src1.cpp
  src/main.cpp
)

## uncomment only one of these!
# target_link_libraries(myexe
#   example_lib::library_static)
# target_link_libraries(myexe
#   example_lib::library_shared)
# target_link_libraries(myexe
#   example_lib::library_iface)
target_link_libraries(myexe
  example_lib::library_srcs)
So, it is up to the consumer how they consume the library, and they determine that by using target_link_libraries to specify which one they depend on.
In previous posts, we saw how to configure LXD/LXC containers on a VPS on DigitalOcean and Scaleway. There are many more VPS companies.
cloudscale.ch is one more company that provides Virtual Private Servers (VPS). They are based in Switzerland.
In this post we are going to see how to create a VPS on cloudscale.ch and configure to use LXD/LXC containers.
We now use the term LXD/LXC containers (instead of LXC containers in previous articles) in order to show that LXD is a management service for LXC containers; LXD works on top of LXC, somewhat similar to GNU/Linux, where GNU software runs on top of the Linux kernel.
Set up the VPS
We are creating a VPS called myubuntuserver, using the Flex-2 Compute Flavor. This is the most affordable, at 2GB RAM with 1 vCPU core. It costs 1 CHF per day, which is about 0.92€ (or US$1).
The default capacity is 10GB, which is included in the 1 CHF per day. If you want more capacity, there is extra charging.
We are installing Ubuntu 16.04 and accept the rest of the default settings. Currently, there is only one server location at Rümlang, near Zurich (the largest city in Switzerland).
Here is the summary of the freshly launched VPS server. The IP address is shown as well.
Connect and update the VPS
In order to connect, we need to SSH to that IP address using the fixed username ubuntu. There is an option for either password authentication or public-key authentication. Let’s connect.
myusername@mycomputer:~$ ssh [email protected]
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myubuntuserver:~$
Let’s update the package list,
ubuntu@myubuntuserver:~$ sudo apt update
Hit:1 http://ch.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://ch.archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
...
Get:31 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [1176 B]
Fetched 10.5 MB in 2s (4707 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myubuntuserver:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
ubuntu@myubuntuserver:~$
In this case, we updated 67 packages, among which was lxd. It was important to perform the upgrade of packages.
Configure LXD/LXC
Let’s see how much free disk space is there,
ubuntu@myubuntuserver:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.7G  1.2G  8.6G  12% /
ubuntu@myubuntuserver:~$
There is 8.6GB of free space, let’s allocate 5GB of that for the ZFS pool. First, we need to install the package zfsutils-linux. Then, initialize lxd.
ubuntu@myubuntuserver:~$ sudo apt install zfsutils-linux
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@myubuntuserver:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: myzfspool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...accept the network autoconfiguration settings that you will be asked...
LXD has been successfully configured.
ubuntu@myubuntuserver:~$
That’s it! We are good to go and configure our first LXD/LXC container.
Testing a container as a Web server
Let’s test LXD/LXC by creating a container, installing nginx and accessing from remote.
ubuntu@myubuntuserver:~$ lxc launch ubuntu:x web
Creating web
Retrieving image: 100%
Starting web
ubuntu@myubuntuserver:~$
We launched a container called web.
Let’s connect to the container, update the package list and upgrade any available packages.
ubuntu@myubuntuserver:~$ lxc exec web -- /bin/bash
root@web:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web:~#
Still inside the container, we install nginx.
root@web:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web:~#
Let’s make a small change in the default index.html,
root@web:/var/www/html# diff -u /var/www/html/index.nginx-debian.html.ORIGINAL /var/www/html/index.nginx-debian.html
--- /var/www/html/index.nginx-debian.html.ORIGINAL 2016-08-09 17:08:16.450844570 +0000
+++ /var/www/html/index.nginx-debian.html 2016-08-09 17:08:45.543247231 +0000
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html>
 <head>
-<title>Welcome to nginx!</title>
+<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>
 <style>
     body {
         width: 35em;
@@ -11,7 +11,7 @@
 </style>
 </head>
 <body>
-<h1>Welcome to nginx!</h1>
+<h1>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</h1>
 <p>If you see this page, the nginx web server is successfully installed and
 working. Further configuration is required.</p>
root@web:/var/www/html#
Finally, let’s add a quick and dirty iptables rule to make the container accessible from the Internet.
root@web:/var/www/html# exit
ubuntu@myubuntuserver:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.5.242.156 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
ubuntu@myubuntuserver:~$ ifconfig ens3
ens3      Link encap:Ethernet  HWaddr fa:16:3e:ad:dc:2c
          inet addr:5.102.145.245  Bcast:5.102.145.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fead:dc2c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:102934 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35613 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:291995591 (291.9 MB)  TX bytes:3265570 (3.2 MB)
ubuntu@myubuntuserver:~$
Therefore, the iptables command that will allow access to the container is,
ubuntu@myubuntuserver:~$ sudo iptables -t nat -I PREROUTING -i ens3 -p TCP -d 5.102.145.245/32 --dport 80 -j DNAT --to-destination 10.5.242.156:80
ubuntu@myubuntuserver:~$
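With the rule in place, a quick sanity check from another machine should return the modified page title (a hypothetical check, using the addresses from above):

myusername@mycomputer:~$ curl -s http://5.102.145.245/ | grep '<title>'
<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>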
Here is the result when we visit the new Web server from our computer,
Benchmarks
We are benchmarking the CPU, the memory and the disk. Note that our VPS has a single vCPU.
CPU
We are benchmarking the CPU using sysbench with the following parameters.
ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=cpu run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 10000

Test execution summary:
    total time:                          10.9448s
    total number of events:              10000
    total time taken by event execution: 10.9429
    per-request statistics:
         min:                                  0.96ms
         avg:                                  1.09ms
         max:                                  2.79ms
         approx.  95 percentile:               1.27ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   10.9429/0.00

ubuntu@myubuntuserver:~$
The total time for the CPU benchmark with one thread was 10.94s. With two threads, it was 10.23s. With four threads, it was 10.07s.
Memory
We are benchmarking the memory using sysbench with the following parameters.
ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=memory run
sysbench 0.4.12: multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1768217.45 ops/sec)

102400.00 MB transferred (1726.77 MB/sec)

Test execution summary:
    total time:                          59.3013s
    total number of events:              104857600
    total time taken by event execution: 47.2179
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.80ms
         approx.  95 percentile:               0.00ms

Threads fairness:
    events (avg/stddev):           104857600.0000/0.00
    execution time (avg/stddev):   47.2179/0.00

ubuntu@myubuntuserver:~$
The total time for the memory benchmark with one thread was 59.30s. With two threads, it was 62.17s. With four threads, it was 62.57s.
Disk
We are benchmarking the disk using dd with the following parameters.
ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 21,1995 s, 50,6 MB/s
ubuntu@myubuntuserver:~$
It took about 21 seconds to write 1GB (1024 blocks of 1MB each) with the DSYNC flag. The throughput was 50.6MB/s. Subsequent invocations were around 50MB/s as well.
ZFS pool free space
Here is the free space in the ZFS pool after one container, that one with nginx and other packages updated,
ubuntu@myubuntuserver:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
myzfspool  4,97G   811M  4,18G         -    11%    15%  1.00x  ONLINE  -
ubuntu@myubuntuserver:~$
Again, after a second container was just created, (new and empty)
ubuntu@myubuntuserver:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
myzfspool  4,97G   822M  4,17G         -    11%    16%  1.00x  ONLINE  -
ubuntu@myubuntuserver:~$
Thanks to copy-on-write with ZFS, the new containers do not take up much space. Only the files that are added or updated contribute additional space.
Conclusion
We saw how to launch an Ubuntu 16.04 VPS on cloudscale.ch, then configure LXD.
We created a container with nginx, and configured iptables so that the Web server is accessible from the Internet.
Finally, we saw some benchmarks for the vCPU, the memory and the disk.
I hope you'll enjoy a shiny new 6-part blog series I recently published at Linux.com.
- The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
- The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
- The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
- The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
- The fifth article demonstrates how to write, compile, and execute your first program in 7 different compiled programming languages (including C, C++, Fortran, and Golang).
- The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
- https://github.com/dustinkirkland/howdy-windows
- http://bazaar.launchpad.net/~kirkland/howdy-windows/trunk/files
Dustin
Welcome to the Ubuntu Weekly Newsletter. This is issue #477 for the week August 1 – 7, 2016, and the full version is available here.
In this issue we cover:
- Ubuntu 14.04.5 LTS released
- Ubuntu Stats
- Ubuntu 16.04 Release Party San Francisco Concluded!
- LoCo Events
- Luke Faraone: Snappy Sprint Heidelberg
- Laura Czajkowski: NoSQL Podcast: Service deployments with JuJu Charms
- Aaron Honeycutt: Working with github
- Ubuntu App Developer Blog: Snapd 2.11/Snapcraft 2.13: downgrade installed snaps, release to users from the command line
- Ubuntu GNOME: The Ubuntu GNOME 16.10 Wallpaper contest has started, guys!
- Justin McPherson: Introducing React Native Ubuntu
- Ubuntu Phone News
- Canonical News
- In The Press
- In The Blogosphere
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 12.04, 14.04 and 16.04
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth K. Joseph
- Simon Quigley
- Chris Guiver
- Athul Muralidhar
- Chris Sirrs
- Aaron Honeycutt
- And many others
If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
August 08, 2016
This is the fourth article in the series about snappy interfaces. You can check out articles one, two and three though they are not directly required.
In this installment we will explore the layout and properties of the file system at the time a snap application is executed. From the point of view of the user nothing special is happening. The user either runs an application by clicking on a desktop icon or by running a shell command. Internally, snapd uses a series of steps (which I will not explain today as they are largely an implementation detail) to configure the application process.
The (ch)root filesystem and a bit of magic
Let’s start with the most important fact: the root filesystem is not the filesystem of the host distribution. Using the host filesystem would lead to lots of inconsistencies. Those are rather obvious: different base libraries, potentially different filesystem layout, etc. At snap application runtime the root filesystem is changed to the core snap itself. You can think of this as a kind of chroot. Obviously the chroot itself would be insufficient as snaps are read only filesystems and the core snap is no different.
Certain directories in the core snap are bind mounted (you can think of this as a special type of a symbolic link or a hard link to a directory though neither are fully accurate) to locations on the host file system. This includes the /home directory, the /run directory and a few others (see Appendix A for the full list). Notably this does not include the /usr/lib or /usr/bin. If a snap needs a library or an executable to function, that library or executable has to be present in the snap itself. The only exception to that are very low level libraries like libc that are present in the core snap.
TIP: explore the core snap to see what is there. Having installed at least one snap you can go to /snap/ubuntu-core/current to see the list of files provided there.
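For instance, listing the current core snap shows something close to a minimal Ubuntu root filesystem (abbreviated here; the exact contents depend on the core snap revision):

$ ls /snap/ubuntu-core/current
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var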
With all those mounts and chroots in place one might wonder what the mounts look like for all the other processes in the system. The answer is simple: they look as if nothing special related to snappy was happening.
To understand the answer you have to know a little about Linux namespaces. Namespaces are a way to create a separate view of a given aspect of a Linux system at a per-process level. Unlike full-blown virtual machines (where you run a whole emulated computer with a potentially different operating system kernel and emulated peripheral devices) namespaces are fine grained. Snappy uses just one of the available namespaces, the mount namespace. Now I won’t fool you, while the idea seems simple “mounts in the namespace are isolated from the mounts outside of the namespace” the reality is far more complex because of the so-called shared-subtrees. One interesting consequence is that mounts performed after a snap application is started (e.g. in the /media directory) are visible to the said application (e.g. to VLC) while the reverse is not true. If a malicious snap tries (and manages despite various defenses put in place) to mount something in say, /usr/ that change will be visible only to the snap application process.
Don’t worry if you don’t fully understand this topic. The main point is that your application sees a different view of the filesystem but that view is consistent across distributions.
TIP: you may have seen the core snap as it looks on disk if you followed the earlier tip. Now see the real file system at runtime! Install the snapd-hacker-toolbelt snap and run snapd-hacker-toolbelt.busybox sh. This will give you a shell with many of the familiar commands that let you peek and poke at the environment.

Now there are a few more tweaks I should point out but won’t go into too much detail:
- Each process gets a private /tmp directory with a fresh tmpfs mounted there. This is a security measure. One simple consequence is that you cannot expect to share files by dropping them there and that you cannot create arbitrarily large files there since tmpfs is backed by a fraction of available system memory. (See the short demo after this list.)
- There’s also a private instance of the /dev/pts directory with pseudo terminal emulators. This is another security measure. In practice you will not care about this much. It’s just a part of the Linux plumbing that has to be set up in a given way.
- The whole host filesystem is mounted at /var/lib/snapd/hostfs. This can be used by interfaces similar to the content sharing interface, for example. This is super interesting and we will devote a whole article to using this later on.
- There’s special code that exists to support Nvidia proprietary drivers. I will discuss this with a separate installment that may be of interest to game developers.
- The current working directory may be impossible to preserve across the whole chroot and mount and bind mount magic. The easiest way to experience this is to create a directory in /tmp (e.g. /tmp/foo) and try to run any snap command there. Because of the private (and empty) /tmp directory the /tmp/foo directory does not exist for the snap application process. Snap-confine will print an informative error message and refuse to run.
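As a small demonstration of the private /tmp, assuming the snapd-hacker-toolbelt snap mentioned in the earlier tip is installed:

$ touch /tmp/created-on-the-host
$ snapd-hacker-toolbelt.busybox sh -c 'ls /tmp'
# The file created on the host is not listed; the snap process sees its own private tmpfs.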
This is now much more comfortable. Many of the usual places exist and contain the data the applications are familiar with. This doesn’t mean those directories are readable or writable to the application process, they are just present. Confinement and interfaces decide if something is readable or writable. This brings us to the second big part of snap-confine.
Process confinement
Snap-confine (as of version 1.0.39) supports two sandboxing technologies: seccomp and AppArmor.

Seccomp is used to constrain system calls that an application can use. System calls are the interface between the kernel and user space. If you are unfamiliar with the concept then don’t worry. In very rough terms some of the functions of your programming language are implemented as system calls and seccomp is the Linux subsystem that is responsible for mediating access to them.
Right now when your application runs in devmode you will not get any advice on the system calls you are relying on that are not allowed by the set of used interfaces. The so-called complain mode of seccomp is being actively developed upstream so the situation may change by the time you are reading this.
In strict confinement any attempt to use a disallowed system call will instantly kill the offending process. This is accompanied by a rather cryptic message that you can see in the system log:
sie 08 12:36:53 gateway kernel: audit: type=1326 audit(1470652613.076:27): auid=1000 uid=1000 gid=1000 ses=63 pid=66834 comm="links" exe="/snap/links/2/usr/bin/links" sig=31 arch=c000003e syscall=54 compat=0 ip=0x7f8dcb8ffc8a code=0x0

What this tells us is that process ID 66834 was killed with signal 31 (SIGSYS) because it tried to use system call 54 (setsockopt). Note that system call numbers are architecture specific. The output above was from an amd64 machine.
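As with the reboot example earlier in this series, scmp_sys_resolver can translate the syscall number from the audit message (here on amd64):

$ scmp_sys_resolver 54
setsockopt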
Even when a system call is allowed the particular operation may be intercepted and denied by apparmor. For example, the sandbox is setup so that applications can freely write to $SNAP_USER_DATA (or $SNAP_DATA for services) but cannot, by default, either read or write from the real home directory.
sie 08 12:56:40 gateway kernel: audit: type=1400 audit(1470653800.724:28): apparmor="DENIED" operation="open" profile="snap.snapd-hacker-toolbelt.busybox" name="/home/zyga/.ssh/authorized_keys" pid=67013 comm="busybox" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Here we see that process ID 67013 tried to “open” /home/zyga/.ssh/authorized_keys and that the “r” (read) mask was denied. In devmode that is obviously allowed but is accompanied with an appropriate warning message instead.
TIP: Whenever you run into problems like this you should give the snappy-debug snap a try. It is designed to read and understand messages like that and give you useful advice.
AppArmor has a much wider feature set and can perform checks for Linux capabilities, traditional UNIX IPC like signals and sockets, and DBus messages (including details of the object and method invoked). The vast majority of the current snap confinement is made with AppArmor profiles. We will look at all the features in greater detail in the next few articles in this series where we will actively implement simple new interfaces from scratch.
There's one last thing that snap-confine does, in some cases it creates...
A device control group
Putting it all together
With all of those changes in place snap-confine executes (using execv) a wrapper script corresponding to the application command entry in the snapcraft.yaml file. This happens each time you run an application.

If you are interested in learning more about snap-confine I encourage you to check out its manual page (snap-confine.5) and source code. If you have any questions please feel free to ask at the snapcraft.io mailing list or using comments on this blog below.
Next time we will look at creating our first, extremely simple, interface in practice.
Appendix A: List of host directories that are bind-mounted.
NOTE: This list will slowly get less and less accurate as more of the mount points become dynamic and controlled by available interfaces.
- /dev
- /etc (except for /etc/alternatives)
- /home
- /root
- /proc
- /snap
- /sys
- /var/snap
- /var/lib/snapd
- /var/tmp
- /var/log
- /run
- /media
- /lib/modules
- /usr/src
It’s Episode Twenty-three ½ of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen, Martin Wimpress and Joe Ressington are live and speaking to your brain.
We’re here again, but this time live from FOSS Talk!
In this week’s show:
- We give out platefuls of biscuits.
- We discuss the community news:
- We discuss what gadget or technology we can’t live without and why.
- And then we discuss what gadget or technology we thought we couldn’t live without but found that we could.
- We also “enforce some fun” on the FOSS Talk audience and ask which gadgets or technology they can’t live without.
- We each pick our best command line lurves E-V-E-R!
- This week’s cover image was taken by Clemency Cooper.
That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to [email protected] or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
- Join us on IRC in #ubuntu-podcast on Freenode
PKCS#11 is a standard API to interface with HSMs, Smart Cards, or other types
of random hardware backed crypto. On my travel laptop, I use a few Yubikeys in
PKCS#11 mode using OpenSC to handle system login. libpam-pkcs11 is a pretty
easy to use module that will let you log into your system locally using a
PKCS#11 token.
One of the least documented things, though, was how to use an OpenSC PKCS#11 token in Chrome. First, close all web browsers you have open.
sudo apt-get install libnss3-tools
certutil -U -d sql:$HOME/.pki/nssdb
modutil -add "OpenSC" -libfile /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -dbdir sql:$HOME/.pki/nssdb
modutil -list "OpenSC" -dbdir sql:$HOME/.pki/nssdb
modutil -enable "OpenSC" -dbdir sql:$HOME/.pki/nssdb
Now, we'll have the PKCS#11 module ready for nss to use, so let's double
check that the tokens are registered:
certutil -U -d sql:$HOME/.pki/nssdb
certutil -L -h "OpenSC" -d sql:$HOME/.pki/nssdb
If this winds up causing issues, you can remove it using the following command:
modutil -delete "OpenSC" -dbdir sql:$HOME/.pki/nssdb
August 07, 2016
I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.
I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.
To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mt ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:
# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log
account default : <MSMTP_ACCOUNT_NAME>
Any of the uppercase items (i.e. <PASSWORD>) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.
Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.
sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
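Before wiring this into PHP, it can be worth testing the configuration by hand. A rough sketch (the account name and recipient are placeholders, and sudo is used because of the restrictive permissions on /etc/msmtprc):

echo -e "Subject: msmtp test\n\nHello from msmtp." | sudo msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> <RECIPIENT_ADDRESS>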
Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.
/var/log/msmtp/*.log {
    rotate 12
    monthly
    compress
    missingok
    notifempty
}
Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I did run into an issue where even though I specified the account name it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following
mail ('[email protected]', 'Test Subject', 'Test body text');
exit();
Any errors that occur at this point will be displayed in the output, which should make diagnosing any problems after the test relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.
August 06, 2016
August 05, 2016
In the Webapps team at Canonical, we are always looking to make sure that web and near-web technologies are available to developers. We want to make everyone's life easier, enable the use of tools that are familiar to web developers and provide an easy path to using them on the Ubuntu platform.
We have support for web applications and creating and packaging Cordova applications, both of these enable any web framework to be used in creating great application experiences on the Ubuntu platform.
One popular web framework that can be used in these environments is React.js; React.js is a UI framework with a declarative programming model and strong component system, which focuses primarily on the composition of the UI, so you can use what you like elsewhere.
While these environments are great, sometimes you need just that bit more performance, or to be able to work with native UI components directly, but working in a less familiar environment might not be a good use of time. If you are familiar with React.js, it's easy to move into full native development with all your existing knowledge and tools by developing with React Native. React Native is the sister to React.js: you can use the same style and code to create an application that works directly with native components with native levels of performance, but with the ease and rapid development you would expect.

We are happy to announce that along with our HTML5 application support, it is now possible to develop React Native applications on the Ubuntu platform. You can port existing iOS or Android React Native applications, or you can start a new application leveraging your web-dev skills.
You can find the source code for React Native Ubuntu here.
To get started, follow the instructions in README-ubuntu.md and create your first application.
The Ubuntu support includes the ability to generate packages. Managed by the React Native CLI, building a snap is as easy as 'react-native package-ubuntu --snap'. It's also possible to build a click package for Ubuntu devices, meaning React Native Ubuntu apps are store ready from the start.
Over the next little while there will be blogs posts on everything you need to know about developing a React Native Application for the Ubuntu platform; creating the app, the development process, packaging and releasing to the store. There will also be some information on how to develop new reusable modules, that can add extra functionality to the runtime and be distributed as Node Package Manager (npm) modules.
Go and experiment, and see what you can create.
August 04, 2016

We have created a Flickr group for your submissions: https://www.flickr.com/groups/ubuntu-gnome-16-10
From all submissions, 10 wallpapers will be selected to be included in the 16.10 (Yakkety Yak) release of Ubuntu GNOME.
Rules of the Contest:
It is important to note Ubuntu – and hence Ubuntu GNOME – is shipped to users from every part of the globe. Your images should be considerate of this diversity and refrain from the following:
- No brand names or trademarks of any kind.
- No branding assets like “Ubuntu GNOME” or text in order to permit use by derivative distributions.
- No version numbers as some may prefer to continue to use your theme with an older version of Ubuntu.
- No illustrations some may consider inappropriate, offensive, hateful, tortuous, defamatory, slanderous or libelous.
- No sexually explicit or provocative images.
- No images of weapons or violence.
- No alcohol, tobacco, or drug use imagery.
- No designs which promotes bigotry, racism, hatred or harm against groups or individuals; or promotes discrimination based on race, gender, religion, nationality, disability, sexual orientation or age.
- No religious, political, or nationalist imagery.
Constraints:
- Each user can submit up to 2 wallpapers. The final dimension should be at least 2560×1440 px (and 16:9 proportion if possible). Any smaller size will not be considered.
- Use PNG format for bitmap files, use JPG format for photos.
- Submissions must adhere to the Creative Commons ShareAlike 4.0 (see www.flickr.com/creativecommons/). If not specified, it’ll be assumed that the work is released under CC-BY-SA 4.0.
- Attribution must be declared if the submission is based on another design.
In the last three months, one of the tasks I had at archon.ai was to implement a pipeline to autodeploy our services. We use an instance of Gitlab to host our code, so after some proof of concepts we chose to use Gitlab CI to test and deploy our code.
Gitlab CI is amazing (as Gitlab is): the Gitlab team is doing great work and they implement new features every month. So today I chose to move this blog to Gitlab CI as well.
This blog is based on Jekyll. The source code was already hosted on Gitlab but, until yesterday, it didn’t use Gitlab CI: every time I pushed something, a webhook called a script on my server, the server downloaded the source code, compiled it and then published it.
The downside of that approach is that the same server which runs the website (and other services as well) wasted CPU, storage and time doing the compilation.
I have others servers as well (but if you do not, don’t worry, Gitlab offers free runners for Gitlab CI if you host your project on Gitlab.com), so I installed a Gitlab runner as explained here and set it to use Docker.
.gitlab-ci.yml
The first thing to do after enabling the runner was to create a .gitlab-ci.yml file to explain to the runner how to do its job. The fact that Gitlab uses a file to configure runners is a winning choice: developers can have it versioned in the source and each branch can have its own rules.
My configuration file is this:
image: ruby:2.3

stages:
  - deploy

cache:
  paths:
    - vendor
  key: "$CI_BUILD_REPO"

before_script:
  - gem install bundler

deploy_site:
  stage: deploy
  only:
    - master
  script:
    - bundle install --path=vendor/
    - bundle exec jekyll build
  artifacts:
    paths:
      - _site/
Quite simple, isn’t it?
In the end I need only to deploy the website, I do not have tests, so I have only one stage, the deploy one.
There are however few interesting things to highlight:
- I install gems in vendor/ instead of the default directory, so I can cache them and reuse them in other builds, to save time and bandwidth
- The cache is shared between all the branches in the repo (key: "$CI_BUILD_REPO"). By default it is shared only between multiple builds of the same branch
- The deploy step is executed only when I push to master branch
- The site is built in the _site/ directory, so I need to specify it in the artifacts section
If you want to see how to tune these settings, or learn about others (there are a lot of them, it is a very versatile system which can do anything), take a look to the official guide.
The Gemfile for bundler is very basic:
source 'https://rubygems.org'
gem "github-pages"
gem "pygments.rb"
It is important to add the vendor directory to the exclude section in _config.yml, otherwise Jekyll will publish it as well.
Deploy
If you push these files on your Gitlab’s repo, and if you have done a good job setting up the runner, you will have an artifact in your repo to download.

The next step is to deploy it to the server. There are tons of different possible solutions to do that. I created a sh script which is invoked by a hook.
Since I already have PHP-fpm installed on the server due to my Nextcloud installation, I use it to invoke the sh script through a PHP script.
When you create a webhook in your Gitlab project (Settings->Webhooks) you can specify for which kind of events you want the hook (in our case, a new build), and a secret token so you can verify the script has been called by Gitlab.

Unfortunately, the documentation about webhooks is very poor, and there isn’t any mention of the builds payload.
Anyway, after a couple of tries, I created this script:
<?php

// Check token
$security_file = parse_ini_file("../token.ini");
$gitlab_token = $_SERVER["HTTP_X_GITLAB_TOKEN"];
if ($gitlab_token !== $security_file["token"]) {
    echo "error 403";
    exit(0);
}

// Get data
$json = file_get_contents('php://input');
$data = json_decode($json, true);

// We want only successful builds of the deploy stage on master
if ($data["ref"] !== "master" ||
    $data["build_stage"] !== "deploy" ||
    $data["build_status"] !== "success") {
    exit(0);
}

// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");
Since the repo of this blog is public, I cannot insert the token in the script itself (and I cannot insert it in the script on the server, because it is overwritten at every deploy).
So I created a token.ini file outside the webroot, which is just one line:
token = supersecrettoken
In this way the endpoint can be called only by Gitlab itself. The script then checks some parameters of the build, and if everything is ok it runs the deploy script.
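To check the endpoint by hand you can simulate Gitlab's call with curl; the URL and payload below are made-up examples that simply match the checks in the PHP script:

curl -X POST https://example.com/hooks/build.php \
  -H "X-Gitlab-Token: supersecrettoken" \
  -H "Content-Type: application/json" \
  -d '{"ref": "master", "build_stage": "deploy", "build_status": "success"}'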
Also the deploy script is very, very basic, but there are a couple of interesting things:
#!/bin/bash
# See 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN
# The path where to put the static files
DEST="/usr/share/nginx/html/"
# The path to use as temporary working directory
TMP="/tmp/"
# Where to save the downloaded file
DOWNLOAD_FILE="site.zip";
cd $TMP;
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O $DOWNLOAD_FILE;
ls;
unzip $DOWNLOAD_FILE;
# Whatever, do not do this in a real environment without any other check
rm -rf $DEST;
cp -r _site/ $DEST;
rm -rf _site/;
rm $DOWNLOAD_FILE;
First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver’s user (usually www-data).
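Something along these lines, assuming the script lives in the webroot path referenced by the PHP hook above:

sudo chown www-data:www-data /usr/share/nginx/html/deploy.sh
sudo chmod +x /usr/share/nginx/html/deploy.sh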
The script needs to have an access token (which you can create here) to access the data. Again, I cannot put it in the script itself, so I inserted it as an environment variable:
sudo vi /etc/environment
in the file you have to add something like:
PERSONAL_TOKEN="supersecrettoken"
and then remember to reload the file:
source /etc/environment
You can check everything is alright by running sudo -u www-data bash -c 'echo $PERSONAL_TOKEN'
and verifying the token is printed in the terminal.
Now, the other interesting part of the script is where the artifact is. The last available build of a branch is reachable only through the API; they are working on exposing this in the web interface so you can always download the latest version from the web.
The url of the API is
https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname
While you can imagine what branchname and jobname are, the projectid is a bit more tricky to find.
It is included in the body of the webhook as projectid, but if you do not want to intercept the hook, you can go to the settings of your project, section Triggers, and there are examples of API calls: you can determine the project id from there.
Kudos to the Gitlab team (and the other folks who help in their free time) for their awesome work!
If you have any question or feedback about this blog post, please drop me an email at [email protected] :-)
Bye for now,
R.
Snapd 2.11/Snapcraft 2.13: downgrade installed snaps, release to users from the command line
Ubuntu App Developer Blog
The latest version of snapd, the service powering snaps, has just landed in Ubuntu 16.04, here are some of the highlights of this release.
New commands: buy, find --private, disable, revert
A lot of new commands are available, allowing you, for example, to downgrade, disable and buy snaps:
- When logged into a store, snap find --private lets you see snaps that have been shared with you privately.
- The new buy command presents you with a choice of payment backends for non-free snaps.

- snap disable allows you to disable specific snaps. A disabled snap won't be updated or launched anymore. It can be enabled again with the snap enable command.
- snap revert allows you to revert a snap to its previously installed version.

- The refresh command now works with snaps installed in devmode.
Snap try and broken states handling
When using the snap try command to mount a folder containing a snap tree as an installed snap, you can end up with a broken snap if you happen to delete the folder without removing the snap first.
This "broken" state is now acknowledged as a potential snap state and handled gracefully by the system. The broken tag now appears next to the snap in the snap list output and you can remove it with snap remove.
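For example, a rough sketch of this flow (the directory and snap name are made up):

$ sudo snap try ./my-snap-tree      # mount the folder as an installed snap
$ rm -rf ./my-snap-tree             # remove the folder without removing the snap first
$ snap list                         # the snap now shows up with the broken tag
$ sudo snap remove my-snap          # clean up the broken snap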
Interfaces changes
- getsockopt has been allowed for connected x11 plugs.
- /usr/bin/locale access is now part of the default confinement policy.
- A new hardware-observe interface that gives snaps read access to hardware information from the system. See the implementation for details.
Snapcraft 2.13
Snapcraft has also seen a new release (2.13) that brings:
- Enhanced Ubuntu Store integration with the introduction of snapcraft push (which deprecates upload) and snapcraft release. These are very important pieces of the Continuous Integration aspect of snapcraft, and you will have more to read on this front very soon!
- A new plainbox plugin which allows parts containing a Plainbox test collection.
- Many improvements on sanitizing cloud parts declarations.
Java plugins
There has also been a strong focus on improving Java plugins with, for example:
- Improvements to the ant and maven plugins (support for targets).
- Introduction of a gradle plugin.
To learn how to use these plugins, the easiest way is to run snapcraft help ant, snapcraft help maven and snapcraft help gradle.
Usage examples can be found in the Playpen repository and guidance in the snapcraft documentation.
Heyo all! I’ve been quiet since SELF happened in June, but since late July I’ve been working quite a bit on GitHub on UBports, the Magic Device Tool and the Kubuntu Manual. I’ve also been making a few changes with the Kubuntu Podcast team to get our show on the Pocket Casts Android podcast app and have taken over editing the show.
December was when I uploaded the Kubuntu Manual to GitHub for the first time.
UBports is a project led mostly by one guy to port Ubuntu Touch to as many devices as possible! I’ve been fixing grammar and typo issues since he is not a native English speaker.
The Magic Device Tool has been getting merges from me for similar fixes, as well as some testing on the Nexus 4 and Nexus 7. It’s a series of scripts that can flash all the supported Ubuntu Touch devices from BQ and Meizu, the Nexus 4 and 7, as well as some UBports devices with the newest release.
The Kubuntu Manual has been a bit of a child of mine for over a year or so, since I basically took over documentation for the Kubuntu project. It’s currently being worked on for the 16.10 release, but we are also adding to the 16.04 one, which is on docs.kubuntu.org.
Once a month fellow Kubuntu members Rick Timmis, Ovidiu-Florin Bogdan and I do a podcast highlighting the work that has been done in the Kubuntu world, aka #kubuntu-devel on IRC / Kubuntu Devel on Telegram. We also go over what we have been doing in and out of the project, our picks of Linux-based apps, a few interviews from people in KDE, Ubuntu and other open source related places, as well as my Game On section where we prove you don’t have to be on Windows to get your “Game On”! We host the show on BBB (BigBlueButton), which is on a server that was donated/sponsored for our use by the company.