<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:planet="http://planet.intertwingly.net/" xmlns:indexing="urn:atom-extension:indexing" indexing:index="no"><access:restriction xmlns:access="http://www.bloglines.com/about/specs/fac-1.0" relationship="deny"/>
  <title>Planet Gentoo</title>
  <updated>2019-07-30T12:02:27Z</updated>
  <generator uri="http://intertwingly.net/code/venus/">Venus</generator>
  <author>
    <name>Welcome to &lt;b&gt;Planet Gentoo&lt;/b&gt;, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in &lt;a href="https://planet.gentoo.org/universe/"&gt;Gentoo Universe&lt;/a&gt;.</name>
    <email>planet@gentoo.org</email>
  </author>
  <id>https://planet.gentoo.org/atom.xml</id>
  <link href="https://planet.gentoo.org/atom.xml" rel="self" type="application/atom+xml"/>
  <link href="https://planet.gentoo.org/" rel="alternate"/>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=946</id>
    <link href="https://blogs.gentoo.org/mgorny/2019/07/09/verifying-gentoo-election-results-via-votrify/" rel="alternate" type="text/html"/>
    <title>Verifying Gentoo election results via Votrify</title>
<summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Gentoo elections are conducted using custom software called votify. During the voting period, the developers place their votes in their respective home directories on one of the Gentoo servers. Afterwards, the election officials collect the votes, count them, compare their results and finally announce them. The simplified description stated above suggests two weak points. Firstly, we rely on the honesty of the election officials. If they chose to conspire, … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2019/07/09/verifying-gentoo-election-results-via-votrify/">Continue reading<span class="screen-reader-text"> "Verifying Gentoo election results via Votrify"</span></a></p></div>
    </summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Gentoo elections are conducted using custom software called votify.  During the voting period, the developers place their votes in their respective home directories on one of the Gentoo servers.  Afterwards, the election officials collect the votes, count them, compare their results and finally announce them.</p>
<p>The simplified description stated above suggests two weak points.  Firstly, we rely on the honesty of the election officials.  If they chose to conspire, they could fake the result.  Secondly, we rely on the honesty of all Infrastructure members, as they could use root access to manipulate the votes (or the collection process).</p>
<p>To protect against possible fraud, we make the elections transparent (but pseudonymous).  This means that all votes cast are public, so everyone can count them and verify the result.  Furthermore, developers can verify whether their personal vote has been included.  Ideally, all developers would do that and therefore confirm that no votes were manipulated.</p>
<p>Currently, we are pretty much implicitly relying on developers doing that, and assuming that no protest implies successful verification.  However, this is not really reliable, and given the unfriendly nature of our scripts I have reasons to doubt that the majority of developers actually verify the election results.  In this post, I would like to briefly explain how Gentoo elections work, how they could be manipulated and introduce <a href="https://github.com/mgorny/votrify" rel="external">Votrify</a> — a tool to explicitly verify election results.</p>
<p><span id="more-946"/></p>
<h2>Gentoo voting process in detail</h2>
<p>Once the nomination period is over, an election official sets the voting process up by creating control files for the voting scripts.  Those control files include election name, voting period, ballot (containing all vote choices) and list of eligible voters.</p>
<p>There are no explicit events corresponding to the beginning or the end of the voting period.  The votify script used by developers reads election data on each execution, and uses it to determine whether the voting period is open.  During the voting period, it permits the developer to edit the vote, and finally to ‘submit’ it.  Both the draft and the submitted vote are stored as files in the developer’s home directory; ‘submitted’ votes are not collected automatically.  This means that the developer can still manually manipulate the vote once the voting period concludes and before the votes are manually collected.</p>
<p>Votes are collected explicitly by an election official.  When run, the countify script collects all vote files from developers’ home directories.  A unique ‘confirmation ID’ is generated for each voting developer.  All votes along with their confirmation IDs are placed in a so-called ‘master ballot’, while the mapping from developer names to confirmation IDs is stored separately.  The latter is used to send developers their respective confirmation IDs, and can be discarded afterwards.</p>
<p>Each of the election officials uses the master ballot to count the votes.  Afterwards, they compare their results and if they match, they announce the election results.  The master ballot is attached to the announcement mail, so that everyone can verify the results.</p>
<h2>Possible manipulations</h2>
<p>The three methods of manipulating the vote that I can think of are:</p>
<ol>
<li><em>Announcing fake results.</em>  An election result may be presented that does not match the votes cast.  This is actively prevented by having multiple election officials, and by making the votes transparent so that everyone can count them.</li>
<li><em>Manipulating votes cast by developers.</em>  The result could be manipulated by modifying the votes cast by individual developers.  This is prevented by including pseudonymous vote attribution in the master ballot.  Every developer can therefore check whether his/her vote has been reproduced correctly.  However, this presumes that the developer is active.</li>
<li><em>Adding fake votes to the master ballot.</em>  The result could be manipulated by adding votes that were not cast by any of the existing developers.  This is a major problem, and such manipulation is entirely plausible if the turnout is low enough and the developers who did not vote fail to check that they have not been added to the casting voter list.</li>
</ol>
<p>Furthermore, the efficiency of the last method can be improved if the attacker is able to restrict communication between voters and/or reliably deliver different versions of the master ballot to different voters, i.e. convince the voters that their own vote was included correctly while manipulating the remaining votes to achieve the desired result.  The former is rather unlikely but the latter is generally feasible.</p>
<p>Finally, the results could be manipulated via manipulating the voting software.  This can be counteracted through verifying the implementation against the algorithm specification or, to some degree, via comparing the results with a third-party tool.  Robin H. Johnson and I have historically worked on this (or more specifically, on verifying whether the Gentoo implementation of the Schulze method is correct) but neither of us was able to finish the work.  If you’re interested in the topic, you can look at my <a href="https://github.com/mgorny/election-compare" rel="external">election-compare</a> repository.  For the purpose of this post, I’m going to consider this possibility out of scope.</p>
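<p>For illustration, the counting stage of the Schulze method fits in a couple of dozen lines of Python.  This is a simplified, margin-based sketch written from the public description of the algorithm — it is not the actual votify/countify code:</p>
<pre>
# Schulze method, margin variant: each ballot lists candidates in
# order of preference; the winner set contains every candidate whose
# strongest path to each rival is at least as strong as the reverse.
def schulze(ballots, cands):
    # d[(a, b)]: number of voters ranking a above b
    d = {(a, b): 0 for a in cands for b in cands if a != b}
    for order in ballots:
        for i, a in enumerate(order):
            for b in order[i + 1:]:
                d[(a, b)] += 1
    # initial path strength: margin of the direct pairwise contest
    p = {(a, b): max(d[(a, b)] - d[(b, a)], 0)
         for a in cands for b in cands if a != b}
    # widest-path computation, Floyd-Warshall style
    for i in cands:
        for j in cands:
            for k in cands:
                if len({i, j, k}) == 3:
                    p[(j, k)] = max(p[(j, k)], min(p[(j, i)], p[(i, k)]))
    return [a for a in cands
            if all(p[(a, b)] == max(p[(a, b)], p[(b, a)])
                   for b in cands if b != a)]
</pre>
<p>Comparing the output of an independent implementation like this against the official count is exactly the kind of cross-check election-compare was meant to automate.</p>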
<h2>Verifying election results using Votrify</h2>
<p>Votrify uses a two-stage verification model.  It consists of <em>individual verification</em>, which is performed by each voter separately and produces <em>signed confirmations</em>, and <em>community verification</em>, which uses the aforementioned files to provide the final verified election result.</p>
<p>The individual verification part involves:</p>
<ol>
<li><em>Verifying that the developer’s vote has been recorded correctly.</em>  This plays a part in detecting whether any votes have been manipulated.  A positive result of this verification is implied by the fact that a confirmation is produced.  Additionally, developers who did not cast a vote also need to produce confirmations, in order to detect any extraneous votes.</li>
<li><em>Counting the votes and producing the election result.</em>  This produces the election results as seen from the developer’s perspective, and therefore prevents manipulation via announcing fake results.  Furthermore, comparing the results between different developers helps find implementation bugs.</li>
<li><em>Hashing the master ballot.</em>  The hash of the master ballot file is included, and comparing it between different results confirms that all voters received the same master ballot.</li>
</ol>
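<p>The last step amounts to computing a digest of the master ballot file everyone received.  A minimal sketch (the file name is hypothetical, and this is not Votrify’s actual code):</p>
<pre>
import hashlib

def ballot_digest(path):
    # SHA-256 digest of the master ballot file; every voter comparing
    # confirmations should arrive at the same value
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
</pre>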
<p>If the verification is positive, a confirmation is produced and signed using the developer’s OpenPGP key.  I would like to note that no private data is leaked in the process.  It does not even indicate whether the dev in question has actually voted — only that he/she participates in the verification process.</p>
<p>Afterwards, confirmations from different voters are collected.  They are used to perform community verification which involves:</p>
<ol>
<li><em>Verifying the OpenPGP signature.</em>  This is necessary to confirm the authenticity of the signed confirmation.  The check also involves verifying that the key owner was an eligible voter and that each voter produced only one confirmation.  Therefore, it prevents attempts to fake the verification results.</li>
<li><em>Comparing the results and master ballot hashes.</em>  This confirms that everyone participating received the same master ballot, and produced the same results.</li>
</ol>
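<p>The comparison stage can be expressed as a small function over the collected confirmations.  A sketch assuming each confirmation boils down to a (master ballot hash, result) pair — the real Votrify data format may differ:</p>
<pre>
def community_verify(confirmations, eligible_count):
    # confirmations: one (ballot_hash, result) pair per voter, taken
    # from confirmations whose OpenPGP signatures already checked out
    hashes = set(h for h, r in confirmations)
    results = set(r for h, r in confirmations)
    if len(hashes) != 1 or len(results) != 1:
        raise ValueError("voters saw different ballots or results")
    # fraction of eligible voters whose vote (or non-vote) is confirmed
    return len(confirmations) / eligible_count
</pre>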
<p>If the verification for all confirmations is positive, the election results are repeated, along with an explicit quantification of how trustworthy they are.  The number indicates how many confirmations were used, and therefore how many of the votes (or non-votes) in the master ballot were confirmed.  The difference between the number of eligible voters and the number of confirmations indicates how many votes may have been altered, planted or deleted.  Ideally, if all eligible voters produced signed confirmations, the election would be 100% confirmed.</p></div>
    </content>
    <updated>2019-07-09T14:15:43Z</updated>
    <category term="Gentoo"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/autoclicker-for-linux/</id>
    <link href="https://blog.lordvan.com/blog/autoclicker-for-linux/" rel="alternate" type="text/html"/>
    <title>Autoclicker for Linux</title>
<summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>So I wanted an autoclicker for Linux, for one of my browser-based games that require a lot of clicking.</p>
<p>I looked around and tried to find something useful, but all I could find were old pages and outdated download links.</p>
<p>In the end I stumbled upon something simple yet immensely more powerful: <a href="https://github.com/jordansissel/xdotool">xdotool (GitHub)</a>, or check out the <a href="https://www.semicomplete.com/projects/xdotool/">xdotool website</a>.</p>
<p>As an extra bonus it is in the Gentoo repository, so a simple</p>
<pre>emerge xdotool</pre>
<p>got it installed. It also has minimal dependencies, which is nice.</p>
<p>The good part, but also a bit of a downside, is that there is no UI (maybe I'll write one when I get a chance, just as a wrapper).</p>
<p>Anyway, to do what I wanted was simply this:</p>
<pre>xdotool click --repeat 1000 --delay 100 1</pre>
<p>Pretty self-explanatory, but here's a short explanation anyway:</p>
<ul>
<li>click .. simulate a mouse click</li>
<li>--repeat 1000 ... repeat 1000 times</li>
<li>--delay 100 ... wait 100ms between clicks</li>
<li>1  .. mouse button 1</li>
</ul>
<p>The only problem is that I need to know how many clicks I need beforehand - which can also be a nice feature of course.</p>
<p>There is one way to stop it if you have the terminal you ran this command from visible (which I always have, and set it to always on top): hold down your left mouse button. This stops the click events from being registered, since it is mouse-down and xdotool waits for mouse-up, I guess - but I am not sure if that is the reason. Then move to the terminal and either close it, abort the command with <code>ctrl+c</code>, or just wait for the program to exit after finishing the requested number of clicks. On a side note, if you don't like that way of stopping it, you can always switch to a console with <code>ctrl+alt+f1</code> (or whatever terminal you want to use), log in there and kill the xdotool process (either find the PID and kill it, or just <code>killall xdotool</code> - which will of course kill all of them, but I doubt you'll run more than one at once).</p>
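<p>As a starting point for such a wrapper, the xdotool call can be scripted, for example in Python. A quick sketch - the function name and defaults are made up, and the <code>runner</code> parameter exists only so the command construction can be tested without X running:</p>
<pre>
import subprocess

def autoclick(count=1000, delay_ms=100, button=1, runner=subprocess.run):
    # build and run the same xdotool command as above
    cmd = ["xdotool", "click", "--repeat", str(count),
           "--delay", str(delay_ms), str(button)]
    runner(cmd, check=True)
    return cmd

# e.g. autoclick(500, 50) for 500 faster clicks of button 1
</pre>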
<p/></div>
    </summary>
    <updated>2019-07-09T14:02:11Z</updated>
    <category term="Games"/>
    <category term="Gentoo"/>
    <category term="Linux"/>
    <category term="Tools"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=893</id>
    <link href="https://blogs.gentoo.org/mgorny/2019/07/04/sks-poisoning-keys-openpgp-org-hagrid-and-other-non-solutions/" rel="alternate" type="text/html"/>
    <title>SKS poisoning, keys.openpgp.org / Hagrid and other non-solutions</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">The recent key poisoning attack on SKS keyservers shook the world of OpenPGP. While this isn’t a new problem, it has not been exploited on this scale before. The attackers have proved how easy it is to poison commonly used keys on the keyservers and effectively render GnuPG unusably slow. A renewed discussion on improving keyservers has started as a result. It also forced Gentoo to employ countermeasures. You can … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2019/07/04/sks-poisoning-keys-openpgp-org-hagrid-and-other-non-solutions/">Continue reading<span class="screen-reader-text"> "SKS poisoning, keys.openpgp.org / Hagrid and other non-solutions"</span></a></p></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>The recent <a href="https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d6955275f" rel="external">key poisoning attack on SKS keyservers</a> shook the world of OpenPGP.  While this isn’t a new problem, it has not been exploited on this scale before.  The attackers have proved how easy it is to poison commonly used keys on the keyservers and effectively render GnuPG unusably slow.  A renewed discussion on improving keyservers has started as a result.  It also forced Gentoo to employ countermeasures.  You can read more on them in the <a href="https://www.gentoo.org/news/2019/07/03/sks-key-poisoning.html" rel="external">‘Impact of SKS keyserver poisoning on Gentoo’ news item</a>.</p>
<p>Coincidentally, the attack happened shortly after the launch of <a href="https://keys.openpgp.org/about" rel="external">keys.openpgp.org</a>, which advertises itself as both a poisoning-resistant and a GDPR-friendly keyserver.  Naturally, many users see it as the ultimate solution to the issues with SKS.  I’m afraid I have to disagree — in my opinion, this keyserver does not solve any problems; it merely cripples OpenPGP in order to avoid being affected by them, and harms its security in the process.</p>
<p>In this article, I’d like to briefly explain what the problem is, which of the different solutions proposed so far (e.g. on the <a href="https://lists.gnupg.org/pipermail/gnupg-users/" rel="external">gnupg-users mailing list</a>) make sense, and which make things even worse.  Naturally, I will also cover the new Hagrid keyserver as one of the glorified non-solutions.</p>
<p><span id="more-893"/></p>
<h2>The attack — key poisoning</h2>
<p>OpenPGP uses a distributed design — once the primary key is created, additional packets can be freely appended to it and recombined on different systems.  Those packets include subkeys, user identifiers and signatures.  Signatures are used to confirm the authenticity of appended packets.  The packets are only meaningful if the client can verify the authenticity of their respective signatures.</p>
<p>The attack is carried out through third-party signatures that are normally used by different people to confirm the authenticity of the key — that is, to state that the signer has verified the identity of the key owner.  It relies on three distinct properties of OpenPGP:</p>
<ol>
<li>The key can contain an unlimited number of signatures.  After all, it is natural that very old keys will have a large number of signatures made by different people on them.</li>
<li>Anyone can append signatures to any OpenPGP key.  This is partially keyserver policy, and partially the fact that SKS keyserver nodes propagate keys one to another.</li>
<li>There is no way to distinguish legitimate signatures from garbage.  To put it another way, it is trivial to make garbage signatures look like the real deal.</li>
</ol>
<p>The attacker abuses those properties by creating a large number of garbage signatures and sending them to keyservers.  When users fetch key updates from the keyserver, GnuPG normally appends all those signatures to the local copy.  As a result, the key becomes unusually large and causes severe performance issues with GnuPG, preventing its normal usage.  The user ends up having to manually remove the key in order to fix the installation.</p>
<h2>The obvious non-solutions and potential solutions</h2>
<p>Let’s start by analyzing the properties I’ve listed above.  After all, removing at least one of the requirements should prevent the attack from being possible.  But can we really do that?</p>
<p>Firstly, we could set a hard limit on the number of signatures or the key size.  This should obviously prevent the attacker from breaking user systems via huge keys.  However, it would make it entirely possible for the attacker to ‘brick’ the key by appending garbage up to the limit.  Then it would no longer be possible to append any valid signatures to the key.  Users would suffer less but the key owner would lose the ability to use the key meaningfully.  It’s a no-go.</p>
<p>Secondly, we could limit key updates to the owner.  However, the keyserver update protocol currently does not provide any standard way of verifying who the uploader is, so it would effectively require incompatible changes at least to the upload protocol.  Furthermore, in order to prevent malicious keyservers from propagating fake signatures we’d also need to carry the verification along when propagating key updates.  This effectively means an extension of the key format, and it has been proposed e.g. in the <a href="https://tools.ietf.org/html/draft-dkg-openpgp-abuse-resistant-keystore-00" rel="external">‘Abuse-Resistant OpenPGP Keystores’ draft</a>.  This is probably a worthwhile option but it will take time before it’s implemented.</p>
<p>Thirdly, we could try to validate signatures.  However, any validation can be easily worked around.  If we started requiring signing keys to be present on the keyserver, the attackers can simply mass-upload keys used to create garbage signatures.  If we went even further and e.g. started requiring verified e-mail addresses for the signing keys, the attackers can simply mass-create e-mail addresses and verify them.  It might work as a temporary solution but it will probably cause more harm than good.</p>
<p>There were other non-solutions suggested — most notably, blacklisting poisoned keys.  However, this is even worse.  It means that every victim of the poisoning attack would be excluded from using the keyserver, and in my opinion it would only provoke the attackers to poison even more keys.  It may sound like a good interim solution preventing users from being hit but it is rather short-sighted.</p>
<h2>keys.openpgp.org / Hagrid — a big non-solution</h2>
<p>A common suggestion for OpenPGP users — one that even the Gentoo news item mentions for lack of an alternative — is to switch to the <a href="https://keys.openpgp.org/about" rel="external">keys.openpgp.org</a> keyserver, or to switch keyservers to their <a href="https://gitlab.com/hagrid-keyserver/hagrid" rel="external">Hagrid</a> software.  It is not vulnerable to the key poisoning attack because it strips away <em>all</em> third-party signatures.  However, this and other limitations make it a rather poor replacement, and in my opinion it can be harmful to the security of OpenPGP.</p>
<p>Firstly, stripping all third-party signatures is not a solution.  It simply avoids the problem by killing a very important portion of the OpenPGP protocol — the Web of Trust.  Without it, the keys obtained from the server cannot be authenticated other than by direct interaction between the individuals.  For example, <a href="https://wiki.gentoo.org/wiki/Project:Infrastructure/Authority_Keys" rel="external">Gentoo Authority Keys</a> can’t work there.  Most of the time, you won’t be able to tell whether the key on the keyserver is legitimate or forged.</p>
<p>The e-mail verification makes it even worse, though not intentionally.  While I agree that many users do not understand or use WoT, Hagrid is implicitly going to cause users to start relying on e-mail verification as proof of key authenticity.  In other words, people are going to assume that if a key on keys.openpgp.org has verified e-mail address, it has to be legitimate.  This makes it trivial for an attacker that manages to gain unauthorized access to the e-mail address or the keyserver to publish a forged key and convince others to use it.</p>
<p>Secondly, Hagrid does not support UID revocations.  This is an entirely absurd case where GDPR fear won over security.  If your e-mail address becomes compromised, you will not be able to revoke it.  Sure, the keyserver admins may eventually stop propagating it along with your key, but all users who fetched the key before will continue seeing it as a valid UID.  Of course, if users send encrypted mail the attacker won’t be able to read it.  However, the users can be trivially persuaded to switch to a new, forged key.</p>
<p>Thirdly, Hagrid rejects all UIDs except for verified e-mail-based UIDs.  This is something we could live with if key owners actively pursue having their identities verified.  However, this also means you can’t publish a photo identity or use <a href="https://keybase.io/" rel="external">keybase.io</a>.  The ‘explicit consent’ argument used by upstream is rather silly — apparently every UID requires separate consent, while at the same time you can trivially share somebody else’s <abbr title="Personally identifiable information">PII</abbr> as the real name of a valid e-mail address.</p>
<p>Apparently, upstream is willing to resolve the first two of those issues once satisfactory solutions are established.  However, this doesn’t mean that it’s fine to ignore those problems.  Until they are resolved, and necessary OpenPGP client updates are sufficiently widely deployed, I don’t believe Hagrid or its instance at keys.openpgp.org are good replacements for SKS and other keyservers.</p>
<h2>So what are the solutions?</h2>
<p>Sadly, I am not aware of any good global solution at the moment.  The best workaround for GnuPG users so far is the <a href="https://github.com/gpg/gnupg/commit/2e349bb6173789e0e9e42c32873d89c7bc36cea4" rel="external">new self-sigs-only option</a> that prevents it from importing third-party signatures.  Of course, it shares the first limitation of the Hagrid keyserver.  Future versions of GnuPG will supposedly fall back to this option upon meeting excessively large keys.</p>
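<p>Once a GnuPG release carries the option (it is brand new at the time of writing), enabling it for keyserver fetches should amount to a line in <code>gpg.conf</code>.  A sketch — check your version’s documentation for the exact option names before relying on it:</p>
<pre>
# ~/.gnupg/gpg.conf
# drop third-party signatures when fetching keys from keyservers
keyserver-options self-sigs-only,import-clean
</pre>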
<p>For domain-limited use cases such as Gentoo’s, running a local keyserver with restricted upload access is an option.  However, it requires users to explicitly specify our keyserver, and they effectively end up having to specify a different keyserver for each domain.  Furthermore, <a href="https://wiki.gnupg.org/WKD" rel="external" title="Web Key Directory">WKD</a> can be used to distribute keys.  Sadly, at the moment GnuPG uses it only to locate new keys and does not support refreshing keys via WKD (<a href="https://github.com/mgorny/gemato" rel="external">gemato</a> employs a cheap hack to make it happen).  In both cases, the attack is prevented via isolating the infrastructure and preventing public upload access.</p>
<p>The long-term solution probably lies in the ‘First-party-attested Third-party Certifications‘ section of the <a href="https://tools.ietf.org/html/draft-dkg-openpgp-abuse-resistant-keystore-00" rel="external">‘Abuse-Resistant OpenPGP Keystores’ draft</a>.  In this proposal, every third-party signature must be explicitly attested by the key owner.  Therefore, only the key owner can append additional signatures to the key, and keyservers can reject any signatures that were not attested.  However, this is not currently supported by GnuPG, and once it is, deploying it will most likely take significant time.</p></div>
    </content>
    <updated>2019-07-04T11:23:53Z</updated>
    <category term="Security"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://blogs.gentoo.org/marecki/?p=47</id>
    <link href="https://blogs.gentoo.org/marecki/2019/07/03/case-label-for-pocket-science-lab-v5/#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed" rel="alternate" type="text/html"/>
    <title>Case label for Pocket Science Lab V5</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">tl;dr: Here (PDF, 67 kB) is a case label for Pocket Science Lab version 5 that is compatible with the design for a laser-cut case published by FOSSAsia. In case you haven’t heard about it, Pocket Science Lab [1] is a really nifty board developed by the FOSSAsia community which combines a multichannel, megahertz-range oscilloscope, … <a class="more-link" href="https://blogs.gentoo.org/marecki/2019/07/03/case-label-for-pocket-science-lab-v5/">Continue reading <span class="screen-reader-text">Case label for Pocket Science Lab V5</span></a></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><em>tl;dr</em>: <a href="http://blogs.gentoo.org/marecki/files/2019/07/pslabV5.pdf#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">Here</a> (PDF, 67 kB) is a case label for Pocket Science Lab version 5 that is compatible with the design for a laser-cut case published by FOSSAsia.</p>
<hr/>
<p>In case you haven’t heard about it, Pocket Science Lab <a href="https://blogs.gentoo.org/marecki/feed/#ref1#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[1]</a> is a really nifty board developed by the FOSSAsia community which combines a multichannel, megahertz-range oscilloscope, a multimeter, a logic probe, several voltage sources and a current source, several wave generators, UART and I2C interfaces… and all of this in the form factor of an Arduino Mega, <i>i.e.</i> only somewhat larger than that of a credit card. Hook it up over USB to a PC or an Android device running the official (free and open source, of course) app and you are all set.</p>
<p>Well, not quite set yet. What you get for your 50-ish EUR is just the board itself. You will quite definitely need a set of probe cables (sadly, I have yet to find even an unofficial adaptor allowing one to equip PSLab with standard industry oscilloscope probes using BNC connectors) but if you expect to lug yours around anywhere you go, you will certainly want to invest in a case of some sort. While FOSSAsia does not to my knowledge sell PSLab cases, they provide a design for one <a href="https://blogs.gentoo.org/marecki/feed/#ref2#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[2]</a>. It is meant to be laser-cut but I have successfully managed to 3D-print it as well, and for the more patient among us it shouldn’t be too difficult to hand-cut one with a jigsaw either.</p>
<p>Of course in addition to making sure your Pocket Science Lab is protected against accidental damage it would also be nice to have all the connectors clearly labelled. The documentation bundled with PSLab software shows quite a few “how to connect instrument X” diagrams but unfortunately said diagrams picture version 4 of the board, while the current major version, V5, features a radically different pinout (compare <a href="https://blogs.gentoo.org/marecki/feed/#ref3#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[3]</a> with <a href="https://blogs.gentoo.org/marecki/feed/#ref4#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[4]</a>/<a href="https://blogs.gentoo.org/marecki/feed/#ref5#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[5]</a> and you will see immediately what I mean), not to mention that having to stare at a screen while wiring your circuit isn’t always optimal. Now, all versions of the board feature a complete set of header labels (along with LEDs showing the device is active) on the front side and at least the more recent ones additionally show more detailed descriptions on the back, clearly suggesting the optimal way to go is to make your case out of transparent material. But what if looking at the provided labels directly is not an option, for instance because you have gone eco-friendly and made your case out of wood? Probably stick a label to the front of the case… which brings us back to the problem of the case label from <a href="https://blogs.gentoo.org/marecki/feed/#ref5#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[5]</a> not being compatible with recent versions of the board.</p>
<p>Which brings me to my take on adapting the design from <a href="https://blogs.gentoo.org/marecki/feed/#ref5#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[5]</a> to match the header layout and labels of PSLab V5.1 as well as the laser-cut case design from <a href="https://blogs.gentoo.org/marecki/feed/#ref2#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed">[2]</a>. It could probably be more accurate but having tried it out, it is close enough. Bluetooth and ICSP-programmer connectors near the centre of the board are not included because the current case design does not provide access to them and indeed, they haven’t even got headers soldered in. Licence and copyright: same as the original.</p>
<p><a name="ref1"><br/>
[1] </a><a href="https://pslab.io/">https://pslab.io/</a><br/>
<a name="ref2"><br/>
[2] </a><a href="https://github.com/fossasia/pslab-case">https://github.com/fossasia/pslab-case</a><br/>
<a name="ref3"><br/>
[3] </a><a href="https://github.com/fossasia/pslab-hardware/raw/master/docs/images/PSLab_v5_top.png">https://github.com/fossasia/pslab-hardware/raw/master/docs/images/PSLab_v5_top.png</a><br/>
<a name="ref4"><br/>
[4] </a><a href="https://github.com/fossasia/pslab-hardware/raw/master/docs/images/pslab_version_previews/PSLab_v4.png">https://github.com/fossasia/pslab-hardware/raw/master/docs/images/pslab_version_previews/PSLab_v4.png</a><br/>
<a name="ref5"><br/>
[5] </a><a href="https://github.com/fossasia/pslab-hardware/raw/master/docs/images/pslabdesign.png">https://github.com/fossasia/pslab-hardware/raw/master/docs/images/pslabdesign.png</a></p></div>
    </content>
    <updated>2019-07-03T17:28:54Z</updated>
    <category term="Uncategorized"/>
    <category term="electronics"/>
    <category term="pocket science lab"/>
    <category term="pslab"/>
    <author>
      <name>marecki</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/marecki</id>
      <link href="https://blogs.gentoo.org/marecki/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/marecki" rel="alternate" type="text/html"/>
      <subtitle>Thoughts and mental notes on (mostly) Linux</subtitle>
      <title>Alice in Penguinland</title>
      <updated>2019-07-04T10:02:17Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2019/07/03/sks-key-poisoning.html</id>
    <link href="https://www.gentoo.org/news/2019/07/03/sks-key-poisoning.html" rel="alternate" type="text/html"/>
    <title>Impact of SKS keyserver poisoning on Gentoo</title>
<summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>The <a href="https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d6955275f">SKS keyserver network has been the victim of a certificate poisoning
attack</a>
lately.  The OpenPGP verification used for repository syncing is protected
against the attack.  However, our users can be affected when using GnuPG
directly.  In this post, we would like to briefly summarize what the attack is,
what we did to protect Gentoo against it, and what you can do to protect your
system.</p>



<p>The certificate poisoning attack abuses three facts: that OpenPGP keys can
contain an unlimited number of signatures, that anyone can append signatures
to any key, and that there is no way to distinguish a legitimate signature
from garbage.  The attackers are appending large numbers of garbage signatures
to keys stored on SKS keyservers, causing them to become very large and to cause
severe performance issues in GnuPG clients that fetch them.</p>

<p>The attackers have poisoned the keys of a few high-ranking OpenPGP people
on the SKS keyservers, including one Gentoo developer.  Furthermore, the
current expectation is that the problem won’t be fixed any time soon, so it
seems plausible that more keys may be affected in the future.  We recommend
that users do not fetch or refresh keys from the SKS keyserver network (this
includes aliases such as <code class="highlighter-rouge">keys.gnupg.net</code>) for the time being.  GnuPG upstream is
already working on client-side countermeasures, and they can be expected to
enter Gentoo as soon as they are released.</p>

<p>The Gentoo key infrastructure has not been affected by the attack.  Shortly
after it was reported, we disabled fetching developer key updates from SKS,
and today we disabled public key upload access to prevent the keys stored
on the server from being poisoned by a malicious third party.</p>

<p>The gemato tool used to verify the Gentoo ebuild repository uses
<a href="https://wiki.gnupg.org/WKD">WKD</a> by default. During normal operation it should
not be affected by this vulnerability. Gemato has a keyserver fallback that
might be vulnerable if WKD fails; however, gemato operates in an isolated
environment that will prevent a poisoned key from causing permanent damage to
your system. In the worst case, Gentoo repository syncs will be slow or will hang.</p>

<p>The webrsync and delta-webrsync methods also support gemato, although it is
not used by default at the moment.  In order to use it, you need to remove
<code class="highlighter-rouge">PORTAGE_GPG_DIR</code> from <code class="highlighter-rouge">/etc/portage/make.conf</code> (if it is present) and put
the following values into <code class="highlighter-rouge">/etc/portage/repos.conf</code>:</p>

<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[gentoo]
sync-type = webrsync
sync-webrsync-delta = true  # false to use plain webrsync
sync-webrsync-verify-signature = true
</code></pre></div></div>

<p>Afterwards, calling <code class="highlighter-rouge">emerge --sync</code> or <code class="highlighter-rouge">emaint sync --repo gentoo</code> will use
gemato key management rather than the vulnerable legacy method.  The default is
going to be changed in a future release of Portage.</p>

<p>When using GnuPG directly, Gentoo developer and service keys can
be securely fetched (and refreshed) via:</p>

<ol>
  <li>Web Key Directory, e.g. <code class="highlighter-rouge">gpg --locate-key developer@gentoo.org</code></li>
  <li><a href="https://keys.gentoo.org">Gentoo keyserver</a>,
e.g. <code class="highlighter-rouge">gpg --keyserver hkps://keys.gentoo.org ...</code></li>
  <li>Key bundles, e.g.:
<a href="https://qa-reports.gentoo.org/output/active-devs.gpg">active devs</a>,
<a href="https://qa-reports.gentoo.org/output/service-keys.gpg">service keys</a></li>
</ol>

<p>Please note that the aforementioned services provide only keys specific
to Gentoo.  Keys belonging to other people will not be found on our keyserver.
If you are looking for them, you may try the <a href="https://keys.openpgp.org/">keys.openpgp.org</a> keyserver, which is not
vulnerable to the attack, at the cost of stripping all signatures and unverified UIDs.</p>
    </summary>
    <updated>2019-07-03T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://blogs.gentoo.org/blueknight/?p=14</id>
    <link href="https://blogs.gentoo.org/blueknight/2019/04/28/gentoo-blogs-update/#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed" rel="alternate" type="text/html"/>
    <title>Gentoo Blogs Update</title>
<summary>This is just a notification that the Blogs and the appropriate plug-ins for the release 5.1.1 have been updated. With the release of these updates, we (the Gentoo Blog Team) have updated the themes that had updates available. If you have a blog on this site, and have a theme that is based on one of […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>This is just a notification that the Blogs and the appropriate plug-ins for the release 5.1.1 have been updated. </p>



<p>With the release of these updates, we (the Gentoo Blog Team) have updated the themes that had updates available.  If you have a blog on this site and use a theme based on one of the following themes, please consider switching, as these themes are no longer maintained and things will break in your blog.</p>



<ul><li>KDE Breathe</li><li>KDE Graffiti</li><li>Oxygen</li><li>The following default WordPress themes might stop working (simply because of age)<ul><li>Twenty Fourteen</li><li>Twenty Fifteen</li><li>Twenty Sixteen</li></ul></li></ul>



<p>If you are using one of these themes, it is recommended that you switch to one of the other available themes. If you think there is an open source theme that you would like to have available, please contact the Blogs team by opening a Bugzilla bug with the pertinent information.</p>
    </content>
    <updated>2019-04-29T03:41:13Z</updated>
    <category term="Uncategorized"/>
    <category term="blogs"/>
    <category term="Gentoo Blogs"/>
    <category term="plugins"/>
    <category term="theme"/>
    <category term="update"/>
    <category term="wordpress"/>
    <author>
      <name>blueknight</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/blueknight</id>
      <link href="https://blogs.gentoo.org/blueknight/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/blueknight" rel="alternate" type="text/html"/>
      <subtitle>BlueKnight Blog</subtitle>
      <title>BlueKnight Blog</title>
      <updated>2019-04-29T04:02:12Z</updated>
    </source>
  </entry>

  <entry>
    <id>tag:blog.mthode.org,2019-04-24:/posts/2019/Apr/building-gentoo-disk-images/</id>
    <link href="http://blog.mthode.org/posts/2019/Apr/building-gentoo-disk-images/" rel="alternate" type="text/html"/>
    <title>Building Gentoo disk images</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><h2>Disclaimer</h2>
<p>I'm not responsible if you ruin your system; this guide functions as documentation for future me.  Remember to back up your data.</p>
<h2>Why this is useful / needed</h2>
<p>It's useful to have a way of building a disk image for shipping, either for testing or production usage.  The image output formats could be qcow2, raw or a compressed tarball; it's up to you to make this what you want it to be.</p>
<h2>Pre-work</h2>
<p>Install diskimage-builder; for Gentoo you just have to 'emerge' the latest version.  I personally keep one around in a virtual environment for testing (this also allows me to easily build musl images).</p>
<h2>The actual setup</h2>
<p>What diskimage-builder actually does is take elements and run them. Each element consists of a set of phases in which the element takes action.  All you are really doing is defining the elements, and they will insert themselves where needed.
It also uses environment variables for tunables, or for other various small tweaks.</p>
<p>This is how I build the images at http://distfiles.gentoo.org/experimental/amd64/openstack/</p>
<div class="highlight"><pre><span/><span class="nb">export</span> <span class="nv">GENTOO_PORTAGE_CLEANUP</span><span class="o">=</span>True
<span class="nb">export</span> <span class="nv">DIB_INSTALLTYPE_pip_and_virtualenv</span><span class="o">=</span>package
<span class="nb">export</span> <span class="nv">DIB_INSTALLTYPE_simple_init</span><span class="o">=</span>repo
<span class="nb">export</span> <span class="nv">GENTOO_PYTHON_TARGETS</span><span class="o">=</span><span class="s2">"python3_6"</span>
<span class="nb">export</span> <span class="nv">GENTOO_PYTHON_ACTIVE_VERSION</span><span class="o">=</span><span class="s2">"python3.6"</span>
<span class="nb">export</span> <span class="nv">ELEMENTS</span><span class="o">=</span><span class="s2">"gentoo simple-init growroot vm openssh-server block-device-mbr"</span>
<span class="nb">export</span> <span class="nv">COMMAND</span><span class="o">=</span><span class="s2">"disk-image-create -a amd64 -t qcow2 --image-size 3"</span>
<span class="nb">export</span> <span class="nv">DATE</span><span class="o">=</span><span class="s2">"</span><span class="k">$(</span>date -u +%Y%m%d<span class="k">)</span><span class="s2">"</span>

<span class="nv">GENTOO_PROFILE</span><span class="o">=</span>default/linux/amd64/17.0/no-multilib/hardened <span class="si">${</span><span class="nv">COMMAND</span><span class="si">}</span> -o <span class="s2">"gentoo-openstack-amd64-hardened-nomultilib-</span><span class="si">${</span><span class="nv">DATE</span><span class="si">}</span><span class="s2">"</span> <span class="si">${</span><span class="nv">ELEMENTS</span><span class="si">}</span>
<span class="nv">GENTOO_PROFILE</span><span class="o">=</span>default/linux/amd64/17.0/no-multilib <span class="si">${</span><span class="nv">COMMAND</span><span class="si">}</span> -o <span class="s2">"gentoo-openstack-amd64-default-nomultilib-</span><span class="si">${</span><span class="nv">DATE</span><span class="si">}</span><span class="s2">"</span> <span class="si">${</span><span class="nv">ELEMENTS</span><span class="si">}</span>
<span class="nv">GENTOO_PROFILE</span><span class="o">=</span>default/linux/amd64/17.0/hardened <span class="si">${</span><span class="nv">COMMAND</span><span class="si">}</span> -o <span class="s2">"gentoo-openstack-amd64-hardened-</span><span class="si">${</span><span class="nv">DATE</span><span class="si">}</span><span class="s2">"</span> <span class="si">${</span><span class="nv">ELEMENTS</span><span class="si">}</span>
<span class="nv">GENTOO_PROFILE</span><span class="o">=</span>default/linux/amd64/17.0/systemd <span class="si">${</span><span class="nv">COMMAND</span><span class="si">}</span> -o <span class="s2">"gentoo-openstack-amd64-systemd-</span><span class="si">${</span><span class="nv">DATE</span><span class="si">}</span><span class="s2">"</span> <span class="si">${</span><span class="nv">ELEMENTS</span><span class="si">}</span>
<span class="si">${</span><span class="nv">COMMAND</span><span class="si">}</span> -o <span class="s2">"gentoo-openstack-amd64-default-</span><span class="si">${</span><span class="nv">DATE</span><span class="si">}</span><span class="s2">"</span> <span class="si">${</span><span class="nv">ELEMENTS</span><span class="si">}</span>
</pre></div>


<p>For musl I've had to do some custom work as I have to build the stage4s locally, but it's largely the same (with the additional need to define a musl overlay).</p>
<div class="highlight"><pre><span/><span class="nb">cd</span> ~/diskimage-builder
cp ~/10-gentoo-image.musl diskimage_builder/elements/gentoo/root.d/10-gentoo-image
pip install -U .
<span class="nb">cd</span> ~/

<span class="nb">export</span> <span class="nv">GENTOO_PORTAGE_CLEANUP</span><span class="o">=</span>False
<span class="nb">export</span> <span class="nv">DIB_INSTALLTYPE_pip_and_virtualenv</span><span class="o">=</span>package
<span class="nb">export</span> <span class="nv">DIB_INSTALLTYPE_simple_init</span><span class="o">=</span>repo
<span class="nb">export</span> <span class="nv">GENTOO_PYTHON_TARGETS</span><span class="o">=</span><span class="s2">"python3_6"</span>
<span class="nb">export</span> <span class="nv">GENTOO_PYTHON_ACTIVE_VERSION</span><span class="o">=</span><span class="s2">"python3.6"</span>
<span class="nv">DATE</span><span class="o">=</span><span class="s2">"</span><span class="k">$(</span>date +%Y%m%d<span class="k">)</span><span class="s2">"</span>
<span class="nb">export</span> <span class="nv">GENTOO_OVERLAYS</span><span class="o">=</span><span class="s2">"musl"</span>
<span class="nb">export</span> <span class="nv">GENTOO_PROFILE</span><span class="o">=</span>default/linux/amd64/17.0/musl/hardened

disk-image-create -a amd64 -t qcow2 --image-size <span class="m">3</span> -o gentoo-openstack-amd64-hardened-musl-<span class="s2">"</span><span class="si">${</span><span class="nv">DATE</span><span class="si">}</span><span class="s2">"</span> gentoo simple-init growroot vm

<span class="nb">cd</span> ~/diskimage-builder
git checkout diskimage_builder/elements/gentoo/root.d/10-gentoo-image
pip install -U .
<span class="nb">cd</span> ~/
</pre></div>


<h2>Generic images</h2>
<p>The elements I use are for an OpenStack image, meaning there is no default user/pass; those are set by cloud-init / glean.  For a generic image you will want the following elements.</p>
<p>'gentoo growroot devuser vm'</p>
<p>The following environment variables are needed as well (change the values to match your needs).</p>
<pre>DIB_DEV_USER_PASSWORD=supersecrete
DIB_DEV_USER_USERNAME=secrete
DIB_DEV_USER_PWDLESS_SUDO=yes
DIB_DEV_USER_AUTHORIZED_KEYS=/foo/bar/.ssh/authorized_keys</pre>
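<p>Putting it together, a generic-image build might look like the following sketch (the user name, password and key path are the placeholder values from this post; substitute your own):</p>

```shell
# Generic (non-OpenStack) image: the devuser element bakes a local user
# into the image instead of relying on cloud-init / glean.
export DIB_DEV_USER_PASSWORD=supersecrete                          # placeholder
export DIB_DEV_USER_USERNAME=secrete                               # placeholder
export DIB_DEV_USER_PWDLESS_SUDO=yes
export DIB_DEV_USER_AUTHORIZED_KEYS=/foo/bar/.ssh/authorized_keys  # placeholder

disk-image-create -a amd64 -t qcow2 --image-size 3 \
    -o "gentoo-generic-amd64-$(date -u +%Y%m%d)" \
    gentoo growroot devuser vm
```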
<h2>Fin</h2>
<p>All this work was done upstream; if you have a question (or feature request), just ask.  I'm on IRC (Freenode) as prometheanfire, or use the same nick at gentoo.org for email.</p>
    </summary>
    <updated>2019-04-24T05:00:00Z</updated>
    <category term="stages"/>
    <category term="images"/>
    <category term="openstack"/>
    <author>
      <name>Matthew Thode (prometheanfire)</name>
    </author>
    <source>
      <id>http://blog.mthode.org/</id>
      <link href="http://blog.mthode.org/" rel="alternate" type="text/html"/>
      <link href="https://mthode.org/feeds/gentoo.rss.xml" rel="self" type="application/rss+xml"/>
      <title>Let's Play a Game - Gentoo</title>
      <updated>2019-04-25T03:02:38Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2019/04/16/nitrokey.html</id>
    <link href="https://www.gentoo.org/news/2019/04/16/nitrokey.html" rel="alternate" type="text/html"/>
    <title>Nitrokey partners with Gentoo Foundation to equip developers with USB keys</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://www.nitrokey.com/">
  <img alt="Nitrokey logo" src="https://www.gentoo.org/assets/img/sponsors/nitrokey.png"/>
</a></p>

<p>The <a href="https://wiki.gentoo.org/wiki/Foundation:Main_Page">Gentoo Foundation</a> has
partnered with <a href="https://www.nitrokey.com/">Nitrokey</a> to equip all Gentoo developers
with free <a href="https://www.nitrokey.com/files/doc/Nitrokey_Pro_factsheet.pdf">Nitrokey Pro 2</a>
devices. Gentoo developers will use the Nitrokey devices to store cryptographic
keys for signing of git commits and software packages, GnuPG keys, and SSH
accounts.</p>



<p>Thanks to the Gentoo Foundation and Nitrokey’s discount, each Gentoo developer
is eligible to receive one free Nitrokey Pro 2. To receive their Nitrokey, developers
will need to register with their <code class="highlighter-rouge">@gentoo.org</code> email address at the <a href="https://gentoo.nitrokey.com/">dedicated order
form</a>.</p>

<p>A <a href="https://wiki.gentoo.org/wiki/Project:Infrastructure/Nitrokey_Pro_2_guide_for_Gentoo_developers">Nitrokey Pro 2 Guide</a> is available
on the Gentoo Wiki with FAQ &amp; instructions for integrating Nitrokeys into developer
workflow.</p>

<h2 id="about-nitrokey-pro-2">ABOUT NITROKEY PRO 2</h2>

<p><a href="https://www.nitrokey.com/files/doc/Nitrokey_Pro_factsheet.pdf">Nitrokey Pro 2</a>
has strong reliable hardware encryption, thanks to open source.  It can help
you to: sign Git commits; encrypt emails and files; secure server access; and
protect accounts against identity theft via two-factor authentication (one-time
passwords).</p>

<h2 id="about-gentoo">ABOUT GENTOO</h2>

<p><a href="https://www.gentoo.org/">Gentoo Linux</a> is a free, source-based, rolling
release meta distribution that features a high degree of flexibility and high
performance. It empowers you to make your computer work for you, and offers a
variety of choices at all levels of system configuration.</p>

<p>As a community, Gentoo consists of approximately two hundred developers and
over fifty thousand users globally.</p>

<p>The <a href="https://wiki.gentoo.org/wiki/Foundation:Main_Page">Gentoo Foundation</a>
supports the development of Gentoo, protects Gentoo’s intellectual property,
and oversees adherence to Gentoo’s Social Contract.</p>

<h2 id="about-nitrokey">ABOUT NITROKEY</h2>

<p><a href="https://www.nitrokey.com/">Nitrokey</a> is a German IT security startup committed
to open source hardware and software. Nitrokey develops and produces USB keys
for data encryption, email encryption (PGP/GPG, S/MIME), and secure account
logins (SSH, two-factor authentication via OTP and FIDO).</p>

<p>Nitrokey is proud to support the Gentoo Foundation in further securing the
Gentoo infrastructure and contributing to a secure open source Linux
ecosystem.</p></div>
    </summary>
    <updated>2019-04-16T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2184</id>
    <link href="http://www.ultrabug.fr/scylla-four-ways-to-optimize-your-disk-space-consumption/" rel="alternate" type="text/html"/>
    <title>Scylla: four ways to optimize your disk space consumption</title>
<summary>We recently had to face free disk space outages on some of our scylla clusters, and we learnt some very interesting things along the way, while also suggesting some possible improvements to the ScyllaDB guys. 100% disk space usage? First of all I wanted to give a bit of a heads up about what happened when […]</summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>We recently had to face free disk space outages on some of our scylla clusters, and we learnt some very interesting things along the way, while also suggesting some possible improvements to the ScyllaDB guys.</p>



<figure class="wp-block-image"><img alt="" class="wp-image-2193" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2019/03/2019-03-29-120403_219x158_scrot.png"/></figure>



<h3>100% disk space usage?</h3>



<p>First of all I wanted to give a bit of a heads up about what happened when some of our scylla nodes reached (almost) 100% disk space usage.</p>



<p>Basically they:</p>



<ul><li>stopped listening to client requests</li><li>complained in the logs</li><li>wouldn’t flush the commitlog (expected)</li><li>aborted their compaction work (which actually gave back a few GB of space)</li><li>stayed in a stuck / unable-to-stop state (unexpected; this has been reported)</li></ul>



<p>After restarting your scylla server, the first and obvious thing you can try in order to get out of this situation is to run the <strong>nodetool clearsnapshot</strong> command, which will remove any data snapshots that may be lying around. It’s usually a handy command for reclaiming space.</p>
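<p>If you want to see what you are about to reclaim before deleting anything, nodetool can list snapshots first. A minimal sketch using standard nodetool subcommands, run on each affected node:</p>

```shell
# Show existing snapshots and the true disk space they hold...
nodetool listsnapshots

# ...then drop all snapshots across all keyspaces to reclaim that space.
nodetool clearsnapshot
```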



<p><em>Reminder: depending on your compaction strategy, it is usually not advised to allow your data to grow over 50% of disk space.</em></p>



<p>But that’s only a patch, so let’s go down the rabbit hole and look at the optimization options we have.</p>



<hr class="wp-block-separator is-style-dots"/>



<h3>Optimize your schemas</h3>



<p>Schema design and the types you choose for your columns have a huge impact on disk space usage! In our case we indeed overlooked some of the optimizations that we could have done from the start, and that cost us a lot of wasted disk space. Fortunately it was easy and fast to change.</p>



<p>To illustrate this, I’ll take a sample of 100,000 rows of a simple and naive schema associating readings of 50 integers to a user ID:</p>



<p><em>Note: all those operations were done using Scylla 3.0.3 on Gentoo Linux.</em></p>



<pre class="wp-block-preformatted">CREATE TABLE IF NOT EXISTS test.not_optimized<br/>(<br/>    uid text,<br/>    readings list&lt;int&gt;,<br/>    PRIMARY KEY(uid)<br/>) WITH compression = {};</pre>



<p>Once inserted on disk, this takes about <strong>250MB</strong> of disk space:</p>



<pre class="wp-block-preformatted">250M    not_optimized-00cf1500520b11e9ae38000000000004</pre>



<p>Now depending on your use case, if those readings are not meant to be updated, for example, you could use a <strong>frozen list</strong> instead, which allows a huge storage optimization:</p>



<pre class="wp-block-preformatted">CREATE TABLE IF NOT EXISTS test.mid_optimized<br/> (<br/>     uid text,<br/>     readings frozen&lt;list&lt;int&gt;&gt;,<br/>     PRIMARY KEY(uid)<br/> ) WITH compression = {};</pre>



<p>With this frozen list we now consume <strong>54MB</strong> of disk space <strong>for the same data</strong>!</p>



<pre class="wp-block-preformatted">54M     mid_optimized-011bae60520b11e9ae38000000000004</pre>



<p>There’s another optimization that we could do, since our user IDs are UUIDs. Let’s switch to the <strong>uuid type instead of text</strong>:</p>



<pre class="wp-block-preformatted">CREATE TABLE IF NOT EXISTS test.optimized<br/> (<br/>     uid uuid,<br/>     readings frozen&lt;list&lt;int&gt;&gt;,<br/>     PRIMARY KEY(uid)<br/> ) WITH compression = {};</pre>



<p>By switching to <strong>uuid</strong>, we now consume <strong>50MB</strong> of disk space: that’s an <strong>80% reduction in disk space consumption</strong> compared to the naive schema for the same data!</p>



<pre class="wp-block-preformatted">50M     optimized-01f74150520b11e9ae38000000000004</pre>



<h3>Enable compression</h3>



<p>None of those examples used compression. If your workload latencies allow it, you should probably enable compression on your sstables.</p>



<p>Let’s see its impact on our tables:</p>



<pre class="wp-block-preformatted">ALTER TABLE test.not_optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};<br/>ALTER TABLE test.mid_optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};<br/>ALTER TABLE test.optimized WITH compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};</pre>



<p>Then we run a <strong>nodetool compact test</strong> to force a (re)compaction of all the sstables and we get:</p>



<pre class="wp-block-preformatted">63M     not_optimized-00cf1500520b11e9ae38000000000004<br/>28M     mid_optimized-011bae60520b11e9ae38000000000004<br/>24M     optimized-01f74150520b11e9ae38000000000004</pre>
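<p>As a side note, the per-table sizes quoted in this post can be read straight from the data directory. Assuming a default Scylla installation path (adjust it to your setup), something like:</p>

```shell
# Sum the on-disk size of each table directory in the 'test' keyspace.
# /var/lib/scylla/data is the default Scylla data path.
du -sh /var/lib/scylla/data/test/*
```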



<p>Compression is really a great gain here, allowing another <strong>50% disk space reduction on our optimized table</strong>!</p>



<h3>Switch to the new “mc” sstable format</h3>



<p>Since the Scylla 3.0 release you can use the latest “mc” sstable storage format on your scylla clusters. It promises greater efficiency and <a href="https://www.scylladb.com/2019/01/24/scylla-sstable-3-0-can-decrease-file-sizes-50-or-more/" rel="noreferrer noopener" target="_blank">usually a much smaller disk space</a> consumption!</p>



<p>It is <strong>not</strong> enabled by default; you have to add the <strong>enable_sstables_mc_format: true</strong> parameter to your scylla.yaml for it to be taken into account.</p>



<p>Since it’s backward compatible, you have nothing else to do: new compactions will start being made using the “mc” storage format, and the scylla server will seamlessly read from old sstables as well.</p>



<p>But in our case of immediate disk space outage, we switched to the new format one node at a time, dropped the data from it and ran a <strong>nodetool rebuild</strong> to reconstruct the whole node using the new sstable format.</p>



<p>Let’s demonstrate its impact on our test tables: we add the option to the <strong>scylla.yaml</strong> file, restart scylla-server and run <strong>nodetool compact test</strong> again:</p>



<pre class="wp-block-preformatted">49M     not_optimized-00cf1500520b11e9ae38000000000004<br/>26M     mid_optimized-011bae60520b11e9ae38000000000004<br/>22M     optimized-01f74150520b11e9ae38000000000004</pre>



<p>That’s a pretty cool gain of disk space, even more so for the non-optimized version of our schema!</p>



<p>So if you’re in great need of disk space or it is hard for you to change your schemas, switching to the new “mc” sstable format is a simple and efficient way to free up some space without effort.</p>



<h3>Consider using secondary indexes</h3>



<p>While denormalization is the norm (<em>yep.. legitimate pun</em>) in the NoSQL world, this does not mean we have to duplicate everything all the time. A good example lies in the internals of <strong>secondary indexes</strong>, if your workload can tolerate their moderate impact on latency.</p>



<p>Secondary indexes on scylla are built on top of Materialized Views that basically store an up-to-date pointer from your indexed column to your main table partition key. That means that <strong>secondary index MVs do not duplicate all the columns (and thus the data) from your main table</strong> as you would have to do when denormalizing a table to query by another column: <strong>this saves disk space!</strong></p>



<p>This of course comes with a latency drawback, because if your workload is interested in columns other than the partition key of the main table, the coordinator node will actually issue two queries to get all your data:</p>



<ol><li>query the secondary index MV to get the pointer to the partition key of the main table</li><li>query the main table with the partition key to get the rest of the columns you asked for</li></ol>
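<p>To make this concrete, here is a hypothetical sketch (the <code>country</code> column is invented purely for the example; our real schemas differ) of indexing the optimized table from earlier instead of duplicating it, using cqlsh:</p>

```shell
# Add a hypothetical 'country' column and index it: the index MV only
# stores country -> partition key, not a full copy of the table.
cqlsh -e "ALTER TABLE test.optimized ADD country text;"
cqlsh -e "CREATE INDEX ON test.optimized (country);"

# This query is now served by the two-step lookup described above,
# with no denormalized per-country copy of the table on disk.
cqlsh -e "SELECT uid, readings FROM test.optimized WHERE country = 'FR';"
```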



<p>This has been an effective trick to avoid duplicating a table and save disk space for some of our workloads!</p>



<h3>(not a tip) Move the commitlog to another disk / partition?</h3>



<p>This should only be considered as a sort of emergency procedure, or for cost efficiency (cheap disk tiering) on <strong>non-critical clusters</strong>.</p>



<p>While this is possible even if the disk is not formatted using XFS, it is not advised to separate the commitlog from data on modern SSD/NVMe disks, but… you technically can do it (as we did) <strong>on non-production clusters</strong>.</p>



<p>Switching is simple: you just need to change the <strong>commitlog_directory</strong> parameter in your scylla.yaml file.</p>
    </content>
    <updated>2019-03-29T11:47:32Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="optimization"/>
    <category term="scylla"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:32Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2019/03/27/gnome-330-openrc.html</id>
    <link href="https://www.gentoo.org/news/2019/03/27/gnome-330-openrc.html" rel="alternate" type="text/html"/>
    <title>Gentoo GNOME 3.30 for all init systems</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://www.gnome.org/">
  <img alt="GNOME logo" height="80" src="https://www.gentoo.org/assets/img/news/2019/logo-gnome.svg"/>
</a></p>

<p><a href="https://www.gnome.org/news/2018/09/gnome-3-30-released/">GNOME 3.30</a> is now 
available in Gentoo Linux testing branch. Starting with this release, GNOME on 
Gentoo once again works with <a href="https://wiki.gentoo.org/wiki/Project:OpenRC">OpenRC</a>, 
in addition to the usual systemd option. This is achieved through the <a href="https://github.com/elogind/elogind">elogind 
project</a>, a standalone logind implementation 
based on systemd code, which is currently maintained by a fellow Gentoo user. 
Gentoo would like to thank Mart Raudsepp (leio), Gavin Ferris, and all others
working on this for their contributions. More information can be found in
<a href="https://blogs.gentoo.org/leio/2019/03/26/gnome-3-30/">Mart’s blog post</a>.</p></div>
    </summary>
    <updated>2019-03-27T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://blogs.gentoo.org/leio/?p=77</id>
    <link href="https://blogs.gentoo.org/leio/2019/03/26/gnome-3-30/#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed" rel="alternate" type="text/html"/>
    <title>Gentoo GNOME 3.30 for all init systems</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">GNOME 3.30 is now available in Gentoo Linux testing branch. Starting with this release, GNOME on Gentoo once again works with OpenRC, in addition to the usual systemd option. This is achieved through the elogind project, a standalone logind implementation based on systemd code, which is currently maintained by a fellow Gentoo user. It provides … <a class="more-link" href="https://blogs.gentoo.org/leio/2019/03/26/gnome-3-30/">Continue reading <span class="screen-reader-text">Gentoo GNOME 3.30 for all init systems</span></a></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>GNOME 3.30 is now available in Gentoo Linux testing branch.<br/>
Starting with this release, GNOME on Gentoo once again works with OpenRC, in addition to the usual systemd option. This is achieved through the <a href="https://github.com/elogind/elogind">elogind project</a>, a standalone logind implementation based on systemd code, which is currently maintained by a fellow Gentoo user. It provides the missing <a href="https://www.freedesktop.org/wiki/Software/systemd/logind/">logind interfaces</a> currently required by GNOME without booting with systemd.</p>
<p>For easier GNOME install, the <i>desktop/gnome</i> profiles now set up default USE flags with elogind for OpenRC systems, while the <i>desktop/gnome/systemd</i> profiles continue to do that for systemd systems. Both have been updated to provide a better initial GNOME install experience. After profile selection, a full install should be simply a matter of `emerge gnome` for testing branch users. Don’t forget to <a href="https://wiki.gentoo.org/wiki/Handbook:AMD64/Working/USE#Adapting_the_entire_system_to_the_new_USE_flags">adapt your system</a> to any changed USE flags on previously installed packages too.</p>
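For an OpenRC system on the testing branch, the steps above amount to roughly the following session (a sketch; the exact profile number is system-specific, so check the list first):

```
eselect profile list                      # find the desktop/gnome profile entry
eselect profile set <number>              # select it
emerge --ask --changed-use --deep @world  # adapt installed packages to the new USE flags
emerge --ask gnome                        # pull in the full GNOME desktop
```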
<p>GNOME 3.32 is expected to be made available in the testing branch soon as well, followed by the introduction of all this for stable branch users. This is hoped to be completed within 6-8 weeks.</p>
<p>If you encounter issues, don’t hesitate to file <a href="https://bugs.gentoo.org/">bug reports</a> or, if necessary, <a href="https://wiki.gentoo.org/wiki/User:Leio">contact me</a> via e-mail or IRC. You can also discuss the elogind aspects on the <a href="https://forums.gentoo.org/viewtopic-t-1094796.html">Gentoo Forums</a>.</p>
<h4 style="color: grey; font-size: 0.9em;">Acknowledgements</h4>
<p style="color: grey; font-size: 0.8em;">I’d like to thank Gavin Ferris, for kindly agreeing to sponsor my work on the above (upgrading GNOME on Gentoo from 3.26 to 3.30 and introducing Gentoo GNOME elogind support); and dantrell, for his pioneering overlay work integrating GNOME 3 with OpenRC on Gentoo, and also the GNOME and elogind projects.</p></div>
    </content>
    <updated>2019-03-26T16:51:49Z</updated>
    <category term="Gentoo"/>
    <category term="GNOME"/>
    <author>
      <name>leio</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/leio</id>
      <link href="https://blogs.gentoo.org/leio/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/leio" rel="alternate" type="text/html"/>
      <subtitle>Just another Gentoo Blogs site</subtitle>
      <title>Gentoo – Mart Raudsepp</title>
      <updated>2019-03-26T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2179</id>
    <link href="http://www.ultrabug.fr/py3status-v3-17/" rel="alternate" type="text/html"/>
    <title>py3status v3.17</title>
    <summary>I’m glad to announce a new (awaited) release of py3status featuring support for the sway window manager which allows py3status to enter the wayland environment! Updated configuration and custom modules paths detection The configuration section of the documentation explains the updated detection of the py3status configuration file (with respect of XDG_CONFIG environment variables): ~/.config/py3status/config ~/.config/i3status/config […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>I’m glad to announce a new (awaited) release of <strong>py3status</strong> featuring support for the <strong>sway window manager</strong> which allows py3status to enter the wayland environment!</p>



<h3>Updated configuration and custom modules paths detection</h3>



<p>The <a href="https://py3status.readthedocs.io/en/latest/configuration.html" rel="noreferrer noopener" target="_blank">configuration section</a> of the documentation explains the updated detection of the py3status configuration file (with respect to the XDG_CONFIG environment variables):</p>



<ul><li>~/.config/py3status/config</li><li>~/.config/i3status/config</li><li>~/.config/i3/i3status.conf</li><li>~/.i3status.conf</li><li>~/.i3/i3status.conf</li><li>/etc/xdg/i3status/config</li><li>/etc/i3status.conf</li></ul>



<p>Regarding <a href="https://py3status.readthedocs.io/en/latest/writing_modules.html" rel="noreferrer noopener" target="_blank">custom modules paths detection</a>, py3status searches the following paths, as described in the documentation:</p>



<ul><li>~/.config/py3status/modules</li><li>~/.config/i3status/py3status</li><li>~/.config/i3/py3status</li><li>~/.i3/py3status</li></ul>
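In both cases the lookup is a simple first-match walk over the candidate locations. A minimal sketch of the idea (not py3status’s actual code; it takes the candidate paths as arguments so it stays generic):

```shell
# Return the first existing file among the candidate paths given as arguments,
# mirroring the documented search order; fail if none of them exists.
find_config() {
    for candidate in "$@"; do
        if [ -f "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}

# Documented order, e.g.:
#   find_config "$HOME/.config/py3status/config" \
#               "$HOME/.config/i3status/config" \
#               /etc/xdg/i3status/config \
#               /etc/i3status.conf
```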



<h3>Highlights</h3>



<p>Lots of modules improvements and clean ups, see <a href="https://github.com/ultrabug/py3status/blob/master/CHANGELOG" rel="noreferrer noopener" target="_blank">changelog</a>.</p>



<ul><li>we worked on the <strong>documentation</strong> sections and content, which allowed us to fix a bunch of typos</li><li>our magic <strong>@lasers</strong> has worked a lot on harmonizing thresholds across modules, along with a lot of code clean-ups</li><li>new module: <a href="https://py3status.readthedocs.io/en/latest/modules.html#scroll" rel="noreferrer noopener" target="_blank">scroll</a> to scroll modules on your bar (#1748)</li><li><strong>@lasers</strong> has worked a lot on more granular pango support for module output (still some work to do, as it breaks some composites)</li></ul>



<h3>Thanks contributors</h3>



<ul><li>Ajeet D’Souza</li><li>@boucman</li><li>Cody Hiar</li><li>@cyriunx</li><li>@duffydack</li><li>@lasers</li><li>Maxim Baz</li><li>Thiago Kenji Okada</li><li>Yaroslav Dronskii</li></ul></div>
    </content>
    <updated>2019-03-25T14:12:58Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="portage"/>
    <category term="py3status"/>
    <category term="release"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:32Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://blogs.gentoo.org/ago/?p=2393</id>
    <link href="https://blogs.gentoo.org/ago/2019/03/20/install-gentoo-in-less-than-one-minute/#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed" rel="alternate" type="text/html"/>
    <title>Install Gentoo in less than one minute</title>
<summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">I’m pretty sure that the title of this post will catch your attention…and/or maybe your curiosity. Well… this is something I’ve been doing for years… and since it did not cost too much to bring it to a public and usable state, I decided … <a href="https://blogs.gentoo.org/ago/2019/03/20/install-gentoo-in-less-than-one-minute/">Continue reading <span class="meta-nav">→</span></a></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>I’m pretty sure that the title of this post will catch your attention…and/or maybe your curiosity.</p>



<p>Well… this is something I’ve been doing for years… and since it did not cost too much to bring it to a public and usable state, I decided to share my work, to help people avoid wasting time and getting angry when their cloud provider does not offer a Gentoo image.<br/></p>



<p>So what are the goals of this project?</p>



<ol><li>Install Gentoo on cloud providers that do not offer a Gentoo image (e.g. Hetzner)</li><li>Install Gentoo anywhere in a few seconds.</li></ol>



<p>To do a fast installation, we need a stage4… but what exactly is a stage4? In this case the stage4 consists of the official Gentoo stage3 plus grub, some additional utilities and some files already configured.</p>



<p>So, since the stage4 already has everything needed to complete the installation, we just need to make a few replacements (fstab, grub and so on), install grub on the disk… and… it’s done (by the auto-installer script)!</p>
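To illustrate the kind of replacement involved (a hypothetical sketch; the real logic and file names live in the gentoo-stage4 repository, and the @ROOT@ placeholder is an assumption):

```shell
# Render a stage4 fstab template by substituting the root device placeholder.
finalize_fstab() {
    root_dev=$1
    template=$2
    sed "s|@ROOT@|$root_dev|g" "$template"
}

# The remaining steps would roughly be: unpack the stage4 onto the disk,
# fix up the rendered files, then install the bootloader, e.g.:
#   grub-install /dev/sda
#   grub-mkconfig -o /boot/grub/grub.cfg
```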



<p>At this point I’d expect some people to say… “yeah… it’s so simple and logical… why didn’t I think of that?” – Well, I guess no Gentoo user discovered that right after their first installation… so you don’t need to blame yourself <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" style="height: 1em;"/></p>



<p>The technical details are covered by the README in the <a href="https://github.com/asarubbo/gentoo-stage4">gentoo-stage4 git repository</a>.</p>



<p>As said in the README: </p>



<ul><li>If you have any requests, feel free to contact me</li><li>A star on the project will give me an idea of its usage, and thus of the effort to put into it.</li></ul>



<p>So what’s more? Just a screenshot of the script in action <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" style="height: 1em;"/></p>



<figure class="wp-block-image"><img alt="" class="wp-image-2403" src="https://blogs.gentoo.org/ago/files/2019/03/screen.png"/></figure>



<p># Gentoo hetzner cloud<br/># Gentoo stage4<br/># Gentoo cloud</p></div>
    </content>
    <updated>2019-03-20T18:35:12Z</updated>
    <category term="gentoo"/>
    <author>
      <name>ago</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/ago</id>
      <link href="https://blogs.gentoo.org/ago/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/ago" rel="alternate" type="text/html"/>
      <subtitle>my fuzzer always find something more than your...</subtitle>
      <title>gentoo – agostino's blog</title>
      <updated>2019-03-20T19:02:11Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/postgresql-major-version-upgrade-gentoo/</id>
    <link href="https://blog.lordvan.com/blog/postgresql-major-version-upgrade-gentoo/" rel="alternate" type="text/html"/>
    <title>Postgresql major version upgrade (gentoo)</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Just did an upgrade from postgres 10.x to 11.x on a test machine..</p>
<p>The guide on the <a href="https://wiki.gentoo.org/wiki/PostgreSQL/QuickStart">Gentoo Wiki</a> is pretty good, but a few things I forgot at first:</p>
<p>First off, when initializing the new cluster with "<code>emerge --config =dev-db/postgresql-11.1</code>", make sure the DB init options are the same as for the old cluster. They are stored in <code>/etc/conf.d/postgresql-XX.Y</code>, so just make sure PG_INITDB_OPTS, collation, etc. match - if not, delete the new cluster and re-run emerge --config ;)</p>
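For example, the relevant file might contain something like the following (the values here are purely illustrative; what matters is copying whatever your old cluster’s file contains):

```
# /etc/conf.d/postgresql-11.1 - init options must match those the old
# cluster (e.g. /etc/conf.d/postgresql-10.x) was created with
PG_INITDB_OPTS="--encoding=UTF8 --locale=en_US.UTF-8"
```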
<p>The second thing was <code>pg_hba.conf</code>: make sure to re-add extra user/db/connection permissions again (in my case I ran diff and then just copied the old config file as the only difference was the extra permissions I had added)</p>
<p>The third thing was <code>postgresql.conf</code>: here I forgot to make sure <code>listen_addresses</code> and <code>port</code> are the same as in the old config (I did not copy this one as there are a lot more differences here) -- and of course check the rest of the config file too (diff is your friend ;) )</p>
<p>Other than that, <code>pg_upgrade</code> worked really well for me and it is now up and running again.</p></div>
    </summary>
    <updated>2019-02-22T10:37:10Z</updated>
    <category term="Database"/>
    <category term="Development"/>
    <category term="Gentoo"/>
    <category term="Postgresql"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:22Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=844</id>
    <link href="https://blogs.gentoo.org/mgorny/2019/02/20/gen-revoke-extending-revocation-certificates-to-subkeys/" rel="alternate" type="text/html"/>
    <title>gen-revoke: extending revocation certificates to subkeys</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Traditionally, OpenPGP revocation certificates are used as a last resort. You are expected to generate one for your primary key and keep it in a secure location. If you ever lose the secret portion of the key and are unable to revoke it any other way, you import the revocation certificate and submit the updated … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2019/02/20/gen-revoke-extending-revocation-certificates-to-subkeys/">Continue reading<span class="screen-reader-text"> "gen-revoke: extending revocation certificates to subkeys"</span></a></p></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Traditionally, OpenPGP revocation certificates are used as a last resort.  You are expected to generate one for your primary key and keep it in a secure location.  If you ever lose the secret portion of the key and are unable to revoke it any other way, you import the revocation certificate and submit the updated key to keyservers.  However, there is another interesting use for revocation certificates — revoking shared organization keys.</p>
<p>Let’s take Gentoo, for example.  We are using a few keys needed to perform automated signatures on servers.  For this reason, the key is especially exposed to attacks and we want to be able to revoke it quickly if the need arises.  Now, we really do not want to have every single Infra member hold a copy of the secret primary key.  However, we can give Infra members revocation certificates instead.  This way, they maintain the possibility of revoking the key without unnecessarily increasing its exposure.</p>
<p>The problem with traditional revocation certificates is that they are supported for the purpose of revoking the primary key only.  In our security model, the primary key is well protected, compared to subkeys that are totally exposed.  Therefore, it is superfluous to revoke the complete key when only a subkey is compromised.  To resolve this limitation, the <a href="https://github.com/mgorny/gen-revoke" rel="external">gen-revoke</a> tool was created; it can create exported <em>revocation signatures</em> for both the primary key and subkeys.</p>
<p><span id="more-844"/></p>
<h2>Technical background</h2>
<p>The OpenPGP key (v4, as defined by <a href="https://tools.ietf.org/html/rfc4880" rel="external">RFC 4880</a>) consists of a primary key, one or more UIDs and zero or more subkeys.  Each of those keys and UIDs can include zero or more <em>signature packets</em>.  Those packets bind information to the specific key or UID, and their authenticity is confirmed by a signature made using the secret portion of a primary key.</p>
<p>Signatures made by the key’s owner are called <em>self-signatures</em>.  The most basic form of them serves as a binding between the primary key and its subkeys and UIDs.  Since both those classes of objects are created independently of the primary key, self-signatures are necessary to distinguish authentic subkeys and UIDs created by the key owner from potential fakes.  Accordingly, GnuPG will only accept subkeys and UIDs that have a valid self-signature.</p>
<p>One specific type of signature is the <em>revocation signature</em>.  Those signatures indicate that the relevant key, subkey or UID has been revoked.  If a revocation signature is found, it takes precedence over any other kind of signature and prevents the revoked object from being further used.</p>
<p>Key updates are means of distributing new data associated with the key.  What’s important is that during an update the key is not replaced by a new one.  Instead, GnuPG collects all the new data (subkeys, UIDs, signatures) and adds it to the local copy of the key.  The validity of this data is verified against appropriate signatures.  Appropriately, anyone can submit a key update to the keyserver, provided that the new data includes valid signatures.  Similarly to local GnuPG instance, the keyserver is going to update its copy of the key rather than replacing it.</p>
<p>Revocation certificates specifically make use of this property.  Technically, a revocation certificate is simply an exported form of a revocation signature, signed using the owner’s primary key.  As long as it’s not on the key (i.e. GnuPG does not see it), it does not do anything.  When it’s imported, GnuPG adds it to the key.  Further submissions and exports include it, effectively distributing it to all copies of the key.</p>
<p>gen-revoke builds on this idea.  It creates and exports revocation signatures for the primary key and subkeys.  Due to implementation limitations (and for better compatibility), rather than exporting the signature alone it exports a minimal copy of the relevant key.  This copy can be imported just like any other key export, and it causes the revocation signature to be added to the key.  Afterwards, it can be exported and distributed just like a revocation done directly on the key.</p>
<h2>Usage</h2>
<p>To use the script, you need to have the secret portion of the primary key available, and public encryption keys for all the people who are supposed to obtain a copy of the revocation signatures (recipients).</p>
<p>The script takes at least two parameters: an identifier of the key for which revocation signatures should be created, followed by one or more e-mail addresses of signature recipients.  It creates revocation signatures both for the primary key and for all valid subkeys, for all the people specified.</p>
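Based on that description, an invocation would look roughly like this (hypothetical; the key ID and addresses are placeholders, and the actual syntax is documented in the gen-revoke repository):

```
# create revocation signatures for the given key,
# encrypted to each of the two recipients
gen-revoke 0xDEADBEEF alice@example.com bob@example.com
```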
<p>The signatures are written into the current directory as key exports and are encrypted to each specified person.  They should be distributed afterwards, and kept securely by all the individuals.  If a need to revoke either a subkey or the primary key arises, the first person available can decrypt the signature, import it and send the resulting key to keyservers.</p>
<p>Additionally, each signature includes a comment specifying the person it was created for.  This comment will afterwards be displayed by GnuPG if one of the revocation signatures is imported.  This provides a clear audit trace as to who revoked the key.</p>
<h2>Security considerations</h2>
<p>Each of the revocation signatures can be used by an attacker to disable the key in question.  The signatures are protected through encryption.  Therefore, the system is vulnerable to the compromise of any single signature holder’s key.</p>
<p>However, this is considerably safer than the equivalent option of distributing the secret portion of the primary key.  In the latter case, the attacker would be able to completely compromise the key and use it for malicious purposes; in the former, they are only capable of revoking the key and therefore causing some frustration.  Furthermore, the revocation comment helps identify the compromised user.</p>
<p>The tradeoff between reliability and security can be adjusted by changing the number of revocation signature holders.</p></div>
    </content>
    <updated>2019-02-20T13:18:52Z</updated>
    <category term="Security"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=841</id>
    <link href="https://blogs.gentoo.org/mgorny/2019/01/31/evolution-uid-trust-extrapolation-attack-on-openpgp-signatures/" rel="alternate" type="text/html"/>
    <title>Evolution: UID trust extrapolation attack on OpenPGP signatures</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">This article describes the UI deficiency of Evolution mail client that extrapolates the trust of one of OpenPGP key UIDs into the key itself, and reports it along with the (potentially untrusted) primary UID. This creates the possibility of tricking the user into trusting a phished mail via adding a forged UID to a key … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2019/01/31/evolution-uid-trust-extrapolation-attack-on-openpgp-signatures/">Continue reading<span class="screen-reader-text"> "Evolution: UID trust extrapolation attack on OpenPGP signatures"</span></a></p></div>
    </summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>This article describes a UI deficiency of the Evolution mail client that extrapolates the trust of one of an OpenPGP key’s UIDs onto the key itself, and reports it along with the (potentially untrusted) primary UID.  This creates the possibility of tricking the user into trusting a phished mail by adding a forged UID to a key that has a previously trusted UID.</p>
<p><a href="https://dev.gentoo.org/~mgorny/articles/evolution-uid-trust-extrapolation.html" rel="external">Continue reading</a></p></div>
    </content>
    <updated>2019-01-31T06:00:36Z</updated>
    <category term="Security"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=814</id>
    <link href="https://blogs.gentoo.org/mgorny/2019/01/29/identity-with-openpgp-trust-model/" rel="alternate" type="text/html"/>
    <title>Identity with OpenPGP trust model</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Let’s say you want to send a confidential message to me, and possibly receive a reply. Through employing asymmetric encryption, you can prevent a third party from reading its contents, even if it can intercept the ciphertext. Through signatures, you can verify the authenticity of the message, and therefore detect any possible tampering. But for … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2019/01/29/identity-with-openpgp-trust-model/">Continue reading<span class="screen-reader-text"> "Identity with OpenPGP trust model"</span></a></p></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Let’s say you want to send a confidential message to me, and possibly receive a reply.  Through employing asymmetric encryption, you can prevent a third party from reading its contents, even if it can intercept the ciphertext.  Through signatures, you can verify the authenticity of the message, and therefore detect any possible tampering.  But for all this to work, you need to be able to verify the authenticity of the public keys first.  In other words, we need to be able to prevent the aforementioned third party — possibly capable of intercepting your communications and publishing a forged key with my credentials on it — from tricking you into using the wrong key.</p>
<p>This renders key authenticity the fundamental problem of asymmetric cryptography.  But before we start discussing how key certification is implemented, we need to cover another fundamental issue — identity.  After all, who am I — who is the person you are writing to?  Are you writing to a person you’ve met?  Or to a specific Gentoo developer?  Author of some project?  Before you can distinguish my authentic key from a forged key, you need to be able to clearly distinguish <em>me</em> from an impostor.</p>
<p><span id="more-814"/></p>
<h2>Forms of identity</h2>
<h3>Identity via e-mail address</h3>
<p>If your primary goal is to communicate with the owner of the particular e-mail address, it seems obvious to associate the identity with the owner of the e-mail address.  However, how in reality would you distinguish a ‘rightful owner’ of the e-mail address from a cracker who managed to obtain access to it, or to intercept your network communications and inject forged mails?</p>
<p>The truth is, the best you can certify is that the owner of a particular key is able to read and/or send mails from a particular e-mail address, at a particular point in time.  Then, if you can certify the same for a long enough period of time, you may reasonably assume the address is continuously used by the same identity (which may qualify as a legitimate owner or a cracker with a lot of patience).</p>
<p>Of course, all this relies on your trust in mail infrastructure not being compromised.</p>
<h3>Identity via personal data</h3>
<p>A stronger protection against crackers may be provided by associating the identity with personal data, as confirmed by government-issued documents.  In case of OpenPGP, this is just the real name; X.509 certificates also provide fields for street address, phone number, etc.</p>
<p>The use of real names seems to be based on two assumptions: that your real name is reasonably well-known (e.g. it can be established with little risk of being replaced by a third party), and that the attacker does not wish to disclose his own name.  Besides that, using real names meets with some additional criticism.</p>
<p>Firstly, requiring one to use his real name may be considered an invasion of privacy.  Most notably, some people wish not to disclose or use their real names, and this effectively prevents them from ever being certified.</p>
<p>Secondly, real names are not unique.  After all, the naming systems developed from the necessity of distinguishing individuals in comparatively small groups, and they simply don’t scale to the size of the Internet.  Therefore, name collisions are entirely possible and we are relying on sheer luck that the attacker wouldn’t happen to have the same name as you do.</p>
<p>Thirdly and most importantly, verifying identity documents is non-trivial and untrained individuals are likely to fall victim to mediocre-quality fakes.  After all, we’re talking about people who hopefully read some article on verifying a particular kind of document but have no experience recognizing forgery, no specialized hardware (I suppose most of you don’t carry a magnifying glass and a UV light on yourself) and who may lack skills in comparing signatures or photographs (not to mention some people have <em>really old</em> photographs in documents).  Some countries don’t even issue any official documentation for document verification in English!</p>
<p>Finally, even besides the point of forged documents, this relies on trust in administration.</p>
<h3>Identity via photographs</h3>
<p>This one I’m mentioning merely for completeness.  OpenPGP keys allow adding a photo as one of your UIDs.  However, this is rather rarely used (out of the keys my GnuPG has fetched so far, less than 10% have photographs).  The concerns are similar to those for personal data: it assumes that others reliably know what you look like, and that they are capable of reliably comparing faces.</p>
<h3>Online identity</h3>
<p>An interesting concept is to use your public online activity to prove your identity — such as websites or social media.  This is generally based on cross-referencing multiple resources with cryptographically proven publishing access, and assuming that an attacker would not be able to compromise all of them simultaneously.</p>
<p>A form of this concept is utilized by <a href="https://keybase.io" rel="external">keybase.io</a>.  This service builds trust in user profiles via cryptographically cross-linking your profiles on some external sites and/or your websites.  Furthermore, it actively encourages other users to verify those external proofs as well.</p>
<p>This identity model entirely relies on trust in network infrastructure and external sites.  The likeliness of it being compromised is reduced by (potentially) relying on multiple independent sites.</p>
<h2>Web of Trust model</h2>
<p>Most of the time, you won’t be able to directly verify the identity of everyone you’d like to communicate with.  This creates the need for indirect proof of authenticity, and the model normally used for that purpose in OpenPGP is the Web of Trust.  I won’t be getting into the fine details — you can find them e.g. in the <a href="https://www.gnupg.org/gph/en/manual.html" rel="external">GNU Privacy Handbook</a>.  For our purposes, it suffices to say that in the WoT the authenticity of keys you haven’t verified may be assessed by people whose keys you trust already, or people they know, with a limited level of recursion.</p>
<p>The more key holders you can trust, the more keys you can have verified indirectly and the more likely it is that your future recipient will be among them.  Or that you will be able to get someone from across the world into your WoT by meeting someone residing much closer to you.  Therefore, you’d naturally want the WoT to grow fast and include more individuals.  You’d want to <em>preach</em> OpenPGP to non-crypto-aware people.  However, this comes with an inherent danger: can you really trust that they will properly verify the identity of the keys they sign?</p>
<p>I believe this is the most fundamental issue with the WoT model: for it to work outside of small specialized circles, it has to include more and more individuals across the world.  But this growth inevitably makes it easier for a malicious third party to find people who can be tricked into certifying keys with forged identities.</p>
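<p>The limited-recursion aspect can be sketched as a depth-limited walk over certifications.  This is a deliberately simplified model (the real GnuPG trust computation also distinguishes marginal from full introducer trust, and its chain depth is configurable via <code>--max-cert-depth</code>):</p>

```python
# Simplified sketch of depth-limited trust propagation in a WoT.

def reachable_keys(certifications, trusted_roots, max_depth=2):
    """Return keys whose authenticity can be assessed within max_depth
    certification hops from the keys we verified directly."""
    valid = set(trusted_roots)
    frontier = set(trusted_roots)
    for _ in range(max_depth):
        # Keys certified by anyone in the current frontier...
        frontier = {
            signee
            for signer in frontier
            for signee in certifications.get(signer, ())
        } - valid
        valid |= frontier
    return valid

# We verified bob's key in person; bob certified carol; carol, dan; dan, eve.
certs = {"bob": ["carol"], "carol": ["dan"], "dan": ["eve"]}
print(sorted(reachable_keys(certs, {"bob"}, max_depth=2)))
```

Raising <code>max_depth</code> grows coverage but also lengthens the chain of strangers whose diligence you implicitly rely on, which is exactly the trade-off described above.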
<h2>Conclusion</h2>
<p>The fundamental problem in OpenPGP usage is finding the correct key and verifying its authenticity.  This becomes especially complex given that there is no single clear way of determining one’s identity on the Internet.  Normally, OpenPGP uses a combination of real name and e-mail address, optionally combined with a photograph.  However, all of them have their weaknesses.</p>
<p>Direct identity verification for all recipients is impractical, and therefore indirect certification solutions are needed.  While the WoT model used by OpenPGP attempts to avoid the centralized trust inherent to PKI, it is not clear whether it’s practically manageable.  On one hand, it requires trusting more people in order to improve coverage; on the other, that makes it more vulnerable to fraud.</p>
<p>Given all the above, the trust-via-online-presence concept may be of some interest.  Most importantly, it establishes a closer relationship between the identity you actually need and the identity you verify — e.g. you want to mail the open source developer who authored some specific projects, rather than an arbitrary person with a common enough name.  However, this concept is not broadly established yet.</p></div>
    </content>
    <updated>2019-01-29T13:50:05Z</updated>
    <category term="Security"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:22Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=812</id>
    <link href="https://blogs.gentoo.org/mgorny/2019/01/26/attack-on-git-signature-verification-via-crafting-multiple-signatures/" rel="alternate" type="text/html"/>
    <title>Attack on git signature verification via crafting multiple signatures</title>
<summary>This article briefly explains the historical git weakness in handling commits with multiple OpenPGP signatures in git older than v2.20. The method of creating such commits is presented, and the results of using them are described and analyzed. Continue reading</summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>This article briefly explains the historical git weakness in handling commits with multiple OpenPGP signatures in git older than v2.20.  The method of creating such commits is presented, and the results of using them are described and analyzed.</p>
<p><a href="https://dev.gentoo.org/~mgorny/articles/attack-on-git-signature-verification.html" rel="external">Continue reading</a></p></div>
    </content>
    <updated>2019-01-26T10:24:11Z</updated>
    <category term="Security"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2156</id>
    <link href="http://www.ultrabug.fr/py3status-v3-16/" rel="alternate" type="text/html"/>
    <title>py3status v3.16</title>
    <summary>Two py3status versions in less than a month? That’s the holidays effect but not only! Our community has been busy discussing our way forward to 4.0 (see below) and organization so it was time I wrote a bit about that. Community A new collaborator First of all we have the great pleasure and honor to […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Two <strong>py3status</strong> versions in less than a month? That’s the holidays effect but not only!</p>



<p>Our community has been busy discussing our way forward to 4.0 (see below) and organization so it was time I wrote a bit about that.</p>



<h2>Community</h2>



<h3>A new collaborator</h3>



<p>First of all, we have the great pleasure and honor to welcome Maxim Baz @<strong>maximbaz</strong> as a new collaborator on the project!<br/><br/>His engagement, numerous contributions and insightful reviews to py3status have made him a well-known community member, not to mention his IRC support <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f642.png" style="height: 1em;"/><br/><br/>Once again, thank you for being there Maxim!</p>



<h3>Zen of py3status</h3>



<p>As a result of an interesting discussion, we worked on better defining <a href="https://py3status.readthedocs.io/en/latest/contributing.html#contributing" rel="noreferrer noopener" target="_blank">how to contribute to py3status</a>, as well as a set of guidelines we agree on to keep the project moving along smoothly.<br/><br/>Thus was born the <a href="https://py3status.readthedocs.io/en/latest/contributing.html#zen" rel="noreferrer noopener" target="_blank">zen of py3status</a>, which extends the <a href="https://py3status.readthedocs.io/en/latest/intro.html#philosophy" rel="noreferrer noopener" target="_blank">philosophy</a> from the user point of view to the contributor point of view!<br/><br/>This allowed us to handle the numerous open pull requests and get their number down to 5 at the time of writing this post!<br/><br/>Even our dear @lasers doesn’t have any open PR anymore <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f642.png" style="height: 1em;"/> </p>



<h2>3.15 + 3.16 versions</h2>



<p>Our magic @<strong>lasers</strong> has worked a lot on <a href="https://py3status.readthedocs.io/en/latest/configuration.html#py3status-configuration-section" rel="noreferrer noopener" target="_blank">general module options</a> as well as on support for features added by <strong>i3-gaps</strong>, such as border coloring and fine tuning.<br/><br/>Also interesting is the work of Thiago Kenji Okada @<strong>m45t3r</strong> on <a href="https://twitter.com/k0kada/status/1079206973671964673" rel="noreferrer noopener" target="_blank">NixOS packaging of py3status</a>. Thanks a lot for this work and for sharing, Thiago!</p>



<p>I also liked the question from Andreas Lundblad @<strong>aioobe</strong> <a href="https://github.com/ultrabug/py3status/issues/1610" rel="noreferrer noopener" target="_blank">asking if we could have a feature to display a custom graphical output</a>, such as a small PNG or anything else, upon clicking on the i3bar; you might be interested in following up on the <a href="https://github.com/i3/i3/issues/3578" rel="noreferrer noopener" target="_blank">i3 issue he opened</a>.<br/><br/>Make sure to read the amazing <a href="https://github.com/ultrabug/py3status/blob/master/CHANGELOG" rel="noreferrer noopener" target="_blank">changelog</a> for details: a lot of modules have been enhanced!</p>



<h3>Highlights</h3>



<ul><li>You can now set background and border colors (and their urgent counterparts) globally or per module</li><li>CI now checks modules for black formatting, so the whole code base now obeys the black style!</li><li>All HTTP request based modules now have a <a href="https://py3status.readthedocs.io/en/latest/configuration.html#request-timeout" rel="noreferrer noopener" target="_blank">standard way to define the HTTP timeout</a>, with the same 10 second default timeout</li><li>py3-cmd now allows sending <strong>click events with modifiers</strong></li><li>The py3status <strong>-n / --interval command line argument has been removed</strong> as it was obsolete. We will ignore it if you have it set, but better remove it to be clean</li><li>You can specify your own i3status binary path using the new <strong>-u, --i3status</strong> command line argument, thanks to @Dettorer and @lasers</li><li>Since Yahoo! decided to retire its public &amp; free weather API, the <strong>weather_yahoo module has been removed</strong></li></ul>



<h3>New modules</h3>



<ul><li> new <strong>conky</strong> module: display conky system monitoring (#1664), by lasers </li><li>new module <strong>emerge_status</strong>: display information about running gentoo emerge (#1275), by AnwariasEu</li><li>new module <strong>hueshift</strong>: change your screen color temperature (#1142), by lasers</li><li>new module <strong>mega_sync</strong>: to check for MEGA service synchronization (#1458), by Maxim Baz</li><li>new module <strong>speedtest</strong>: to check your internet bandwidth (#1435), by cyrinux</li><li>new module <strong>usbguard</strong>: control usbguard from your bar (#1376), by cyrinux</li><li>new module <strong>velib_metropole</strong>: display velib metropole stations and (e)bikes (#1515), by cyrinux</li></ul>



<h2>A word on 4.0</h2>



<p>Do you wonder what’s gonna be in the 4.0 release?<br/>Do you have ideas that you’d like to share?<br/>Do you have dreams that you’d love to become true?<br/><br/>Then make sure to <a href="https://github.com/ultrabug/py3status/issues/1584" rel="noreferrer noopener" target="_blank">read and participate in the open RFC on 4.0 version</a>!<br/><br/>Development has not started yet; we really want to hear from you.</p>



<h2>Thank you contributors!</h2>



<p>There would be no py3status release without our amazing contributors, so thank you guys!</p>



<ul><li>AnwariasEu</li><li>cyrinux</li><li>Dettorer</li><li>ecks</li><li>flyingapfopenguin</li><li>girst</li><li>Jack Doan</li><li>justin j lin</li><li>Keith Hughitt</li><li>L0ric0</li><li>lasers</li><li>Maxim Baz</li><li>oceyral</li><li>Simon Legner</li><li>sridhars</li><li>Thiago Kenji Okada</li><li>Thomas F. Duellmann</li><li>Till Backhaus</li></ul></div>
    </content>
    <updated>2019-01-20T21:10:04Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="portage"/>
    <category term="py3status"/>
    <category term="release"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:33Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2019/01/09/gentoo-fosdem-2019.html</id>
    <link href="https://www.gentoo.org/news/2019/01/09/gentoo-fosdem-2019.html" rel="alternate" type="text/html"/>
    <title>FOSDEM 2019</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://fosdem.org/2019/">
  <img alt="FOSDEM logo" src="https://www.gentoo.org/assets/img/news/2019/logo-fosdem-wide.png"/>
</a></p>

<p>It’s FOSDEM time again! Join us at <a href="https://fosdem.org/2019/practical/transportation/">Université libre de Bruxelles</a>,
Campus du Solbosch, in Brussels, Belgium. This year’s <a href="https://fosdem.org/2019/">FOSDEM 2019</a> will 
be held on February 2nd and 3rd.</p>

<p>Our developers will be happy to greet all open source enthusiasts at 
our <a href="https://fosdem.org/2019/stands/">Gentoo stand</a> in building K. 
Visit <a href="https://wiki.gentoo.org/wiki/FOSDEM_2019">this year’s wiki page</a> to see
who’s coming. So far eight developers have specified their
attendance, with most likely many more on the way!</p></div>
    </summary>
    <updated>2019-01-09T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2131</id>
    <link href="http://www.ultrabug.fr/scylla-summit-2018-write-up/" rel="alternate" type="text/html"/>
    <title>Scylla Summit 2018 write-up</title>
    <summary>It’s been almost one month since I had the chance to attend and speak at Scylla Summit 2018 so I’m relieved to finally publish a short write-up on the key things I wanted to share about this wonderful event! Make Scylla boring This statement of Glauber Costa sums up what looked to me to be […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>It’s been almost one month since I had the chance to attend and speak at <strong><a href="https://www.scylladb.com/scylla-summit-2018/" rel="noreferrer noopener" target="_blank">Scylla Summit 2018</a></strong> so I’m relieved to finally publish a short write-up on the key things I wanted to share about this wonderful event!</p>



<figure class="wp-block-image"><img alt="" class="wp-image-2141" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2018/12/DrQOptgXgAAcWIX.jpg"/></figure>



<h2>Make Scylla boring<br/></h2>



<p>This statement of Glauber Costa sums up what looked to me to be the main driver of the engineering efforts put into Scylla lately: making it work so consistently well on any kind of workload that it’s boring to operate <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f642.png" style="height: 1em;"/><br/><br/>I will follow up on this statement to highlight the things I heard and (hopefully) understood during the summit. I hope you’ll find it insightful.<br/></p>



<h3>Reduced  operational efforts<br/></h3>



<p>The thread-per-core and queues design still has a lot of potential left to leverage.<br/><br/>The recent addition of <strong>RPC streaming</strong> capabilities to seastar allows a <a href="https://www.scylladb.com/2018/08/14/upcoming-improvements-scylla-streaming/" rel="noreferrer noopener" target="_blank">drastic reduction</a> in the time it takes the cluster to grow or shrink (data rebalancing / resynchronization).<br/><br/><strong>Incremental compaction</strong> is also very promising, as this background process is one of the most expensive in the database’s design.<br/><br/>I was happy to hear that <strong>scylla-manager</strong> will soon be made available and free to use with basic features, while more advanced ones (like backup/restore) are retained for the enterprise version.<br/>I also noticed that the current version does not support SSL-enabled clusters for storing its configuration, so I directly asked Michał about it and I’m glad that it will be released in version 1.3.1.<br/></p>



<h3>Performant multi-tenancy<br/></h3>



<p>Why choose between <a href="https://sched.co/GcEr" rel="noreferrer noopener" target="_blank">real-time OLTP &amp; analytics OLAP</a> workloads?<br/><br/>The goal here is to be able to run both on the same cluster by giving users the ability to assign “SLA” <strong>shares</strong> to ROLES. That’s basically like pools on Hadoop, at a much finer grain, since it will create dedicated queues weighted by their share.<br/><br/>Having <strong>one queue per usage</strong> and full accounting will make it possible to limit resources efficiently and to let users have their say on their latency SLAs.<br/><br/>But Scylla also has a lot to do in the background to run smoothly. So while this design pattern was already applied to temper compactions, a lot of work has also been done on automatic flow control and back pressure.<br/><br/>For instance, Materialized Views are updated asynchronously, which means that while we can interact with and put a lot of pressure on the table they are based on (called the Main Table), we could overwhelm the background work needed to keep the MVs’ View Tables in sync. To mitigate this, a smart <strong><a href="https://www.scylladb.com/2018/12/04/worry-free-ingestion-flow-control/" rel="noreferrer noopener" target="_blank">back pressure</a></strong> approach was developed that will throttle the clients to make sure that Scylla can manage to do everything at the best performance the hardware allows!<br/><br/>I was happy to hear that work on <strong>tiered storage</strong> is also planned to better optimize disk space costs for certain workloads.<br/><br/>Last but not least, <strong>columnar storage</strong> optimized for time series and analytics workloads is also something the developers are looking at.<br/></p>
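<p>To give a feel for the share mechanism (a toy model only, not Scylla’s actual scheduler), splitting capacity proportionally to each role’s shares looks like this:</p>

```python
# Toy illustration of share-based scheduling: each role gets a queue
# whose capacity is weighted by its assigned shares.  The role names
# and numbers below are made up for the example.

def allocate(total_capacity, shares):
    """Split capacity proportionally to per-role shares."""
    total_shares = sum(shares.values())
    return {
        role: total_capacity * s / total_shares
        for role, s in shares.items()
    }

print(allocate(1000, {"oltp": 600, "olap": 200, "compaction": 200}))
```

With full per-queue accounting on top of this, a latency-sensitive OLTP role cannot be starved by an analytics scan running on the same cluster.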



<h3>Latency is expensive<br/></h3>



<p>If you care about latency, you might be happy to hear that a new polling API (named <a href="https://old.lwn.net/Articles/742978/" rel="noreferrer noopener" target="_blank">IOCB_CMD_POLL</a>) has been contributed by Christoph Hellwig and Avi Kivity to the 4.19 Linux kernel; it avoids context switches on I/O by using a ring shared between kernel and userspace. Scylla will use it by default if the kernel supports it.<br/><br/>The iotune utility has been upgraded since 2.3 to generate an <a href="https://www.scylladb.com/2018/04/19/scylla-i-o-scheduler-3/" rel="noreferrer noopener" target="_blank">enhanced I/O</a> configuration.<br/><br/>Also, persistent (disk-backed) <strong><a href="https://sched.co/GcdR" rel="noreferrer noopener" target="_blank">in-memory tables</a></strong> are getting ready and are very promising for latency-sensitive workloads!<br/></p>



<h3>A word on drivers<br/></h3>



<p>ScyllaDB has been relying on the Datastax drivers since the start. While that’s a good thing for the whole community, it’s important to note that the shard-per-CPU approach to data that Scylla uses is neither known to nor leveraged by the current drivers.<br/><br/><a href="https://lists.apache.org/thread.html/7539f11b6e2e4c7841f0409f15a05b6d6e32cdf7ce6f92024d62f965@%3Cdev.cassandra.apache.org%3E" rel="noreferrer noopener" target="_blank">Discussions</a> took place and it seems that Datastax will not allow the protocol to evolve so that drivers could discover whether the connected cluster is shard aware and use this information to be more clever about which write/read path to use.<br/><br/>So for now ScyllaDB has been forking and developing its own <strong>shard aware</strong> drivers for Java and Go (no Python yet… I was disappointed).<br/></p>



<h3>Kubernetes &amp; containers<br/></h3>



<p>The ScyllaDB guys of course couldn’t avoid the Kubernetes frenzy, so Moreno Garcia gave a lot of <a href="https://www.scylladb.com/2018/08/09/cost-containerization-scylla/" rel="noreferrer noopener" target="_blank">feedback and tips</a> on how to operate Scylla on Docker with minimal performance degradation.<br/><br/>Kubernetes has been designed for stateless applications, not stateful ones, and Docker does some automatic magic that has rather big performance hits on Scylla. You will basically have to play with affinities to dedicate one Scylla instance to one server, with a “retain” reclaim policy.<br/><br/>Remember that the official Scylla docker image runs with dev-mode enabled by default, which turns off all performance checks on start. So start by disabling that and look at all the tips and literature that Moreno has put online!<br/></p>



<figure class="wp-block-image"><img alt="" class="wp-image-2143" src="https://www.ultrabug.fr/wordpress/wp-content/uploads/2018/12/DrVXLqWXQAAJnmS-1024x329.jpg"/></figure>



<h2>Scylla 3.0<br/></h2>



<p>A lot has been <a href="https://www.scylladb.com/2018/11/08/overheard-at-scylla-summit-2018/" rel="noreferrer noopener" target="_blank">written on it already</a>, so I will just be brief about the things that are important to understand from my point of view.</p>



<ul><li>Materialized Views do backfill the whole data set<br/><ul><li>this job is done by the view building process</li><li>you can watch its progress in the <strong>system_distributed.view_build_status</strong> table<br/></li></ul></li><li>Secondary Indexes are Materialized Views under the hood<br/><ul><li>it’s like a reverse pointer to the primary key of the Main Table<br/></li><li>so if you read the whole row by selecting on the indexed column, two reads will be issued under the hood: one on the index’s MV view table to get the primary key and one on the main table to get the rest of the columns<br/></li><li><strong>so if your workload is mostly interested in the whole row, you’re better off creating a complete MV to read from than using an SI</strong></li><li>this is even more true if you plan to do range scans, as this double query could lead you to read from multiple nodes instead of one<br/></li></ul></li><li><a href="https://www.scylladb.com/2018/11/01/more-efficient-range-scan-paging-with-scylla-3-0/" rel="noreferrer noopener" target="_blank">Range scan</a> is way more performant<ul><li>ALLOW FILTERING finally allows great flexibility by providing <strong>server-side filtering</strong>!</li></ul></li></ul>
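<p>The double read behind a Secondary Index can be illustrated with a toy lookup, with plain Python dicts standing in for the two internal queries (table contents below are made up for the example):</p>

```python
# Toy illustration of why a Secondary Index read is two reads: the
# index "view table" maps the indexed value to a primary key, and the
# full row still has to be fetched from the main table afterwards.

main_table = {
    "pk1": {"user": "ultrabug", "city": "Paris"},
    "pk2": {"user": "mgorny", "city": "Bialystok"},
}
# Index on "user", maintained like a Materialized View under the hood.
index_view = {row["user"]: pk for pk, row in main_table.items()}

def select_by_user(user):
    reads = []
    pk = index_view[user]   # read 1: index view table -> primary key
    reads.append("index")
    row = main_table[pk]    # read 2: main table -> the rest of the row
    reads.append("main")
    return row, reads

row, reads = select_by_user("mgorny")
print(row["city"], reads)  # Bialystok ['index', 'main']
```

A complete MV keyed on <code>user</code> would store the whole row and answer the same query in a single read, which is the trade-off described above.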



<figure class="wp-block-image"><img alt="" class="wp-image-2145" src="https://www.ultrabug.fr/wordpress/wp-content/uploads/2018/12/DrcOcTSUUAAJSVi-1024x248.jpg"/></figure>



<h2>Random notes</h2>



<p>Support for <strong>LWT</strong> (lightweight transactions) will rely on a future implementation of the Raft consensus algorithm inside Scylla. This work will also benefit Materialized View consistency. Duarte Nunes will be the one working on this and I envy him very much!<br/></p>



<p>Support for <strong>search</strong> workloads is high on the ScyllaDB devs’ priority list, so we should definitely hear about it in the coming months.</p>



<p>Support for “<strong>mc</strong>” sstables (new generation format) is done and will reduce storage requirements thanks to metadata / data compression. Migration will be transparent because Scylla can read previous formats as well so it will upgrade your sstables as it compacts them.<br/></p>



<p>ScyllaDB developers have not settled on how to best implement <strong>CDC</strong>. I hope they do so rather soon, because it is crucial to their ability to integrate well with Kafka!<br/></p>



<p>Materialized Views, Secondary Indexes and filtering will benefit from the work on <strong>partition key and indexes intersections</strong> to avoid server side filtering on the coordinator. That’s an important optimization to come!</p>



<p>Last but not least, I’ve had the pleasure to discuss with Takuya Asada who is the packager of Scylla for RedHat/CentOS &amp; Debian/Ubuntu. We discussed <strong>Gentoo Linux packaging</strong> requirements as well as the recent and promising work on a relocatable package. We will collaborate more closely in the future!<br/></p></div>
    </content>
    <updated>2018-12-06T22:53:04Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="scylla"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:32Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=807</id>
    <link href="https://blogs.gentoo.org/mgorny/2018/11/25/portability-of-tar-features/" rel="alternate" type="text/html"/>
    <title>Portability of tar features</title>
<summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">The tar format is one of the oldest archive formats in use. It comes as no surprise that it is ugly — built as layers of hacks on the older format versions to overcome their limitations. However, given the POSIX standardization in the late 80s and the popularity of GNU tar, you would expect the interoperability … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2018/11/25/portability-of-tar-features/">Continue reading<span class="screen-reader-text"> "Portability of tar features"</span></a></p></div>
    </summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>The tar format is one of the oldest archive formats in use.  It comes as no surprise that it is ugly — built as layers of hacks on the older format versions to overcome their limitations.  However, given the POSIX standardization in the late 80s and the popularity of GNU tar, you would expect the interoperability problems to be mostly resolved by now.</p>
<p>This article is directly inspired by my proof-of-concept work on a new binary package format for Gentoo.  My original proposal used the volume label to provide a user- and file(1)-friendly way of distinguishing our binary packages.  While it is a GNU tar extension, it falls within the POSIX ustar implementation-defined file format, and you would expect non-compliant implementations to extract it as a regular file.  What I did not anticipate is that some implementations reject the whole archive instead.</p>
<p>This naturally raised more questions about how portable various tar formats actually are.  To verify that, I have decided to analyze the standards for possible incompatibility dangers and build a suite of test inputs that could be used to check how various implementations cope with them.  This article describes those points and provides test results for a number of implementations.</p>
<p>Please note that this article focuses merely on read-wise format compatibility.  In other words, it establishes how tar files should be written in order to achieve the best probability that they will be read correctly afterwards.  It does not investigate what formats the listed tools can write and whether they can correctly create archives using specific features.</p>
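<p>As a side note, Python’s <code>tarfile</code> module makes it easy to pin a writer to the plain POSIX ustar format, the kind of conservative baseline this analysis argues for when the consumers’ tar implementations are unknown.  This is an illustrative sketch, not the Gentoo tooling itself:</p>

```python
import io
import tarfile

data = b"hello"
buf = io.BytesIO()
# Restrict the writer to plain POSIX ustar; features that do not fit
# the ustar fields (e.g. overly long member names) raise an error
# instead of silently producing a GNU- or pax-specific archive that
# some implementations may reject.
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tar:
    info = tarfile.TarInfo(name="pkg/metadata.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Read the archive back to confirm it round-trips.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    names = tar.getnames()
print(names)  # ['pkg/metadata.txt']
```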
<p><a href="https://dev.gentoo.org/~mgorny/articles/portability-of-tar-features.html" rel="external">Continue reading</a></p></div>
    </content>
    <updated>2018-11-25T14:26:20Z</updated>
    <category term="Gentoo"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:22Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2127</id>
    <link href="http://www.ultrabug.fr/py3status-v3-14/" rel="alternate" type="text/html"/>
    <title>py3status v3.14</title>
    <summary>I’m happy to announce this release as it contains some very interesting developments in the project. This release was focused on core changes. IMPORTANT notice There are now two optional dependencies to py3status: gevent will monkey patch the code to make it concurrent the main benefit is to use an asynchronous loop instead of threads […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>I’m happy to announce this release as it contains some very interesting developments in the project. This <a href="https://github.com/ultrabug/py3status/issues/1526" rel="noreferrer noopener" target="_blank">release was focused</a> on core changes.<br/></p>



<h2>IMPORTANT notice</h2>



<p>There are now two <a href="https://py3status.readthedocs.io/en/latest/intro.html#installation" rel="noreferrer noopener" target="_blank">optional dependencies</a> to py3status:</p>



<ul><li><strong>gevent</strong><ul><li>will monkey patch the code to make it concurrent</li><li>the main benefit is to use an asynchronous loop instead of threads<br/></li></ul></li><li><strong>pyudev</strong><ul><li>will enable a udev monitor if a module asks for it (only xrandr so far)</li><li>the benefit is described below<br/></li></ul></li></ul>



<p>To install them all using pip, simply do:<br/></p>



<pre class="wp-block-preformatted">pip install py3status[all]</pre>






<h2>Modules can now react/refresh on udev events<br/></h2>



<p>When pyudev is available, py3status will <a href="https://py3status.readthedocs.io/en/latest/configuration.html#refreshing-modules-on-udev-events-with-on-udev-dynamic-options" rel="noreferrer noopener" target="_blank">allow modules to subscribe and react to udev events</a>!</p>



<p>The xrandr module uses this feature by default, which allows the module to refresh instantly when you plug in or unplug a secondary monitor. This also allows py3status to stop running the xrandr command in the background, saving a lot of CPU!</p>
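<p>The mechanism can be sketched as a simple subscription dispatcher; this is a hypothetical stand-in for the actual pyudev-based implementation (which runs a udev monitor on a background thread), with made-up class and method names:</p>

```python
# Simplified stand-in for the udev trigger: modules register interest
# in a udev subsystem and get refreshed when an event arrives there,
# instead of polling a command in the background.

class UdevTrigger:
    def __init__(self):
        self.subscribers = {}  # subsystem -> list of refresh callbacks

    def subscribe(self, subsystem, refresh_callback):
        self.subscribers.setdefault(subsystem, []).append(refresh_callback)

    def dispatch(self, subsystem):
        # Called when the monitor reports an event, e.g. a display
        # being plugged in on the "drm" subsystem.
        for refresh in self.subscribers.get(subsystem, []):
            refresh()

refreshed = []
trigger = UdevTrigger()
trigger.subscribe("drm", lambda: refreshed.append("xrandr"))
trigger.dispatch("drm")   # the xrandr module refreshes instantly
trigger.dispatch("usb")   # no subscriber for this subsystem, no work
print(refreshed)  # ['xrandr']
```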



<h2>Highlights</h2>



<ul><li>py3status core now uses the black formatter</li><li>fix default i3status.conf detection<ul><li>add ~/.config/i3 as a default config directory, closes #1548</li><li>add .config/i3/py3status in default user modules include directories</li></ul></li><li>add markup (pango) support for modules (#1408), by @MikaYuoadas</li><li>py3: notify_user module name in the title (#1556), by @lasers</li><li>print module information to stdout instead of stderr (#1565), by @robertnf</li><li>battery_level module: default to using sys instead of acpi (#1562), by @eddie-dunn</li><li>imap module: fix output formatting issue (#1559), by @girst</li></ul>



<h2>Thank you contributors!</h2>



<ul><li>eddie-dunn</li><li>girst</li><li>MikaYuoadas</li><li>robertnf</li><li>lasers</li><li>maximbaz</li><li>tobes</li></ul></div>
    </content>
    <updated>2018-11-10T21:08:47Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="portage"/>
    <category term="py3status"/>
    <category term="release"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:32Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2018/10/04/clip-os.html</id>
    <link href="https://www.gentoo.org/news/2018/10/04/clip-os.html" rel="alternate" type="text/html"/>
    <title>CLIP OS - a hardened, multi-level OS based on Gentoo Hardened</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://clip-os.org/en/">
  <img alt="CLIP OS logo" src="https://www.gentoo.org/assets/img/news/2018/logo-clipos.png"/>
</a>
<a href="https://ssi.gouv.fr/en">ANSSI, the National Cybersecurity Agency of France</a>, 
has <a href="https://clip-os.org/en/">released the sources of CLIP OS</a>, that aims to 
build a hardened, multi-level operating system, based on the Linux kernel and a 
lot of free and open source software. We are happy to hear that it is based on 
<a href="https://wiki.gentoo.org/wiki/Hardened_Gentoo">Gentoo Hardened</a>!</p></div>
    </summary>
    <updated>2018-10-04T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2101</id>
    <link href="http://www.ultrabug.fr/py3status-v3-13/" rel="alternate" type="text/html"/>
    <title>py3status v3.13</title>
    <summary>I am once again lagging behind the release blog posts but this one is an important one. I’m proud to announce that our long time contributor @lasers has become an official collaborator of the py3status project! Dear @lasers, your amazing energy and overwhelming ideas have served our little community for a while. I’m sure we’ll […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>I am once again lagging behind the release blog posts but this one is an important one.</p>



<p>I’m proud to announce that our long time contributor <strong>@lasers</strong> has become an official collaborator of the py3status project!</p>



<p>Dear <strong>@<a href="https://github.com/lasers" target="_blank">lasers</a></strong>, your amazing energy and overwhelming ideas have served our little community for a while. I’m sure we’ll have a great way forward as we learn to work together with @tobes <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f642.png" style="height: 1em;"/> Thank you again very much for everything you do!</p>



<p>This release is as much dedicated to you as it is yours <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f642.png" style="height: 1em;"/></p>



<h2>IMPORTANT notice</h2>



<p>After this release, py3status coding style CI will enforce the ‘<a href="https://pypi.org/project/black/" target="_blank">black</a>’ formatter style.</p>



<h2>Highlights</h2>



<p>Needless to say, the <a href="https://github.com/ultrabug/py3status/blob/master/CHANGELOG" target="_blank">changelog</a> is huge as usual; here is a very condensed view:</p>



<ul><li>documentation updates, especially on the formatter (thanks @L0ric0)</li><li>py3 storage: use $XDG_CACHE_HOME or ~/.cache</li><li>formatter: multiple variable and feature fixes and enhancements</li><li>better config parser</li><li>new modules: lm_sensors, loadavg, mail, nvidia_smi, sql, timewarrior, wanda_the_fish</li></ul>



<h2>Thank you contributors!</h2>



<ul><li>lasers</li><li>tobes</li><li>maximbaz</li><li>cyrinux</li><li>Lorenz Steinert @L0ric0</li><li>wojtex</li><li>horgix</li><li>su8</li><li>Maikel Punie</li></ul></div>
    </content>
    <updated>2018-09-28T11:56:52Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="portage"/>
    <category term="py3status"/>
    <category term="release"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:33Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=804</id>
    <link href="https://blogs.gentoo.org/mgorny/2018/09/27/new-copyright-policy-explained/" rel="alternate" type="text/html"/>
    <title>New copyright policy explained</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">On 2018-09-15 meeting, the Trustees have given the final stamp of approval to the new Gentoo copyright policy outlined in GLEP 76. This policy is the result of work that has been slowly progressing since 2005, and that has taken considerable speed by the end of 2017. It is a major step forward from the status quo that has been used since the forming of Gentoo Foundation, and that mostly has been inherited … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2018/09/27/new-copyright-policy-explained/">Continue reading<span class="screen-reader-text"> "New copyright policy explained"</span></a></p></div>
    </summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>At its 2018-09-15 meeting, the Trustees gave the final stamp of approval to the new Gentoo copyright policy outlined in <a href="https://www.gentoo.org/glep/glep-0076.html" rel="external">GLEP 76</a>.  This policy is the result of work that has been slowly progressing since 2005, and that gathered considerable speed by the end of 2017.  It is a major step forward from the status quo that has been in use since the forming of the Gentoo Foundation, and that was mostly inherited from the earlier Gentoo Technologies.</p>
<p>The policy aims to cover all copyright-related aspects, bringing Gentoo in line with the practices used in many other large open source projects.  Most notably, it introduces a concept of Gentoo Certificate of Origin that requires all contributors to confirm that they are entitled to submit their contributions to Gentoo, and corrects the copyright attribution policy to be viable under more jurisdictions.</p>
<p>This article aims to briefly reiterate the most important points in the new copyright policy, and provide a detailed guide on following it in Q&amp;A form.</p>
<p><a href="https://dev.gentoo.org/~mgorny/articles/new-gentoo-copyright-policy-explained.html" rel="external">Continue reading</a></p></div>
    </content>
    <updated>2018-09-27T06:47:24Z</updated>
    <category term="Legalese &amp; politics"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=789</id>
    <link href="https://blogs.gentoo.org/mgorny/2018/09/15/overriding-misreported-screen-dimensions-with-kms-backed-drivers/" rel="alternate" type="text/html"/>
    <title>Overriding misreported screen dimensions with KMS-backed drivers</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">With Qt5 gaining support for high-DPI displays, and applications starting to exercise that support, it’s easy for applications to suddenly become unusable with some screens. For example, my old Samsung TV reported itself as 7″ screen. While this used not to really matter with websites forcing you to force the resolution of 96 DPI, the high-DPI applications started scaling themselves to occupy most of my screen, with elements … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2018/09/15/overriding-misreported-screen-dimensions-with-kms-backed-drivers/">Continue reading<span class="screen-reader-text"> "Overriding misreported screen dimensions with KMS-backed drivers"</span></a></p></div>
    </summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>With Qt5 gaining support for high-DPI displays, and applications starting to exercise that support, it’s easy for applications to suddenly become unusable with some screens.  For example, my old Samsung TV reported itself as a 7″ screen.  While this used to not really matter when websites forced the resolution of 96 DPI on you, the high-DPI applications started scaling themselves to occupy most of my screen, with elements becoming really huge (and ugly, apparently due to some poor scaling).</p>
<p>It turns out that it is really hard to find a solution for this.  Most of the guides and tips are focused either on proprietary drivers or on getting custom resolutions.  The <kbd>DisplaySize</kbd> specification in <kbd>xorg.conf</kbd> apparently did not change anything either.  Finally, I was able to resolve the issue by overriding the EDID data for my screen.  This guide explains how I did it.</p>
<p><span id="more-789"/></p>
<h2>Step 1: dump EDID data</h2>
<p>Firstly, you need to get the EDID data from your monitor.  Supposedly the <a href="http://www.polypux.org/projects/read-edid/" rel="external">read-edid</a> tool could be used for this purpose, but it did not work for me.  With only a little more effort, you can get it e.g. from xrandr:</p>
<pre>$ <kbd>xrandr --verbose</kbd>
[...]
HDMI-0 connected primary 1920x1080+0+0 (0x57) normal (normal left inverted right x axis y axis) 708mm x 398mm
[...]
  EDID:
    00ffffffffffff004c2dfb0400000000
    2f120103804728780aee91a3544c9926
    0f5054bdef80714f8100814081809500
    950fb300a940023a801871382d40582c
    4500c48e2100001e662150b051001b30
    40703600c48e2100001e000000fd0018
    4b1a5117000a2020202020200000000a
    0053414d53554e470a20202020200143
    020323f14b901f041305140312202122
    2309070783010000e2000f67030c0010
    00b82d011d007251d01e206e285500c4
    8e2100001e011d00bc52d01e20b82855
    40c48e2100001e011d8018711c162058
    2c2500c48e2100009e011d80d0721c16
    20102c2580c48e2100009e0000000000
    00000000000000000000000000000029
[...]</pre>
<p>If you have multiple displays connected, make sure to use the EDID for the one you’re overriding.  Copy the hexdump and convert it to a binary blob.  You can do this by passing it through <kbd>xxd -p -r</kbd> (shipped with vim).</p>
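If <kbd>xxd</kbd> is not handy, the same conversion can be done with a few lines of Python (a sketch; the file name and the truncated two-line dump are just examples):

```python
# Convert the EDID hexdump copied from `xrandr --verbose` into a binary blob.
# bytes.fromhex() ignores ASCII whitespace (Python 3.7+), so the pasted
# lines can be used as-is; "edid.bin" is just an example output name.
hexdump = """
00ffffffffffff004c2dfb0400000000
2f120103804728780aee91a3544c9926
"""

blob = bytes.fromhex(hexdump)
with open("edid.bin", "wb") as f:
    f.write(blob)

print(len(blob))  # 16 bytes per line; a full EDID base block is 128 bytes
```

Each 32-character line of the dump becomes 16 bytes, so a complete base block is 8 lines.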
<h2>Step 2: fix screen dimensions</h2>
<p>Once you have the EDID blob ready, you need to update the screen dimensions inside it.  Initially, I did it using hex editor which involved finding all the occurrences, updating them (and manually encoding into the weird split-integers) and correcting the checksums.  Then, I’ve written <a href="https://github.com/mgorny/edid-fixdim" rel="external">edid-fixdim</a> so you wouldn’t have to repeat that experience.</p>
<p>First, use <kbd>--get</kbd> option to verify that your EDID is supported correctly:</p>
<pre>$ <kbd>edid-fixdim -g edid.bin</kbd>
EDID structure: 71 cm x 40 cm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
CEA EDID found
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm
Detailed timing desc: 708 mm x 398 mm</pre>
<p>So your EDID consists of the basic EDID structure, followed by one extension block.  The screen dimensions are stored in 7 different blocks you’d have to update, and are covered by two checksums.  The tool will take care of updating it all for you, so just pass the correct dimensions to <kbd>--set</kbd>:</p>
<pre>$ <kbd>edid-fixdim -s 1600x900 edid.bin</kbd>
EDID structure updated to 160 cm x 90 cm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
CEA EDID found
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm
Detailed timing desc updated to 1600 mm x 900 mm</pre>
<p>Afterwards, you can use <kbd>--get</kbd> again to verify that the changes were made correctly.</p>
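For the curious, the two checksums mentioned above follow a simple rule: every 128-byte EDID block (the base structure and each extension) must sum to 0 modulo 256, with the last byte acting as the checksum. A minimal sketch of the check and the fix-up (the function names are mine, not part of edid-fixdim):

```python
def edid_checksum_ok(block: bytes) -> bool:
    # All 128 bytes, including the final checksum byte, must sum to 0 mod 256.
    return len(block) == 128 and sum(block) % 256 == 0

def edid_fix_checksum(block: bytearray) -> None:
    # Recompute the last byte after editing any field in the block.
    block[127] = (-sum(block[:127])) % 256

# Dummy example: a zero-filled block with one "edited" byte.
dummy = bytearray(128)
dummy[0] = 0x42
edid_fix_checksum(dummy)
assert edid_checksum_ok(bytes(dummy))
```

This is why editing the dimensions by hand in a hex editor means recomputing a checksum for the base block and one for the CEA extension.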
<h2>Step 3: overriding EDID data</h2>
<p>Now it’s just the matter of putting the override in motion.  First, make sure to enable <kbd>CONFIG_DRM_LOAD_EDID_FIRMWARE</kbd> in your kernel:</p>
<pre>Device Drivers  ---&gt;
  Graphics support  ---&gt;
    Direct Rendering Manager (XFree86 4.1.0 and higher DRI support)  ---&gt;
      [*] Allow to specify an EDID data set instead of probing for it</pre>
<p>Then, determine the correct connector name.  You can find it in <kbd>dmesg</kbd> output:</p>
<pre>$ <kbd>dmesg | grep -C 1 Connector</kbd>
[   15.192088] [drm] ib test on ring 5 succeeded
[   15.193461] [drm] Radeon Display Connectors
[   15.193524] [drm] Connector 0:
[   15.193580] [drm]   <strong>HDMI-A-1</strong>
--
[   15.193800] [drm]     DFP1: INTERNAL_UNIPHY1
[   15.193857] [drm] Connector 1:
[   15.193911] [drm]   <strong>DVI-I-1</strong>
--
[   15.194210] [drm]     CRT1: INTERNAL_KLDSCP_DAC1
[   15.194267] [drm] Connector 2:
[   15.194322] [drm]   <strong>VGA-1</strong></pre>
<p>Copy the new EDID blob into location of your choice inside <kbd>/lib/firmware</kbd>:</p>
<pre>$ <kbd>mkdir /lib/firmware/edid</kbd>
$ <kbd>cp edid.bin /lib/firmware/edid/samsung.bin</kbd></pre>
<p>Finally, add the override to your kernel command-line:</p>
<pre><kbd>drm.edid_firmware=HDMI-A-1:edid/samsung.bin</kbd></pre>
<p>If everything went fine, <kbd>xrandr</kbd> should report correct screen dimensions after next reboot, and <kbd>dmesg</kbd> should report that EDID override has been loaded:</p>
<pre>$ <kbd>dmesg | grep EDID</kbd>
[   15.549063] [drm] Got external EDID base block and 1 extension from "edid/samsung.bin" for connector "HDMI-A-1"</pre>
<p>If it didn’t, check <kbd>dmesg</kbd> for error messages.</p></div>
    </content>
    <updated>2018-09-15T09:00:38Z</updated>
    <category term="Gentoo"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:22Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2018/09/07/gsoc.html</id>
    <link href="https://www.gentoo.org/news/2018/09/07/gsoc.html" rel="alternate" type="text/html"/>
    <title>Gentoo congratulates our GSoC participants</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://summerofcode.withgoogle.com/">
  <img alt="GSOC logo" src="https://www.gentoo.org/assets/img/news/2018/logo-gsoc.png"/>
</a>
Gentoo would like to congratulate Gibix and JSteward for finishing and passing Google Summer of Code for the 2018 calendar year. 
Gibix contributed by enhancing support for the Rust programming language within Gentoo.
JSteward contributed by making a full Gentoo GNU/Linux distribution, managed by Portage, run on devices which use the original Android-customized kernel.</p>

<p>The final reports of their projects can be reviewed on their personal blogs:</p>
<ul>
  <li>Gibix: <a href="https://gibix.github.io/gsoc/2018/08/11/journey-into-gentoo-eclass.html">Journey into Gentoo eclass</a>, <a href="https://gibix.github.io/gsoc/2018/08/12/gsoc-timeline.html">GSoC timeline</a></li>
  <li>JSteward: <a href="https://jsteward.moe/gsoc-2018-final-report.html">Final report</a></li>
</ul></div>
    </summary>
    <updated>2018-09-07T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blogs.gentoo.org/mgorny/?p=778</id>
    <link href="https://blogs.gentoo.org/mgorny/2018/08/24/securing-google-authenticator-libpam-against-reading-secrets/" rel="alternate" type="text/html"/>
    <title>Securing google-authenticator-libpam against reading secrets</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">I have recently worked on enabling 2-step authentication via SSH on the Gentoo developer machine. I have selected google-authenticator-libpam amongst different available implementations as it seemed the best maintained and having all the necessary features, including a friendly tool for users to configure it. However, its design has a weakness: it stores the secret unprotected in user’s home directory. This means that if an attacker manages to gain at least temporary … <p class="link-more"><a class="more-link" href="https://blogs.gentoo.org/mgorny/2018/08/24/securing-google-authenticator-libpam-against-reading-secrets/">Continue reading<span class="screen-reader-text"> "Securing google-authenticator-libpam against reading secrets"</span></a></p></div>
    </summary>
<content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>I have recently worked on enabling 2-step authentication via SSH on the Gentoo developer machine.  I selected <a href="https://github.com/google/google-authenticator-libpam" rel="external">google-authenticator-libpam</a> amongst the different available implementations, as it seemed the best maintained and had all the necessary features, including a friendly tool for users to configure it.  However, its design has a weakness: it stores the secret unprotected in the user’s home directory.</p>
<p>This means that if an attacker manages to gain at least temporary access to the filesystem with user’s privileges — through a malicious process, vulnerability or simply because someone left the computer unattended for a minute — he can trivially read the secret and therefore clone the token source without leaving a trace.  It would completely defeat the purpose of the second step, and the user may not even notice until the attacker makes real use of the stolen secret.</p>
<p><span id="more-778"/></p>
<p>In order to protect against this, I’ve created <a href="https://github.com/mgorny/google-authenticator-wrappers" rel="external">google-authenticator-wrappers</a> (as upstream <a href="https://github.com/google/google-authenticator-libpam/issues/105" rel="external">decided to ignore the problem</a>).  This package provides a rather trivial setuid wrapper that manages a write-only, authentication-protected secret store for the PAM module.  Additionally, it comes with a test program (so you can test the OTP setup without jumping through the hoops or risking losing access) and friendly wrappers for the default setup, as used on Gentoo Infra.</p>
<p>The recommended setup (as utilized by the <a href="https://packages.gentoo.org/packages/sys-auth/google-authenticator-wrappers" rel="external">sys-auth/google-authenticator-wrappers</a> package) is to use a dedicated user for the password store.  In this scenario, the users are unable to read their secrets, and all secret operations (including authentication via the PAM module) are done using an unprivileged user.  Furthermore, any operation regarding the configuration (either updating it or removing the second step) requires regular PAM authentication (e.g. typing your own password).</p>
<p>This is consistent with e.g. how shadow operates (users can’t read their passwords, nor update them without authenticating first), how most sites using 2-factor authentication operate (again, users can’t read their secrets) and follows the <a href="https://tools.ietf.org/html/rfc6238" rel="external">RFC 6238</a> recommendation (that <q>keys […] SHOULD be protected against unauthorized access and usage</q>).  It solves the aforementioned issue by preventing user-privileged processes from reading the secrets and recovery codes.  Furthermore, it prevents the attacker with this particular level of access from disabling 2-step authentication, changing the secret or even weakening the configuration.</p></div>
    </content>
    <updated>2018-08-24T06:44:17Z</updated>
    <category term="Security"/>
    <author>
      <name>Michał Górny</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/mgorny</id>
      <link href="https://blogs.gentoo.org/mgorny/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/mgorny" rel="alternate" type="text/html"/>
      <subtitle>Retroactively fixing the world</subtitle>
      <title>Gentoo – Michał Górny</title>
      <updated>2019-07-10T18:02:21Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://blogs.gentoo.org/lu_zero/?p=685</id>
    <link href="https://blogs.gentoo.org/lu_zero/2018/08/17/gentoo-on-integricloud/#utm_source=feed&amp;utm_medium=feed&amp;utm_campaign=feed" rel="alternate" type="text/html"/>
    <link href="https://dev.gentoo.org/~lu_zero/install-powerpc-minimal-20180815.iso" rel="enclosure" title="installcd"/>
    <title>Gentoo on Integricloud</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Integricloud gave me access to their infrastructure to track some issues on ppc64 and ppc64le. Since some of the issues are related to the compilers, I obviously installed Gentoo on it and in the process I started to fix some issues with catalyst to get a working install media, but that’s for another blogpost. Today … <a class="more-link" href="https://blogs.gentoo.org/lu_zero/2018/08/17/gentoo-on-integricloud/">Continue reading <span class="screen-reader-text">Gentoo on Integricloud</span></a></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a href="https://integricloud.com">Integricloud</a> gave me access to their infrastructure to track some issues on <strong>ppc64</strong> and <strong>ppc64le</strong>.</p>
<p>Since some of the issues are related to the compilers, I obviously installed Gentoo on it and in the process I started to fix some issues with catalyst to get a working install media, but that’s for another blogpost.</p>
<p>Today I’m just giving a walk-through on how to get a ppc64le (and ppc64 soon) VM up and running.</p>
<h2>Preparation</h2>
<p>Read <a href="https://secure.integricloud.com/content/kb/1.html">this</a> and get your install media available to your instance.</p>
<h2>Install Media</h2>
<p>I’m using the Gentoo <a href="https://dev.gentoo.org/~lu_zero/install-powerpc-minimal-20180815.iso">installcd</a> I’m currently refining.</p>
<h2>Booting</h2>
<p>You have to append <code>console=hvc0</code> to your boot command; the boot process might figure it out for you on newer install media (I still have to send patches to update <a href="https://cgit.gentoo.org/proj/livecd-tools.git/">livecd-tools</a>).</p>
<h2>Network configuration</h2>
<p>You have to set up the network manually.<br/>
You can use <code>ifconfig</code> and <code>route</code>, or <code>ip</code>, as you like; refer to your instance setup for the parameters.</p>
<pre><code>ifconfig enp0s0 ${ip}/16
route add -net default gw ${gw}
echo "nameserver 8.8.8.8" &gt; /etc/resolv.conf
</code></pre>
<pre><code>ip a add ${ip}/16 dev enp0s0
ip l set enp0s0 up
ip r add default via ${gw}
echo "nameserver 8.8.8.8" &gt; /etc/resolv.conf
</code></pre>
<h2>Disk Setup</h2>
<p>OpenFirmware seems to like GPT much better:</p>
<pre><code>parted /dev/sda mklabel gpt
</code></pre>
<p>You may use <code>fdisk</code> to create:<br/>
– a PowerPC PReP boot partition of 8M<br/>
– a root partition with the remaining space</p>
<pre><code>Device     Start      End  Sectors Size Type
/dev/sda1   2048    18431    16384   8M PowerPC PReP boot
/dev/sda2  18432 33554654 33536223  16G Linux filesystem
</code></pre>
<p>I’m using <code>btrfs</code> and zstd-compressing <code>/usr/portage</code> and <code>/usr/src/</code>.</p>
<pre><code>mkfs.btrfs /dev/sda2
</code></pre>
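The zstd compression itself is enabled at mount time; a sketch of what the corresponding <code>/etc/fstab</code> entry could look like (the device, mount point and extra options are assumptions based on the layout above):

```
# /etc/fstab – root btrfs mounted with zstd compression
/dev/sda2   /   btrfs   compress=zstd,noatime   0 0
```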
<h2>Initial setup</h2>
<p>It is pretty much the usual procedure.</p>
<pre><code>mount /dev/sda2 /mnt/gentoo
cd /mnt/gentoo
wget https://dev.gentoo.org/~mattst88/ppc-stages/stage3-ppc64le-20180810.tar.xz
tar -xpf stage3-ppc64le-20180810.tar.xz
mount -o bind /dev dev
mount -t devpts devpts dev/pts
mount -t proc proc proc
mount -t sysfs sys sys
cp /etc/resolv.conf etc
chroot .
</code></pre>
<p>You just have to emerge <code>grub</code> and <code>gentoo-sources</code>; I diverge from the defconfig by making <code>btrfs</code> builtin.</p>
<p>My <code>/etc/portage/make.conf</code>:</p>
<pre><code>CFLAGS="-O3 -mcpu=power9 -pipe"
# WARNING: Changing your CHOST is not something that should be done lightly.
# Please consult https://wiki.gentoo.org/wiki/Changing_the_CHOST_variable before changing.
CHOST="powerpc64le-unknown-linux-gnu"

# NOTE: This stage was built with the bindist Use flag enabled
PORTDIR="/usr/portage"
DISTDIR="/usr/portage/distfiles"
PKGDIR="/usr/portage/packages"

USE="ibm altivec vsx"

# This sets the language of build output to English.
# Please keep this setting intact when reporting bugs.
LC_MESSAGES=C
ACCEPT_KEYWORDS=~ppc64

MAKEOPTS="-j4 -l6"
EMERGE_DEFAULT_OPTS="--jobs 10 --load-average 6 "
</code></pre>
<p>My minimal set of packages I need before booting:</p>
<pre><code>emerge grub gentoo-sources vim btrfs-progs openssh
</code></pre>
<blockquote><p>
  <strong>NOTE:</strong> You will want to emerge <code>openssh</code> again and make sure <code>bindist</code> is not in your <code>USE</code>.
</p></blockquote>
<h3>Kernel &amp; Bootloader</h3>
<pre><code>cd /usr/src/linux
make defconfig
make menuconfig # I want btrfs builtin so I can avoid a initrd
make -j 10 all &amp;&amp; make install &amp;&amp; make modules_install
grub-install /dev/sda1
grub-mkconfig -o /boot/grub/grub.cfg
</code></pre>
<blockquote><p>
  <strong>NOTE:</strong> make sure you pass <code>/dev/sda1</code>, otherwise grub will happily assume <strong>OpenFirmware</strong> knows about <code>btrfs</code> and just point it to your directory.<br/>
  Unfortunately, that’s not the case.
</p></blockquote>
<h3>Networking</h3>
<p>I’m using <a href="https://wiki.gentoo.org/wiki/Netifrc">netifrc</a> and I’m using the eth0-naming-convention.</p>
<pre><code>touch /etc/udev/rules.d/80-net-name-slot.rules
ln -sf /etc/init.d/net.{lo,eth0}
echo -e "config_eth0=\"${ip}/16\"\nroutes_eth0=\"default via ${gw}\"\ndns_servers_eth0=\"8.8.8.8\"" &gt; /etc/conf.d/net
</code></pre>
<h3>Password and SSH</h3>
<p>Even though the <code>mticlient</code> is quite nice, you will want to use <code>ssh</code> as much as you can.</p>
<pre><code>passwd 
rc-update add sshd default
</code></pre>
<h3>Finishing touches</h3>
<p>Right now <code>sysvinit</code> does not add the <code>hvc0</code> console as it should, due to a profile quirk. For now, check <code>/etc/inittab</code> and if needed add:</p>
<pre><code>echo 'hvc0:2345:respawn:/sbin/agetty -L 9600 hvc0' &gt;&gt; /etc/inittab
</code></pre>
<p>Add your user, add your ssh key, and you are ready to use your new system!</p></div>
    </content>
    <updated>2018-08-17T22:44:08Z</updated>
    <category term="Gentoo"/>
    <category term="Power"/>
    <author>
      <name>lu_zero</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/lu_zero</id>
      <link href="https://blogs.gentoo.org/lu_zero/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/lu_zero" rel="alternate" type="text/html"/>
      <subtitle>Just another Gentoo Blogs site</subtitle>
      <title>Gentoo – Luca Barbato</title>
      <updated>2019-07-01T21:02:28Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2018/08/12/pwnie-for-hanno.html</id>
    <link href="https://www.gentoo.org/news/2018/08/12/pwnie-for-hanno.html" rel="alternate" type="text/html"/>
    <title>Congratulations: Hanno Böck and co-authors win Pwnie!</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://pwnies.com/">
  <img alt="Pwnies logo" src="https://www.gentoo.org/assets/img/news/2018/logo-pwnies.png"/>
</a></p>

<p>Congratulations to security researcher and Gentoo developer <a href="https://wiki.gentoo.org/wiki/User:Hanno">Hanno Böck</a> and his co-authors Juraj Somorovsky and Craig Young for winning one of this year’s coveted <a href="https://pwnies.com/winners/#crypto">Pwnie awards</a>!</p>

<p>The award is for their work on the <a href="https://robotattack.org/">Return Of Bleichenbacher’s Oracle Threat or ROBOT vulnerability</a>, which at the time of discovery affected such illustrious sites as Facebook and Paypal. Technical details can be found in the <a href="https://eprint.iacr.org/2017/1189">full paper published at the Cryptology ePrint Archive</a>.</p></div>
    </summary>
    <updated>2018-08-12T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2018/08/12/gentoo-at-froscon.html</id>
    <link href="https://www.gentoo.org/news/2018/08/12/gentoo-at-froscon.html" rel="alternate" type="text/html"/>
    <title>Gentoo booth at the FrOSCon, St. Augustin, Germany</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><a class="news-img-right" href="https://www.froscon.de/en/">
  <img alt="FroSCon logo" src="https://www.gentoo.org/assets/img/news/2017/logo-froscon.png"/>
</a></p>

<p>As last year, there will be a Gentoo booth again at the
upcoming <a href="https://www.froscon.de/en/">FrOSCon “Free and Open Source
Conference”</a> in St. Augustin near Bonn! Visitors
can meet Gentoo developers, ask questions, get Gentoo swag, and prepare,
configure, and compile their own Gentoo buttons.</p>

<p>The conference is 25th and 26th of August 2018, and there is no entry fee. See you there!</p></div>
    </summary>
    <updated>2018-08-12T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2084</id>
    <link href="http://www.ultrabug.fr/authenticating-and-connecting-to-a-ssl-enabled-scylla-cluster-using-spark-2/" rel="alternate" type="text/html"/>
    <title>Authenticating and connecting to a SSL enabled Scylla cluster using Spark 2</title>
    <summary>This quick article is a wrap up for reference on how to connect to ScyllaDB using Spark 2 when authentication and SSL are enforced for the clients on the Scylla cluster. We encountered multiple problems, even more since we distribute our workload using a YARN cluster so that our worker nodes should have everything they […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>This quick article is a reference wrap-up on how to <strong>connect to ScyllaDB using Spark 2</strong> when authentication and SSL are enforced for clients on the Scylla cluster.</p>
<p>We encountered multiple problems, all the more since we distribute our workload using a YARN cluster, so our worker nodes must have everything they need to connect properly to Scylla.</p>
<p>We found very little help online, so I hope this will serve anyone facing similar issues (that’s also why I copy/pasted the error messages here).</p>
<p>The <a href="http://docs.scylladb.com/operating-scylla/security/" rel="noopener" target="_blank">authentication part</a> is straightforward by itself and was not the source of our problems; SSL on the client side was.</p>
<h1>Environment</h1>
<ul>
<li>(py)spark: 2.1.0.cloudera2</li>
<li>spark-cassandra-connector: datastax:spark-cassandra-connector: 2.0.1-s_2.11</li>
<li>python: 3.5.5</li>
<li>java: 1.8.0_144</li>
<li>scylladb: 2.1.5</li>
</ul>
<h1>SSL cipher setup</h1>
<p>The Datastax spark-cassandra driver uses the <strong>TLS_RSA_WITH_AES_256_CBC_SHA</strong> cipher by default, which the JVM does not support out of the box. This raises the following error when connecting to Scylla:</p>
<pre>18/07/18 13:13:41 WARN channel.ChannelInitializer: Failed to initialize a channel. Closing: [id: 0x8d6f78a7]
java.lang.IllegalArgumentException: Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers
</pre>
<p>According to the <a href="https://github.com/datastax/spark-cassandra-connector/blob/master/doc/reference.md#cassandra-ssl-connection-options" rel="noopener" target="_blank">ssl documentation</a> we have two ciphers available:</p>
<ol>
<li>TLS_RSA_WITH_AES_256_CBC_SHA</li>
<li>TLS_RSA_WITH_AES_128_CBC_SHA</li>
</ol>
<p>We can get rid of the error by lowering the cipher to <strong>TLS_RSA_WITH_AES_128_CBC_SHA</strong> using the following configuration:</p>
<pre>.config("spark.cassandra.connection.ssl.enabledAlgorithms", "TLS_RSA_WITH_AES_128_CBC_SHA")\
</pre>
<p>However, this is not really a good solution; instead we’d rather <strong>use the TLS_RSA_WITH_AES_256_CBC_SHA</strong> cipher. For this we need to follow <a href="https://support.datastax.com/hc/en-us/articles/204226129-Receiving-error-Caused-by-java-lang-IllegalArgumentException-Cannot-support-TLS-RSA-WITH-AES-256-CBC-SHA-with-currently-installed-providers-on-DSE-startup-after-setting-up-client-to-node-encryption" rel="noopener" target="_blank">Datastax’s procedure</a>.</p>
<p>Then we need to deploy the JCE security jars <strong>on all our client nodes</strong>; if you use YARN like us, this means deploying these jars to all your NodeManager nodes.</p>
<p>For example by hand:</p>
<pre># unzip jce_policy-8.zip
# cp UnlimitedJCEPolicyJDK8/*.jar /opt/oracle-jdk-bin-1.8.0.144/jre/lib/security/
</pre>
<h1>Java trust store</h1>
<p>When connecting, the clients need to be able to validate the Scylla cluster’s self-signed CA. This is done by setting up a <strong>trustStore JKS file</strong> and providing it to the spark connector configuration (note that you should protect this file with a password).</p>
<h2>keyStore vs trustStore</h2>
<p>In an SSL handshake, the purpose of the <strong>trustStore is to verify credentials</strong>, while the purpose of the <strong>keyStore is to provide credentials</strong>. A keyStore in Java stores private keys and the certificates corresponding to their public keys, and is required if you are an SSL server or if SSL requires client authentication. A trustStore stores certificates from third parties or your own self-signed certificates; your application identifies and validates them using this trustStore.</p>
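<p>To make the distinction concrete, Python’s standard <code>ssl</code> module separates the two roles the same way (a hedged illustration only, independent of the Spark setup): <code>load_verify_locations()</code> plays the trustStore role and <code>load_cert_chain()</code> the keyStore role.</p>

```python
import ssl

# Client-side context: the CA bundle plays the trustStore role
# (it is used to verify the certificate the server presents).
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# ctx.load_verify_locations(cafile="MY_SELF_SIGNED_CA.crt")  # trustStore role

# A server, or a client doing mutual TLS, would additionally load its own
# private key and certificate -- the keyStore role:
# ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")

# A default client context requires and verifies server certificates.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```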
<p>The <a href="https://github.com/datastax/spark-cassandra-connector/blob/master/doc/reference.md#cassandra-ssl-connection-options" rel="noopener" target="_blank">spark-cassandra-connector documentation</a> covers the options for both the keyStore and the trustStore.</p>
<p>When we did not use the <strong>trustStore</strong> option, we would get an obscure error when connecting to Scylla:</p>
<pre>com.datastax.driver.core.exceptions.TransportException: [node/1.1.1.1:9042] Channel has been closed
</pre>
<p>When enabling DEBUG logging, we got a clearer error indicating a failure to validate the SSL certificate provided by the Scylla server node:</p>
<pre>Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
</pre>
<h2>setting up the trustStore JKS</h2>
<p>You need the self-signed CA public certificate file; then issue the following command:</p>
<pre># keytool -importcert -file /usr/local/share/ca-certificates/MY_SELF_SIGNED_CA.crt -keystore COMPANY_TRUSTSTORE.jks -noprompt
Enter keystore password:  
Re-enter new password: 
Certificate was added to keystore
</pre>
<h2>using the trustStore</h2>
<p>Now you need to configure spark to use the trustStore like this:</p>
<pre>.config("spark.cassandra.connection.ssl.trustStore.password", "PASSWORD")\
.config("spark.cassandra.connection.ssl.trustStore.path", "COMPANY_TRUSTSTORE.jks")\
</pre>
<h1>Spark SSL configuration example</h1>
<p>This wraps up the SSL connection configuration used for spark.</p>
<p>This example uses pyspark2 and reads a table in Scylla from a YARN cluster:</p>
<pre>$ pyspark2 --packages datastax:spark-cassandra-connector:2.0.1-s_2.11 --files COMPANY_TRUSTSTORE.jks

&gt;&gt;&gt; spark = SparkSession.builder.appName("scylla_app")\
.config("spark.cassandra.auth.password", "test")\
.config("spark.cassandra.auth.username", "test")\
.config("spark.cassandra.connection.host", "node1,node2,node3")\
.config("spark.cassandra.connection.ssl.clientAuth.enabled", True)\
.config("spark.cassandra.connection.ssl.enabled", True)\
.config("spark.cassandra.connection.ssl.trustStore.password", "PASSWORD")\
.config("spark.cassandra.connection.ssl.trustStore.path", "COMPANY_TRUSTSTORE.jks")\
.config("spark.cassandra.input.split.size_in_mb", 1)\
.config("spark.yarn.queue", "scylla_queue").getOrCreate()

&gt;&gt;&gt; df = spark.read.format("org.apache.spark.sql.cassandra").options(table="my_table", keyspace="test").load()
&gt;&gt;&gt; df.show()
</pre></div>
    </content>
    <updated>2018-07-19T11:37:43Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="scylla"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:33Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2066</id>
    <link href="http://www.ultrabug.fr/a-botspot-story/" rel="alternate" type="text/html"/>
    <title>A botspot story</title>
    <summary>I felt like sharing a recent story that allowed us to identify a bot in a haystack thanks to Scylla.   The scenario While working on loading up 2B+ rows into Scylla from Hive (using Spark), we noticed a strange behaviour in the performances of one of our nodes:   So we started wondering why […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>I felt like sharing a recent story that allowed us to identify a bot in a haystack thanks to Scylla.</p>
<p><img alt="" class="alignleft wp-image-2068 " height="126" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2018/07/2018-07-06-102727_852x920_scrot-278x300.png" width="117"/></p>
<p> </p>
<h1>The scenario</h1>
<p>While working on loading 2B+ rows into Scylla from Hive (using Spark), we noticed strange behaviour in the performance of one of our nodes:</p>
<p><img alt="" class="alignleft wp-image-2069 size-full" height="258" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2018/07/2018-07-06-103256_970x258_scrot.png" width="970"/></p>
<p> </p>
<p>So we started wondering why the server in blue was having those load peaks and clearly diverging from the other two… As we obviously <strong>expect the three nodes to behave the same</strong>, there were two options on the table:</p>
<ol>
<li><strong>hardware problem</strong> on the node</li>
<li><strong>bad data distribution</strong> (bad schema design? consistent hash problem?)</li>
</ol>
<p>We shared this with our pals from ScyllaDB and started working on finding out what was going on.</p>
<h1>The investigation</h1>
<h2>Hardware?</h2>
<p>A hardware problem was ruled out pretty quickly: nothing showed up in the monitoring or the kernel logs, and I/O queues and throughput were good:</p>
<p><img alt="" class="alignleft wp-image-2079 size-full" height="255" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2018/07/2018-07-06-163923_983x255_scrot.png" width="983"/></p>
<h2>Data distribution?</h2>
<p>Avi Kivity (ScyllaDB’s CTO) quickly got the feeling that something was wrong with the data distribution and that we could be facing a <strong>hotspot situation</strong>. He quickly nailed it down to shard 44 thanks to the scylla-grafana-monitoring platform.</p>
<p>Data is distributed between shards that are stored on nodes (a consistent hash ring). This distribution is done by hashing the primary key of your data, which dictates the shard it belongs to (and thus the node(s) where the shard is stored).</p>
<p>If one of your keys is over-represented in your original data set, then the shard it belongs to can become overly populated and the related node overloaded. <strong>This is called a hotspot situation</strong>.</p>
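<p>A toy sketch of the effect (hypothetical hashing, not Scylla’s actual placement logic): hashing keys onto a fixed set of shards spreads a healthy key space evenly, while a single over-represented key sends every one of its writes to the same shard.</p>

```python
import hashlib
from collections import Counter

NUM_SHARDS = 3  # assumed toy value, one shard per node

def shard_for(key: str) -> int:
    # Hash the partition key and map it to a shard, loosely mimicking
    # placement on a consistent hash ring.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# A well-behaved key space spreads evenly across shards...
balanced = Counter(shard_for(f"user-{i}") for i in range(9000))

# ...but one over-represented key piles every write onto a single shard.
skewed = Counter(shard_for("bot-key") for _ in range(9000))

print(sorted(balanced.values()))          # three roughly equal counts
print(len(skewed), max(skewed.values()))  # 1 9000 -- a hotspot
```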
<h3>tracing queries</h3>
<p>The first step was to <strong>trace queries in Scylla</strong> to dig deeper into the hotspot analysis. So we enabled tracing, using the following formula to get about one trace per second in the <strong>system_traces</strong> namespace:</p>
<pre>tracing probability = 1 / expected requests per second throughput</pre>
<p>In our case, we were doing between 90K req/s and 150K req/s, so we settled on 100K req/s to be safe and enabled tracing on our nodes like this:</p>
<pre># nodetool settraceprobability 0.00001</pre>
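<p>The arithmetic behind that value can be sketched as follows:</p>

```python
# Probability to pass to `nodetool settraceprobability` for roughly
# one trace per second at a given request rate.
def tracing_probability(expected_rps: float, traces_per_second: float = 1.0) -> float:
    return traces_per_second / expected_rps

print(tracing_probability(100_000))  # 1e-05, the value we used above
```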
<p>It turns out tracing didn’t help much in our case because traces do not include the query parameters in Scylla 2.1; this becomes available in the soon-to-be-released 2.2 version.</p>
<p><strong>NOTE</strong>: traces expire from their tables, so make sure you TRUNCATE the <strong>events</strong> and <strong>sessions</strong> tables while iterating. Otherwise you will have to wait for the next gc_grace_period (10 days by default) before they are actually removed. If you skip this and generate millions of traces like we did, querying those tables will likely time out because of the “tombstoned” rows, even if no traces remain.</p>
<h3>looking at cfhistograms</h3>
<p>Glauber Costa was also helping with the case and had us look at the <strong>cfhistograms</strong> of the tables we were pushing data to. That clearly highlighted a hotspot problem:</p>
<pre>histograms
Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
                             (micros)          (micros)           (bytes)                  
50%             0,00              6,00              0,00               258                 2
75%             0,00              6,00              0,00               535                 5
95%             0,00              8,00              0,00              1916                24
98%             0,00             11,72              0,00              3311                50
99%             0,00             28,46              0,00              5722                72
Min             0,00              2,00              0,00               104                 0
Max             0,00          45359,00              0,00          14530764            182785</pre>
<p>What this basically means is that 99% of our partitions are small (under ~6 KB) while the biggest one is 14 MB! That’s a huge difference and clearly shows that we have a hotspot on a partition somewhere.</p>
<p>So now we know for sure that we have an over represented key in our data set, <strong>but what key is it and why?</strong></p>
<h1>The culprit</h1>
<p>So we looked at the cardinality of our data set keys, which are SHA256 hashes, and found that we indeed had one with more than 1M occurrences while the second highest was around 100K!…</p>
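<p>Spotting such an outlier is a plain frequency count; a minimal sketch with made-up data (our real analysis ran over the full data set, of course):</p>

```python
from collections import Counter
from hashlib import sha256

# Toy event stream: mostly unique SHA256 keys, plus one "bot" key
# repeated far more often than anything else.
bot_key = sha256(b"qa-bot").hexdigest()
events = [sha256(str(i).encode()).hexdigest() for i in range(1000)]
events += [bot_key] * 5000

counts = Counter(events)
(top_key, top_count), (_, runner_up) = counts.most_common(2)
print(top_key == bot_key, top_count, runner_up)  # True 5000 1
```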
<p>Now that we had the main culprit hash, we turned to our data streaming pipeline to figure out what kind of event was generating the data associated with the given SHA256 hash… and surprise! <strong>It was a client’s quality assurance bot that was constantly browsing their own website, with legitimate behaviour and identity credentials associated with it</strong>.</p>
<p>So we modified our pipeline to detect this bot and discard its events so that it stops polluting our databases with fake data. Then we cleaned up the millions of events’ worth of mess and traces we had stored about the bot.</p>
<h1>The aftermath</h1>
<p>Finally, we cleared out the data in Scylla and tried again from scratch. Needless to say, the curves got way better and are exactly <strong>what we should expect from a well-balanced cluster</strong>:</p>
<p><img alt="" class="alignleft size-full wp-image-2076" height="256" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2018/07/2018-07-06-163356_781x256_scrot.png" width="781"/></p>
<p><strong>Thanks a lot to the ScyllaDB team</strong> for their thorough help and high spirited support!</p>
<p>I’ll quote them to conclude this quick blog post:</p>
<p><img alt="" class="alignleft wp-image-2077 size-full" height="211" src="http://www.ultrabug.fr/wordpress/wp-content/uploads/2018/07/2018-07-06-163628_387x211_scrot.png" width="387"/></p></div>
    </content>
    <updated>2018-07-06T14:50:48Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="scylla"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:33Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/php-56-php-71-upgrade-issues-internal-server-error/</id>
    <link href="https://blog.lordvan.com/blog/php-56-php-71-upgrade-issues-internal-server-error/" rel="alternate" type="text/html"/>
    <title>php 5.6 -&gt; php 7.1 upgrade issues .. (internal server error)</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Turns out I should read things like the Gentoo wiki upgrade guide *first* to avoid issues…</p>
<p>After installing php 7.1 to replace the (quite old) php 5.6, I did check php.ini… but forgot to check the compiled modules and to set PHP_TARGETS… and then wondered why I just got Internal Server Error messages.</p>
<p>Thanks to the php team for writing this nice guide to remind people like me of what to do:</p>
<p><a href="https://wiki.gentoo.org/wiki/PHP/Upgrading_to_PHP_7.1">https://wiki.gentoo.org/wiki/PHP/Upgrading_to_PHP_7.1</a></p>
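<p>For the record, the fix boils down to something like the following (the exact target and module names are assumptions on my part; the wiki guide linked above is authoritative):</p>

```shell
# /etc/portage/make.conf -- tell portage which PHP slot(s) to build modules for
PHP_TARGETS="php7-1"

# rebuild packages affected by the change, then switch the active interpreter
emerge --ask --changed-use --deep @world
eselect php set cli php7.1
```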
<p>As always, the Gentoo Wiki is a great source of information, and I like to use it as a reminder of what’s needed when installing/upgrading… ;)</p></div>
    </summary>
    <updated>2018-07-03T09:41:16Z</updated>
    <category term="Gentoo"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://blog.sumptuouscapital.com/?p=1102</id>
    <link href="https://blog.sumptuouscapital.com/2018/06/my-comments-on-the-gentoo-github-hack/" rel="alternate" type="text/html"/>
    <title>My comments on the Gentoo Github hack</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Several news outlets are reporting on the takeover of the Gentoo GitHub organization that was announced recently. Today 28 June at approximately 20:20 UTC unknown individuals have gained control of the Github Gentoo organization, and modified the content of repositories as well as pages there. We are still working to determine the exact extent and … <a class="more-link" href="https://blog.sumptuouscapital.com/2018/06/my-comments-on-the-gentoo-github-hack/">Continue reading<span class="screen-reader-text"> "My comments on the Gentoo Github hack"</span></a><img alt="" height="0" src="https://analytics.sumptuouscapital.com/piwik.php?idsite=1&amp;rec=1&amp;url=https%3A%2F%2Fblog.sumptuouscapital.com%2F2018%2F06%2Fmy-comments-on-the-gentoo-github-hack%2F&amp;action_name=My+comments+on+the+Gentoo+Github+hack&amp;urlref=https%3A%2F%2Fblog.sumptuouscapital.com%2Ffeed%2F" style="border: 0; width: 0; height: 0;" width="0"/></div>
    </summary>
    <updated>2018-06-29T16:00:11Z</updated>
    <category term="Gentoo"/>
    <category term="gentoo"/>
    <category term="openpgp"/>
    <author>
      <name>Kristian Fiskerstrand</name>
    </author>
    <source>
      <id>https://blog.sumptuouscapital.com</id>
      <link href="https://blog.sumptuouscapital.com/category/gentoo-linux/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blog.sumptuouscapital.com" rel="alternate" type="text/html"/>
      <title>Gentoo – Sumptuous Capital: Blog</title>
      <updated>2018-06-29T19:02:28Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2018/06/28/Github-gentoo-org-hacked.html</id>
    <link href="https://www.gentoo.org/news/2018/06/28/Github-gentoo-org-hacked.html" rel="alternate" type="text/html"/>
    <title>Github Gentoo organization hacked - resolved</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><h2 id="2018-07-04-1400-utc">2018-07-04 14:00 UTC</h2>
<p>We believe this incident is now resolved. Please see the <a href="https://wiki.gentoo.org/wiki/Github/2018-06-28" title="Incident Report">incident report</a> for details about the incident, its impact, and resolution.</p>

<h2 id="2018-06-29-1515-utc">2018-06-29 15:15 UTC</h2>
<p>The community raised questions about the provenance of Gentoo packages. Gentoo development is performed on
hardware run by the Gentoo Infrastructure team (not <code class="highlighter-rouge">github</code>). The Gentoo hardware was unaffected by this incident.
Users using the default Gentoo mirroring infrastructure should not be affected.</p>

<p>If you are still concerned about provenance, or are unsure what solution you are using, please consult <a href="https://wiki.gentoo.org/wiki/Project:Portage/Repository_Verification">Project:Portage/Repository_Verification</a> on the wiki. This will instruct you on how to verify your repository.</p>

<h2 id="2018-06-29-0645-utc">2018-06-29 06:45 UTC</h2>
<p>The <code class="highlighter-rouge">gentoo</code> GitHub organization remains temporarily locked down by GitHub
support, pending fixes to pull-request content.</p>

<p>For ongoing status, please see the <a href="https://infra-status.gentoo.org/notice/20180629-github">Gentoo infra-status incident page</a>.</p>

<p>For later followup, please see the Gentoo Wiki page for <a href="https://wiki.gentoo.org/wiki/Github/2018-06-28">GitHub 2018-06-28</a>. An incident post-mortem will follow on the wiki.</p></div>
    </summary>
    <updated>2018-06-28T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.hartwork.org/posts/outdated-packages-feed-per-maintainer/</id>
    <link href="https://blog.hartwork.org/posts/outdated-packages-feed-per-maintainer/" rel="alternate" type="text/html"/>
    <title>Upstream release notification for package maintainers</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><div><p><a href="https://repology.org/">Repology</a> is monitoring package repositories
across Linux distributions.
By now,
<a href="https://github.com/repology/repology/issues/308">Atom feeds of per-maintainer outdated packages</a>
that I was waiting for have been implemented.</p>
<p>So I subscribed to
<a href="https://repology.org/maintainer/sping%40gentoo.org/feed-for-repo/gentoo/atom">my own Gentoo feed</a>
using <code>net-mail/rss2email</code>, and now Repology notifies me via e-mail
of new upstream releases that other Linux distros have packaged and that I still need to bump in Gentoo.
In my case, it brought an update of <code>dev-vcs/svn2git</code> to my attention
that I would have missed (or heard about <em>later</em>), otherwise.</p>
<p>Based on <a href="https://github.com/repology/repology/issues/308#issuecomment-391298282">this comment</a>,
Repology may soon do upstream release detection, similar to what
<a href="http://euscan.gentooexperimental.org/maintainers/">euscan</a> does, as well.</p></div></div>
    </summary>
    <updated>2018-06-03T13:22:35Z</updated>
    <category term="Gentoo"/>
    <category term="Planet Gentoo"/>
    <author>
      <name>Sebastian Pipping</name>
    </author>
    <source>
      <id>https://blog.hartwork.org/</id>
      <link href="https://blog.hartwork.org/" rel="alternate" type="text/html"/>
      <link href="https://blog.hartwork.org/topics/planet-gentoo.xml" rel="self" type="application/rss+xml"/>
      <rights type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Contents © 2019 <a href="mailto:sebastian@pipping.org">Sebastian Pipping</a></div>
      </rights>
      <title>Hartwork Blog (Posts about Planet Gentoo)</title>
      <updated>2019-06-15T19:02:25Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://rgm.io/post/updates/index.html</id>
    <link href="https://rgm.io/post/updates/index.html" rel="alternate" type="text/html"/>
    <title>Updates</title>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Since I haven't written anything here for almost 2 years, I think it is time for
some short updates:</p>
<ul>
<li>I left Red Hat and moved to Berlin, Germany, in March 2017.</li>
<li>The series of posts about balde was stopped. The first post got a lot of
<a href="https://news.ycombinator.com/item?id=11747607">Hacker News</a> attention, and
I will come back to it as soon as I can implement the required changes in
the framework. Not going to happen very soon, though.</li>
<li>I've been spending most of my free time with flight simulation. You can
expect a few related posts soon.</li>
<li><a href="https://twitter.com/rafaelmartins/status/964836989953507328">I left the Gentoo GSoC administration this year.</a></li>
<li><a href="https://blogc.rgm.io/">blogc</a> is the only project that is currently getting
some frequent attention from me, as I use it for most of my websites. Check
it out! ;-)</li>
</ul>
<p>That's all for now.</p></div>
    </content>
    <updated>2018-04-21T14:35:00Z</updated>
    <published>2018-04-21T14:35:00Z</published>
    <author>
      <name>Rafael Martins</name>
      <email>rafael@rafaelmartins.eng.br</email>
    </author>
    <source>
      <id>https://rgm.io/atom/gentoo/index.xml</id>
      <author>
        <name>Rafael Martins</name>
        <email>rafael@rafaelmartins.eng.br</email>
      </author>
      <link href="https://rgm.io/" rel="alternate" type="text/html"/>
      <link href="https://rgm.io/atom/gentoo/index.xml" rel="self" type="application/atom+xml"/>
      <subtitle>Gentoo Linux, Engineering and random stuff.</subtitle>
      <title>Rafael Martins - gentoo</title>
      <updated>2018-04-21T14:35:00Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://blogs.gentoo.org/zmedico/?p=251</id>
    <link href="https://blogs.gentoo.org/zmedico/2018/04/17/portage-api-asyncio-event-loop/" rel="alternate" type="text/html"/>
    <title>portage API now provides an asyncio event loop policy</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">In portage-2.3.30, portage’s python API provides an asyncio event loop policy via a DefaultEventLoopPolicy class. For example, here’s a little program that uses portage’s DefaultEventLoopPolicy to do the same thing as emerge --regen, using an async_iter_completed function to implement the --jobs and --load-average options: #!/usr/bin/env python from __future__ import print_function import argparse import functools import multiprocessing import … <a class="more-link" href="https://blogs.gentoo.org/zmedico/2018/04/17/portage-api-asyncio-event-loop/">Continue reading <span class="screen-reader-text">portage API now provides an asyncio event loop policy</span></a></div>
    </summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>In portage-2.3.30, portage’s python API provides an <a href="https://docs.python.org/3/library/asyncio-eventloops.html#asyncio.set_event_loop_policy">asyncio event loop policy</a> via a DefaultEventLoopPolicy class. For example, here’s a little program that uses portage’s DefaultEventLoopPolicy to do the same thing as emerge --regen, using an async_iter_completed function to implement the --jobs and --load-average options:</p>
<pre>#!/usr/bin/env python

from __future__ import print_function

import argparse
import functools
import multiprocessing
import operator

import portage
from portage.util.futures.iter_completed import (
    async_iter_completed,
)
from portage.util.futures.unix_events import (
    DefaultEventLoopPolicy,
)


def handle_result(cpv, future):
    metadata = dict(zip(portage.auxdbkeys, future.result()))
    print(cpv)
    for k, v in sorted(metadata.items(),
        key=operator.itemgetter(0)):
        if v:
            print('\t{}: {}'.format(k, v))
    print()


def future_generator(repo_location, loop=None):

    portdb = portage.portdb

    for cp in portdb.cp_all(trees=[repo_location]):
        for cpv in portdb.cp_list(cp, mytree=repo_location):
            future = portdb.async_aux_get(
                cpv,
                portage.auxdbkeys,
                mytree=repo_location,
                loop=loop,
            )

            future.add_done_callback(
                functools.partial(handle_result, cpv))

            yield future


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--repo',
        action='store',
        default='gentoo',
    )
    parser.add_argument(
        '--jobs',
        action='store',
        type=int,
        default=multiprocessing.cpu_count(),
    )
    parser.add_argument(
        '--load-average',
        action='store',
        type=float,
        default=multiprocessing.cpu_count(),
    )
    args = parser.parse_args()

    try:
        repo_location = portage.settings.repositories.\
            get_location_for_name(args.repo)
    except KeyError:
        parser.error('unknown repo: {}\navailable repos: {}'.\
            format(args.repo, ' '.join(sorted(
            repo.name for repo in
            portage.settings.repositories))))

    policy = DefaultEventLoopPolicy()
    loop = policy.get_event_loop()

    try:
        for future_done_set in async_iter_completed(
            future_generator(repo_location, loop=loop),
            max_jobs=args.jobs,
            max_load=args.load_average,
            loop=loop):
            loop.run_until_complete(future_done_set)
    finally:
        loop.close()



if __name__ == '__main__':
    main()
</pre></div>
    </content>
    <updated>2018-04-18T06:15:38Z</updated>
    <category term="Gentoo"/>
    <category term="Python"/>
    <category term="asyncio"/>
    <category term="portage"/>
    <category term="python"/>
    <author>
      <name>zmedico</name>
    </author>
    <source>
      <id>https://blogs.gentoo.org/zmedico</id>
      <link href="https://blogs.gentoo.org/zmedico/category/gentoo/feed/" rel="self" type="application/rss+xml"/>
      <link href="https://blogs.gentoo.org/zmedico" rel="alternate" type="text/html"/>
      <subtitle>Just another Gentoo Blogs site</subtitle>
      <title>Gentoo – Zac Medico</title>
      <updated>2018-11-15T15:04:34Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=2055</id>
    <link href="http://www.ultrabug.fr/py3status-v3-8/" rel="alternate" type="text/html"/>
    <title>py3status v3.8</title>
    <summary>Another long awaited release has come true thanks to our community! The changelog is so huge that I had to open an issue and cry for help to make it happen… thanks again @lasers for stepping up once again 🙂 Highlights gevent support (-g option) to switch from threads scheduling to greenlets and reduce resources […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Another long awaited release has come true <strong>thanks to our community</strong>!</p>
<p>The <a href="https://github.com/ultrabug/py3status/blob/master/CHANGELOG" rel="noopener" target="_blank">changelog</a> is so huge that I had to open an issue and cry for help to make it happen… thanks again <strong>@lasers</strong> for stepping up once again <img alt="&#x1F642;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f642.png" style="height: 1em;"/></p>
<h2>Highlights</h2>
<ul>
<li><strong>gevent support</strong> (-g option) to switch from threads scheduling to greenlets and reduce resource consumption</li>
<li><a href="http://py3status.readthedocs.io/en/latest/configuration.html#environment-variables" rel="noopener" target="_blank"><strong>environment variables support</strong></a> in i3status.conf to remove sensitive information from your config</li>
<li>modules can now leverage a <strong>persistent data store</strong></li>
<li><strong>hundreds of improvements</strong> for various modules</li>
<li>we now have an official <strong>Debian package</strong></li>
<li>we reached 500 stars on GitHub #vanity</li>
</ul>
<h2>Milestone 3.9</h2>
<ul>
<li>try to release a version faster than every 4 months (j/k) <img alt="&#x1F609;" class="wp-smiley" src="https://s.w.org/images/core/emoji/11.2.0/72x72/1f609.png" style="height: 1em;"/></li>
</ul>
<p>The next release will focus on bugs and modules improvements / standardization.</p>
<h2>Thanks contributors!</h2>
<p>This release is their work, thanks a lot guys!</p>
<ul>
<li>alex o’neill</li>
<li>anubiann00b</li>
<li>cypher1</li>
<li>daniel foerster</li>
<li>daniel schaefer</li>
<li>girst</li>
<li>igor grebenkov</li>
<li>james curtis</li>
<li>lasers</li>
<li>maxim baz</li>
<li>nollain</li>
<li>raspbeguy</li>
<li>regnat</li>
<li>robert ricci</li>
<li>sébastien delafond</li>
<li>themistokle benetatos</li>
<li>tobes</li>
<li>woland</li>
</ul></div>
    </content>
    <updated>2018-04-03T12:06:29Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="portage"/>
    <category term="py3status"/>
    <category term="release"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:32Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/enlightenment-0223-fixes-lock-screen-bug-on-linux-pam-related/</id>
    <link href="https://blog.lordvan.com/blog/enlightenment-0223-fixes-lock-screen-bug-on-linux-pam-related/" rel="alternate" type="text/html"/>
    <title>enlightenment 0.22.3 fixes lock screen bug on linux (pam related)</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Thanks to the Enlightenment devs for fixing this ;) having no lock screen sucks :D</p>
<p><a href="https://www.enlightenment.org/news/e0.22.3_release">https://www.enlightenment.org/news/e0.22.3_release</a></p>
<p>It is also in my Gentoo dev overlay as of now.</p></div>
    </summary>
    <updated>2018-04-02T18:20:45Z</updated>
    <category term="Gentoo"/>
    <category term="Linux"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/gentoo-dev-overlay-in-layman-again-contains-efl-1207-and-enlightenment-02212/</id>
    <link href="https://blog.lordvan.com/blog/gentoo-dev-overlay-in-layman-again-contains-efl-1207-and-enlightenment-02212/" rel="alternate" type="text/html"/>
    <title>Gentoo Dev overlay in layman again - contains efl 1.20.7 and enlightenment 0.22.[12]</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>So a while ago I cleaned out my dev overlay and added dev-libs/efl-1.20.7 and x11-wm/enlightenment-0.22.1 (and 0.22.2).</p>
<p>It works for me at the moment (except the screen (un-)lock), but I'm not sure whether that has to do with my box. Any testers are welcome.</p>
<p>Here's the link: <a href="https://gitweb.gentoo.org/dev/lordvan.git/">https://gitweb.gentoo.org/dev/lordvan.git/</a></p>
<p>Oh, and I added it to layman's repo list again, so Gentoo users can simply run "<code>layman -a lordvan</code>" to test it.</p>
<p>On a side note: 0.22.1 gave me trouble with a second screen plugged in, which seems fixed in 0.22.2, but that version has (PAM-related) problems with the lock screen.</p></div>
    </summary>
    <updated>2018-04-02T17:53:58Z</updated>
    <category term="Development"/>
    <category term="Gentoo"/>
    <category term="Linux"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.hartwork.org/posts/holy-cow-larry-the-cow-gentoo-tattoo/</id>
    <link href="https://blog.hartwork.org/posts/holy-cow-larry-the-cow-gentoo-tattoo/" rel="alternate" type="text/html"/>
    <title>Holy cow! Larry the cow Gentoo tattoo</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Probably not new, but it was new to me: I just ran into this Larry the Cow tattoo
online: <a href="http://www.geekytattoos.com/larry-the-gender-challenged-cow/">http://www.geekytattoos.com/larry-the-gender-challenged-cow/</a></p></div>
    </summary>
    <updated>2018-03-17T14:53:08Z</updated>
    <category term="Gentoo"/>
    <category term="Planet Gentoo"/>
    <category term="Planet Gentoo Universe"/>
    <author>
      <name>Sebastian Pipping</name>
    </author>
    <source>
      <id>https://blog.hartwork.org/</id>
      <link href="https://blog.hartwork.org/" rel="alternate" type="text/html"/>
      <link href="https://blog.hartwork.org/topics/planet-gentoo.xml" rel="self" type="application/rss+xml"/>
      <rights type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml">Contents © 2019 <a href="mailto:sebastian@pipping.org">Sebastian Pipping</a></div>
      </rights>
      <title>Hartwork Blog (Posts about Planet Gentoo)</title>
      <updated>2019-06-15T19:02:25Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/running-ucs-univention-corporate-server-core-on-gentoo-with-kvm-using-an-lvm-volume/</id>
    <link href="https://blog.lordvan.com/blog/running-ucs-univention-corporate-server-core-on-gentoo-with-kvm-using-an-lvm-volume/" rel="alternate" type="text/html"/>
    <title>Running UCS (Univention Corporate Server) Core on Gentoo with kvm + using an LVM volume</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Just a quick post about how to run UCS (Core Edition in my case) with KVM on Gentoo.</p>
<p>First off, I assume that:</p>
<ul>
<li>KVM is working (kernel, config,..)</li>
<li>qemu is installed (+ init scripts)</li>
<li>bridge networking is set up and working</li>
</ul>
<p>If any of the above are not yet set up: <a href="https://wiki.gentoo.org/wiki/QEMU">https://wiki.gentoo.org/wiki/QEMU</a></p>
<p>First, download the VirtualBox image from <a href="https://www.univention.de/download/">https://www.univention.de/download/</a>.</p>
<p>For the KVM name, I use ucs-dc.</p>
<p>Next, we convert the image to qcow2:</p>
<pre>qemu-img convert -f vmdk -O qcow2 UCS-DC/UCS-DC-virtualbox-disk1.vmdk  UCS-DC_disk1.qcow2</pre>
<p>Create the init script symlink:</p>
<pre>cd /etc/init.d; ln -s qemu kvm.ucs-dc</pre>
<p>Then, in <code>/etc/conf.d</code>, copy <code>qemu.conf.example</code> to <code>kvm.ucs-dc</code>.</p>
<p>Check / change the following:</p>
<ol>
<li>change MACADDR (the file includes a command line to generate one). This comes first because if you forget it, you might spend hours, like me, trying to find out why your network is not working</li>
<li>QEMU_TYPE="x86_64"</li>
<li>NIC_TYPE=br</li>
<li>point DISKIMAGE=  to your qcow2 file</li>
<li>ENABLE_KVM=1 (believe me, running without KVM acceleration is noticeable)</li>
<li>adjust MEMORY (I set it to 2GB for the DC) and SMP (I set that to 2)</li>
<li>FOREGROUND="vnc=:&lt;port&gt;" so that you can connect to the console using VNC</li>
<li>check whether the other settings apply to you (OTHER_ARGS is quite useful, for example to add CD/USB emulation of a rescue disk)</li>
</ol>
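<p>Putting the checklist above together, a minimal /etc/conf.d/kvm.ucs-dc could look like the sketch below. The MAC address, disk path, memory size and VNC display are illustrative placeholders, not values from a real setup; check qemu.conf.example for the full list of options.</p>

```shell
# /etc/conf.d/kvm.ucs-dc -- sketch only; all values below are placeholders
MACADDR="52:54:00:12:34:56"   # generate your own (see qemu.conf.example)
QEMU_TYPE="x86_64"
NIC_TYPE=br
DISKIMAGE="/var/lib/kvm/UCS-DC_disk1.qcow2"
ENABLE_KVM=1
MEMORY="2048"                 # 2GB for the DC
SMP="2"
FOREGROUND="vnc=:1"           # VNC console on display :1
```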
<p>Run it with:</p>
<pre>/etc/init.d/kvm.ucs-dc start</pre>
<p>Connect with your favourite VNC client and set up your UCS server.</p>
<p>One thing I did on the fileserver instance (I run three UCS KVMs at the moment: DC, Backup-DC and File Server):</p>
<p>I created an LVM volume for the file share on the host and mapped it into the KVM; here is the config line:</p>
<pre>OTHER_ARGS="-drive format=raw,file=/dev/mapper/&lt;your volume device&gt;,if=virtio,aio=native,cache.direct=on"</pre>
<p>This works great for me, and I will probably add another volume for other shares later. This way, if I ever have VM problems, my files are simply on the LVM device and I can get to them easily (LVM snapshots could also be useful eventually).</p></div>
    </summary>
    <updated>2018-03-01T10:48:48Z</updated>
    <category term="Admin"/>
    <category term="Gentoo"/>
    <category term="Linux"/>
    <category term="Univention Corporate Server"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/tryton-setup-config/</id>
    <link href="https://blog.lordvan.com/blog/tryton-setup-config/" rel="alternate" type="text/html"/>
    <title>Tryton setup &amp; config</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Because I keep forgetting what I need to do (and in which order), here is a very quick overview:</p>
<p>Install trytond and the modules + dependencies (on Gentoo, add the Tryton overlay and just emerge them).</p>
<p>If you don't use SQLite, create a user (and database) for Tryton.</p>
<p>The Gentoo init scripts use /etc/conf.d/trytond (here's mine):</p>
<pre># Location of the configuration file<br/>CONFIG=/etc/tryton/trytond.conf<br/># Location of the logging configuration file<br/>LOGCONF=/etc/tryton/logging.conf<br/># The database names to load (space separated)<br/>DATABASES=tryton</pre>
<p>Since it took me a while to find a working logging.conf example, here is my working one:</p>
<pre>[formatters]<br/>keys=simple<br/><br/>[handlers]<br/>keys=rotate,console<br/><br/>[loggers]<br/>keys=root<br/><br/>[formatter_simple]<br/>format=%(asctime)s] %(levelname)s:%(name)s:%(message)s<br/>datefmt=%a %b %d %H:%M:%S %Y<br/><br/>[handler_rotate]<br/>class=handlers.TimedRotatingFileHandler<br/>args=('/var/log/trytond/trytond.log', 'D', 1, 120)<br/>formatter=simple<br/><br/>[handler_console]<br/>class=StreamHandler<br/>formatter=simple<br/>args=(sys.stdout,)<br/><br/>[logger_root]<br/>level=INFO<br/>handlers=rotate,console</pre>
<p>(Not going into details here, if you want to know more there are plenty of resources online)</p>
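<p>If you want to sanity-check a logging.conf like the one above before pointing trytond at it, Python's stdlib can load it directly. The sketch below writes the same configuration to a temporary directory (standing in for /etc/tryton and /var/log/trytond) and verifies that a test message lands in the log file:</p>

```python
# Quick sanity-check of a trytond-style logging.conf using only the stdlib.
# The log path below is a temporary stand-in for /var/log/trytond/trytond.log.
import logging
import logging.config
import os
import tempfile

tmpdir = tempfile.mkdtemp()
logfile = os.path.join(tmpdir, "trytond.log")

conf = """
[formatters]
keys=simple

[handlers]
keys=rotate,console

[loggers]
keys=root

[formatter_simple]
format=[%(asctime)s] %(levelname)s:%(name)s:%(message)s
datefmt=%a %b %d %H:%M:%S %Y

[handler_rotate]
class=handlers.TimedRotatingFileHandler
args=('{logfile}', 'D', 1, 120)
formatter=simple

[handler_console]
class=StreamHandler
formatter=simple
args=(sys.stdout,)

[logger_root]
level=INFO
handlers=rotate,console
""".replace("{logfile}", logfile)

conf_path = os.path.join(tmpdir, "logging.conf")
with open(conf_path, "w") as fh:
    fh.write(conf)

# fileConfig raises if sections, handlers or formatters are inconsistent,
# so this catches typos before trytond ever reads the file.
logging.config.fileConfig(conf_path, disable_existing_loggers=False)
logging.getLogger("trytond").info("logging config loaded")

with open(logfile) as fh:
    ok = "logging config loaded" in fh.read()
print("config OK" if ok else "config broken")
```

fileConfig fails immediately on a malformed file, which is quicker to debug than a failing trytond startup.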
<p>As for the config, I took an example I found online (from openSUSE) and modified it:</p>
<pre># /etc/tryton/trytond.conf - Configuration file for Tryton Server (trytond)<br/>#<br/># This file contains the most common settings for trytond (Defaults<br/># are commented).<br/># For more information read<br/># /usr/share/doc/trytond-&lt;version&gt;/<br/><br/>[database]<br/># Database related settings<br/><br/># The URI to connect to the SQL database (following RFC-3986)<br/># uri = database://username:password@host:port/<br/># (Internal default: sqlite:// (i.e. a local SQLite database))<br/>#<br/># PostgreSQL via Unix domain sockets<br/># (e.g. PostgreSQL database running on the same machine (localhost))<br/>#uri = postgresql://tryton:tryton@/<br/>#<br/>#Default setting for a local postgres database<br/>#uri = postgresql:///<br/><br/>#<br/># PostgreSQL via TCP/IP<br/># (e.g. connecting to a PostgreSQL database running on a remote machine or<br/># by means of md5 authentication. Needs PostgreSQL to be configured to accept<br/># those connections (pg_hba.conf).)<br/>#uri = postgresql://tryton:tryton@localhost:5432/<br/>uri = postgresql://tryton:mypassword@localhost:5432/<br/><br/># The path to the directory where the Tryton Server stores files.<br/># The server must have write permissions to this directory.<br/># (Internal default: /var/lib/trytond)<br/>path = /var/lib/tryton<br/><br/># Shall available databases be listed in the client?<br/>#list = True<br/><br/># The number of retries of the Tryton Server when there are errors<br/># in a request to the database<br/>#retry = 5<br/><br/># The primary language, that is used to store entries in translatable<br/># fields into the database.<br/>#language = en_US<br/>language = de_AT<br/><br/>[ssl]<br/># SSL settings<br/># Activation of SSL for all available protocols.<br/># Uncomment the following settings for key and certificate<br/># to enable SSL.<br/><br/># The path to the private key<br/>#privatekey = /etc/ssl/private/ssl-cert-snakeoil.key<br/><br/># The path to the certificate<br/>#certificate = /etc/ssl/certs/ssl-cert-snakeoil.pem<br/><br/>[jsonrpc]<br/># Settings for the JSON-RPC network interface<br/><br/># The IP/host and port number of the interface<br/># (Internal default: localhost:8000)<br/>#<br/># Listen on all interfaces (IPv4)<br/><br/>listen = 0.0.0.0:8000<br/><br/>#<br/># Listen on all interfaces (IPv4 and IPv6)<br/>#listen = [::]:8000<br/><br/># The hostname for this interface<br/>#hostname =<br/><br/># The root path to retrieve data for GET requests<br/>#data = jsondata<br/><br/>[xmlrpc]<br/># Settings for the XML-RPC network interface<br/><br/># The IP/host and port number of the interface<br/>#listen = localhost:8069<br/><br/>[webdav]<br/># Settings for the WebDAV network interface<br/><br/># The IP/host and port number of the interface<br/>#listen = localhost:8080<br/>listen = 0.0.0.0:8080<br/><br/>[session]<br/># Session settings<br/><br/># The time (in seconds) until an inactive session expires<br/>timeout = 3600<br/><br/># The server administration password used by the client for<br/># the execution of database management tasks. It is encrypted<br/># using the Unix crypt(3) routine. A password can be<br/># generated using the following command line (on one line):<br/># $ python -c 'import getpass,crypt,random,string; \<br/># print crypt.crypt(getpass.getpass(), \<br/># "".join(random.sample(string.ascii_letters + string.digits, 8)))'<br/># Example password with 'admin'<br/>#super_pwd = jkUbZGvFNeugk<br/>super_pwd = &lt;your pwd&gt;<br/><br/><br/>[email]<br/># Mail settings<br/><br/># The URI to connect to the SMTP server.<br/># Available protocols are:<br/># - smtp: simple SMTP<br/># - smtp+tls: SMTP with STARTTLS<br/># - smtps: SMTP with SSL<br/>#uri = smtp://localhost:25<br/>uri = smtp://localhost:25<br/><br/># The From address used by the Tryton Server to send emails.<br/>from = <a href="mailto:tryton@%3Cyour-domain.tld%3E">tryton@&lt;your-domain.tld&gt;</a><br/><br/>[report]<br/># Report settings<br/><br/># Unoconv parameters for connection to the unoconv service.<br/>#unoconv = pipe,name=trytond;urp;StarOffice.ComponentContext<br/><br/># Module settings<br/>#<br/># Some modules are reading configuration parameters from this<br/># configuration file. These settings only apply when those modules<br/># are installed.<br/>#<br/>#[ldap_authentication]<br/># The URI to connect to the LDAP server.<br/>#uri = ldap://host:port/dn?attributes?scope?filter?extensions<br/># A basic default URL could look like<br/>#uri = ldap://localhost:389/<br/><br/>[web]<br/># Path for the web-frontend<br/>#root = /usr/lib/node-modules/tryton-sao<br/>listen = 0.0.0.0:8000<br/>root = /usr/share/sao<br/><br/></pre>
<p>Set up the database tables, modules and superuser:</p>
<pre>trytond-admin -c /etc/tryton/trytond.conf -d tryton --all</pre>
<p>Should you forget to set your superuser password (or need to change it later):</p>
<pre>trytond-admin -c /etc/tryton/trytond.conf -d tryton -p</pre>
<p>It's now time to connect a client and enable &amp; configure the modules. Make sure to finish the basic configuration (including accounts); otherwise you will have to either start over or know exactly what needs to be set up accounting-wise!</p>
<ul>
<li>configure user(s)</li>
<li>enable account_eu (config and setup take a while)</li>
<li>set up company
<ul>
<li>create a party for it</li>
<li>assign currency and timezone</li>
</ul>
</li>
<li>set up the chart of accounts from the template (only do this manually if you really know what you, and Tryton, need!)
<ul>
<li>choose company &amp; pick the template (e.g. "Minimaler Kontenplan" (if using German))
<ul>
<li>set the defaults (only one per option usually)</li>
</ul>
</li>
</ul>
</li>
<li>after applying the above, activate and configure whatever else you need (sale, timesheet, ...)</li>
</ul>
<p>During this, you can watch trytond.log to see what happens behind the scenes (e.g. the country module takes a while).</p>
<p>How to add languages:</p>
<ul>
<li>Administration -&gt; Localization -&gt; Languages -&gt; add the language and set it to <br/>active and translatable  </li>
</ul>
<p>If you install new modules or languages, run trytond-admin ... --all again (see above).</p></div>
    </summary>
    <updated>2018-03-01T08:10:05Z</updated>
    <category term="ERP"/>
    <category term="Gentoo"/>
    <category term="Linux"/>
    <category term="Tryton"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>http://www.ultrabug.fr/?p=1984</id>
    <link href="http://www.ultrabug.fr/evaluating-scylladb-for-production-2-2/" rel="alternate" type="text/html"/>
    <title>Evaluating ScyllaDB for production 2/2</title>
    <summary>In my previous blog post, I shared 7 lessons on our experience in evaluating Scylla for production. Those lessons were focused on the setup and execution of the POC and I promised a more technical blog post with technical details and lessons learned from the POC, here it is! Before you read on, be mindful […]</summary>
    <content type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p><img alt="" class="aligncenter" height="300" src="https://www.scylladb.com/wp-content/uploads/800x300-blog-SLA-Trumps-IOPS-1.jpg" width="800"/></p>
<p>In my previous blog post, I shared <a href="https://www.ultrabug.fr/evaluating-scylladb-for-production-1-2/" rel="noopener" target="_blank">7 lessons on our experience in evaluating Scylla</a> for production.</p>
<p>Those lessons were focused on the setup and execution of the POC and I promised a more technical blog post with technical details and lessons learned from the POC, here it is!</p>
<p>Before you read on, be mindful that <strong>our POC was set up to test workloads</strong> and workflows, <strong>not to benchmark</strong> technologies. So even if the Scylla figures are great, they have not been the main drivers of the actual conclusion of the POC.</p>
<h1>Business context</h1>
<p>As a data-driven company working in the Marketing and Advertising industry, we help our clients make sense of multiple sources of data to build and improve their relationship with their customers and prospects.</p>
<p>Dealing with multiple sources of data is nothing new, but their volume has dramatically changed during the past decade. I will spare you the Big-Data-means-nothing term and the technical challenges that come with it, as you have already heard enough of that.</p>
<p>Still, it is clear that our line of business is tied to our capacity to <strong>mix and correlate a massive amount of different types of events</strong> (data sources/types) coming from various sources which all have their own identifiers (think primary keys):</p>
<ul>
<li>Web navigation tracking: identifier is a cookie that’s tied to the tracking domain (we have our own)</li>
<li>CRM databases: usually the email address or an internal account ID serves as the identifier</li>
<li>Partners’ digital platform: identifier is also a cookie tied to their tracking domain</li>
</ul>
<p>To try to make things simple, let’s take a concrete example:</p>
<p>You work for UNICEF and want to optimize their banner ads budget by targeting the donors of their last fundraising campaign.</p>
<ul>
<li>Your reference user database is composed of the donors who registered with their email address on the last campaign: main identifier is the email address.</li>
<li>To buy web display ads, you use an Ad Exchange partner such as AppNexus or DoubleClick (Google). From their point of view, users are seen as cookie IDs which are tied to their own domain.</li>
</ul>
<p>So you basically need to be able to translate an email address to a cookie ID for every partner you work with.</p>
<h1>Use case: ID matching tables</h1>
<p><strong>We operate and maintain huge ID matching tables</strong> for every partner and a great deal of our time is spent translating those IDs from one to another. In SQL terms, we are basically doing JOINs between a dataset and those ID matching tables.</p>
<ul>
<li>You select your reference population</li>
<li>You JOIN it with the corresponding ID matching table</li>
<li>You get a matched population that your partner can recognize and interact with</li>
</ul>
<p><img alt="" class="aligncenter wp-image-2035 size-large" height="377" src="https://www.ultrabug.fr/wordpress/wp-content/uploads/2018/02/matching_ids-1024x617.png" width="625"/></p>
<p>Those ID matching tables have a <strong>pretty high read AND write throughput</strong> because they’re updated and queried all the time.</p>
<p>Usual figures are <strong>JOINs between a 10+ Million dataset and 1.5+ Billion ID</strong> matching tables.</p>
<p>The reference query basically looks like this:</p>
<pre>SELECT count(m.partnerid)
FROM population_10M_rows AS p JOIN partner_id_match_400M_rows AS m
ON p.id = m.id</pre>
<h2>Current implementations</h2>
<p>We operate a lambda architecture where we handle real time ID matching using <strong>MongoDB</strong> and batch ones using <strong>Hive</strong> (Apache Hadoop).</p>
<p>The first downside to note is that it requires us to maintain <strong>two copies of every ID matching table.</strong> We also couldn’t choose one over the other because <strong>neither MongoDB nor Hive can sustain both the read/write lookup/update ratio while performing within the low latencies that we need</strong>.</p>
<p><strong>This is an operational burden</strong> and requires quite a bunch of engineering to ensure data consistency between different data stores.</p>
<h3>Production hardware overview:</h3>
<ul>
<li>MongoDB is running on a 15-node (5 shards) cluster
<ul>
<li>64GB RAM, 2 sockets, RAID10 SAS spinning disks, 10Gbps dual NIC</li>
</ul>
</li>
<li>Hive is running on 50+ YARN NodeManager instances
<ul>
<li>128GB RAM, 2 sockets, JBOD SAS spinning disks, 10Gbps dual NIC</li>
</ul>
</li>
</ul>
<h2>Target implementation</h2>
<p>The key question is simple: is there a technology out there that can sustain our ID matching tables workloads while maintaining consistently low upsert/write and lookup/read latencies?</p>
<p>Having one technology to handle both use cases would allow:</p>
<ul>
<li>Simpler data consistency</li>
<li>Operational simplicity and efficiency</li>
<li>Reduced costs</li>
</ul>
<h3>POC hardware overview:</h3>
<p>So we decided to find out if Scylla could be that technology. For this, we used three decommissioned machines that we had in the basement of our Paris office.</p>
<ul>
<li>2 DELL R510
<ul>
<li>19GB RAM, 2 socket 8 cores, RAID0 SAS spinning disks, 1Gbps NIC</li>
</ul>
</li>
<li>1 DELL R710
<ul>
<li>19GB RAM, 2 socket 4 cores, RAID0 SAS spinning disks, 1Gbps NIC</li>
</ul>
</li>
</ul>
<p>I know, these are not glamorous machines and they are even inconsistent in specs, but we still set up a 3-node Scylla cluster running <strong>Gentoo Linux</strong> with them.</p>
<p>Our take? If those three lousy machines can challenge or beat the production machines on our current workloads, then Scylla can seriously be considered for production.</p>
<h1>Step 1: Validate a schema model</h1>
<p>Once the POC document was complete and the ScyllaDB team understood what we were trying to do, we started iterating on the schema model using a query based modeling strategy.</p>
<p>So we wrote down and rated the questions that our model(s) should answer; they included things like:</p>
<ul>
<li>What are all our cookie IDs associated with the given partner ID?</li>
<li>What are all the cookie IDs associated with the given partner ID over the last N months?</li>
<li>What is the last cookie ID/date for the given partner ID?</li>
<li>What is the last date we have seen the given cookie ID / partner ID couple?</li>
</ul>
<p>As you can imagine, the reverse questions are also to be answered so ID translations can be done both ways (ouch!).</p>
<h2>Prototyping</h2>
<p>It is no news that I'm a Python addict, so I did all my prototyping using Python and the <strong>cassandra-driver</strong>.</p>
<p>I ended up using a <strong>test-driven data modelling</strong> strategy using <strong>pytest</strong>. I wrote tests on my dataset so I could concentrate on the model while making sure that all my questions were being answered correctly and consistently.</p>
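<p>As a sketch of that idea (names and data here are illustrative, not the actual dataset or schema): each question from the list above becomes a small test against an in-memory stand-in for the matching table, and only once those pass does the model get translated to CQL.</p>

```python
# Hypothetical sketch of test-driven data modelling: encode the questions
# the model must answer as plain assertions on a tiny in-memory dataset.
# pytest would collect functions like these; here they are invoked directly.
from datetime import datetime

# (partnerid, id, date) rows, mirroring an ids-by-partner matching table
rows = [
    ("partnerA", "cookie1", datetime(2018, 1, 10)),
    ("partnerA", "cookie2", datetime(2018, 2, 1)),
    ("partnerB", "cookie3", datetime(2018, 1, 5)),
]

def ids_for_partner(partnerid):
    # "What are all our cookie IDs associated with the given partner ID?"
    return {r[1] for r in rows if r[0] == partnerid}

def latest_id(partnerid):
    # "What is the last cookie ID/date for the given partner ID?"
    # Mirrors CLUSTERING ORDER BY (date DESC): newest first.
    match = sorted((r for r in rows if r[0] == partnerid),
                   key=lambda r: r[2], reverse=True)
    return match[0][1:] if match else None

def test_all_ids():
    assert ids_for_partner("partnerA") == {"cookie1", "cookie2"}

def test_latest():
    assert latest_id("partnerA") == ("cookie2", datetime(2018, 2, 1))

test_all_ids()
test_latest()
print("model tests passed")
```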
<h2>Schema</h2>
<p>In our case, we ended up with three denormalized tables to answer all the questions we had. To answer the first three questions above, you could use the schema below:</p>
<pre>CREATE TABLE IF NOT EXISTS ids_by_partnerid(
 partnerid text,
 id text,
 date timestamp,
 PRIMARY KEY ((partnerid), date, id)
 )
 WITH CLUSTERING ORDER BY (date DESC)</pre>
<h2>Note on clustering key ordering</h2>
<p>One important thing I learned while validating the model concerns the internals of Cassandra’s file format; it is what led to the choice of a descending (DESC) order on the date clustering key, as you can see above.</p>
<p>If your main query use case is to look up the <strong>latest value</strong> in a history-like table design like ours, then make sure to change the default ASC order of your clustering key to DESC. This ensures that the latest values (rows) are stored at the beginning of the sstable file, effectively reducing the read latency when the row is not in cache!</p>
<p>Let me quote Glauber Costa’s detailed explanation on this:</p>
<blockquote><p><em>Basically in Cassandra’s file format, the index points to an entire partition (for very large partitions there is a hack to avoid that, but the logic is mostly the same). So if you want to read the first row, that’s easy: you get the index to the partition and read the first row. If you want to read the last row, then you get the index to the partition and do a linear scan to the next.</em></p></blockquote>
<p>This is the kind of learning you can only get from experts like Glauber and that can justify the whole POC on its own!</p>
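<p>The effect can be pictured with a toy Python model (a sketch of the idea, not of Scylla’s actual internals): with rows stored newest-first, a latest-value read touches only the first row, while an ascending layout forces a scan across the partition.</p>

```python
# Toy model of one partition's rows on disk.
# With CLUSTERING ORDER BY (date DESC) the newest row sits first,
# so a "latest value" read stops immediately.
dates_desc = [20180201, 20180115, 20180110]  # stored newest-first (DESC)

def latest_desc(partition):
    # the index points at the start of the partition: a single read
    return partition[0]

def latest_asc(partition):
    # same data stored ASC: linear scan down to the last row
    last = None
    for row in partition:
        last = row
    return last

# both layouts answer the same question, but DESC does it in one step
assert latest_desc(dates_desc) == latest_asc(sorted(dates_desc)) == 20180201
```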
<h1>Step 2: Set up scylla-grafana-monitoring</h1>
<p>As I said before, make sure to set up and run the <a href="https://github.com/scylladb/scylla-grafana-monitoring" rel="noopener" target="_blank">scylla-grafana-monitoring</a> project before running your test workloads. This easy-to-run solution will be of great help in <strong>understanding the performance</strong> of your cluster and in <strong>tuning your workload for optimal performance</strong>.</p>
<p><img alt="" class="aligncenter wp-image-2040 size-medium" height="180" src="https://www.ultrabug.fr/wordpress/wp-content/uploads/2018/02/2018-02-25-140230_532x319_scrot-300x180.png" width="300"/></p>
<p>If you can, also discuss with the ScyllaDB team to allow them to access the Grafana dashboard. This will be very valuable since they know where to look better than we usually do… I gained a lot of insight thanks to this!</p>
<h2>Note on scrape interval</h2>
<p>I advise you to <strong>lower the Prometheus scrape interval</strong> to have a shorter and finer sampling of your metrics. This will allow your dashboard to be more reactive when you start your test workloads.</p>
<p>For this, change the <strong>prometheus/prometheus.yml</strong> file like this:</p>
<pre>scrape_interval: 2s # Scrape targets every 2 seconds (5s default)
scrape_timeout: 1s # Timeout before trying to scrape a target again (4s default)</pre>
<h2>Test your monitoring</h2>
<p>Before going any further, <strong>I strongly advise you to run a stress test on your POC cluster</strong> using the <strong>cassandra-stress</strong> tool and share the results and their monitoring graphs with the ScyllaDB team.</p>
<p>This will give you a common understanding of the general performances of your cluster as well as help in outlining any obvious misconfiguration or hardware problem.</p>
<h2>Key graphs to look at</h2>
<p>There are a lot of interesting graphs so I’d like to share the ones that I have been mainly looking at. Remember that depending on your test workloads, some other graphs may be more relevant for you.</p>
<ul>
<li><strong>number of open connections</strong></li>
</ul>
<p>You’ll want to see a steady and high enough number of open connections, which proves that your clients are pushed to their maximum. (At the time of testing this graph was not in Grafana and you had to add it yourself.)</p>
<ul>
<li><strong>cache hits / misses</strong></li>
</ul>
<p>Depending on your reference dataset, you’ll obviously see that cache hits and misses have a direct correlation with disk I/O and overall performance. Running your test workloads multiple times should trigger higher cache hits if your RAM is big enough.</p>
<ul>
<li><strong>per shard/node distribution</strong></li>
</ul>
<p>The <em>Requests Served per shard</em> graph should display a nicely distributed load between your shards and nodes so that you’re sure that you’re getting the best out of your cluster.</p>
<p>The same is true for almost every other “per shard/node” graph: you’re looking for evenly distributed load.</p>
<ul>
<li><strong>sstable reads</strong></li>
</ul>
<p>Directly linked to your disk performance, you’ll want to make sure that you have almost no queued sstable reads.</p>
<h1>Step 3: Get your reference data and metrics</h1>
<p>We obviously need to have <strong>some reference metrics</strong> on our current production stack so we can compare them with the results on our POC Scylla cluster.</p>
<p>Whether you choose to use your current production machines or set up a similar stack on the side to run your test workloads is up to you. We chose to run the vast majority of our tests on our current production machines to be as close to our real workloads as possible.</p>
<h2>Prepare a reference dataset</h2>
<p>During your work on the POC document, you should have detailed the usual <strong>data cardinality and volume</strong> you work with. Use this information to set up a reference dataset that you can use on all of the platforms that you plan to compare.</p>
<p>In our case, we chose a 10 million-row reference dataset that we JOINed with a 400+ million-row extract of an ID matching table. Those volumes seemed easy enough to work with and allowed a nice ratio for memory-bound workloads.</p>
<h2>Measure on your current stack</h2>
<p>Then it’s time to load this reference dataset on your current platforms.</p>
<ul>
<li>If you run a MongoDB cluster like we do, <strong>make sure to shard and index the dataset</strong> just like you do on the production collections.</li>
<li>On Hive, make sure to <strong>respect the storage file format</strong> of your current implementations <strong>as well as their partitioning</strong>.</li>
</ul>
<p>If you chose to run your test workloads on your production machines, make sure to run them multiple times and at different hours of the day and night so you can correlate the measures with the load on the cluster at the time of the tests.</p>
<h2>Reference metrics</h2>
<p>For the sake of simplicity I’ll focus on the Hive-only batch workloads. I performed a count of the JOIN between the dataset and the ID matching table using Spark 2, and I also ran the JOIN as a simple Hive query through Beeline.</p>
<p>I used the following definitions for the reference load:</p>
<ul>
<li><strong>IDLE</strong>: YARN available containers and free resources are optimal, parallelism is very limited</li>
<li><strong>NORMAL</strong>: YARN sustains some casual load, parallelism exists but we are not yet bound by anything</li>
<li><strong>HIGH</strong>: YARN has pending containers, parallelism is high and applications have to wait for containers before executing</li>
</ul>
<p>There’s always an error margin in the results, and I found no significant difference between the Spark 2 and Beeline results, so I stuck with a single set of numbers:</p>
<ul>
<li>IDLE: 2 minutes, 15 seconds</li>
<li>NORMAL: 4 minutes</li>
<li>HIGH: 15 minutes</li>
</ul>
<h1>Step 4: Get Scylla in the mix</h1>
<p>It’s finally time to do your best to break Scylla, or at least to push it to its limits on your hardware… But most importantly, you’ll want to understand what those limits are for your test workloads, and to outline all the tuning required on the client side to reach them.</p>
<p>Regarding the results, we have to differentiate two cases:</p>
<ol>
<li>The Scylla cluster is fresh and its <strong>cache is empty</strong> (cold start): performance is mostly <strong>Disk I/O bound</strong></li>
<li>The Scylla cluster has been running some test workload already and its <strong>cache is hot</strong>: performance is mostly <strong>Memory bound </strong>with some Disk I/O depending on the size of your RAM</li>
</ol>
<h2>Spark 2 / Scala test workload</h2>
<p>Here I used Scala (yes, I did) and DataStax’s <a href="https://github.com/datastax/spark-cassandra-connector" rel="noopener" target="_blank"><strong>spark-cassandra-connector</strong></a> so I could use the magic <strong>joinWithCassandraTable</strong> function.</p>
<ul>
<li>spark-cassandra-connector-2.0.1-s_2.11.jar</li>
<li>Java 7</li>
</ul>
<p>I had to stick with version 2.0.1 of the spark-cassandra-connector because newer versions (2.0.5 at the time of testing) performed badly for no apparent reason. The ScyllaDB team couldn’t help with this.</p>
<p>You can interact with your test workload using the spark2-shell:</p>
<pre>spark2-shell --jars jars/commons-beanutils_commons-beanutils-1.9.3.jar,jars/com.twitter_jsr166e-1.1.0.jar,jars/io.netty_netty-all-4.0.33.Final.jar,jars/org.joda_joda-convert-1.2.jar,jars/commons-collections_commons-collections-3.2.2.jar,jars/joda-time_joda-time-2.3.jar,jars/org.scala-lang_scala-reflect-2.11.8.jar,jars/spark-cassandra-connector-2.0.1-s_2.11.jar</pre>
<p>Then use the following Scala imports:</p>
<pre>// main connector import
import com.datastax.spark.connector._

// the joinWithCassandraTable failed without this (dunno why, I'm no Scala guy)
import com.datastax.spark.connector.writer._
implicit val rowWriter = SqlRowWriter.Factory</pre>
<p>Finally I could run my test workload to select the data from Hive and JOIN it with Scylla easily:</p>
<pre>val df_population = spark.sql("SELECT id FROM population_10M_rows")
val join_rdd = df_population.rdd.repartitionByCassandraReplica("test_keyspace", "partner_id_match_400M_rows").joinWithCassandraTable("test_keyspace", "partner_id_match_400M_rows")
val joined_count = join_rdd.count()</pre>
<h3>Notes on tuning spark-cassandra-connector</h3>
<p>I experienced <strong>pretty crappy performance at first</strong>. Thanks to the easy Grafana monitoring, I could see that Scylla was not the bottleneck at all and that I instead had trouble putting any real load on it.</p>
<p>So I engaged in a thorough tuning of the spark-cassandra-connector with the help of Glauber… It was pretty painful, but we finally made it and found the best parameters to push the load on the Scylla cluster close to 100% when running the test workloads.</p>
<p>This tuning was done in the <strong>spark-defaults.conf</strong> file:</p>
<ul>
<li>have a <strong>fixed set of executors</strong> and boost their overhead memory</li>
</ul>
<p>This will increase test results reliability by making sure you always have a reserved number of available workers at your disposal.</p>
<pre>spark.dynamicAllocation.enabled=false
spark.executor.instances=30
spark.yarn.executor.memoryOverhead=1024</pre>
<ul>
<li>set the <strong>split size to 1MB</strong></li>
</ul>
<p>The default is 8MB, but Scylla uses a split size of 1MB, so you’ll see a great boost in performance and stability by aligning this setting with it.</p>
<pre>spark.cassandra.input.split.size_in_mb=1</pre>
<ul>
<li>align <strong>driver timeouts with server timeouts</strong></li>
</ul>
<p>It is advised to make sure that your read request timeouts are the same on the driver and the server so that neither side gets stalled waiting for a timeout to happen on the other. You can do the same with write timeouts if your test workloads are write intensive.</p>
<p>/etc/scylla/scylla.yaml</p>
<pre>read_request_timeout_in_ms: 150000</pre>
<p>spark-defaults.conf</p>
<pre>spark.cassandra.connection.timeout_ms=150000
spark.cassandra.read.timeout_ms=150000

# optional if you want to fail / retry faster for HA scenarios
spark.cassandra.connection.reconnection_delay_ms.max=5000
spark.cassandra.connection.reconnection_delay_ms.min=1000
spark.cassandra.query.retry.count=100</pre>
<ul>
<li>adjust your <strong>reads per second</strong> rate</li>
</ul>
<p>Last but surely not least, you will need to experiment to find the best value for this setting yourself, since it has a direct impact on the load on your Scylla cluster. You should be looking to push your POC cluster to almost 100% load.</p>
<pre>spark.cassandra.input.reads_per_sec=6666</pre>
<p>As I said before, I could only get this to work perfectly using the 2.0.1 version of the spark-cassandra-connector driver. But then it worked very well and with great speed.</p>
<h3>Spark 2 results</h3>
<p>Once tuned, the best results I was able to reach on this hardware are listed below. It’s interesting to see that with spinning disks, the cold start result can compete with the results of a heavily loaded Hadoop cluster where pending containers and parallelism are knocking down its performance.</p>
<ul>
<li><strong>hot cache:</strong> 2min</li>
<li><strong>cold cache:</strong> 12min</li>
</ul>
<p>Wow! <strong>Those three refurbished machines can compete with our current production machines and implementations</strong>; they can even match an idle Hive cluster of medium size!</p>
<h2>Python test workload</h2>
<p>I couldn’t conclude on a Scala/Spark 2 only test workload. So I obviously went back to my language of choice, <strong>Python</strong>, only to discover to my disappointment that there is no <strong>joinWithCassandraTable</strong> equivalent available in <strong>pyspark</strong>…</p>
<p>I tried some projects claiming otherwise, with no success, until I changed my mind and decided that I probably didn’t need Spark 2 at all. So <strong>I went on the crazy quest of beating Spark 2’s performance using a pure Python implementation</strong>.</p>
<p>This basically means that instead of having a JOIN-like helper, I had to do a massive number of single “id -&gt; partnerid” lookups. Simple but greatly inefficient, you say? Really?</p>
<p>When I broke down the pieces, I was left with the following steps to implement and optimize:</p>
<ul>
<li>Load the 10M rows worth of population data from Hive</li>
<li>For every row, lookup the corresponding partnerid in the ID matching table from Scylla</li>
<li>Count the resulting number of matches</li>
</ul>
<p>The main problem in competing with Spark 2 is that it is a distributed framework while Python by itself is not, so <strong>you can’t possibly imagine outperforming Spark 2 with a single machine</strong>.</p>
<p>However, let’s remember that Spark 2 is shipped and run on executors using YARN, so we are firing up JVMs and dispatching containers all the time. This is a quite expensive process that we have a chance to avoid using Python!</p>
<p>So what <strong>I needed was a distributed computation framework</strong> that would allow me to load data in a partitioned way and run the lookups on all the partitions in parallel before merging the results. In Python, this framework exists and is named <strong>Dask!</strong></p>
<p>You will obviously need to deploy a Dask topology (that’s easy and <a href="https://dask.pydata.org/en/latest/" rel="noopener" target="_blank">well documented</a>) with a number of Dask workers comparable to your number of Spark 2 executors (30 in my case).</p>
<p>The corresponding Python <a href="https://gist.github.com/ultrabug/8a13fe2ef7a616aa7301c3e4e88eda13" rel="noopener" target="_blank">code samples are here</a>.</p>
<h3>Hive + Scylla results</h3>
<p>Reading the population IDs from Hive, the workload can be split and executed concurrently on multiple Dask workers:</p>
<ul>
<li>read the 10M population rows from Hive in a partitioned manner</li>
<li>for each partition (slice of the 10M), query Scylla to look up the possibly matching partnerid</li>
<li>create a dataframe from the resulting matches</li>
<li>gather back all the dataframes and merge them</li>
<li>count the number of matches</li>
</ul>
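<p>The steps above can be sketched in pure Python, using concurrent.futures as a stand-in for the Dask workers and an in-memory dict as a stand-in for the Scylla lookups (all names here are illustrative, not the actual project code):</p>

```python
# Simplified stand-in for the Dask + Scylla pipeline: split the population,
# run the partnerid lookups on each partition in parallel, merge and count.
from concurrent.futures import ThreadPoolExecutor

# mock ID matching table (would be Scylla) and population rows (would be Hive)
ID_MATCH = {"id-%d" % i: "partner-%d" % (i % 100) for i in range(10000)}
POPULATION = ["id-%d" % i for i in range(0, 20000, 2)]

def lookup_partition(partition):
    """Look up the matching partnerid for every id in one partition."""
    return [(i, ID_MATCH[i]) for i in partition if i in ID_MATCH]

def count_matches(population, workers=4):
    # split the population into one slice per worker
    size = len(population) // workers + 1
    partitions = [population[i:i + size] for i in range(0, len(population), size)]
    # run the lookups on all partitions in parallel
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lookup_partition, partitions)
    # gather back all partial results and count the matches
    return sum(len(r) for r in results)

print(count_matches(POPULATION))  # 5000 of the 10000 population ids match
```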
<p>The results showed that <strong>it is possible to compete with Spark 2 with Dask</strong>:</p>
<ul>
<li>hot cache: <strong>2min </strong>(rounded up)</li>
<li>cold cache: <strong>6min</strong></li>
</ul>
<p>Interestingly, those almost two minutes can be broken down like this:</p>
<ul>
<li>distributed read data from Hive: 50s</li>
<li>distributed lookup from Scylla: 60s</li>
<li>merge + count: 10s</li>
</ul>
<p>This meant that if I could cut down the reading of data from Hive <strong>I could go even faster</strong>!</p>
<h3>Parquet + Scylla results</h3>
<p>Going further on my previous remark <strong>I decided to get rid of Hive</strong> and put the 10M rows population data in a <strong>parquet file</strong> instead. I ended up trying to find out the most efficient way to read and load a parquet file from HDFS.</p>
<p>My conclusion so far is that you can’t beat the amazing <strong>libhdfs3 + pyarrow</strong> combo. It is faster to load everything on a single machine than to load from Hive on multiple ones!</p>
<p>The results showed that I could almost get rid of a whole minute in the total process, <strong>effectively and easily beating Spark 2</strong>!</p>
<ul>
<li>hot cache: <strong>1min 5s</strong></li>
<li>cold cache: <strong>5min</strong></li>
</ul>
<h3>Notes on the Python <strong>cassandra-driver</strong></h3>
<p>Tests using Python showed robust queries with far fewer failures than the spark-cassandra-connector, especially during the cold start scenario.</p>
<ul>
<li>The usage of <b>execute_concurrent()</b> provides a clean and linear interface to submit a large number of queries while providing some level of concurrency control</li>
<li>Increasing the <b>concurrency</b> parameter from 100 to <b>512</b> provided additional throughput, but increasing it further seemed useless</li>
<li><b>Protocol version 4</b> delegates the tuning of connection requests/numbers to some sort of auto-configuration. All attempts to hand-tune it (by lowering the protocol version to 2) failed to achieve higher throughput</li>
<li>Installing <b>libev</b> on the system allows the cassandra-driver to use it for concurrency handling instead of asyncore, with a somewhat lower load footprint on the worker node but no noticeable change in throughput</li>
<li>When reading a parquet file stored on HDFS, the <b>hdfs3 + pyarrow</b> combo provides an insane speed (less than 10s to fully load 10M rows of a single column)</li>
</ul>
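<p>The concurrency-control idea behind execute_concurrent() can be pictured with a pure-Python sketch (a stand-in for illustration, not the driver’s implementation): a fixed-size worker pool bounds how many queries are in flight at once.</p>

```python
# Illustration of bounded-concurrency query submission, the idea behind
# execute_concurrent()'s `concurrency` parameter (fake_query stands in
# for a real session.execute() call).
from concurrent.futures import ThreadPoolExecutor

def fake_query(arg):
    # stand-in for one CQL statement execution
    return arg * 2

def execute_with_concurrency(args, concurrency=64):
    # at most `concurrency` "queries" are in flight at any time
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(fake_query, args))

results = execute_with_concurrency(range(1000))
assert results == [i * 2 for i in range(1000)]
```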
<h1>Step 5: Play with High Availability</h1>
<p>I was quite disappointed and surprised by the lack of maturity of the Cassandra community on this critical topic. Maybe the main reason is that the cassandra-driver allows for too many levels of configuration and strategies.</p>
<p>I wrote this simple bash script to allow me to <strong>simulate node failures. </strong>Then I could play with handling those failures and retries on the Python client code.</p>
<pre>#!/bin/bash

iptables -t filter -X
iptables -t filter -F

ip="0.0.0.0/0"
for port in 9042 9160 9180 10000 7000; do
	iptables -t filter -A INPUT -p tcp --dport ${port} -s ${ip} -j DROP
	iptables -t filter -A OUTPUT -p tcp --sport ${port} -d ${ip} -j DROP
done

while true; do
	trap break INT
	clear
	iptables -t filter -vnL
	sleep 1
done

iptables -t filter -X
iptables -t filter -F
iptables -t filter -vnL
</pre>
<p>This topic is worth covering in more detail in a dedicated blog post, which I shall write later on with code samples.</p>
<h1>Concluding the evaluation</h1>
<p>I’m happy to say that <strong>Scylla passed our production evaluation</strong> and will soon go live on our infrastructure!</p>
<p>As I said at the beginning of this post, the conclusion of the evaluation has not been driven by the good figures we got out of our test workloads. Those are no benchmarks and never pretended to be, but we could still prove that performance was solid enough not to be a blocker in the adoption of Scylla.</p>
<p>Instead we decided on the following points of interest (in no particular order):</p>
<ul>
<li>data consistency</li>
<li>production reliability</li>
<li>datacenter awareness</li>
<li>ease of operation</li>
<li>infrastructure rationalisation</li>
<li>developer friendliness</li>
<li>costs</li>
</ul>
<p><img alt="" class="alignnone wp-image-2043 size-thumbnail" height="150" src="https://www.ultrabug.fr/wordpress/wp-content/uploads/2018/02/mascot-linux-love-1-150x150.png" width="150"/></p>
<p>On the side, I tried Scylla on two other use cases which will be interesting to follow up on later to displace MongoDB again…</p>
<h1>Moving to production</h1>
<p>Since our relationship was great <strong>we also decided to partner with ScyllaDB and support them by subscribing to their enterprise offerings</strong>. They also accepted to support us using <strong>Gentoo Linux</strong>!</p>
<p>We are starting with a three-node heavy-duty cluster:</p>
<ul>
<li>DELL R640
<ul>
<li>dual socket 2.6GHz 14C, 512GB RAM, Samsung 17xxx NVMe 3.2 TB</li>
</ul>
</li>
</ul>
<p>I’m eager to see ScyllaDB building up and will continue to help with my modest contributions. <strong>Thanks again to the ScyllaDB team</strong> for their patience and support during the POC!</p></div>
    </content>
    <updated>2018-02-28T10:32:24Z</updated>
    <category term="Linux"/>
    <category term="gentoo"/>
    <category term="nosql"/>
    <category term="scylla"/>
    <author>
      <name>ultrabug</name>
    </author>
    <source>
      <id>http://www.ultrabug.fr</id>
      <link href="http://www.ultrabug.fr/tag/gentoo-2/feed/" rel="self" type="application/rss+xml"/>
      <link href="http://www.ultrabug.fr" rel="alternate" type="text/html"/>
      <title>gentoo – Ultrabug</title>
      <updated>2019-04-17T09:02:33Z</updated>
    </source>
  </entry>

  <entry xml:lang="en">
    <id>https://blog.lordvan.com/blog/mezzanine-with-mod_wsgi-on-apache-on-gentoo/</id>
    <link href="https://blog.lordvan.com/blog/mezzanine-with-mod_wsgi-on-apache-on-gentoo/" rel="alternate" type="text/html"/>
    <title>Mezzanine with mod_wsgi in virtualenv on apache (on Gentoo)</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>So .. since this page/blog is running on <a href="http://mezzanine.jupo.org/">Mezzanine </a>I thought I'd share what I had to do to get it to work.</p>
<p>First off I did this on Gentoo, but in general stuff should apply to most other distributions anyway.</p>
<p>Versions I used:</p>
<ul>
<li>Apache 2.4.27</li>
<li>python 3.6.3</li>
<li>mod_wsgi 4.5.13</li>
<li>postgresql 9.4</li>
</ul>
<p>Don't forget to enable apache loading mod_wsgi (on Gentoo add "-D WSGI " to APACHE2_OPTS in /etc/conf.d/apache).</p>
<p>I am running Mezzanine on its own virtualhost.</p>
<p>For the rest of this I go with the assumption that the above is installed and configured correctly.</p>
<p>Quick &amp; dirty Mezzanine install:</p>
<pre>python3.6 -m venv myenv<br/>source myenv/bin/activate<br/>pip install mezzanine south psycopg2</pre>
<p>Of course replace psycopg2 with whatever Database driver you intend to use.</p>
<p>Here is what I installed (<code>pip freeze</code> output):</p>
<pre>beautifulsoup4==4.6.0<br/>bleach==2.1.2<br/>certifi==2018.1.18<br/>chardet==3.0.4<br/>Django==1.10.8<br/>django-contrib-comments==1.8.0<br/>filebrowser-safe==0.4.7<br/>future==0.16.0<br/>grappelli-safe==0.4.7<br/>html5lib==1.0.1<br/>idna==2.6<br/>Mezzanine==4.2.3<br/>oauthlib==2.0.6<br/>Pillow==5.0.0<br/>psycopg2==2.7.4<br/>pytz==2018.3<br/>requests==2.18.4<br/>requests-oauthlib==0.8.0<br/>six==1.11.0<br/>South==1.0.2<br/>tzlocal==1.5.1<br/>urllib3==1.22<br/>webencodings==0.5.1</pre>
<p>Then create your project:</p>
<pre>mezzanine-project &lt;projectname&gt;</pre>
<p>Then edit your &lt;project&gt;/local_settings.py to use the correct database settings - and don't forget to set ALLOWED_HOSTS (which I forgot at first).</p>
<pre>chmod +x &lt;project&gt;/manage.py<br/>./&lt;project&gt;/manage.py createdb</pre>
<p>This should create the DB, superuser, etc. Then I ran</p>
<pre>./&lt;project&gt;/manage.py collectstatic</pre>
<p>You can test it with</p>
<pre>./&lt;project&gt;/manage.py runserver </pre>
<p>If you need to change the ip/port:</p>
<pre>./&lt;project&gt;/manage.py runserver &lt;ip&gt;:&lt;port&gt;</pre>
<p>Make sure stuff works ok, set up what you want to setup first or do your development.</p>
<p>For deploying with mod_wsgi .. here's a snippet from my apache config (I run it on https *only* and just redirect http to the https version):</p>
<pre>&lt;VirtualHost &lt;your_ip&gt;:443&gt;<br/>ServerName your.domain.name<br/>ServerAdmin your_email@your_domain<br/>ErrorLog /path/to/your/logs/your.domain.name_error.log<br/>CustomLog /path/to/your/logs/your.domain.name_access.log combined<br/><br/>LogLevel Info<br/><br/>SSLEngine on<br/>SSLCertificateFile /path/to/your/sslcerts/cert.pem<br/>SSLCertificateKeyFile /path/to/your/sslcerts/privkey.pem<br/>SSLCertificateChainFile /path/to/your/sslcerts/fullchain.pem<br/><br/>WSGIDaemonProcess mymezz home=/path/to/your/MezzanineInstall/Mezzanine/myenv processes=1 threads=15 display-name=[wsgi-mymezz]httpd python-path=/path/to/your/MezzanineInstall/Mezzanine/mezzproject:/path/to/your/MezzanineInstall/Mezzanine/myenv/lib64/python3.6/site-packages<br/><br/>WSGIProcessGroup mymezz<br/>WSGIApplicationGroup %{GLOBAL}<br/><br/>WSGIScriptAlias / /path/to/your/MezzanineInstall/Mezzanine/mezzproject/apache.wsgi<br/>Alias /static /path/to/your/MezzanineInstall/Mezzanine/mezzproject/static<br/>Alias /robots.txt /path/to/your/MezzanineInstall/Mezzanine/htdocs_static/robots.txt<br/>Alias /favicon.ico /path/to/your/MezzanineInstall/Mezzanine/htdocs_static/favicon.ico<br/><br/>&lt;Directory /path/to/your/MezzanineInstall/Mezzanine/mezzproject&gt;<br/>  Options -Indexes +FollowSymLinks +MultiViews<br/>  php_flag engine off<br/>  &lt;IfModule mod_authz_host.c&gt;<br/>    Require all granted<br/>  &lt;/IfModule&gt;<br/>&lt;/Directory&gt;<br/><br/>&lt;Directory /path/to/your/MezzanineInstallMezzanine/mezzproject/static&gt;<br/>   Options -Indexes +FollowSymLinks +MultiViews -ExecCGI<br/>   php_flag engine off<br/>   RemoveHandler .cgi .php .php3 .php4 .phtml .pl .py .pyc .pyo<br/>   AllowOverride None<br/>   &lt;IfModule mod_authz_host.c&gt;<br/>      Require all granted<br/>   &lt;/IfModule&gt;<br/>&lt;/Directory&gt;<br/><br/>&lt;/VirtualHost&gt;</pre>
<p>Should all be pretty self-explanatory (maybe I'll elaborate at a later point, but I don't have that much time now and I'd rather get it finished).</p>
<p>Here's the apache.wsgi file:</p>
<pre>from __future__ import unicode_literals<br/>import os, sys, site<br/><br/>site.addsitedir('/path/to/your/MezzanineInstall/myenv/lib64/python3.6/site-packages')<br/>activate_this = os.path.expanduser('/path/to/your/MezzanineInstall/myenv/bin/activate_this.py')<br/>exec(open(activate_this, 'r').read(), dict(__file__=activate_this))<br/><br/>PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))<br/>sys.path.append(os.path.join(PROJECT_ROOT, ".."))<br/>settings_module = "%s.settings" % PROJECT_ROOT.split(os.sep)[-1]<br/>os.environ["DJANGO_SETTINGS_MODULE"] = settings_module<br/><br/>from django.core.wsgi import get_wsgi_application<br/>application = get_wsgi_application()</pre>
<p>I found activate_this.py at <a href="https://github.com/pypa/virtualenv/blob/master/virtualenv_embedded/activate_this.py">https://github.com/pypa/virtualenv/blob/master/virtualenv_embedded/activate_this.py </a>(since with python3 execfile wasn't really working for me):</p>
<pre>"""By using execfile(this_file, dict(__file__=this_file)) you will<br/>activate this virtualenv environment.<br/>This can be used when you must use an existing Python interpreter, not<br/>the virtualenv bin/python<br/>"""<br/><br/>try:<br/>    __file__<br/>except NameError:<br/>    raise AssertionError(<br/>        "You must run this like execfile('path/to/activate_this.py', dict(__file__='path/to/activate_this.py'))")<br/>import sys<br/>import os<br/><br/>old_os_path = os.environ.get('PATH', '')<br/>os.environ['PATH'] = os.path.dirname(os.path.abspath(__file__)) + os.pathsep + old_os_path<br/>base = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))<br/>if sys.platform == 'win32':<br/>    site_packages = os.path.join(base, 'Lib', 'site-packages')<br/>else:<br/>    site_packages = os.path.join(base, 'lib', 'python%s' % sys.version[:3], 'site-packages')<br/>prev_sys_path = list(sys.path)<br/>import site<br/>site.addsitedir(site_packages)<br/>sys.real_prefix = sys.prefix<br/>sys.prefix = base<br/># Move the added items to the front of the path:<br/>new_sys_path = []<br/>for item in list(sys.path):<br/>    if item not in prev_sys_path:<br/>        new_sys_path.append(item)<br/>        sys.path.remove(item)<br/>sys.path[:0] = new_sys_path</pre>
<p>That would be the Apache + mod_wsgi config (make sure to replace the Python version number if you don't use 3.6).</p>
<p><strong>Make sure that apache has the correct permissions to all the files too btw ;)</strong></p></div>
    </summary>
    <updated>2018-02-27T14:42:12Z</updated>
    <category term="Django"/>
    <category term="Gentoo"/>
    <category term="Linux"/>
    <category term="Python"/>
    <author>
      <name>lordvan</name>
    </author>
    <source>
      <id>https://blog.lordvan.com/blog/</id>
      <category term="Admin"/>
      <category term="Anime"/>
      <category term="CAD"/>
      <category term="Cloud"/>
      <category term="DBMail"/>
      <category term="Database"/>
      <category term="Development"/>
      <category term="Django"/>
      <category term="Drink"/>
      <category term="ERP"/>
      <category term="Events"/>
      <category term="Food"/>
      <category term="Games"/>
      <category term="Gentoo"/>
      <category term="Hardware"/>
      <category term="Japan"/>
      <category term="Japanese Language"/>
      <category term="Linux"/>
      <category term="Martial Arts"/>
      <category term="Migrated from old blog"/>
      <category term="Music"/>
      <category term="News"/>
      <category term="Pets"/>
      <category term="Photos"/>
      <category term="Postgresql"/>
      <category term="Python"/>
      <category term="RaspberryPI"/>
      <category term="Solid Edge"/>
      <category term="Star Trek"/>
      <category term="Tea"/>
      <category term="Tools"/>
      <category term="Tryton"/>
      <category term="Univention Corporate Server"/>
      <category term="Webpage"/>
      <category term="Windows"/>
      <link href="https://blog.lordvan.com/blog/" rel="alternate" type="text/html"/>
      <link href="https://blog.lordvan.com/blog/category/gentoo/feeds/rss/" rel="self" type="application/rss+xml"/>
      <subtitle>If you were looking for something specific you probably got redirected here from an old link to my (now gone) drupal blog. I migrated all the pages &amp; blog entries to this blog, so just use the search here to find what you were looking for.</subtitle>
      <title>Blog | LordVan's Page / Blog</title>
      <updated>2019-07-18T09:02:23Z</updated>
    </source>
  </entry>

  <entry xml:lang="en-US">
    <id>https://www.wireguard.com/gsoc/</id>
    <link href="https://www.wireguard.com/gsoc/" rel="alternate" type="text/html"/>
    <title>WireGuard in Google Summer of Code</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>WireGuard is <a href="https://www.wireguard.com/gsoc/">participating in Google Summer of Code 2018</a>. If you're a student — bachelors, masters, PhD, or otherwise — who would like to be funded this summer for writing interesting kernel code, studying cryptography, building networks, making mobile apps, contributing to the larger open source ecosystem, doing web development, writing documentation, or working on a wide variety of interesting problems, then this may be appealing. You'll be mentored by world-class experts, and the summer will certainly boost your skills. <a href="https://www.wireguard.com/gsoc/">Details are on this page</a> — simply contact the WireGuard team to get a proposal into the pipeline.</p></div>
    </summary>
    <updated>2018-02-19T14:55:21Z</updated>
    <source>
      <id>https://www.zx2c4.com/</id>
      <author>
        <name>Jason A. Donenfeld</name>
      </author>
      <link href="https://www.zx2c4.com/" rel="alternate" type="text/html"/>
      <link href="https://www.zx2c4.com/feed.xml" rel="self" type="application/rss+xml"/>
      <rights>Copyright 1996-2018 Jason A. Donenfeld. All Rights Reserved.</rights>
      <subtitle>{{{ ZX2C4 | Jason A. Donenfeld }}}</subtitle>
      <title>Nerdling Sapple</title>
      <updated>2018-02-19T16:02:35Z</updated>
    </source>
  </entry>

  <entry>
    <id>https://www.gentoo.org/news/2018/02/19/Gentoo-GSoC-2018.html</id>
    <link href="https://www.gentoo.org/news/2018/02/19/Gentoo-GSoC-2018.html" rel="alternate" type="text/html"/>
    <title>Gentoo accepted into Google Summer of Code 2018</title>
    <summary type="xhtml"><div xmlns="http://www.w3.org/1999/xhtml"><p>Students who want to spend their summer having fun and writing code can do so now for Gentoo. Gentoo <a href="https://summerofcode.withgoogle.com/organizations/4918228900380672/">has been accepted</a> as a mentoring organization for this year’s Google Summer of Code.</p>

<p>The GSoC is an excellent opportunity for gaining real-world experience in software design and making oneself known in the broader open source community. It also looks great on a resume.</p>

<p>Initial project ideas can be <a href="https://wiki.gentoo.org/wiki/Google_Summer_of_Code/2018/Ideas">found here</a>, although new project ideas are welcome. For new projects, time is of the essence: there is typically some idea-polishing which must occur before the <strong><em>March 27th deadline</em></strong>. Because of this, it is strongly recommended that students refine new project ideas with a mentor <em>before</em> proposing the idea formally.</p>

<p>GSoC students are encouraged to begin discussing ideas in the <a href="https://webchat.freenode.net/?channels=gentoo-soc">#gentoo-soc</a> IRC channel on the Freenode network.</p>

<p>Further information can be found on the <a href="https://wiki.gentoo.org/wiki/Google_Summer_of_Code/2018">Gentoo GSoC 2018 wiki page</a>. Those with unanswered questions should not hesitate to <a href="mailto:soc-mentors@gentoo.org">contact</a> the Summer of Code mentors via the mailing list.</p></div>
    </summary>
    <updated>2018-02-19T00:00:00Z</updated>
    <source>
      <id>https://www.gentoo.org/</id>
      <author>
        <name>Gentoo News</name>
      </author>
      <link href="https://www.gentoo.org/" rel="alternate" type="text/html"/>
      <link href="https://www.gentoo.org/feeds/news.xml" rel="self" type="application/rss+xml"/>
      <subtitle>News and information from Gentoo Linux</subtitle>
      <title>Gentoo Linux</title>
      <updated>2019-07-30T11:50:20Z</updated>
    </source>
  </entry>
</feed>
